The digital space is a world of connection and opportunity. Take this moment, for example: the web has made it possible for you to enroll in this program, where you'll learn from the personal stories of developers at Meta. By the time you have completed this Professional Certificate, you can become a creator of digital experiences. Connection is evolving, and so are you. You might not have a background in tech at all, and that's okay; even if you have no experience, this program can get you job-ready within a single year.

So how can this Professional Certificate prepare you for a job at an organization like Meta? The Database Engineer Professional Certificate will help you build job-ready skills for a database engineering role while earning a credential from Meta. From Meta engineers, you will learn how they collaborate to create and test high-performance databases. You'll also discuss interesting topics with other aspiring database engineers and complete a range of coding exercises to improve your skills. It's important to complete all the courses in this certificate in order, as each course builds on your skills. Although we have a recommended schedule for each course, the program is entirely self-paced, which means your time is your own to manage.

As you make your way through the courses in the certificate, you'll learn how to model and structure a database according to best practices and create, manage, and manipulate data using SQL, one of the most widely used languages for working with databases. You'll also learn how to use the Django web framework to connect the front end of a web application to your database. For your final project, you will create a functional relational database, designed and developed with best-practice architecture, to showcase as part of your portfolio during your job search. You'll also be ready to collaborate with other developers, as you will have learned to use Git and GitHub for version control. In the final course, you will prepare for the coding interview: you'll practice your interview skills, refine your resume, and tackle some common coding challenges that typically form part of technical job interviews. Once you complete the program, you'll get access to the Meta Career Programs Job Board, a job search platform that connects you with over 200 employers who've committed to sourcing talent through Meta certificate programs. Who knows where we'll end up? Whatever the future of connection looks like, you'll be part of its creation. Let's get started.

Hello, and welcome to this course in database engineering. Almost everyone has used a database, and more likely than not, information about us is present in many databases all over the world. But who understands what a database is, and how important database engineering is to global industry, government, and organizations? A very straightforward description of a database is that it is a form of electronic storage in which data is held. Of course, that explanation does not even come close to explaining the impact of database technology. To give an idea of databases in a real-world context, let's briefly describe some typical use cases. For example, your bank uses a database to store data for customers, bank accounts, and transactions. A hospital stores patient data, staff data, laboratory data, and more. And an online store retains your profile information along with your shopping history and accounting transactions. Many of these services have access to a diverse range of data; they collect and store other items, such as your location, how long
you spend on their platform, and the friends you've connected with, alongside many more facts. Such online services and social media platforms generate enormous amounts of data due to their large user base and constant user activity, and with the Internet of Things, or IoT, many extra devices are now connected to the internet. These continual streams of data have led to a revolution in database technology to accommodate the volume, variety, and complexity of what has become known as big data.

Whatever the source of the data, a database will typically carry out the following actions, all of which a database engineer must be familiar with: store the data; form connections or relationships between segmented areas of the data; filter the data to show relevant records; search data to return matching records; and provide functions to allow the data to be updated, changed, and deleted as required. Don't worry if you don't fully understand all these terms; for now, you're just receiving a brief introduction to databases and data. During the course, you'll explore these concepts in more detail, alongside the many other tasks that form the duties of a database engineer.

You'll learn about the concepts of data and databases, how data is related in a database, different database structures and their uses, how to perform create, read, update, and delete operations, how to use SQL operators to sort and filter data, and what database normalization is and how to normalize a database. You'll get to build a fully operational database, and you'll also install and set up software called XAMPP on your computer to help progress your local and remote database learning. You're not expected to be a database engineer just now; there are many videos in this course that will gradually guide you toward that goal. Watch, pause, rewind, and re-watch the videos until you are confident in your skills. Then consolidate your knowledge by consulting the course readings and put your skills into practice during the course exercises. Along the way, you'll encounter several knowledge quizzes where you can self-check your progress. And you're not alone in considering a career as a database engineer, which is why you'll also work with course discussion prompts that enable you to connect with your classmates. It's a great way to share knowledge, discuss difficulties, and make new friends. To be successful in this course, it is helpful to commit to a regular and disciplined approach to learning. You need to be serious about your study and, if possible, map out a study schedule with dates and times you can devote to attending the course. It's an online, self-paced course, but it does help to think of your study in terms of regular attendance at a learning institute. In summary, this course provides you with a complete introduction to databases and is part of a program of courses that lead you toward a career in database engineering.

I really like this idea that, in the end, we're solving human problems through technology. As a software engineer, my role is not to simply develop technological solutions; they need to have this human outcome. I'm Daniel Bloomfield Ramajim. I'm a software engineer at Meta; I joined the company in 2017, and I work out of a Washington, DC office. I immediately think of my mother's recipe book, which is a spiral notebook. She keeps her recipes in a spiral notebook, every page has a number, and she keeps kind of an index at the beginning of that notebook so she can easily find the recipe. Well, that's a database. So, you know, my mom is a database engineer, perhaps, or not an engineer,
but certainly she relies on and has created her databases. So I like that fun kind of example because it shows the range of things that are possible once you store data in a structured way that can be easily retrieved. So it's the recipe book, but it's also, you know, the picture I just shared on Facebook that now my friends get to see anywhere they are in the world, so quickly. All of that is powered by a lot of infrastructure, with databases at the core.

I think data is at the heart of every application, and so learning to create an effective data layer that can provide the user with quick, accurate responses and results is really critical. As a database engineer in particular, you are involved in such a critical component of building applications, and you have such a large influence on everything else that follows from the data. That includes the user interface, the clients, the APIs; all of that gets influenced by how the data layer gets modeled and stored, by the characteristics of being able to retrieve that data effectively, and by making the consumption of that information easy for the rest of the tech stack. So you have a really large influence, and I hope you walk away knowing that you have a very large influence on the development of applications.

Technical skills are used on a daily basis. I mean, certainly I code, and when I say code, I mean not only do I program using standard programming languages for the web, but I also work in the database space by creating the pipelines that extract, transform, and load a lot of the data that makes its way into the applications that we develop. The soft skills I use on a daily basis certainly include a lot of communication and organization. You know, it's easy to come out of school so excited about the coding and the technical skills you've learned, and those are obviously very important, but I thought code was king. I arrived at a program expecting people to understand the output of what I was making, and I learned that that is insufficient. I saw great examples of people who were doing great technically but were excelling even further by being able to explain what they were doing, not only to the team internally but to the people who were going to use those features in the tool set.

The perfect is the enemy of the good, particularly, I think, with database development. It's easy to overcomplicate solutions, especially when you're dealing with data modeling and data storage. You want to cover every possible variation of data, every different use case and edge case for using that data, and that can lead you down the path of creating very complex data schemas that become hard to maintain and may not perform well. So you need to iterate on your work. For databases, I would say try to focus on the here-and-now needs of the data, and be especially mindful of perceived needs for massive data scalability, which may again lead you to overcomplicate your solution when in fact you might need something much smaller in scale, at least to kick off and get a better understanding of what the feature and the data requirements truly are. So start small and iterate frequently.

Write more often. I know that usually, when you think of engineers or database programmers, you think the output of your work is the program, the code, the SQL queries, and yes, it is, but that alone, I think, is insufficient. So complement that with writing. Now, what are you going to
write about? Well, certainly there's documentation that goes along with your code. I have a good colleague at Meta who, whenever I say I'm done with something, looks at me and says, are you done done? The first time he said it, I said, yes, I'm done, and he followed up with: is your documentation ready? Is your code checked in, in the right places? Is the wiki page updated? Did you write a post about this? He emphasized that the code is the 80 percent; the other 20 percent is that additional communication that's needed. So I would say write more often. Don't be afraid to write imperfectly; write something, put something out there, whether it's sharing the status of something that you're doing or just enhancing documentation for something that you're working on. Get into the habit of writing more often.

Try to take what you're learning and connect it to something practical that you can see a use for, whether it's learning about databases and thinking about a recipe book that your mother or father keeps in a spiral notebook, or maybe baseball cards or comic books that you tracked as a child. Try to think of ways to apply what you're learning technically to these real-life problems. They can be small problems, like finding the recipe at the right time, but they can of course be bigger, perhaps more interesting, problems around technical and digital communication between people.

We all use data and databases in our daily online lives. For example, uploading photos to our social media feeds, downloading files at work, and playing games online are all examples of database usage. But what exactly is data, and how does it interact with a database? If you're struggling to answer these questions, don't worry. By the end of this video, you'll be able to describe what a database is at a conceptual level, identify real-world examples of the use of databases, and demonstrate an understanding of how data is organized in a database.

So let's return to the first of our questions: what is data? In basic terms, data is facts and figures about, well, anything. For example, if data were collected on a person, then that data might include their name, age, email, and date of birth. Or the data could be facts and figures related to an online purchase; this could be the order number, description, order quantity, and date, and even the customer's email. Data is crucial for individuals as well as organizations. So where is all this data stored? In our digital world, data is no longer stored in manual files; instead, developers use something called databases. A database is a form of electronic storage in which data is organized systematically. It stores and manipulates data electronically to make it more manageable, efficient, and secure. There are many real-world examples of where databases are used. For example, a bank can use a database to store data for its customers, bank accounts, and transactions, and a hospital uses a database to store patient data, staff data, laboratory data, and much more. At this point, you might be asking yourself, but what does a database actually look like? Well, a database looks like data organized systematically, and this organization typically looks like a spreadsheet or a table. What exactly does the term systematic mean? All data contains elements, or features and attributes, by which it can be identified. For example, a person can be identified by attributes like their age, height, or hair color, and this data is separated and stored in what are known as entities that represent those
elements. As you just learned, an entity is like a table: it contains rows and columns that store data relating to a specific element. In other words, these are relational elements; they're related to one another. These entities could be physical representations, like an employee, a customer, or a product, or they can be conceptual, like an order, an invoice, or a quotation. Entities then store data in a table-like format against the attributes or features related to the element. For example, an online store could hold customer data in a customer entity containing specific attributes relating to the customer; these attributes could include first name, last name, date of birth, and email. It could also have product data stored in a product entity against attributes like product code, description, price, and availability. In the relational database world, these entities are known as relations or tables: the attributes become the columns of the table, and each table row represents an instance of that entity. As an example, let's take the entities from the online store example that you just explored. These two examples could be combined into a list of orders the store received from its customers. Within a database, this data could be rendered as an order table or entity, and the data could be organized into rows that contain a unique order number, the name of the customer who placed the order, the product that they ordered, and the price of that product.
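To make that concrete, here is a minimal sketch, in MySQL-style SQL, of how the order entity just described could be rendered as a table; the table name, column sizes, and sample values are illustrative assumptions, not part of the course materials.

```sql
-- Hypothetical rendering of the order entity: each column is an attribute,
-- and each row is one instance of the entity (one order).
CREATE TABLE orders (
    order_number  INT,            -- unique order number
    customer_name VARCHAR(100),   -- name of the customer who placed the order
    product       VARCHAR(100),   -- product that was ordered
    price         DECIMAL(8, 2)   -- price of that product
);

-- One row of the table = one order instance (example values only).
INSERT INTO orders (order_number, customer_name, product, price)
VALUES (1, 'Sarah Hogan', 'Coffee mug', 9.99);
```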
There are many ways to organize data in a database, and relational databases aren't the only kind you'll encounter. As a database engineer, you'll work with many different types of databases. Here are a few common examples of other types. An object-oriented database is one where data is stored in the form of objects instead of tables or relations. An example of this kind of database could be an online bookstore: the store's database could render authors, customers, books, and publishers as classes, like sets or categories, and the objects, or instances of these classes, would then hold the actual data. Graph databases store data in the form of nodes; in this case, entities like customers, orders, and products are represented as nodes, and the relationships between them are represented as edges. And finally, there are document databases, where data is stored as JSON, or JavaScript Object Notation, objects. The data is organized into collections, like tables; within each collection are documents written in JSON that record data. So in this example, customer documents are held in a customer collection, while order and product documents are stored in the order and product collections.

But where are the databases themselves stored? A database can be hosted on a dedicated machine within the premises of an organization, or it could be hosted on the cloud. Cloud databases are currently a more popular choice because they allow you to store, manage, and retrieve data through a cloud platform and access data through the internet, and they also provide a lower-cost option for data management than other similar options. You should now understand the concept of a database. You should also be able to identify examples of databases and demonstrate how data is organized within a database. Great start; you'll be storing and managing data in no time.

Picture yourself in the following scenario: you're managing the database of a large online store. Your database must be able to retrieve a customer's details from one table and then find the order recorded against another table. So how does the database establish a relationship between these pieces of data? Over the next few minutes, you'll explore this process, and by the end of this video, you'll be able to explain why data in a database should be related and identify an instance of related data in a database. Data stored in a database cannot exist in isolation; it must have a relationship with other data so that it can be processed into meaningful information. So how do you make sure that all the data in your database is related? Let's explore how data is related by using the online store as our example.

In the database of your online store, you could have an order table and a customer table. To locate the details of a customer's order, you would check the order number against the customer ID; in other words, the database establishes a link between the data in the tables. Let's look at the customer table in more detail. In this table, the columns are customer ID, first name, last name, and email; in relational database terms, these are fields. Then there are several rows which contain data for each of these fields; in relational databases, they are known as the records of the table. So all these fields and rows work together to store information on the customer, also known as the entity. Every row and record in the customer table is an instance of the customer entity. For example, Sarah Hogan, who has a customer ID of C1, is one customer instance, and Katrina Langley, who has a customer ID of C4, is another customer instance. What's most important is that each of these customer instances, or records, must be uniquely identifiable. But what if two or more customers share similar info, like the same first name or last name? To avoid this confusion within the database, you can use a field that contains only unique values, like the customer ID. This is called a primary key field: it contains unique values that cannot be replicated elsewhere in the table. So even if two customers share the same name, they'll still have separate customer IDs, which means your database can determine which customer is the required one.

Let's look at the order table next. Just like the customer table, the order table also has fields and records, and in this table the primary key field is the order ID. But there's also a field named customer ID, with the exact same data as in the customer table. So what is the purpose of the customer ID in this table? The customer ID is there to help identify who it is that placed the order. By adding the customer ID field to the order table, a relationship is established between the customer table and the order table, and because of this relationship, you can pull data in a meaningful way from both tables. The customer ID field in the order table is known as the foreign key field. A foreign key is a field in one table that connects to the primary key field in the original table, which in this case is the customer table. So the customer ID is the primary key of the customer table, but it becomes the foreign key in the order table. This way, the relationship is established and the data in these two tables is related. You should now be able to explain the relationships between data in a database and identify instances of related data. Great work.
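As a rough sketch of how this customer-order relationship might be declared in MySQL-style SQL (the column sizes and data types are assumptions for illustration):

```sql
-- Customer table: customer_id is the primary key, unique in every record.
CREATE TABLE customer (
    customer_id VARCHAR(5) PRIMARY KEY,   -- e.g. 'C1', 'C4'
    first_name  VARCHAR(50),
    last_name   VARCHAR(50),
    email       VARCHAR(100)
);

-- Order table: order_id is its own primary key, while customer_id is a
-- foreign key pointing back to the customer table's primary key.
CREATE TABLE orders (
    order_id    VARCHAR(5) PRIMARY KEY,
    customer_id VARCHAR(5),
    FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
);
```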
You've probably heard of terms like big data and cloud databases; maybe you've even encountered them in this course. But do you know what they mean? In this video, you'll discover more about these terms, and you'll be able to identify different types of databases and explain how databases have evolved in response to new trends like big data. Databases have been around for a long time and have been influenced by many different trends, but they've undergone a huge change in recent decades thanks to the growth of the internet. They now must be able to store ever-increasing amounts of unstructured data; however, this poses difficulties, as they mostly store structured data. Let's briefly look at some of the different types of databases and how they've been affected by this trend.

Relational databases have limitations when it comes to storing data because they mostly store structured data, yet databases are now required to store more and more unstructured data. So the trend in recent years has been to rely on NoSQL databases instead. NoSQL databases are a type of database that stores data in a variety of different formats; essentially, they provide databases with a flexible structure. This makes scaling easy by facilitating a change to the database structure itself, without the need for complex data models. NoSQL databases are used by social media platforms, the Internet of Things, artificial intelligence, and other applications that generate massive amounts of unstructured data. Types of NoSQL databases include document databases, key-value databases, and graph databases.

Now that you're familiar with different types of databases, let's take a closer look at big data and cloud databases. Essentially, these terms are used to describe a recent change in our approach to data and databases. Let's start with a look at big data. Big data is complex data that can increase in volume with time; in other words, it is data that can grow exponentially with time. But where does this kind of complex data come from? Social media platforms, online shopping sites, and other services generate massive amounts of data every second of the day as they capture the actions of billions of users around the world. And with the Internet of Things, or IoT, more and more devices are connected to the internet, generating even more data. This is how complex data, or big data, is created. All this data is highly unstructured or semi-structured. Traditional database systems could deal with structured data using tables, records, and relationships, but big data is a whole new challenge. Big data is a combination of structured, semi-structured, and unstructured data collected from many different sources, and it adds more power to data because it can address complex business problems that traditionally structured data can't handle. Finally, big data helps to provide unique insights that can improve decision-making, so it's highly valued across many industries.

For example, the manufacturing sector processes big data to predict equipment failure by evaluating the current state of machinery, assess production processes by monitoring the production line, respond to customer feedback proactively, and anticipate future demands by monitoring current sales. Retail processes big data to anticipate customer demand, improve customer experience, analyze customer behavior and spending patterns, and identify pricing improvement opportunities. And the telecommunications sector utilizes big data and network usage analytics to plan for infrastructure investments, design new services that meet customer demands, analyze service quality data to predict customer satisfaction, and plan for customer retention mechanisms.

Now that you're familiar with big data and how it helps to power businesses, let's move on to another trend in databases: the use of cloud databases. Organizations are moving to the cloud to free themselves from the difficulties of dealing with the
infrastructure of physical servers, like maintenance and storage costs. Some examples of cloud storage services include Dropbox and iCloud. With these cloud storage services, it's possible to store documents and other data on the cloud, a much more affordable solution. Another trend in databases is business intelligence, or BI. Traditionally, databases were just a means of storing data, but organizations now utilize their data with business intelligence related technologies and strategies. With these technologies, organizations can analyze their data and extract valuable information to help them make informed business decisions. New trends are constantly emerging in database technology, and they'll keep advancing with time, but for now, these are a few of the leading trends that you should be aware of.

At this stage in the course, you're probably familiar with the basics of databases and how they store and manage data, but it's also important that you know how to interact with databases in order to work with data. As a data engineer, you can interact with databases using Structured Query Language, more commonly known as SQL, also pronounced "sequel." Over the next few minutes, you'll learn how to explain what SQL is and outline the role of SQL in databases. So what sort of interactions do database engineers need to establish with databases? Some of the operations you could carry out on the data might require you to create, read, update, and delete data; these operations are also known as CRUD operations. You might already be familiar with some of these operations; if not, don't worry, they'll be covered in depth at later stages in this course.

Let's find out more about SQL. SQL is a standard language that can be used with all databases. It's particularly useful when working with relational databases, which require a language that can interact with structured data. Some examples of relational databases that SQL can interact with include MySQL, PostgreSQL, Oracle, and Microsoft SQL Server. The next question this raises is: how does a database interpret, or read and execute, instructions given using SQL? A database interprets and makes sense of SQL instructions with the use of a database management system, or DBMS. As a web developer, you'll execute all SQL instructions on a database using a DBMS; the DBMS takes responsibility for transforming SQL instructions into a form that's understood by the underlying database. This was just a very quick introduction to SQL. At this early stage, you should be able to explain what SQL is and explain the role of SQL in databases. In the upcoming videos, you'll learn more about SQL and develop a deeper understanding of the language.

Imagine that you've just been hired to create a database for a college. First, you need to create tables to hold data on all aspects of the college, then you'd need to insert data into these tables, and then modify this data whenever something changes. That's a lot of work, but it's all possible with the use of SQL and CRUD operations. Not familiar with these operations? No problem. Over the next few minutes, you'll learn how to explain the tasks that SQL syntax is used for when building a database and demonstrate an understanding of the SQL subsets, or sub-languages. So let's return to our college database scenario. How can you possibly make all these changes in the database? Well, with the help of what web developers call CRUD operations. Performing CRUD operations is the most common task when working with a database. CRUD stands for create, read, update, and delete, or in operational terms:
create, or add and insert, data; read data; update existing data; and delete data. Alongside these operations, there are many other things that SQL can do. Depending on what SQL is used for, it can be divided into many subsections, or sub-languages. These include DDL, or data definition language; DML, also known as data manipulation language; DQL, or data query language; and DCL, also called data control language.

Let's take a closer look at these languages and their commands, starting with data definition language, or DDL. DDL, as the name says, helps you define data in your database. But what does it mean to define data? Before you can store data in the database, you need to create the database and related objects, like the tables in which your data will be stored. For this, the DDL part of SQL has a command named CREATE. Then you might need to modify already created database objects; for example, you might need to modify the structure of a table by adding a new column. You can perform this task with the DDL ALTER command. You can remove an object like a table from a database using the DDL DROP command.

Data manipulation language, or DML, commands are used to manipulate data in the database, like inserting, updating, or deleting data; most CRUD operations fall under DML. To add data to a table, you can use the INSERT command. This command lets you specify the fields to add data to, along with the values to be inserted. If you need to edit data that's already inserted into a table, just deploy the UPDATE command, and you can specify data to be removed by using the DELETE command.

So far, you've learned how to add database objects and manage data within them, so how do you read or retrieve that data? To read data stored in a database, you can use data query language. DQL defines the SELECT command for retrieving data: SELECT lets you retrieve data from one or multiple tables, letting you specify the data fields that you want based on preferred filter criteria. And finally, you can also use DCL, or data control language, to control access to the database. For example, using DCL commands, you control access to the data stored in the database: the GRANT and REVOKE DCL commands are used to give users access privileges to data and to revoke access privileges already given to users. You should now be familiar with how SQL acts as the interface between the database and its users, and you should also be able to identify SQL operations and sub-languages. Great work.
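As a quick reference, here is a sketch with one statement from each sub-language mentioned above, using an assumed student table and a hypothetical user account:

```sql
-- DDL: define a database object.
CREATE TABLE student (id INT, first_name VARCHAR(50));

-- DML: manipulate the data inside it.
INSERT INTO student (id, first_name) VALUES (1, 'John');
UPDATE student SET first_name = 'Johnny' WHERE id = 1;
DELETE FROM student WHERE id = 1;

-- DQL: query (read) the data.
SELECT first_name FROM student WHERE id = 1;

-- DCL: control access to the data (the user name and host are hypothetical).
GRANT SELECT ON student TO 'some_user'@'localhost';
REVOKE SELECT ON student FROM 'some_user'@'localhost';
```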
By now, you're most likely familiar with the basics of databases, and you might even have come across some simple SQL syntax, but why do developers use SQL to interact with databases? SQL is a popular language choice for databases because of the many advantages that it offers. Over the next few minutes, you'll identify the advantages of SQL and see how those advantages assist with database tasks. SQL is the interface, or bridge, between a relational database and its users, and it offers web developers a wide range of advantages. Let's look at a few of them. The biggest advantage of SQL is that it requires very little coding skill to use; it's just a set of keywords. There aren't many lines of code required to perform basic CRUD, or create, read, update, and delete, tasks on the database, so it's a very developer- or user-friendly language. SQL's interactivity makes it even more user-friendly, because it lets developers write complex queries in a short space of time. So if you need to work with a relational database for your next project, you just need to know what keywords to use and when. SQL is also a standard language that can be used with all relational databases, like MySQL, which also means that there's a lot of support and information available, and SQL can run on any computer once you have database software installed. SQL is also a portable language: once you write your code, it can then be used on any hardware and any operating system or platform, wherever you need. So if you write SQL code on a desktop and then move it to a production server environment, it will run the same in both locations. Also, SQL is a comprehensive language that covers all areas of database management and administration. For example, it allows you to create databases; insert, update, and delete data; retrieve and share data among multiple users; and manage database security. This is made possible through subsets of SQL like DDL, or data definition language; DML, also known as data manipulation language; DQL, or data query language; and DCL, also known as data control language. And the final advantage of SQL is that it lets database users process large amounts of data quickly and efficiently. You now know that SQL is a simple, standard, portable, comprehensive, and efficient language that can be used to communicate and work with relational databases. You're well on your way to mastering SQL. Well done.

As you might already know, you can interact with a database using SQL, but just like with other coding languages, you need experience with SQL syntax and its subsets before you can make use of it. Over the next few minutes, you'll learn how to create a database using the data definition language, or DDL, subset of SQL; utilize the data manipulation language, or DML, subset to populate and modify data in a database; and read and query data within databases using the data query language, or DQL, subset of SQL. In order to demonstrate SQL syntax and its subsets, I'm going to show you the SQL commands that can be used to develop a database for a college. However, take note that the demonstration which follows will only briefly show each step in the process; you just need to develop a working familiarity with SQL for now. You'll explore the language and its subsets in much more detail later in this program.

The first task is to create the database. To do this, I write a CREATE statement using SQL's DDL subset. The syntax to create a database is CREATE DATABASE followed by the name of the database, and I then place a semicolon at the end of the statement. Let's create a college database as an example, using the syntax CREATE DATABASE college. Once you've created a database, the next step is to create the tables. You can create tables using the CREATE TABLE syntax followed by the table name; just repeat these same steps for each new table you want to add to your database. I can use this syntax to create a student table in my college database. This table will hold information on each student; to create the table, I just write CREATE TABLE student. Now we need to populate the table with data. This is where I can use the data manipulation language, or DML, subset of SQL. To add table data, I use the INSERT INTO syntax, which inserts rows of data into a given table. I just type INSERT INTO, followed by the table name and then a list of required columns, or fields, within a pair of parentheses. Then I add the VALUES keyword and specify, in order, the values for each of the fields. As an example, let's add data to the student table in our college database. I'll use the student table I created earlier and add student data to it by specifying values for each of the following columns: ID, first name, last name, and date of birth, and then populate the table with the required data.
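Putting the steps of this walkthrough together, a sketch of the college example might look like the following; the exact column definitions and the sample values are assumptions based on the description above.

```sql
-- DDL: create the database and a student table inside it.
CREATE DATABASE college;
USE college;   -- switch to the new database before creating tables in it

CREATE TABLE student (
    ID            INT,
    first_name    VARCHAR(50),
    last_name     VARCHAR(50),
    date_of_birth DATE
);

-- DML: insert rows of student data (names and dates are example values).
INSERT INTO student (ID, first_name, last_name, date_of_birth)
VALUES
    (1, 'John',  'Murphy', '2001-03-15'),
    (2, 'Maria', 'Lopez',  '2000-11-02'),
    (3, 'Ahmed', 'Hassan', '2002-07-21');
```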
But what if I need to update or modify data? For example, let's say I've input the wrong date of birth for a student. To change this data, I can use the UPDATE syntax, which is part of the DML subset of SQL. First, I add the UPDATE keyword followed by the student table name. Then I use the SET keyword, followed by the columns and values I want to update, written as key-value pairings; in this instance, it's the date of birth column and a new date of birth value. Finally, I add the WHERE clause and a condition to filter the records I need. For example, to change the data for the student with the ID of 2, I can type WHERE ID = 2. It's also possible to delete data from a table. Let's delete the table record for the student with the ID of 3 using the DELETE syntax. First, I type DELETE FROM, then the table name; this tells MySQL where the data must be deleted from. This is followed by the WHERE clause and a condition, such as ID = 3, which would remove all the data in that student's row, row 3 of the table. So I can instruct MySQL to remove the data of the student on row 3, and once I run the statement, the student's data is removed from my table.

You're now familiar with how to add, update, or delete data in a database, but how would you read data stored in your database tables? That's where SQL's DQL, or data query language, comes in. The main syntax of DQL is SELECT; as its name says, it's used to select data from the database. A SELECT statement is written using the SELECT keyword, followed by the columns that hold the data you require. You then write the FROM keyword, followed by the name of the table you want to select data from. As an example, you could use the SELECT statement to query the student table to find the name of the student with an ID of 1. You just need to add the WHERE keyword followed by the student's ID; this would then return the name of John Murphy.
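Continuing the same hypothetical student table, the UPDATE, DELETE, and SELECT statements described above could be written like this (the corrected date of birth is an example value):

```sql
-- DML: correct the date of birth for the student with the ID of 2.
UPDATE student
SET date_of_birth = '2000-12-02'
WHERE ID = 2;

-- DML: remove the record for the student with the ID of 3.
DELETE FROM student
WHERE ID = 3;

-- DQL: read back the name of the student with the ID of 1.
SELECT first_name, last_name
FROM student
WHERE ID = 1;   -- returns John Murphy
```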
You're now familiar with the basics of SQL syntax and subsets. Don't worry if you're still trying to figure out these subsets; you'll explore each of them in more depth later in this specialization, and you'll also get an opportunity to try them for yourself.

At this stage of the course, you're probably familiar with the basics of how databases store and interact with data, but how do they store all this data and present it in a logical way in the form of tables? By the end of this video, you'll be able to outline what a database table is at a conceptual level and explain how data is structured in a database table. As you probably already know, a table is made up of rows and columns which hold data, and a table is stored in a database. In a database that holds multiple tables, these tables are known as relations, as they all relate to one another. In a more conceptual or logical sense, a table is also known as an entity, and in object-oriented databases, or OODBs, an entity is an object that has attributes, which are like the columns or fields in a table. So, in essence, a table, an entity, and an object all refer to the same concept.

Within every table are columns, also sometimes called fields or attributes. Each column or field has a unique name and data type. For example, I have a table that contains data on employees in a company; the table organizes the data in columns such as ID and role, and each column can hold different types of data, like numeric or string. A set of columns or fields forms a row, and in relational database terminology, a row is known as a record. So a record is a combination of columns, or fields, that contain data; in my employee table, for example, each row is a single employee record.

Let's return to columns for a moment. As you now know, every column has a data type. The data type of a column defines what type of value the column can hold, like integer, character, date and time, and so on. It's up to the developer to decide the data type for each column, and it's also a guideline for SQL around what data type to expect in each column and how to interact with the underlying data stored physically. However, data types can vary depending on the database system; for example, you might have different types in MySQL, SQL Server, or Access. Always refer to the documentation of the relevant database system to check what data types it supports. Generally, all database systems support string data types for storing characters and strings of characters, numeric data types to store exact or whole numbers and approximate numbers, date and time data types to store information on date and time, and binary data types to store images, files, and other information.

Another important concept related to tables is domains. A domain is the set of legal values that can be assigned to an attribute; basically, this means making sure that the values a field can hold are well defined. For example, you can only place numbers in a numerical domain, and you can only place characters or strings of characters in a string domain, and each of these domains must include length, values, and other relevant rules that define its function.

Each row or record in a table is also uniquely identified by what's known as a primary key. A column in the table that has unique values can become the primary key of the table. In the employee table, for example, the ID column is the primary key, as each ID is unique. This is because the other columns could contain repeating values; for example, two employees may share the same name or role. It's also possible for a primary key to be a combination of columns, if a single column alone doesn't possess unique values. You should now be familiar with what a database table is and be able to explain how it is structured. You should also be able to explain key concepts such as columns, rows, and keys. Great work.
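Here is a minimal sketch of the employee table discussed above, showing named columns, their data types, and a primary key; the exact columns are assumptions for illustration.

```sql
CREATE TABLE employee (
    ID        INT PRIMARY KEY,   -- unique value in every row identifies the record
    name      VARCHAR(100),      -- string data type; values may repeat
    role      VARCHAR(50),       -- string data type; values may repeat
    hire_date DATE               -- date and time family of data types
);
```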
At this stage of the course, you're probably familiar with the relational database model, but to fully understand how it works, you first need to understand how tables within a database are related. Essentially, relationships are established between tables with the use of keys. By the end of this video, you'll be able to identify the main keys used in tables in a relational database and explain the relationship between keys in a table. The relational database model is based on two main concepts: entities, which are defined as tables, and relations that connect related tables. To understand how this model works, you need to understand the different key attributes that exist in a relational database.

To demonstrate, let's use the example of a sports competition that uses three tables to keep track of the league: the league table, the teams table, and the points table. Each table has relevant columns, where each column represents an attribute of the table entity. The league table keeps track of each team's position in the league, their name, and the state they represent. The teams table tracks the team name, the team captain, and the team coach. And the points table records the team's position in the league, the team's name, and how many points the team has this season. Notice that the teams table includes team name, which also belongs to the league table. These attributes could be of a simple attribute type that can hold a single value; for example, in a table of staff members in a college, each staff name attribute has a single value in each row. Or they could be a multi-valued attribute that can have multiple values, like a list of subjects taught. However, multi-valued attributes should be avoided in relational database design; you'll learn more about this concept later in the course.

Let's use the example of the staff table to explore some examples of attribute keys, beginning with the key attribute. This is a value used to uniquely identify an individual record of data in a table. For example, in the staff table, the key attribute is staff ID; this attribute has a unique value in each row of the table, so it's the perfect way to uniquely identify each record of data. In a relational database, there is a range of different types of key attributes. There's the candidate key attribute: this is any attribute that contains a unique value in each row of the table. In the case of the staff table, both the staff ID and contact number are examples of candidate keys; each has a unique value in each row. The other columns can contain repeated information, so they're designated as non-key attributes. A composite key is a key composed of two or more attributes that together form a unique value in each new row. In the staff table, an example of a composite key is the combination of the staff name and staff title, assuming that there isn't another instance of the same combination elsewhere in the table; a composite key is usually considered when a single attribute key can't be identified. A relational database must also contain a primary key, which you should already be familiar with; in the staff table, the staff ID is the primary key. An alternate key, also known as a secondary key, is a candidate key that was not selected to be the primary key. Just like a primary key, it's a column that contains a unique value in each field; for the staff table, the contact number is the secondary key in each row. And finally, there's the foreign key: a foreign key is an attribute in a table that references a unique key in another table. Typically, a foreign key references the primary key of another table; for example, the staff ID might also be a foreign key in one or more tables within the college database. The relationship between primary and foreign keys will be discussed in more detail at a later point in this course. You're now familiar with the different types of keys in a relational database.
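One possible way to express the staff table and its keys in MySQL-style SQL; treating the contact number as the alternate key and staff name plus staff title as a composite candidate key follows the example above, but the column sizes are assumptions.

```sql
CREATE TABLE staff (
    staff_id       INT PRIMARY KEY,     -- the candidate key chosen as the primary key
    staff_name     VARCHAR(100),
    staff_title    VARCHAR(100),
    contact_number VARCHAR(20) UNIQUE,  -- candidate key not chosen: alternate (secondary) key
    UNIQUE (staff_name, staff_title)    -- composite key formed from two attributes
);
```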
You've reached the end of this module, An Introduction to Databases. In this module, you've discovered the basics of databases and data, received an intro to SQL, or Structured Query Language, and explored the basic structure of a database. It's now time to recap the key points and concepts you learned and the skills that you gained. You began the module with an introduction to databases and data. Following the completion of this first lesson, you can now provide an overview of what a database is at a conceptual level, outline real-world examples of database usage, and explain how data is organized within a database. You can now also explain the importance of related data within a database, identify an instance of related data within a database, and provide an overview of new trends in database applications. You're also able to identify different types of databases and provide a high-level overview of how databases have evolved. Following your exploration of databases and data, you were then introduced to SQL. This lesson focused on the basics of SQL, or Structured Query Language. During this lesson, you learned how to outline the purposes of SQL and demonstrate an understanding of the role of SQL in databases. You can also identify the advantages of SQL, such as its low entry level, its wide range of applications, and its portability across operating systems, and you can explain how these advantages will assist you when working with databases. You're also able to provide a high-level overview of how SQL DDL, DML, and DQL syntax is used and identify the main SQL commands used in databases. In the final lesson of this module, you explored basic database structure. Now that you've reached the end of this lesson, you can explain the concept of a database table and outline what it's used for, and identify the key components of a database table, such as columns, rows, data types, and keys. You're now familiar with the basics of databases: you can explain how they store data, identify methods for interacting with databases through SQL, and outline the basic structure of a database. That's a great start to your database journey. Well done.

You probably know that database tables store data in the form of columns and rows, but how do you make sure that every column accepts the correct type of data, for instance that your cost column stores values in decimal, or that your product quantity column accepts positive numbers? This is exactly what data types do. With data types, you can determine what kind of data is accepted by each field in your table. Over the next few minutes, you'll learn how to explain the numeric data type in a database and differentiate between integer and decimal data types. Before you begin exploring numeric data types, let's take a moment to explore the concept of data types. When you create a table in a database, you need to define the column
names and the data type of the content that will reside in those columns. The data type tells a database management system, such as MySQL, how to interpret the value of the column. Data types maintain data in the right format and make sure the value of each column is as expected. The most used data types are numeric, string, and date and time data types. Let's take the example of a table from the database of an online store. This table collects information on customers in the form of columns called customer name, order date, product quantity, and total price. Each of these columns must store data in the form of a suitable data type: the customer name column can use string data, order date can use a date type, and the product quantity and total price columns are best suited to numeric data.

The focus of this video is the numeric data type. Numeric data type is the generic term used for all specific data types that let a column store data as numbers in the database. The two most common numeric data types used in databases are the integer data type, used for a whole-number value, and the decimal data type, used for a number with a fractional value. To return to our earlier table example, the product quantity column is defined as an integer data type because it holds whole numbers only; fractional numbers can be inserted, but they'll always be automatically rounded up or down to the nearest whole number in the database. And the total price column is a decimal type because it holds fractional numbers; for example, an item that costs eighty dollars and ninety cents is a fractional value, where 80 is the whole number and 90 is the decimal. Whole numbers can also be inserted; the database will add a decimal point along with a fractional value of zero. In most database management systems, you'll find different types of integer and decimal data types, and each type is intended to store a minimum and a maximum number value. For example, in the MySQL database management system, TINYINT is used for a very small integer value, where the maximum possible value that can be inserted is 255, while INTEGER, or INT, can be used to store a very big number; the maximum value that it can store is over 4 billion. These data types can also accept negative and positive values, and in some database management systems you can also force columns to accept only positive numbers, which increases the maximum value they can store. You should now be able to explain the numeric data type in a database, and you should also be capable of differentiating between integer and decimal data types. Great work.
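Here is a small sketch of how the online-store columns above could be declared with numeric types in MySQL; the precision values are assumptions, and the last column exists only to illustrate TINYINT and unsigned columns.

```sql
CREATE TABLE customer_order (
    customer_name    VARCHAR(100),      -- string data
    order_date       DATE,              -- date data
    product_quantity INT,               -- integer: whole numbers only
    total_price      DECIMAL(8, 2),     -- decimal: fractional values such as 80.90
    loyalty_points   TINYINT UNSIGNED   -- illustrative only: tiny integer, 0 to 255
);
```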
You probably already know that when you create a table in a database, you must define the column names and the data type of the content that will reside in those columns. You can use a string data type to define a column's data type, particularly in instances when it accepts both numeric and text characters. In this video, you'll explore the string data type, and by the end of the video, you'll be able to explain the string data type in a database and differentiate between CHAR and VARCHAR data types. When you create a table in a database, it is important for data integrity to ensure that only valid values are inserted in your table. For example, you should use a string data type when you intend to store data that contains a mix of character types; if you define a column as a string, then any type of text can be inserted, including alphabet characters, numeric characters, and special characters.

Let's explore an example to find out more about how string data types work, using a student table from a college database. This table stores student login information for the college's online portal under the following four columns: student name, username, password, and email address. The student name column contains only alphabet characters, the username column contains alphanumeric characters, and the password and email columns contain a mix of character types. String data type is a generic term used for different string data types in the database. The most used string data types are CHAR, which stands for character and is used to hold characters of a fixed length, and VARCHAR, which stands for variable character and holds characters of a variable length. Let's explore these string data types further. CHAR means that the given length of the characters is predetermined; it can't be changed after declaration. Column attributes are defined as CHAR followed by a character length in parentheses. For example, CHAR(50) means that a column permits only 50 characters of space in each field. CHAR is the best option if you have a predefined size of character that you want to maintain. In the student table, you can set a maximum length of 50 characters for the username column in SQL with CHAR(50).
For example, the table contains the record for a student with the username mark123, which is a total of seven characters; however, because the column is defined as CHAR(50), this username occupies the full length of 50 characters within the space. The VARCHAR data type works in a similar way to CHAR; however, it has a variable length, which means the length can change; it's not fixed. VARCHAR is often used when you're not sure how many characters might be inserted in the column field. You can type VARCHAR(50) in SQL to allow for any input up to a maximum of 50 characters. In the student table example, the student name column would most likely contain names of varying length, so you could define the student name column as VARCHAR(50) in SQL. This means that the name of each student only occupies as much space as there are characters in their name. For example, Mark Simpson occupies far fewer than 50 characters, but this field could hold a name up to the value of 50 characters if required. Finally, let's briefly explore some more commonly used examples of string data types. TINYTEXT is used to define columns that require less than 255 characters, like short paragraphs. TEXT is used to define columns of less than 65,000 characters, like an article. MEDIUMTEXT defines columns of up to 16.7 million characters, for example the text of a book. And the LONGTEXT data type stores up to 4 gigabytes of text data. You should now be able to explain the string data type as used in a database, and you should also be capable of differentiating between string data types, including CHAR and VARCHAR. Great work.
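A sketch of the student login table above using the string types just described; the lengths follow the examples in the video, and the final column is added only to illustrate the larger TEXT type.

```sql
CREATE TABLE student_login (
    student_name  VARCHAR(50),    -- variable length: 'Mark Simpson' uses only 12 characters
    username      CHAR(50),       -- fixed length: 'mark123' still occupies 50 characters
    password      VARCHAR(255),   -- mixed character types of varying length
    email_address VARCHAR(255),
    bio           TEXT            -- illustrative only: up to roughly 65,000 characters
);
```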
To ensure the accuracy and reliability of the data in your database, you must limit the type of data that can go into your database table. In this video, you'll learn how to describe the purpose of constraints in a database and identify default constraints to set default values in a table. Database constraints are used to limit the type of data that can be stored in a table; this ensures that all data inserted into the table is accurate and reliable. If the database detects a violation between a constraint and a data operation, then it aborts that operation. An example of a violation might be an attempt to insert or upload invalid data to a table; the database realizes that the data is invalid and rejects it. Constraints can be column level, where the rule applies to a specific column, and they can also be applied at a table level. For example, I could use the foreign key constraint to prevent actions that would destroy links between tables; I'll demonstrate this in more detail in a later lesson. Two of the most used database constraints are NOT NULL, a method of preventing empty value fields, and DEFAULT, a method of assigning default values.

For now, let's begin with the NOT NULL constraint. The NOT NULL SQL constraint is used to ensure that data fields are always completed and never left blank. Let's explore this concept using the example of a table from an online store that records the IDs and names of customers. The table records this data in its customer ID and customer name columns, and these columns must always contain data; if there's no data or value inserted into either of these columns, then the creation of a new customer record is aborted. The NOT NULL constraint is implemented using a SQL statement. A typical NOT NULL SQL statement begins with the creation of a basic table in the database: I write a CREATE TABLE clause followed by customer, to define the table name, followed by a pair of parentheses. Within the parentheses, I add two columns, customer ID and customer name. I also define each column with a relevant data type: INT for customer ID, as it stores numeric values, and VARCHAR for customer name, as it stores string values. Finally, I declare a NOT NULL constraint for each column; this makes sure that neither column will accept null values. Now any operation that attempts to place a null value in these columns, like inserting or updating data, will fail.

Next, let's look at how the DEFAULT constraint works in a table. The DEFAULT constraint sets a default value for a column if no value is specified. This means that if no data is entered for a specific field within a column, then the table will automatically insert a default value instead. To gain a better understanding of default values, let's look at a table that holds player records for a football club's database. The table is called player, and it contains two columns: the first is player name and lists the names of each player in the team, and the second is city and lists which city each player is from. Most of the players in this club are from Barcelona, so I can specify a default value of Barcelona for the city column. This means that I don't have to enter Barcelona repeatedly into the city field for each new player; if no value is entered, then each field is automatically filled with the default value of Barcelona. Let's find out how the DEFAULT keyword is incorporated into a SQL statement. First, I use the CREATE TABLE command to create a table and call it player. Then, within a pair of parentheses, I input the column names, assign a string data type to each, and add a NOT NULL constraint to the name column. Finally, I add the DEFAULT keyword followed by the default value, Barcelona, for the city column. Now, when I add data into the table for a new player, I don't need to type in Barcelona for players who are from the city; instead, it will be inserted automatically. You should now be familiar with the importance of using database constraints. You should also be able to explain database constraints as a method of enforcing rules at a column or table level. Good work.
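A minimal sketch of the two constraint examples just described; the column sizes and the inserted player name are example values.

```sql
-- NOT NULL: neither column may be left empty.
CREATE TABLE customer (
    customer_id   INT NOT NULL,
    customer_name VARCHAR(100) NOT NULL
);

-- DEFAULT: if no city is supplied, 'Barcelona' is inserted automatically.
CREATE TABLE player (
    player_name VARCHAR(100) NOT NULL,
    city        VARCHAR(50) DEFAULT 'Barcelona'
);

-- Only the name is supplied here, so the city falls back to the default.
INSERT INTO player (player_name) VALUES ('Marc Puig');
```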
You've just been hired by an online bookstore to build and maintain databases that can store information on millions of books and customers. But how do we even begin to create and alter databases that store constantly expanding information or process millions of orders from all over the world? The answer to these questions lies in SQL create and read commands. In this video, you'll learn how to create a database using SQL syntax, and you'll also discover how to drop, or delete, a database. However, before you create a database, you first need a clear idea of its purpose. For example, if you're building a database for an online bookshop, then your database needs to record data like book titles, authors, customers, and sales. The data on these topics must be stored and organized in relevant tables in a database; users can then access, retrieve, and update the data as needed.

So how can you create a database using SQL syntax? To create a database, just type the create and database keywords, followed by the name of your database. But what about removing or dropping a database? To drop a database, just type the keywords drop and database, then follow these keywords with the name of the database you want to remove. Let's look at these keywords in action. To demonstrate, let's create a second bookstore database using SQL syntax. First, I type the keywords create and database followed by the name of my new database; in this case, the database is called bookstore2_db. I always use a meaningful and relevant name when creating a new database, as this makes it easier to document my work. Database names must also be unique and can only have a maximum of 63 characters; if my database name doesn't meet these requirements, then an error message will appear. Finally, I add a semicolon to the end of the statement. Then I run the statement, and the new database bookstore2_db appears in the left-hand sidebar. So I've created a second database. I can also remove databases using SQL statements. First, I select the SQL tab, then I input my query in the code box that appears. I type the keywords drop and database followed by the name of the database I want to delete, in this case bookstore_db. Now I run the query, and SQL deletes the database. In this video, you learned how to create and delete databases using SQL syntax. Great work.

Building a database involves working with substantial amounts of data, but how do you organize your data so that you can find exactly what you need and make sense of it? With SQL, you can create a table within your database to hold and organize your data. In this video, you'll learn how to create tables in a database management system using SQL syntax. Let's begin with a look at the SQL create table statement syntax. I begin the statement with the keywords create table to let SQL know that I want to create a new table. I then add the name of the table I want to create. Finally, I add a pair of parentheses; within these parentheses, I type the name of each column that must be included in the table, followed by its respective data type. Now that you're familiar with the syntax, let's look at it in action. Be aware that before you can create tables, you must already have a database on the server. In other words, you can't build tables if there's no database to create them in, so let's assume that I already have a database ready to work with. In this example, I'll create a customer table in the bookstore database to store customer data like names and phone numbers. First, I write a SQL statement that contains the create table command followed by the name of the table, in this case customers. Then I add an open parenthesis; it's within this parenthesis that I need to create my columns. So I write the name of my first column, which is customer name, followed by the data type VARCHAR, which means that the column can hold variable-length string data, and then a numeric length value within a pair of parentheses. Then I add a comma and write the name of the second column, which is phone number. I add INT as the data type so that only whole numbers can be stored. Then I add a closing parenthesis and a semicolon. Finally, I execute the statement, and the customers table is now stored in the database. In this video, you learned how to create tables in a database using SQL syntax. Well done.
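For reference, here is one way the statements just described could be written out; the VARCHAR length of 100 is an illustrative assumption, since the lesson only mentions adding a numeric value in parentheses:

CREATE DATABASE bookstore2_db;

-- removing a database deletes it and everything stored inside it
DROP DATABASE bookstore_db;

-- a simple customers table inside the bookstore database
CREATE TABLE customers (
    customer_name VARCHAR(100), -- variable-length string data
    phone_number INT            -- whole numbers only
);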
At this stage you're probably familiar with creating tables and databases, but no table is ever static. Database developers are always restructuring tables: sometimes they need to add new columns, delete old ones, or edit the data they contain. You can complete many of these basic restructuring actions using SQL syntax. In this video, you'll learn how to alter a database table by adding and removing columns and how to modify the attributes of a database table. Let's start by exploring the SQL alter statement syntax. The first part of an alter statement is the alter and table keywords. These keywords inform the database that there is a table whose contents must be altered. This is followed by the name of the table to be altered. I then include the add keyword; this keyword tells SQL that there are one or more items to be added to the table. There are other keywords I could include here instead, but for the purposes of this example I'll work with add. Finally, I insert a pair of parentheses; within these parentheses, I declare the name of each new column to be added to the table along with its data type.

Now that you're familiar with the alter table statement, let's explore an example of the statement in action. However, before you can begin altering tables, you must already have a database on the server, so as always, make sure that you know how to create databases before proceeding, and ensure that you already have a table in your database with data that you can alter. In my example, I have a students table located within a database called college. My students table holds the ID, name, and email of each student in the college. I can demonstrate the alter statement by adding, deleting, and modifying columns in this table. My first task is to add three new columns to the table: age, nationality, and country. To add these columns using SQL syntax, I first type the alter table command followed by the name of the table, students. Next, I use the add command to let the database know that I want to add new columns to the table. Then I input a pair of parentheses that contain the columns I want to add, along with the type of data they'll hold. These columns are an age column, which holds data in integer format, and country and nationality columns, which hold varchar or string data. I also add a character limit of 50 to the country column's fields and a limit of 255 to the nationality column's fields. Then I execute this statement. I now have three new columns in the students table in the college database.

Country and nationality are very similar columns and in most cases will probably hold the same type of information, so I can write a SQL statement to remove the nationality column. Just like the last example, I start my statement with alter table students. Next, I type the drop column command followed by nationality; this command instructs SQL to delete or drop this column from the table. Then I run the statement. A notification message appears requesting confirmation of deletion, and I press OK to confirm. The nationality column has now been dropped. Now it's time to alter the structure of the table. The country column has a limit of 50 characters, just as I set it originally, but now I'm going to change it so that it holds 100 characters instead using the alter table command. I start with the syntax alter table followed by students, then I type the modify command, the country column name, and the varchar data type, and finally I add a pair of parentheses containing my new value of 100. I then execute the query, and my country column's limit has now been updated to 100 characters. In this video, you learned how to alter and modify tables in a database using SQL syntax. Well done.
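A rough sketch of the three alter statements described in this example, assuming the table and column names used above:

-- add three new columns to the students table
ALTER TABLE students
ADD (age INT, country VARCHAR(50), nationality VARCHAR(255));

-- drop the redundant nationality column
ALTER TABLE students
DROP COLUMN nationality;

-- widen the country column from 50 to 100 characters
ALTER TABLE students
MODIFY country VARCHAR(100);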
When working with databases, you'll often have to add new rows and columns to existing tables or even create new tables from scratch. For example, if you run a college database, you'll have to add new rows for every new student. With SQL, you can perform these tasks quickly using the insert statement. By the end of this video, you'll be able to identify and understand insert SQL syntax and insert data into tables with the use of the insert into clause. Let's begin with an exploration of the insert into syntax. To write an insert statement, first write an insert into clause, then specify the table name followed by a list of columns contained within a pair of parentheses and separated by commas. Then use the values keyword and write a list of values within a pair of parentheses. It's important to remember that each value corresponds to a specific column and so should reflect the same data type and order. You can also add multiple rows into a table at the same time: first write the insert into clause and table just like before, then use the values keyword and add multiple rows of values. Just make sure that each new row is separated from the previous one by a comma.

Now let's explore some examples of an insert statement. In this example, I'll use a table called players from a sports club database, and I want to insert new player data into this table. First, I write the insert into command followed by the name of the table, in this case players. Then I add the column names within a pair of parentheses. These columns must contain the basic information that the club requires about each player, so I name the columns ID, name, age, and start date. Next, I insert the values keyword and then add the values I want to assign to each column within a pair of parentheses. I start adding the data for a new player named Yuval, age 25, with an ID of 1 and a start date of 2020-10-15. It's important to use the correct format of year, month, and day when entering dates in a table; otherwise, an error message will appear. I can also use the current date function, followed by a pair of empty parentheses, next to my values as an alternative to typing out a date like the one I've entered for the new player Yuval. Now that I've listed all the values for Yuval, note that each value relates to a specific column: the number 1 corresponds to the player ID column, Yuval to player name, 25 to player age, and the date to start date. This means that the order in which I type my values within the parentheses is very important; otherwise, I might accidentally store these values in the wrong columns. It's also important to note that all non-numeric values are placed within quotation marks, just like Yuval and the date value in the statement. Finally, I execute the statement. The output now shows one row of data for Yuval, just as my code instructed.

But what if I wanted to insert multiple records of data into the table? Let's say that two new players have joined the team: the first player is Mark, age 27, with an ID of 2 and a start date of 2020-10-12, and the second player is Carl, age 26, with an ID of 3 and a start date of 2020-10-07. Both Mark and Carl must be inserted into the database. As you learned earlier, this is a very straightforward task. First, I write the insert into command, then I write the table name, players. Next, I type the ID, name, age, and start date columns within a pair of parentheses. Then I write the values keyword and insert two records of data; these data records are contained within two pairs of parentheses separated by a comma, one for Mark and another for Carl. Finally, I run or execute the statement. I then check my output, which shows that all three players are now in the table. So far I've explored how to add data to the table, but it's also possible to show existing data in the players table by executing the following SQL query. First, I type the select clause followed by an asterisk; this asterisk tells SQL to return all columns within the table. Then I type the from keyword and the name of the table. I execute the statement, and the output shows all data available in the players table. You can now identify and make sense of the insert syntax as well as insert new data into tables with the insert into clause. Good work.
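Here is a sketch of the insert statements described above, with underscored identifiers standing in for the spoken column names; the final statement, which uses the current date function in place of a typed date, is an illustrative extra with a hypothetical player:

INSERT INTO players (ID, name, age, start_date)
VALUES (1, 'Yuval', 25, '2020-10-15');

-- multiple rows in a single statement, separated by a comma
INSERT INTO players (ID, name, age, start_date)
VALUES
    (2, 'Mark', 27, '2020-10-12'),
    (3, 'Carl', 26, '2020-10-07');

-- CURRENT_DATE() supplies today's date automatically (hypothetical fourth player)
INSERT INTO players (ID, name, age, start_date)
VALUES (4, 'Example Player', 30, CURRENT_DATE());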
There will often be times when you'll need to query data from a table in your database. For example, you might need to retrieve a list of names from a table or return a set of results from a math calculation. You can perform these actions using the select statement, and that's what I'll demonstrate now. Over the next few minutes, you'll learn how to utilize the SQL select statement to query data from a table in a database and perform other SQL select tasks such as math calculations, date and time queries, and concatenation functions. To get started, let's explore the syntax of a SQL select statement. A basic SQL select statement is written as the select keyword, the name of the column that contains the data you need to query, then the from keyword, and finally the name of the table you want to query. For example, if I want to query data about the names of players from a soccer club database, I could use the following syntax: the select keyword, the player name column, the from keyword, and finally the name of the table, which is the players table. And although it's not mandatory, a semicolon is often added to mark the end of a SQL statement.

Let's take a closer look at how this statement works. As an example, I'll extract information from a table called players held in a soccer club database. This table records details about players in the soccer club, like ID, name, age, and skill level. I can use the SQL select statement on this table to obtain information on the club's players. The expected outcome of this select query is that it will return a result set that displays all player names held in the table. I can write the statement as select name from players: the select command is used to retrieve data, name is the column that stores the players' names in the database, from is a keyword that identifies the source table, and players is the table name. I then run the statement. The query returns a table column that lists the players' names from the players table, with each name on its own row. So in the example you just explored, I retrieved data from one column in a table, but what if I wanted to retrieve data from multiple columns? Maybe I need to retrieve the name and skill level of each player. I can obtain this information using a SQL select statement written as select name, level from players. I add a comma between name and level so that SQL understands there are two separate columns. I run the query and it returns the data from the name and level columns in the players table.

I could also use a select statement to retrieve all data from all columns in the players table. There are two methods I could use to achieve this. The first method is to list all column names in a standard select statement, as follows: select ID, name, age, level from players; once again, each column is separated by a comma. I then run the query and get all the requested information in table format. The second method is to use an asterisk as shorthand: instead of typing out all column names, I just type select asterisk from players. Then I run the query, and it returns all the information available in all table columns, just like in the first method. You're now familiar with how to use the select statement in MySQL, so next time you need to query data in your database, you've now got different methods to choose from.
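The four queries just demonstrated, written out as a quick sketch:

SELECT name FROM players;

SELECT name, level FROM players;

-- every column, listed explicitly
SELECT ID, name, age, level FROM players;

-- every column, using the asterisk shorthand
SELECT * FROM players;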
When working with tables, there might be instances where you need to retrieve information from one or more tables in order to populate columns in another table. You can complete these actions using the insert into select statement. Over the next few minutes, you'll explore the SQL syntax for these actions, and by the end of this video you'll know how to identify and understand insert into select syntax and insert data from a source table into a target table using the insert into clause. First, let's find out more about the insert into select statement. Essentially, the insert into select statement is used to query data from a column within a source table and place the results of that query in a column within a target table. For example, you could use an insert into select statement to query data in column C of the source table and place the results of that query in column B of the target table. So what does the insert into select statement syntax look like? Here's an example. First, type an insert into clause followed by the name of the target table and the name of the column you want to insert data into. Then type the select keyword with the name of the column you want to extract data from, and finally type a from keyword and the name of the source table that holds that column or source data.

To find out more about how this syntax works, let's explore an example of insert into select. To demonstrate the statement, I'll use tables from a soccer club database that contain important data about the club. However, before I begin querying this data, let's quickly review these tables. In this database, I have a table called players that holds the records of four different players in the team. I also have a table called country that holds information about the countries these players are from. But right now the country table is missing the names of the countries; in other words, it has no data. I can perform a SQL query using the insert into select statement to populate this missing data. Do you remember the example of the source and target tables from earlier in this video? In this instance, the player table is the source table that I need to query, and the country table is the target table in which SQL places the results from my query. So, to query data from my source table and populate my target table with it, I write an insert into select statement. Note that for the purposes of this demonstration, I've organized the player data in the player table in the same order in which it must appear within the country table. To perform this task, I first click on the SQL tab to open the code editor. Then I write an insert into command followed by the name of my target table, which is country table. I then state the name of the column that the data from my query must be inserted into within a pair of parentheses; in this instance, the column is called country name. Next, I type the select keyword and state which column I want to query within the source table, which is country. Finally, I type the from keyword and state the name of the source table I want to query the data from, which is player table. I add a semicolon to the end of my query, then run it. Now I select country table, my target table, from the database and check that the country name column has been populated with the correct data. You now know how to query tables using the insert into select statement. Well done.
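As a sketch, the query just described might be written like this, with underscored identifiers standing in for the spoken table and column names:

-- copy every country value from the source table into the target table
INSERT INTO country_table (country_name)
SELECT country
FROM player_table;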
I recently created a database table for a college called student table. It contains the following pieces of data on each student in the college: ID, first name, last name, home address, college address, contact number, and department. Let's use the update syntax to update the home address and contact number of the student assigned the ID of 3 in the table. I click the SQL tab in phpMyAdmin, then I use the update clause followed by the name of the table that I want to update, which is student table. Then I add the set clause followed by the names of the columns to be updated, which are home address and contact number. Next to the name of each column, I add an equals symbol and place the new values to be inserted into the table in single quotation marks. I also make sure to separate these column-value pairings with a comma. Finally, I add the where clause to identify the exact record I want to update. This record has a student ID column field that was assigned the value of 3, so I write where ID equals 3. Now that I've completed the syntax, I can select go. I then receive a message confirming that the change has been made, and when I check the table, it displays the updated values for the assigned columns alongside student 3.

So that's how you update the information for one student. However, the update syntax can also be used to update the information of multiple students at once. Let's suppose that the college's engineering department has moved their classes to a new location on campus called the Harper building, and I need to update the department's address in the table for all engineering students. I can perform this task using the update SQL syntax, and the syntax is very similar to the previous example. First, I use the update clause followed by the name of the table. Then I add the set clause and state that I want to update the values within the college address column to Harper building, so I type this as set college address equals Harper building. Next, I type the where clause and state that I want this update to occur for all students assigned the value of engineering in the department column. I then click go to run the statement. Now I just check that the table has updated the college address of the engineering department to Harper building. I could also use the update statement to update multiple field values in multiple records. For example, I can return to the previous SQL statement and add a new column-value pairing to the set clause: if I want to update the home address column, I add a comma and write home address equals the new address information within quotes, making sure to separate the column-value pairings with a comma. In this example, I'm updating two columns with one update statement.
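A sketch of the update statements described above; the address and phone values are placeholders, since the lesson doesn't spell out the exact data being entered:

-- update two columns for a single record
UPDATE student_table
SET home_address = '12 Example Road', contact_number = '0123456789'
WHERE ID = 3;

-- update one column for every engineering student
UPDATE student_table
SET college_address = 'Harper building'
WHERE department = 'engineering';

-- update multiple columns across multiple records in one statement
UPDATE student_table
SET college_address = 'Harper building', home_address = '12 Example Road'
WHERE department = 'engineering';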
In this example, I'll demonstrate how to delete a single record from a table in a database. I'm using the student table from a college database and deleting the record of a student with the last name of Miller. I go to the SQL tab in phpMyAdmin and write a delete statement beginning with the keywords delete from. Then I specify the table name as student table. I add a where clause and the condition to delete the data I want: I need the database to scan the list of students, identify the last name value of Miller, and then delete that record from the table. So I type where followed by last name equal to Miller. I then run this statement by pressing the go button. I get a confirmation that the record for Miller has been deleted from the database, and I can then access the student table on the left panel to ensure that the record, or instance, of Miller has been removed.

Let's explore another example, this time by deleting multiple records from the student table. Now I want to delete the records of the two students within the engineering department. The beginning of the statement is the same as the previous example: I begin with the delete and from keywords followed by the name of the table I'm working with, which is student table. The where clause is the key difference in this example. I type where followed by department equal to engineering. This instructs SQL to identify all records that have a value of engineering within the department column and remove those rows from the table. But I need to be careful: if I don't correctly specify the where clause, then all the records in the table will be deleted. Now that I've completed my statement, I select go to run it. I then check the table by clicking it on the left panel and confirm that the records for the two engineering students have been deleted from the table.

Finally, let's quickly explore how to delete all records from a table. In this task, the syntax remains largely the same as in the previous examples. The key difference is that I remove the where clause, so that now my syntax just states delete from student table. In other words, I'm instructing the database to remove all records from the student table. Then I click go and confirm the deletion. Once the deletion has been confirmed, I check the student table, which is now empty. I now know that all records have been deleted.
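The three delete statements from this demonstration, sketched out:

-- delete a single record by last name
DELETE FROM student_table WHERE last_name = 'Miller';

-- delete every record matching a condition
DELETE FROM student_table WHERE department = 'engineering';

-- with no WHERE clause, every record in the table is removed
DELETE FROM student_table;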
You've reached the end of this module on CRUD operations. In this module, you've discovered how to create, read, update, and delete data within a database. You've also examined different SQL data types like numeric and string types and default values. It's now time to recap the key topics you learned and the skills you gained. You began the module with an introduction to SQL data types. Following the completion of this first lesson, you can now identify and understand the numeric data type so that you can store data as numbers in a database, utilize numerical data types in a database, and differentiate between integer and decimal data types so that you can store numerical values of different sizes, including positive and negative ones. Following your exploration of numerical data types, you then moved on to investigate string data types. During your investigation, you learned how to identify and understand string data types in a database, demonstrate how to use string data types, and outline the key differences between CHAR and VARCHAR data types. The final concept that you learned about in this lesson was default values. Now that you've completed the lesson on default values, you're able to demonstrate an understanding of the concept of constraints in a database with SQL syntax and identify the default constraint used to set default values in a table. As part of this lesson, you also worked through a series of exercises focused on SQL data types. Having successfully completed these exercises, you can now demonstrate your ability to work with numeric data types, string data types, and default values, and outline how to select the correct data type for your data.

Once you had completed your study of SQL data types, you then moved on to explore the topic of creating and reading data in a database. Now that you've completed this lesson, you're able to create a database, create tables, alter database tables, and drop a database. You can also demonstrate how to create a table in a database with SQL syntax, alter table structure using the SQL alter table statement, and insert or add data into a table with the SQL insert statement. In addition to these new skills, you also learned how to retrieve data from tables with the SQL select statement and insert data queried from one or more tables into another target table using the insert into select statement. You also had an opportunity to demonstrate these new skills through a series of exercises; in these exercises, you proved your ability to create a database and a table, populate the table with data, and manage data within your tables using the select and insert into select statements. You then moved on to the final lesson, in which you explored how to update and delete data. Having completed this lesson, you can now demonstrate knowledge of the SQL update statement and utilize the update statement to update both single and multiple field values of a record. Finally, you also proved your abilities with these new skills by completing an exercise in which you deleted records from a table. You're now familiar with the create, read, update, and delete operations, and you're also skilled in the use of SQL data types. Great work. You're making good progress on your journey toward becoming a database engineer.
Imagine a corporate database with information about hundreds of employees. How can you calculate important things, such as salary increases or changes to allowances, for all the employees accurately and efficiently? With SQL, you can use arithmetic operators to make these adjustments. By the end of this video, you'll understand and be able to describe SQL arithmetic operators and know how to use these arithmetic operators to perform functions in SQL. But first, what exactly are operators? In SQL, operators are specific words or characters that help you to perform different activities in a database. They're like the conjunctions or connecting words you would use to compose a sentence, or the operation keys used to perform a sum on a calculator. So why do you need to know about operators? Well, when handling data in a database, at some point you'll need to query and manipulate data for different purposes, and SQL operators allow you to manipulate data as necessary to perform these different activities in the database. For example, you can use an arithmetic operation to calculate how many leave days an employee has left, or you can compare whether employees are meeting company targets. There are various types of operators in SQL, each with different functions.

Let's explore a few examples of arithmetic operators. Arithmetic operators are commonly used in computer languages to perform a calculation and return a result, much like common arithmetic operators in mathematics, and you can use arithmetic operators in SQL to carry out mathematical operations in a database. The SQL arithmetic operators and their symbols are: plus for addition, minus for subtraction, asterisk for multiplication, forward slash for division, and percentage for modulus, which provides the remainder value of a division calculation. So how do SQL arithmetic operations work? When performing a calculation, an operator takes two operands and returns a result. For example, an addition operator can take five as both of its operands and return 10 as its result. In SQL, you can apply the same concept by using the select command for the various operations. Let's illustrate this concept using the addition operator: you can use the select command followed by one operand, the addition operator, and the second operand, and just like the previous example, SQL calculates the two operands and produces the result. You can repeat the SQL syntax with the other arithmetic operators: with a subtraction operator the output result is zero, a multiplication operator returns a result of 25, the division operator calculates the result as one, and with a modulus operator the result is zero, as five divided by five equals one with no remainder.

Now let's take a closer look at how to use these arithmetic operators in SQL. I'll demonstrate how to use arithmetic operators in SQL to perform basic mathematical operations. Let's try an addition operation to add two numbers. First, I use the select command and then type the numbers 10 and 15 separated by a plus operator, followed by a semicolon. Although the semicolon is optional in this case, I prefer to still use it, as it represents the end of a SQL statement. The select command retrieves the value, which is the sum of 10 plus 15, and displays it on screen. Let's run this query. The query produces the result of the example addition operation, which is 25. Just as I performed this addition operation, I can do the same with the subtraction, multiplication, division, and modulus operators: I can use the minus sign for subtraction, an asterisk for multiplication, a forward slash for division, and the percentage sign for modulus. For example, I can type select 100 modulus 10. This divides 100 by 10 and gives me the remainder of the division operation; in this case, the remainder is 0, as 100 divided by 10 equals 10 with no remainder. So I run the query, and the remainder of 0 is displayed. And that's how you can use the operator symbols for different basic operations in SQL. You've learned about SQL arithmetic operators and how to perform basic operations with them in SQL. You're now ready to learn how to apply these arithmetic operators in more practical ways. Awesome work.
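The basic operator queries described above, sketched with their expected results as comments:

SELECT 10 + 15;  -- addition: 25
SELECT 5 - 5;    -- subtraction: 0
SELECT 5 * 5;    -- multiplication: 25
SELECT 5 / 5;    -- division: 1
SELECT 5 % 5;    -- modulus: 0 (no remainder)
SELECT 100 % 10; -- modulus: 0 (100 divides evenly by 10)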
So far, you have learned how to use SQL arithmetic operators to perform basic functions. In this video, you'll take one step further by learning how to use these arithmetic operators in practice when working with a database. Let's explore the use of arithmetic operators using the example of a corporate table of employee data. It's populated with the following information about each employee: ID, name, and salary. I'm going to demonstrate how to use each of the arithmetic operators in SQL to perform various functions with this practical data and retrieve the desired results.

In this first example, let's say that an employer wants to give each employee on their team a 500 dollar bonus, but first they'd like to see what each employee's salary looks like after adding the 500 dollars. I can type select salary plus 500 from employee to add the 500 dollar bonus to each employee's salary. The select command works by retrieving the salary of each employee from the employee table and then adding 500 to each value. Let's execute the query by clicking the go button. The result is that each employee's salary has increased by 500. You can create other statements that use the select command in a similar way. For example, let's say the employer wants to deduct 500 from the salary of each employee. To do this, I type the SQL query statement select salary minus 500 from employee. Again, I'm using the select command to retrieve the salary value of each employee record stored in the employee table; this time, I use the subtraction sign, minus, to subtract 500 from the salary of each employee. Now I click the go button, and the database returns a table that shows the output result of the salaries after the deductions. In the next example, let's imagine a scenario where the employer would like to increase the employees' salaries by doubling their current annual salary. To do this, I use the select command to perform the same function as discussed previously. I then use the multiplication sign, an asterisk symbol, to multiply the salary value of each employee by 2, so the SQL statement now reads select salary asterisk 2 from employee. I can now click the go button to generate an output of the salaries after multiplying the values by two. Now let's suppose the employer needs to determine the monthly salary of each employee. I can perform this task using a SQL query statement as I have done in the previous scenarios; however, this time I'll divide the annual salary by 12 months to determine the monthly salary. My statement reads as follows: select salary forward slash 12 from employee. The select command retrieves the salary value for each employee stored in the employee table, then the division sign, or forward slash, divides the annual salary value by 12. So I click the go button to generate the output results, and the employer can now use this output to determine what each employee's monthly salary is.

In our final example, the employer would like to know if the ID of each employee is an even or odd number. I can use the modulus operator to complete this task. I just need to type the following statement: select ID modulus 2 from employee. I then click go to execute the statement. This divides each employee ID by 2 and returns the remainder of the division operation. In this case, a remainder of 0 denotes an even ID, as all even numbers are divisible by 2, meaning there will be no remainder. The result shows that there are three odd IDs and two even IDs. You should now know how to make use of the SQL arithmetic operators in a database. You're making great progress, keep it up.
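The five practical queries from this example, written against the employee table described above:

SELECT salary + 500 FROM employee; -- preview a 500 dollar bonus
SELECT salary - 500 FROM employee; -- preview a 500 dollar deduction
SELECT salary * 2 FROM employee;   -- doubled annual salary
SELECT salary / 12 FROM employee;  -- monthly salary
SELECT ID % 2 FROM employee;       -- 0 means an even ID, 1 means odd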
Imagine you're running a database for a soccer club. As a database engineer, there's a lot of work required to manage this database; for example, you might need to categorize players into groups according to their ages. How can you complete this kind of task? You can use SQL comparison operators. Over the next few minutes, you'll learn how to explain the concept of SQL comparison operators and utilize SQL comparison operators in a database. So what are SQL comparison operators? Comparison operators are used to compare two values or expressions, where the outcome can be either true or false. They can be used to filter data and to include and exclude data. So how are these operators used in SQL? SQL uses the common mathematical comparison operators by means of the symbols equal to, less than, and greater than. It also uses less than or equal to, greater than or equal to, and not equal to.

Now let's explore how to use these comparison operators and their symbols in a practical way using a database. To demonstrate the use of SQL comparison operators, I'll use the example of an employee table from a company database. This table includes information on each employee's ID, name, and salary. Now let's assume the employer wants to extract relevant data from the employee table about the employees' salaries for different purposes; each instance of data extraction will require a different comparison operator. In the first example, the employer wants to identify all employees receiving a salary equal to eighteen thousand dollars per year. I can identify these employees using the equal to operator. First, I click the SQL tab in the main menu, then I write select asterisk from employee where salary equal to symbol eighteen thousand. Let's break the SQL statement down: the select command is used to retrieve data, the asterisk denotes that I am selecting data from all columns, the from keyword and the table name specify where the data will come from, and then I define the condition using the where clause. In this case, the condition uses the equal to symbol to check if the salary value in each record of the table is equal to eighteen thousand dollars; if the result is true, then the data is retrieved. So I run the query and generate an output. The output result of this query is that the employees Carl and John earn eighteen thousand dollars per year. You can apply the other comparison operators in a similar way to perform different functions. Let's take another example to find out more.
In this next example, the employer needs to know which employees are receiving a salary that is less than twenty-four thousand dollars per year. This task requires a different operator. To find this information, I can write select asterisk from employee where salary less than symbol 24000. This SQL statement utilizes the less than symbol to check whether the value stored in each field of the salary column is less than twenty-four thousand dollars. Once again, I select the go button to execute the query and generate an output. The output of this query is that the employees Carl and John earn less than twenty-four thousand dollars. Let's take another example, where the employer needs to determine which employees receive a salary that is less than or equal to twenty-four thousand dollars per year. In this case, I need to write the following query: select asterisk from employee where salary less than or equal to symbol 24000. The only thing in this statement that has changed from the previous example is the operator symbol. This less than or equal to symbol tells the SQL statement to check whether the value stored in each field of the salary column is less than or equal to twenty-four thousand dollars. I click the go button to execute the query, and the output results show that there are four employees who earn less than or equal to twenty-four thousand dollars per year. What if the employer wants to know which employees receive a salary that is greater than or equal to twenty-four thousand dollars per year? To generate these results, I can use the greater than or equal to operator in my SQL statement, so I write the following SQL query: select asterisk from employee where salary greater than or equal to symbol 24000. This time, the greater than or equal to symbol is used to check whether the value stored in each field of the salary column is greater than or equal to twenty-four thousand dollars. I click go to execute the query, and the output shows that there are three employees who earn twenty-four thousand dollars or more per year. The final comparison operator available in SQL is the not equal to operator. In this final example, the employer wants to know which employees receive a salary that is not equal to twenty-four thousand dollars per year. I can determine this using the following SQL code: select asterisk from employee where salary, then I type the less than and greater than symbols to denote the not equal to operation, and then I type 24000. As with the previous operators, the SQL statement utilizes the operator to check the values stored in each field of the salary column; in this case, it checks for values that do not equate to twenty-four thousand dollars. The output results of this query show that there are three employees whose salaries are not equal to twenty-four thousand dollars per year. You should now be able to describe comparison operators and use them in a database in SQL. Congratulations on building another important database skill.
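The five comparison queries from this example, sketched out:

SELECT * FROM employee WHERE salary = 18000;  -- equal to
SELECT * FROM employee WHERE salary < 24000;  -- less than
SELECT * FROM employee WHERE salary <= 24000; -- less than or equal to
SELECT * FROM employee WHERE salary >= 24000; -- greater than or equal to
SELECT * FROM employee WHERE salary <> 24000; -- not equal to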
There are several clauses available in SQL for sorting and filtering data in a table. One of the most useful of these is the order by clause. With this clause, you can reorder the data in a table by one or more columns. For example, in a table that holds data on students in a college, you could sort the data by date of birth; the table would then present all students in order from oldest to youngest. By the end of this video, you'll be able to demonstrate the purpose of the order by clause for sorting data, explain the different forms in which the order by clause can be used to sort data, and describe how the ascending and descending keywords behave when used on sort columns. Let's begin with an exploration of the purpose of the order by clause. The order by clause is an optional clause that can be added to a select statement. Its purpose is to help sort data in either ascending or descending order; for example, you can sort a list of student names in alphabetical order from A to Z or vice versa.

To get a better understanding of how the order by clause works, let's explore the syntax in its most basic form. The syntax of the order by clause is as follows: it begins with a select statement, then a list of the columns to be returned, with each one separated by a comma. Next is the from keyword followed by the name of the table to be sorted. Finally, the order by clause is added, followed by the name of the column to be sorted. At the end of the column name, state how you want the data to be sorted: you can do this by specifying ASC for ascending order or DESC for descending. But the order by clause doesn't limit you to just one column; you can also use this syntax to order the data by multiple columns. The syntax for sorting by multiple columns is very similar to that used for a single column. The key difference for multiple columns is that you must type the name of each column after the order by clause; just make sure that you separate each column with a comma and specify whether you want to sort each column in ascending or descending order. It's also possible to specify all columns after the select keyword by using an asterisk, which is a much easier method than listing all columns one by one. Finally, it's also important to note that the type of data in your table affects how it is sorted: if a column has a numeric data type, the records will be sorted in ascending or descending numerical order, and if a column has a text-based or string data type, then it will be sorted in ascending or descending alphabetical order.

Next, let's explore some examples of the order by clause in a SQL statement. Let's begin with an example of ordering data by a single column. For this example, I'll use a table that lists details of students in a college. I need to sort, or order, this data in ascending order of each student's country of nationality, so in this instance the order by column must be the nationality of each student; each student's nationality is listed in the table's fifth column. I begin by writing the select statement followed by the names of the columns that I want in the result: ID, first name, last name, and nationality. Then I write the from keyword followed by the name of the table, which is student table. Next, I type the order by clause, then I specify the name of the column by which I want my data to be sorted, which is nationality. Finally, I type ASC so that the data is sorted in ascending order. I then execute the statement. All students in the table have now been sorted according to nationality in ascending order. Note that even if I were to omit ASC from the end of my code, I'd still get the same result; this is because the order by clause sorts all data in ascending order by default. Let's run the same statement, but this time using DESC, or descending, instead of ASC. All students in the table have now been sorted by nationality for a second time, but in this instance they've been sorted in descending, or reverse alphabetical, order. Finally, let's explore an example of sorting data by multiple columns. In this example, I'll sort each student by nationality and date of birth. I write the select statement, then I write the names of the columns I want in my result: ID, first name, last name, date of birth, and nationality. I then write the from keyword and student table, the name of the table. Next, I type the order by clause and specify the names of the columns by which I want my data to be sorted, which are nationality and date of birth. I then add ASC after nationality so that the data is sorted in ascending order of nationality, and I add DESC after date of birth so that the data from this column will be sorted in descending order. Then I run the statement. This returns my table with the data for the specified columns organized as instructed, which is alphabetically for nationality and youngest to oldest for date of birth. This was a short introduction to the SQL order by clause. You can now demonstrate the purpose of the order by clause for sorting data, and you can also explain the different forms in which the order by clause can be used to sort data. Great work.
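The two order by statements from this example, sketched with underscored column names:

-- single sort column (ASC is the default and could be omitted)
SELECT ID, first_name, last_name, nationality
FROM student_table
ORDER BY nationality ASC;

-- multiple sort columns, each with its own direction
SELECT ID, first_name, last_name, date_of_birth, nationality
FROM student_table
ORDER BY nationality ASC, date_of_birth DESC;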
An admin department at the university wants to create different reports for students in the engineering faculty. The department needs to filter out the right students from the engineering faculty to retrieve their details from the student database. So how can this be done with SQL? Well, the where clause is useful in scenarios like this. In this video, you'll learn how to explain the purpose of the where clause, demonstrate how to filter data using the where clause, and make use of different operators in the where clause condition. So what is the where clause? The where clause is used to filter data; more specifically, it is used to filter and extract records that satisfy a specified condition. To better understand how the where clause is used, it may help to break down its syntax in a SQL select statement. The syntax begins with the standard SQL select statement followed by the columns you want to query. Next is the from clause followed by the table name, and then you can bring in the where clause. After the where clause, you can specify a condition. You may be wondering what the purpose of the condition is. Well, the condition makes it possible to filter out and fetch the required records from the table. You can think of the condition as filter criteria: only the records that meet the condition will be retrieved. For example, you can use the condition, or filter criteria, to check if the desired column is equal to a certain value or operand. Between the column and the value, you place an operator, and as you've just discovered, the operand follows the operator.
Let's take a quick look at this in more detail. The operand can be either a text value or a numeric value; it all depends on the data type of the table column or field. To demonstrate, let's take an example where student ID equals 01. In this case, the condition is instructed to filter a numeric value, so it functions as the filter criteria. Once you run the SQL select statement, it retrieves the records as instructed. Let's take another example where first name equals John, a text value. All text values must be enclosed in a pair of single quotes. Once again, you just run the SQL select statement and it filters the required records. To specify your filter condition, you can make use of a wide range of operators. You've just reviewed an example of the equals operator, and you may have encountered others in a previous lesson, so let's quickly review these other operators. SQL comparison operators include equal to, less than, and greater than; there's also less than or equal to, greater than or equal to, and not equal to. In addition to these symbols, the where clause can also use the between, like, and in operators. With the between operator, you can filter records within a specific numeric or time and date range. The like operator is used to specify a pattern in the where clause filter criteria, and the in operator is used to specify multiple possible values for a column.

Now let's explore some examples of the where clause in select statements. Recall the scenario of the admin department that wants to create reports for its engineering and science students. I can use the where clause to filter out the details of students who are in the engineering faculty. In this case, I need to retrieve all details, or all columns, from the student table, so I write select asterisk from student table. Next, I type where followed by the filter criteria. The criteria is written as faculty, then an equal operator, and finally engineering enclosed in single quotes, which are required for text values or operands. So I'm instructing SQL to fetch only the details of the students who are attached to the engineering faculty. Then I run the query. As per the filter condition, it retrieves the records of the three students in the student table listed in the engineering faculty. Note that I could have used other operators, such as greater than, less than, less than or equal to, greater than or equal to, and not equal to, in the same way as the equal to operator in this where clause condition; you can use any of these operators with numeric values or operands.

Now let's review some examples that use the between, like, and in operators in the where clause condition. The college has a financial aid program available to students of a certain age, and the funding department would like to notify eligible students only. I can use the between operator in the where clause condition to filter the records in the student table. As before, I type select asterisk from student table followed by where. After the where keyword, I specify the filter column as DOB, or date of birth. Then I insert the between operator, and lastly I give the date range as the first of January 2010 and the 30th of June 2010. Running this query retrieves the records of four students whose date of birth falls in the specified date range. Note that I could use any numeric range here, not just dates. For the next example, let's assume the admin department requires the details of the students who are in the science faculty. I can do this with the like operator, which can be used when you want to specify a pattern in the where clause filter criteria. Within the select statement, and after the where keyword, I type faculty for the faculty column, then the like operator, followed by SC and a percentage sign within single quotes. The percentage character in the pattern is a wildcard character that represents zero, one, or multiple characters; the underscore sign can also be used to represent one single character. In this case, my where clause asks MySQL to search for and filter out values within the faculty column that start with SC followed by any number of characters. So I run the statement and it filters out the five student records whose faculty column has a value of science, that is, starting with the pattern SC. In the final example, the admin department needs to know the details of the students who are studying in specific locations. You can use the in operator in the where condition to retrieve the relevant student records; remember, the in operator is used to specify multiple possible values for a column. Within the select statement, and following the where keyword, I type the column name, which is country, then the in operator, then an open pair of parentheses. Within these, I place the values USA and UK, each in single quotes. My select query will filter out all student records that have a value of USA or UK in the country column. Running this query returns four records: two students from the USA and two students from the UK. So the in operator searches for multiple possible values in the country column and filters based on them. Note that although the examples in this video looked at the where clause in the select statement, it can also be used in other statements, such as update and delete. You've now learned what the where clause is, and you should now know how to use it to filter data as well as how to use different operators. Great work.
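The four where clause queries from this example, sketched with assumed column names (dob and faculty, as spoken in the lesson):

SELECT * FROM student_table WHERE faculty = 'engineering';

-- between filters on a numeric or date range
SELECT * FROM student_table
WHERE dob BETWEEN '2010-01-01' AND '2010-06-30';

-- like matches a pattern; % stands for any number of characters
SELECT * FROM student_table WHERE faculty LIKE 'Sc%';

-- in matches any of several listed values
SELECT * FROM student_table WHERE country IN ('USA', 'UK');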
Suppose you have a database that contains the records of college students from all over the world. As part of an annual report, a list of all the different countries these students belong to is required. It's very likely many students will come from the same country, so how can you retrieve the results you're looking for without any duplicates? Look no further than the select distinct clause. In the next few minutes, you'll learn how to describe the select distinct statement and explain what it's used for, demonstrate how to use it in a SQL query, and explain how it interacts with a single column, multiple columns, and null values in a few practical examples. Let's start by exploring what the select distinct statement is. In its most basic form, distinct, as its name states, returns only distinct or different values; in other words, it returns the results without any duplicates. Let's take a closer look at duplicate values. As you can imagine, columns in a table can often contain duplicate values. In a college's student records, for example, the country column will likely contain duplicate values, as there can be many students who are from the same country. Let's assume you want to find out which countries the students in the college are from, so you can get an understanding of which nationalities are represented in the college. You can begin by using a SQL select statement: you can write the select keyword, then country, followed by the from keyword and the student table name. Running this select query gives you seven records as the result, with multiple duplicate records; in this case, there are duplicate records for Australia and the USA. So how can you eliminate these duplicates and retrieve a unique set of results? You can use the select distinct statement. You can write a select statement just like before, but this time add distinct after the word select; the word distinct will return all unique values in the table with no duplicates. You can then write the from keyword followed by the student table name. Once you run this statement, each country now appears only once in the resulting records, and all the duplicates have been removed. This is how the select distinct statement can be used to return distinct values from one column, in this case the country column.

Now let's take a few moments to explore the select distinct statement in action. The examples that follow focus on the select distinct statement with the use of multiple columns, or when applied to a column that has a null value. With the student table in this example, I want to write a query to determine which countries are represented by students in different faculties. I can use a select distinct statement as before, but this time I'll add the word faculty before country. Running this statement produces six records: the science faculty has students from three different countries, as does the engineering faculty. So with this statement, which uses multiple columns, I've generated each unique faculty and country combination. Now let's return to the table once more and examine how select distinct deals with null values in columns. In this example, I have a new student named Julia Smith from the USA. She's not yet been assigned a faculty or school address, so as a result, both fields within these columns assigned to Julia Smith contain a value of null. Let's see what happens when I run the same select distinct statement as in the previous example: how does it handle the null values, or in other words, what results does it return for Julia? I select go and receive the same result as last time, but now there's also a record for Julia with a null faculty value and USA as the country. This is because the distinct clause considers null to be a unique value, so it outputs null and USA as a unique faculty and country combination. In this video, you learned how to use the select distinct statement to eliminate duplicate values in a select query result. You also observed how it behaves in response to values in a single column and multiple columns, and to null values in columns. Great work.
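A sketch of the three queries just described, against a hypothetical student_table:

-- plain select: duplicate countries appear in the result
SELECT country FROM student_table;

-- distinct on one column: each country appears only once
SELECT DISTINCT country FROM student_table;

-- distinct on multiple columns: unique faculty and country combinations,
-- where NULL is treated as a value of its own
SELECT DISTINCT faculty, country FROM student_table;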
You're at the end of the module on SQL operators and sorting and filtering data. Well done. Having a lot of data stored in the database is great; making sense of all that data is even better. That's why using SQL to manipulate data is a much sought-after skill. You might remember that SQL operators can accomplish tasks such as arithmetic and comparisons, and data can be filtered using the where clause and sorted using the order by clause. Just to finish up, let's recap the key points of this module. Here at the end of this module, you should now be able to demonstrate the SQL arithmetic operators in tables, demonstrate the SQL comparison operators in a database, describe the purpose of an order by clause, and demonstrate ascending and descending sorts by single and multiple columns. In the videos on filtering, the where clause was used with the SQL select statement to filter records. For the where clause, you should now be able to explain its purpose, describe how it is used to filter data, and demonstrate using the SQL where clause with comparison operators. Finally, you also explored the select distinct clause. Following this, you should now be able to explain the purpose of select distinct, describe how it is used to eliminate duplicate values, and demonstrate using select distinct with a single column, multiple columns, and null values. After this module, you should now be able to perform some SQL operations on the data within your database. That's one of the first steps to giving real value to that data: with your SQL skills, the database is now no longer just a store, it's also something you can investigate and draw conclusions from.
Before developing a database through a software application, you first need to plan how you'll organize your data. This plan is referred to as a schema; it's essentially a blueprint of what your data looks like. In this video, you'll learn how to explain the concept of a database schema, identify the different meanings of the term schema across different database systems, and outline the advantages of a database schema. Let's begin by focusing on what developers mean when they use the term schema. The general meaning of a schema is that it's an organization or grouping of information and the relationships among the pieces of that information. In the context of a MySQL database, a schema means a collection of data structures, or an abstract design of how data is stored in a database; essentially, schema and database are interchangeable terms within MySQL. A schema is how data is organized in the database and how it's related to other data. But schema is defined in different ways across different database systems. In SQL Server, a database schema is a collection of different components like tables, fields, data types, and keys. In PostgreSQL, a database schema is a namespace with named database objects like views, indexes, and functions. An Oracle schema system assigns a single schema to each user; Oracle even names each schema after its respective user. But no matter which type of database you encounter, the two most important concepts you need to understand when working with schemas remain the same: the organization of data in the form of tables and the relationships between the tables.

Let's now cover the components of a database schema. A SQL Server schema is comprised of what are known as schema objects. Many of these objects will probably already be familiar to you from your study of databases; they include tables, columns, relationships, data types, and keys. An example of a SQL database schema is a music database with data on artists, albums, and genres all stored in separate tables; however, these tables can still be related to one another through various keys. In other words, the data within this database is organized in separate tables, or entities, yet the tables are also related to one another. Essentially, a database schema is comprised of all the important data and the relationships between it, the unique keys for all entries and databases, and a name and data type for each column in a table. So now that you're familiar with what a database schema is, let's move on and explore the advantages of a database schema. Schemas provide logical groupings for database objects. They also make it easier to access and manipulate these database objects than other available methods. Schemas also provide greater database security: you can grant permissions to separate and protect database objects based on user access rights. And finally, it's possible to transfer ownership of schemas and their objects between users and other schemas. In this video, you've learned that a database schema is a structure that represents the storage of data in a database. You also now understand how the meaning of schema changes across different database systems. Lastly, you explored the advantages of a database schema.
access rights and finally it's possible to transfer ownership of schemas and their objects between users and other schemas in this video you've learned that a database schema is a structure that represents the storage of data in a database you also Now understand how the meaning of schema changes across different database systems lastly you explore the advantages of a database schema by the end of this video you'll know how to create a simple database schema using SQL you'll do this by building the schema for a shopping cart database consisting of three tables let's start by creating a new database called shopping cart DB first I type the create database keyword followed by the name shopping cart DB then I run the statement the shopping cart DB database appears in the left hand Explorer now I can create the tables inside this database first I need to create the customer table which stores the following information on each customer customer ID name address email and phone number to create this table I use the create table keyword and then I type customer followed by parentheses in the parentheses I specify the fields and their data types as follows the customer ID data type is integer while the others are varchar I give the name and email Fields a character limit of 100. I assign a character limit of 255 for address and I assign a limit of 10 characters for phone also note that I've used the primary key keyword on the customer ID column this designates that field as the primary key of the table a role you'll learn more about soon next is the product table which stores the product ID name price and description I can specify this table as follows the product ID has the integer data type the name is a varchar with a 100 character limit the price has a numeric type with parameters of 8 and 2. 
the description is varchar with a 255 character limit and the product ID is set as the primary key within this table and finally there's the cart order table which holds the order ID customer ID product ID quantity order date and status this table is set up as follows the order ID customer ID product ID and quantity are all integer types order date is date and status is varchar with a 100 character limit order ID is the primary key here however this table also introduces something new in the form of two foreign keys before moving forward let's quickly discuss what primary and foreign keys are you may have noticed that the cart order table contains the customer ID and product ID fields these same fields are also found in the other two tables this is because these fields in the cart order table are directly linked to the same fields in the corresponding tables to establish this relationship each table must contain a primary key the referencing table then uses foreign keys that point to the external source table or the referenced table you'll learn about primary and foreign keys in greater detail in a later lesson but for now let's return to the shopping cart database example all primary keys have been set up so the foreign keys for the cart order table come next I create the first one by using the foreign key keyword along with the customer ID column name to link to the customer ID field in the customer table I then use the references keyword followed by the source table name customer and then customer ID in parentheses creating a foreign key for product ID is similar but with product and product ID as references so I use the foreign key keyword and name it product ID I then reference the source table product and then product ID then I execute these statements and the tables appear nested beneath shopping cart DB in the left hand explorer in this video you learned the steps for creating a simple database schema using SQL the same process applies for both small and large scale databases
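For reference, here is a sketch of what the full set of statements described above could look like, assuming MySQL-style syntax; names like shopping_cart_db are illustrative stand-ins for the names used on screen:

    CREATE DATABASE shopping_cart_db;

    CREATE TABLE customer (
        customer_id INT PRIMARY KEY,
        name VARCHAR(100),
        address VARCHAR(255),
        email VARCHAR(100),
        phone VARCHAR(10)
    );

    CREATE TABLE product (
        product_id INT PRIMARY KEY,
        name VARCHAR(100),
        price NUMERIC(8, 2),
        description VARCHAR(255)
    );

    -- the order table references the other two tables via foreign keys
    CREATE TABLE cart_order (
        order_id INT PRIMARY KEY,
        customer_id INT,
        product_id INT,
        quantity INT,
        order_date DATE,
        status VARCHAR(100),
        FOREIGN KEY (customer_id) REFERENCES customer (customer_id),
        FOREIGN KEY (product_id) REFERENCES product (product_id)
    );

Running the statements in this order matters, because the referenced tables must exist before the foreign keys that point to them.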
when creating your databases you need to be able to distinguish between different kinds of database schemas in other words you need to answer the question of what kind of database best suits my project over the next few minutes you'll explore some different types of database schemas and by the end of this video you'll be able to explain the concept of a logical database schema and outline the concept of a physical database schema let's begin by exploring the logical database schema a logical database schema is how the data is organized in terms of tables in other words it shows what tables should be in a database and explains how the attributes of different tables are linked together creating a logical database schema means illustrating relationships between components of your data this is also called entity relationship or ER modeling it specifies what the relationships between entity types are let's take the example of a simple ER model that shows the logical schema of an ordering application it demonstrates the relationship between an order the shipment in which it will be shipped and the courier assigned to it the ID attribute in each table is the primary key of the respective entities it provides a unique identifier for each entry row or record in the entities in the order entity the shipment ID and courier ID are called foreign keys but in fact they're also the primary keys of the shipment and courier entities respectively this creates a relation between these entities and the order table which in turn has its own ID as its primary key the other type of schema is the physical schema the physical schema is how data is stored on disk in other words this involves creating the actual structure of your database using code in MySQL and other relational databases developers use SQL to create the database tables and other database objects for example you can create a physical schema for an online store database by writing SQL statements to create tables for customers products and transactions however physical schema creation could differ slightly between different database systems database schemas are vital when it comes to the creation of databases and they form the basis of your application you should also be able to describe how a logical database schema refers to the organization of data in tables and that you use an ER model to specify relationships between entities and you should also now know that you can control how data is physically stored on disk by creating a physical schema with SQL statements at this stage of the course you'll spend some time exploring the relational model for databases however it's crucial that you have a proper understanding of how the relational model influences the design and structure of a database and how it helps to build relationships between tables once you understand how your database is structured then you can determine how best to extract information from it over the next few minutes you'll learn how to outline the basics of the relational model identify the different relationships between tables and explain the basics of an ER diagram to understand how the relational model influences our databases let's take the examples of two tables from a college database the first table shows a list of students along with assigned student and course identification numbers and the second table lists courses that students can study along with the ID for each course and its department so the big question in this example is which student is studying what course and is each student studying one or multiple courses these are basic examples of why it's important to structure and connect tables correctly there are three types of relationships between any two tables in a relational database one to many one-to-one and many to many let's begin with an exploration of the one-to-many relationship in a one-to-many relationship a record of data in a row of one table is linked to multiple records in different rows of another in the student table a student with the ID of 1 is enrolled in two courses on the course table so a one-to-many relationship can be drawn between these tables this relationship can also be illustrated in a basic entity relationship diagram or ERD a student is enrolled in many courses using shapes and symbols the diagram depicts the two entities student and course in rectangle shapes with enrolled to describe the relationship in a diamond shape and many is depicted using the crow's foot notation symbol the relationship can also be illustrated using a more complex ER diagram that depicts keys course ID in the student table is a foreign key or FK and this references the primary key or PK course ID column that exists in the course table next let's take a look at one-to-one relationships in one-to-one relationships one single record of one table is associated with one single record of another table to demonstrate this relationship I'll use two new tables one that outlines key information about the staff in each college department the other is the
Department location table that records key data about the location of each department on campus in this instance each department head is in one Department building on the college campus so each staff member from the Department staff table is associated with one record from the Department table these relationships can also be depicted in an e or diagram as one department head leads one Department and finally there's also many-to-many relationships this type of relationship Associates one record of one table with multiple records of another table and the same relationship also works in the other direction in this example the student Maurice Doyle is undertaking two research projects and each project is supervised by a different staff member likewise one staff member can supervise or collaborate with multiple students on their research projects these relationships can also be depicted in an ER diagram as many students are supervised by many staff you should now be able to outline the different relationships that exist between tables in a relational database model good work by now you're probably familiar with querying values or records within database tables but how do you query specific records and values after duplicated across a table when you come across obstacles like these you can use Keys as your solution in this video you'll learn how to explain the purpose of primary key in a database table and select a simple and composite primary key you may have encountered several examples of primary Keys during this course and in these examples you saw that they're using tables as a unique method to identify a record and prevent duplicates let's take an example of a student table with five attributes ID name date of birth email and grade how could we identify a specific student to enter their grade like the student Mary on Row 2 all you need to do is find the unique ID of Mary to identify a record of her data however in this example you can't use the student name column because there are two students in a table called Mary and you can't use the date of birth either because another student at a table called Dan has the same birthday neither of these records are unique to Mary so what's the best approach the solution is to locate a candidate key this is an attribute that's unique to each row of the table and I cannot have a null value in other words it cannot be empty in this example there are two possible candidate Keys the student ID and the student email both rows contain a unique value for each student so either one can be used as a primary key let's assign the student ID as the primary key whichever column we reject is the primary key becomes the alternate or secondary key in this instance the email column is a secondary key but what happens if you can't locate a unique value within a table maybe all rows of duplicated values in this instance you can create a composite primary key this type of key the combination of two or more attributes let's take the example of the delivery department of an online store they have a delivery table that tracks the deliveries placed by their customers however there's no single column with unique values in each row so no column can be considered as the primary key in this case the best approach is to combine the customer ID and product code columns to create a unique value for each specific record of data with these columns you can determine which customer ordered what product so together these columns become the composite primary key and this key can be used 
to track the delivery status for each customer you're now familiar with one single column primary and composite primary key you should now also be able to identify the most appropriate situation in which to use each one great work imagine a scenario where a bookstore has a database that contains two tables customer table to track customer information and an order table to track customers orders but how can they determine which customer made which order the solution is to add a customer ID column into the order table column as a foreign key over the next few minutes you'll learn how to describe the purpose of a foreign key and demonstrate how to use it to connect different tables in a relational database so what exactly is a foreign key a foreign key is one or more columns used to connect two tables in order to create cross-referencing between them by Foreign developers mean external so the foreign key in one table will refer to an external or foreign column in another table let's find out more about how a foreign key works by exploring the tables from the database of an online store the store's customer table contains information about the customer's name and address while their order table contains information about each customer's order date and status the issue is how to connect these tables to make sure that each order is associated with the right customer establishing this connection is important so that you can process and deliver orders to the right customers update order details or cancel orders if required a foreign key is a great method of establishing a relationship between these tables so that these other tasks can be carried out but before you learn about how to use a foreign key let's take a few moments to explore the concept in a bit more detail to find out more about how foreign Keys work let's take the example of the relationship between two generic entity tables these tables are called table 1 or T1 and table two or T2 the purpose of connecting these tables is to relate records of data that exist in both tables with each other the foreign key in T1 should point to a related column in T2 in this case the foreign key column values in T1 must correspond to existing values in the reference column in T2 and the reference column in T2 must contain unique values in each row of data this will most likely be the primary column in T2 in addition the reference table T2 is known as the parent table while the referencing table T1 is the child table don't worry if all this seems a bit complicated let's simplify things by exploring an entity relationship diagram using the customer and Order tables from earlier in this diagram the order table relates to the customer table by including the customer ID attribute and defining it as foreign key inside the order table the relationship between these two tables is one to many you might have encountered this type of relationship in an earlier video one to many means that each customer may have many orders but each order must refer to one single customer only this means there must be a customer record available in the customer table before any order can be made but it is not necessary to have an order once a new customer is created therefore the customer table represents a parent table and the order table represents a child table this means that the parent can exist and the child may not exist but the opposite scenario cannot occur in this example the customer ID value existing in the order table can be used to fetch the records of a specific 
customer to determine who placed the order for example to generate an invoice or to deliver an order to customer address it is also possible for a table to have more than one foreign key each will be used to connect the referencing or child table with other referenced or parent tables in this case you'll have multiple parents to the same child let's add a new table product table into the previous diagram to explain this in more detail the order table now has two foreign keys one foreign key links it with the customer table via the customer ID and the other links it with the product table via the product ID the relationship between these tables is one to one each order must be related to a specific product record and each product might be related to an order record but doesn't have to be for example you can receive a new product in your inventory but no customer is placed an order on it yet if an order has not been placed in this product then it's not related to any order yet so this then raises the question who is the parent and who's the child the customer the order or the product the answer is that there are now two parents the customer and the product tables and there's one child which is the order table you should Now understand the purpose of a foreign key and should also be able to demonstrate how it can be used to connect tables in a relational database well done when building a database there's often a lot of different tables that you'd need to consider including but how do you determine what to include and what to exclude the answer is to identify the entities you're interested in maintaining data on in this video you'll learn how to explain the meaning of entities in a relational database differentiate between attribute types and be able to identify entities and their attributes so let's begin by exploring what an entity is an entity can be described as an object that has properties which Define its characteristics an entity can be anything that represents a single object in a database such as a place or a person in a relational database system each interesting object in a project could be considered an entity for example a customer or individual and an entity in a table is comprised of rows and columns created in database Management systems such as MySQL let's explore this concept in more depth using the example of a table that holds delivery records for the database of an e-commerce store the table name represents the entity name deliveries and each column represents the entity related attributes and the system holds customer or entity relevant attributes such as ID name and delivery status details these attributes hold relevant data about the table entity so each instance of the customer entity in this e-commerce system contains a record of data about each customer but there are also different types of attributes in a relational database system these include simple attributes composite attributes and single valued attributes and there are also multi-valued attributes derived attributes and key attributes let's explore these attributes in more detail using the example of a student table in a relational database system a simple attribute is an attribute that cannot be classified further in the example of the Student Records the grade values cannot be classified further a composite attribute is an attribute that can be split into different components for example the name value of each student could be split into sub attributes such as first and last name a single value attribute 
can only store one value in the student table example the date of birth column can only contain one value per student so these values could be defined as a single valued attribute with a multi-valued attribute the attribute can store multiple values in a field for example the student email column could hold more than one email per student a college email address and a personal email address however this practice should be avoided in a relational database a derived attribute is where the value of one attribute is derived from another in the student table the age of each student can be derived from their respective dates of birth and finally there's the key attribute this is the field that holds a unique value used to identify a unique entity record a good example is the values contained in the student ID column each ID is a unique value which can be used to obtain data about a specific student remember that there's no point in considering entities or attributes that will not be used in your project you only need to capture data in your database system that helps the users of your system complete certain tasks and activities you should now understand the concept of entities in a relational database and be able to differentiate between attribute types good work at this stage you might be familiar with the process for creating tables within a database but there are several issues that you're likely to encounter when working with tables such as unnecessary data duplication issues with updating data and the effort required to query data fortunately these issues can be resolved with the use of database normalization by the end of this video you'll be able to explain what database normalization is and you'll also be able to demonstrate an understanding of insert update and deletion anomalies and be able to list some of the issues associated with them normalization is an important process used in database systems it structures tables in a way that minimizes challenges by reducing data duplication avoiding data modification anomalies and helping to simplify data queries from the database to gain a better understanding of normalization and the challenges it addresses let's explore an example of a table that hasn't been normalized in this example I'll use a college enrollment table the table serves multiple purposes by providing a list of the college's students courses and departments an outline of relationships or associations between students courses and departments and the name and contact details for the head of each department creating tables that serve multiple purposes causes serious challenges and problems for database systems the most common of these challenges include the insert anomaly the update anomaly and the deletion anomaly let's begin with an overview of the insert anomaly an insert anomaly occurs when new data is inserted into a table which then requires the insertion of additional data I'll use the college enrollment table to demonstrate an example in the college enrollment table the student ID column serves as the primary key each field in a primary key column must contain data before new records can be added to any other column in the table for example I can enter a new course name in the table but I can't add any new records until I enroll new students and I can't enroll new students without assigning each student an ID the ID column can't contain empty fields so I can't insert a new course unless I insert new student data I've encountered the insert anomaly problem an update anomaly occurs when you
attempt to update a record in a table column only to discover that this results in further updates across the table let's return to the college enrollment table once again to understand how an updated anomaly occurs in the enrollment table the course and Department information is repeated or duplicated for each student on that course this duplication increases database storage and makes it more difficult to maintain data changes I'll demonstrate this with a scenario in which Dr Jones the director of the Computing Department leaves his post and is replaced with another director I now need to update all instances of Dr Jones in the table with the new director's name and I also need to update the records of every student enrolled in the department this poses a major challenge because if I miss any students then the table will contain inaccurate or inconsistent information this is a prime example of the update anomaly problem updating data in one column requires updates in multiple others next let's review the final challenge deletion anomaly a deletion anomaly is when the deletion of a record of data causes the deletion of more than one set of data required in the database for example Rose the student assigned the ID of four has decided to leave her course so I need to delete her data but deleting roses data results in a loss of the records for the design Department as they're dependent on rows on her ID this is an example of the deletion anomaly problem removing one instance of a record of data causes the deletion of other records so how can you solve these problems as you learned earlier the answer lies in database normalization normalization optimizes the database design by creating a single purpose for each table to normalize the college enrollment table I need to redesign it as you discovered earlier the table's current design serves three different purposes so the solution is to split the table in three essentially creating a single table for each purpose this means that I now have a student table with information on each student a course table that contains the records for each course and a department table with information for each department this separation of information helps to fix the anomaly challenges it also makes it easier to write SQL queries in order to search for sort and analyze data you should now be able to explain what database normalization is and you should also be able to demonstrate an understanding of anomalies and challenges well done as a database engineer you'll very often come across columns in a table that are filled with duplicates of data and multiple values this can make it quite challenging to view search and sort your data but with the correct implementation of normalization this challenge can be dealt with by the end of this video you'll be able to demonstrate how to design a database in first normal form identify the atomicity rule and how to enforce it and analyze effective ways to eliminate the repeating group of data problems in data sets as you might already know from previous videos in this lesson the normalization process makes it easier and more efficient for engineers to perform basic database tasks it's an especially useful process for helping to fix the well-known insert delete and update anomalies however in order to achieve database normalization you first need to perform the three fundamental normalization forms the database normalization forms include first normal form or 1nf second normal form or 2nf and third normal form or 3 NF this 
video focuses on designing a database to meet the first normal form or 1NF rules these rules enforce data atomicity and eliminate unnecessary repeating groups of data in database tables data atomicity means that there must only be one single value of the column attribute in any field of the table in other words your table should only have one value per field by eliminating repeating groups of data you can avoid repeating data unnecessarily in the database instances of repeated data can cause data redundancy and inconsistency to understand this better let's explore an example to demonstrate data atomicity I've built an unnormalized table called course table within a college database it includes information about the college's computing courses along with the names and contact details of the course tutors the course ID column serves as the table's primary key however there are multiple values in each row of the contact number column each row contains two contact details for each tutor a cell phone number and a landline number this table isn't in 1NF it violates the atomicity rule by including multiple values in a single field I can try and fix this by creating a new row for each number this solves my data atomicity problem the table now has just one value in each field but this solution has also created another problem the primary key is no longer unique because multiple rows now have the same course ID another way that I could solve the problem of atomicity while retaining the primary key is by creating two columns for contact numbers one column for cell phones and a second column for landline numbers but I still have the issue of unnecessary repeated groups of data Mary Evans is the assigned tutor for two of the courses so her name appears twice in the table as do her contact details these instances of data will continue to reappear if she's assigned more courses to teach and it's likely that her details will appear in other tables within the database system this means I could have even more groups of repeated data this creates another problem if this tutor changes any of their details then I'll have to update their details in this table and all others in which they appear and if I miss any of these tables then I'll have inconsistent and invalid data within my database system to solve this issue I can redesign my table to adhere to 1NF or first normal form first I identify the repeating groups of data in this case it's the tutor's name and contact numbers next I identify the entities I'm dealing with which are course and tutor then I split the course table so that I now have one table for each entity a course table that contains information about the courses and a tutor table that maintains the name and contact numbers of each tutor now I need to assign a primary key to the tutor table so I select the tutor ID column I've solved the problem of data atomicity but I also need to provide a link between the two tables I can connect the two tables by using a foreign key I just add the tutor ID column to the course table now both tables are linked I've now achieved data atomicity and eliminated unnecessary repeating groups of data you should now be familiar with 1NF and the rules that you should apply to achieve it good work
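As a rough sketch of the redesign just described, the split into course and tutor tables could look like this in SQL; the table and column names here are illustrative rather than the exact ones used in the course files:

    -- tutor table: one row per tutor, with contact numbers in atomic columns
    CREATE TABLE tutor (
        tutor_id INT PRIMARY KEY,
        tutor_name VARCHAR(100),
        cell_phone VARCHAR(20),
        landline VARCHAR(20)
    );

    -- course table: repeated tutor details replaced by a foreign key reference
    CREATE TABLE course (
        course_id INT PRIMARY KEY,
        course_name VARCHAR(100),
        tutor_id INT,
        FOREIGN KEY (tutor_id) REFERENCES tutor (tutor_id)
    );

Each field now holds a single value, and a tutor's details are stored once no matter how many courses they teach.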
as a database engineer you'll very often come across columns in a table that are filled with duplicates of data and multiple values this can make it quite challenging to view search and sort your data but with the correct implementation of normalization this challenge can be dealt with by the end of this video you'll be able to explain how to design a database in second normal form outline the functional dependency concept and define the partial dependency concept before you begin make sure that you've watched the video on first normal form or 1NF database normalization is a progressive process so you must be familiar with 1NF before you can implement 2NF so why do database developers require database normalization if you're going to store content you should aim to have the best possible database best means that it has a proper structure that reduces duplication and ultimately allows for accurate data analysis and data retrieval to get the best results engineers build tables in a way that optimizes the database structure this video focuses on how to design tables in a relational database to meet the second normal form criteria but before you can learn how to do this you need to understand what is meant by the terms functional and partial dependency functional dependency refers to the relationship between two attributes in a table where the unique value of a column in a relation determines the value of another column to demonstrate this concept let's take the example of a table known as R this table contains two columns called X and Y respectively X is a column with a set of unique values which are not replicated elsewhere in the table a primary key for example Y is a column without a set of unique values like a non-primary key R is the table or relation in which the columns X and Y exist Y as a non-primary key with duplicated values is dependent on X this is because X is the table's primary key as it only contains unique values don't worry if you don't quite understand this concept yet I'm going to demonstrate functional dependency in more detail let's take the example of a table called student that holds key information on students in the college the table contains three columns a student ID column a name column and a date of birth column I need to use this table to find the date of birth for a specific student I can't use the name column because it has duplicated values there are two students named Tony if I query this column I'll just receive both instances of Tony and I can't use the date of birth column either because there are two students who share the same date of birth but I can complete this task by using the student ID column all values in this column are unique so it's designated as the table's primary key and the values of this primary key column determine the information of the other columns this means that each column in the table is functionally dependent on the student ID the student ID column is the only column that can be used to return specific data
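The difference is easy to see with two quick queries against a hypothetical student table like the one just described; the column names and the ID value used here are only illustrative:

    -- querying by a non-key column can return more than one row
    SELECT date_of_birth FROM student WHERE name = 'Tony';

    -- querying by the primary key identifies exactly one student,
    -- because every other column is functionally dependent on student_id
    SELECT date_of_birth FROM student WHERE student_id = 3;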
now that you've explored the concept of functional dependency let's look at partial dependency partial dependency refers to tables with a composite primary key this is a key that consists of a combination of two or more columns to demonstrate let's take the example of a table that shows the vaccination status of patients in a hospital database the table shows the vaccination status of two patients David and Kate it also displays the patient ID vaccine ID and vaccine name there's no one single column with unique values in each row so there's no single column that can be used as a primary key so it's best to combine both the patient ID and vaccine ID columns as a composite primary key to create a unique value in each record the vaccination table must meet the second normal form or 2NF so all non-key attributes the vaccine name patient name and status must depend on the entire primary key value which is the patient ID and vaccine ID together they can't depend on just part of the value otherwise this creates partial dependency let's apply this rule to find out if it's true for every non-key column so how do I check that the patient with the ID of 50 has taken vaccine 1 I check the value of both the patient ID and the vaccine ID keys the combined value is the only way to return the vaccination status value of a specific patient this means that there's a functional dependency between the status value and the primary key value but if I just want to find out the vaccine name then I don't need both combined values the only information I need to return the vaccine name is the vaccine ID as you learned earlier this is called partial dependency this should be avoided in most instances as it violates the 2NF rule similarly if I want to identify the patient's name I don't need both combined values I can just use the patient ID to return the patient's name next let's look at how to upgrade this table to 2NF first I need to make all non-key columns dependent on all components of the primary key so I identify the entities included in the vaccination table in this instance there are three entities vaccination status as represented by the status column vaccine which is the vaccine ID and vaccine name columns and patient represented by the patient name and patient ID columns I then break up the table into three separate tables as follows a patient table a vaccine table and a vaccination status table now in each of these new tables all non-primary key attributes depend only on the primary key value I've eliminated all unnecessary replication of the vaccine and patient names within the vaccination table the three tables are now in second normal form or 2NF you should now be familiar with the 2NF rule and how to upgrade a table to 2NF good work
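A minimal sketch of the resulting 2NF design might look like this, with illustrative names based on the vaccination example above:

    CREATE TABLE patient (
        patient_id INT PRIMARY KEY,
        patient_name VARCHAR(100)
    );

    CREATE TABLE vaccine (
        vaccine_id INT PRIMARY KEY,
        vaccine_name VARCHAR(100)
    );

    -- the composite key is the whole primary key,
    -- and status depends on both parts of it
    CREATE TABLE vaccination_status (
        patient_id INT,
        vaccine_id INT,
        status VARCHAR(50),
        PRIMARY KEY (patient_id, vaccine_id),
        FOREIGN KEY (patient_id) REFERENCES patient (patient_id),
        FOREIGN KEY (vaccine_id) REFERENCES vaccine (vaccine_id)
    );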
when working with tables in the database you may often encounter tables that contain repetitive data perhaps two columns contain values that are very similar so you might split the table in two to simplify the data when building relational databases you can solve similar issues of repetitive data using what's known as third normal form or 3NF by the end of this video you'll be able to understand how to design a database in third normal form and explain the concept of transitive dependency before you begin make sure that you've watched the videos on the first and second normal form a database must be in first and second normal form before it can be in third normal form in addition to these rules databases cannot contain any instances of transitive dependency in the context of third normal form transitive dependency means that a non-key attribute cannot be functionally dependent on another non-key attribute in other words non-key attributes cannot depend upon one another a key attribute in a database is an attribute that helps to uniquely identify a row of data in a table to demonstrate this concept let's take the example of a basic table with three columns A B and C the concept of transitive dependency means that the value of A determines the value of B likewise the value of B determines the value of C the relation between these table columns is represented by A B and C this means that A determines C through B this is the type of relation that database engineers call transitive dependency let's see how this works using a more complex example I have a table of best-selling books within Europe from the database of an online bookstore the table organizes the books according to five attributes book ID and title author name language and country in this table ID is the only key or primary key that exists in the table all other attributes are non-key attributes but to determine what these non-key attributes are I must use the ID of the top selling books this means if I want to find any specific information about any attribute I need to use the ID attribute value to find the targeted attribute value for example if I use the ID of 3 then I can locate the author cormicho Dwyer the language Irish the country Ireland and so on however it's also possible to determine the country based on the language or to determine the language based on the country and both country and language are non-key attributes for example in the context of Europe it's always possible to determine that the country is France if the language is French and vice versa this means that we have transitive dependency in this relation a non-key attribute depends on another non-key attribute this dependency relation can be presented as follows language determines country and country determines language the rest of the attributes are fine as they only depend on the ID primary key so you can't say for instance that author name determines book title or that author name determines language for example the author Michelle Laurie has written two books in two different languages French and Spanish as I've just pointed out the only transitive dependency that exists in this example is between language and country so how do I solve this transitive dependency within my table and remove any repetition of data I can split the table into two tables while joining them to conform with 3NF rules so I keep the top books table while splitting off the country and language columns into a new table called country but I also leave the country column inside the top books table as a foreign key that connects the two tables the country table now holds just four records with no repetition of data and there's no need for a language column within the top books table stating the country is enough to determine the language and most importantly all non-key attributes are determined only by the primary key in each table this means that my table now meets the requirements of 3NF you should now know how to design a database in third normal form and you should also be able to explain the concept of transitive dependency well done
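Here is a rough sketch of that 3NF split in SQL, again with illustrative names rather than the exact ones used in the course materials:

    -- country table: language is determined by country, so it lives here
    CREATE TABLE country (
        country VARCHAR(100) PRIMARY KEY,
        language VARCHAR(100)
    );

    -- top books table: the language column is removed
    -- and country is kept as a foreign key into the country table
    CREATE TABLE top_books (
        book_id INT PRIMARY KEY,
        title VARCHAR(255),
        author_name VARCHAR(100),
        country VARCHAR(100),
        FOREIGN KEY (country) REFERENCES country (country)
    );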
well done you've reached the end of this module on database design it's now time to recap the key points and skills you began this module with a lesson on designing a database schema and learned that a correctly designed database is the basis for all subsequent data storage and analysis it's crucial to know the principles of good database design because a poorly designed database makes it hard if not impossible to produce accurate information having completed this module you should now be able to define the term database schema describe the schema of different database systems create a basic database schema using SQL and list the two main types of database schema you then moved on to explore relational database design in this lesson you learned how to design a relational database having completed this lesson you should now be able to describe the relational model list different types of relations evaluate an entity relationship diagram or ERD and explain the purpose of a primary key in a database table and you should now also be able to demonstrate how to select a single primary key or a composite primary key describe the purpose of a foreign key to connect tables using a foreign key and summarize the meaning of entities in a relational database furthermore you should now be able to list different types of attributes identify entities and their attributes and create links between entities the last lesson covered database normalization normalization is a process of converting a large table into multiple tables to reduce data redundancy you should now be able to explain database normalization recognize insert update and deletion anomalies explain the atomicity concept and describe the repeating groups of data problem and you should now know how to design a database in first second and third normal form and explain the concepts of functional partial and transitive dependency you should now be familiar with the essential skills and concepts of database design and you should also be able to create the structure of a normalized relational database well done that's great progress towards your learning goals in this course you covered an introduction to database engineering let's take a few moments to briefly recap what you learned in the opening module you got an introduction to the course and explored possible career roles that you might want to follow as a database engineer you also reviewed some tips around how to take this course successfully and discussed what it is that you hope to learn you then covered an introduction to SQL or structured query language the coding syntax used to interact with databases and finally you explored the basic structure of databases and learned about the different types of keys they use you began module 2 with an exploration of SQL data types and learned how to differentiate between numeric data string data and default values you also completed several exercises in which you learned how to utilize these different data types in your database projects you then moved on to explore CRUD or create read update and delete operations you learned how to create databases and tables and populate them with data you explored how to update and delete data and you demonstrated your ability with CRUD operations by completing exercises in creating and managing data in the third module you reviewed SQL operators and learned how to sort and filter data the module began with a lesson on SQL operators in which you explored the syntax and process steps to deploy SQL arithmetic and comparison operators within a database next you covered how to sort and filter data using clauses the clauses that you learned about include the order by clause the where clause and the select distinct clause you also covered an overview of how each clause is used to sort and filter data in a database and you went through demonstrations of these clauses and had an opportunity to try them for yourself in module 4 you learned about database design in the first lesson you got an overview of how to design a database schema you explored basic database design concepts like schema and learned about different types of schemas the next lesson focused on relational database design in this lesson you investigated how to establish relationships between tables in a database using keys you also learned about the different types of keys that are used in relational database design such as primary secondary candidate and foreign keys finally you covered a lesson on database normalization in this lesson you investigated the
key Concepts around database normalization you then learned about the concept of normal form and about the first second and third normal forms well done and completing this recap now it's time to try out what you've learned in the graded assessment good luck congratulations on reaching the end of this introduction to databases course in the program metadatabase engineer you've worked hard to get here and developed a lot of new skills during the course you're off to a great start with your database learning and you should now have a thorough understanding of databases and data you were able to demonstrate some of this learning along with your practical database skill set by creating and querying a database in the project lab following your completion of this first course in meta database engineer you should now be able to demonstrate knowledge of different database schema explain relational database design and table normalization perform database operations such as create read update and delete and demonstrate SQL commands by sorting and filtering data the key skills measured in the graded assessment revealed your ability to demonstrate knowledge of different database schema explain relational database design and table normalization perform database operations such as create read update delete and demonstrate SQL commands by sorting and filtering data so what are the next steps well this is the first course in the metadatabase engineer specialization and it has given you an initial introduction to several key areas you probably realize that there's still more fee to learn so if you found this course helpful and want to discover more then why not register for the second course you'll continue to develop your skill set during each of the metadata base engineer courses in the final lab you'll apply everything you've learned to create your own fully functional database system whether you're just starting out as a technical professional a student or a business user the course M projects prove your knowledge of the value and capabilities of database systems the lab consolidates your abilities with the practical application of your skills but the lab also is another important benefit it means that you'll have a fully operational database that you can reference within your portfolio this serves to demonstrate your skills to potential employers and not only does it show employers that you are self-driven and innovative but it also speaks volumes about you as an individual as well as your newly obtained knowledge and once you've completed all the courses in this specialization you'll receive certification in Mata database engineering the certification can also be used as a progression to other meta role-based certifications depending on your goals you may choose to go deep with Advanced role-based certifications or take other fundamental courses once you earn this certification meta certifications provide globally recognized and industry-endorsed evidence of your technical skills thank you it's been a pleasure to embark on this journey of Discovery with you best of luck in the future have you ever accidentally deleted something on your device and wished you could undo your mistake wouldn't it be great if you had a time machine to go back and undo your mistake while humans can't yet travel back in time programmers working on projects can do something quite like it they do this using a system called Version Control as a programmer you'll be working with many files in your project and it's important to 
keep track of the changes being made Version Control is a system that records all changes and modifications to files for tracking purposes and is essential to your day-to-day development activities in this course you will become familiar with Version Control and how it relates to development by the end of your studies you'll understand what version control is how it works and how it is used there are both centralized and distributed Control Systems available and you'll examine the different types of workflows available conflict resolution is an important aspect of Version Control as it helps users manage file and version conflict issues you'll get to explore popular methods of version tracking using Version Control Technologies like git and GitHub and you will learn how to create and clone a repository in GitHub in addition you will become familiar with Git Concepts such as ADD commit push and pull branching and forking diff and blame as well as focusing on Version Control the course also explores the use of command line syntax with an emphasis on unix commands there are many videos in your course that will gradually guide you toward that goal watch pause rewind and re-watch the videos until you are confident in your skills then you'll be able to consolidate your knowledge by Consulting the course readings and put your skills into practice during the course exercises along the way you'll encounter several knowledge quizzes where you can self-check your progress you're not alone and considering a career as a web developer and the course discussion prompts enable you to connect with your classmates it's a great way to share knowledge discuss difficulties and make new friends to help you be successful in the course it's a great idea to commit to a regular and discipline approach to your learning regime you need to be as serious as you can about your study and if possible map out a study schedule with dates and times that you can devote to attending the course of course this is an online self-paced program and you can study at dates times and places that suit your lifestyle however it might help to think of your study in terms of regular attendance just as you might have to do at a physical learning Institute you may have encountered new technical words and terminology in this video don't worry if you don't fully understand all these terms right now everything you need will be covered during your learning in summary this course provides you with a complete introduction to version control and it is part of a program of courses that leads you toward a career in software development communication is the most important skill for collaborating with other developers to ensure that you're on the same page when you're building products and also so that people are keeping track of each other's timelines and the understanding of what the product requirements are are consistent [Music] thank you [Music] hi I'm Leila rizvi I am a back-end software engineer at Instagram working on Instagram calling in San Francisco effective collaboration is important so that we can move cohesively together on large projects with people that have a wide range of skills these Engineers actually have to collaborate with one another a lot when we build features together we have to work in parallel with one another to design the best features for our users we also have to collaborate with a lot of non-engineers a lot for example when we built Instagram live we had to work with our product managers to figure out what we should 
build we had to work with our user researchers to figure out what areas we should focus on to build the best products for our users we had to work with our designers to figure out the right look and feel for our product and we had to work as Engineers to figure out what we can actually build in the timeline that we had communication is one of the most important skills for working with other developers learning how to give developers the right amount of context for what they're working on learning how to ruthlessly prioritize your work is also very important as a software engineer there's always going to be an endless amount of things that you can do to improve your product learning which things are most important to unblock other developers or unblock yourself is the most important skill the last thing that's really important to learn as a software developer is to accurately estimate your products when you first start out it'll be a little bit tricky but over time learning how to say how long a Project's going to take and being able to explain the trade-offs is going to be very critical the skills you need as a software developer changes from company to company a little bit when you're at a big company like meta Engineers are much more specialized whereas if you're in a startup you are many different hats as an engineer at meta we have to learn to be able to give just the right amount of context to people for what we need to get done because we're working with people with little context on the work we're doing there's so many Engineers there but if you're in a startup people generally have a little more context in what you're working on but they might not be as specialized in that area so you have to do a little bit more learning potentially and you have to maybe give less context to them some challenges I've encountered while learning to collaborate is that I have to learn how to adjust how I work with different people depending on their work preferences some people I work with are visual Learners so I learn to whiteboard when I'm talking to them some people I work with like to think before they speak so I learned to be a little more patient when I'm talking to them to listen to them a skill that I'm working on learning to develop my own career is I'm learning how to work on mobile I'm a back-end engineer and I work with mobile Engineers every single day to better like build products for them it's nicer for me to understand how they operate how their code base works so that we can build better things together to learn mobile I took an Android bootcamp class at meta.meta provided for Bakken Engineers interested in learning mobile and after I finish the course the Android Engineers on my team have been giving me small tasks and small projects for me to drive so that I can learn a little bit more about how they work practicing effective Version Control leads to better collaboration because it helps you understand why certain changes were made if you look at a commit it also helps you contact switch between different features or projects that you're working on effective collaboration led to a better outcome on a recent project of mine I'm a back-end engineer and I was driving the back end parts of a key project for my team I had to unexpectedly take time off but I had a lot of clear commits for my code changes and get and I had a lot of documentation for my team so they were able to easily pick up where I left off and the project ended up staying on track you should keep learning to 
collaborate with other people because the more people you work with the better products you build because you'll get different perspectives on what you're building it's rewarding because your product that you're working on ends up being better the more people you work with the more people you can learn from and ultimately it's rewarding because the product that you build is going to be better because it's going to have a lot more different input and a lot more different perspectives for how it should be built have you ever worked on a document made changes to it and wished you could have gone back to your first version a few days later can you remember that feeling of wishing you could travel back in time for developers this time machine exists and it's called Version Control in this video you learn about version controls primary features and benefits Version Control is a system that records all changes and modifications to files for tracking purposes developers also use the terms Source control or source code management the primary goal of any version control system is to keep track of changes it achieves this by allowing developers access to the entire change history with the ability to revert or roll back to a previous state or point in time there are different types of changes such as adding new files modifying or updating files and deleting files the version control system is the source of Truth across all code assets and the team itself let me give you an example that we're all familiar with in word processing applications Version Control functionality is available to provide users with the safety net of Auto saving the document the application creates a restoration point on each autosave to which the user can revert if required Version Control Systems for coding projects tend to be a bit more complex but their underlying functionality follows the same process working as a developer you need to become familiar with many different tools and Version Control is one of them for developers especially those working in a team there are many benefits associated with Version Control these include revision history identity collaboration Automation and efficiency let's explore these in some more detail revision history provides a record of all changes in a project it provides developers with the ability to revert to a stable point in time in cases where code edits cause issues or bugs the ability to roll back to a particular version or time allows teams to work faster and deliver code with more confidence keeping a record of changes is great but it doesn't have as much value if you don't know who is responsible for adding or changing a record all changes made are always recorded with the identity of the user that made them combining this feature with the revision history allows teams to see not only when the changes occurred but also who made the changes teams can also analyze the editing creation and the deletion of files on the control system as a software developer you will often work with a team to achieve a common goal this can be adding new features to an existing project or creating a brand new service in all cases a Version Control System allows the team to submit their code and keep track of any changes that need to be made another important aspect of a version control system is something called a peer review developers working on a task create a peer review Once the code is ready for inspection the peer review aims to get other developers on your team to review the code and provide 
feedback where necessary the ability to create and deliver code on a wide scale is complex and time consuming Version Control helps keep track of all changes it plays an integral role in the explosion of development operations or devops as it's commonly called devops is a set of practices philosophies and tools that increase an organization's ability to deliver applications or services to a high quality and velocity Version Control is a key tool in this process and it is used not only to track all changes but also to Aid in software quality release and deployments you as a developer will usually work on a project alongside many developers and team members with other skill sets you and your team need to be efficient to make your project a success you and your team may work using processes from the agile methodology in an agile process a team normally plan and execute two weeks of work to complete which is called an iteration each iteration has a list of tasks to complete before the two weeks ends these tasks while complex in some cases are aided by having Version Control in place if you would like to learn more about the agile methodology there is a link to an additional reading at the end of this lesson testing and having some level of automation on every task introduced allows the team to be more efficient it also ensures more confidence that any new feature being introduced will not break any existing flows you now know what version control is all about great work now that you have a better understanding of the goals and benefits of Version Control you are ready to learn how to start using it an interesting fact about collaborating on projects at meta is that Engineers Drive every project at meta they're in charge of coordinating with product data scientists researchers on what we're building and the timelines for that whereas another company's product managers or leadership is often in charge of each project [Music] I'm Leila rizby I'm a back end software engineer on Instagram calling in San Francisco in this video I hope you learn that effective collaboration is important for engineers success at meta and learning how to use Version Control at meta is also very important as a back-end engineer on Instagram calling for my role I collaborate with other mobile engineers in my department on a daily basis so that we build the best products together I also collaborate with Instagram messaging a lot because calling and messaging are closely tied I also work with non-engineers like product and data scientists regularly as well to collaborate effectively with my co-workers we message each other on chat to unblock one another but when something warrants a meeting we'll schedule a meeting when something warns a document we collaborate together on a document with comments and leave notes for one another Version Control for Instagram is interesting because we have one giant monolithic repository for all of our code for backend that means that whenever I'm making a change the code that I'm writing in is shared with every other Instagram team it is risky in some ways but it's also nice in others so I can reuse their code the other thing that's interesting about Version Control meta is that any engineer can improve any change at meta they're big on this saying that nothing at meta is someone else's problem so that means that any engineer can actually work on anything that they want to work on Version Control at meta is a little different than Version Control at a lot of other places while meta has a 
giant monolithic repository for our code where we continuously release our code many other companies have smaller repositories called microservices for each team so each team has their own code base and only they work in it they use branches so that they can take their code and merge it back with the master branch for their team or product this is great for smaller teams in some ways but it has its cons in that you might run into a lot of merge conflicts at meta because we have so many Engineers there would be too many merge conflicts if we had branches for each team in the company some collaboration challenges for version control on my team are that because we use a monolithic repository merge conflicts happen a lot so we try to write smaller changes so that we can easily revert them we also try to add a lot of gatekeepers so that if we ship something to production on Instagram we can easily turn it off without waiting for any rollback we also share our code with messenger calling so we also have to add a lot of tests so that messenger doesn't break us git blame is a way for us to look at the revision history for files it helps us so that if I'm looking at a line of code and I don't understand it I can figure out who wrote that code and reach out to them I can also figure out what they were trying to do in that code they write a message saying what the change was for it's also great so that I can figure out what changes to revert if I'm trying to revert a change I use it every single day when I need to understand what some code is doing I'll reach out to the point of contact that wrote that code which is especially helpful at meta where there's so many Engineers I often don't know the person I need to get a hold of so seeing their code helps in this video I hope you learned that learning how to work together effectively and collaborating well as well as learning how to use Version Control effectively is critical for success at meta getting a diverse set of perspectives on what features you should build who you should build for what a feature might need or how it can improve is really helpful and the rewarding part is that your end result your project ends up being better as a developer working in a team you are continually writing changing or updating existing source code it may happen that while you are working on a new feature another developer in the team is busy fixing an unrelated bug with multiple developers all working in the same code base keeping track of all of those additional updates can be problematic luckily Version Control addresses these kinds of problems in this video you will discover the different types of Version Control Systems learn how they operate and learn about their similarities and differences there are many different Version Control Systems available for example Subversion Perforce AWS CodeCommit Mercurial and git to name a few Version Control Systems can be split into two types or categories centralized Version Control Systems and distributed Version Control Systems both types are quite similar but they also have some key differences which set them apart from each other let's start with centralized Version Control Systems centralized Version Control Systems or cvcs for short contain a server and a client the server contains the main repository that houses the full history of versions of the code base developers working on projects using a centralized system need to pull down the code from the server to their local machine this gives the user their own
working copy of the code base the server holds the full history of changes the client has the latest code but every operation needs to have a connection to the server itself in a centralized Version Control System the server is the central copy of the project after making changes to the code the developer needs to push the changes to the central server so that other developers can see them this essentially means that viewing the history of changes requires that you are connected to the server in order to retrieve and view them now let's discover how distributed Version Control Systems work distributed Version Control Systems or dvcs for short are similar to The centralized model you still need to pull code down from the server to view the latest changes the key difference is that every user is essentially a server and not a client this means that every time you pull down code from the distributed model you have the entire history of changes on your local system now that you know a little about cvcs and dvcs let's explore some of the advantages and disadvantages of each the advantage of cvcs is that they're considered easier to learn than their distributed counterparts they also give more access controls to users the disadvantage is that they can be slower given that you need to establish a connection to the server to perform any actions with dvcs you don't need to be connected to the server to add your changes or view a files history it works as if you were actually connected to the server directly but on your own local machine you only ever need to connect to the server to pull down the latest changes or to post your own changes it essentially allows users to work in an offline state Speed and Performance are also better than its cvcs counterpart dvcs is a key factor in improving developer operations and the software development life cycle you will learn more about dvcs later in this course and there you have it you can now differentiate between a centralized and a distributed Version Control System you also learned how they operate and what their benefits are as an aspiring developer I'm sure you can appreciate the importance of Version Control Systems well done I'd like you to think back to a time when you thought you'd lost work because it had been overwritten or deleted as you have previously learned Version Control and Version Control Systems help developers keep track of their code and up to date with any changes in this video you'll get a feel for how developers use Version Control to keep track of changes and resolve coding conflicts when working with a team of developers it's essential for the code base to have a source of truth that has all historical changes Version Control Systems play an integral part in aiding this process by providing a full history of changes of every single file added to its Repository it makes collaboration across a team easier and also AIDS in working toward a common goal whether it is adding new features and following the flow of how they were implemented or discovering where a potential issue may have been introduced being able to accurately pinpoint the who the when and the what of those changes is Paramount the revision history will record the essential data points so any developer or team member can walk through the entire project from start to its current state every change that has occurred on the project should be easily accessible either by a simple command or integration into the developer's IDE it's important for organizations to Define 
standards for how developers communicate changes they make developers need to know prior to looking at the code what the lead developers aims are the more information the better and this creates a stronger team environment that is more transparent and open now I will guide you through an example of a typical development team working on an e-commerce application suppose you're working in a team with three other developers to release a new feature you've been tasked with creating a new feature to enable experiments on the website this will allow the marketing department to test user Behavior a daily report is generated that ranks the effectiveness of each experiment the reports will give insights into how each experiment is doing they will then provide the results of which experiment is the most successful and overall winner after all the code changes the developer will push their changes to the repository and create something called a pull request developers will then peer review the pull request to approve request changes or decline when working on a single project there's usually some level of crossover between the developers when this does occur the history of revisions can help aid the developers in seeing the full life cycle of changes that have occurred it is also essential for merging conflicts where multiple developers have made changes that may need to be resolved prior to the code being approved the history will show who made the change for what reason the code itself and its changes and the date and time of when they occurred typically on a new project you will encounter changes in one task that may cause potential issues or conflicts with another the history of revisions will give the team the ability to manage and merge these changes and deliver the business objectives in a timely manner well done you've now learned about the history of revisions remember it's vital to have a system in place to keep a record of all changes to the code base this is critical when working with a team of other developers you should now be able to describe how developers use Version Control to fix any conflicts that may occur during production well done you've reached the end of this introductory module on software collaboration it's now time to review what you've learned during these lessons this module started with a case study about how software Engineers collaborate across the globe without wrecking one another's code you then begin to explore the answer to the question what is Version Control you learn to describe how modern software teams collaborate and work on the same code base list different version Control Systems explain different version control methodologies and explored software development workflows you learned about the history of Version Control and that it has been in use before the internet was widely adopted you explored conflict resolution and discovered the important role of Version Control in the software development process you learned about some of the common tools and strategies developers use to implement Version Control such as workflow continuous integration continuous delivery and continuous deployment the differences between staging and production were explored and you learned that the staging environment should mimic your production environment you also learned the many areas that benefit from creating a staging environment including new features testing migrations and configuration changes you learned that any issues should be caught and fixed in the staging 
environment before going live in production you have also explored downtime vulnerabilities and reputation regarding production you should now be familiar with Version Control well done you're making good progress on your Learning Journey one of the first things you learn to do when you use a computer for the first time is to operate the mouse and type on the keyboard at first it goes slow but as you become more competent you interact with your computer and it responds as you want it to but what does it really mean to interact with your computer in the context of using a computer the term interact simply means to exchange information or even simpler send and receive information so essentially a computer sending data to you and you receive it in turn you also send some data to your computer and the computer receives it I've talked about The Mouse and the keyboard but can you think of other ways in which you and your computer interact computers have various input and output devices the input devices include a keyboard mouse microphone camera touch sensitive devices and so on the output devices are things like speakers monitors headsets and haptic devices to name just a few you use all these devices to send data to a computer and receive data from it but there's something else that supports communication with your devices these are graphical user interfaces or guise which facilitate your interactions guise are popular because they require very little training to use goobies offer an easy way to interact with devices but they also somewhat limit the scope of human computer interaction as an alternative to guise and input devices such as microphones you will learn to interact with your computer through the command line the command line is a very powerful alternative because it allows developers to perform tasks quicker and with enough experience less potential for errors to use this powerful tool effectively you need to have a certain level of knowledge you might feel that the learning curve for the command line is a bit steep but take it from me the payoff is definitely worth it by learning just a few commands you can perform various tasks such as creating new directories creating new files combining directories copying and moving files around different directories and searching through files using various criteria and keywords as you become more advanced in using the command line you will be able to perform tasks such as track software access and control remote servers search for files using specific criteria unzip archives access software manuals and display them in the command line install upgrade and uninstall software and mount and unmount computer drives or check disk space and so on pretty advanced stuff don't you think but the list goes on you can write scripts to automate various tasks control user access to files and programs stop start and restart programs create aliases of only a few characters long to initiate very long commands download files from the Internet run various software and finally run and control self-contained virtual software which is also known as containerization there are many many ways to use the command line but for now I will guide you through some basic commands to get you started first the CD command which stands for change directory this is used to point our command line to a specific directory for instance a certain folder for example on Linux if I type CD tilde forward slash and desktop I will point the command line to the desktop of my computer when you 
type CD dot dot you will move out of the current directory and back into the parent directory next is the touch command which makes a new file of whatever type you specify for example to build a brand new file you can run touch followed by the new file's name for instance example dot txt note that the newly created file will be empty you can also make new folders using the mkdir command for example mkdir followed by the title you want to give the new folder to view a history of the most recently typed commands you can use the history command there are many other commands that you can use but with the ones I just introduced you can already do quite a lot I'll take you through a quick scenario as an example let's say you want to point the command line to the desktop directory and then add a new folder there titled myjs project next you want to point the command line to the myjs project directory and make a new file which you will call example dot JS and finally you want to open the example.js file in vs code to do all of this you will need to run the following commands the first action you'll do is to use the change directory or CD command then you want to use the mkdir command to make the new folder to move into the new folder directory you use the CD command again and then you use the touch command to create the file the final command is the code command which will open the file in vs code if you've run all these commands correctly you'll end up with a myjs project directory on the desktop with the example.js file inside of it and additionally that example.js file open inside vs code ready to be edited these commands are gathered together in the short sketch a little further below in this video you discovered that you can interact with computers on a more advanced level through the command line you now have a better idea of what kind of advanced tasks the command line allows you to do and you are also ready to try out a few basic commands I encourage you to start practicing some of these commands just like you got better and better at typing and moving the cursor with your mouse I assure you that with practice you will soon use the command line like a pro I'm pretty sure you use your phone to perform a number of activities such as sending messages shopping online and watching videos you simply tap your screen scroll and swipe but have you ever thought of how your phone responds to your tapping scrolling and swiping you interact with your phone and computer through a graphical user interface or GUI which is just a layer above the underlying commands that tells the device what to do developers however need to know how to use specific commands to perform various types of tasks for example to create a new folder on the desktop you right click and choose new folder in the command line you use the specific command mkdir to achieve the same result having a grip on unix commands specifically is a great skill to have in today's software development world in this video you will get started with a few basic unix commands did you know that the majority of companies run their platforms on the cloud and ninety percent of these systems run on a platform called Linux you might be wondering why I am discussing Linux while the topic of this video is unix commands to answer this let's explore some history Unix preceded Linux and was developed by Ken Thompson and Dennis Ritchie and team at AT&T Bell Labs in 1969.
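As promised above, here is that desktop scenario as a minimal command sketch. It assumes a Linux or macOS shell with VS Code's code shell command installed, and it writes the folder name as myjs-project (without a space) so it does not need quoting:

cd ~/Desktop        # point the command line at the desktop
mkdir myjs-project  # make the new project folder
cd myjs-project     # move into the new folder
touch example.js    # create the empty JavaScript file
code example.js     # open the file in VS Code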
Linux came much later on and was originally developed as a hobby by Linus Torvalds hence the name Linux the commands that you will explore in this video originated from the Unix platform but you can use them in most modern environments that run some flavor of Linux using the command line could seem a little intimidating at first but you will quickly learn that unix commands are simply a layer below the normal actions such as opening file directories or renaming files Windows for example became the dominant desktop operating system mainly due to its easy to use GUI Windows allowed non-technical users to perform tasks without having to learn a list of commands but you as an aspiring developer will learn to perform tasks using unix commands before I delve into some of the most common commands it's important to note that each command has a set of helper instructions these helpers give detailed information about how the commands can be run and how something we call flags can be passed one of these helpers is the man command man is short for manual and when called against a command it will display a detailed manual of instructions for that given command for example the command man space LS will show you the detailed manual of instructions for the list command LS you can also use something called flags in conjunction with unix commands flags are used to modify the behavior of a command think of them as options that can either change or extend the functionality of the given command next you will learn about some of the most commonly used unix commands and in the next video you will see some of them in action the CD or change directory command is used to move between different directories of the file system you can learn more about working with relative and absolute paths from the additional reading at the end of this lesson LS is used to show the contents of the current working directory the ls command can accept many different types of flags that will change what is returned in the response for example ls-l lists the files out in list order and shows the read or write permissions owners and groups they belong to ls-a on the other hand will list all files and directories including hidden ones the PWD or print working directory command shows the full path of the current working directory the copy or CP command copies files or folders from one destination to another and the MV move command moves files from one directory to another in this video you learned about some of the most commonly used unix commands next time you use your device think about what commands run underneath the GUI to complete the tasks you are performing okay so I've opened up my terminal window let's navigate to my home directory I type the CD command and then a tilde followed by the enter key then I use the ls-la command to return all of the files in a list including all hidden files notice two files a bash RC file and a bash profile file for now I want you to focus on the bash RC file first I can use the command less dot bash RC to check this file the bash RC file is mainly for configurations it is essentially a script file that's executed when you first open the terminal window what's in there will be configurations for the shell itself for example the types of colors that I'm using I can also add in things around my shell history like how much history of previous commands I want to store and so on so any configuration options that I put in here will be executed when the terminal session begins I press the q key to exit the less
environment the other file is the bash profile file so I can run the last command again this time with DOT profile this tends to be used used more for environment variables for example I can use it for setting environment variables for my Java home directory or my python home directory or whatever is needed during development again I press the q key to exit now I will create a simple shell script for this example I will use Vim which is an editor that I can use which accepts input so type vim and then I create a new file by typing the test shell.sh and press the enter key and then at the top of the file I put in what type of file I want it to be in this case it's going to be a bash file if I press the I key on my keyboard it'll set insert mode then I put in a hash symbol followed by an exclamation mark a forward slash the word bin another forward slash and then the word bash this lets the operating system know that this is a bash script the script is very simple I want to print out some type of text onto the screen so I use the echo command and type in what I want to print out in this case Hello World press escape to get out of insert mode then I type colon WQ exclamation mark to save the file press enter and if I look in the directories now notice there is now a file named test shell.sh the other thing to notice is that this file can't be run at the moment in other words it's not executable it's just a read write file but I want it to be executable which requires that I have an X being set on it in order to do that I have to use another command which is called chmod after using this command I type in the type of permissions that I want so I type in 755 and then I want to set the file that I want to add the permissions to which is test shell dot sh and if I use the ls-l-a command again I notice that the file has now been turned into an executable file this means that I can now run the file from the command line to run the file I press dot forward slash test shell dot sh followed by the enter key and now you notice the words hello world are printed out on the screen this is how you can create simple scripts and make them executable within the bash shell I've opened up the command line and first I want to check what directory I'm currently in to do that I run the PWD command PWD is short for print working directory I type PWD and press the enter key the command returns a forward slash which indicates that I'm currently in the root directory this is the top level directory within the operating system if I want to check the contents of the root directory I run another command called LS which is short for list I type LS and press the enter key and now notice I get a list of different names of directories within the root level in order to get more detail of what each of the different directories represents I can use something called a flag flags are used to set options to the commands you run use the list command with a flag called L which means the format should be printed out in a list format I type LS space Dash l press enter and this Returns the results in a list structure let's focus on some items in this list first you need to know what's the difference between the link file a directory and a standard file the link file is always represented by the L and it's always going to be at the very start of the output the temp directory has an arrow beside it which points to TMP this means temp is the same link to the actual directory TMP the next item is the bin directory and it's represented as a 
d that means that's just a standard directory and that you can use the change directory command to actually open the directory and check its contents now let's focus on the ETC directory to change directory use the CD command I type CD Etc to change to the ETC directory and press enter now I type LS to check the contents notice that the contents are completely different from the root the command returns what's inside the ETC directory to verify that you're in the ETC directory run the PWD command and it confirms that you are in ETC again if I want to change the output type ls-l and it returns the printed output in list structure let's cover a standard file like a text file or a configuration file the association or the symbol for it is the hyphen in this case it represents the file resolv.conf understanding the different symbols and different naming conventions is important when you're trying to find specific files notice that there is a root and that these just represent the owner and the group that it's associated to so if I want to move back from the ETC directory there are two ways to do it one is by typing in CD dot dot which means that I go back up to the parent directory press enter and then I type PWD notice I am now back in the root directory to step back into Etc type CD Etc to confirm that I'm back there type PWD and enter if I want to use the other alternative you can do an absolute path type in CD forward slash and press enter then I type PWD and press enter you can verify that I am back at the root again to step through multiple directories use the same process type CD Etc and press enter check the contents of the files by typing LS and pressing enter let's try another directory for example the SSH directory type CD SSH and press enter then type LS and press enter again you'll notice the different output from each one to move back up to the previous directory I can use the CD dot dot and I should now be in the ETC directory and then again back to the root using CD dot dot which will take you back to the root itself finally I can confirm that I am in the root by typing the PWD command and pressing enter you have now learned how to navigate and change directories so first I want to check what current directory I'm in by using the PWD command and pressing the enter key you can see that I'm in the root user's home directory which displays as forward slash root if I type in the ls-l command notice that I have one directory in here called projects now I will create a new directory called submissions I do this by typing mkdir which stands for make directory and then the word submissions this is the name of the directory I want to create and then I hit the enter key I then type in ls-l for list so that I can see the list structure and now notice that a new directory called submissions has been created I can then go into this directory using the change directory command I do this by typing CD submissions and then press the enter key I type the ls command and notice that there's nothing in there if I want to add some text files or some content I can use another command called touch I type touch test1 dot txt followed by the enter key to add another text file I type in touch test2 dot txt followed by the enter key again now I run the ls-l command and notice that the two text files are listed inside the submissions folder after this I want to go back to the root level directory and I do this by typing in CD dot dot followed by the enter key and then I run the ls-l command again and notice two
directories are now listed projects and submissions now I want to create another directory called archive to do this type MK dir followed by the word archive and then hit the enter key to see all of the directories enter the ls-l command followed by the entry key again once again notice that three directories are now listed archive projects and submissions to get back to the top view of my terminal I can clear my screen by typing clear followed by the enter key after this I type in the ls-l command and now I can see all three directories okay so let's say I want to move the submissions folder into the archive folder this requires a different command called move written as MV in this example I need to specify the directory I want to move and then where to move it to so I type in MV submissions followed by the word archive and then I hit the enter key then I can check to see if the move happened by using the ls-l command and now notice that the submissions directory is gone so now I want to go to the archive directory by typing CD archive again I use the ls-l command and now notice that the submissions directory is listed inside of the archive directory recall that I created two text files inside the submissions directory well you'll notice that they were also moved to the archive directory so I go to the submissions directory by typing CD submissions which changes the directory I use the ls-l command followed by the enter key and now you can see the two text files are present and they were moved too you have now learned how to make directories and files and move directories and files okay so I have launched my terminal and I'm running the ls command it informs me that I have two folders archive and projects next I can change directory into archive using the CD command and search inside using LS this reveals a submissions folder I can then type the CD submissions command to enter into the submissions folder and check what's inside the ls command reveals two files file one dot txt and file 2.txt each of these files have some content in them I can check the content of a file by running another command called cat I run the command cat file1.txt this Returns the contents of the file which is some simple text another command is the word count command which is abbreviated as WC to use this command I can just call it against the file by typing WC file1.txt-w the W flag tells the WC command to return the word count the output informs me that there are 181 words in the file let's run another example with pipes pipes allow you to pass the output from one command as the input to another I can perform an LS command on the current directory note that this outputs two file names let's type the ls command again then I pass in my pipe using the vertical line character then I use the WC command with the dash W flag notice that it returns a result of 2 because there's two files in the system so what if I want to find the word count of a file using pipes I just changed the ls command to cat file1.txt pipe wc-w this returns a word count of 181 for file 1 dot txt did you know that you can also combine this command against the directory or multiple files for example I can use the command to get a combined word count for file 1 and file 2. 
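Before the combined example, here is a short recap sketch of the word-count commands from this demo, assuming file1.txt and file2.txt sit in the current directory; the counts in the comments are the ones reported in the demo:

cat file1.txt           # print the contents of the file
wc -w file1.txt         # count the words in the file (181 in this demo)
ls | wc -w              # pipe the directory listing into wc (2 entries here)
cat file1.txt | wc -w   # word count of one file through a pipe (181 again)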
to get this combined count I just input the command cat file1.txt and then also pass in file2.txt then I use a pipe followed by wc-w this returns a total word count of 362 for the two files in this video you will learn about redirection and the different types of redirection you can use the basic workflow of any Linux command is that it takes an input and gives an output the standard input device is the keyboard the standard output device is the screen with redirection you can change the standard input and or output there are three types of i o or input output redirections standard input standard output and standard error the shell keeps a reference of standard input output and error by a numbering system zero is for standard input one is for standard output and two is for standard error now you will learn about each of these redirection types let's start with standard input taking input normally refers to a user typing information from the keyboard we use the less than sign for user input the cat command can be used to record user input and save it to a file so how do we take input and store it in a file such as a dot txt file let me explain how you can do this by using an example to store user input to a text file so I have just launched my terminal one of the commands you learn to use frequently in this course is the cat command this command is actually set up to take in input on my terminal I type the cat command followed by a greater than sign and then follow it by the name of the input file in this scenario input dot txt press enter now I can add text to the text file created at the end of the text press enter next press Ctrl D to tell the cat command that's the end of the file to output the contents of the file enter cat followed by a less than sign followed by input dot txt press enter notice that the text that I added from the keyboard displays let's discuss standard output now a lot of the commands we have already used for example LS send their output to a special file called the standard output output redirection is handled with a greater than sign i o redirection allows us to control where the output goes now I will demonstrate how you can send output to a text file everything in Unix and Linux is a file this means every time you run a command like LS and press enter it sends the output of the command to the standard output file in Linux if you want to control where the output goes you can use a redirection how do we do that enter the ls command enter dash L to print it as a list instead of pressing enter add a greater than sign redirection now we have to tell it where we want the data to go in this scenario I choose an output.txt file the output dot txt file has not been created yet but it will be created based on the command I've set here with a redirection press enter type LS then press enter again to display the directory the output file displays to view the content of that file use the less command followed by output dot txt and press enter the content that displays is the ls minus L listing of the directory with the different files available errors occur when things go wrong when using redirection you also have to specify that the error should be written to a file you can do that by explicitly setting the number two before the output arrow and you can also combine it with the standard output by adding 2 greater than ampersand 1 in other words 2>&1 to print both the standard output and error if any occurs you have already learned that input is
represented by zero according to the shell output is represented by one the input stream uses the less than sign the output stream uses the greater than sign and the error stream uses two followed by the greater than sign it may happen that an error occurs when you are outputting data to a text file remember that the error will not correspond with the output stream it will change to use the error stream which is represented by two let me now demonstrate how this works so I have the terminal open and I'm running a similar example to the standard output type the ls command follow this by dash L to try and print it as a list instead of using a directory that we know exists I'm going to use one that doesn't exist enter forward slash bin forward slash USR now type a greater than sign followed by the name of the output file in this scenario type error.txt press enter notice the message cannot access which states that there is no such file or directory normally you would think that it would still print the contents to the file but because an error occurred it prints it to the console there are two ways to send the contents to the error.txt file I type ls-l forward slash bin forward slash USR then add the number two which represents the error output followed by the greater than sign now enter the name of the output file error.txt press enter to see what we have inside the error file type less followed by the file name in this case error.txt press enter the error message LS cannot access forward slash bin forward slash USR no such file or directory displays if you want to handle both cases where you may find data or may not find data you can pass in a different redirection so it handles each one both for output and for error to do this again enter ls-l forward slash bin forward slash USR next add the greater than sign for output followed by the file name error underscore output dot txt this time we're going to use another redirection to signify that we also want to get errors to do this I enter a 2 followed by a greater than sign this is followed by an ampersand sign and the number one to get the output that is available press enter to have a look inside the error file type less error underscore output dot txt and press enter notice that the error is contained inside the error underscore output dot txt file and that brings us to the end of this video now you know what redirection is and how to use the three types of input output redirections well done grep stands for global regular expression print and it's used for searching across files and folders as well as the contents of files on my local machine I enter the command ls-l and see that there's a file called names dot txt if I access that file using the less command it displays a list of first names in non-sequential order as in not arranged alphabetically so what I'll do first is use grep to find some patterns of names that start with similar matches then I'll also show how grep can be passed different flags to get different results first I'll perform a standard search using grep so what I'm going to do is look for names that begin with Sam by entering the command grep Sam names.txt this then returns a list of names that begin with Sam keep in mind that grep is case sensitive which means if I run the same query with a lowercase s it returns a completely different set of results because this query doesn't match the capital S it returns partial matches in which sam appears in the middle or end of a name rather than the beginning fortunately I can pass in a flag to
ignore case sensitivity I can do this with the command grep minus I followed by the keyword Sam and then the file name names.txt again this time I get back both the names that begin with Sam and also those with sam as a partial match in the middle or the end of the name so the result set changes based on the type of query that I pass through with different flags we could also do an exact match by passing in a different flag and that's the dash W which means it'll match exactly what I'm looking for so I'll input grep dash W and then pass in the keyword Sam with a capital S and finally our file name names.txt in this case we only get back a single result with the name Sam as any partial matches are ignored lastly I can use a pipe command to combine different searches with grep itself for example let's say I want to search across a list of directories for certain executable files I can combine that with different commands and search across the file structure to find exactly what I'm looking for if I check all the executable files inside the bin directory by running LS forward slash bin it returns a long list of results in order to filter that down I can run the same query of LS forward slash bin but this time I'm going to pipe it pass in a grep and then enter the keyword zip you'll find that in this case I get a much smaller subset back in the results and if I need to refine it further then I can also apply the different flags to look for an exact match a partial match or ignore case sensitivity great work you've reached the end of this module on the command line it's now time to review what you've learned during these lessons in this module you learned how to use the command line to execute commands in Linux you were introduced to some of the most commonly used commands that traverse create rename and delete files on your hard drive you learned how to use piping and redirection to create powerful workflows that automate your work having completed this module you should be able to describe what the command line is and how it is used explore your hard drive using the command line create rename and delete files and folders on your hard drive using unix commands and use pipes and redirection this module began with a video exploring the answer to the question what are unix commands you learned how to determine the current working directory using the PWD command you also explored how to create and change directories and files using the command line you can now create a working directory create two different directories dir1 and dir2 and create files and directories inside dir1 and dir2 the redirection and grep commands from these demos are gathered in the short sketch below.
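A consolidated sketch of those redirection and grep commands, using the same file names as the demos; /bin/usr is deliberately a path that does not exist so an error is produced:

cat > input.txt                          # type some text, then press Ctrl+D to finish
cat < input.txt                          # read the file back as standard input
ls -l > output.txt                       # send the listing to a file instead of the screen
ls -l /bin/usr 2> error.txt              # send only the error stream to a file
ls -l /bin/usr > error_output.txt 2>&1   # capture standard output and errors together
grep Sam names.txt                       # case-sensitive search for Sam
grep -i sam names.txt                    # same search ignoring case
grep -w Sam names.txt                    # whole-word matches only
ls /bin | grep zip                       # filter another command's output through grep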
you can now also use graph to search for files folders and contents of files you're now familiar with the command line well done you're making good progress on your Learning Journey are you familiar with Version Control or Version Control Systems here's a quick example of where they're useful have you ever opened an app on your phone and received a prompt to update to a new version these prompts most likely direct you towards an app store where you then download the latest version as you download the new version you might notice a new layout button or piece of functionality in software and web development developers use Version Control to track the differences between versions and a popular method of tracking versions is the use of Version Control Technologies like git and GitHub in this video you'll discover the answer to the question what is git and GitHub you will learn about the differences between git and GitHub and how web developers make use of them and explore the benefits and advantages of both services let's start off with Git git is a Version Control System designed to help users keep track of changes to files within their projects git was designed to address the challenges that its creator Linus Torvalds was having managing the development of the Linux kernel the operating system for Linux Linux has thousands of contributors who commit changes and updates daily git was designed to help with the challenge of tracking all these changes and updates as well as helping to keep track of changes git was also designed to tackle some of the shortcomings of other Version Control Systems the benefits that git offers over similar systems include better Speed and Performance reliability free and open source access and an accessible syntax it's also important to note that git is used predominantly via the command line developers tend to find git syntax and commands easy to learn the other service commonly used by developers is GitHub GitHub is a cloud-based hosting service that lets you manage git repositories from a user interface a git repository is used to track all changes to files in a specific folder and keep a history of all those changes it incorporates Git Version Control features and extends these by providing its own features on top some of the most common of these features include Access Control pull requests and automation you will learn more about these later in this course the features are split out into different pricing models to suit different size teams and organizations it's also important to point out that GitHub is very popular among web developers it's like a social network for example projects can be private or public users on GitHub have their own profile which other users can follow public projects can accept code contributions from anyone across the globe and it also includes multiple features outside of its core development tools like documentation ticketing and project features you're now familiar with Git and GitHub Version Control Systems along with the benefits and advantages that they offer this is just the beginning of your Version Control Journey with Git and GitHub great work okay so I have just logged into the GitHub website once there I click on the green button with the text create repository when I click on the button I am redirected to the create a new repository screen where I'll be prompted for who the owner is I choose my account as the owner option for this example next I need to input a repository name so I type a name called my Dash first Dash 
repo notice that the input field has a green tick icon beside it this is just GitHub letting me know that this name is available to create the repository if it's not I will see an X icon and be prompted to rename it okay so now I need to type a value for the description input for this I type practice account for learning git the next option I want you to know about is if you want the repository to be public or private public just means that anyone on the internet can see the repository I still have control over who can make changes to it it's just available on the viewable aspect of it on the internet the next option is private meaning it's not available for anyone to see I can only allow access by granting people access to the repository the next few options are about initialization I can initialize a repository with a readme file a git ignore file and a license if one is required for now I'm just going to choose the readme file option and then click the create repository button okay so a repo has now been set up and I can see that I have one single file in the repository called readme.md MD is just short for markdown a popular method for creating documentation because it's shorthand for creating HTML Pages this allows me to do things like creating titles and texts I can insert images and various other web page elements notice that the main branch has also been created and it's important to know that every repository you create will have a single main branch at the start this is also known as the main line next I'm presented with additional button options the first is labeled go to file then there is ADD file which you can use to add a new file from the UI and finally a green button labeled code clicking this button provides me with the GitHub UI options for cloning down the repository first is the option for https which contains the https URL of the repository and I can use this to pull it down by using the git clone command next there is an option for SSH but to use that I have to set up my SSH keys and assign them to the user accounts and finally I have the GitHub CLI option underneath notice that there are additional options for GitHub desktop if I would like to use that and finally I can also download a compressed zip file containing all the files and folder structures for this demo I will show you how to use https to begin select the https option and click on the copy button to copy the https URL for cloning now I go to my command line that I will be using to run the commands to clone the repository I'm currently in my home directory okay so what I usually like to do is create a directory for all repositories that I'm working on at the moment first I create a directory using the command make dir then I type the name of the directory I want to create which is projects next I can CD into that and now I can run the commands to clone the project from the GitHub UI to do this I type the command git clone and paste the https URL I copied earlier finally I press enter on my keyboard notice that I receive a message stating that git is cloning into the my first repo folder it then displays messages about all the objects that have been received it also displays a 100 status message and then finally a statement that simply says done now I can list the directory by running the ls-la command which means list all directories notice that I have my repository which I named my first repo this is the name of the repository that we set up on GitHub finally if I enter inside that folder using the CD 
command I can see a single file the readme.md file if I use the ls-la command another file is listed which is just named dot git you will learn more about this later when you explore how to use this for source control as you now know GitHub is a cloud-based hosting service that lets you manage git repositories from a user interface it's like a social network you can follow users or accept code contributions from anywhere in the world in a previous video you created and cloned a repository to your local device I am now going to explain to you how to pull the repository to your local device I will be demonstrating commands that you can use in the git bash shell for Windows users and the terminal for Mac users this refers to the application where the commands are typed in let's move to the directory I want by typing CD and the name of the directory my first repo once inside the directory run the list all command by typing LS space dash l a dash la is short for list all there are four items in this directory I will focus on two of them the dot git item and the readme.md item let's start with the readme.md file this item was added when I created the repository on GitHub the other item is a folder called dot git which is a hidden folder used to track all the changes in Linux any folder starting with a dot is a hidden folder this folder is automatically created when you create a repository and you will learn more about it later in this course in the command I ran I added the switch dash la so we would list all files and folders including the hidden ones the dot git folder is initialized by running the git init command as the repository was created on GitHub it was not required for us to run it GitHub handled all of this as part of its create new repo flow now let's focus on the git workflow git uses workflows which can be broken into three states namely modified staged and committed now I will go over each state and then provide an example of adding a new file to my git repository to show it in action let's start with the first state adding removing and updating any file inside the repository is considered a modified state git knows the file has changed but does not track it this is where the staging state comes in let's turn to it now in order for git to track a file it needs to be put in the staged area once added any modifications are tracked which offers a security blanket prior to committing the changes then the last state is the committed state committing a file in git is like a save point in many ways git will save the file and have a snapshot of the current changes let me introduce you to an example that summarizes the workflow clearly suppose you have a workflow that contains the three stages just mentioned as well as the remote repository a file is added from the working directory to the staging area from there the file is committed and then pushed to the remote repository from the remote repository the file can now be fetched and checked out or merged to a working directory you will learn more about this later well done you've covered some of the git fundamentals and now know what is inside a git folder and understand the git workflow so I've opened up my terminal window I'll have a look at what directory I'm currently located in I can do this by running the PWD command which is short for print working directory notice that I'm in the directory my first repo now I can check inside that directory by running the ls-la command I can see that I have two items a readme.md file and a hidden folder called
dot git before I add any files or make any changes it's always good practice to check if any changes or commits are currently there I can do this by using the get status command git status also displays what branch I'm on in this instance I'm prompted that I'm on the branch called Main and that my branch is up to date with the origin main this means that all the latest files on my local machine are exactly the same as what is displayed on the GitHub UI which represents the server that everyone commits to git status also tells me that I have nothing to commit and that my working tree is clean now let me show you how to add a simple text file I'll add a file called test.txt by using the command touch test dot txt then I'll run the command git status again now git is telling me that I have an untracked file which is the test.txt file that I just added it's also telling me that I have nothing added to the commit but that untracked files are present and that I should use git add to track them the purpose of the git add command is that I'm essentially prompting git and letting it know that I want to track this file and that it'll be included as part of my commit the first phase of this process is just to run the command git add test.txt now I'm going to run git status again to check that file is now being tracked notice again that I'm notified that my branches are up to date but it's also telling me now that there are staged changes to be committed which is this new file called test.txt it prompts me asking if I want to revert those changes for this I can use the get restore command with the flag dash dash stage and the file name test dot dxt running the command will unstage the file from the commit I then run git status one more time to see the file is back to an untracked status so once again I'll add the file using git add test.txt run git status and now notice that the file is back in a tracked state okay let me clear my screen before moving on to do this I use the clear command now any changes that I make from here on will be tracked and then at the end I will use the git commit command the staged area is really important because you're essentially preparing to get all of the files and changes that you want as part of whatever feature you're working on basically you are getting all of that content ready for commit you also have to remember that this is only on your local machine the distributed manner of git means it will only push to the server using the actual push command itself but any change you make here is only specific to you and your local machine anyone else who pulls down the project from GitHub will only get what's available on the remote server okay now I want to explain to you how to run the git commit command first type in git commit you can pass in a flag of Dash M which stands for message allowing you to type in a message which will be attached to the commit in this example the message is adding a new file for testing next press return on the keyboard and now notice that the response States one file change Zero insertions zero deletions there is also a create mode statement with the name of the file test Dot txt finally if I run the git status command the response says that there is nothing to commit and the working tree is clean however I want to be aware of the message at the top of my screen this message tells me to use git push to publish my local commits and this ties back in with what I mentioned earlier all of these changes are on my local machine and they will only 
Okay, I've opened my command line. I should check to ensure that I'm in the correct directory; using the pwd command I can see that I'm in the my first repo directory. Now it's good practice to perform a git status command to make sure that I have no commits outstanding. If there are no commits and the shell is clear, then my branch will show as up to date with origin main. Rather than working on the main branch itself, my next step is to create a new branch. To create this new branch I use the git checkout command by typing git checkout -b. I then call this new branch feature/lesson, which I'll refer to as feature lesson for the purpose of this video. But this is just one way to create a branch; I could also use git branch and pass in the name as well. Both methods can be used to create a branch. The key difference between them is that git branch just creates the branch, but git checkout -b creates it and moves me from the main branch into the branch that I created. I can verify that I've moved between branches by running the git branch command, which will tell me whether I have switched from the main branch to the feature lesson branch. Any changes that I make will now only occur in this new branch. It's important to remember that the main branch has no indication or knowledge of any of these changes, even when I push code to the main repository, because this branch exists in isolation. The new branch needs to be merged back into the main branch for the changes in the feature lesson branch to be recognized, and this is where a pull request comes in. The purpose of a pull request is to obtain a peer review of the changes made to the branch, in other words to validate that the changes are correct. When coding, many teams will have conditions around unit tests and integration tests; these conditions will usually include validating that the standards have been met for merging back into the main line, and a team will also check for any issues that might have been missed. The next step is to add a file to the new branch. I can create a simple text file called test2.txt using the command touch test2.txt, then I add it using the git add command and commit it using the git commit command. Once I've committed the new file, I need to push my changes up to the remote repository with git push. I type git push -u origin feature/lesson. It's good practice to specify -u, which sets the upstream for this branch, so from now on the branch tracks feature/lesson on the remote rather than main. I press the enter key and this pushes the new branch up to the remote repository; as I am using HTTPS, I will be prompted for my login information. Once this action has been completed, GitHub will recognize that a new branch has been added and will prompt me to create a pull request that can be compared against another branch, in this case the main branch. So my next step is to open the GitHub UI, where GitHub shows my new branch with a prompt: click on the compare and pull request button. A pull request lets the team know that I've made new changes that I want them to review, and they can then approve or request changes to the pull request itself. Another thing to note on the GitHub UI is that I'm comparing this with the main branch. I've essentially cut a branch from main called feature/lesson and then added a new file called test2.txt, and it's this file which is the main difference between the two branches.
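A compact sketch of this branching flow, assuming the branch name feature/lesson from the demonstration and an illustrative commit message (the transcript does not specify one):

    git checkout -b feature/lesson      # create the branch and switch to it
    touch test2.txt                     # add a new file on the branch
    git add test2.txt
    git commit -m "add test2.txt"       # hypothetical message for illustration
    git push -u origin feature/lesson   # publish the branch and set its upstream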
Next I check the pull request. I can see that there's been one commit; in other words just one file has been changed, and it's been added as test2.txt. I then click create pull request. The team will then review the changes and either approve or decline them; once approved, you can then merge your changes to the main branch. This is much cleaner than everyone working off the main branch. It's particularly useful if you have features that are closely tied together, for example a feature that crosses over with some code or requires changes that will most likely be altered by someone else. So the approach of keeping everything at branch level is much easier than having everyone working from the main line; in fact, everyone working off the same branch is more likely to cause issues. Having independent branches makes the project easier to manage, and there's no limit to how many branches you can have. The team decides on the naming conventions that they use. In a lot of cases when adding a new feature you can add the keyword feature followed by the branch name, like a URL path, such as feature/lesson in this example; for bug fixing, fix/ can be used. Because we have no reviewers in my current project, I'm just going to merge the branch and then confirm the merge. Once confirmed, I'm presented with the option to delete the branch. On your own projects it's up to you whether you want to keep the branches or delete them; for now I'm going to keep the branch. Then I can return to my code, where the test2.txt file has merged into the main branch, and I can confirm that by going back to my command line. Next I look at git status again to check if there's something to commit; at this point there's nothing outstanding and I'm still in the feature lesson branch. I can check out my main branch by typing git checkout main, then I run the git pull command and receive the latest changes that were merged in from the feature branch that I just created. Notice that the test2.txt file is now available. I can also verify that with a simple check within the directory by using the ls command, which returns a readme file, test.txt, and the test2.txt which is from my branch. You have now learned the branching workflow, which you'll use regularly when collaborating with other developers. In the pre-internet era, saving project files to different machines for backup and transfer was a tedious process: it required manually copying files between machines one at a time, making things slow for teams. Nowadays the cloud has enabled a more efficient way to do this, and in this video I'll explain the differences between remote and local on GitHub. You have previously learned about the flows modified, staged, and committed in a Git workflow; now you will learn about pushing your changes from your local to a remote repository. Remote refers to any other repository to which developers can push changes; this can be a centralized repository, such as one provided by GitHub, or repositories on other developer devices. In this lesson you will be hearing some new terms such as clone, push, pull, and repo. Don't worry, these will all be explained soon. The remote code is accessed through a URI which is unique and only accessible to those who have permission. Local, on the other hand, refers to your machine, which can be a laptop, desktop, or even a mobile device, and is only accessible to you.
To demonstrate both of these in action, let's say we have a project called coding project one which is located on GitHub with a unique URL; in other words, this project is stored on the remote server. When a user wants to copy this project to their local device, they need to either perform a clone, if it's the first time, or a pull to get the latest changes. To clone a project, a user must first choose a folder on their local machine; coding project one is then cloned from the server and copied into the chosen folder. The user can then make changes to the project and push those changes back to the server. Other users working on the code base won't see those changes on their local machines unless they pull the latest changes from the server. One of the advantages of Git is that you can work offline and then commit your changes when you are ready. Now let's go through an example of how exactly you would do this in GitHub. In this video I'm going to explain what local and remote mean in GitHub and help you to understand the differences between the two. First off, I'm going to create a new local repository using the git init command. I'll start by inputting mkdir to create a new directory and I'll set the name as mysecondrepo. Next I'll use the change directory command, which is cd, followed by the name that I just set. Finally I'll perform the git init command to create my new repository; this returns a line telling me that an empty repository has been initialized under root/projects/mysecondrepo. If I execute another command called git remote, it comes back blank. The reason for that is that I've only initialized a local repository which has no connection to a central repository that sits either on GitHub or another server; right now it's only available locally on my machine. Now I'll step back out from this directory and go into my first repo with the cd command. This is a repository that I created earlier that does have a connection to GitHub using the remote URI. I'll check it by using the git remote -v command, and Git tells me that it's set to gittutorials101/myfirstrepo.git. Next I'm going to set this against our second repository. I'll step into the new directory once more using the cd command, and in this case we're going to add this URL to the remote settings by using the command git remote add, specifying a name and then passing in our URI. The name that is typically used here is origin, so I'll stick with that. So again, the full command with the URL should read git remote add origin git@github.com:gittutorials101/myfirstrepo.git. This time when I execute the git remote -v command, I have this set up against the gittutorials101 repository which sits up on GitHub. What I'm going to do next is use the git pull command, which will connect with the GitHub server and pull down all the changes from the repository. So on my local I now have all the changes, but when I check the directory it's blank. The reason for this is that I haven't set up a branch that matches what I have on the server repository. Fortunately, I can change that by performing the command git checkout main, which will set up a branch main on my local that tracks the branch main from the remote. And now when I check my folder using the ls command, it confirms that I have the readme, test, and test2 files available on my local. In this video you learned about the differences between local and remote in GitHub; this will help set you up to exchange data more efficiently within your development team. See you next time.
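As a rough sketch of this local-versus-remote setup, assuming the directory and repository names spoken above (mysecondrepo and gittutorials101/myfirstrepo; substitute your own):

    mkdir mysecondrepo && cd mysecondrepo
    git init                       # new local repository, no remote yet
    git remote -v                  # prints nothing
    git remote add origin git@github.com:gittutorials101/myfirstrepo.git
    git remote -v                  # now shows the origin URI
    git pull                       # fetch the remote history, as shown in the video
                                   # (depending on your Git config you may need to name origin main)
    git checkout main              # create a local main that tracks origin/main
    ls                             # README, test.txt, test2.txt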
By now you should know how to use git add and git commit to add new changes to your local repository, put them into the staging area, and get them ready for a commit. Now let me guide you through the next step, uploading these changes to the remote repository using git push. I'll also demonstrate how to retrieve changes from the remote server and apply them to your local repository with git pull. Before we begin, let's go over to the command line and perform the command git status. Git tells me that I'm on the branch main, but also that my branch is ahead of the origin main branch by one commit. What this means is that the changes I have on my local repository are currently ahead of what is stored in the remote repository on GitHub. That ties into Git's distributed workflow, in which you can work in an offline state and only ever communicate with a remote repository when you use the git push or git pull commands. Now I'll guide you through pushing your changes to the remote repository, and then I'll demonstrate how to use the pull command to get the latest changes. It's always good practice to check which branch you're currently on, and you can use git status or git branch to do this. This is important because if you do make changes in a different branch, you need to specify where you're pushing up to. So let's push up the changes using the git push command. I'll specify origin as the remote repository and main as the branch, as in I'm pushing my changes to origin and I want them to go to the main branch. I'll be prompted for my login information as I am pushing using HTTPS. Once I enter my login information, you'll notice that the commit is pushed from the local main to the remote main on the remote repository. Let's refresh the page on the GitHub website: you can see that my test.txt file now appears there. That's taken the commit snapshot that I have in my local repository and pushed it up to the remote repository. Git has then compared those files with what's on the remote repository to find any conflicts or problems; if none are found, it'll just merge them straight through, which is classed as an auto merge, and if there are any conflicts my push will fail. Before doing a push it's also good practice to perform a git pull in order to get the latest changes from the remote repository and reduce the odds of encountering a conflict. Because I only have a single file and this is a new repository, I'm not going to run into any conflicts or issues. So now let's move on and I will guide you through how you can use git pull. Normally when you're working on a project you could have several developers all submitting different branches, different code, and different features, and in order for you to get those changes you need to use the git pull command. To demonstrate this, I will add a single line to the test.txt file using the GitHub UI and add the commit message updated the test.txt file. I'll then commit it directly to the main branch by clicking on commit changes. The changes now appear on the UI, but because I haven't used the git pull command on my local machine yet, I should have no content in the test.txt file locally. Let's verify by using the cat command on test.txt, and sure enough the file is empty, which is what you'd expect. As I mentioned before, I need to run the command git pull, which will get the latest changes from the remote repository; if any new changes were added, they'll be reflected in the shell output. I run the command, and in this case Git tells me that one file has changed with one insertion. If I run the cat command on test.txt once more, it shows that the line this is my change is now available in my local directory.
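A minimal sketch of the push and pull round trip demonstrated here, assuming the branch is main and the remote is named origin:

    git status              # local main is ahead of origin/main by one commit
    git push origin main    # upload the local commit to the remote repository
    # ...a change is then committed directly on GitHub...
    git pull                # bring that remote change down to the local clone
    cat test.txt            # the new line now appears locally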
With git pull you're taking all that data from the remote server; Git then merges the snapshot from the remote with the snapshot that's on your local and will auto merge them if there are no conflicts. Once that's complete, I'll have the latest changes on my local machine. You have now learned how to push to and pull from your GitHub repository. Have you ever applied for a job? You know the process: prepare your resume, look for jobs, submit an application, prepare for interviews. That is an example of a workflow, and in computer programming workflows are really important. By the end of this video you will be able to describe what a workflow is, and you will also be able to identify different workflows available. Now let's start with an example to illustrate why workflows are important. As a developer working on a project, you first need to pull the project down from a remote repository to your local machine; this is commonly called checking out a project or pulling a project. Once on your local machine, you can build and run the project and make changes. When you are done, you have to push the changes you made back to the remote repository so other developers can see them. From this example you can understand that the purpose of a workflow is to guide you and other members of your team: you should not disrupt or cause blockers for deployment or testing, or for any other developer who contributes to the project itself. Choosing a workflow needs careful consideration; it can depend on the size of the team, the culture of your workplace, and also the type of product you intend to build or update. With that in mind, let me explain feature branching, a common workflow used by many developers. Feature branching means you create a new branch from the main line and work on this dedicated branch until the task is completed. Rules and conditions need to be made in order for this branch of code to be kept in a good state. Every code base has a main repository, which is essentially the source of truth for the application. All changes, such as add, edit, or delete, are submitted directly to the feature branch, and the main branch stays as is. When you are ready and happy with the code you have added, you commit the changes and then push them to the server repository, and as it's a feature branch, a pull request follows. The pull request is compared to the main branch so developers who peer review the code can see exactly what was changed; once it's reviewed and approved, it can then be merged into the main line. Now let me guide you through how this works using Git and GitHub. Before creating a new branch, always ensure you have the latest code; you can do this by running the git pull command to pull the latest code from the remote repository. Next you need to create your new branch, which you can do by passing the -b flag with the checkout action. Next let's add new content to this branch by creating a readme.md file; to stage it, type git add . or git add readme.md and press enter. Next you need to commit the new file and provide a meaningful message so other developers can discover what you added. To do that, run the git commit command with the -m option to include a message with a short description of the changes being committed. The file has now been added to the local branch, which means that the file is only visible locally to you. To allow other developers to see the changes, you need to push the file to the remote repository, and you can do that by running the git push command.
The changes are now pushed to the remote repository on GitHub. Your next action is to get them reviewed as part of a pull request, but more about that later. And that brings us to the end of this video; now you know what a workflow is and how a feature workflow works. Well done. Previously you learned about the hidden folder called .git that is located in each project, and you know that this folder is responsible for keeping track of all changes across a project. But how does Git know what branch you're currently on? It keeps a special pointer called HEAD, which is one of the files inside the .git folder; this file refers to the current commit you are viewing. You will now learn how to identify the current commit you are working on. First, open the .git folder: on your terminal type cd .git and press enter. Next type cat HEAD and press enter. In Git we only work on a single branch at a time, and each branch also has its own file inside the .git folder under the refs/heads path. Let me show you: type cat refs/heads/main and press enter. After you enter this command, a single hashed ID appears; this hashed ID is a reference to the current commit. Let's switch branches to see how HEAD is moved to point to a new branch. Type git checkout testing and press enter, then type git branch and press enter. This moves HEAD to point to the testing branch. Let me explain how this happens by using a diagram: we have two branches, main and testing. When you run the checkout command, it moves HEAD so that it now points to the testing branch. To check the contents of the HEAD file inside the .git folder, you enter one last command: type less .git/HEAD and press enter. You can now verify that HEAD has changed from main to testing. Now I will demonstrate how Git HEAD works with a simple example. I am here in the terminal, and to see what branch I am on I run git branch; when pressing the enter key I can see I'm on the main branch. To confirm that, I run the cat .git/HEAD command and press enter, which brings me back to the reference it actually points to, namely ref: refs/heads/main; in this case you can see the reference is pointing to the head of main. To change my branch to the feature testing branches branch from above, I use the git checkout command: git checkout feature/testing-branches. I then look at the HEAD file inside the .git folder by typing cat .git/HEAD, and the ref is now pointing to the feature testing branches branch, namely refs/heads/feature/testing-branches. Notice that my branch is up to date with origin/feature/testing-branches. I'll go back into my main branch by typing git checkout main and then check the reference file for main using the cat command again: cat .git/refs/heads/main. When I press enter I get a hash ID, which is a reference to the latest commit of that working directory.
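A short sketch of inspecting HEAD from the repository root, assuming branches named main and feature/testing-branches as in this example:

    cat .git/HEAD                           # ref: refs/heads/main
    git checkout feature/testing-branches   # switch branches
    cat .git/HEAD                           # ref: refs/heads/feature/testing-branches
    git checkout main
    cat .git/refs/heads/main                # prints the commit hash that main points to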
I can show you that if you make a change to any file within this directory, this ID will then change after a commit has happened. So let's do a simple update to the readme file. You can do this with any editor, such as VS Code, or by executing the Vim command: type vim readme.md and press enter. When inside the readme file, add some text, in this case a minor update to your my first repo line, and save it; if you would like to learn more about Vim, there is a link to an additional reading at the end of this lesson. Then check the ID again just to verify: type cat .git/refs/heads/main and press enter. The ID should be the same, because we haven't committed anything; we've just made a change, so when we run the cat command the ID is exactly the same as before. If I do a git status, it tells me that I have modified the readme file; on the screen the words modified readme.md display in red. I'll now add that file; a quick shorthand for staging the change is git add followed by a space and a dot. Type the command and press the enter key. I am then going to do a commit: type git commit -m, for message, with the message adding minor update, and press enter. The output confirms that one file changed, with one insertion and one deletion. I then use the cat command to verify what is in the reference file by typing cat .git/refs/heads/main and pressing the enter key to confirm that the ID has changed. Originally the ID started with 8b55 and now it starts with 9c90. Whenever a change is committed, this ID updates to be the latest commit for that working directory. And that's Git HEAD: you now know what the purpose of HEAD is, and you can also change HEAD to point to a different branch.
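Here is a minimal sketch of that verification, assuming the file readme.md and the commit message used above (the example hashes shown in the video are specific to that repository):

    vim readme.md                  # make a small edit and save
    cat .git/refs/heads/main       # hash unchanged, nothing committed yet
    git status                     # readme.md shows as modified
    git add .
    git commit -m "adding minor update"
    cat .git/refs/heads/main       # hash now points to the new commit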
Great novels are rarely written in one go and usually endure several drafts before the author is satisfied with the outcome. Of course, there's always the possibility that an idea from an earlier draft will sound good again later, but without a system of organization it can be difficult to find where this idea is located. Programming is no different, and sometimes you need to revisit old code. In this video I'll guide you through how to use git diff to compare changes across your files, branches, and commits. You probably know that the git status command tells you which of your files have been changed; the git diff command goes a step further and tells you what exactly these changes are. When used together, you can think of them as a file system: git status tells you the file names, but to open the file and see the contents you'll need git diff. To demonstrate, let's say you have a text file named cities.txt which contains the names of cities you visited. You've been updating the list during a tour of South America, but upon returning home you've lost track of what you've recorded. So what can you do? Well, this is a situation where git diff makes itself useful. Git diff will compare the previous version of the file with your current one to find any differences; it will then tell you specifically what content has been removed as well as what content has been added to the file. In your cities.txt file, git diff would tell you that you removed one city that was in version A and added a new one that appears in version B. So now that you've had a basic explanation, let's go into a more detailed example. In this video I'll show you how git diff is used to make comparisons against files on your local repository; it can also be used against commits and against branches. I'll start with a simple example. When I go into my local repository, I'll find a file called readme that I'd like to update slightly. You can do this with any editor such as VS Code; I can also do this by executing the Vim command to enter the file for editing, remove a few words, and then save it. If you would like to learn more about Vim, there is a link to an additional reading at the end of this lesson. Next I'm going to use the git diff tool to compare the updated file against HEAD; because we haven't yet completed a commit, it's not available for a comparison against another commit. So I'll input git diff, pass in HEAD as the first option, and then finally the file name. This returns an output showing the changes that occurred in the file: the line starting with a minus symbol represents what it originally was, while the line with a plus symbol shows what it is now. So my example tells me that the words minor update have been removed. In addition to individual files, you can also make comparisons against previous commits. I'll start by using the git log command to display my history of commits, and I'll also use the pretty flag here so that each one is shown on one line; the pretty flag is used by developers to make the output more readable. Each commit has its own ID code, so I'll perform a git diff command on the codes from the most recent commit and from the very first one. Git will go through all the files, note all the changes that have occurred, and return the difference between the two. Finally, one more way of using git diff that I'll show you is making comparisons against branches. If I perform the command git branch, it will display all the branches that are available in the repository. I can then use the git diff command and pass in my main branch followed by my feature branch as the second option, and once again this will display all the changes that have occurred between the two. This shows that my feature branch has the previous update while the main branch has the most recent one. So these are a few examples of how you can use git diff. In this video you learned how to use the git diff command to keep track of changes across your files, branches, and commits. This tool can help you to stay on top of updates and avoid mistakes or overlap. See you next time.
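A brief sketch of the three comparisons shown in this demonstration, assuming a readme.md file, a main branch, and the earlier feature/lesson branch; the commit IDs are placeholders you would copy from your own git log output:

    git diff HEAD readme.md                          # uncommitted edits versus the last commit
    git log --pretty=oneline                         # list commits, one per line, to grab their IDs
    git diff <first-commit-id> <latest-commit-id>    # changes between two commits
    git branch                                       # list available branches
    git diff main feature/lesson                     # changes between two branches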
One day you might be overseeing a big team of developers; can you imagine how complex it gets to keep track of everyone's changes and updates to files? Fortunately Git has a very helpful command for keeping track of who did what and when: it's called git blame. In this video you will learn about git blame and I will demonstrate how it is used with a few examples. One of the core functions of Git is its ability to track and record the full history of changes for every file in the repository, and in order to view and verify those changes Git provides a set of tools that allow users to step through the history and view the edits made to each file. The git blame command is used to look at the changes of a specific file and show the dates, times, and users who made the changes. By now you should know how to use commands like git log to see the changes made; I will now use the feature.js file to demonstrate how git blame works. To run the git blame command, type git blame and the name of the file, in this case feature.js. After pressing enter, Git returns a list of all changes on the file. To understand what is happening, let's break down the blame messages and go through them line by line. Every line will start with the ID, then the author, the date and time when the change was made, and the exact line number where the change occurred; then the actual change or content is also returned. The ID is a reference ID of the commit, and the same ID might appear on several lines; this happens when several lines were changed in a single commit by the same developer. The author is the person who created the commit, the timestamp is the date and time when the changes were committed, the line number represents the location in the file, or the exact line, where the changes were made, and the content is the code that was added to the file. Now that you know the meaning of each line in the blame output, let's explore a real example in which you will check who made changes, when the changes were made, and also what changes were added. For the purposes of demonstration I will be using a public repository called MkDocs. MkDocs has contributions from many different developers, so it's a good way to see the change history of specific files. To begin, I check inside the directory by using the ls command and passing in -l to get a list of all the available files, and I'll just pick one. The file I will use is called setup.py, which is a Python file. In order to examine the different changes to that file, I run the command git blame and pass in the name of the file, setup.py, and press enter. The output lists all the changes made by all the different developers, along with the timestamps and line numbers as well as the actual changes that were made. Now I will talk you through the output. Starting from the left of the list is what's called a hash ID, which represents the commit in which a change occurred; then the name of the developer who worked on the file is listed, then the timestamp when the change went in, next the line number in sequential order, and finally the actual change that was implemented. I can scroll through the list of changes all the way to the end of the file, depending on the size of the file or the number of lines it has, and if I want to exit out I press Q, which will clear the screen. Take note that git blame on its own, passing just the file name, will list the entire file. In a lot of cases you will work with very large files, and then you need a way to abbreviate the output or chop it down based on, say, line numbers. Fortunately git blame offers a flag for that: I type git blame, pass in the -L flag, specify the starting line number and the end line number, in this case 5,15, then pass in the file name setup.py again and press enter. This time a smaller subset is returned that only starts at line 5 and ends on line 15. The output indicates that there are four different changes made by five different developers across these lines.
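A quick sketch of these blame commands, assuming the MkDocs repository is available locally and contains setup.py:

    cd mkdocs && ls -l            # inspect the repository contents
    git blame setup.py            # annotate every line with commit, author, and date
    git blame -L 5,15 setup.py    # restrict the output to lines 5 through 15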
Let me give you a few more tips around git blame. Firstly, you can change the format of how the list is displayed, similar to what you can do with the ls command in Unix. You can pass in a -l flag to change the output itself, so let's run git blame -l followed by the file name and press enter. This time there are a few changes to the output; for instance, the hash ID is in its full-length form rather than the shortened version, so the output is a bit more detailed. You can also control whether you want to show email addresses or change the date format; these are examples of the various things that you can do. Secondly, another aspect of using git blame is that you can see the detailed changes, or the actual commit changes, of a specific hash ID. To do that, I run the git blame command on the file again in order to copy a hash ID from the output. I then use that with git log -p, passing in the hash ID, and press enter. This gives you the actual change that occurred. Just to clarify: git blame shows you the point where a file was changed, while git log gives you the detail of the change. I always use the two in conjunction to get more details about what changes occurred. You've reached the final video in this lesson on creating a repository with forking, and the end of the Git module. Let's take a few moments to recap what you've learned. You now know how to: explain the principles of Git and utilize a GitHub repository, including branching and merging code; perform a local install of Git on a Windows operating system; create a new repository in GitHub and clone it to a local machine; explain the fundamentals of Git and outline the Git workflow; identify the differences between remote and local repositories in GitHub; explain what the git add and commit commands do and describe how they work; push content to remote repositories with git push and retrieve content from remote servers using git pull; keep lines in your workflow clear and stable with the use of branches; explain how HEAD is used in Git to identify the current branch you're working on; compare changes across files, commits, and branches using diff commands; examine changes to files and identify their author with the use of blame commands; and create a repository with the use of forking. You're now familiar with Git, GitHub, and creating repositories with forking. Great work. In this course you were introduced to the practice of version control. Let's do a brief recap of what you covered. In the first module you learned how different version control systems and effective software development workflows enable modern software developers to collaborate across the world without messing up each other's code. You gained knowledge about the history of version control and how version control systems such as Subversion are used to bring order to the chaos of massive software projects that have the potential for mistakes and bugs. Next you learned more about the various systems, tools, and methodologies that are leveraged by software developers to collaborate successfully as part of a global team. You explored how to resolve conflicts in Git, and saw that version control plays a crucial part in the development of software. You then moved on to investigate the difference between staging and production, and learned that a staging environment should mimic your production environment. In module 2 you learned about how to
use the command line to execute commands in Linux. You were introduced to what the command line is and learned to use commands that traverse, create, rename, and delete files on your hard drive. Then you learned how easy it is to use piping and redirection to create powerful workflows that will automate your work, saving you time and effort. Finally, you explored the command line further, discovering standard input and output streams, flags that can be used to change the behavior of a command, and grep. In module 3 you developed a strong conceptual understanding of the Git technology and how it is used in software development projects to manage team files. First you learned how to install Git on various operating systems, create a GitHub account, and connect to GitHub via HTTPS and SSH. Next you gained a practical understanding of how Git works, including creating and cloning a repository, add, commit, push, and pull. You also explored how to use a repository and some concepts associated with workflows, such as branches, blame, and forking. Finally, the ungraded lab is an opportunity to complete a practical version control exercise by forking a repository, creating a branch, and committing a change; it also includes staging your changes and opening a pull request with a source repo. Well done on completing this recap; now it's time to put into practice all that you've learned. Are you ready to proceed? Good luck. Congratulations on completing the Introduction to Version Control course. You've worked hard to get here and developed a lot of new skills during the course. You should now have a great foundation in the different version control systems and how to create an effective software development workflow, and you've also demonstrated your skill set by managing a project on GitHub for the graded assessment. Following completion of this course, you are now able to implement version control systems, navigate and configure documents, files, and directories using the command line, create and manage a GitHub repository, and manage code revisions. The key skills measured in the labs showed your ability to determine the current working directory, make and change directories and files using the command line, create, clone, commit, and push to a repository, create a repository with forking, and manage a project on GitHub. So what are the next steps? You've established a good foundation so far, but there's always more to learn. Whether you're just starting out as a technical professional or a student, this project will enable you to prove your knowledge and ability. Your project experience shows employers that you are self-driven and innovative; it also speaks volumes about you as an individual and your drive to continue your educational progress. Once you've completed all the courses in this professional certificate, you'll receive Coursera certification. Certifications provide globally recognized and industry-endorsed evidence of mastering technical skills. Congratulations once again on reaching the end of this course; it's been a voyage of discovery. Best of luck, and do continue to pursue your own learning objectives to their final goal.
Welcome to the next course in database engineering. The focus of this course is on database structures and management with MySQL. Let's take a moment to review some of the new skills that you'll gain in these modules. In the first module of this course you'll learn how to filter data using logical operators, perform joins on tables and make use of aliases, group data using the GROUP BY and HAVING clauses, and deploy the ANY and ALL operators in the database. In the second module you'll explore key concepts around the topics of updating databases and working with views. For example, you'll learn how to insert and update data using the MySQL REPLACE statement, make use of constraints in a MySQL database, and change the structure of tables using ALTER and COPY TABLE statements. You'll also learn how to use subqueries and how to combine them with comparison operators, and you'll discover how to create virtual tables with the MySQL CREATE VIEW statement. In module 3 you'll explore functions and MySQL stored procedures; by the end of this module you should know how to make use of common MySQL functions like numeric, string, and date functions, deploy comparison and control flow functions, and work with stored procedures. During these modules you'll encounter activities to test your knowledge and skills. You'll receive the opportunity to demonstrate some of this learning, along with your practical database skill set, in the lab project, and you'll also demonstrate your knowledge
of these topics in a graded assessment so what are you waiting for let's get started you know what's amazing is databases are present in our everyday interactions with these amazing like digital experiences we have so whether it's looking up uh somebody's phone number on your smartphone or whether you're looking for your next movie to stream or paying for your groceries at the checkout counter and scanning your groceries like you're interacting with databases every step along the way foreign [Music] ER I'm a software engineer working out of our meta DC office and I've worked in Security in community integrity and currently in the Privacy space as your use of data for an application grows you increasingly need to be able to find that information quickly need to manage deleted update that information quickly and so without a database management system you're either left to kind of create those structures on your own and you may be successful at first but as you're you're the scale of what you're trying to do grows it becomes increasingly harder so these database Management Systems something like MySQL gives you that almost for free you have all that service and functionality already available to you which feeds you up to then focus on let's say the business problems that you're trying to solve like the user problems you're trying to solve without having to necessarily worry about everything that's powering the data management layer MySQL is a it's a relational database management system so it allows you to store data retrieve that data and manage it delete and update it for a variety of uses so it can be applied to many different application types so everything from the motor vehicle administration managing my driving records to something like the likes and shares I get on my Facebook app all of that can be managed in the database and MySQL is one of the most popular databases we have in the world so MySQL allows you to automate a lot of things like making backups and setting up failovers updating schemas it allows you to handle very high concurrencies requests so if you've got web applications or needs where there's a lot of requests coming in it's particularly good at that it also has this fantastic Community which means you have forums where you can reach out for help you've got great people that can provide you support documentation and it's open source which means you can always take a look at the code and you can actually make a request to the creators and contributors to that software we at Matt I use MySQL to store and retrieve the social raft interactions the shares the likes and so I know that when my friends use the Facebook app and saw my picture last week that I can visualize the thousands of requests being made across our Fleet of isql servers to retrieve that information and display it to them and to me so MySQL is powering all these amazing digital experiences for us we use MySQL because it provides Automation and that means a small group of Engineers can manage a very large Fleet of servers and can do things like these backups and failovers again automate all those functions and it's also very good for these high transactional requests so a lot of what we do at meta are these short bursts of very uh precise requests for insertion of data deletion of data lookup of data and MySQL excels at that it's incredibly rewarding to learn about databases and it can be challenging not only to code but to learn database management and configuration data structures but the rewards are 
that you are empowered to build incredible experiences incredible solutions to real user problems or you can build applications that will serve you know for entertainment there's a variety of uses for software that are all going to be powered by well organized structured data so hang in there go through this lesson you know learn because you will be standing on the shoulders of giants with all this data layer that will be readily available for you and you will Propel you to take it to the next level I hope you have learned that data is a part of every application and that structuring that data and managing the data effectively such that you can retrieve the data quickly you can process it effectively and you can display the right information to the user is super important and the knowledge you have of database management will help you not only structure that data but you will influence everything else that comes after it the processing the apis and even something like the user interface so you have a huge opportunity to influence software development projects all the way from the back end to the front-end experience by your database management skills and data skills I hope that you get a chance to apply this knowledge to a future role and I wish you the best in your next endeavor you might already be familiar with using a where clause and a condition to filter data in a database table but what if you need to specify multiple conditions in a where clause you can use logical operators to specify multiple conditions or rules so when the data is filtered all specified conditions are applied by the end of this video you'll be able to identify the logical and and or operators and explain how they're used to combine conditions and develop a working familiarity with the logical not operator and outline how it is used with data filtering before you explore how to filter data using multiple conditions let's take a moment to recap how the where Clause works it's important that you understand it before working with logical operators when filtering data in a database table you can add a where Clause to your SQL select statement to specify a condition or rule for how the data should be filtered a select statement begins with the select keyword or command you must then specify the data or columns to be queried you then add the from keyword followed by the table you need to query finally you must add a where clause and a condition but as you've just learned it's also possible to specify multiple conditions in the where clause these conditions are specified using logical operators the let's begin by exploring the and and or operators the and operator is used with the where Clause to filter data it checks of all combined conditions meet the value of true and the or operator checks if any of the combined conditions meet the value of true let's take a moment to explore the Syntax for each of these logical operators write the select statement as usual however in this instance multiple conditions are placed after the where clause and combined using the and operator the statement checks of all these conditions yield a value of true for a record if so then that record is included in the result set with the or operator a record is included in the result set if any of the conditions separated by or is true that is if at least one condition yields a true value for a record in the table then that record is included in the query result set next let's look at the not logical operator the not operator Works slightly 
differently to other operators. It selects a result to be included in the query result set only if the conditions specified in the WHERE clause are not true; in other words, it reverses or negates the results that are returned once the condition is evaluated. To use the NOT operator, you just type NOT after the WHERE clause, followed by the required condition. Let's take a few minutes to find out how these operators are used over at Lucky Shrub. Lucky Shrub are reviewing their accounts and need to generate specific details on their customers and the purchases they've made; they can complete this task by filtering data with the use of logical operators. In Lucky Shrub's database is a table called customer purchases. This table contains the data Lucky Shrub needs to complete their queries, and it is divided into the following four columns: customer ID, customer names, customer locations, and purchases, the value of each customer's individual purchase. Lucky Shrub first need to identify customers from the location Gila County who've made purchases of over two thousand dollars. This requires two search conditions: the first is customers who are from Gila County, and the second is customers who have made purchases of over two thousand dollars. You can retrieve these details by writing a basic SELECT statement as follows. Begin with SELECT all from the table customer underscore purchases. Next type the WHERE clause, then the first condition as follows: the purchases column, the greater than operator, and the figure of two thousand. Then type the AND operator to include a second condition; this second condition targets the location column and uses an equals operator to return all results for Gila County. The AND operator here combines the two conditions and ensures that both are evaluated when filtering data from the table. So your SELECT statement is instructing SQL to select all records from the customer purchases table that satisfy the following criteria: purchases greater than two thousand dollars and made by customers in the location of Gila County. For a record to be included in the result set, its purchases column must have a value greater than two thousand; if so, the first condition yields a true value. In addition, the location column must have a value of Gila County; if this is the case, then the second condition also yields a value of true. And as you just learned, the AND operator here insists that both conditions yield a value of true, so any records that match are included in the result, while any records in the table that do not yield the value of true for both conditions are omitted. Your query is now ready to run, so press enter to execute. The result set that this query returns contains two records: there are two customers from Gila County who've made purchases over two thousand dollars, Benjamin Klaus and Julie Murr.
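As a sketch of the query being described, assuming the table and columns are named customer_purchases, purchases, and location (the schema is only spelled out verbally above):

    SELECT *
    FROM customer_purchases
    WHERE purchases > 2000
      AND location = 'Gila County';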
Now Lucky Shrub need to identify customers who are from Gila County or Santa Cruz County. The logical OR operator can be used to combine multiple conditions in the WHERE clause, so it's perfectly suited to this task. For this query you first need to generate a list of customers who are from either Gila County or Santa Cruz County, so the first step is to write the following SELECT statement: SELECT all FROM customer purchases, and then add the WHERE clause. This WHERE clause is then followed by the first condition, location equal to Gila County; then insert the OR operator followed by the second condition, location equal to Santa Cruz County. You now have a WHERE clause that uses the OR operator to combine the two conditions. For a record to be included in the result set, its location column must have a value of Gila County; if so, then it meets the first condition and yields a true value. Or the location column must have a value of Santa Cruz County; if this is the case, then the second condition yields a true value. The OR operator ensures that at least one of these conditions must yield a true value, and any matching records will be included in the result; a record in the table that does not yield a true value for either condition is omitted. Press enter to execute the query. In this case the result returns three records, or customers. Next, Lucky Shrub need to retrieve the details of customers who do not reside in Gila County or Santa Cruz County. They can perform this task using the NOT logical operator, which is used in a similar way in the WHERE clause of a SELECT statement. You can write the statement as before, but this time type the NOT operator after the WHERE clause, then list the conditions: location equal to Gila County or location equal to Santa Cruz County. In this query the conditions have been enclosed in parentheses because there are multiple conditions; parentheses are not required where you have just one condition. The NOT operator checks the records for values that do not yield a true value for the given conditions, in other words records that do not match either of the listed locations. Press enter to execute the query and generate the output. All the records that have a location value which is not Gila County or Santa Cruz County are included in the result, and the remaining records are omitted; the output shows four records from the customer purchases table. If you find all these examples and operators a bit complicated, don't worry: you'll review detailed examples of how to use these operators in later videos in this course. For now you should just be able to identify each of the operators and explain their syntax.
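A sketch of these two queries under the same assumed schema (a customer_purchases table with a location column):

    -- customers from either county
    SELECT *
    FROM customer_purchases
    WHERE location = 'Gila County'
       OR location = 'Santa Cruz County';

    -- customers from neither county
    SELECT *
    FROM customer_purchases
    WHERE NOT (location = 'Gila County'
            OR location = 'Santa Cruz County');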
You might already be familiar with filtering data using the AND and OR operators, but what if you need to perform more complex data filtering tasks, like filtering data based on a pattern? You can use more logical operators, such as IN, BETWEEN, and LIKE. By the end of this video you'll be able to identify the IN, BETWEEN, and LIKE logical operators and explain how they're used, and explain how wildcard characters can be used with logical operators to filter data. Let's begin with a review of the IN, BETWEEN, and LIKE operators. The IN operator lets you specify multiple values in the WHERE clause. The BETWEEN operator selects values within a given range; these values can be numbers, text, or dates. And the LIKE operator is used to filter data based on pattern matching. Let's look at the syntax for the IN operator, which requires slightly different syntax than a typical SELECT filter statement. After the WHERE clause you must type the column name to which the IN operator is applied, then add the IN operator, and you must also include the set of values within parentheses. If the specified column's value of a record matches any value in the set, then that record will be included in the query result set of the SELECT statement. The IN operator is like a shorthand for multiple OR conditions, and you can also use NOT IN to filter the opposite results of those you receive from the IN operator. Next, let's review the BETWEEN operator. For the BETWEEN operator you must also specify the column name after the WHERE clause. The BETWEEN operator is then applied along with the two required values; these two values mark the boundary of a range, in other words they're the beginning and ending values of the range. The operator then selects values within this given range, and the values that can be used with the BETWEEN operator include numbers, text, and dates. If the specified column's value of a record falls within the value range specified here, that record will be included in the query result set of the SELECT statement. Finally, let's look at the LIKE operator, which is used to filter data based on pattern matching. The operator is placed after the WHERE clause and the specified column name, and a pattern to be matched against the column data is then added. This pattern can be written using what I refer to as wildcard characters: the first of these is the percent sign, which represents zero, one, or multiple characters, and the second is the underscore sign, which represents one single character. For example, a pattern could be written as G, underscore, underscore, percent sign within a pair of single quotes. As you've just discovered, each underscore represents one single character while the percent sign is zero, one, or more characters, so this pattern searches for values that start with the letter G and are at least three characters in length. If the specified column's value of a record matches the given pattern, that record will be included in the query result set of the SELECT statement. Let's look at a demonstration of these operators in the Lucky Shrub database. Lucky Shrub are performing a review of their accounts and need to generate specific details on their customers and the purchases they've made; they can complete this task by filtering data with the use of the IN, BETWEEN, and LIKE logical operators. In Lucky Shrub's database is a table called customer purchases, which contains the data Lucky Shrub need to complete their queries. The data is divided into the following four columns: customer ID, customer names, customer locations, and purchases, the value of each customer's individual purchase. First, Lucky Shrub need to use the MySQL IN operator in the WHERE clause to identify customers who are located in Gila County or Santa Cruz County. You might already be familiar with filtering data using the OR operator; the IN operator is like a form of shorthand for multiple OR conditions, so you can get the same results as the OR example by using the IN operator to extract the required data. Using the IN operator, write a basic SELECT statement as follows: begin with SELECT all data from the table customer purchases, next type the WHERE clause, then the location column followed by the IN operator, and then within parentheses specify the set of values separated by a comma, Gila County and Santa Cruz County. When run, this query returns three records, or customers; these are the same results as the OR operator returns. Now let's check out how the MySQL BETWEEN operator functions in the WHERE clause. In this example Lucky Shrub need the details of customers whose purchases are in the range of one thousand dollars to two thousand dollars. Write the SELECT statement as before, then add the WHERE clause followed by the filter column, which is purchases. Then add the BETWEEN operator and give the value range: the range begins with the value 1000, followed by AND, and then ends with the value 2000.
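Sketches of these two queries, again assuming the customer_purchases table with location and purchases columns:

    -- IN as a shorthand for multiple OR conditions
    SELECT *
    FROM customer_purchases
    WHERE location IN ('Gila County', 'Santa Cruz County');

    -- BETWEEN includes both boundary values
    SELECT *
    FROM customer_purchases
    WHERE purchases BETWEEN 1000 AND 2000;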
The BETWEEN operator filters the records whose purchase value lies between one thousand and two thousand dollars, including the beginning and end values. In this case, BETWEEN is a quicker and easier way to filter the records whose purchases value is greater than or equal to one thousand dollars and less than or equal to two thousand dollars. Finally, let's see how Lucky Shrub make use of the MySQL LIKE operator. LIKE is used for pattern matching: in a WHERE clause it searches a column for the given pattern and filters data from the table based on that pattern, and it's often used in conjunction with wildcards for single or multiple characters. Let's demonstrate an example using a pattern on the location column to filter the records whose location value matches the pattern. Lucky Shrub's pattern must find any values that start with G and are at least three characters in length. First write the SELECT statement as before, then add the WHERE clause followed by the name of the filter column, location, and finally add the LIKE operator followed by the pattern, which in this example is G followed by two underscore characters and a percent symbol. Press Enter to execute the query. The output contains three values that start with G and are at least three characters in length; any values that don't match the pattern are omitted. Lucky Shrub have now completed their data filtering tasks and returned all the required results from their database, and you should be able to combine conditions and filter data using the IN, BETWEEN, and LIKE logical operators. Great work. Little Lemon restaurant has run into some problems with their database: some of the table and column names are too long, which is causing issues with the output of queries, and they need a way to generate results that are simpler to use, read, and understand. Fortunately, they can solve these issues with MySQL aliases. Over the next few minutes you'll discover how Little Lemon can make use of aliases, and by the end of this video you'll be able to demonstrate an understanding of the concept of an alias in a database, identify situations in which it is beneficial to use aliases, and demonstrate the use of aliases in MySQL queries. But first, what is an alias in the context of SQL? SQL aliases give database columns and tables temporary names, and these temporary names make the output of the database simpler to use, read, and understand. For example, Little Lemon can use aliases to shorten the names of tables and columns in their database. There are three common situations in which it's useful to consider an alias: to rename a table or column whose original name is too long or technical, to combine an output into one column instead of two when used with a concatenation function, and to create distinct table names when dealing with multiple tables. Bear in mind that the syntax for creating and using an alias changes depending on which of these issues you're attempting to resolve, so let's review an example of the syntax for each scenario, beginning with renaming columns. To rename a column, you use a SELECT statement that begins with the SELECT keyword, then type the original column name followed by the alias, with the two separated by the AS keyword.
The AS keyword creates the alias. You can also include other columns in the statement, each separated by a comma, then write the FROM keyword followed by the table name. If your table requires multiple aliases, write out each column name and use the AS keyword for every column that needs an alias. For example, in their client orders table, Little Lemon can use an alias to rename lengthy columns like client order information to just orders. Next, let's review the syntax for a concatenation function that combines an output into one column instead of two. The SELECT command retrieves the data, followed by the CONCAT function, which concatenates, or combines, the information extracted from the column names placed in parentheses; these names are separated by commas and a pair of double quotation marks, and the quotation marks shape the output by creating an empty space between the concatenated values. The AS keyword is then added, followed by the alias you want to assign to the new concatenated column, and the FROM keyword specifies the table SQL must extract the data from. Little Lemon can use a concatenation function to combine the values of the first and last name columns of their client details table into a new concatenated column called client names. Finally, let's explore the syntax for querying multiple tables. The first thing to note is that you can use a one-character alias to represent each table: if you're querying two different tables, you can use x for table 1 and y for table 2. The syntax begins with a SELECT command followed by the tables and columns to be queried, using dot notation such as x.column1 for table 1's column 1 or y.column2 for table 2's column 2. Next add the FROM keyword, then type the original name of each table alongside its alias, with the two separated by the AS keyword, and finally add a WHERE clause and conditions as required; for example, you might query prices in an online store database and return items that cost less than twelve dollars in table 1 and five dollars in table 2. Those are the three main situations in which MySQL aliases can be used, along with their related syntax. Now that you're familiar with the concept, let's see if you can help Little Lemon with their database. The restaurant has a table called food orders delivery status that keeps track of food orders, with two columns called date food order placed with supplier and date food order received from supplier. These column names are too long and complex, so they need to be simplified to make the database easier to work with; you can use aliases so that the column names are easier to read and understand when queried. Begin with a SELECT statement and target the order ID column, then rename the date food order placed with supplier column as date order placed and the date food order received from supplier column as date order received. Notice that double quotation marks are used for date order received because that alias contains a space; in other instances you can declare the alias without quotation marks. Finally, type the FROM keyword followed by the name of the table, then press Enter to execute the query. The output now shows the alias names instead of the original column names, which makes it much easier for Little Lemon to track food orders.
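A hedged sketch of that aliasing query, using assumed snake_case identifiers for the table and its columns (Little Lemon's actual names may differ; MySQL accepts quotes or backticks when an alias contains spaces):

```sql
SELECT
  order_id,
  date_food_order_placed_with_supplier   AS date_order_placed,
  date_food_order_received_from_supplier AS "date order received"  -- quoted because the alias contains spaces
FROM food_orders_delivery_status;
```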
However, you can make this output even more efficient: for instance, you could concatenate order ID and order status into one column instead of two. As you learned earlier, you can use a SQL alias with functions. Write the statement as follows: begin with a SELECT command and the CONCAT function, then place the columns you want to concatenate in a pair of parentheses, making sure to include the quotation marks that shape the output; next use the AS keyword to create the alias, which in this instance you can call order status, then use the FROM keyword to identify the table, and execute the query. The output shows the new order status column with the concatenated information. Finally, let's review how to work with multiple tables in the database. The restaurant has divided their menu into two tables called starters and main courses, and both tables show the names of the meals available to order and their respective costs. As part of a new promotional campaign, Little Lemon want to promote starters that cost seven dollars or less and main courses that cost fifteen dollars or less, so you need to query these tables and identify the meals that match these prices. In this instance you can use a one-character alias of s to represent starters and c to represent main courses. Add these aliases into a SELECT statement and use dot notation to request the name and cost of the meals, then use the FROM keyword to identify the tables and the AS keyword to create the aliases for each one: main courses as c and starters as s. Finally, add a WHERE clause and specify the condition, which returns all starters costing seven dollars or less and all main courses costing fifteen dollars or less, and press Enter to execute the query. SQL generates an output that shows all the related meals and costs in one table. All the issues with Little Lemon's database have now been solved using MySQL aliases; thanks to your assistance, their database is easier to work with and they've identified some great meals to include in their next promotional campaign. With the skills you've gained from these tasks, you should be able to demonstrate an understanding of the concept of an alias in a database, identify situations in which it is beneficial to use aliases, and demonstrate the use of aliases in MySQL queries. Great work.
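A hedged sketch of the two alias queries just described, with assumed identifiers throughout (table names, column names, and the price columns are illustrative, not the course's literal schema):

```sql
-- Combine two columns into one aliased column
SELECT CONCAT(order_id, " ", order_status) AS order_status
FROM food_orders_delivery_status;

-- One-character table aliases when querying two tables;
-- listing both tables pairs every starter with every main course,
-- and the WHERE clause then filters the pairs by price
SELECT s.item_name AS starter,     s.cost AS starter_cost,
       c.item_name AS main_course, c.cost AS main_course_cost
FROM starters AS s, main_courses AS c
WHERE s.cost <= 7 AND c.cost <= 15;
```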
Lucky Shrub gardening center need to gather information on their customers and the orders they've placed, but the records are held in three different tables. They can extract this information from their database by using the JOIN clause to join the required elements of these tables together. Over the next few minutes you'll discover how joins work, and by the end of this video you'll be able to demonstrate an understanding of the join concept in a database and describe the main types of joins in MySQL. Let's take a closer look at the problem Lucky Shrub have encountered: they need to determine which products were ordered and which customers placed the orders, but this information exists in three separate entities, or tables — customers, orders, and products. So how can they extract records from three different tables? First you need to understand the concept of joins. The SQL JOIN clause is used to query data based on a common column between two target tables; for example, the customers and orders tables both contain a customer ID column, and the product ID column is common to the orders and products tables. These common columns can be used to join the tables together and extract the required records. There are four types of join used to combine tables: an inner join, which selects records of data that have matching values in both tables; a left join, which selects all records from the left table along with the matching records from the right table; a right join, which selects all records from the right table along with the matching records from the left table; and a self join, in which a table is joined with itself to retrieve information that exists in the same table. Let's begin with the inner join. An inner join returns records that have matching values in the common column of both the left and right tables, a relationship that can be conceptualized as a Venn diagram, as can all the other joins; in terms of syntax, the left and right tables are identified as table 1 and table 2 respectively. Lucky Shrub need to identify the full names of all clients who placed orders with the business; to complete this query they need the clients table and the orders table, and they can create an inner join using the client ID column that exists in both. The output reveals the records of all clients who placed orders, because only records with matching client IDs are listed. The syntax of an inner join begins with a SELECT statement that queries the left table and the column with the matching values; the FROM keyword is added along with the name of the left table, then the INNER JOIN clause followed by the name of the right table, and finally the ON keyword identifies the common column that the tables share. Next, the left join. The left join returns all common records in a similar way to the inner join, but in addition it returns all available records of the common column from the left table, even if there isn't a match in the right table. Lucky Shrub can use the left join to extract data from the clients and orders tables using the client ID values; the join locates four matching records between the two tables and places them in the common area of the Venn diagram. The left join syntax begins with a SELECT statement that identifies the required columns from table 1, with the AS keyword used to create an alias for each column; the FROM keyword identifies the left table, the one to be queried, and once again the AS keyword creates an alias for this table; the LEFT JOIN clause then joins table 2 and assigns it an alias, and finally the ON keyword equates the matching columns between the two tables. Now the right join: it returns all records from the right table along with the matching records from the left table, with the right table as the main target. For example, Lucky Shrub can use the right join to extract records from the orders and products tables based on the product ID values, which lists all products from the products table joined with the matching order details from the left table. The right join syntax is very similar to the left join; the only difference is that the RIGHT JOIN clause is used. Finally, there is the self join, a special case in which a table must be joined with itself — one table is treated as two in order to extract specific information using a left, right, or inner join. In the case of Lucky Shrub, the business holds records of all staff members in a staff table.
The table contains records on sales floor employees and line managers, and Lucky Shrub can treat it as two tables to determine who is a line manager and who is a sales floor employee; a self join is written as a SELECT statement in which aliases are created so that the same table can be referenced a second time. You've encountered a lot of information in this video, particularly in terms of syntax, but don't worry if it doesn't all make sense yet — in the videos that follow you'll learn how to create each type of join in more detail. For now, you should be able to demonstrate an understanding of the join concept in a database and describe the main types of joins in MySQL. Well done. Lucky Shrub gardening center require information on orders recently made by their clients. This information is stored in two separate tables, the clients table and the orders table, but there must be a more efficient way to review it that doesn't involve working with two tables at the same time. Thankfully, Lucky Shrub can use the INNER JOIN clause to return records of data based on a common column with matching values in both tables. In this video you'll help them complete this task, and by the end you'll be able to apply the inner join concept in MySQL and use SQL aliases to create temporary column names. As you're probably aware by now, databases normally have more than one table; in fact, database normalization rules dictate that related data should be held in separate tables. Let's begin with a quick review of the two tables: the clients table has four columns — client ID, full name, contact number, and address — and the orders table has five columns — order ID, client ID, product ID, quantity, and cost. The first task is to identify the full names of all clients who made orders, which you can do using the INNER JOIN clause in a SQL SELECT statement. The statement begins with the SELECT command, followed by the full name column attached to the clients table and separated by a dot, which queries data from the full name column of the clients table; the FROM keyword then targets the clients table. Next, the INNER JOIN clause creates a new row of data for each matching record, in other words where the client ID in the clients table matches the client ID in the orders table, and the equals operator ensures that the matching condition must be met. Remember that it's important to specify the table name for each column when you're dealing with multiple tables in the same statement, especially when a column name appears in more than one of the queried tables — client ID, for example, exists in both the clients and orders tables. Press Enter to execute the query; the output result set lists the full names of all clients who have made orders.
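A hedged sketch of that inner join, assuming snake_case identifiers for the tables and columns (the actual names in the Lucky Shrub database may differ):

```sql
-- Full names of all clients who have placed at least one order
SELECT clients.full_name
FROM clients
INNER JOIN orders
  ON clients.client_id = orders.client_id;
```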
This example just extracts a list of names, but you can also query other information from both tables and display the columns with more user-friendly labels if required. For instance, you can take the client ID, full name, and contact number columns from the clients table and join them with the product ID, quantity, and cost columns from the orders table. Start with a SELECT command that selects the required columns from the clients table, use the AS keyword after each column to create an alias — in other words, a new name for each column — then do the same for the required columns of the orders table, so that each column is attached to its related table name and the alias technique provides a new name for each one. Execute the query; the result set is a table with all the required data for the four matching client IDs, cl1, cl2, cl3, and cl4, as shown in the output. In this video you explored how to work with the INNER JOIN clause in MySQL to query data from two tables, and you learned how to use an alias to create temporary column names with more readable labels. Lucky Shrub can now review the data they need in a single, more efficient result, and you should be able to apply the inner join concept in MySQL and use SQL aliases to create temporary column names. Great work. Lucky Shrub need to review data on orders made by their clients. This data exists in two separate tables, clients and orders, and because both tables share closely connected columns, Lucky Shrub can query data from both using the LEFT JOIN and RIGHT JOIN clauses in MySQL. In this video you'll help them use these clauses, and by the end you'll be able to demonstrate how to apply a left join and a right join in MySQL and utilize aliases to create temporary column and table names. Let's quickly review the two tables before creating the query: the clients table contains four columns — client ID, full name, contact number, and address — and the orders table contains five columns — order ID, client ID, product ID, quantity, and cost. The first step is to create a query for the client ID and full name columns from the clients table, which is the left table, and join them with the following columns from the orders table, the right table: order ID, quantity, and cost. You can use the LEFT JOIN clause to complete this task. Start with the SELECT command, followed by the client ID and full name columns attached to the clients table with dot notation; this retrieves the two columns from the clients table, and the data then joins the order ID, quantity, and cost columns from the orders table. As before, it's important to specify the table name for each column when dealing with multiple tables, especially when a column name such as client ID exists in both. You can also use the AS keyword to create suitable aliases for the column names displayed in the output, and to create aliases for the two tables — c for clients and o for orders — so that instead of repeatedly typing the full table names to qualify each column, you can just use c and o. In this statement, the LEFT JOIN clause creates a new row of data for each record from the left table, the clients table, even if there are no matching records in the orders table, the right table; for example, the clients with IDs cl5 and cl6 have yet to place any orders, which means NULL values are inserted for the related columns from the right table. Press Enter to execute the query; the output contains several NULL values for the clients with IDs cl5 and cl6, because they have not yet made any orders.
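A hedged sketch of the left join just built, with assumed snake_case identifiers; the right join described next uses the same shape with the RIGHT JOIN keyword instead:

```sql
-- All clients, plus order details where they exist; clients with no orders
-- (for example cl5 and cl6) appear with NULLs in the order columns
SELECT c.client_id, c.full_name, o.order_id, o.quantity, o.cost
FROM clients AS c
LEFT JOIN orders AS o
  ON c.client_id = o.client_id;

-- Same query with RIGHT JOIN: all orders, plus the matching client details
SELECT c.client_id, c.full_name, o.order_id, o.quantity, o.cost
FROM clients AS c
RIGHT JOIN orders AS o
  ON c.client_id = o.client_id;
```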
Next, let's create a similar query using the right join concept. You can use similar syntax to the previous query; just replace the LEFT keyword with RIGHT. In this statement, clients is the left table and orders is the right table, and the RIGHT JOIN clause extracts data from both tables based on the client ID values. Executing this query returns all the requested information from the orders table, the right table, joined with the matching information from the clients table, the left table, based on the common client ID column. Press Enter to run the query: the output shows that the right join has returned all records from the right, or orders, table where a client has made an order, along with the matching records extracted from the left, or clients, table based on the client ID values. No NULL values appear in the output, because every client who made an order already exists in the clients table. Lucky Shrub now have the order and client information they need, and you should be able to demonstrate how to apply a left join and a right join in MySQL and utilize aliases to create temporary column and table names. Good work. The Lucky Shrub database has a table called employees, which lists all staff in the business; some of these staff members are line managers and the other employees report to them. Lucky Shrub need to query the data in this table to determine which role everyone is assigned, and they can complete this task using the self join clause, a special join case that lets them create a join between rows of the same table so they can extract specific information — the table must be treated as two tables to perform the required join. Over the next few minutes you'll help Lucky Shrub with this query, and by the end of this video you'll be able to apply the self join concept in MySQL and use aliases to give the same table two different names. Let's begin by reviewing the employees table, which stores the required information on employees and their line managers in five columns: employee ID, full name, job title, county, and line manager ID. In this table, the primary key employee ID values are also used in the line manager ID column to show who manages each employee. Your main task is to list the full name of each line manager and the employees they manage; the full names of both sets of employees exist in the full name column. To complete this task, treat the employees table as two identical tables, create an inner join that matches each line manager ID against an employee ID, and extract the full name value as either line manager or employee — and remember that the line managers are also employees. Before writing the query, keep in mind that the self join effectively creates two tables from one, so you're dealing with two tables in your query, not just one. Begin with a SELECT statement that uses e1 with the AS keyword to declare an alias for the first employees table and e2 with the AS keyword to declare an alias for the second (the employees table is the same in both cases). The statement queries the full name column from the e1 table with the alias line manager for the left table, and the full name column from the e2 table with the alias employee for the right table; the join returns these columns from both copies of the table, but only where there's a match between the column values. In this instance the condition is e1.employee_id equal to e2.line_manager_id — in other words, the condition matches the employee ID against the line manager ID. Where it finds a true value, the full name from the left table is displayed in the line manager column and the full name from the right table is displayed as the employee. Press Enter to execute the query: the output links the line managers with the employees they manage, and a quick summary shows that the employees Sheamus and Greta report to the line manager Simon, Simon reports to himself, and all other staff report to Sheamus. Thanks to the self join clause, Lucky Shrub have now determined which employee is in which role, and you should be able to apply the self join concept in MySQL and use aliases to give the same table two different names. Good work.
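A hedged sketch of that self join, assuming snake_case identifiers for the employees table and its columns:

```sql
-- Treat the employees table as two tables, e1 (managers) and e2 (staff),
-- and match each employee's line_manager_id against a manager's employee_id
SELECT e1.full_name AS line_manager,
       e2.full_name AS employee
FROM employees AS e1
INNER JOIN employees AS e2
  ON e1.employee_id = e2.line_manager_id;
```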
Lucky Shrub are filing their end-of-year tax returns and must provide information on all employees they have hired over the last 12 months. There are several full-time employees in the business, and several part-time employees who were recently hired to help with the holiday season, but the records for the full-time and part-time employees are stored in separate tables. So how can Lucky Shrub combine these records into one result? They can use the MySQL UNION operator. Over the next few minutes you'll discover how it works, and by the end of this video you'll be able to demonstrate an understanding of the UNION operator and explain how it's used in MySQL. Let's begin with a definition: the UNION operator combines the result sets of multiple SELECT statements in the same query; for example, you can use it to join two SELECT statements so that their results are presented as one table. So how does it work? The syntax begins with a SELECT statement followed by the names of the columns to be queried and the FROM keyword targeting the table that holds the records; next you add the UNION operator, followed by a second SELECT statement that queries the required records from the second table. The UNION operator essentially creates a union between the two SELECT statements. There are a few best practices to observe when combining SELECT statements with UNION: every SELECT must have the same number of columns, the related columns must have similar data types, and the related columns must appear in the same order in every SELECT statement. But what about a value that exists in both tables, like a name or location, yet appears only once in the combined results? This happens because UNION returns only distinct values from the targeted tables; to list all values, including duplicates, add the ALL keyword after the UNION operator. Let's explore a working example. As you saw earlier, Lucky Shrub need to gather information on all employees hired over the last 12 months, but the data for their full-time and part-time employees is stored in separate tables, so let's help them combine the records from the two tables into one using the MySQL UNION operator. Both the full-time employees and part-time employees tables include the same four columns: employee ID, full name, contact number, and location. Lucky Shrub need to query the full names and locations of all employees. To combine the results from both tables, write two SELECT statements that target the full name and location columns — one against the full-time employees table, the other against the part-time employees table — and place a UNION operator between them. Before executing, check that both SELECT statements include the same number of columns, with the same data types, in the same order. Execute the query: the output places the results of both SELECT queries into one table with two columns, full name and location, holding the records for all of Lucky Shrub's part-time and full-time employees. However, Lucky Shrub have two employees named Julia Marr, one who works part-time and another who works full-time, and only one Julia Marr appears in the combined results — UNION returns only distinct values, so it has interpreted both instances as a duplicated value. Fortunately, you can generate an output that contains both employees: write the same SELECT statements again with UNION in between, but this time place the ALL keyword after the UNION operator. As you learned earlier, ALL ensures the output retains all results from both tables, even duplicates. Execute the query and the output now contains both instances of Julia Marr. Thanks to the UNION operator, Lucky Shrub have all the information they need, and having helped them out you should be able to demonstrate an understanding of the UNION operator and explain how it's used in MySQL. Well done.
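A hedged sketch of the two UNION queries just described, with assumed table and column identifiers:

```sql
-- Distinct names and locations across both tables
SELECT full_name, location FROM full_time_employees
UNION
SELECT full_name, location FROM part_time_employees;

-- Keep duplicates, so both employees named Julia Marr appear
SELECT full_name, location FROM full_time_employees
UNION ALL
SELECT full_name, location FROM part_time_employees;
```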
Lucky Shrub are reviewing recent customer orders in their database. They need to find a way to group records with similar values into one single record so that they can analyze the order data and produce summaries, and the MySQL GROUP BY clause and its related aggregate functions are a great way to complete this task. Over the next few minutes you'll explore these concepts and use them to help Lucky Shrub produce summaries of their orders, and by the end of this video you'll be able to group rows into subgroups using the MySQL GROUP BY clause and utilize GROUP BY with SQL aggregate functions. So what do database developers mean by the term GROUP BY? The GROUP BY clause is used in SQL syntax to group rows in a table, based on given columns, into summary rows, also known as subgroups. To get a better understanding, let's look at the syntax: it begins with a SELECT statement followed by the name of the required column, then the FROM clause targeting the table that holds that column, and finally the GROUP BY clause followed by a comma-separated list of the columns according to which the data must be grouped. If there's a WHERE clause in your SELECT statement, the GROUP BY clause must be placed after it, and make sure that the columns listed in the SELECT clause include the columns listed in the GROUP BY clause. Additionally, the GROUP BY clause is frequently used with aggregate functions.
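The general shape of such a statement might look like the following minimal sketch; the table and column names here are purely hypothetical placeholders.

```sql
-- Generic shape: the WHERE clause (if any) comes before GROUP BY,
-- and the grouped columns also appear in the SELECT list
SELECT column1, column2
FROM some_table
WHERE column3 > 0
GROUP BY column1, column2;
```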
An aggregate function can be used with the GROUP BY clause to perform one or more calculations and return a single value for each subgroup. You might be familiar with aggregate functions from previous videos, but if not, here's a quick recap of the main ones database developers use with GROUP BY: SUM adds the values of a given column and returns a single value, AVG determines the average of column values, MAX returns the maximum value of one or more given columns, MIN determines the minimum value of one or more given columns, and COUNT counts the number of times a given column value occurs. Let's review the syntax of a SELECT statement that uses GROUP BY with an aggregate function: first input a SELECT statement followed by a list of columns, then apply the aggregate function to any of these columns as required — for example, use the MAX function to calculate the maximum values in column 1, making sure to place the column in parentheses. Next include the FROM clause and the name of the table that holds the columns, and finally add the GROUP BY clause followed by the names of the columns by which the data should be grouped, ensuring those same columns are also present in the SELECT column list. Lucky Shrub can use this syntax and aggregate functions to determine the total number of orders received by each department in the business, so now it's time to use your knowledge to help them. Let's start with a quick review of the orders table, which contains five columns: order ID, department, order date, order quantity, and order total. There are multiple records with the same value in the department column — for example, five orders were placed with the lawn care department, so there are five records for lawn care, and there are more instances of multiple records for other departments like decking. The best approach is for Lucky Shrub to group all these records so that they have just one row for each group, or department, which makes it much easier to analyze the data and produce summaries. You can help them reduce the departments into five groups, or subgroups, using the GROUP BY clause: first write a SELECT statement followed by the department column name, next insert a FROM clause followed by the table name, orders, then add the GROUP BY clause and the name of the column, and finally run the statement. In the output, all records in the department column have been reduced to five groups — one row, or one single record, for each department in the business. Now that you have simplified the table, you can use aggregate functions to analyze the data. Lucky Shrub's report must show the number of orders placed with each department, which you can produce with the COUNT function. The syntax is almost the same as the previous query; the key difference is that after SELECT department you add the COUNT function with the column name in parentheses, which specifies which column holds the data so that COUNT can tally the order records for each department. Execute the query: the output returns the five departments alongside the total number of orders placed with each.
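A hedged sketch of those two queries, assuming snake_case identifiers for the orders table (the column passed to COUNT is also an assumption; any non-null column of the grouped records would give the same tally):

```sql
-- One row per department
SELECT department
FROM orders
GROUP BY department;

-- Number of orders placed with each department
SELECT department, COUNT(order_id) AS number_of_orders
FROM orders
GROUP BY department;
```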
Next, let's find out how much money each department made from these orders. You can use the same syntax as the previous query, but this time apply the SUM aggregate function to the order total column, then execute the query; the output returns the total of the selected numeric column, in other words it adds the values in the order total column for each department. Now let's determine the minimum order quantity for each department: once again use the same syntax, but with the MIN function targeting the order quantity column, and when you run the query the output returns the smallest value of that column for each department. Finally, Lucky Shrub also need the average order total for each department, so write the syntax one last time and use the AVG aggregate function to query the order total column; the output shows the average value of the order total column. Thanks to your help, Lucky Shrub now have a summary that shows all the relevant data from the orders table, grouped together as required, and having improved your skills you should be able to group rows into subgroups using the MySQL GROUP BY clause and use the clause with SQL aggregate functions. Well done.
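To consolidate the three aggregate queries just described, here is a hedged sketch with the same assumed identifiers as before:

```sql
-- Total revenue per department
SELECT department, SUM(order_total)
FROM orders
GROUP BY department;

-- Smallest order quantity per department
SELECT department, MIN(order_quantity)
FROM orders
GROUP BY department;

-- Average order total per department
SELECT department, AVG(order_total)
FROM orders
GROUP BY department;
```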
At this stage of the course you may be familiar with the GROUP BY clause, having helped Lucky Shrub group data from their customer orders in an earlier video. Lucky Shrub now need to filter this grouped data against a list of conditions to determine the best performing departments in the business, and they can use the MySQL HAVING clause to specify the filter conditions that will generate this data. Over the next few minutes you'll explore the HAVING clause so that you can help them, and by the end of this video you'll be able to identify the MySQL HAVING clause, explain its purpose, and demonstrate its use to specify a filter condition for groups of rows. So what is the HAVING clause, and what does it add to your grouping skill set? The HAVING clause is used in a SQL statement to specify a filter condition for the grouped data that the GROUP BY clause generates. As you learned in previous videos, the WHERE clause specifies one or more filter conditions in a SELECT statement and must be placed before the GROUP BY clause, but WHERE can't be used to filter the grouped data that GROUP BY generates. So how do you filter that data? You add the HAVING clause to your SQL statement, after the GROUP BY clause. HAVING specifies the filter condition to be applied to your grouped data and evaluates that condition against each group returned by GROUP BY; if the result is true, the row is included in the result set. Don't forget that if you omit the GROUP BY clause, the HAVING clause behaves just like the WHERE clause. As a basic example, Lucky Shrub can use HAVING with aggregate functions to determine which of their departments received orders of a certain dollar amount. Now it's time to use your new HAVING knowledge to assist them: Lucky Shrub need to filter their customer order data to check which departments met their monthly sales target of $2,275. Let's begin with a review of the orders table, which holds the required data in five columns: order ID, department, order date, order quantity, and order total. The first task is to identify which departments have order totals greater than $2,275, so you're only concerned with the department and order total columns. You can determine each department's order total with a SELECT statement and a GROUP BY clause: first type the SELECT clause followed by the department column, then include the SUM aggregate function with the order total column in parentheses, next add the FROM keyword followed by the table name, orders, and finally include the GROUP BY clause targeting the department column. Run the statement to retrieve the total sales figures for each of the five departments. Your next step is to filter this data to retrieve the results with an order total greater than $2,275: use the same statement as before, but add the HAVING clause after the GROUP BY clause, followed by a second instance of the SUM aggregate function targeting the order total column, then the greater-than operator and the figure 2275, which instructs the statement to filter for results greater than $2,275. The statement is now ready to execute, but you can make the syntax more efficient by using an alias, a concept you should be familiar with from previous lessons: create an alias called total for the aggregate function in the SELECT clause and refer to it in the HAVING clause, which makes the condition concise and easier to read. Now execute the query: the output reveals the three departments in Lucky Shrub that met this month's sales target.
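A hedged sketch of that filter, first with the aggregate repeated in the HAVING clause and then using the alias (MySQL allows a SELECT alias to be referenced in HAVING); identifiers are assumed as before:

```sql
SELECT department, SUM(order_total)
FROM orders
GROUP BY department
HAVING SUM(order_total) > 2275;

-- The same query, made more readable with an alias
SELECT department, SUM(order_total) AS total
FROM orders
GROUP BY department
HAVING total > 2275;
```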
Thanks to your assistance, Lucky Shrub have identified their best performing departments. You should now be able to identify the MySQL HAVING clause, explain its purpose, and demonstrate its use to specify a filter condition for groups of rows — you're making great progress with grouping data. Congratulations, you've reached the end of the first module in this course. Let's take a moment to recap some of the key skills you've gained. In the first lesson you received an introduction to the course, in which you learned why MySQL is a key language for database engineers and how Meta uses MySQL, and you enhanced your knowledge by reviewing some key additional resources. In lesson two you explored the topic of filtering data: you learned about the concepts of data filtering and logical operators, discovered how to use logical operators to filter your data sets, completed a reading that explored some real-world examples of data filtering, and reviewed additional resources on these topics. You then progressed to lesson three, in which you learned the skills required to join tables: you learned about the concept of aliases and how they can be used in MySQL, how to identify different types of joins and utilize them in your database tables, and you demonstrated these new skills in a lab environment; next you learned about the UNION operator and demonstrated its use in your databases, and finally you enhanced your knowledge of these topics with additional reading materials. In the fourth and final lesson you gained skills in grouping data: you learned how to identify and make use of the GROUP BY clause, how to use the HAVING clause, completed a reading on the MySQL ANY and ALL operators, and were challenged to demonstrate these new data grouping skills in a lab environment. You should now be able to filter data, join tables, and group data in a database using MySQL. Great work — I look forward to guiding you through the next module, in which you'll discover how to update databases and work with views. Lucky Shrub gardening center are hiring some new employees, and once they've been hired the company needs to add their contact details to the database; some of these contact details must also replace those of employees who've recently left. The REPLACE command is the best method for Lucky Shrub to make these changes. In this video you'll learn how the REPLACE command can be used to help them, and once you have, you'll know how to explain how the REPLACE command works in a MySQL database and demonstrate an understanding of it by inserting or updating data. Let's begin with an overview: the REPLACE command is used to insert or update data in a table, but unlike the standard INSERT and UPDATE commands, REPLACE first checks for a duplicate key, and if one is found it deletes the existing record and replaces it with the new one. Now for the syntax, starting with a quick recap of the INSERT command, whose similarity to REPLACE should help you understand it better. With the INSERT INTO command, which you know from the previous course, you instruct SQL to insert new values into designated columns of your chosen table. The REPLACE command works in much the same way: you type out the table name, column names, and values just like before, and the only difference is that the statement begins with the REPLACE command. You can also use REPLACE with the SET keyword; the SET clause assigns a value to the selected column without using a WHERE clause to specify the condition — in other words, it locates the required record of data and then replaces its values with the new set, and if you don't specify a value for a column in the SET clause, REPLACE uses a default value or sets the value to NULL. Because REPLACE can be confused with similar commands, let's visualize how it works: REPLACE first checks whether the new record of data already exists in the table by checking the primary or unique key of existing records. If there's no matching key, REPLACE works like a normal INSERT statement and adds the new data; if a matching key is found, the command deletes the existing record and replaces it with the new one. Now that you're familiar with how REPLACE works, take a few moments to see if you can help Lucky Shrub insert and replace new and existing employee records in their database. Their employee contact records are stored in the employee contact info table, which consists of three columns: the employee ID, which is the primary key, the contact number, and the email address.
You need to insert a new record of data for the new employee Sheamus Hogan, with an ID equal to one, a contact number, and an email address. You can add this data to the table using the standard INSERT command: type INSERT INTO followed by the table and column names, then add the values to be inserted into each column — ID, contact number, and email address — and execute the query; the new employee record is added to the table. You can do the same with the REPLACE command for the employee Thomas Erickson: begin with REPLACE INTO followed by the table and column names to be updated, then use the VALUES keyword and list the values to be added — Thomas's ID, contact number, and email address — and execute the query. You can then use a SELECT statement to check the table's records: the output shows that Thomas's contact details are now in the table, as are those of Sheamus. However, Sheamus has decided to leave Lucky Shrub to work for a rival gardening center, so you now need to replace his details with those of a new employee, Maria Carter. You can try updating the table using the INSERT command: type an INSERT INTO statement just like before and assign Maria an ID of one in your values, alongside her contact number and email address, so that her record replaces Sheamus's in the table, then execute the query. But it looks like SQL can't execute this query: instead of adding Maria's details to the table, it returns an error message warning of a duplicate entry. This is because you're trying to assign Maria an ID of one, but this ID is already assigned to Sheamus as a primary key value, and the primary key must always hold a unique value in each row of the table — otherwise MySQL returns an error message. So how can you replace Sheamus's record with Maria's? Type the statement again, but this time use the REPLACE command instead of INSERT, then execute the query; MySQL accepts the statement with no errors. Let's query the table to make sure it contains Maria's record: type a SELECT statement with the FROM keyword and the table name, then execute it. The output returns the contact details for Maria and Thomas — MySQL has replaced Sheamus's record just as you instructed. There's one more task to complete: Maria has recently changed her contact number, so the number also needs to be updated in the table. You can use the REPLACE command with the SET clause to update the record of data: type the REPLACE command and the table name, then use the SET clause with Maria's employee ID of one followed by the new value, her contact number — but make sure that you set values for all columns, otherwise they'll be set to NULL or default values. Execute the query, then use a SELECT statement to check the table and confirm that Maria's details were updated. Thanks to your efforts, Lucky Shrub's employee contact info is now up to date, and you should be able to explain how the REPLACE command works and demonstrate it by inserting or updating data. Great work.
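A hedged sketch consolidating the REPLACE statements just described; the table name, column identifiers, and the contact and email values are placeholders, not the course's literal data:

```sql
-- Behaves like INSERT when the primary key value is new;
-- if the key already exists, the old row is deleted and this one takes its place
REPLACE INTO employee_contact_info (employee_id, contact_number, email_address)
VALUES (1, 1234567890, 'maria.carter@example.com');

-- REPLACE ... SET locates the row by its key and writes a full new record;
-- columns left out of the SET list fall back to NULL or their default values
REPLACE INTO employee_contact_info
SET employee_id    = 1,
    contact_number = 1234500000,
    email_address  = 'maria.carter@example.com';
```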
Little Lemon restaurant have built two new tables in their database that allow customers to create accounts and register bookings. To make sure these tables work as required, the restaurant has set up constraints, which ensure that the tables only accept valid data. Over the next few minutes you'll learn about the concept of constraints in MySQL by exploring how Little Lemon have made use of them, and by the end of this lesson you'll be able to identify the main types of constraints, explain how they function, and explain the MySQL ON DELETE CASCADE and ON UPDATE CASCADE options. Let's begin with an overview of what database engineers mean by the term constraints. When creating a table, you might decide that a column must hold a unique value in each row of the table, like a phone number; you can enforce this rule using the UNIQUE constraint, which prevents any violation of the rule whenever data is inserted or updated in your database. There are three main types of constraints in a MySQL database that can be used to enforce such rules: key constraints, which apply rules to key types; domain constraints, which govern the values that can be stored for a specific column; and referential integrity constraints, which establish rules for referential keys. Let's explore each of these constraint types and discover how Little Lemon use them in their database. As you learned in the previous course, all tables include different types of keys, like primary keys and foreign keys, and you can use constraints to establish rules for these keys. For example, the primary key constraint specifies that one or more column values must always be unique and cannot accept a NULL value. Little Lemon's database contains a table called customers that records key data on customers using the primary key constraint; the table has three columns called customer ID, full name, and phone number, and customer ID is defined as the primary key, which identifies the table's unique records. Thanks to the primary key constraint, this column's values must always be unique and can never accept a NULL value — in other words, every row must hold a customer ID, and all customer IDs must be unique. Next, let's look at domain constraints. As you learned earlier, these are special rules defined for the values that can be stored for a certain column. Little Lemon's database contains a bookings table that records data on customer bookings, but the restaurant can only facilitate a maximum of eight guests per booking, so they place the SQL CHECK constraint on the number of guests column. This limits the value range that can be placed in the column, which means the table rejects any numeric values greater than 8.
Finally, let's explore referential integrity constraints. You learned earlier that this type of constraint establishes rules for referential keys, but how exactly does this work? In a referential integrity constraint there are two types of tables: a referencing table, which contains the foreign key, and a referenced table, which holds the primary key that the foreign key points to. Any value in the foreign key column of the referencing table must also exist in the referenced table; otherwise a connection can't be established between the two tables. To understand this better, consider the related tables in Little Lemon's database in the form of an entity relationship diagram: the customers table holds data on customers, and the bookings table records information on customers' bookings with the restaurant. Each booking in the bookings table must relate to a specific customer in the customers table, otherwise the restaurant can't identify who made the booking, and this also means that each customer must already be registered in the customers table before they can make a booking in the bookings table. The customer ID column in the bookings table is defined as the foreign key; this is the attribute that joins the two tables together and establishes the dependency between them. This means that if a row of data is altered or deleted in the customers table, the action directly affects the related rows of data in the bookings table — deleting a row from the customers table violates the referential integrity rule, and MySQL returns an error message warning that the action impacts the bookings table. So how can you make the required changes without violating the referential integrity constraint? You can use the ON DELETE CASCADE option, which automatically deletes the related rows of data from the bookings table, and if you want to update a primary key value in the customers table, you can use the ON UPDATE CASCADE option to automatically update the related rows in the bookings table. You'll discover more about these options in a later video. You should now be able to identify the main types of constraints, explain how they function, and explain the MySQL ON DELETE CASCADE and ON UPDATE CASCADE options. Well done. Little Lemon restaurant need to build two tables in their database that let customers create accounts and register bookings, and they need to apply constraints to the columns in these tables to ensure data consistency and integrity. Over the next few minutes you'll help them create these tables and apply the following common constraints: NOT NULL, UNIQUE, CHECK, and FOREIGN KEY, and by the end of this video you'll be able to demonstrate how to apply these common constraints in a MySQL database table. The first table to create is customers, which records customer details and requires the following constraints: the primary key constraint on the customer ID column, the NOT NULL constraint on the full name column, and a UNIQUE constraint on the phone number column to ensure that each customer has a unique number. So let's get started. Begin with the CREATE TABLE command and call the table customers, then add a pair of parentheses; within the parentheses, define the customer ID column as NOT NULL and as the PRIMARY KEY, which ensures that all IDs are unique in each row of the table and that the column does not accept a null or empty value.
Next, add a full name column with the NOT NULL constraint and assign it the VARCHAR type with a character limit of 100, then declare the phone number column as NOT NULL UNIQUE, which ensures it only accepts a unique number for each customer, and execute the statement. Now view the output by writing and executing the following statement: SHOW COLUMNS FROM customers. This shows the customers table with all relevant constraints: the columns are defined as NOT NULL and two keys have been declared — the customer ID column is the primary key and the phone number column only accepts unique values. The next task is to apply referential integrity, which ensures that each customer can make a booking in the restaurant and that each booking is assigned to a specific customer; in other words, any customer ID that exists in the bookings table must also exist in the customers table, otherwise it won't be possible to identify who made the booking. When creating the bookings table, it's important to focus on the referential integrity constraint and on the CHECK constraint that limits the number of guests to a maximum of eight. Begin with a CREATE TABLE command followed by bookings and parentheses, and within the parentheses create the following columns: booking ID, booking date, table number, number of guests, and customer ID. All columns are defined as NOT NULL to ensure that each one must accept a value, and all are assigned the integer type except booking date, which is assigned the DATE type. The booking ID column is defined as the primary key, and the number of guests column is defined as NOT NULL with a CHECK constraint that uses a less-than-or-equal operator so that it can only accept a maximum of 8 guests. Next, define the customer ID column with the FOREIGN KEY constraint, then use the REFERENCES keyword so the foreign key references the customer ID column in the customers table, and add the ON DELETE CASCADE and ON UPDATE CASCADE options to automatically delete or update the related rows of data in the bookings table; be aware that these actions depend on delete and update operations taking place in the customers table. Execute the statement, then display the table structure with the following syntax: SHOW COLUMNS FROM bookings. The result set shows that all columns are assigned the required constraints and values, and that the customer ID column's key is MUL, which means it's not a unique key and multiple rows can have the same key value — that makes sense, because each customer might make multiple bookings at the same or at different times. This code also joins the two tables and establishes the dependency between them, so if you change or delete a customer ID in the customers table, the related records in the bookings table are updated or deleted as well. You should now be able to apply different types of constraints in a MySQL database to maintain data integrity and consistency. Great work.
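A hedged sketch of the two CREATE TABLE statements just described; the snake_case identifiers and the integer data types for the ID and phone columns are assumptions, and CHECK constraints are enforced in MySQL 8.0.16 and later:

```sql
CREATE TABLE customers (
  customer_id  INT          NOT NULL PRIMARY KEY,
  full_name    VARCHAR(100) NOT NULL,
  phone_number INT          NOT NULL UNIQUE
);

CREATE TABLE bookings (
  booking_id       INT  NOT NULL PRIMARY KEY,
  booking_date     DATE NOT NULL,
  table_number     INT  NOT NULL,
  number_of_guests INT  NOT NULL CHECK (number_of_guests <= 8),
  customer_id      INT  NOT NULL,
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
    ON DELETE CASCADE
    ON UPDATE CASCADE
);

-- Inspect the structure and keys of the new tables
SHOW COLUMNS FROM customers;
SHOW COLUMNS FROM bookings;
```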
Lucky Shrub gardening center has bought new heavy machinery, but it can only be operated by qualified employees. The business has a database table called Machinery that records the contact info of all qualified employees. However, the table has issues with its constraints, and it's also missing some key information. Lucky Shrub can fix these issues by making alterations to the table using the alter statement. Over the next few minutes you'll learn about the alter statement and then use what you've learned to help Lucky Shrub alter their database. By the end of this video you'll be able to add, delete, and modify columns and constraints in an existing table. Let's begin with an overview of the alter statement and its syntax. You might often encounter tables in a database that are missing columns or constraints, or whose existing columns and constraints need to be modified. You can use the alter table statement to make these changes. The alter table statement is often used alongside different SQL commands; here's a quick overview of some common ones. The modify command is used to target specific columns and instruct SQL to make changes to them, the add command can be used to add a new column to a table, and the drop command can be used to drop or delete a column from the table. So how are these commands used to make alterations to a table? The alter table statement begins with the alter table clauses followed by the name of the table to be altered. Next, insert a modify command followed by the name of the column to be altered and the changes to be made; for example, you can change a column's data type and add a not null constraint. Then repeat the modify command for any other columns you want to alter. You can also alter a table by adding another column: just use the add column command followed by the name of the new column. To remove a column from a table, use the drop command followed by the name of the column you want to drop or delete.
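As a quick illustration of that syntax, here is a minimal sketch using a hypothetical staff table; the table and column names are placeholders, not objects from the Lucky Shrub database.

ALTER TABLE staff
  MODIFY start_date DATE NOT NULL,   -- change a column's data type and add a constraint
  ADD COLUMN email VARCHAR(100),     -- add a new column
  DROP COLUMN fax_number;            -- remove a column that's no longer needed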
Now that you're familiar with the alter table statement, see if you can help Lucky Shrub make the required changes to their table. The tasks that Lucky Shrub need to complete are as follows: set the employee ID column as the primary key, change the column constraints, and add a new column to the table. Let's get started. Lucky Shrub's Machinery table includes four columns: employee ID, full name, phone number, and county. The table is missing a primary key. Fortunately, the employee ID column is the perfect candidate because all its values are unique. To set this column as the primary key you can write an alter table statement. Add the alter table clauses followed by the table name, then write the modify command and the employee ID column name. Next, set the data type as varchar with a character limit of 10, then set a not null constraint to ensure that the column always contains data. Finally, add the primary key keyword to the column. The employee ID column is now the table's primary key. It looks like each column in the table is also set to accept null values, which means the table can contain empty fields or rows; this is poor practice in a database. So, to change all columns to not null, you can write another alter table statement. In fact, you can use the same statement as before and just add a new line for each column. For the full name and county columns, write the following syntax: add a modify command, set the varchar data type to 100, and add a not null constraint. For the phone number column, write the same syntax but with integer and unique values; this means the column now accepts unique numeric values only, which avoids any duplicates. To view the new table structure, write the following statement: show columns from machinery. This query's output shows that the employee ID is now set as the primary key, the phone number is a unique value, and all columns are set as not null. Now your final task is to add a new column to the table. Lucky Shrub's machinery can only be operated by employees aged 18 and over, so the company needs to identify each employee's age and determine who is old enough to operate the machinery. There's currently no age column in the table, so you'll need to create it and add a constraint to ensure every new employee added to the table is at least 18 years old. You can write the statement as follows: alter table followed by the Machinery table name, then the add column command. Next, call the new column age and assign it an integer data type. Finally, use the check constraint to limit the values in this column to 18 or more, then click enter to execute the query. To view the table's new structure, write show columns from machinery. The output now displays the Machinery table with a new age column. Thanks to your help, all the required changes have now been made to Lucky Shrub's Machinery table. You should now be able to use the alter table clause to add, delete, and modify columns and constraints in an existing table. Great work. Lucky Shrub are planning an overhaul of their database. In preparation, they want to create copies of their data to keep it safe during the rebuild, and they can complete this task using the copy table process. Over the next few minutes you'll learn about the process for copying a table and then help Lucky Shrub copy tables in their database. By the time you complete this video you'll have learned how to copy data from an existing table to a new table within the same database, copy a table to a new location while ensuring it retains its constraints, and copy data from an existing table to a new table in a different database. These tasks are carried out using the create table syntax. However, before you explore this syntax, let's take a moment to review the process for copying tables; it's important that you're familiar with the process before you begin. You first need to identify the database and the table you want to copy the data from. Next, determine the columns you want to copy, either all columns or just some of them. Then use the create table statement to build a new table with a relevant table name, and finally use the select command to structure the new table by specifying the columns you want to copy data from. Now that you're familiar with the process steps, let's review the create table syntax. The copy table SQL statement begins with the create table command followed by the name of your new table. Next, write the select command, then identify the columns to be copied; you can copy one, several, or all columns. Finally, use the from keyword followed by the name of the existing table you want to copy.
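Here's a minimal sketch of that copy syntax, assuming an existing clients table like the one used in the demonstration that follows; the column list is illustrative only.

-- Copy every column and row into a new table in the same database
CREATE TABLE clients_backup
SELECT * FROM clients;

-- Or copy just the columns you need
CREATE TABLE clients_contacts
SELECT full_name, contact_number
FROM clients;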
But what about copying a table between two different databases? Once again, begin with a create table command; however, in this instance you must use dot notation to identify the names of the new database and table. Then use the select command to select the existing table's columns, and finally use the from clause followed by another instance of dot notation that identifies the names of the existing database and table to be copied. Lucky Shrub are now ready to begin copying tables in their database. They want to carry out the process as follows: first they need to copy the clients table to a new table called clients test in the same database, they then need to copy a few select columns to another table, next they need to make sure that all constraints from the original table are copied over to the new one, and finally they want to copy a table from one database to another. Use your new knowledge of the copying tables process to help them out. First, let's review the clients table in the Lucky Shrub database by typing select asterisk from clients, then click enter to execute the query. This generates the clients table on screen. The table contains four columns: client ID, full name, contact number, and location. For the first part of the task, Lucky Shrub need to copy the clients table to a new table called clients test in the same database. You can perform this task with a create table query made up of two parts. In the first part, use a basic create table clause to create the new clients test table. In the second part, type the select command with the asterisk as shorthand for all columns, then type the existing table name, which is clients. Finally, click enter to execute the query. This query copies all columns and their data from the clients table to the new clients test table. To check that the query was successful, you can type the following statement: select asterisk from clients test, then click enter to execute. This generates the clients test table, and the table contains a copy of all the data as required. Next, Lucky Shrub need you to copy partial data only: the full name and contact number columns from the clients table to another table. Begin with a create table statement followed by the name of the new table, which is clients test 2.
Then use a select command, but in this instance specify just the full name and contact number columns. Type the from keyword followed by the name of the existing table, which is clients. Finally, use a where clause in the select statement to specify a condition; in this case, copy the data only for those clients who live in Pinal County. Click enter to execute the query. The query's output shows the clients test 2 table, and the table contains a copy of the data from the clients table for all clients from Pinal County. The next task is to make sure that all constraints from the original table are copied over to the new one. It's important to remember that copying data using the methods you've encountered so far doesn't copy the key constraints. You can check the constraints on the original table by typing and executing the statement show columns from clients. The query generates the clients table structure, which shows all columns with the key constraints set for the client ID and contact number columns. Now let's check for these constraints on the clients test table by typing and executing the following statement: show columns from clients test. This statement shows the clients test table, and the table is missing the primary and unique keys defined in the original table. So how can you copy these keys? You can use the following statement: create table clients test 3 like clients. The like keyword creates an exact copy of the existing table structure. Press enter to execute the statement, then type and execute the following SQL statement to display the new table structure: show columns from clients test 3. The output shows an exact copy of the initial clients table structure, and all the key constraints have been copied as expected. Your final task is to copy the clients table from the Lucky Shrub database to the new test database. Begin with a create table statement, then specify the new database and table names as testdb dot clients test. Type select asterisk to instruct SQL to copy all data, and finally add the from keyword followed by the existing database and table names, which are lucky shrub dot clients. Click enter to execute the query. Now you just need to check that the query was successful by moving into the test database: type use testdb, then show tables to reveal all tables in the test database. This statement reveals all the tables created in the test database, including the clients test table you just copied over from the Lucky Shrub database, and the table contains all the data from the original one. Lucky Shrub now have all the required copies of their tables. You should now be able to copy data from an existing table to a new table within the same database, copy a table to a new location while ensuring it retains its constraints, and copy data from an existing table to a new table in a different database. Great work. Little Lemon restaurant need to extract financial info from their database to complete their accounts. They can carry out this task using a subquery. Over the next few minutes you'll explore the concept of a subquery and learn how to recognize a subquery and understand its syntax, identify scenarios in which a subquery can be used, and explain how subqueries are used to retrieve data. So, to begin, let's answer the question: what is a subquery? As the name states, a subquery is a query within another query; in other words, it's an inner query placed within an outer query. The inner query is viewed as the child query and the outer query as the parent query. But what does a query
within a query look like? Well, the best way to understand a subquery is through its syntax. As you just learned, a subquery is a query within a query: an inner or child query within an outer or parent query. The inner query, or subquery, executes first, and its results are then passed to the outer or parent query. You can also build multiple subqueries in MySQL. The outer query is written like any normal query; it contains select, from, and where clauses. Likewise, the subquery is written as a standard query; however, the subquery must always be placed within a pair of parentheses. When executed, a subquery can return any of the following results: a single value, a single row, a single column, or multiple rows of one or more columns. A key advantage of a subquery is that you can compare its result against other values using a comparison operator. You should be familiar with comparison operators from previous lesson items in this specialization; if not, here's a quick recap. Examples of commonly used comparison operators include equal to, less than, and greater than; there's also less than or equal to, greater than or equal to, and not equal to. Let's look at the syntax for subqueries and comparison operators: a subquery can be placed before or after a comparison operator in the where clause of your parent query. Now that you're familiar with the basics of subqueries, here's a demonstration of how they're used. Little Lemon restaurant are reviewing their accounts and need employee salary data from their database. This data is held in the employees table, which contains four columns: employee ID, employee name, role, and annual salary. Little Lemon must use this table to identify which employees earn a salary higher than that of the assistant chef. You can use a subquery to complete this task, and the query can be built in two parts: the outer or main query must extract details of all employees whose annual salary is greater than a specified value, and the subquery must identify the annual salary of the assistant chef. When executed, the subquery provides a subset of data from the employees table, and this subset is then used as an input for the outer query. Begin by writing the outer query as follows: a select command followed by an asterisk, then a from clause which targets the employees table. The next part of the outer query must filter the employees table data based on the annual salary, so add a where clause followed by the annual salary column, then add the filter condition: the annual salary column followed by a greater than operator. The greater than operator must target a specific value, but how do you determine what this value is? You can use a subquery. Write a subquery within your main query as follows: add parentheses after your greater than operator; within the parentheses write select, the annual salary column, and from, the employees table; then write a where clause followed by the role column; finally, add an equals operator followed by the assistant chef value. Now that you've written both queries, press enter to execute. The subquery executes first and extracts the annual salary of the assistant chef; this value becomes the input for the outer query's where clause. Next, the outer query is executed. Its where clause returns the records of all employees earning an annual salary greater than that of the assistant chef; in other words, the outer query returns the values greater than the value retrieved by the subquery. The subquery's result shows that the assistant chef earns forty five thousand dollars a year, and the outer query shows that there are three employees who earn more than the assistant chef: the manager, assistant manager, and head chef. And that's an example of how a subquery is used in a database. You should now be able to recognize a subquery, understand its syntax, and identify scenarios in which a subquery can be used. Well done.
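Here's a hedged sketch of the query walked through in that demonstration; the table and column identifiers (employees, annual_salary, role) are assumptions based on the narration.

-- Outer query: employees paid more than the assistant chef
SELECT *
FROM employees
WHERE annual_salary > (
  -- Inner query: runs first and returns the assistant chef's salary
  SELECT annual_salary
  FROM employees
  WHERE role = 'Assistant Chef'
);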
Little Lemon restaurant need to perform complex queries in their database, and standard subqueries might not be enough for this task, so they'll need to use multiple row subqueries with complex comparison operators. Over the next few minutes you'll explore subqueries and complex comparison operators and learn how to explain how subqueries interact with these operators and demonstrate the use of subqueries in a complex data retrieval scenario. As you might already know, a key advantage of a subquery is that you can compare its result against other values using an operator. However, there are more complex operators that can be used with multiple row subqueries: the any operator returns data for any values that meet the specified condition, all returns data only when the condition holds for all values, and the some operator returns data for one or more matching values. Let's look at how to write multiple row subqueries using the all, any, and some operators. These comparison operators let you perform a comparison between a single value and a range of other values; they work with subqueries that return multiple records or target multiple values within a table. Subqueries can also be used with the exists and not exists operators. The exists operator tests for the existence of rows in the result set returned by the subquery; it returns true if the subquery returns one or more records. On the other hand, the not exists operator checks for the non-existence or absence of results from the subquery; not exists returns true when the subquery does not return any rows. Let's review the syntax for the exists and not exists operators. The syntax is very similar to a standard subquery; the key difference is that the exists operator is placed after the where clause to test for the existence of the results specified in the subquery, or you can use the not exists operator to check for the absence of results from the subquery. Let's look at a demonstration of how subqueries are used with these operators. Little Lemon restaurant need to identify all employees earning an annual salary that's less than or equal to the annual salary earned by all employees in the following roles: manager, assistant manager, head chef, and head waiter. The data required to complete this query is in the employees table, which has four columns: employee ID, employee name, role, and annual salary. You can extract the required data from this table using two queries: an outer query that identifies all employees earning an annual salary less than or equal to the specified values, and a subquery that extracts the annual salaries earned by employees in the roles specified earlier. Let's begin with the outer query. It starts with a select command and an asterisk, then a from clause that targets the employees table. Next, write a where clause followed by the annual salary column name, and finally write a less than or equal to operator. Now you must write the subquery. Within parentheses, write a select command to select the annual salary column, then a from clause to target the employees table, and next a where clause followed by a condition that extracts data from the role column. Finally, in parentheses, write the required roles: manager, assistant manager, head chef, and head waiter. These queries must return a result that lists all employees earning an annual salary that's less than or equal to the annual salary earned by all employees in the roles specified, so to ensure that you get the desired result, place the all operator after the less than or equal to comparison operator but before the subquery. Then execute the query to return the output.
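A minimal sketch of that query might look like the following; the identifiers and the IN list are assumptions drawn from the narration, and swapping ALL for ANY gives the variation demonstrated next.

SELECT *
FROM employees
WHERE annual_salary <= ALL (
  SELECT annual_salary
  FROM employees
  WHERE role IN ('Manager', 'Assistant Manager', 'Head Chef', 'Head Waiter')
);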
The subquery executes first and identifies the salaries of the manager, assistant manager, head chef, and head waiter roles. These salaries are the values that the outer query uses: seventy thousand, sixty five thousand, fifty thousand, and forty thousand. The outer query then returns the employees who earn an annual salary less than or equal to all of these values. The final output shows that the employees with IDs of 5 and 6 earn an annual salary less than or equal to every one of the other roles. On the other hand, the any operator compares against the results of the subquery to determine whether the outer query should return records that satisfy the condition for any of the values returned by the subquery. Little Lemon's next task is to identify employees earning an annual salary that's greater than or equal to the annual salary earned by any employee in the four roles specified earlier. You can use the same query as before, but remember that this time you're checking for values that are greater than or equal to those in the subquery. So, change the comparison operator in the where condition of the outer query to a greater than or equal to operator. Now, just before the subquery, replace the all operator with the any operator. Finally, press enter to execute the query. The output shows that there are five employees who earn a salary greater than or equal to at least one of the other roles. For their final query, Little Lemon need to determine if their head chef and head waiter are assigned to a booking. They can do this using the exists or not exists operators. The query involves two actions: in the first, the outer query extracts details of employees, and in the second, the subquery determines if the head chef or head waiter have been assigned to a booking. The required data is held in the bookings table. This table has six columns: booking ID, table number, first name of guest, last name of guest, a column for the booking slot or time, and a column that shows the ID of the employee assigned to the booking. Begin by writing the outer query as follows: select asterisk from employees, then add a where clause. The subquery must determine if there are any employees in the role of head chef or head waiter assigned to a booking, so add the exists operator after the where clause, then write the subquery in parentheses as follows: a select command and an asterisk, a from clause targeting the bookings table, a where clause, and a condition that returns results for the required employees if they're assigned to a booking. Press enter to execute the query and generate the output. The exists operator has checked for the existence of the specified results in the subquery and has found that these results exist, therefore the operator's result is true. The outcome is that the outer query returns the details of the two employees who are assigned to these three bookings.
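As a rough sketch of that exists query, something like the following would work; the bookings column that stores the assigned employee (here called employee_id_assigned) and the correlation condition are assumptions, since the video doesn't spell out the exact condition used.

SELECT *
FROM employees e
WHERE EXISTS (
  -- True for an employee when at least one booking is assigned to them
  SELECT *
  FROM bookings b
  WHERE b.employee_id_assigned = e.employee_id
    AND e.role IN ('Head Chef', 'Head Waiter')
);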
Now let's replace exists with the not exists operator to see what results are returned. This time the not exists operator checks for the non-existence of results from the subquery, so the where clause returns the employees that do not appear in the results obtained by the subquery. The output lists four employees; in other words, this is the data for employees who don't meet the subquery's criteria and aren't identified in its results. You should now be able to explain how subqueries interact with comparison operators and demonstrate the use of subqueries in a complex data retrieval scenario. Well done. Lucky Shrub have had particularly good sales so far this year. They now need to identify their top three best selling products to make sure they have enough quantity in stock for the next few months. There's a lot of data to parse through in their database, so they've decided the easiest way to identify the best selling products is with the use of a virtual table, or view. In this video you'll explore views and then use what you've learned to help Lucky Shrub, and by the end of this video you'll be able to explain the concept of views in a database and demonstrate how to create, rename, and drop views in a MySQL database. Let's begin by developing an understanding of what database engineers mean by the term views. Views are virtual tables created from one or multiple tables, depending on the requirements. The view presents a table interface that lets database users access and manipulate the data within the underlying tables using MySQL. So why do database engineers use views? Let's look at some common use cases. Views can be used to create a subset of a table of data; for example, a table might have seven columns but you only need data from three, so you could create a subset from these three columns. Views can also be used to combine data from multiple tables: you might need to query two columns from one table and four from another, and you can use a view to combine both sets of columns into one virtual table. Now that you understand what views are, let's review the syntax. The syntax begins with the create command followed by the view keyword and the name of the view or virtual table. The as keyword is then used to define the view's functionality. Next, use the select command to specify the columns the view must be built from. You can specify these columns using dot notation, making sure to include both the table and column name.
For example, table one dot column one selects the first column in the first table. Then use the from keyword to specify the tables that the data must be extracted from. Finally, you can use the where clause and a condition to set data ordering and filtering rules. That's how you can create a view by extracting data from one table. However, creating a view based on multiple tables requires a bit more effort, so let's find out more. When creating a view from multiple tables, much of the syntax remains the same. The key difference is that after the select command you must list the columns that you require from both tables using dot notation. You then need to create an inner join after the from keyword, in which you join the two tables together, and then use the on keyword to determine the matching columns used to create the join. Let's take a closer look at the use of dot notation. Dot notation is used to link columns with tables. This is particularly important if you're dealing with multiple tables, because multiple tables might give rise to a conflict in names; for example, two tables could use the same name for a specific column. To avoid this, you can establish a link between each column and its respective table by placing a dot in between them. However, dot notation is optional if your query is only dealing with one table. The view syntax presents a clear five-step process for creating a virtual table or view: create the virtual table using the create view syntax, list the columns to be included from the original table, specify the original table from which data must be extracted, set the conditions, and finally set the data ordering and filtering rules. Now that you've been introduced to what a view is and been shown how to create one, it's time to see if you can assist Lucky Shrub. As you discovered earlier, Lucky Shrub need to identify their top three best-selling products with the use of a virtual table or view, to make sure they have enough quantity in stock for the next few months. Let's use your new knowledge of views to help them out. You can create a view for Lucky Shrub using the data in the orders and products tables in their database. Let's take a moment to familiarize ourselves with these tables before using them to create the view. The orders table has five columns that hold the order ID, client ID, product ID, quantity, and cost, while the products table has three columns that hold the product ID, item name, and price. Lucky Shrub want to identify their top three best selling products, so to create the view you only need data from the item name column in the products table and the quantity and total cost columns in the orders table. As you learned earlier, the key steps for creating the view lie in the syntax, so write a create command and the view keyword, then write the name of the view, which you can call top three products. Include the as keyword to define the view's functionality, then use the select command and dot notation to target the required columns for your view. Next, use the from keyword to identify the tables. However, the view is created from two separate tables, so you'll need to join these tables together using an inner join based on their matching product ID value. Finally, use order by to list the products by cost in descending order, with only the top three products appearing in the result. Execute the query to generate a new virtual table called top three products with the three required columns: item, quantity, and cost.
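A hedged sketch of that view definition is shown below; the identifiers and the use of LIMIT to keep only the top three rows are assumptions based on the narration.

CREATE VIEW top_three_products AS
SELECT p.item_name, o.quantity, o.cost
FROM products p
INNER JOIN orders o ON p.product_id = o.product_id  -- join on the matching product ID
ORDER BY o.cost DESC
LIMIT 3;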
You can now query this virtual table just like any other normal table using the following basic SQL statement: select asterisk from top three products. The table prints the top three best-selling products along with their name, quantity, and cost. Why don't you try renaming the table to something shorter, like top products? You can rename a virtual table using the MySQL rename command. To rename the table, write rename table top three products to top products. This syntax is used to rename all types of tables in MySQL: you specify the view's current name after the rename table clause, then you specify the view's new name after the to keyword. Finally, click enter to execute the query. The table has now been renamed top products. What if you no longer require a virtual table? You can just drop it using the SQL drop command: drop view top products. Click enter to execute the query. The view has now been removed, and there's no impact on the original tables it was created from. Thanks to the view, Lucky Shrub now know what their top three best selling products are, and they can make sure that they have enough quantity in stock for the next sales period. You should now be able to explain the concept of views in a database and demonstrate how to create, rename, and drop views in a MySQL database. Well done. Congratulations, you've reached the end of the second module in this course. Let's take a moment to recap some of the key skills you've gained in this module's lessons. In the first lesson you learned how to insert and update data, and should now be able to explain the concept of the replace statement, outline how the replace statement is used to insert or update data in a database table, and demonstrate the replace statement following your completion of the projects in the lab environment. In lesson two you learned how to work with values and constraints. Now that you've completed this lesson, you're able to identify the main types of constraints, explain how constraints work in a database, outline the MySQL on delete cascade and on update cascade options, and demonstrate your ability to utilize values and constraints, as proven in the lab environment. You then moved on to the third lesson, in which you learned how to change the structure of a table. Having completed this lesson, you're now able to add, delete, and modify columns and constraints in an existing database table and copy data within and between tables and databases using the copy table syntax. You also demonstrated your ability to alter tables in the labs, and you reviewed the additional resources to learn more about these concepts. In the fourth lesson you explored the concept of subqueries. Now that you've completed this lesson, you're able to recognize a subquery and understand its syntax, identify scenarios in which a subquery can be used, and explain how subqueries can be used to retrieve data. You also demonstrated your ability to work with subqueries in a lab environment. Finally, in lesson five, you learned about virtual tables or views. Now that you've reached the end of this lesson, you're able to explain the concept of views in a database, demonstrate how to create, rename, and drop views, and identify the advantages of using views in MySQL, and you completed readings in which you gained additional knowledge on the topic of views. Having completed this module, you should now be able to update data, work with values and constraints, change the structure of a table, and utilize subqueries and virtual tables. Great work. I look forward to guiding you through the next module, in which
you'll discover how to work with functions and MySQL stored procedures. The jewelry store Magenta and Gallo, also known as M&G, are reviewing client orders in their database. They must determine the average amount of money that each client has spent with the business, and M&G can use numeric functions to extract this information. In this video you'll explore numeric functions and learn how to identify common MySQL numeric functions and explain how these functions are used to process and manipulate data in a MySQL database. At this stage of the course you've encountered some basic functions, so here's a quick reminder of what database engineers mean by the term functions in the context of MySQL. As you've learned in earlier lessons, a function is a piece of code that performs an operation and returns a result. Some functions accept parameters or arguments while others do not, and functions are very useful for manipulating data in a database table. Broadly speaking, MySQL functions can be grouped into five categories: numeric functions, string functions, date functions, comparison functions, and control flow functions. You'll review each of these categories in more detail over the course of this lesson. The focus of this video is MySQL numeric functions, which can be divided into two groups: aggregate functions, which operate on a set of values, and math functions, which perform basic mathematical tasks on data. You should already be familiar with aggregate functions, having used them previously in the course with select statements to calculate aggregated values, so let's just recap them briefly. Commonly used aggregate functions include sum, average, and max; there's also the minimum aggregate function and count. Now that you've recapped aggregate functions, let's look at some common math functions. A number can be rounded to a specific decimal place using the round function, and the mod function can be used to return the remainder of one number divided by another. These functions are a great way for M&G to perform additional tasks while also determining the average dollar amount that each client has spent with the business. But how can you and M&G make use of these functions in a MySQL database? You can build them into your SQL select statements, so let's review the syntax. The round syntax begins with a select command followed by the name of the column to be queried. You then call the round function followed by a pair of parentheses; within these parentheses, write the required arguments. The first argument can be a column name or any numeric value, and the second argument must be the number of decimal places. Finally, write the from keyword followed by the required table name. The mod syntax is very similar: just call the mod function instead of round, and within the parentheses identify the column or value and tell MySQL what number to divide the value by. Finally, identify the table that holds the data. When working with the mod function, bear in mind that the first argument can be a table column or any numeric value, while the second argument must be the value by which the first will be divided. For example, M&G can use the round syntax and the average aggregate function to determine the average dollar amount each client spent, rounded to two decimal places.
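To make that concrete, here's a minimal sketch against an assumed client_orders table with client_id and cost columns, and an mg_orders table with order quantities; these identifiers are illustrative, not the exact ones used in the demonstration.

-- Average spend per client, rounded to two decimal places
SELECT client_id, ROUND(AVG(cost), 2) AS average_cost
FROM client_orders
GROUP BY client_id;

-- Remainder when each order quantity is divided by two (0 means an even quantity)
SELECT order_id, item_id, MOD(quantity, 2) AS remainder
FROM mg_orders;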
Let's take a few moments to explore M&G's database and find out more about how they make use of numeric functions. As you learned earlier, M&G are reviewing client orders and must determine the average dollar amount that each client has spent with the business. The company has a table called client orders that shows the average amount each client has spent. The table has two columns: client ID, which shows the ID of each client, and average cost, which displays the average amount each client has spent. However, even though this table shows the average amount, M&G need to round these values to two decimal places. You can help them using the round function. Write select followed by the column names, then call the round function on the average cost column: in parentheses, put the average cost column as the first argument, then pass the number two as the second argument to round the value to two decimal places. Next, use the from keyword to target the client orders table, and finally group by client ID. Execute the query to create the output, which displays all values reduced to two decimal places. In the next task, M&G are restocking their inventory and need to identify which items they've placed an even number of orders for. The data they need is in the mg orders table. The table contains several columns, but the ones you need for this task are order ID, item ID, and quantity. To determine whether a given quantity is odd or even, you can divide the quantity by two; the remainder is your answer. This can be done using the mod function. First, write select followed by the column names, then call the mod function, passing the quantity column as the first argument and the number two as the second argument. Execute the query. The query returns a value of 0 where there is no remainder when the quantity is divided by two; otherwise it returns the remainder. The output shows that an even number of orders have been placed for items 1, 3, 5, and 6. M&G have now completed their database tasks using common MySQL functions, and you should now be able to identify frequently used MySQL numeric functions and explain how these functions contribute to data processing and manipulation in a MySQL database. Well done. Magenta and Gallo, or M&G, are performing an inventory review. They require a list of item names and their available quantities, and they can extract this data from their database using common MySQL string functions. Over the next few minutes you'll learn how string functions can be used to perform tasks like this, and by the end of this video you'll be able to identify common MySQL string functions and explain how these functions are used to process and manipulate data in a MySQL database. Let's take a moment to find out what database engineers mean by the term string functions. String functions are used to manipulate string values, for example adding strings together or extracting a segment of a string. Here are a few examples of commonly used string functions: the concatenation function is used to add several strings together, the substring function extracts a segment of a string from a parent string, uppercase converts a string to uppercase, and lowercase converts a string to lowercase. Next, let's explore the syntax of these functions to find out how they're used in a MySQL database. A very simple example of a concatenation function begins with a select command which calls the concatenation function. You then type a pair of parentheses in which you include the string values to be concatenated; ensure each is contained within quotes and they are separated by commas. Then include the from keyword and the name of the table that contains the data. You can also use the where clause to specify a condition. A more complex example of the concatenation function might involve extracting string values from two
separate tables. For example, the data that M&G require is in two separate tables: items and mg orders. M&G can pass their arguments in the select clause, identify the two tables they require in the from clause, and specify the condition in the where clause so that SQL filters the required data from the combination of the two tables. This example might seem complicated, but don't worry, you'll explore it in more detail in a few moments when you help M&G query their database. Let's continue to review string function syntax with substrings. The syntax of a substring function is similar, but there are three arguments contained within the parentheses. The first of these is the string itself, the next is the start index, the point in the string at which the substring must begin, and the third is the length, which refers to the length of the string portion that must be extracted. Next, let's review the syntax for the uppercase and lowercase string functions. M&G often convert the values in one column of a table to uppercase and the values in a second column to lowercase, and here's how they perform this task. An uppercase string function begins with a select statement and an uppercase function; in parentheses, write the name of the column whose values must be converted to uppercase, and finally instruct SQL which table to target. A lowercase string function is very similar; the only difference is that the parentheses must contain the name of the column whose values are to be converted to lowercase. Next, let's look at how M&G make use of string functions in a MySQL database. As you learned earlier, M&G need a list of item names and their available quantities, presented in the format item name, hyphen, order quantity. The item details are in the items table and the order details are in the mg orders table. The items table records information on items in M&G's inventory within the following columns: item ID, name, and cost. The mg orders table records data on deliveries within the following columns: order ID, item ID, quantity, cost, order date, delivery date, and order status. You can extract the required data from these tables using the concatenation string function. Begin with a select command, then call the concat function and write a pair of parentheses. Within the parentheses, pass the arguments name and quantity; these are the columns for your output, and they come from the items table and mg orders table respectively. Then add a hyphen between the arguments to combine them; use a pair of single quotes for the hyphen and ensure all arguments are separated by commas. Use the from keyword to specify the two tables, and finally use a where clause to specify a condition that filters the required data from the combination of the two tables. Then execute the query. MySQL returns a table that shows each item in the inventory alongside its quantity. The next task is to retrieve all string values in the order status column of the mg orders table in both upper and lowercase. You can target the string values from the order status column using the uppercase and lowercase string functions. In your select query, call the uppercase function and pass in the column name order status, then target the mg orders table with the from keyword. Execute the query to retrieve all values in uppercase. To retrieve all values in lowercase, just type the same query again but this time call the lowercase function. Execute the query once more to retrieve all values in lowercase.
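Here's a hedged sketch of those two string queries; the column names and the join condition in the where clause are assumptions based on the narration. MySQL also accepts UCASE and LCASE as synonyms for UPPER and LOWER.

-- Item name and its ordered quantity, combined into one string
SELECT CONCAT(items.name, ' - ', mg_orders.quantity)
FROM items, mg_orders
WHERE items.item_id = mg_orders.item_id;

-- Order status values in upper and lower case
SELECT UPPER(order_status) FROM mg_orders;
SELECT LOWER(order_status) FROM mg_orders;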
As part of their next task, M&G are reviewing an order from a client, and they need to extract the client's first name from the clients table. The clients table records key information on clients and stores it in the following columns: the client ID column, in which the required client is assigned an ID of one, and the client name, address, and contact number columns. You can retrieve the information M&G need by using the substring function to extract the relevant part of the string from the table's client name column. First, write a select statement and call the substring function followed by a pair of parentheses. Then pass in the client name column as the first argument to the substring function. Pass in the start index as the second argument, which is 1, the position of the first character of the string, the letter K. Then pass in the length of the string portion you need to extract as the third argument; the client's first name is Kishan, which is six letters long, so 6 is the third argument. Then identify the table to target with the from keyword, and finally add the where clause with the client's ID as the condition. Run the query to extract the client's first name. You've now helped M&G complete their database tasks using string functions, and you should now be able to identify common MySQL string functions and explain how they're used to process and manipulate data. Great work. M&G are reviewing some recent orders delivered to the store. They must determine how many days passed between the date these items were ordered and the date they were delivered, and they can complete this task using date functions. In this video you'll explore date functions and learn how to identify common MySQL date functions and explain how these functions are used to process and manipulate data in a MySQL database. First, let's find out what date functions are. Date functions are used in a MySQL database to extract time and date values in a range of different formats, and M&G often use them to identify key time and date details for customer orders. Commonly used date functions that M&G take advantage of include current date, which returns the date in year month day format, and current time, which returns the time in hours minutes seconds format. There's also date format, which is used to format a date according to a given format, as long as that format is valid in MySQL, and date difference, which identifies the number of days between two date values. Perhaps M&G can use the date difference function to find out how many days have passed between order and delivery, but before you find out how, let's take a few moments to explore the syntax for these functions. In most instances, date functions are written as select statements. To extract today's date in year month day format, just type select, the current date function, and a pair of parentheses. For the current time in hours minutes seconds format, type select, the current time function, and a pair of parentheses. However, the syntax becomes a bit more involved with date format and date difference. To change the date format, type the date format function and a pair of parentheses; within the parentheses, type today's date in standard SQL year month day format enclosed in quotation marks, then input a valid MySQL format specifier in a pair of single quotes. You can refer to the further reading section at the end of this lesson for a list of valid format specifiers. To determine the number of days between two date values, type the select command and the date difference function followed by parentheses; within the parentheses, type the first and second date values in year month day format, ensuring both are enclosed in quotation marks. Then run the query to create the output.
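As a rough sketch of that syntax, the statements below use MySQL's CURDATE, CURTIME, DATE_FORMAT, and DATEDIFF functions; the literal dates are only placeholder values.

SELECT CURDATE();                               -- today's date, e.g. 2023-01-15
SELECT CURTIME();                               -- the current time in hours:minutes:seconds
SELECT DATE_FORMAT('2023-01-15', '%M %d %Y');   -- reformat a date (here: month name, day, year)
SELECT DATEDIFF('2023-01-20', '2023-01-15');    -- number of days between two dates (returns 5)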
Now that you've reviewed the syntax for date functions, let's see if you can use this knowledge to help M&G. M&G need to complete a series of time and date tasks using date functions. The first of these tasks is to extract the current date and time. To retrieve this data, just write a select command followed by the current date function, and a second select command followed by the current time function. Execute these queries to return the current date and time. Now M&G need you to format a date by displaying the month name of a given date. You can do this by using select and calling the date format function, then passing in the order date as the first argument and typing the required format specifier to get the full month name. Finally, identify the required table and execute the query to create the output. For the final task, M&G must determine the number of days between the delivery date and order date for their most recent orders. As you discovered earlier, the date difference function can be used to complete this task. The delivery data is contained in the mg orders table, which records delivery data within the following columns: order ID, item ID, quantity, cost, order date, delivery date, and order status. To complete this task you need to focus on the values from the delivery date and order date columns. First, write a select query and call the date difference function. Pass the values from the delivery date column as the first argument, then pass the values from the order date column as the second argument. Use the from clause to target the mg orders table, and finally use the where clause to filter out any records that have a null delivery date. Once executed, the query reveals the number of days between the delivery and order dates for the most recent orders. M&G now know how many days passed between the order date and delivery date for their most recent orders, and you should now be capable of using common MySQL date functions to process and manipulate data. Well done. M&G are approaching the end of their business year and need to extract sales revenue data for each item in their inventory. They can extract this data using comparison functions. Over the next few minutes you'll explore the concept of comparison functions, and at the end of this video you'll be able to identify common MySQL comparison functions and explain how these functions are used to process and manipulate data in a MySQL database. So what do database engineers mean by the term comparison functions? MySQL comparison functions allow you to compare values within a database; for example, they can be used to determine the highest, lowest, and other values. A benefit of comparison functions is that they can be used with a wide range of values, including numeric, string, and character values. Here are a few examples of MySQL comparison functions: the greatest function is used to find the highest value, least determines the lowest value, and is null is used as an alternative to the equals operator to test whether a value is null. To demonstrate the syntax, let's identify the highest and lowest values from a table that contains numerical values only. The syntax begins with a select command followed by the name of the required column; often this is the column that holds the table's primary key or identifying attribute. Next, type the greatest function followed by parentheses containing the names of the columns you need to compare, then use the as keyword with a column alias of highest to ensure SQL returns the required values in a new column under this name. Next, use the least function in the same manner. Finally, identify the table
to be queried. For example, M&G can use the greatest and least syntax to extract sales revenue data: they can target the last four business quarters and return the highest and lowest values from each. You'll find out more about how M&G can do this in a few moments; for now, let's look at the syntax for the final comparison function, is null. Is null is often used with a select command followed by the name of the required column, then a from keyword to identify the required table. The is null function can also be used with a where clause; the clause calls the is null function and identifies the column it must check. Now that you're familiar with the syntax of comparison functions, let's take a few moments to find out how they're used in the M&G database. As you learned earlier, M&G require data on their sales revenue for each item in their inventory for the last four business quarters. The sales revenue data is contained in the sales revenue table, which has five columns: one column called item ID, which identifies each item in the inventory, and an individual column for each quarter. M&G first need to identify the highest and lowest revenue each item brought in over the past four quarters. You can help them by using the greatest and least comparison functions, just like the syntax example from earlier. Start with a select command and list item ID as the first column. Then, to identify the items that brought in the highest revenue, call the greatest function and pass the four business quarter columns as arguments, then create the alias highest. Write a similar line of syntax for the least function and assign it the alias of lowest. Finally, use the from keyword to target the sales revenue table. Once executed, the query's output presents the highest and lowest sales revenue values for each item over the last four business quarters; for example, the item with the ID of one was worth 138 thousand dollars to M&G at its peak and sixty thousand during its lowest sales period. Next, M&G need to determine which of their most recent orders have yet to be delivered. The delivery data is held in the mg orders table, which contains seven columns: order ID, item ID, quantity, cost, order date, delivery date, and order status. The delivery date column is your primary concern here: all orders yet to be delivered have a null value in this column, so you can use the is null function on this column with the where clause to filter these orders. Begin by writing the select statement as usual, followed by an asterisk, then use the from keyword to target the table. Finally, write a where clause and call the is null function, passing in the delivery date column.
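Here's a hedged sketch of both comparison queries; the quarter column names (q1 through q4) are assumptions, since the video doesn't name them.

-- Highest and lowest quarterly revenue for each item
SELECT item_id,
       GREATEST(q1, q2, q3, q4) AS highest,
       LEAST(q1, q2, q3, q4) AS lowest
FROM sales_revenue;

-- Orders that haven't been delivered yet
SELECT *
FROM mg_orders
WHERE ISNULL(delivery_date);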
Once executed, the query returns three records; the is null condition is true for every record that has a null value in the delivery date column. M&G now have the required sales data, and you should now be able to use comparison functions in a MySQL database. Great work. M&G need to determine which items in their inventory are turning a profit and which items are making a loss. They can use a control flow function to carry out this task. Over the next few minutes you'll explore control flow functions, and by the end of this video you'll be able to identify common MySQL control flow functions and explain how these functions are used to process and manipulate data in a MySQL database. So what are control flow functions in a MySQL database? Control flow functions let you evaluate conditions and determine the execution path, or flow, of a query. The most common control flow function used in a MySQL database is the case function, which runs through a set of conditions contained within a case block and returns a value when the first condition is met. Let's take a moment to explore how this function operates. The case function is held within a case block and operates in a similar manner to an if then else statement. Once it finds a condition that's true, it returns the corresponding result. If no conditions are true, it returns the value specified in the else clause, and if there's no else clause and no conditions are true, it returns null. So what does the full syntax of a case statement look like? First, write a select keyword followed by the name of one or more columns that contain the required values. This is followed by the case function, which denotes the start of the case block. Next is the list of conditions, which are written using the when and then clauses. The case block is then closed with the use of an end clause. You can also add an alias for the expression, depending on the needs of your code. Finally, identify the table to be queried. For example, M&G can use the case function to identify which items in their inventory are loss making and which ones have turned a profit. They can extract the sales data for each item from the sales revenue table: any items with a value less than or equal to twenty five thousand dollars are considered loss making, and any items with a higher value are viewed as profitable. MySQL displays the term profit or loss next to each item's ID depending on the result. Let's take a few minutes to explore M&G's database and find out how they extract sales revenue data using a control flow function. As you've just discovered, M&G need to check which items in their inventory have turned a profit this year. Any items that have accrued more than twenty five thousand dollars in sales are considered profitable; all other items are making a loss and should be removed from sale. The data they need is contained in the sales revenue table, which has five columns: one column called item ID, which identifies each item in the inventory, and an individual column for each of the four business quarters. By checking whether the value of the lowest quarter is less than or equal to 25,000, M&G can determine which items made a profit and which items made a loss. The easiest way to perform this task is by using the case control flow function. First, write the select statement and target the item ID column; this is the column you need to display results against. Next, write the case keyword to begin your case block. In the case block, write when and give the condition with the least function, then list the quarterly sales columns in parentheses. The next set of steps involves the operator and conditions: write a less than or equal to operator, and write then to specify what information you intend to display if the condition is true; in this instance, you need to display the word loss. Then write the else keyword and specify what information must be displayed if the condition is false; in this case it's profit. End the case block with the end keyword. Now you need to create the alias and identify the table to be targeted. Use the as keyword to create the profit loss alias; this is the name of the column that the results of your case expression are placed in. Finally, write the from keyword and target the sales revenue table.
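A hedged sketch of that case query is shown below; as before, the quarter column names are assumed.

SELECT item_id,
       CASE
         WHEN LEAST(q1, q2, q3, q4) <= 25000 THEN 'Loss'   -- lowest quarter at or below the threshold
         ELSE 'Profit'
       END AS profit_loss
FROM sales_revenue;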
Execute the query to extract the results. The results show that items 1, 4, 5, and 6 generated a profit, while items two and three made a loss. M&G have now completed their database tasks with your help, and you should now be able to use control flow functions when writing SQL select statements. Nicely done. Lucky Shrub often perform the same queries on their database every day, and each time they perform these queries they have to rewrite the same SQL code again. There must be an easier way, right? Well, with MySQL, Lucky Shrub can use stored procedures to save a specific query as a block of code that they can then recall whenever required. Over the next few minutes you'll discover how this works by exploring the concept of stored procedures, and by the end of this video you'll be able to demonstrate an understanding of stored procedures in a MySQL database and create and drop simple stored procedures in MySQL. So let's begin with an overview of what database engineers mean by the term stored procedures. A stored procedure is a block of code, or pre-prepared query, that can be stored in your database. You can then invoke, or call, the stored procedure using the call command. There are a lot of benefits to be gained from using stored procedures: your code is more consistent, your code is reusable because you no longer need to write the same SQL statements repeatedly, and your code is easier to use and maintain. Next, let's explore the syntax to get a better understanding of how a stored procedure works. To create a basic stored procedure, write the create procedure command followed by the name of the procedure and a pair of parentheses which hold the list of parameters. The parentheses are required even if your stored procedure contains no parameters. Then write the rest of your procedure logic as required; for example, if your procedure must select all data from a table, write a select command with an asterisk and the from keyword followed by the table name. When writing a stored procedure with one or more parameters, the syntax is much the same; the key difference is that you must include all required parameters within the parentheses, then write the rest of your procedure logic. Once you've created the stored procedure, the next step is to invoke it. To invoke a procedure, use the call command followed by the procedure name, making sure to include the parentheses. But what if you no longer require a stored procedure? How do you remove it from your database? To delete a stored procedure, use the drop procedure command followed by the procedure name; in this instance, you don't need to include any parentheses.
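Here's a minimal sketch of that create, call, and drop cycle, using the get products details procedure from the upcoming demonstration; the products table name is taken from the narration.

-- Create a procedure with no parameters that returns every product
CREATE PROCEDURE GetProductsDetails()
SELECT * FROM products;

-- Invoke it whenever the data is needed
CALL GetProductsDetails();

-- Remove it when it's no longer required
DROP PROCEDURE GetProductsDetails;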
Lucky Shrub often perform the same queries on their database every day, and each time they do, they have to rewrite the same SQL code. There must be an easier way, right? Well, with MySQL, Lucky Shrub can use stored procedures to save a specific query as a block of code that they can recall whenever required. Over the next few minutes you'll discover how this works by exploring the concept of stored procedures, and by the end of this video you'll be able to demonstrate an understanding of stored procedures in a MySQL database and create and drop simple stored procedures in MySQL. Let's begin with an overview of what database engineers mean by the term stored procedure. A stored procedure is a block of code, or pre-prepared query, that can be stored in your database. You can then invoke, or call, the stored procedure using the CALL command. There are a lot of benefits to be gained from using stored procedures: your code is more consistent, it's reusable so you no longer need to write the same SQL statements repeatedly, and it's easier to use and maintain. Next, let's explore the syntax to get a better understanding of how a stored procedure works. To create a basic stored procedure, write the CREATE PROCEDURE command followed by the name of the procedure and a pair of parentheses, which hold the list of parameters. The parentheses are required even if your stored procedure contains no parameters. Then write the rest of your procedure logic as required. For example, if your procedure must select all data from a table, write a SELECT command with an asterisk and the FROM keyword followed by the table name. When writing a stored procedure with one or more parameters, the syntax is much the same; the key difference is that you must include all required parameters within the parentheses before writing the rest of your procedure logic. Once you've created the stored procedure, the next step is to invoke it. To invoke a procedure, use the CALL command followed by the procedure name, and make sure to include the parentheses. But what if you no longer require a stored procedure? How do you remove it from your database? To delete a stored procedure, use the DROP PROCEDURE command followed by the procedure name; in this instance, you don't need to include any parentheses. As you learned earlier, Lucky Shrub make heavy use of the same queries in their database. For example, they often need to query the products table to find items for customers or check what's in stock in their store. However, they need to rewrite the same query each time they interact with the products table, which is a time-consuming process. Why don't you use your new knowledge of stored procedures to help them create a reusable query? Lucky Shrub need a stored procedure that extracts all data from their products table. The table holds data on all products in the store and is divided into three columns: the product ID column, the item column, which lists all products by name, and the price column, which lists all prices rounded to two decimal places. To create a stored procedure that returns all data from the table, begin with the CREATE PROCEDURE command followed by the procedure name. Since the goal is to return details of all products, you can call it GetProductsDetails. Then add parentheses; this stored procedure doesn't require any parameters, so you can leave them empty. Next, write a SELECT command and an asterisk to instruct MySQL to extract all data. Finally, write the FROM keyword and target the products table, then press Enter to run the query. The new procedure, GetProductsDetails, has been created. Lucky Shrub can now call this query to extract data from the table instead of rewriting a new SELECT statement each time. To demonstrate the stored procedure, write the CALL command followed by GetProductsDetails and parentheses, then press Enter to run the procedure and extract a set of results that includes all product data. Lucky Shrub also frequently write queries to identify the lowest-priced products in their database so they can add these items to sales or promotions. You can create a stored procedure with one or more parameters for this query. Begin with the CREATE PROCEDURE command, then write the procedure name; you can call it GetLowestPriceProducts. In the parentheses, declare a parameter called lowest price with an integer data type; this parameter passes the price cut-off into the procedure. Next, write a SELECT command and an asterisk, then write the FROM clause and target the products table. After the FROM clause, add a condition that compares the price column to the lowest price parameter using a less-than-or-equal-to operator, then press Enter to execute the query. In this statement you've declared a parameter with an integer data type, which means an integer value must be passed into the stored procedure. Don't forget that because this query includes a parameter, each time it's called you need to specify the value the stored procedure must process. As an example, let's return the data of products priced at fifty dollars or less by typing the CALL command and the GetLowestPriceProducts stored procedure name, placing the value 50 in parentheses. Press Enter to execute the query. The query passes the value of 50 to the stored procedure through the parameter, and the output appears on screen with a list of all products priced at fifty dollars or less. Finally, Lucky Shrub have decided to remove the GetProductsDetails stored procedure from their database. To drop it, type the DROP PROCEDURE command and the name of the procedure, GetProductsDetails, then press Enter to execute the query. The stored procedure has now been dropped from the database. Lucky Shrub can now perform queries in their database much more efficiently thanks to stored procedures, and you should now be able to demonstrate an understanding of stored procedures in a MySQL database and create and drop simple stored procedures in MySQL. Well done!
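Here is a minimal sketch of both procedures from that walkthrough, plus the CALL and DROP statements. The products table definition and its column names are assumptions added only to make the sketch self-contained.

```sql
-- Hypothetical products table, included only so the sketch runs on its own.
CREATE TABLE IF NOT EXISTS products (
    product_id INT PRIMARY KEY,
    item       VARCHAR(100),
    price      DECIMAL(10, 2)
);

-- Stored procedure with no parameters: returns every row in the table.
CREATE PROCEDURE GetProductsDetails()
SELECT * FROM products;

CALL GetProductsDetails();

-- Stored procedure with one parameter: returns products at or below a price.
CREATE PROCEDURE GetLowestPriceProducts(lowest_price INT)
SELECT * FROM products WHERE price <= lowest_price;

-- Pass the cut-off value each time the procedure is called.
CALL GetLowestPriceProducts(50);

-- Remove a procedure that is no longer required.
DROP PROCEDURE GetProductsDetails;
```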
Congratulations, you've reached the end of the third module in this course. Let's take a moment to recap some of the key skills you've gained in this module's lessons. In the first lesson you learned about functions in MySQL, and you should now be able to explain the concept of MySQL functions, differentiate between common types of MySQL functions, and make use of basic functions in a MySQL database. You demonstrated your knowledge and skills with MySQL functions in the labs and quizzes. The MySQL functions you encountered included numeric functions, which are used to aggregate data or perform mathematical operations; string functions, which are deployed on string values in a database; and date functions, which return time and date information. You also explored comparison functions and discovered how they can be used to compare values in a database, and finally you learned how control flow functions are used to evaluate conditions and determine the execution path of a query. In the second lesson you explored stored procedures and should now be able to explain the concept of stored procedures in a MySQL database and create and drop simple stored procedures in MySQL. You also demonstrated your skills in a lab environment. Having completed this module, you should now be able to make use of functions and MySQL stored procedures. Great work! In this course you learned about database structures and management with MySQL. Let's take a few moments to recap the key topics. In the opening lesson you received an introduction to MySQL. During this introduction you learned about databases, discovered how Meta makes use of MySQL databases on a day-to-day basis, and learned how to make the most of the content in this course to ensure that you succeed in your goals. You then moved on to the next lesson, in which you learned how to filter data using the AND, OR, NOT, IN, BETWEEN, and LIKE logical operators, combine conditions with the use of logical operators, identify wildcard characters, and explain how they're used to filter data. You then demonstrated your knowledge of data filtering in a series of knowledge checks. In the next lesson you explored the concepts of aliases and table joins. You can now explain the concept of an alias and demonstrate how aliases are used in a lab environment; outline what a table join is and explain different types, such as inner, left, right, and self joins; and demonstrate how to join tables and make use of the UNION operator in a MySQL database. Having completed the videos and demonstrated your skills in a knowledge check, you then learned about grouping data: using the MySQL GROUP BY clause to group rows and deploy it with aggregate functions, demonstrating the use of the MySQL HAVING clause to apply filter conditions, and making use of the ANY and ALL operators. You demonstrated your ability to group data in a lab environment. Next you began the second module, in which you explored different techniques for updating databases and working with views. In the first lesson of this module you learned how to insert and update data. You can now update and insert data using the REPLACE command; identify the main types of constraints, like key, domain, and referential, and explain how they function; add, delete, and modify columns with the ALTER TABLE command; and make use of subqueries. You then learned about views in MySQL databases.
You can now explain the concept of views; create, rename, and drop views in a MySQL database; and identify the advantages of using views. You demonstrated your knowledge and skills with views in a series of knowledge checks and ungraded labs. In the third module you were introduced to functions and MySQL stored procedures. You can now explain what a function is and identify different types of functions. You can use numeric functions to aggregate data or perform mathematical operations, manipulate string values using string functions, extract time and date values with date functions, compare values using comparison functions, and deploy control flow functions to evaluate conditions and determine the execution path of a query. In the final lesson of this module you explored the concept of stored procedures. You can now explain what stored procedures are in a MySQL database and create and drop simple stored procedures in MySQL. You've reached the end of this course recap; it's now time to try out what you've learned in the graded assessment. Good luck! You've reached the end of this Meta database engineering course. You've worked hard to get here and developed a lot of new skills along the way. You're making great progress on your database journey, and you should now understand database structures and management with MySQL. You were able to demonstrate some of this learning, along with your practical database skill set, in the lab project. Following your completion of this course, you should now be able to filter, join, and group data; insert and update data in a database using constraints, subqueries, and views; and deploy functions and stored procedures in a MySQL database. The key skills measured in the graded assessment reveal your ability to demonstrate your knowledge of key MySQL topics like filtering, joins, and data grouping; explain database concepts related to virtual tables, data integrity, and subqueries; and exhibit your experience with functions and stored procedures. So what are the next steps? This course has given you an initial introduction to several key areas, and you probably realize that there's still more for you to learn. So if you found this course helpful and want to discover more, why not register for the next course? You'll continue to develop your skill set during each of the Meta database engineering courses. In the final lab you'll apply everything you've learned to create your own fully functional database system. Whether you're just starting out as a technical professional, a student, or a business user, the courses and projects prove your knowledge of the value and capabilities of database systems, and the lab consolidates your abilities through the practical application of your skills. The lab also has another important benefit: you'll have a fully operational database that you can reference in your portfolio. This serves to demonstrate your skills to potential employers. Not only does it show employers that you are self-driven and innovative, it also speaks volumes about you as an individual as well as your newly obtained knowledge. Once you've completed all the courses, you'll receive certification in Meta Database Engineering. The certification can also be used as a progression to other Meta role-based certifications. Depending on your goals, you may choose to go deep with advanced role-based certifications or take other fundamental courses once you earn this certification. Meta certifications provide globally recognized and industry-endorsed evidence of your technical skills.
Thank you. It's been a pleasure to embark on this journey of discovery with you. Best of luck in the future! Welcome to the next course in database engineering. The focus of this course is on advanced MySQL topics, so let's take a moment to review some of the new skills that you'll develop in these modules. In the first module you'll learn how to create and work with functions along with both basic and complex stored procedures in MySQL, so you can reuse or invoke code blocks to perform specific operations. You'll then learn how to make use of variables and parameters to create more complex stored functions and procedures, and you'll also learn how to develop user-defined functions for when MySQL's built-in functions don't meet the needs of your project. In the next lesson you'll make use of MySQL triggers to automate database tasks. You'll explore different types of triggers, like insert, update, and delete, and learn how to make use of each type. You'll also develop an understanding of how to use scheduled events to ensure that your database tasks are completed at specific times. The next module focuses on core rules and guidelines for database optimization. In this module you'll develop an understanding of the concept of database optimization and the advantages it brings to a MySQL database. You'll review techniques for optimizing database SELECT statements so that they execute quickly and efficiently, for example targeting required columns or avoiding the use of complex functions, and you'll learn how to work with indexes in MySQL to speed up the performance of data retrieval queries. In the next lesson you'll explore further optimization techniques. You'll start by learning how to use MySQL transaction statements to manage database transactions. You'll discover how to use common table expressions to manage complex SQL queries by compiling them into single blocks of code, learn how to make use of prepared statements to limit the number of times MySQL must compile and parse code, and discover how to interact with a MySQL database using the JSON data type. In the third module you'll explore the concept of data analytics in MySQL. First you'll develop an understanding of the relationship between data analytics and MySQL. You'll discover how to make use of data collected during data analysis by converting it into useful information that can inform future decisions, and you'll explore the different types of data analysis that can be performed within a database. You'll then move on to learn about the relationship between MySQL and data analysis, including the benefits and limitations of MySQL as a data analytics tool. In the second lesson of this module you'll learn how to perform data analysis in MySQL using SQL queries like joins, subqueries, and views. You'll then explore how to emulate a full outer join in MySQL to extract all records from two tables, including those that don't match, and finally you'll learn how to extract data from multiple tables using the join method. During these modules you'll encounter numerous activities designed to test your skills and knowledge, including lab exercises, knowledge checks, and module quizzes. In the final module you'll have the opportunity to demonstrate this learning, along with your practical database skill set, in the lab project, and you'll also demonstrate your knowledge of these topics in a graded assessment. So let's get started!
In the previous courses you learned that you can reuse code within your database projects with the use of functions and stored procedures. These methods save you from having to repeatedly type the same code. Over the next few minutes you'll recap the basics of functions and stored procedures, and you'll also learn about their benefits and key differences. As you might recall, Lucky Shrub frequently make use of these methods to query stock data in the products table of their database, which means they don't have to type the same code each time they check their stock. Let's take a closer look at how they achieve this. As you've just learned, the main purpose of creating stored procedures and functions is to wrap, or encapsulate, code together in the body of a function or procedure. This means that instead of typing the same code repeatedly, you can call a code block to perform a specific operation by invoking the identifier name. There are other benefits too: functions and stored procedures make code more consistent and more organized, and they introduce reusability, making the code easier to use and maintain. Let's look at a few examples of these concepts. As you saw a moment ago, Lucky Shrub make use of procedures when checking their stock. First they create the query as a stored procedure, using the CREATE PROCEDURE command followed by the name of the procedure and the required logic. They then invoke this procedure using the CALL command to extract the required data from the database; if there's no data in the table, the call returns a null result. Now let's look at an example of a function. The MOD function can be used to find the remainder of the division of two numeric values, X and Y, for example 7 divided by 5. To find the result, invoke the function by using its identifier name within a SELECT statement. In this instance, the result is 2.
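As a quick illustration of that recap, here is a minimal sketch. The products table and the GetAllProducts procedure name are assumptions used only for illustration.

```sql
-- Invoking a built-in function inside a SELECT statement.
SELECT MOD(7, 5);   -- returns 2, the remainder of 7 divided by 5

-- Recalling a saved query: create the procedure once, then call it by name.
CREATE PROCEDURE GetAllProducts()
SELECT * FROM products;

CALL GetAllProducts();
```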
Remember that, unlike stored procedures, a function always returns a value. For example, you might recall the scenario from the last course in which M&G used a function to determine the average dollar amount each client spent with their business; this function always returns a value because it specifically targets clients that spent money. Let's take a few moments to explore a key difference between functions and stored procedures: parameters. Functions can only have input parameters, while stored procedures can have both input and output parameters. Both functions and procedures can accept values within their respective code; in other words, they both accept an input, but only procedures can pass values back out again with the use of output parameters. Don't worry if you find this concept confusing; you'll learn more about parameters in later videos. You can create as many functions and procedures as you need, just make sure you know when to use one over the other. For example, functions are best when you need to return one specific value, like in a SQL statement or within another function, while stored procedures are mostly for processing, manipulating, and modifying data. So as you've just learned, functions and procedures are an effective way of reusing code to complete repetitive tasks, and even though they bring many benefits, it's important to know when to use one over the other. In this course you'll explore these concepts in more depth. You might already be familiar with basic stored procedures and functions from earlier courses; however, MySQL also offers more complex stored procedures and functions, which rely on variables and parameters. Over the next few minutes you'll learn how to use variables and parameters to build sophisticated functions and procedures. The Lucky Shrub gardening center have several repetitive but complex queries they need to create for their database. They can create these queries using variables and parameters, so let's follow their process and find out how it works. First, you need to know what the term variable means in the context of MySQL. A variable represents a placeholder that stores a value, and this value may change depending on the needs of the query. Basically, variables are used to pass values between SQL statements, or between a procedure and a SQL statement. There are two different ways in which variables can be used in MySQL: you can create variables inside or outside of a stored procedure, and inside or outside of a SELECT statement. So what does a variable look like in MySQL? A user-defined variable name is created from alphanumeric characters. You just type the at symbol followed by the name that you want to give your variable, then assign a value to it using an equals operator, and make sure that you end your syntax with a semicolon. But how do you create a variable inside or outside of a stored procedure? To do this, you use the SET command within your syntax; the SET command assigns a value to a variable. Let's take a moment to see what the SET command looks like in practice. Type the SET command followed by the name of the variable, then assign a value to the variable. For example, Lucky Shrub have an orders table in their database that records orders placed with the business. They can create and use a variable called order ID to target the record with the order ID number of three. They can now use this variable to delete, update, or query that record.
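A minimal sketch of that user-defined variable; the orders table and its order_id and order_status columns are assumptions made for illustration.

```sql
-- Create a user-defined variable that stores the ID of the target order.
SET @order_id = 3;

-- The variable can now be reused across statements.
SELECT * FROM orders WHERE order_id = @order_id;
UPDATE orders SET order_status = 'Shipped' WHERE order_id = @order_id;
DELETE FROM orders WHERE order_id = @order_id;
```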
Alternatively, you can create a variable inside a stored procedure using the DECLARE command. In this instance, you type the variable name without an at sign, then assign the variable a relevant data type and default value. Lucky Shrub can use this method to create a variable called minimum order cost; the expectation is that this variable stores a value equal to the cost of the minimum order in Lucky Shrub's database. As you learned earlier, you can also create a variable inside a SELECT statement. However, when assigning a value to a variable in a SELECT statement, you need to use the assignment operator syntax. This instructs MySQL to assign a value to the variable, whereas a standard equals operator just checks whether one value equals another. So type a SELECT command and then the name of your variable, then assign a value to your variable using the assignment operator. For example, Lucky Shrub can create a max order variable that retrieves the most expensive order from their orders table. They can then access the value by typing SELECT followed by the max order variable; the output shows the most expensive order. It's also possible to create a variable inside a SELECT statement and assign it a value returned from a function. You just type the SELECT command followed by the function, then the INTO keyword and the variable name, and finally the FROM keyword and the name of the table the value must be extracted from. Lucky Shrub can use this method to create a variable called average cost, which returns the average cost of items from their orders table. Now that you're familiar with variables, let's move on and explore the topic of parameters. A parameter is used to pass arguments, or values, to a function or procedure from the outside. In MySQL, a function only takes input parameters, but there are three different types of parameters that can be declared in stored procedures: IN, OUT, and INOUT parameters. Let's take a few moments to explore how each of these works. The IN parameter is the default parameter; it's used to pass an argument or value to a stored procedure. To use this parameter, type the CREATE PROCEDURE command and your procedure name, then, inside a pair of parentheses, type the IN keyword followed by the parameter name and its data type. If you don't specify a keyword, MySQL uses IN by default. Then add a SELECT statement that outlines the logic of your query. For example, Lucky Shrub can create a procedure that calculates 20 percent of each employee's salary for tax purposes. They can then call the procedure against a specific salary value; this passes the salary to the procedure and returns the amount due in tax. Next, let's investigate the OUT parameter. The OUT parameter is used to pass a value to a variable outside of the procedure. Here's an example where Lucky Shrub use a procedure called get lowest cost to identify the order with the lowest cost in their orders table. They use the OUT keyword so the procedure passes the value out through the parameter. The next step is to call the procedure; the value produced by the procedure is then stored in a variable placed within the pair of parentheses. To display the variable's stored value, just use a SELECT statement to return the output.
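Here is a minimal sketch of those two parameter types. The table names, column names, and the flat 20 percent tax rate are assumptions made for illustration.

```sql
-- IN parameter: pass a salary in and return 20% of it as tax due.
CREATE PROCEDURE CalculateTax(IN salary DECIMAL(10, 2))
SELECT salary * 0.2 AS tax_due;

CALL CalculateTax(50000);

-- OUT parameter: pass the lowest order cost back out of the procedure.
CREATE PROCEDURE GetLowestCost(OUT lowest_cost DECIMAL(10, 2))
SELECT MIN(cost) INTO lowest_cost FROM orders;

-- Capture the OUT value in a variable, then display it.
CALL GetLowestCost(@lowest_cost);
SELECT @lowest_cost;
```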
Finally, there's the INOUT parameter, which is a combination of both. It's used to pass an argument to the procedure and then pass the new value back to the outside, so it's effectively an IN and an OUT parameter. For example, you could create a procedure called square a number that returns the squared value of a specific number using the INOUT keyword and a number parameter. The procedure expects an input number through the a number parameter, multiplies this number by itself, then returns the result through the same a number parameter. You can then set a variable called x number with a value of five and call the procedure using the x number variable. The procedure passes the value through the parameter, performs the calculation, and returns the result back through the parameter. Use a SELECT statement to output the variable's value. You should now be familiar with how to create more complex stored procedures and functions using variables and parameters. Great work!
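A minimal sketch of that INOUT example; the procedure and variable names mirror the ones described above.

```sql
-- INOUT parameter: the same parameter carries the input value in
-- and the squared result back out.
DELIMITER //
CREATE PROCEDURE SquareANumber(INOUT a_number INT)
BEGIN
    SET a_number = a_number * a_number;
END //
DELIMITER ;

SET @x_number = 5;
CALL SquareANumber(@x_number);
SELECT @x_number;   -- returns 25
```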
You might already be familiar with MySQL's built-in functions, but what if none of them meet your project's needs? No problem: you can develop your own user-defined functions. In this video you'll find out what user-defined functions are and learn how to create your own. Lucky Shrub are having a sale in which they're offering a 10% discount on selected products, but rewriting the same statement for every product during every transaction would be very time consuming. Instead, Lucky Shrub need you to create a user-defined function that they can invoke whenever needed to calculate these discounts. Before you begin helping Lucky Shrub, let's make sure you understand what database engineers mean by the term user-defined function. You might already be familiar with built-in MySQL functions like string or numeric functions. User-defined functions are created to perform operations that can't be completed with built-in functions: users develop code that implements equations or formulas to complete a task and return a result. Let's break this process down. A database engineer creates their own code, the code carries out a specific function, and the function then returns the required result. To build a function in MySQL, you can use the CREATE FUNCTION command alongside the RETURNS clause and the RETURN command; these commands and clauses specify the data type and values to be returned by the function. Let's find out how this syntax works. Begin your statement with the CREATE FUNCTION command, then assign a name to your function. Follow the function name with parentheses and parameters; the parentheses are mandatory, but you don't always need to include parameters. Next, specify the return data type followed by the keyword DETERMINISTIC. Deterministic means that the function always returns the same result for the same input parameters; for example, if a sum function is defined as deterministic, it always returns the same result for the numbers it adds together. Finally, you can implement the logic with the RETURN keyword. Let's look at how Lucky Shrub can make use of a user-defined function. Lucky Shrub can use this syntax to create a function called find total cost. A cost parameter with a decimal data type passes in a user input value of cost, and the RETURNS clause defines the function's return type as a decimal number with five digits. Finally, the RETURN command calculates and returns the final cost after deducting 10%. So each time Lucky Shrub need to determine the sale price of their items, they just invoke the function in a SELECT statement followed by the current price in parentheses. But what if you want to develop a more sophisticated function? For example, Lucky Shrub want to offer a 10% discount to customers who make purchases of one hundred dollars or more, and a 20% discount on purchases of five hundred dollars or more. The first step is to use the DELIMITER command so that the whole function can be compiled as a single compound statement using the BEGIN and END keywords. Press Enter to change the delimiter from the default semicolon to a double forward slash. Next, use the CREATE FUNCTION command and name your function get total cost. Include a cost parameter with a decimal data type that passes in a user input value of cost. The RETURNS clause defines the function's return type as a decimal number with five digits, and the RETURN command calculates the final cost after the discount has been deducted. The function is also defined as deterministic so that it always returns the same result for the same input parameters. The next step is to use the BEGIN and END keywords to define the body of the function. Use an IF ELSE statement to check the input cost and deduct the appropriate amount, then add the RETURN command to calculate the final cost once the discount has been deducted. Finally, press Enter to create the new function, then change the delimiter back to the default semicolon so you can use MySQL as usual. Now it's time to test the function using a SELECT statement. A customer has just made a purchase that cost five hundred dollars, so Lucky Shrub need to determine the discount to be applied. Type a SELECT command followed by the name of the function, with the purchase value in parentheses, then press Enter to execute the function. The output result is 400, so this customer's purchase now costs $400 following a 20% discount. If you want to drop the function, just use the DROP FUNCTION statement followed by the function's name and press Enter. Lucky Shrub can now apply discounts to their customers' purchases as required, and you now know how to create your own user-defined functions in MySQL for your own specific projects. Great work!
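Here is a minimal sketch of both versions of the discount function described above; the exact DECIMAL precision is an assumption.

```sql
-- Simple version: a flat 10% discount.
CREATE FUNCTION FindTotalCost(cost DECIMAL(5, 2))
RETURNS DECIMAL(5, 2) DETERMINISTIC
RETURN cost - (cost * 0.1);

SELECT FindTotalCost(80);   -- returns 72.00

-- Tiered version: 20% off purchases of $500 or more,
-- 10% off purchases of $100 or more, otherwise no discount.
DELIMITER //
CREATE FUNCTION GetTotalCost(cost DECIMAL(5, 2))
RETURNS DECIMAL(5, 2) DETERMINISTIC
BEGIN
    IF cost >= 500 THEN
        SET cost = cost - (cost * 0.2);
    ELSEIF cost >= 100 THEN
        SET cost = cost - (cost * 0.1);
    END IF;
    RETURN cost;
END //
DELIMITER ;

SELECT GetTotalCost(500);   -- returns 400.00

-- Remove a function that is no longer needed.
DROP FUNCTION GetTotalCost;
```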
You should already be familiar with the process for creating basic stored procedures, so in this video you'll learn how to create more complex stored procedures that require multiple statements. You can learn how these procedures work by helping Lucky Shrub. Lucky Shrub need to determine the current cost of each of their products ahead of their upcoming sale. They must identify all products that cost less than fifty dollars so they can add an appropriate discount, and they need to identify all products that cost more than fifty dollars for further discounts. The required data is stored in the products table in their database, and you can help them complete this task using a complex stored procedure. First, you need to use a DELIMITER command so that MySQL can compile the code in a BEGIN END block as one compound statement. Type the DELIMITER command to change the delimiter from the default semicolon to a double forward slash, then press Enter to apply the change. Next, type the CREATE PROCEDURE command followed by the procedure name, get product summary. Add a pair of parentheses and include two OUT parameters along with relevant variables; these parameters pass the counts of low-priced and high-priced products outside of the procedure and store the output values in the variables. Next, create the body of the procedure by implementing the logic within the BEGIN and END keywords. The logic consists of two SELECT statements, each using a COUNT command that targets the product ID column within the products table. The first statement counts all products that cost less than fifty dollars, and the second counts all products that cost more than fifty dollars. A double forward slash indicates the end of the query; press Enter to create the procedure. Finally, change the delimiter back to the default semicolon so that you can keep using MySQL as usual. Now it's time to execute the procedure. Type the CALL command followed by the name of the procedure, then, in a pair of parentheses, create the two required variables. You can call the first variable total number of low price products and the second total number of high price products; these variables hold the output results from the OUT parameters. Press Enter to execute the CALL statement. The procedure retrieves data from the table and passes it to each variable. Now you just need to access the data using a SELECT statement: type the SELECT command followed by the two variable names, separated by a comma, and press Enter to execute the statement. The output shows the total number of low- and high-priced products. Lucky Shrub now have all the data they require for their sale thanks to your stored procedure. Good work!
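A minimal sketch of that procedure and how it's called; the products table and its product_id and price columns are assumed.

```sql
DELIMITER //
CREATE PROCEDURE GetProductSummary(
    OUT total_low_price_products  INT,
    OUT total_high_price_products INT
)
BEGIN
    -- Count products under $50 and pass the result out.
    SELECT COUNT(product_id) INTO total_low_price_products
    FROM products WHERE price < 50;

    -- Count products over $50 and pass the result out.
    SELECT COUNT(product_id) INTO total_high_price_products
    FROM products WHERE price > 50;
END //
DELIMITER ;

-- Capture both OUT values in variables, then display them.
CALL GetProductSummary(@total_low_price, @total_high_price);
SELECT @total_low_price, @total_high_price;
```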
As a database engineer, you'll often need certain actions to occur automatically when specific events take place, like when data is inserted, updated, or deleted from a table. But how can you make sure that these actions happen automatically and avoid the need to rewrite code each time they must be invoked? You can do this with the use of MySQL triggers. In this video you'll learn what a MySQL trigger is and how to code and use them. Lucky Shrub's sales team are adding discounts to products; however, any discounts over 25 percent must be reviewed by a manager. This means that the sales team needs to add a trigger to the database that flags items when they're assigned a discount above the 25 percent threshold. Let's explore MySQL triggers and find out how Lucky Shrub can use them to complete this task. The first question to answer is: what's a MySQL trigger? A MySQL trigger is a set of actions available in the form of a stored program. This set of actions is invoked automatically when certain events occur; examples of these events include inserting, updating, and deleting data from a table in a MySQL database. Before you can use a trigger you need to create it, and you'll also often need to drop, or delete, a trigger once it has served its purpose. Let's take a moment to explore the syntax for creating and dropping triggers. A trigger is created using the CREATE TRIGGER statement. To create the trigger, type the CREATE TRIGGER statement followed by the name of your trigger. Because a trigger is user defined, you can choose a custom name, but make sure that each trigger's name is unique within the database. Then define a trigger type: for example, is it an insert, update, or delete trigger, and should it execute before or after the event? Don't worry about this for now; you'll explore trigger types in a later video. Next, specify which table the trigger must be assigned to and identify how it should be applied to that table. Then define the trigger's logic; in other words, specify what it is that the trigger must achieve. The trigger can insert, update, or delete data, and it can even combine these actions as required. If it requires multiple statements, these must be enclosed within a BEGIN END block. Then execute the statement to create the trigger. Again, this part of the syntax isn't a concern at this stage in the lesson; you'll review different types of triggers and what they can achieve in a later video. To drop, or delete, a trigger that you've created, you can use the DROP TRIGGER command. To use this command, just write DROP TRIGGER, then add the IF EXISTS clause. This clause makes sure that the drop command only runs if MySQL can locate the trigger within the database; if you try to drop a non-existent trigger without this clause, MySQL returns an error. Next, identify the schema that the trigger belongs to, using dot notation to identify both the schema and trigger names. This makes sure that MySQL only deletes the trigger from the specified schema and not the entire database. Finally, type the name of your trigger, then execute the statement to drop the trigger. It's also important to remember that if you drop or delete a table from your database, MySQL automatically removes all triggers associated with that table. So how can Lucky Shrub's sales team make use of these methods? As you learned earlier, the team need to add a trigger to their database that flags when employees attempt to add a discount of more than 25 percent to an item; an approval request must then be sent to a manager for any flagged items. Lucky Shrub can use the CREATE TRIGGER command to create this trigger. They can name the trigger approval request, then assign a trigger type of AFTER UPDATE so that the trigger executes its logic after an update operation has occurred within the table. Finally, they place the trigger logic within a BEGIN END block. Let's also look at a few more benefits of triggers. Triggers are useful for keeping a log of records or changes made within a database; it's basically a way of maintaining audit trails, where a record is inserted into the database each time a change is made. Triggers are also an alternative to constraints: they can be a useful way to help maintain data integrity by making sure all data is updated as required. And they're useful for performing tasks automatically in response to specified actions on a database table. You should now know what a MySQL trigger is and understand the basics of how to create and drop triggers within a database.
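The video describes the approval request trigger only at a high level, so here is one possible sketch. The products table, its discount column, the approval_requests table, and the lucky_shrub schema name are all assumptions, and the NEW keyword used in the body is covered later in this lesson.

```sql
DELIMITER //
CREATE TRIGGER ApprovalRequest
AFTER UPDATE ON products
FOR EACH ROW
BEGIN
    -- Flag any item whose updated discount exceeds the 25% threshold
    -- by logging an approval request for a manager to review.
    IF NEW.discount > 25 THEN
        INSERT INTO approval_requests (product_id, requested_discount, requested_at)
        VALUES (NEW.product_id, NEW.discount, NOW());
    END IF;
END //
DELIMITER ;

-- Remove the trigger once it has served its purpose; the schema name
-- is optional but recommended.
DROP TRIGGER IF EXISTS lucky_shrub.ApprovalRequest;
```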
As you might know by now, a MySQL trigger is a set of actions that can be invoked automatically when certain events occur. But how do you determine when and how these triggers are executed? You can control the behavior of your triggers by using different types. Over the next few minutes you'll explore the different types of triggers available and learn when to use them. Lucky Shrub are rebuilding their orders table, which records orders within their database, and they need to assign a new set of constraints, or rules, on this table. Maybe they can create these rules using triggers. Let's find out which types of triggers Lucky Shrub can make use of and in what order these triggers should occur. First, let's explore the two main types of triggers defined in SQL: row-level triggers and statement-level triggers. A row-level SQL trigger is invoked for every row inserted, updated, or deleted in a table, so if 100 rows are added to a table, the row-level trigger is invoked 100 times. A statement-level trigger, on the other hand, is invoked once for each action, no matter how many rows are inserted, updated, or deleted. So a single INSERT statement could add 100 rows to a table, but the trigger activates just once for all 100 rows. It's important to be aware of both types; however, MySQL only supports row-level triggers, so they'll be the focus of this lesson. As you learned earlier, triggers are typically used to perform three types of actions: insert data into a table, update data in a table, and delete data from a table. But how can you determine when an insert, update, or delete trigger occurs? Depending on when a trigger is actioned, it can be classified as either a before or an after trigger. Let's find out what this means. The BEFORE keyword, or modifier, indicates that a trigger must be invoked before the action is performed on a table row, while AFTER indicates that the trigger is invoked after the action is performed on each row. By combining these modifiers with the INSERT, UPDATE, and DELETE keywords, you can create different types of triggers. For example, the BEFORE INSERT trigger is automatically invoked before an insert event occurs on a table, while AFTER INSERT is invoked after an insert event. Similarly, a BEFORE UPDATE trigger is invoked before an update event occurs and an AFTER UPDATE trigger is invoked after the event. Finally, BEFORE DELETE triggers are invoked before data is deleted from a table and AFTER DELETE triggers are invoked after data is deleted. The syntax is largely the same for each type of trigger. Begin with a CREATE TRIGGER command followed by the name of your trigger. Next, add the modifier and keyword to determine when your trigger must occur and on what action it must take place; for example, BEFORE INSERT instructs MySQL to invoke the trigger before an insert event occurs on the table. Then type the ON keyword and the name of the table, followed by the FOR EACH ROW keywords, which instruct MySQL to carry out the action for each row in the table. Finally, type the logic of the trigger. As you might recall, this is usually typed within a BEGIN END block, particularly if you need to specify multiple statements. Let's look to Lucky Shrub for an example. They want to impose a new constraint on their orders table: no minus values can be inserted in the table's order quantity field. So Lucky Shrub begin with the CREATE TRIGGER command and name the trigger order quantity check. Next they add the modifier and keyword BEFORE INSERT, then assign the trigger to the orders table and make sure it applies to each row. Finally, they create the trigger logic within BEGIN and END statements. The logic states that if the table encounters an order with a quantity of less than zero, it must set the value to zero by default. Now, each time a new row is inserted into the table, the BEFORE INSERT trigger carries out the required action before the new value is inserted. Let's take a moment to explore some more types of triggers that Lucky Shrub can use. Lucky Shrub want to maintain an audit trail of all changes made to their orders table. With an AFTER INSERT trigger, they can send a log message from the orders table to the audits table each time a new order is inserted. The company also needs a log that captures the date and time an order record is deleted from the orders table; they can use an AFTER DELETE trigger for this task. After a record is deleted, the trigger inserts a record in the log with the date and time. These are just a few examples of how the different types of triggers work in MySQL. You'll explore them all in more detail later in this course, but for now you should be familiar with the different types of triggers available and know how to create them. Good work!
At this stage of the lesson you should be familiar with MySQL triggers and the different types of triggers available to database engineers. Now let's take a few moments to find out how you can create and drop these triggers in your databases. To help you understand these concepts, let's look at how they're used at Lucky Shrub. Lucky Shrub's database contains an orders table with several columns that record information on each order placed with the business. Lucky Shrub want to make sure that no minus values are inserted in the table's quantity column when a new order is recorded; any minus values that the table encounters must be set to a default value of zero. They can complete this task using a BEFORE INSERT trigger. The trigger syntax begins with a CREATE TRIGGER command, followed by the name of the trigger, which is order quantity check. Always make sure that the trigger name is unique within the database. Next, assign the trigger type and specify when it must be invoked; in this instance it's BEFORE INSERT, in other words, it's invoked before an insert command. Then type the ON keyword followed by the table name; this lets MySQL know which table to target. You'll also need to type FOR EACH ROW so that MySQL targets each row within the table. Finally, write the trigger's main logic. This must be a series of one or more SQL statements that execute when the trigger activates; if you have multiple statements, enclose them within a BEGIN END block. The trigger's logic checks whether a minus value is about to be inserted into the quantity column. This action requires an IF statement so that it can access the quantity column. To create this IF statement, you need to use one of two modifiers: NEW and OLD. NEW suits our purposes here, as it targets the value of a column after the operation, in other words, the value to be inserted. If you needed to access the column value before the operation, you'd use the OLD modifier. So type a statement that says: if the new order quantity value is less than zero, then set the new value to zero. Don't worry if you don't quite understand these modifiers; they're covered in more detail later in this lesson. Now let's find out how to run the trigger. Before running it, make sure you redefine the MySQL delimiter from a semicolon to a double forward slash, then execute the trigger statement. Once executed, change the delimiter back to a semicolon. Lucky Shrub now have the required trigger on their orders table. Lucky Shrub now need to delete this trigger from the table. You can delete, or drop, the trigger using the DROP TRIGGER statement. Type DROP TRIGGER, then type the IF EXISTS condition to prevent MySQL from returning an error. Next, provide both the schema name and the trigger name using dot notation; the schema name is optional but recommended, as it helps MySQL target the correct trigger. And don't forget that if you drop the orders table, all related triggers are also deleted. You've now helped Lucky Shrub to create and drop the required trigger from their database, and you should now be familiar with how to create and drop triggers in your own databases. Great work!
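Here's a minimal sketch of that walkthrough; the orders table, its quantity column, and the lucky_shrub schema name are assumptions made for illustration.

```sql
DELIMITER //
CREATE TRIGGER OrderQuantityCheck
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
    -- NEW refers to the value about to be inserted; reset negative
    -- quantities to the default of zero before the row is written.
    IF NEW.quantity < 0 THEN
        SET NEW.quantity = 0;
    END IF;
END //
DELIMITER ;

-- Dropping the trigger when it is no longer required.
DROP TRIGGER IF EXISTS lucky_shrub.OrderQuantityCheck;
```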
When working with MySQL databases, there will often be tasks or events that must be completed at specific times, like inserting data or generating reports. With MySQL scheduled events, you can make sure that these tasks occur at the scheduled time even if you're not present. In this video you'll learn what a MySQL event is, review the syntax used to create events, and explore some examples. Lucky Shrub often make use of MySQL scheduled events. For example, the finance department has just requested a report on all orders received this month; however, this report must be generated at 11:59 PM on the last day of the month. Lucky Shrub can use a one-time event to create this report, scheduling their MySQL database to generate it at the specified time and date. Before you find out how Lucky Shrub can create this event, let's first find out more about what a MySQL scheduled event is. A scheduled event in MySQL is a task executed according to a given schedule; in other words, it's an event that takes place at a specified time. Each event has a unique name and contains one or more SQL statements. Events are stored in the database and can be executed just once, or they can recur. The main types of scheduled events that you'll work with in MySQL are one-time events and recurring events. One-time events are scheduled events that occur just once, for example inserting data into a table one hour from now. A recurring event is a scheduled event that occurs on a regular basis, like generating a weekly report from a database. So how do you create a MySQL scheduled event? Events are created in MySQL using the CREATE EVENT keywords. Let's find out more about how this syntax works. First, create the event using the CREATE EVENT keywords. You can follow these with IF NOT EXISTS, which tells MySQL to create the event only if it doesn't already exist, and then a unique event name. Next, type the ON SCHEDULE keywords and specify a scheduled time at which the event must occur. Then type the DO keyword, followed by the event body in which you specify the logic of the event using SQL statements. So how can you use this syntax to differentiate between one-time and recurring events? If your scheduled event is a one-time event, specify the schedule using the AT clause, followed by a timestamp and, if needed, an interval that offsets the specific time at which the event must be executed. For example, Lucky Shrub can use this syntax to generate a one-off revenue report 12 hours from now, and they can create their event logic within a BEGIN END block. Creating a recurring event is a little more involved. The syntax is largely the same; the key difference is that you must use the EVERY clause instead of AT, followed by an interval. You can also use the STARTS and ENDS keywords with timestamps and intervals to designate specific start and end points for the event. Lucky Shrub can use the recurring event syntax to create a daily stock check event: if the event identifies that some stock levels are too low, it sends out an order to restock those items. You'll find out more about how Lucky Shrub can create this and the previous event in just a moment. Before then, let's look at how to delete, or remove, an existing MySQL event that's no longer needed using a DROP EVENT statement. First type the DROP EVENT keywords. It's also good practice to include IF EXISTS, which tells MySQL to check that the event still exists and hasn't already been dropped from the database. Finally, type the event's name and then execute the statement.
Now that you're familiar with scheduled events and their syntax, let's see if you can help Lucky Shrub generate that report. As you saw earlier, Lucky Shrub's finance department has just requested a report on all orders received this month, and they need it generated at 11:59 PM on the last day of the month. It's now the last day of the month and it's approaching 12 noon, so they need the report 12 hours from now. This is a one-off event, so begin with the CREATE EVENT keywords, then assign the event a unique name; let's use generate revenue report. Now you need to specify the schedule. Since this is a one-time event, use the AT clause and schedule the event to occur 12 hours from now by including the current timestamp and adding a 12-hour interval. The next step is to add the event's logic: type the DO keyword and a BEGIN END block, and within this block instruct MySQL to select all data inserted into the orders table this month and place that data within a report data table. Great: 12 hours from now the finance department will have their report. Lucky Shrub need your help with another task. They're reviewing their stock and need to make sure that they have at least 50 units available for each item on sale. You can help them by using a recurring event. First, create the event and call it daily restock. Then specify the schedule; as this is a recurring event, use the EVERY clause and schedule it to occur once a day. Next, add the DO keyword followed by a BEGIN END block, and within this block define the event's logic: MySQL must check whether the number of items for any record in the products table is below 50, and if it locates such a record, the number of items must be updated. If at any stage you need to remove this event, just type the DROP EVENT keywords, then IF EXISTS, followed by the event name. Great work, you've helped Lucky Shrub to create these events in their database. You should now be familiar with the basics of MySQL scheduled events, including the different types of events and their syntax. Well done!
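Here is a minimal sketch of both events. The orders, report_data, and products tables and their columns are assumptions made for illustration; the walkthrough wraps its logic in a BEGIN END block, while this sketch uses single-statement bodies to stay short, and the MySQL event scheduler needs to be enabled for events to fire.

```sql
-- One-time event: build this month's revenue report 12 hours from now.
CREATE EVENT IF NOT EXISTS GenerateRevenueReport
ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 12 HOUR
DO
    INSERT INTO report_data
    SELECT *
    FROM orders
    WHERE MONTH(order_date) = MONTH(CURRENT_DATE())
      AND YEAR(order_date)  = YEAR(CURRENT_DATE());

-- Recurring event: once a day, top any product below 50 units back up to 50.
CREATE EVENT IF NOT EXISTS DailyRestock
ON SCHEDULE EVERY 1 DAY
DO
    UPDATE products
    SET number_of_items = 50
    WHERE number_of_items < 50;

-- Remove an event that is no longer needed.
DROP EVENT IF EXISTS GenerateRevenueReport;
```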
Congratulations, you've reached the end of the first module in this course. Let's take a moment to recap some of the key skills you've gained in this module's lessons. In the first lesson you received an introduction to the course, in which you learned about the role of an advanced database engineer at Meta, discussed what you hope to learn with your classmates, received an overview of the topics that you'll cover in this course, and enhanced your knowledge by reviewing some key additional resources. In lesson two you learned about advanced MySQL functions and stored procedures. You learned that the main purpose of creating stored procedures and functions is to wrap, or encapsulate, code together in the body of a function or procedure. The benefits of stored procedures and functions are that they make code more consistent and more organized, and that they introduce reusability, making the code easier to use and maintain. You also know that the key difference between functions and stored procedures is parameters: functions can only have input parameters, while stored procedures can have both input and output parameters. You then learned how to make use of variables and parameters to create more complex stored functions and procedures in MySQL. You learned that variables are used to pass values between SQL statements, or between a procedure and a SQL statement, and that you can create variables inside or outside of a stored procedure and inside or outside of a SELECT statement. You learned that a parameter is used to pass arguments or values to a function or procedure from the outside, and that there are three different types of parameters that can be declared in stored procedures: IN, OUT, and INOUT parameters. You also learned how to develop user-defined functions for when MySQL's built-in functions don't meet the needs of your project: a database engineer creates their own code, the code carries out a specific function, and the function then returns the required result. In lesson three you explored MySQL triggers and events. You discovered that a MySQL trigger is a set of actions available in the form of a stored program, invoked automatically when certain events occur, such as inserting, updating, and deleting data from a table in a MySQL database. The two main types of triggers used to manage SQL events are row-level triggers and statement-level triggers. Next you discovered how to create and drop these triggers in MySQL. You can now create a trigger using the CREATE TRIGGER statement and drop, or delete, a trigger using the DROP TRIGGER command. You also reviewed other aspects of the syntax used to create a MySQL trigger, including defining the trigger name and type and specifying the logic by enclosing multiple statements within a BEGIN END block. You then learned about MySQL scheduled events; as part of this lesson you reviewed the syntax and process steps for creating a scheduled event in MySQL. You should now be able to work with functions and triggers in a MySQL database. Well done! I look forward to guiding you through the next module, in which you'll learn how to optimize a database. You need your databases to respond quickly to your SQL queries, but as your data volumes grow and your data requirements become more complex, response times can increase. Fortunately, you can use database optimization to improve the performance of your databases. In this video you'll learn about the concept of database optimization and its importance. Over at Lucky Shrub, they've had a large increase in clients and orders during their latest sale. Their data volumes have grown considerably, so they now need to make sure that they can still retrieve information from the database quickly. Let's find out more about database optimization and discover how Lucky Shrub can optimize their own databases. Database optimization means improving the performance of a database system to reduce the time it takes to process and answer a user's query. Basically, it's the process of maximizing the speed and efficiency with which the database performs. An optimized database can process an SQL query and return the required data fast. It's also important to note that database performance depends on both hardware and software; in this lesson you'll focus on optimizing queries using MySQL software. At this stage of the course you've encountered a lot of different kinds of SQL statements. These statements can be divided into two categories: data retrieval statements, which return data from the database and are also known as SELECT statements, and data change statements, which alter data within the database, like INSERT, UPDATE, and DELETE. Both types of statements require different kinds of optimization. Later in this module you'll explore optimization techniques in detail, but for now let's look at the basics. Data retrieval statements are SELECT statements, and optimizing them typically involves indexes. An index is a type of handle that you can use to quickly look up data; indexes are created on table columns, and you'll learn more about them later in this lesson. Other methods for optimizing SELECT statements that you'll encounter in this lesson include targeting specific columns in the SELECT command, efficient use of functions and wildcards in predicates, and making use of inner joins instead of outer joins. You'll also learn about deploying the DISTINCT and UNION clauses and explore the importance of using the ORDER BY clause to sort results. Different methods are required for optimizing data change statements. For example, to optimize UPDATE and DELETE statements, you first need to optimize the conditions in the WHERE clause, and INSERT statements can be optimized by performing batch inserts, which means inserting more than one row in a single insert operation. For now you just need to be aware of the distinction between data retrieval and data change statements; you'll explore them in more depth later in this lesson.
Although database optimization can be complex, it's worth the effort. As you've learned, an optimized database offers improved performance with faster turnaround times, and it removes unwanted task loads from the database. By optimizing their database, Lucky Shrub can process their sales data much more quickly and efficiently, and they'll avoid any potential issues that could arise from the growth in data. You should now be familiar with the concept of database optimization, along with the different kinds of SQL statements that can be optimized. Good work! When working with a database, it's important that your SQL queries are compiled and executed quickly and efficiently, but this can only happen if your queries are optimized. Over the next few minutes you'll explore techniques for optimizing SELECT statements. Lucky Shrub have received large numbers of orders from their clients, which has led to increased volumes of data in their database. They need to query this data using SELECT statements, and to improve the performance of these queries, they'll need to make sure the statements are optimized. Let's find out more about optimization guidelines and explore some techniques that Lucky Shrub can make use of. As you might already know, SELECT statements belong to a category of SQL statements called data retrieval statements. These statements are designed to return data from the database, but if they're not optimized correctly, they add extra load to the database and slow down its performance, which means it takes longer for the database to execute your SELECT statements, or queries, and return the data you need. However, there are a few basic guidelines, or best practices, that you can follow to optimize your SELECT statements, and you might already be familiar with some of them: target only required columns in your SELECT clause, avoid using functions in predicates, avoid using a leading wildcard in predicates, use inner joins where possible, and make use of DISTINCT and UNION clauses only when necessary. Let's take a few moments to explore some examples of these guidelines. When querying a table, you might often use an asterisk in your SELECT statement to extract all available data. However, instructing MySQL to query all data in a table adds extra load on the database and slows down its performance, particularly if you only require data from specific columns. A more optimal approach is to list only the columns that hold the data you require instead of using an asterisk. Lucky Shrub can use this method to target the required data in their orders table and return the data faster. Another common mistake that database engineers make is using MySQL functions in predicates. A predicate is an expression that returns a true or false value; WHERE clause conditions are an example. You should avoid using functions in the WHERE clause on a column that's indexed, because this prevents the database from using the index; you'll explore indexes in more detail later in this lesson. Using a leading wildcard in predicates can also lead to a slowdown in the database. An example of this is using patterns that begin with a wildcard when combining the LIKE operator with the WHERE clause: MySQL can't make use of an index on a column during a search when it's matched against a pattern with a leading wildcard. Another method for optimizing queries involves using an inner join instead of an outer join where possible.
an outer join retrieves all records from both tables including rows that don't contain matching values this takes longer for MySQL to process the inner join is more efficient because it retrieves only the necessary data or matching records from both tables this helps to optimize your queries often when creating SQL queries you'll use the distinct clause to eliminate duplicate values or the union clause to combine multiple query results this can slow down the query because it must perform a sorting operation and eliminate duplicate records however if you use union all instead then this eliminates the need for a sorting operation and speeds up the execution process you should now know how to optimize MySQL select queries and be familiar with basic optimization guidelines well done lucky shrub have received large numbers of orders from their clients and need to query this data quickly and efficiently using select statements they must make sure their queries are optimized so that MySQL can compile and execute them efficiently first the sales department needs to find out which orders are arriving on September 12th they can write a select query that uses the where clause and the DATE_ADD function but sorting through the data to calculate delivery dates using the DATE_ADD function in the where clause places a lot of extra load on the database a more efficient method is to generate a custom column in the orders table called expected delivery date this column shows the expected date of each delivery so now lucky shrub just need to scan this column for all values that match September 12th and they no longer need to use a function the sales department's next task is to process an order for a customer with the surname of Ito first they need to find the customer's details in the clients table in the database one method is to use a select statement that combines a leading wildcard with the like operator but MySQL can't make use of an index when there's a leading wildcard the solution is to add a new column to the clients table using an alter table statement and call it reverse full name the reverse full name column contains the client names but reversed in other words the client's last name or surname is listed first then their first name you can run an update statement on the clients table to carry out this task next use the create index syntax to create an index on the new column don't worry about this syntax for now you'll explore it in more detail in a later video you can now make use of a trailing wildcard with the like operator on the reverse full name column to achieve the same result and you can still use the index finally the finance department need a report on all orders placed with the store they can extract this information by targeting the products and orders tables in the database usually this task could be completed by using an outer join query however this type of query also returns records that don't match from both tables even though they're not required a more efficient method of querying these tables is for lucky shrub to use an inner join that targets the shared product ID columns from both tables this returns only the matching records so it's a much more efficient way to execute the query you should now know how to optimize MySQL select queries and be familiar with basic optimization guidelines well done
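a rough sketch of the reversed-name approach just described, assuming a full_name column stored as "first last" (all identifiers and the SUBSTRING_INDEX split are illustrative assumptions):

-- add and populate a reversed-name column so the surname comes first
ALTER TABLE clients ADD COLUMN reverse_full_name VARCHAR(255);
UPDATE clients
SET reverse_full_name = CONCAT(SUBSTRING_INDEX(full_name, ' ', -1), ' ', SUBSTRING_INDEX(full_name, ' ', 1));

-- index the new column, then search with a trailing wildcard so the index can be used
CREATE INDEX idx_reverse_full_name ON clients (reverse_full_name);
SELECT full_name, contact_number FROM clients WHERE reverse_full_name LIKE 'Ito%';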
when performing data retrieval MySQL often scans an entire table even though it only needs to locate specific column values these queries take a lot of time to execute and place extra load on the database but MySQL can execute these data retrieval queries faster with the use of indexes that target specific column values in this video you'll learn how to use indexes to speed up data retrieval over at lucky shrub the sales department needs to retrieve the contact numbers of clients from the database but there are thousands of clients and phone numbers to sort through fortunately lucky shrub can make use of indexes to retrieve this data faster before you explore lucky shrub's process let's take a few moments to get a better understanding of indexes an index is a data structure that helps to maintain pointers that lead to sorted data although you can't see an index within a database it's still helpful to visualize it as a table that contains two columns one for pointers and another for sorted data for example lucky shrub can use an index that lists pointers in one column and the full names of clients as sorted data in a second column there are two types of indexes used in a MySQL database the first is a primary index also called a clustered index the second is a secondary or non-clustered index a primary index is an index that is stored within the table itself it's generated automatically once you create a table that contains a primary or unique key the index enforces the order of rows within the table itself a secondary index is created using the MySQL create index statement the syntax begins with create index then write the name of the index a commonly used approach is to write the name of the column you want to create the index on prefaced by idx for index next use the on keyword to assign the index to a table and finally add a pair of parentheses and write a list of columns that the index is to be used against an index can be created using one or more columns from a table but you should only create indexes on columns that you'll frequently perform searches against this is because when you update or insert data into the table that same data must also be added to or updated within the index which takes time lucky shrub can use a secondary index to optimize their SQL select query they can create an index on the full name column so that client details can be located faster now that you're more familiar with the concept of an index let's see if you can use your new knowledge to help lucky shrub lucky shrub need to find the contact number for the client Jane Delgado however there are many client names to search against and MySQL must scan all rows until it locates the correct name let's quickly review the approach that MySQL usually takes to complete this task first type the explain keyword to output data that explains how the database executed the query you can pinpoint potential bottlenecks and sub-optimal queries by reviewing the output then type a select statement that selects a contact number from the clients table that matches the value of Jane Delgado press enter to execute the query the query returns the contact number for Jane Delgado but as the output results have shown MySQL had to scan and filter 10 records before it found a matching value the possible keys column also shows a null value this means that there's no key or pointer that can help to make the search easier so the solution is for lucky shrub to speed up the search process by creating a secondary index first type create index followed by the index name idx full name next target the clients table using the on keyword and then place the full name column name in parentheses
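a minimal sketch of this, assuming the column is named full_name (the exact identifiers are illustrative):

-- before: EXPLAIN shows a full table scan and NULL under possible_keys
EXPLAIN SELECT contact_number FROM clients WHERE full_name = 'Jane Delgado';

-- create a secondary index on the frequently searched column
CREATE INDEX idx_full_name ON clients (full_name);

-- after: possible_keys lists idx_full_name and far fewer rows are examined
EXPLAIN SELECT contact_number FROM clients WHERE full_name = 'Jane Delgado';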
finally execute the statement to create the index to test the efficiency of the index you can create and execute another explain statement this time the output result shows that MySQL only had to locate one row and it was able to locate possible keys using the idx full name index this means that lucky shrub's SQL select query can now search the index and locate the data faster instead of searching through all records in the clients table and you should now be able to explain what an index is outline the differences between primary and secondary indexes and describe the process for creating an index well done how often have you encountered an error during a critical activity forcing you to begin the task from the beginning this can be particularly stressful when you're working on many related queries at the same time thankfully you can use MySQL transactions to roll your database back to a previous state in this video you'll learn how to manage database transactions with MySQL transaction statements lucky shrub are updating new sales in their orders table and the stock levels in their products table to carry out this transaction lucky shrub need to create and execute several different queries but they could encounter an error at any point in the process for example the internet connection could fail while inserting data into the table and this could result in invalid data or an incomplete transaction however if such an event were to occur lucky shrub can roll back their database and restore it to its original state using transaction commands so what are transactions in MySQL as you just saw with lucky shrub a transaction in MySQL is one or more queries that can be committed permanently to the database and the database can be rolled back to its original state if any of the queries fail to execute as required MySQL provides the following set of statements for managing database transactions start transaction begin or begin work commit and rollback let's explore each of these statements in more detail start transaction is the standard SQL statement for starting a transaction process this syntax marks the point that you'll return to should you decide to roll back the process so begin your syntax with start transaction then list your SQL statements underneath for example lucky shrub can begin their database updates with a start transaction statement and then follow this with a list of the required SQL queries however start transaction isn't the only way to begin a transaction with MySQL you can also use the begin or begin work aliases as alternative ways to initiate a transaction whichever method you choose once you've finished typing your SQL statements and you're happy with the result then it's time to commit the transaction to the database you can use the commit statement to commit the transaction changes permanently to the database just type the commit statement at the end of your code block but what if you encounter an error during your transaction like lucky shrub and their internet connectivity issues or maybe you typed incorrect code executed the wrong statement or entered incorrect data you can use the rollback command to roll back the current transaction and cancel the changes made to the database just add the rollback statement to the end of your SQL statements to return to your start transaction point however it's important to remember that the rollback statement must be enacted before you commit your SQL statements
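here's a minimal sketch of the whole pattern, with illustrative table, column and value names:

START TRANSACTION;

-- record the new sale (values are illustrative)
INSERT INTO orders (order_id, client_id, product_id, quantity, cost)
VALUES (101, 'Cl1', 'P1', 10, 450.00);

-- adjust the stock level for the ordered product
UPDATE products SET stock_quantity = stock_quantity - 10 WHERE product_id = 'P1';

-- if everything looks correct, make the changes permanent
COMMIT;

-- or, before committing, undo everything back to START TRANSACTION
-- ROLLBACK;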
once you've rolled back your code you then need to type the correct SQL statements and once you're happy with these statements type commit to commit the changes to the database so let's quickly recap the process begin your transaction with start transaction type your required SQL statements and use commit to commit your changes to the database and should you encounter any errors or other issues just use the rollback statement to return to your start transaction point now that you're familiar with MySQL transactions and the related statements let's see if you can help lucky shrub update the sales and stock levels in their database tables a client with an ID of cl1 has just placed an online order for 10 bags of artificial grass this item has a product ID of P1 in the products table there are currently 100 bags in stock this number must be updated to 90 once the client's order is processed first type start transaction to determine the point you can roll back to if an error occurs now you need to add the required SQL statements the first is an insert into statement this statement inserts a new set of values into the required columns in the orders table for the client's order then type an update statement that updates the number of bags of artificial grass in the products table by deducting 10 units from the current stock level the next step is to use a select statement that creates an inner join between the orders and products tables using the product ID key which is common to both tables execute this statement to check if the transaction was completed as you expected unfortunately it looks like there was a mistake these updates have been applied to the client with the ID of cl11 it seems you typed the wrong client ID in your code no problem you can restore the data by using the rollback statement now check the orders and products tables again using select statements all data has been restored to its original state so let's type start transaction once again followed by the same SQL statements as before only this time make sure to update the correct client details once you've completed your new set of SQL statements check that the output is as you expected great this time all details are correct you can now type commit to commit your changes to the database you should now be able to manage transactions in your MySQL databases using transaction statements great work when working with databases you'll often need to write complex SQL queries these can be difficult to manage however you can optimize these queries by compiling them into simple blocks using a Common Table expression or CTE in this video you'll learn how to make use of CTEs to optimize your database queries over at lucky shrub the finance department must calculate the average sale for each customer over the last three financial years to carry out this task lucky shrub need to create one select statement that contains several complicated SQL queries that include functions strings operators and clauses fortunately you can help lucky shrub to minimize the complexity of these queries using a CTE before you help them out let's explore the basics of a CTE a CTE is a method of optimizing complex database queries by compiling them into simple blocks of code these blocks can then be used to rewrite the query by calling the CTE when required this simplifies the query and makes it much easier to read and maintain a Common Table expression can be created for one or multiple queries it all depends on the requirements of your database let's begin with an exploration of the syntax for a single CTE query which uses the with clause to start the Common Table expression this is then followed by the name of the CTE this can be a custom name the as keyword is then used to associate the query within parentheses with the CTE name finally create a select statement to query the name of the Common Table expression
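a minimal sketch of that single-CTE shape, assuming an orders table with cost and order_date columns (the names are illustrative):

-- name a block of query logic with WITH ... AS, then select from it by name
WITH average_sale_2022 AS (
    SELECT AVG(cost) AS average_sale
    FROM orders
    WHERE YEAR(order_date) = 2022
)
SELECT * FROM average_sale_2022;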
the syntax for creating multiple queries is a bit more complex start your code block using the with clause then list the queries underneath the with clause make sure that each query has a unique name and is separated by a comma finally type your select statement to execute a CTE type its name after the select statement or you can execute multiple CTEs at once to execute more than one CTE add a select statement for each CTE place a union operator in between your statements to return data for all statements in the output result for example lucky shrub can use multiple queries to calculate their average sale let's explore lucky shrub's use of CTEs in more detail see if you can help them out using your new skills as you discovered earlier lucky shrub need to calculate the average sale over the last three financial years their current approach is to create three separate select statements one for each year the statements are combined using a union operator each statement calculates the average cost by using aggregate and string concatenation functions the data is extracted from the orders table and the conditions are specified using a where clause you can click enter to execute these statements and return the average sale for each year although they work as intended these queries are quite complex and difficult to manage but you can use a Common Table expression or CTE to improve their readability start by using the with clause then rewrite the first expression as average sales 2020 followed by the required logic you can use an as keyword to return it as average sale for improved readability then create the second and third expressions use the as keyword to associate the expression with the query and make sure that each expression is separated by a comma now you just need to type three select statements each statement uses an asterisk symbol to extract all data from each of the three expressions place union operators between the queries to combine the results finally press enter to execute the code the output is the same as the last query you executed however this time you've created a query that is more optimal all expressions are now contained within a simple block of code that is easy to read and maintain you should now be familiar with how to use a CTE to optimize your database nice work each time you create a statement it must be compiled and parsed by MySQL before it can be executed this process uses a lot of resources a more efficient method is to create a prepared statement that can be used repeatedly without being recompiled and re-parsed each time in other words you can create a prepared statement that MySQL compiles and parses just once before it's executed the statement functions as a template that holds unspecified values as parameters these values can then be added as required each time the statement is invoked and MySQL knows it's safe to execute this is a much more efficient and optimal way of executing statements without using valuable MySQL resources let's look at an example of how to create and execute a prepared statement from the lucky shrub database lucky shrub need to extract data on customer orders from their orders table let's help them carry out this task using an
optimized prepared statement the prepared statement must return the following information from the orders table in the database for each specified record client ID product ID quantity and cost the first step is to prepare the statement using the prepare command then type the statement name this can be a custom name in this instance you can call the statement get order statement then type the from keyword follow this syntax with a select statement in single quotation marks this select statement extracts the required data from the orders table against a specified value however you might have noticed that the value is currently unspecified because it requires an input value later you can enter any value you like to process the statement with a new argument you don't have to wait for my SQL to compile and parse the statement click enter to execute the statement the database Returns the output result a confirmation message that declares statement prepared the get order statement is now ready to use next you need to declare a variable named order ID and assign it a specific order ID let's use an ID of 10. now you can use this variable with the prepared statement first type the execute command followed by the statement name this command is used to execute prepared statements next type the using keyword followed by the variable name the using keyword specifies the variable value to be passed to the parameter in the prepared statement so this prepared statement is basically instructing MySQL to extract the client ID product ID quantity and cost data associated with the order ID 10 in the orders table click enter to execute the query and return the results although this prepared statement targeted the order id10 you could also Target any other order ID from the orders table the statement can extract the related data and it doesn't have to wait until it is compiled by MySQL you should now be able to create and execute a prepared statement in MySQL well done as a database engineer you need to work with many different types of data this can place a lot of pressure on mysql's resources as it compiles and parses through these different data types one method of optimizing mysql's use of resources is to store data using the Json or JavaScript object notation data type Json is an easy method of communicating data between different database systems and it stores data in a simple text format that doesn't require any special parsing here's an example of a line of Json code from the lucky shrub database lucky shrub used this line of code to store properties and assign them specific values this line of code is placed within a pair of single quotation marks and curly braces within the insert into statement each property and value are typed in double quotation marks and separated by a colon these are known as key value pairs each pairing is separated by a comma let's explore how lucky shrub makes use of this MySQL Json code in their database lucky shrub need to track the actions of clients who use the online store as they browse lucky shrub products and place orders lucky shrub can capture this information and store it in Json format in MySQL MySQL can then quickly and efficiently process this data first create a table called activity that stores client activity then create two columns the First Column is called activity ID and provides a unique identifier for each client activity using an integer data type the second column is called properties this is a Json data type column it stores the properties of each 
client activity like client ID and product ID it also records if the client has placed an order by placing either a true or false value next to the order property the next step is to populate the table with data create three activity IDs and then log client activities using Json code for three client IDs two of these clients have ordered products one client has not ordered a product now you need to retrieve data from the properties column since you're working with the Json data type you need to retrieve or access this data using a column path operator type a select statement that selects the activity ID and properties columns for the properties columns use the dollar sign symbol and Dot notation to denote each element inside the Json property place the column path operator between the columns and their elements finally execute the statement to return the output results from the activity table you've now helped lucky shrub to create an optimal method of storing and accessing data from the activity table in their database you should now be familiar with how to use Json in MySQL to optimize the database great work congratulations you've reached the end of the second module in this course let's take a moment to recap some of the key skills you've gained in this module's lessons in the first lesson you learned how to optimize database queries and you now understand that database optimization is the process of maximizing the speed and efficiency of the databases performance when executing queries you know that optimization focuses on two different kinds of statements data retrieval or select statements which return data from the database and data change statements used to alter data within the database you learn that by optimizing a database you can process data much more quickly and efficiently during this lesson you also learned how to implement different optimization techniques on select queries including targeting only required columns in your select Clause avoiding the use of functions in predicates and avoiding the use of a leading wild card in predicates you also learn to use inner join where possible and make use of distinct and Union Clauses only when necessary and you also explored the use of indexes to help maintain pointers that lead to sorted data during your study of indexes you learned that there are two types of indexes the first is a primary index also called a clustered index the second is a secondary or non-clustered index and you then review the Syntax for creating a secondary Index this involves using a create index statement a custom index name and the on keyword to Target the required table and columns in lesson two you explore further optimization techniques now that you've completed this lesson you're able to make use of MySQL transaction statements to manage queries and roll the database back to its original state if any of the queries fail to execute as required you can manage database transactions using statements like start transaction begin or Begin work commit and roll back you can start your transaction using start transaction and you know that if you encounter an error with your queries you can add the rollback statement to the end of your SQL statements to return to your start transaction point as you work through this lesson you also learned how to optimize select queries using MySQL Common Table Expressions you can now use a CTE to compile complex queries into simple blocks of code these blocks can then be used to rewrite the query by calling the CTE when 
required this simplifies the query and makes it much easier to read and maintain start your code block using the with Clause then list the queries underneath finally type your select statement followed by the query name you can also execute multiple CTE at once using the union operator between statements you then explored MySQL prepared statements you can now make use of prepared statements to limit the number of times MySQL must compile and parsecode and you discovered how to interact with the mySQL database using the Json data type as you worked through these lessons you also enhanced your understanding of the topics through Reading items tested your knowledge of optimization techniques in quiz environments and demonstrated your ability to make use of MySQL optimization techniques in a lab environment having completed this module you should now be able to make use of a wide range of database optimization techniques you can deploy these techniques to make sure that your statements are compiled parsed and executed quickly and efficiently in MySQL great work at this stage of the course you should be familiar with the role that data analysis plays in databases but it's also important to understand the close relationship between data analysis and data analytics with data analytics you can take the data collected during data analysis and convert it into useful information that can inform future business decisions over the next few minutes you'll explore how this works and learn about different types of data analysis lucky shrub have had a great holiday sales season they've collected a lot of data around their sales they now need to use this data to help plan for their next sales period lucky shrub can use data analytics and related variables to make sense of this data and plan effectively for the future so lucky shrubs use of data provides a good base for understanding what the term data analytics means data analytics involves taking data analysis a step further by converting and processing the collected data into useful and meaningful information this information is then used to inform and make predictions about future events data analytics also involves the use of special tools which you'll explore briefly later in this lesson so your next question is most likely how do organizations make use of data analytics over at lucky shrub they can make use of their data with data analytics tools to predict what products sell best and should be kept in stock what kind of special offers attract the most customers and how best to manage their online sales however before you can perform data analytics you first need to analyze and generate insights into the data you've collected this data is collected through data analysis and SQL queries there are different types of data analysis that can be performed within a database let's take a few moments to explore these different types of data analysis and learn how they inform data analytics descriptive data analysis presents data in a descriptive format in other words it describes what happened you can use the data extracted from a database to explain a particular event for example lucky shrub can analyze their sales over a specific period they can then describe the period using this data by referring to top selling products and profits that they made exploratory data analysis is the attempt to establish a relationship between different variables in a database in other words is there a relationship between variables A and B or can you establish a link between 
variables X and Y over at lucky shrub they use exploratory data analysis to determine if there's any correlation between an increase in sales for a specific product and the season in which it sold like an increase in the sale of trees during the holiday season inferential data analysis focuses on a small sample of data to make inferences about a larger data population and draw General conclusions lucky shrub often make use of inferential data analysis their data shows an increase in the sale of barbecue products over the summer months so they can infer that this is the best period in which to sell these Goods predictive data analysis uses existing or Legacy data to identify paradigms and patterns these patterns can then be used to make predictions about future performance for example lucky shrubs data show that the sale of gardening tools increases when these items are discounted they can use this data to predict that further discounts will lead to more sales and there's also causal data analysis which explores the cause and effect of relationships between different variables did variable a cause b or did variable X have any effect on why lucky shrubs data showed that many customers who bought gardening tools also bought outdoor lighting Products causal data analysis is a great way for a lucky shrub to try and identify the relationship between these purchases finally know that database Engineers often use the terms data analysis and data analytics interchangeably although separate Concepts they are closely linked you can't have data analytics without data analysis so be aware of this fact when working with data analytics you should Now understand the concept of data analytics and be able to recognize different types of data analysis well done at this stage in the lesson you should be familiar with the concepts of data analytics and data analysis and the differences between them over the next few minutes you'll explore the relationship between my SQL and data analysis along with the benefits and limitations of MySQL as a data analytics tool lucky shrub make use of my sequel to host their data and Carry Out data analysis lucky shrubs database processes a large volume of data every day including online orders client information and data around the store's products with mySQL lucky shrub have an effective set of tools to host and work with this data over the next few minutes you'll discover how lucky shrub can make use of MySQL to analyze this data as you've progressed through these courses you've learned how powerful MySQL is and another advantage of MySQL is that it offers database Engineers the tools required to perform data analysis on the data in their database however MySQL also has its limitations when compared to other more Advanced Data analytical tools let's take a few moments to explore these MySQL databases are built using a relational database model as you learned in a previous course relational models structured data sets in related tables and these related tables make it easy to access retrieve and analyze related information with mySQL lucky shrub can connect their database tables using foreign keys this means that they can use one table to locate information in another for for example the orders and products tables are both connected through the product ID key this relationship helps lucky shrub identify the products that each client ordered MySQL is also a free open source database management system so there's no cost or intellectual property to consider when managing a 
database with mySQL this is of great benefit to Lucky shrub because it reduces the cost of doing business and because of its capacity and accessibility MySQL is a very widely used database management system large numbers of businesses governments and other organizations make use of MySQL to collect store and process data this makes it easier for these organizations to communicate data and improve their data analytics for example lucky shrub suppliers also use MySQL to manage their data so lucky shrub can keep their suppliers up to date with information on their stock Levels by sending them data from the products table in their database however despite all these advantages MySQL also comes with limitations mysql's capacity to perform data analysis is much more limited than other more advanced data analytics tools with these other tools database Engineers can perform much more complex data analysis backed by powerful artificial intelligence my sequel also lacks data visualization features other database analytics tools offer database Engineers visualization features like bar charts graphs and Maps these tools are a much more effective way of communicating information than just presenting data in the form of tables with these visualization tools lucky shrub could quickly spot Trends in their data and identify any major issues so as you've just learned there are a wide range of benefits to using MySQL as a database management system it's free and open source holds large amounts of data in a relational system and is widely used across various organizations however its data analysis and visualization capabilities are limited when compared to other more Advanced Data analytical tools analyzing data in a mySQL database requires a good understanding of how to access data and extract relevant information using SQL queries you should already be familiar with many of these like sub queries joins and Views over the next few minutes you'll learn about the role that these SQL queries play in the data analysis process over at lucky shrub they need to perform data analysis on the client orders within their database however the types of data analysis they need to perform are very different they include simple data extraction tasks using basic SQL queries and tasks that involve Advanced sub queries joining tables and creating virtual ones once these tasks are completed and the required data is extracted they can analyze it and prepare it for data analytics let's explore the relationship between SQL queries and the data analysis process to find out more about how they support lucky shrubs business as you should already know data analysis involves collecting and presenting the data in your database the data can then be used to gather further insights to support the data analytics process in MySQL data can be collected from a database using a wide range of SQL queries at this stage of the course you should be familiar with many of these SQL queries for example you can extract or collect data using joins to join two tables together sub queries to create a query within a query and Views to create virtual tables you can also use functions to perform sophisticated operations and return different results and filter required data using operators so the basic process for performing data analysis in MySQL using SQL queries works as follows you can extract the required data from your database using a wide range of one or more SQL queries you can then use further SQL queries to present a description of the results of 
your data analysis and you can then gain further insight from these initial results using data analytics let's explore an example of this process lucky shrub need a list of all products that sold in quantities of 100 items or more they can extract this data using a sub query that targets the tables that hold the data and filters the required results once they execute the sub query MySQL returns the records that they need a list of the top selling products once lucky shrub identify their top selling products they can then use different types of SQL queries and data analytics tools to generate further insights and plan for the business's future all these insights and potential strategies are made possible by the data collected through SQL queries for example now that they know what their best-selling products are they can continue to buy more of them and they can buy less of the products that don't sell as well they could even offer discounts on certain items to try and increase sales so to recap this process lucky shrub create a SQL sub query to target the data they require they then extract this data from the database through data analysis and this data can then be explored further using more sophisticated queries to generate business insights now that you're familiar with this process it's time to put it into action as you discovered a few moments ago lucky shrub need to perform data analysis on their client orders let's see if you can help them out the data that lucky shrub require is in the orders table in their database they've extracted the data from the table but they now need to target specific data that provides insight into the performance of the business for example lucky shrub need a list of all products that sold 10 items or more you can extract this data with a select statement that targets the orders table's product ID column then add a where clause that targets any product ID that has sales data next add a sub query that selects product IDs from the orders table that sold in a quantity equal to or greater than 10.
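a minimal sketch of that query, with illustrative column names:

-- products whose orders include a quantity of 10 or more
SELECT product_id
FROM orders
WHERE product_id IN (
    SELECT product_id
    FROM orders
    WHERE quantity >= 10
);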
execute the statement to extract the data MySQL outputs a table that displays the required records joins are also a useful method of performing data analysis in MySQL you can use different kinds of joins to explore the relationships that exist between data lucky shrub need to analyze and extract data on their clients and the orders they placed over the last 10 days however the data exists in two separate tables you can help them analyze the data in these tables using joins write a select statement to join the required columns from the orders and products tables using an inner join then use the between keyword to filter the client IDs and related orders from the required dates execute the statement to show the required data views are also helpful for analyzing data they can be used to create virtual tables that focus on specific types of data lucky shrub need to analyze their sales data and extract the top five best-selling products you can use a select statement and a virtual table or view to help lucky shrub analyze their data for this information write a create view statement and call the new virtual table top products then write a select statement that uses an inner join to combine the required columns from the products and orders tables these tables hold all the sales data you need to help lucky shrub carry out their analysis finally use an order by clause to order the extracted records in descending order then execute the statement the statement creates a new virtual table called top products that shows the name quantity and cost of the top five best-selling products you can use a select statement to extract all data in this virtual table to perform further data analysis all these insights and potential strategies are made possible by the data collected through SQL queries and what SQL queries you use all depends on what data you need to extract and analyze and what you want to achieve from this analysis you should now be familiar with performing data analysis in MySQL using SQL queries great work when analyzing data there may be times you need to extract all records from two separate tables you could use a join but this would just return matching records sometimes you'll also need records that don't match so the solution is to emulate a full outer join this method extracts all records even those that don't match in this video you'll learn how to emulate the full outer join or full join in MySQL to extract data from tables lucky shrub need a list of all their new orders and the clients who placed them they also need the data of clients who didn't place orders this data is stored in two tables clients and orders MySQL supports inner left and right joins but these won't return the required data from both tables so lucky shrub need to emulate a full join to extract this data you can help lucky shrub to complete this task but first let's find out more about what a full join is and how it works in SQL a full outer join returns all records from a left and a right table when it identifies a match between the two this includes records that match and those that don't however MySQL doesn't support the full outer join so you need to emulate it using a combination of the left join and the right join you also need to use the union all operator to return duplicate records should they exist alternatively you could use the union operator to retrieve unique records only let's take a few moments to explore the syntax for these methods
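as a preview of the pattern the walkthrough below builds up, here is a minimal sketch with illustrative column names:

-- emulate a FULL OUTER JOIN by combining a LEFT JOIN and a RIGHT JOIN
SELECT c.client_id, c.full_name, o.order_id, o.cost
FROM clients c
LEFT JOIN orders o ON c.client_id = o.client_id
UNION          -- swap in UNION ALL to keep duplicate rows
SELECT c.client_id, c.full_name, o.order_id, o.cost
FROM clients c
RIGHT JOIN orders o ON c.client_id = o.client_id;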
here's how to emulate a full outer join using a union all operator first type a select command followed by the names of the columns that you require then use a from clause to target the first table which is your left table next use a left join clause to join the first table with the second table the right table then use an on keyword and dot notation to equate the matching columns between the two tables now that you've scripted the left join you need to create the right join but you also need to combine these joins using a method that returns all duplicate records it's at this point in your syntax that you can add the union all operator once you've added the operator create the right join as you should already know the syntax is almost the same as the left join statement the key difference is that you must use a right join clause when executed the union all statement returns all duplicate records should any exist but what if you only want unique records to retrieve unique records only you can use the union operator the syntax is largely the same you need to script a right join and a left join statement as before but this time place the union operator in between the two statements instead of union all then when executed the statement only returns unique records for example lucky shrub can use the union operator syntax to return unique records from the clients and orders tables let's see if you can help lucky shrub to simulate a full outer join using the union operator as you learned earlier lucky shrub need records of all new orders from the orders table and the records of the clients who placed these orders from the clients table they also need the data of clients who didn't place orders start with a select statement then target the following required columns from the clients table using dot notation client ID full name and contact number then target the following columns from the orders table order ID cost and date then use the from clause to identify clients as the left table join it to the orders table with a left join clause the next step is to use the on keyword to equate client ID as the matching column between both tables then add the union operator so that the syntax retrieves unique records only now it's time to script the right join statement again the syntax is mostly the same as the left join statement just use a right join clause instead of a left join one finally press enter to execute the statement the output shows all client IDs with the related order IDs it also shows matching order IDs and client IDs along with IDs that don't have a match lucky shrub now have all the records that they require from their database and you should be familiar with how to emulate a full outer join in MySQL using the union and union all operators good work at this stage of the module you should now be familiar with the data analysis process and how the results can inform data analytics now let's look at an example of how to perform data analysis in MySQL by querying multiple tables using joins lucky shrub need to identify all clients who bought 10 items or more from a specific product line so that they can send them a special offer to make more purchases the clients must have made their purchases after the 5th of September 2020 and there must be 50 units or more of the product currently available in stock so that all special offers can be redeemed there are three tables in the database that contain the required data the orders table with information on each order the clients table which contains key information about each client and the products table which holds the
data on all products in the store you can help lucky shrub to query these Tables by using an inner join to Target the following data the client ID and contact number columns from the clients table the order ID quantity and date columns from the orders table and the number of items column from the products table let's get started begin with a select statement then use dot notation to identify the required columns from each table next use an as keyword to create an alias for the product Table's number of items column rename it as items in stock then Target the clients table with a from keyword and use the inner join Clause to join it to the orders and products tables within a pair of parentheses the first join is created between the clients and orders table using in the client ID column which exists in both tables the second join is created between the orders and products tables using their respective product ID columns next add a where clause and parentheses within the parentheses State the following customers must have purchased 10 or more units of the item all purchases must have been made after the 5th of September 2020 and there must be at least 50 units of the item currently in stock finally click enter to execute the query my SQL extracts the required data from the tables and displays it on screen you've now performed data analysis on three tables within the lucky shrub database and through data analytics you can use this data to identify clients that you can send special offers to great work congratulations you've reached the end of the third module in this course let's take a moment to recap on some of the key skills you've gained in this module's lessons in the first lesson you learned how to evaluate MySQL for data analysis you are now able to explain that data analytics involves taking data analysis a step further by converting and processing the collected data into useful and meaningful information this information is then used to inform and make predictions about future events you then learned about the main types of data analysis these are descriptive data analysis which presents data in a descriptive format exploratory data analysis used to establish a relationship between different variables inferential data analysis used to make inferences about a larger data population and draw General conclusions predictive data analysis which identify paradigms and patterns and there's also causal data analysis which explores the cause and effect of relationships between different variables you also learned about the benefits of using MySQL as a database management system particularly when it comes to supporting decision makers in an organization the primary benefits of MySQL are that it's free and open source it holds large amounts of data in a relational system it's widely used across various organizations and it also offers database Engineers the tools required to perform data analysis on the data in their database and you also know that there are limitations around mysql's ability to perform data analysis for example it's less powerful than other tools and lacks data visualization tools in lesson two you learned how to perform data analysis in MySQL now that you've completed this lesson you should be able to perform basic data analysis using SQL queries in this process you extract the required data from your database using a wide range of one or more SQL queries these SQL queries are used to present a description of the results of your data analysis and to gain further Insight from 
these initial results using data analytics you then explored the different kinds of SQL queries that can be used to perform data analysis you can emulate a full outer join to return all records from a left and a right table when it identifies a match between the two this includes records that match and those that don't you also learned that you could use the union all operator to return duplicate records or the union operator to retrieve unique records only however MySQL doesn't support the full outer join so you must emulate it using a left and right join you can also use functions to perform sophisticated operations and return different results and filter required data using operators you can make use of sub queries to create a query within a query and deploy views to create virtual tables and as you worked through these lessons you also enhanced your understanding of the topics through reading items tested your knowledge of optimization techniques in quiz environments and demonstrated your ability to make use of data analysis techniques in a lab environment having completed this module you should now be able to evaluate MySQL for data analysis and perform data analysis in MySQL great work I look forward to guiding you through the next module in this course in this course you learned about Advanced MySQL topics let's take a few moments to recap the key lessons you encountered during this course in the opening lesson you received an introduction to Advanced MySQL topics during this introduction you learned about Advanced database engineering you discovered how meta makes use of advanced database engineering techniques and you learned how to make the most of the content in this course to ensure that you succeed in your goals you then moved on to the next lesson in which you learned about functions and stored procedures in this lesson you learned how to create and work with functions and both basic and complex stored procedures in MySQL so that you can reuse or invoke code blocks to perform specific operations you then learned how to make use of variables and parameters to create more complex stored functions and procedures in MySQL and you learned how to develop user-defined functions for when MySQL's built-in functions don't meet the needs of your project you then moved on to the final lesson in this module in which you learned how to use MySQL triggers and events and automate database tasks in this lesson you discovered that a MySQL trigger is a set of actions available in the form of a stored program this set of actions is invoked automatically when certain events occur you explored different types of MySQL triggers like insert update and delete and you learned how each type can be used to control the behavior of your triggers and you also developed an understanding of how you can make use of scheduled events to ensure that your database tasks and events are completed at specific times you also reviewed other aspects of the syntax used to create a MySQL trigger you're now familiar with the create trigger command and defining the trigger name and type and specifying the logic of a trigger by enclosing multiple statements within a begin end block you then moved on to the second module in which you learned about the core rules and guidelines for database optimization in the first lesson of this module you explored how to optimize database queries and you developed an understanding of the concept of database optimization and the advantages it brings to a MySQL database you also reviewed
techniques for optimizing database select statements so that they're executed quickly and efficiently for example targeting required columns or avoiding the use of complex functions and you also learned how to work with indexes in MySQL to speed up the performance of data retrieval queries the second and final lesson in this module presented an overview of further optimization techniques you began by learning how to use MySQL transaction statements to manage database transactions you then discovered how you can use common table expressions to manage complex SQL queries by compiling them into single blocks of code you learned how to make use of prepared statements to limit the number of times MySQL must compile and parse code and you discovered how to interact with a MySQL database using the Json data type in the third module you explored the relationship between MySQL and data analytics the first lesson in this module focused on evaluating MySQL for data analysis first you developed an understanding of the relationship between database analytics and MySQL and then you discovered how to make use of data collected during data analysis by converting it into useful information that can inform future decisions you also explored the different types of data analysis that can be performed within a database you then moved on to learn about the relationship between MySQL and data analysis including the benefits and limitations of MySQL as a data analytics tool in the second lesson of this module you learned how to perform data analysis in MySQL using SQL queries like joins sub queries and views you then explored how to emulate a full outer join in MySQL to extract all records from two tables including those that don't match and finally you learned how to extract data from multiple tables using the join method you've reached the end of this course recap it's now time to try out what you've learned in the graded assessment good luck you've worked hard to get here and developed a lot of new skills along the way you're making great progress on your MySQL journey and you should now understand Advanced topics in MySQL you are able to demonstrate some of this learning along with your practical MySQL skill set in the lab project you should now be able to deploy functions and triggers in a MySQL environment optimize a database and perform data analysis using SQL queries the graded assessment will then further test your knowledge of these skills however there's still more for you to learn so if you found this course helpful and want to discover more then why not register for the next one you'll continue to develop your skill set during each of the database engineer courses in the final lab you'll apply everything you've learned to create your own fully functional database system whether you're just starting out as a technical professional a student or a business user the course and projects prove your knowledge of the value and capabilities of database systems the lab consolidates your abilities with the practical application of your skills but the lab also has another important benefit it means that you'll have a fully operational database that you can reference within your portfolio this serves to demonstrate your skills to potential employers and not only does it show employers that you are self-driven and Innovative but it also speaks volumes about you as an individual as well as your newly obtained knowledge and once you've completed all the courses in this specialization you'll receive a certificate in
database engineering the certificate can also be used as a progression to other role-based certificates depending on your goals you may choose to go deep with Advanced role-based certificates or take other fundamental courses once you earn this certificate thank you it's been a pleasure to embark on this journey of Discovery with you best of luck in the future welcome to the programming with python course python is a versatile high-level programming language available on multiple platforms it's a language that's used by companies large and small to build a diverse range of applications areas that python is used in include web development data analytics and business forecasting the Python programming language syntax is very similar to English it's intuitive and beginners will often quickly understand what's going on it's also a great choice for experienced programmers who will appreciate its power and adaptability because python is a very popular software development tool it's important for you as a new developer to know how it works and to know how to code with it this course covers the key points you need to know to begin programming in Python starting with module 1 you'll get started with a Python programming language an Associated foundational Concepts you'll learn to recognize common applications of python and you'll be able to explain foundational software engineering Concepts use operators to program output in Python and use control flow and Loops to solve a coding problem and in module 2 you'll build on what you've learned in the first module by learning about the Core Concepts that underpin the Python programming language variables and different data types in Python and you'll get to use control flow and Loops to execute code under specific conditions this module also introduces you to working with functions and data structures in Python recognizing errors determining their causes and deciding how to handle them and creating reading and writing data in files in module 3 you'll learn about the programming paradigms of functional and object-oriented programming you'll use functions to explore algorithmic thinking furthermore you'll learn how to work with objects classes and methods in Python then you move on to dealing with modules packages libraries and tools in module 4. 
here you'll learn how to find import and use popular python modules and packages leverage powerful tools to optimize the programming workflow you'll discover different types of testing and their features and you'll be able to use testing tools to write a test module 5 is the graded assessment where you get to demonstrate your python coding ability you'll be able to exercise the skills and knowledge from this course and you'll have the opportunity to reflect on the course content and the learning path that lies ahead of you you may have encountered some Concepts or terminology in this video that you don't fully understand don't worry about that right now this course is designed to address all such issues and give you a solid coding foundation in Python enjoy the course the developer that wrote python loved Monty Python's Flying Circus a2e show um instead of the snake for python he thought the name python was short in cryptic and he made it the language name [Music] name hi I'm Leila rizby I am a back-end software engineer on Instagram calling in San Francisco I've been coding in Python for 10 years it was the first programming language I learned I use Python every single day for my job at meta it's my favorite language to write in because it's so easy to use and simple the first application I had to code in Python was to create a calculator since python was the first language I learned in my first application in Python was a little tricky for me to build learning how to indent right how to do spacing learning the syntax and learning what Loops are and all these core computer science Concepts is very hard for me but I tried for a while and it worked out some important ways that you have interacted with python in your day-to-day activities likely include using Instagram using Facebook using Google or Spotify python is such an ubiquitous language that you've likely used it regardless of whether or not you know you've used it python is also used for tensorflow which is a machine learning framework that Airbnb uses to classify images and some healthcare companies use to classify MRI data at meta python is used for the Instagram background it is used for ads machine learning algorithms it is also used for our production Engineers who keep our services alive and running you should go through the process of learning python even though it might be challenging because it's an easier language to learn it's simple it has a lot of libraries that support it so it also makes it easier to build more and more features because there's a lot of Engineers that use it already so it makes it easier to develop features quickly and you can see results a lot faster with python thanks for watching the video with us today and good luck on your journey as a software engineer computers and their programs are integrated into our lives at a scale we could not have imagined over the past 20 years we have seen dramatic jumps in technological gains in areas of distributed computing cloud computing and AI improvements such as voice and face recognition and self-driving cars in the next few minutes let me give you a brief overview of the history of programming and you will also learn the basics of how programming works Computing dates quite far back in our history Charles Babbage in 1822 while studying at Cambridge University in Britain was working on bettering calculating devices such as navigation charts and astronomical tables which at the time were used by many ships at sea Babbage came to the realization that all these 
calculating devices contained various amounts of human error and he wondered if there was a better solution his solution was the Difference Engine the Difference Engine used mechanical gears with numbers 0 to 9 etched onto their gaps separated by Gear's teeth its key function was to carry out one operation that was computed by manually moving hand cranks until the final answer was revealed after building a working prototype Babbage spent many years working on further improving his designs and constructing improved versions of the original idea he created another device called the different Engine 2 but ultimately produced a new and better concept called the analytical engine the analytical engine is widely accepted as being the basics of modern day computing babbage's friend Ada Lovelace published a document describing how the analytical engine could perform a sequence of calculations which is essentially what a computer program does however the analytical engine was never completed and Babbage like a lot of developers did not invest in good documentation with the historical side of things covered let's now delve into understanding what programming is before I explain programming it's helpful to understand how computers work at the most fundamental level computers only understand binary code which consists of two digits zero and one this may seem quite strange at first but with a little explanation it will all make sense zero and one relate to different electrical States similar to a light switch zero is equal to off and one is equal to on for example in programming when you calculate numbers cost or any arithmetic you mostly use decimal numbers every program written needs to be converted to binary code or machine code an example of decibel to Binary conversion is decimal one is binary one decimal 2 is binary one zero decimal three is binary one one and so on a computer represents the binary code by using tiny electrical conductors called transistors these transistors are housed inside the central processing units CPU which is essentially the brain of a computer when a program is written using any type of language it needs to be compiled or interpreted the outcome is to turn readable programming code for us into readable programming code for the computer it's essentially extremely hard for humans to read and understand binary and therefore using it is error prone it's far easier for us to read and write programming languages so what is programming programming is the ability to provide a computer with a set of instructions in a particular language then it can understand and perform those operations or tasks in other words you need to tell the computer what you want it to do in a format and language it can understand programming is a skill the more you practice and learn the better you become at first it can take quite a while to write straightforward programs as you progress you'll become more familiar with the language and how the logic and conditions should be applied programming is also a creative skill that's because you can write computer programs to solve problems in many different ways and that brings us to the end of this video you now know the brief history of programming and how typical computer programming works would you like to be able to program on different platforms for example Windows Mac and Linux in an easy syntax similar to the English language then python is your solution it's a high level programming language that works on many different platforms by the end of this 
video you'll know the benefits of learning Python and understand where to use python python was created by Guido van Rossum and released in 1991. it was designed to be readable and takes a lot of similarities between the English language and Mathematics since its release it has gained greatly in popularity and supports a rich selection of Frameworks and libraries at present it's currently one of the most popular programming languages to learn today it's widely used in all areas of business such as web development artificial intelligence machine learning data analytics and various different programming applications python is also very easy to learn and get started with given that the syntax resembles the English language it makes it easier to read and decipher programs that are written in Python also require less code in comparison to programming languages such as C or Java one of Python's key advantages is that it makes developers very productive and allows projects to be completed more quickly creating good software that is used by many is hard and very time consuming the simplistic nature of python abstracts a lot of complexity away from the developer to allow them to focus on the task at hand given the language is quite easy to understand and pick up it can be an easier route to market for new programmers starting out to get something produced in much less time compared to some other languages python has a much easier learning curve it lends well to the philosophy of write less do more now that you can describe the benefits of learning Python and where it can be used it's also good to know that python developers are in high demand becoming a python developer makes for a good career choice when you start using python it's important to make sure that it works correctly on your operating system with your chosen integrated development environment or IDE in this case Visual Studio code or vs code so an essential step in using this software is to make sure that the right version of python is used as The Interpreter when running vs code in this video I will demonstrate how to set up vs code on Windows and make sure that it points the correct python interpreter so I start by opening the vs code editor to do this I click on the Windows icon in the taskbar which brings up a menu I then type Visual Studio code in the search bar the best match for the search is the visual studio code app which I click on to open vs code is now open next I select get started with python development this is a useful guide to setting up python on the vs code IDE the first step in the guide is to install python I already have python installed and I can verify it by typing python version in the terminal to open the terminal select the terminal tab on the top menu and choose new terminal after I press enter it displays python version 3.10 which is correct the second step in the guide is to create a python file I do this by clicking on the create a python file option in the guide menu and then clicking on the create python file button that appears next I put in a print statement by typing print hello world I will explain what a print is at a later stage but for now you just need to know that it's for printing out a value within the terminal itself now I save this as a python file in my root level directory by clicking on file save as and then entering the file name hello world Dot py is the file extension I have to use when saving a python file the next step in the guide is to select a python interpreter and I click on 
this option in the guide menu and then I click on the select python interpreter button that appears which brings up all the versions of python I have installed I do this because I want to make sure that when I run the python script it will choose the correct interpreter the version that comes up is python version 3.10 and I set this as The Interpreter because it is the most recent version to test and verify that everything is working correctly I have to run the python file in the top right of the screen you'll notice a play button I close the guide so that it displays better the play button has a drop down menu that has the option for running the python file or running it in debug I click on the Run python file option note in the terminal window that it has run the file using python 3.10 as The Interpreter and I get the output of hello world and that means that I'm now set up to use Python directly in the IDE so I can run and debug my scripts you now know how to create and save a python file and how to select the correct python interpreter to run your files in vs code when you start using python it's important to make sure that it works correctly on your operating system with your chosen integrated developer environment or IDE in this case Visual Studio code or vs code an essential step in using the software is to make sure that the right version of python is used as The Interpreter when running vs code in this video I will demonstrate how to set up vs code and make sure that it points to the correct python interpreter on my Mac when I open Launchpad I type in visual or Vis for short and it opens vs code vs code provides a walkthrough guide for setting up python this can be found on the welcome screen however if I don't see it in the main view I can click on more and then I click on get started with python development it brings up the guide I can use to verify that I have everything set up correctly next I check that python is installed by opening the terminal window I click on the bottom left of the screen and then on the terminal tab then I type python space Dash Dash and the word version when I hit enter notice that it returns python version 3.10 which is correct by default Mac comes with python version 2.7 but this is not the version I want to use instead I want to use the most recent version next I create a simple python file to do this I click on create python file in the get started with python development guide I put in a print statement by typing print hello world I will explain what a print is at a later stage but for now you just need to know that it's for printing out a value within the terminal itself now I save it by clicking on file save as and then entering the file name hello underscore World dot p y p y or Pi is the file extension I have to use when saving a python file now I have to check that I can run that file and to do that I need to set my python interpreter so I go to the get starter with python development guide again and I click on select python interpreter which opens up the python interpreter screen here I can choose which version to use from the drop down menu notice that I have different versions of python installed but the one I want to use is the one I installed through Homebrew and it is also the recommended one to bring up the python interpreter screen without the get started guide I can press the command key shift and the letter p then I type in Python followed by a colon and I choose python select interpreter from the top of the menu then I press enter 
When selecting the version of python it's best to choose the recommended one but make sure that it's the latest version that you have installed on your operating system next I'll run the file first I close the get started guide then in the top right of the screen you'll notice a play button with a drop down menu which has the option for running the python file or running it in debug I click on the Run python file option and then I click on the play button note that the output appears in the terminal I have now validated that I have vs code pointing to the correct python version that I want to use and that I am able to run and execute scripts directly on the IDE you now know how to create and save a python file and how to select the correct python interpreter to run your files in vs code did you know that python can be run directly in the command line on Windows or terminal in mac and Linux in this video you'll learn more about the core differences of running code from the command line by the IDE as well as exploring ways in which you can run programs through python let's explore the two main ways to run Python's programs the first way is using the python shell and the second way is to run a python file directly from the command line terminal the python shell is useful for running and testing small scripts for example it allows you to run code without the need for creating new DOT py files you start by adding Snippets of code that you can run directly in the shell let's explore the second main way to run Python's programs which is running a python file directly from the command line or terminal note that any file that has the file extension of dot py can be run by the following command for example type python then a space and then type the file name with a DOT py extension vs code is a better choice than using the python shell or running it directly from the terminal because besides including both of these options it comes with a plethora of additional improvements that make coding and python a better experience Visual Studio code also offers features such as Auto completion debugging and code syntax highlighting white space and indentation helpers I'm now going to demonstrate the different ways that you can run python programs in vs code there are two options to run programs through python one is to run directly from the command line or The Terminal if you're on Mac and then the other option is run directly from the IDE which in this case is Visual Studio code let's find out how to do this first I open the terminal window or command line window from within the IDE by clicking on the terminal menu and selecting new terminal now I run the hello world dot py script directly from the terminal so I can run this by typing the command Python and then the name of the file helloworlds.ky followed by the name of the file then hit the enter key the result is hello world there's a second option using the terminal which is entering it into the python shell so if I only type the python hit the enter key it opens a python shell approach here I can write code and run it directly within the terminal window I can for example use the same code that I have above the print hello world code I hit the enter key and that will print out the words hello worlds directly in the shell let's say I want to exit from the shell I then type in the word exit as this is a function I must add the parenthesis hit the enter key and now I'm back in the command window to do the same from within the IDE I just close the terminal 
window here I can run any Python scripts from the IDE directly by using the buttons in the top right hand corner of the screen by selecting from the drop down either the Run python file or the debug python file now I click on the Run button and the terminal should open automatically the result is hello world that is printed you have now explored the two options that are available for running python code directly from the terminal or command line and from the IDE that brings us to the end of this video you now know the core differences of running code from the command line by the IDE a also able to demonstrate ways in which you can run programs through python in this video you'll explore python syntax and learn how both white space and indentation can impact a program when used incorrectly in vs code I create a new file called python underscore syntax.py I start by using a print statement to generate a line of text don't worry if you're not familiar with print or variable declaration at this point as that will be covered later in the course I type print followed by the string hello when I click on the Run button the text hello appears in the terminal panel now let's say we want to use another print statement on the same line which will output the text Value World so I add a space and then type another print statement with the string of Worlds and we expect this to give us the words hello world but when I click on run we actually get an error specifically it says syntax error invalid syntax this happens because The Interpreter doesn't know when the new line or statement occurs there are two ways to solve this problem one is to move the second print statement to another line I do this by placing the text cursor before the print statement for worlds and then pressing the enter key to move it down one line when I click on run this time there is no error and I get the words hello and worlds on separate lines let me undo the edits I made to my code by pressing Ctrl and z or command Z on a Mac and then try the second method which is separating the two print statements with a semicolon and a space when I click on run again it also runs both statements as expected next you'll cover the impact of white space in Python syntax I first clear my screen and then declare a variable and assign it a value by typing x equals one plus two on the next line I will add a print statement for x before I click on run however I'll go back to my variable assignment and add a random number of spaces around the plus symbol doing this will not cause any problems with this line however issues will arise if I add a new line or an end statement to demonstrate it let's input a new line and type plus three running this code then returns a value of 3. 
what has happened is that The Interpreter has executed our first line of one plus two correctly despite the extra white space but it did not account for the plus 3 on the second line there are a few ways to work around this issue the simplest approach is to use a force line to do this I type A backslash at the end of the first line now when I click on run it returns a value of 6 which means that both lines have been accounted for to summarize any amount of white space or indentations on a line is fine but keep in mind that if you are combining it with additional lines then you will need to give clear indicators of where a new line has occurred next you will explore indentation in Python I start by clearing my screen and declaring the new variable name with a string value of John I want to write an if statement which will return John only if the name variable has a new value of John I do this by tapping if name double equals John and then on a new line I inputs a print statement for name to make this program work I need to have an indentation before the print statement which vs code added automatically when I click on run I get back John as expected but what happens when the indentation isn't there if I delete the indentation from my code and then run it again I get the error indentation error expected in indented block this tells us that an indentition was not found where it should have been fortunately the error message directs us the specific line where the issue was detected I could then edit my code and fix it when writing programs in Python it's a good habit to read the output whenever you encounter an error as you are often given the specifics of what went wrong and where it happened as you noticed here variables are an essential part of programming and they are used to store all different types of data you might even say they are the Cornerstone of programming this is because they allow you to work with and manipulate data therefore it's important that you can identify variables and recognize how they are used declaring variables in Python is very straightforward all you need to do is declare a name and assign it to value the word variable refers to something that can be changed to do this in Python for a variable that has already been declared you only need to reassign or re-declare it let's explore an example let's say the variable X has been assigned the value of 10. to change this you only have to redecure it so it will have the value of 20. 
the examples so far have relied on simple naming conventions such as x y and z when working on a project with other developers it will become increasingly difficult to know what these variables mean or refer to as a programmer you will write a lot of code over time and if it's been a few months you'll most likely not remember exactly what the code was supposed to do using generic variables like X and Y doesn't give any information about that variable and where it is used giving meaningful names to your variables that make sense in the given context will allow you and other programmers to easily understand what's going on as a programmer it's important to understand that data will change throughout the life cycle of your program whether it's getting user inputs via a web form or working with variables inside a code itself the key function of the variable is to keep a reference to some sort of value now that you have a basic grasp of variables and their role in Python let's move on to a more practical demonstration of the variables and how to use them I'll demonstrate how to use variables in Python but first I want to briefly talk about naming conventions there are different options available to you as a developer when it comes to naming your variables one option is called camel case the first letter of the first word is lowercase and the first letter of every word after that is uppercase with no spaces between words for example if I have a variable called my name I'll put the M of my in lowercase and the N of name in uppercase with the rest of the letters in lowercase and no space between the words I can take a different approach with snake case when using snake case you keep everything in lowercase letters but you use an underscore between words so if I want to make the variable my name my underscore name would be the result of this approach although you have different options as a developer it's a good idea to be consistent when you are creating variables across your programs let me clear the screen so I can begin so I create a variable in Python by initializing a variable and assigning it to value all I have to do is name the variable for example if I type x equals 5 I have declared a variable and assigned as a value I can also print out the value of the variable by calling the print statement and passing in the variable name which in this case is X so I type print X when I run the program I get the value of 5 which is the assignment since I gave the initial variable Let Me Clear My screen again you have several options when it comes to declaring variables you can declare any different type of variable in terms of value for example X could equal a string called hello to do this I type x equals hello I can then print the value again run it and I find the output is the word hello behind the scenes python automatically assigns the data type for you you'll learn more about this in an upcoming video on data types you can declare multiple variables and assign them to a single value as well for example making a b and c all equal to 10. I do this by typing a equals b equals C equals 10. 
I print all three values separately and when I click on the Run button again I find that all three of those assignments have 10 as their value again I clear my screen before I move on to the next example yet another option you have is to do multiple assignments for instance I type A B C separated by commas equals one two three also separated by commas in this way I have assigned each of those values to the corresponding letter so a equals 1 b equals two C equals three to test this out I can print all three variables click run and I'll find that the values one two three correspond to the Declaration above another important point that you should be aware of is variable assignments and how you can change it a variable is subject to change throughout the life cycle of your program you will make changes to the value or the assignment of the variable itself so you need to know how to do that let's explore another example I type a equal to 10 and I print that value after this I change the value of a to 5 and I print that value to when I click the Run button a printout is 10 on the first line and there's five on the line below because value was reassigned finally you need to know how to delete a variable my variable is a its value is 10 and I've printed it out and then on a new line I type the delete command or d e l for short followed by a space and the letter A which represents my variable I then print the variable by using the print function and then I click the Run button the value is first given as 10 because the variable still existed but after the deletion it shows an error saying that a is not defined you just covered variable naming conventions now you know how to declare a variable and assign its value and you know how to declare any different type of variable in terms of value you can declare multiple variables and assign them a single value and you can do multiple assignments finally you also learned how to delete a variable that brings us to the end of this video you can now identify variables and recognize how to use them in Python computer systems need to interpret different data values in programming data can come in different types by the end of this video you'll be able to describe the different data types in Python a data type is an attribute associated with a piece of data that tells a computer system how to interpret its value knowing what data types to use ensures that data is collected in the preferred format it also ensures that the value of each property is as expected python offers raw data types to allow data to be assigned to variables or constants the five main types which are classed as literals consist of numeric sequence dictionary Boolean and set some of these data types can be extended for example the numeric data type can consist of types integer float and complex number for now let's just discuss data types in more detail starting with numeric in programming you need to decide on what type will suit your needs for example when working with currency you are most likely going to use the numeric type of float as it allows decimal places to be counted to determine a type of variable python also provides a function named type which will provide the Class Type based on the variable being passed python offers three different kinds of numeric types which are integers floats and complex numbers the integer class represents any non-fractional number that is whole numbers with no decimal places these numbers can be positive or negative for example 10 or minus 10. 
floats and numbers that contain decimal places and are represented by the float class examples are 10.5 or 6.7 the complex class is used to represent complex numbers which are made up of both real and imaginary numbers a equals 10 plus 10j next let's explore the sequence data types sequence types are classed as container types that contain one or more of the same type in an ordered list they can also be accessed based on their index in the sequence python has three different sequence types namely strings lists and tuples let's explore each of these briefly now starting with strings a string is a sequence of characters that is enclosed in either a single or double quotes strings are represented by the string class or Str for short lists are a sequence of one or more different or similar types they are essentially an array and hold any type inside square brackets each item can be accessed by its index tuples are similar to lists in many ways they contain an ordered sequence of one or more types but the main difference is that they are immutable this means that the values inside the Tuple cannot be modified or changed tuples are represented by the Tuple class and hold data types wrapped in parentheses the next data type is dictionary dictionaries store data in a key value object structure each value can be accessed directly by its key dictionaries can also store any data type for example suppose you declare a variable named ed and assign a dictionary to it the dictionary contains a grouping of key value pairs the first pair is a 22 where a is the key and 22 is a value the second pair is B 44.4 where B is a key and 44.4 is the value you can then output the value of 22 by accessing its key which is a next let's explore Boolean data types which are simply represented as true or false combined with logical operators booleans are used to check whether a condition is true or false in this example I'm checking the underlying data type of the values true and false the class bull is returned meaning it is Boolean the last data type is set which is an unordered and non-index collection of non-repeated values let me demonstrate an example of this data type suppose I assign a set of four items to the variable named example set I then check the type of the value stored in the example set variable by passing it to the type function python reports that the underlying data type that the example set variable holds is a set in programming data type is an important concept variables can store data of different types and different types of data can do different things let's explore these in further detail whenever you declare a variable in Python the data type is automatically assigned for you based on the value of that variable let me demonstrate this by typing a variable called a and assigning it to value of 10. 
to check the data type that has been assigned by python I select print and use the type function I then pass in the variable a as the parameter and click on run from the output on the terminal I can see a class of floats was assigned because there was a decimal place here is another example I'm using the variable B and I sign it a decimal value of 2.3 to check the data type assigned by python I print out type B and click on run from the output on the terminal I can see a class of floats was assigned because there is a decimal place this is a different assignment than the standard integer these are the numeric data types offered in Python to declare a variable as a string I wrap the text with single or double quotes again I run the print statement with the function type and pass the variable C as the parameter when I click on run the output in the terminal now displays the classes int float and Str for string this sequence is also applicable for other data types for example I can create a list of numbers by using the variable D and assigning it the numbers 1 2 3 4. when I run the print statement with the function type and pass the variable d as the parameter the class list displays after I click run each time I assign a value to a particular variable python behind the scenes is automatically assigning the correct data type for that variable in this video you learned about the different data types in Python I encourage you to start experimenting with these data types in your practice code you may recall that python can work with several types of data in this video you'll learn how to declare and use strings in Python it will also gain a general understanding of sequences and how to access individual items in a sequence in Python A String is a sequence of characters enclosed in either single or double quotes as you may know computers only understand binary code which consists of ones and zeros this means that characters need to be converted to a form that computers can interpret a process known as encoding python uses a type of encoder called Unicode to communicate with computers strings in Python can be declared in several ways for example for a single line you can type the variable name followed by an equal sign and then the characters encased in quotes if your string is too long for one line you can add a backslash at the end of each line to create a multi-line declaration when you run print on your variable all those strings will be combined and appear on one line if needed you can reassign the value of a string say for example the variable name has a string value of John but you want to change the value to Paul this can be done simply by typing name equals Paul now when you run print on name this update should be reflected it's important to know that a string is just a sequence of characters which in turn means it is essentially an array each character in the sequence can be accessed based on its index for example python strings use zero indexing so you can access the first character with a number zero in square brackets or the number two to access the third one if you need to check the length of a string python has functionality to assist you you can apply the Len function to a variable with a string value this will then return a number that represents how many characters are in the string now that you have learned what strings are in Python let's explore some code examples of strings in action first let me demonstrate the two ways to declare a string the first method is by placing the 
characters inside of single quotes so I type A equals and then hello in the quotes on the next line when I type print followed by a in parentheses and then click the Run button my code returns to the string hello as the output the second method is similar but uses double quotes so I would enter it as b equals hello with double quotes when I run the print function again it also returns to string hello both quotation types are equally valid declaration methods the choice is a matter of personal preference but it's best to pick one option and use it consistently throughout your code in addition to the quotation types you can declare single line or multi-line strings an example of a single line string can be a equals this is a single line I asked to print out the value of a when I run it the value is printed out as I declared it however there may be cases in which a string is very long and you want to break it up into segments to make it more readable to do that I can use the backslash key to create a multi-line string to declare a multi-line string I type B equals followed by the string this is a multi before continuing I add a backslash at the end of this line on the next line I type the continuation of my String Line string example note that I enter a space before the word line so that it's separated from the last word of the string on the previous line and now when I run print on B the backslash has the effect of joining both segments so that the outputs appears as a single string another thing you can do with strings is concatenation which is the joining of separate strings to demonstrate this I first create two new variables a equals hello with a space at the end and b equals there when I run print this time within the parentheses I type A plus b and get back both strings joined together the plus symbol is usually used as an arithmetic operator but when applied on strings it combines them instead and one more thing to know about strings is that they are considered collections of characters what this means is that much like an array you can access individual characters based on an index and you can also check the length of a string using the Len or Len function to give an example I create a name variable with a string value John now I want to print only the first character of this string to do so I run print on name followed by the character index number inside square brackets strings in Python are zero indexed meaning that the sequence count begins from zero so zero is the number that I place in the brackets when I click on run I get back the letter j if I change the number to 3 and run it again I get back the letter N the fourth character in the string John next let's check how many characters are in this string by using the Len function I start a print function and in it I type Len followed by name inside of parentheses when I run it it returns 4 as the length in this video you've learned about strings in Python specifically you now know how to declare and use them and understand that they are sequences of characters see you in the next video python uses different data types process and use information effectively sometimes you need to change the data type of a variable after you've collected values for it let's say for example a user submits a form on a website and one of the fields was an integer but the data was passed as a string this is a problem because the only way to perform calculations with the numbers saved as a string is to convert it to an integer data type to do this you 
can use typecasting in Python in this video you'll learn about two different typecasting methods in Python you will also learn to apply typecasting using the provided python functions so what is typecasting typecasting is the process of converting one data type to another python has two different types of conversions implicit and explicit let's explore each now in a little more detail starting with implicit implicit data type conversion is performed automatically by Python's compiler to prevent data loss it will convert for example an INT to a float if it picks up that the inserted value is a decimal it's important to note that python will only be able to convert values if the data types are compatible ins and floats are compatible but strings and int are not so if data types are not compatible python will throw a type error alternatively developers can perform typecasting with the explicit data type conversion you do this by using the provided python functions there are many functions but some of the most common are string integer and float let me take you through some of these functions and how to use them first is the string cast function this is used to convert any data type into a string data type to use this function you type Str followed by the value that you want to convert between parentheses next is the int type casting function to use this type int followed by the value that you want to convert between parentheses the float function is another common type of casting function once again you type the word float and add the value that you want to convert in between parentheses python has many more typecasting functions and they also have a similar structure they are odd which returns an integer representing the underlying Unicode character the hex function which converts a given integer to a hexadecimal string an opt which takes an integer and returns a string representing an octal number there are also Tuple sets list and dictionary which you will learn more about later in the lesson in this video you learned about typecasting in Python it's important to remember that data types are not unchangeable you can convert data types using the provided python functions if you need to like other programming languages python focuses on taking inputs from users or other services and providing output python has many helper functions that make it easy to perform both of these actions you may recall that you use the print function to Output variables and other values in this video you'll learn more about the print function and how to use another new function called inputs the input function is designed to get data from a source of input and it can be used in different ways for example one of its most basic uses is when you use the input function to get data that the user types in the keyboard This input can then be printed to the screen in many cases you want to get input directly from a user for example when you ask for a user's email address let's say you want to use the input function to prompt the user to enter their email address and then save that input to a variable called email if you run this code the user will be presented with a prompt to enter their email the email variable will then contain the email address okay let's switch back to the print function which is used for outputs in Python it can be used to print all different types of data and it allows for more complex formatting the print function itself accepts any number of arguments for example comma separated to print numbers 
in sequence arithmetic to print the outputs of an equation and string concatenation to join or concatenate two strings together Python's print function also has reserved keywords that can be passed as additional arguments these include objects that is values that are printed on screen sep which defines how the objects being printed are separated and end which defines what gets printed at the end there's also a file which specifies where values get printed to and by default it is STD out and lastly flush a Boolean expression to flush the buffer which essentially just moves the data from a temporary storage to the computer's permanent memory storage for example suppose you can pass three parameters to the print function the word hello which is a string the word EU which is another string and sep which is a built-in parameter whose value is set to a string containing a comma and a space this will be used as a separator between the hello and U strings and the output is hello U often while programming you need to know the value of a variable and output it onto the screen python allows for direct formatting inside the print statement you can also control the order by specifying the numbers inside the curly brackets for example if you print the same statements twice but with a number switched the output will differ let's move on to a more practical application of what you've just learned using some code examples of inputs and outputs I'll demonstrate how to use input and outputs in Python I'll begin by demonstrating the input function I start by typing input opening parenthesis and closing parenthesis I then click on the Run command and you'll notice that it runs the input function and I'm provided with a console where I can actually type in input so I type hello there and I press enter nothing happens because I'm not actually collecting data I'm just triggering the input function and by default it'll open up access to the command line or the console and allow me to input data I can also add a prompt to the input function for example I can ask a question to the user such as please enter a number so I type please enter a number in between the parentheses after the word input first I play the console and then I click on run again you'll notice that the output now asks me to enter a number I type 5 and press enter again nothing shows up from an output perspective this is because I haven't actually done anything with the input value I'm just demonstrating how the input function works if I want to get the value of the input I need to assign a variable so I type in Num equals input please enter a number I claim my console screen again and I click on the Run button it asks me for a number in the console and I enter the number six this time now the num or number value will contain the number six but in order to see that I have to Output that variable to the screen I can do this by using another function called a print function so in this case I print the number after the input Itself by typing print opening parenthesis the abbreviation num and a closing parenthesis I clear my screen again and click on run this time I type the number seven when asked to enter a number I press enter and you'll notice that the output prints seven now I want to show you that you can collect more than one input as part of the input because inputs work in a sequential manner so I call this variable num1 and I enter another variable called num2 on the next line the input for this variable is please enter a second number and I 
just change the first variables inputs to please enter a first number so that the instructions are clearer I print out the value of num1 and num2 the print statement accepts both variables because they are separated by a comma and it'll print out each one in that order again I clear my contour and I click run I Type 4 as the first number I press enter and type 5 is the second number followed by the enter key again you'll notice 4 and 5 are printed out you can also do arithmetic within the print statement in other words you can do addition subtraction standard multiplication and division so instead of using commas in the print statement I type num1 Plus num2 I clear the screen once more and I click run I enter the numbers 5 and 4 again and I get back 54. now this isn't exactly what I intended to do and the reason is because both variables are strings this goes back to what you've learned previously with data types if I want to do the arithmetic calculation I'll need to convert each variable into an integer first so I can use the integer function on num1 and num2 I click on the Run button once more and I enter the same two numbers five and four but now what I get back is nine if I want to see what type the input is I can check the data type by using the type function to do this I type print opening parenthesis the word type opening parenthesis num1 followed by two closing parentheses let me just clear the screen I click on run and enter the numbers 5 and 4 again in the console and it says that the class is string and not integer which is the type I actually want to do arithmetic so just be mindful when you are using input that you will get a string you'll most likely need to use the explicit data type casting to convert it to the data type that you need the print statement can also be used for concatenation so instead of num1 I change it to str1 or string one and I do the same with num2 by changing it to str2 then I amend the input to read please enter your first name for string one and please enter your second name for string two after that I print out hello and then use concatenation so that the user can be greeted by their first and second name now I want to run this program I'll just clear the terminal quickly and then I click on run in the console I type Tom for the first name and Jones for the second name and the result is the output of hello Tom Jones so concatenation can be used with a print statement as well finally you can also change how you assign variables you don't have to use concatenation you can just use string replacement I'm going to use a function within python called format for this based on the order of the brackets you can pass in the variables that you want it to be replaced with in this case string 1 and 2. 
once more I click run and I enter the username of Tom Jones and hello Tom Jones gets printed in this video you've expanded your knowledge by learning about the input and output function in Python an operator is a symbol that tells python to perform a certain operation you can think of them like road signs in real life for example suppose you're driving on a dangerous road and you spot an alert side to reduce speed then you encounter a stop sign and finally a sign instructing you to turn right you may not have realized that you were on a dangerous road these symbols help keep you safe by instructing you to perform a specific operation similarly when python comes across an operator that you place in your code it will also perform that specific operation these operations can be mathematical logical and comparison in this video you'll learn about math and logical operators in Python most of the time operators work on two values math operators are used for simple and complex calculations it's essentially all the same options as the calculator would have let me explain this with examples of math and logical operators the first operator I want you to know about is the addition or plus operator the plus sign is a symbol that you must use when adding numbers together for example two plus three to subtract numbers from each other you use this subtraction or minus operator use the minus sign to subtract numbers an example of this is three minus two the division operator is next and the symbol you use for it is a forward slash division is an operation in which one number is divided by another for example 35 divided by five the last operator you need to know about is the multiplication operator and the symbol you use for that is the star or asterisk key use this to multiply numbers with each other for example 7 multiplied by four okay now let's explore logical operators logical operators are used in Python on conditional statements to determine a true or false outcome let's explore some of these now first logical operator is named and this operator checks for all conditions to be true for example a is greater than five and a is less than 10. the second logical operator is named or this operator checks for at least one of the conditions to be true for example a is greater than 5 or B is greater than 10. the final operator is named not this operator returns a false value if the result is true for example a is not greater than 5. 
operators are usually combined with conditional statements to control the flow of a program that meets specific criteria for example let's say a restaurant gives discounts based on the following two conditions is the customer part of the Loyalty program and did they spend over one hundred dollars to determine this you can write code using logical operators to check if the customer is in the Loyalty program and if they spent over one hundred dollars you'll learn more about conditional statements in a later lesson now let me demonstrate how to use Python math and logical operators math operators basically give you the same functionality as what you have on a standard calculator so you can perform operations like addition subtraction Division and multiplication I start with a simple addition example I'm using the print statement so the output displays on my Contour I type print and in parentheses I add two plus two the value I expect back is four when I run the statement the value of four displays in the terminal for subtraction I change the plus sign to a minus sign I click on the Run button and the value displayed is zero If I subtract 2 minus two the answer is zero for division I change the minus sign to a forward slash I type 35 forward slash 5 in the parentheses I click the Run button and the result is the value of 7.0 just a note on this the value returned is a float instead of an integer now let's cover multiplication I change the forward slash to a star sign that represents multiplication I Type 25 asterisk 5. I click on the Run button and get back the value of 175. that was a short introduction to the math operators next you'll explore logical operators logical operators are used to control the flow of your application The Logical operators are and or and not let's cover the different combinations of each in this example I declare two variables a equals true and B also equals true from these variables I use an if statement I type if a and b colon and on the next line I type print and in parentheses in double quotes I type all true you'll learn about the if statement shortly but for now just know that this print statement will only be executed if both A and B are true the print statement of all true is displayed in the terminal if I change the value of B to false and I run the statement again nothing gets printed out the reason for it is that the and statement as a condition is both A and B to be true so that it will print out the statement now let's cover the or operator so I'm changing and to or and I click on the Run button the all True Value has been printed out again the reason for it is that with the or operator if either A or B is true the if statement is true if I set the values of both variables to false and click on the Run button nothing gets printed out this is because a is false and B is false so the condition in the if statement has not been met in this last example I'm going to demonstrate the not operator I'll keep the or operator before all I type if not a in parentheses then or then not b in parentheses followed by a colon I click on the Run button and the value returned is all true and what that's doing is it's looking for a negation against a so not a is not false which is true and the or a negation of B which results in true the or condition checks to see if either is true now I change the A and the B to be true I click on run and nothing gets printed out the reason for that is that it's checking again for if not a essentially if a is not true in this case a is 
true and its negation is false so it's not going to meet that condition or not B also results in false and does not meet that condition as well because both are the negation of true this is still not going to print out any value because again none of the conditions are being met and that's a brief introduction to using both math and logical operators in Python congratulations in this video you learned about math logical operators great job if you'd like to learn more about math operators in Python there's an additional reading at the end of this lesson in programming it's important to understand how to control the order in which your code is executed for example suppose you're invited to an event you have to consider whether you need to dress formally or informally another example of a control flow is to consider a light switch the flow is represented by the electrical current and the control as the switch itself with the two states of on and off the order in which you make decisions matters and the same applies to writing effective programs in this video you'll learn how to use conditional statements to control flow in Python programs so what is control flow control flow refers to the order in which the instructions in a program are executed all programs have decisions that need to be made as a result of this the program will take different actions or directions in Python there are two types of control flows first you can use conditional statements such as if else and L if or else if and second you can use Loops such as the for Loop and the while loop let's explore these a little further now the if keyword states that if the condition proves to be true a function is performed the else keyword catches anything which isn't caught by the preceding conditions the alif or else F keyword is Python's way of saying if the previous conditions were not true then try this condition the for Loop checks for specific conditions and then repeatedly executes a block of code as long as those conditions are met the while loop repeats a specific block of code an unknown number of times until a condition is met let's explore conditional statements in more detail with some practical examples using the if else and L if I'll now write some code for a restaurant that wants to apply different discounts based on the amount its customers spend to start off I Define a variable for the customer's bill I'll call it build total and assign a value of 114 to it now I apply condition with an if statement if build total is greater than 100 print the statement bill is greater than 100. next to apply a discount to build total I need to create a second variable I'll do this above the if statement in my code call it discount one and assign a value of 10 to it the condition also has to change so inside the if statement I add build total equals build total minus discount one at the very end of the code snippet outside the if statement I'm going to print out what the value of the total bill is to do this I type a print statement that says total bill and then a plus sign to add the value of Bill total here I need to convert the integer to a string I use the Str typecasting function to do this let's click on run great in the terminal two strings are printed bill is greater than 100 and the total bill is 104. 
But what happens if the bill is less than 100? I change the value of bill_total to 95 and press Run. Notice that this time, because the if condition is not met, it only prints the statement total bill is 95. But I'd like to print a statement that says the bill is less than 100. To do this I add an else statement below the if statement: I type else and a colon, and on the next line I print the statement bill is less than 100. Let's run the code. The output in the terminal now says bill is less than 100 and total bill is 95. Up to this point you've learned how to use the if and else statements to control the order in which values are assigned and printed. You are now ready to take program flow one step further. Say this restaurant wants to add another discount for bills over 200; how would you do that? Once again, I first need to create a new variable above the conditions, which I name discount_2 and set equal to 20. If I run the code now it will still print the same output, because I haven't changed any of the conditions or values yet. Let's change the value of bill_total to 210 and click Run. Notice that both statements are printed, but discount_1 is still applied. It's clear that the if statement is still executed: since the value is 210, the condition is still met. To change the program flow for values over 200, I need to add an and condition to the if statement. I change the statement to: if bill_total is greater than 100 and bill_total is less than 200. Let's run the code and see what happens. Now the discount wasn't applied; the output just says the total bill is 210. Why? Because the condition is not met. Now this is where it gets really interesting. To add a second condition for bills above 200, I'll use an else-if statement between the if and else statements. I type elif, which stands for else if, and then bill_total is greater than 200, and print the statement bill is greater than 200. On the next line I apply the new discount: I type bill_total = bill_total - discount_2. First let me clear my screen so you can focus just on the results. Now I press Run. Notice how the program flow has changed: the first condition was not met, so the code went to the second condition, where the value of bill_total was compared to 200, and since it was greater than 200, the statement bill is greater than 200 was printed. The code then skipped the else block, because the preceding elif condition was true; remember, the else block only executes if none of the preceding conditions are true. Finally, it printed the statement at the end of the code snippet, namely total bill is 190.
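Assembled, the discount logic described above looks roughly like this sketch; the variable names follow the narration and the exact strings are approximate.

discount_1 = 10
discount_2 = 20
bill_total = 210

if bill_total > 100 and bill_total < 200:
    print("bill is greater than 100")
    bill_total = bill_total - discount_1
elif bill_total > 200:
    print("bill is greater than 200")
    bill_total = bill_total - discount_2
else:
    print("bill is less than 100")

print("total bill " + str(bill_total))   # 210 - 20 = 190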
This proves that discount_2 was applied. Congratulations, you now know how to control program flow with if, else and elif. Once you become proficient in using conditional statements, programming becomes a lot of fun, and I encourage you to test it out. In this video you learned how to use conditional statements in Python. Writing conditional statements is an essential part of programming, and I encourage you to practice them in your code: the next time you need to make a decision, think about the conditions involved and how they could be represented if you were to code them in Python. You may recall how to use the if, else and elif statements to test a variable against a few conditions, but on some occasions you will have to test a variable against many conditions. To deal with this, you can use something called a match case statement. In this video you'll learn how to use a match statement as an alternative to an if statement. Now let's consider an example to compare the match statement to the if statement. Say you want to write code to print HTTP error messages according to error codes. To do this with the if statement, you would have to write the if condition, the alternative elif conditions and finally an else condition. Statements like if, elif and else work well over a small number of conditions, but over a large number of conditions your code can get long, complex and messy. Fortunately, there is a cleaner way to achieve the same result using the match statement. The match statement in Python was introduced in version 3.10. Using the match statement you can achieve cleaner, more readable code that offers the same functionality as the if control statement. When using match statements there are a few things to remember: you can combine several conditions by using the or operator in a case, and the default case is essentially the final outcome if nothing is found in the case checks; it's the equivalent of the else in an if block. Let me demonstrate this example now using VS Code. Okay, so I've written a simple if statement that checks an HTTP status code: if the value of the variable http_status matches one of the conditions, it will print out the equivalent message. I'm now going to add a match statement below the if statement for a clear comparison, and I will test the same variable against the same values. I type match, then the variable http_status and a colon. On the next line I type case, which is the equivalent of the word if, and the value of 200.
On another line I repeat the action from the if statement for 200, which is to print the word success. In other words, the variable is matched against the value of 200, and if the values are equal it will print out the word success. Notice that the value of http_status is indeed 200 at the moment, so let's run the code to test how the if and match statements are processed. In the terminal the word success is printed twice, because the value of http_status is matched twice in my code: once by the if statement and once by the match statement. Now let's change the value of http_status to 201 and run the code again. In this case success is only printed once. Why do you think that happens? Because there is an or condition for the value of 201 in the if statement, but none in the match statement. To do the equivalent in the match statement you use the or operator, so I place my cursor between 200 and the colon and add a pipe character and the value of 201. I clear my screen using cls and click Run again, and now success is printed twice again. So in the match statement the pipe character is shorthand for or. The great thing is that you can add many case statements in a match statement. But what if none of the values match the variable's value? Let's change the value of http_status to, say, 550 and explore what happens. I'll click Run, and this time the word unknown is printed. You may be wondering why: it's because the else statement is like a catch-all. If the value does not match anything within the if or elif statements, the default will be the else statement, which in this case has a print function for the word unknown. Well, the match statement also has a default case, and you add it by typing the word case, an underscore and a colon, and on the next line print unknown. Let's run the code again. Great, the output is unknown, unknown, which means that the default statement in both the if and the match statement was actioned. My match statement is coming along well, but it still needs a few tweaks to make it act exactly like the given if statement. To do that, I'll add a few more case statements that test for the same values as the elif statements. I type case 400 and a colon, then add a print command with the words bad request. I add another case with the value of 500, and I also need to test for 501, like in the elif statement above, so once again I add a pipe character and type 501, followed by a colon. On the next line I add the error message that I want to print, which is server error. The match statement saves a bit of space by combining conditions with or, so you don't have to do a comparison against the variable each time like in the if statement. Let's change the value of http_status one more time, to 501.
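At this point the two versions being compared look roughly like this sketch; the exact messages follow the narration, and the match block needs Python 3.10 or later.

http_status = 501   # value used in the final run described below

# if/elif version
if http_status == 200 or http_status == 201:
    print("success")
elif http_status == 400:
    print("bad request")
elif http_status == 500 or http_status == 501:
    print("server error")
else:
    print("unknown")

# equivalent match version (Python 3.10+)
match http_status:
    case 200 | 201:          # the pipe combines conditions, like or
        print("success")
    case 400:
        print("bad request")
    case 500 | 501:
        print("server error")
    case _:                  # default case, equivalent to else
        print("unknown")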
I just clear the screen again and click Run, and server error is printed for both statements. Now you know that there are some differences between the two, but the match statement does exactly the same as the if statement. In summary, the match statement compares a value to several different conditions until one of them is met. So you now know how to use the match statement as an alternative to the if statement to test a variable against many possible values. The match statement is relatively new to Python; prior to version 3.10 developers had to get creative and code their own solutions, and you'll learn more about those alternative methods later in this lesson. Have you ever come across a song that you like so much you want to listen to it again and again? You select the loop option so that you can listen to it repeatedly. This action of repetition is known as looping, and it also exists in Python. In this video you'll learn to use looping constructs when the same set of steps must be carried out many times. Python has two different types of looping constructs for iterating over sequences: the for loop and the while loop. Looping is used to iterate through a sequence and access each item inside it. Let's start with a basic example of looping using a string. First you declare a variable called str, which is of type string. Recall that a string in Python is a sequence, which means you can iterate over each character in the string; a sequence is just an ordered set. Now let's break apart the for loop and discover how it works. The variable item is a placeholder that will store the current letter in the sequence. You may also recall that you can access any character in the sequence by its index; the for loop accesses characters in the same way and assigns the current value to the item variable, which allows us to access the current character and print it for output. When the code is run, the output will be the letters of the word looping, each letter on its own line. Now that you know about looping constructs in Python, let me demonstrate how they work further, using some code examples to output an array of tasty desserts. Python offers multiple ways to loop; you'll now cover the for loop as well as the while loop. Let's start with the basics of a simple for loop. To declare a for loop I use the for keyword. I now need a variable to put the value into; in this case I am using i. I also use the in keyword to specify what I want to loop over, and I add a function called range to specify the number of items in the range, in this case 10 as an example. Next I do a simple print statement: I press the Enter key to move to a new line, select the print function, and within the brackets I enter the text looping and the value of i. Then I click on the Run button. The output shows the iteration looping through the range of 0 to 9.
It's important to note three main points: the iteration starts at zero, based on the index of the item itself; almost every for loop starts at zero, because most arrays start at zero, it being the first item in the array or index; and in this case the last item in the array or index will be nine. Now I want to change what I loop through. As an example, I'll use a simple array I entered above called favorites. To do this, I start by removing the hash sign in front of favorites to uncomment it. Next I replace the range function in the current for loop with favorites, so I loop through it. The i that I declared as part of the for loop can be changed to any name, and in this case I'm using item. I now change my print statement to include item, and I also change the text to I like this dessert. I click on the Run button to print the values: item takes each of the five dessert titles in turn, and our print statement combines them into a sentence. The next looping option I'm discussing is the while loop, which differs slightly from the for loop. To demonstrate this type of loop, I first comment out the for loop on my screen. Let's start by using the while keyword. As in the for loop, I need to specify a condition, which controls how many times the loop runs. First I need to declare a counter, which I do by typing count = 0 above my looping statement. Next I enter count after the while keyword, followed by the less-than sign and the word favorites, and I wrap favorites in the len function to get its length. This means the loop will run while count is less than the length of favorites, in other words while it is less than five. To print the value inside the looping statement, I press the Enter key to move to a new line, then select the print function, and within brackets I enter the text I like this dessert. The key difference here is that I need to use the index to access the items within the favorites array, so I type favorites and add count in square brackets to represent the index. It's also important that I now increment count so that the loop eventually ends. If I do not increment count, I'll end up with what is called an infinite loop, which means it will just keep looping and looping until the program is stopped or runs out of resources. To increment the count, I press Enter to move to a new line and add count += 1.
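Put together, the two dessert loops look roughly like this; the five dessert names are placeholders, since the actual list items aren't read out in the narration.

favorites = ["cake", "ice cream", "brownies", "pie", "cookies"]  # placeholder desserts

# for loop: iterate directly over the items
for item in favorites:
    print("I like this dessert:", item)

# while loop: same job, using an index and a counter
count = 0
while count < len(favorites):
    print("I like this dessert:", favorites[count])
    count += 1   # without this increment the loop would never end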
I clear my screen and click the Run button, and I get the same printed output as the for loop. It's important to note that in a standard for loop I don't have direct access to the index, but I can use the enumerate function to get it. So I change my current for loop statement by adding idx, and it becomes for idx, item in, then I call the enumerate function with favorites in parentheses. On the next line, to print the output, I add idx to the print statement and click the Run button. The results display the index and the value of each item within the array. Congratulations, in this video you learned about looping constructs in Python using the for loop and the while loop. In Python, nested loops can be used to solve more complex problems; a nested for loop is written by indenting it inside the outer loop. Let's explore this further now and break down how a nested loop works. First the outer loop starts and then steps into the inner loop. The inner loop runs until its range limit is met, 10 in this case. Once the inner loop completes, execution comes back to the outer loop for the next iteration and then steps into the inner loop again. This happens until the outer loop has reached its limit. Now let's explore an example using nested loops to iterate over two lists. Suppose you have two lists of integers from one to nine and a count variable that is set to zero. You again have two loops: the outer loop, which iterates over list one, and the inner loop, which iterates over list two, and the count is increased on every pass. If you run this code it gives an output of 90. Let me break it down for you: the outer loop runs a total of nine times, the inner loop runs a total of nine multiplied by nine, which is 81 times, and 9 plus 81 gives a total of 90. To help visualize how this would look, you can make some minor changes to the loop to output what it looks like. Okay, so you now know that the number of times a loop runs is based on the size of the list. Now that you've learned about nested loops, let me demonstrate some code examples. I have VS Code open, and first let me start with a simple example and write a for loop: I type for i in, then use the range function, so I have for i in range(10). Above the loop I label it with a comment, outer loop. This first for loop is considered the outer loop, and inside it I will have an inner loop, so I write a comment, inner loop, right under the first for loop. Now I type for j in, then use the range function again, pass in 10 and end with a colon; the 10 indicates the number of times the inner loop will iterate or repeat. In this example I print out 0, and I use end= with a space in double quotes to ensure the zeros print out evenly on one line. Lastly, back in the outer loop, I print an empty line so the output moves to a new line on each outer iteration. If I run the for loop, the system prints out a 2D grid; this is just to demonstrate how the nested loop works. If I want to print out a single line, I can change the outer loop range to 1. I clear my screen and run it again, and this time a single line of zeros is printed, because the outer loop only iterated once. This is all driven by the outer loop: it runs once, but when it goes to the inner loop, that runs 10 times and prints out a zero for each item on the same line. Every time the outer loop starts, it goes into the inner loop, and the inner loop must finish before execution comes back to the outer loop to start on two, three, four and so on. I can showcase that by simply changing the outer loop range to 2.
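A minimal sketch of the nested loop at this point, with the outer range already changed to 2 as just described:

# outer loop
for i in range(2):          # run the outer loop twice
    # inner loop
    for j in range(10):     # the inner loop completes all 10 passes each time
        print(0, end=" ")   # keep the zeros on one line, separated by spaces
    print()                 # move to a new line after each inner loop finishes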
In this case only two lines are printed. First, when i equals zero, execution goes into the inner loop and prints ten zeros; when that's finished it comes back to the outer loop, i is incremented to one, and j's inner loop prints another ten zeros. Another issue to consider with nested loops is what is commonly known as time complexity: the larger the array, the more time it's going to take to run my code. Let me showcase this by running a for loop over a larger range and putting a timestamp on it. I import the time module to put a timestamp on what's printed out. I assign start_time = time.time(), which is the function I want to call; the start time is initialized the moment I run the script. I also want to print out how long it took to finish, so I put a print statement outside the for loop and calculate the elapsed time by taking the time when the loops finish and subtracting the start time from above. This is going to print out many decimal places, so I use another function named round, which rounds numbers to the precision of my choice; in this case I round it off to two decimal places. Let me clear my screen one more time. I click Run and it outputs a time of 0.0 seconds, which makes sense because the time my code takes to run is really short. Now let me increase the range in the outer loop to 100, and I also increase the range in the inner loop to 100, and click Run one more time. The time now goes up to 0.01. In this case it's not a big difference, but when you're dealing with large data sets this will have a huge effect on the running time of my code. This time, let's say I keep the outer loop at 100 but increase the inner loop to one thousand. I clear the screen and click Run, and the time increases to 0.04. If I increase the inner loop one more time, to ten thousand, and clear the screen and click Run, the time goes up to 0.45. So the larger the array, or the larger the range in this case, the more time it's going to take for the program to complete. It's always important to remember how you can optimize code to make it run more efficiently and to consider the amount of time your code will take to run. You just covered nested for loops and learned about the issue of time complexity; in this video you learned about nested loops and how they work. Congratulations on reaching the end of control flow and conditionals, and the end of the module on getting started with Python. Let's recap what you've learned. You now know how to: explain the history of programming and how it works in a general sense; describe the benefits of Python and where it's used; evaluate if your system is set up correctly for Python development; identify the differences between running code from the command line or the IDE; explain the importance of syntax and whitespace in Python; describe what variables are and how they are used; identify data types in Python; explain how to declare and use strings; describe the two types of casting and how to apply them; describe the basics of user input and console output; recognize math and logical operators in Python; use conditional statements to control the flow of programs; use match case statements as an alternative to if statements; explain looping constructs and how to use them; and explain nested loops and how they work. You've learned a lot about the structure and rules that guide Python, and now you're ready to create programs. Great work, see you next time. So what are functions? At the most basic level, you can
think of functions as a set of instructions that take an input and return an output. For example, the primary task of the print function is to print a value; this value is usually printed to the screen, and it's passed to the print function as an argument. In the example we have here, the string hello world is the value passed to the print function. By the end of this video you'll be able to declare a function in Python, pass data into a function and return data from a function. A Python function is a modular piece of code that can be reused repeatedly. You've used some Python functions already in this course, such as print and input; both are functions, and each one has a specific task or action to complete. The input function will accept parameters too, but it also accepts input from the user. So how do you declare a function? A function is declared using the def keyword, followed by the name and the task to complete; optional parameters can also be added after the function name within a pair of parentheses. Here's an example of creating a function to add two values: type the def keyword followed by the function name of sum, then enter x and y as parameters, and finally enter return x + y as the task to complete. I'll now give a practical demonstration of functions: how to declare them, how they're used, and how they can simplify your code by putting it into a reusable structure. Let's start with a short example that calculates a tax amount for a customer based on the total value of their bill. I'll start by declaring two variables. I type the first variable, called bill, and assign it a number, let's say 175.00; I know this is going to be the data type known as float because I'm using decimal points, as is the norm for currency. The second variable is the tax rate, which is the percentage tax rate that should be applied to the bill, so I put in 15. Then I want to calculate the amount of tax for the bill itself, so I add the result into another variable called total_tax and do the calculation, which is the bill multiplied by the tax rate and then divided by 100 to get a dollar amount. To output the value, I print total_tax and run the program: total tax is 26.25, which is 15 percent of 175. In the real world the bill value will be different for each customer, and the tax rate may also change; updating each variable every time is inefficient. To overcome this problem, I'll create a reusable function. To start creating a function I use the define keyword, or def for short, then give it a name that relates to the task it's carrying out, so in this case it's going to be calculate_tax. With functions you can pass in arguments, and the purpose of that is to make the function more dynamic. So consider the arguments that I need to take in: I'll take in a bill, which is the total value of the bill itself, and also a tax rate. Then, like I've done before, I calculate the total tax by taking the bill, multiplying it by the tax rate and dividing it by a hundred, so I return bill multiplied by tax_rate, divided by 100. Now I can remove the declarations I made earlier, because I've replaced the variables and the calculation with a function. If you run the current code as is, it will come back with nothing, because a function is only ever run when it's actually called. I'll demonstrate this: I print a call to calculate_tax and pass in, as I've done earlier, 175 as the total bill, and the tax rate will be 15.
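A minimal sketch of the function as described so far, using the names from the narration:

def calculate_tax(bill, tax_rate):
    # the bill and tax rate arrive as arguments, so nothing is hard-coded
    return bill * tax_rate / 100

print("total tax", calculate_tax(175.00, 15))   # 26.25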
I'll also include a total tax label, then click Run, and the total tax is 26.25. If I want to change the rate, I can reuse the same function: I assign total_tax again and call calculate_tax with a different value for the bill, say 164.33, and this time I'll change the tax rate to 22 percent. I clear the screen and click Run, and the total tax for the second item is 36.1526. To clean the output up a bit and make it more visually appealing, I'll put in the round function, which allows control of the number of decimal places I want returned; in this case I'll do two, and then rerun the code. This is a lot neater, with 36.15. One of the nice things about a function is that you can update it once, and any part of the code that calls that function will get those changes as well. In this video you've explored basic functions in Python: how to declare them, and how to pass data to them and return data from them. The concept of scoping in Python allows for greater control over elements in your code, which reduces the chance of accidental or unwanted changes. By the end of this video you'll be able to understand the basics of scoping in Python and identify the four types of scope. Before you explore scope and its intricacies, it's important to know that Python has different types of scope; you'll examine each one in detail, and I'll demonstrate them with coding examples. In order of ascending coverage, the four scopes are local, enclosing, global and built-in; together they are referred to as LEGB. Variables within the built-in and global scopes are accessible from anywhere in the code. For example, if there's a variable a in the global scope, it can be called in code at the local level. The purpose of scope is to protect a variable so it does not get changed by other parts of the code. For instance, let's say you've declared variable b in the enclosing scope and variable c locally: while b can be used in the local code, it doesn't work the other way around. As a rule, relying on global scope is generally discouraged in applications, as it increases the possibility of mistakes in outputs. Now I'll explore the four different types of Python scope in a practical demonstration. The first one I want to use is global scope: I declare a variable called my_global and give it a value of 10. The next thing I do is declare a function, which I call fn1, and inside this function I declare another variable, which I'll call local_v, and give it a value of five. To show that my global variable is accessible from anywhere, I add a print statement that says access to global and prints the value of the my_global variable. If I want to run that function I need to specifically call it, so I call fn1, click Run, and the value of 10 is printed out for the global variable. But if I try to print local_v, which lives inside fn1, from outside the function, it returns an error: I print access to local and then local_v, clear the console, click Run, and I get an error saying NameError: name 'local_v' is not defined. That's because it's only accessible from within the local scope of the function fn1. Next, to illustrate enclosing scope, I'm going to declare a second function inside fn1, called fn2. I then declare an enclosing variable, which I call enclosed_v, and assign it the value of 8.
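As a rough sketch of the structure being built here (including the calls added in the next steps), assuming the names used in the narration:

my_global = 10                 # global scope

def fn1():
    local_v = 5                # local to fn1
    print("access to global", my_global)   # global variables are visible everywhere
    enclosed_v = 8             # enclosing scope, relative to fn2

    def fn2():
        print("access to enclosed", enclosed_v)   # the inner function sees the enclosing variable

    fn2()                      # a function only runs when called

fn1()
# print(local_v)      -> NameError: only exists inside fn1
# print(enclosed_v)   -> NameError: same for the enclosing variable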
Relative to fn2, this variable sits in the enclosing scope. I'll now explain how enclosing scope works: within fn2 I've got access to enclosed_v, which I can demonstrate by adding another print statement that prints the enclosed_v variable. I'll test that this all works by calling the fn1 function and making sure that I call the fn2 function inside fn1; I must explicitly call a function to make it run. I clear the console, click Run, and it prints out access to global 10 and access to enclosed 8. The way scoping works is that the innermost function has access to almost everything outside it: at this level you can access the enclosing variable and also the global variable at the outer level. The same rules still apply from the outside, so if I try to access the enclosed_v variable or the local_v variable out there, I get the same kind of error, saying the variable is not defined. The nested function has access to both the global and the enclosing scopes, but from the outside you can't reach variables that live in a local or enclosing scope. The last scope is built-in scope, which you've been using all along when writing code in Python. Built-in scope refers to the names that ship with the language, such as print and def, and it covers the whole of Python, which means you can access it from the outermost scope to the innermost scope, inside functions and classes. That's a brief demonstration of scope in Python, including examples of global, local, enclosing and built-in scope. By completing this video you've gained a broad understanding of why scoping is important in Python programming, and you are now able to identify the four types of scope. Lists are a sequence of one or more different or similar data types; a list in Python is essentially a dynamic array that can hold any data type. Let's move to the console and I'll demonstrate some practical examples of Python lists. First I'll go through a few examples of declaring lists. I create my list by typing list1 equals and then the numbers 1, 2, 3, 4, 5 within square brackets and separated by commas, because you use commas to separate items in a list. list2 is a list of strings: a, b, c. I can also have a list of different data types: for example, in my list3 I can have a string, an integer, a Boolean and a float, so the type doesn't necessarily matter; it's all stored in the same way. One thing to keep in mind with lists is that they are accessed by index. For example, suppose I wanted to access the number three from my list1 example: since the index always starts at zero, I'd have to write list1, open square bracket, two, close square bracket. This gets the third item in the list, which is the number three, so if I print that, I get the value of 3 printed out. An important option with lists is that you can also have nested lists. So if I declare another list, for example list4, and I put in a one, then a nested list of two, three and four, and then a five and a six after it, that's completely valid as well; any data type can be stored within the list itself, so keep that in mind. Let's see what else lists can do. I've got a few different options to add items to a list; one is to use the insert function. Just before I do that, I'll do a print to print out the entire list. There are a couple of different ways I can do it: I can use the star sign with list1, click Run, and I get the entire list printed out as separate values. To control how those values are displayed, I can keep the print statement with the unpacked list1 and just put in
a sep argument, set to a comma or just a single space. I click Run and I get the output printed that way. Right, back to adding something new to the list. The first option I have is what's called the insert function: I can do list1.insert, and what it looks for is the index of where to insert to. Here I can use the len function to get the length of list1, then I put in what the new value should be, in this case the number six. I repeat the same print statement directly underneath, click Run, and I find that six is added to the end of the list. I can also use another function called append: instead of having to specify the index where the item should be placed, I can just use the append keyword, so I type append six, click Run, and it's added without having to specify the index. There is another function I can use if I want to add one or more items to the list, called extend, and this accepts a list as well: I can pass extend a list of six, seven, eight, nine and click Run, and my list is extended with six, seven, eight and nine. To remove something from a list there are a few different options. The first way is to use pop and specify the index, or location, of the item I want to remove. To demonstrate pop, I'll say pop 4 for index 4. I click Run and the last item of the list is removed; remember, within a list the index always starts at zero, so index four means the fifth item, being the value five, and that's what has been removed. Another option is the delete, or del, keyword: I can say del list1 and then specify the index to delete, in this case index two. I click Run and index 2 is removed, which in this case is the number three, because counting zero, one, two lands on the number three. Lastly, I can iterate through a list; one of the main reasons I use lists is that I can iterate through the values and gain access to large amounts of data. To iterate I can use a simple for loop, so for x in list1, and then I can do a simple printout. I'll just remove the print underneath; I'd like to print out the value of x, so I just put in print, value and then x. When I click Run, it prints all the values of the list. That's a brief demonstration of what you can accomplish using lists in Python. You just covered how a list in Python works as a dynamic array, and you explored how to use a list's built-in functions to access list items, modify them, add to the list and remove items from it.
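Here is a consolidated sketch of the list operations just demonstrated, using the list1 values from the narration:

list1 = [1, 2, 3, 4, 5]

print(list1[2])              # 3  (indexing starts at zero)
print(*list1, sep=" ")       # 1 2 3 4 5  (unpacked values with a separator)

list1.insert(len(list1), 6)  # insert at a specific index (here, the end)
list1.append(7)              # append always adds to the end
list1.extend([8, 9])         # extend adds every item of another list

list1.pop(4)                 # remove by index: drops the value 5
del list1[2]                 # del also removes by index: drops the value 3

for x in list1:
    print("value", x)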
In this video you will learn about tuples and how they can be used to store different types of data; they are useful data structures that help to create solid, well-performing code. To declare a tuple, I declare a simple variable, which I'll name my_tuple, followed by the assignment operator, equals. To declare the tuple itself I use parentheses. A tuple can accept any mix of data types, ranging from integers like 1, to strings, to a float such as 4.5, to a Boolean like True. To access any of the items within the tuple, I can use methods similar to those used with a list, by referring to an index. So in my_tuple, if I want access to the string, which I know is at index 1, I just print out the value by writing print my_tuple, brackets, 1. Remember, the index always starts at zero. I click Run and find that it returns the value strings. If I want to determine the type of the tuple, I can use the type function that Python provides: I click Run and I get class tuple. I could also declare a tuple without using the parentheses; it has the same effect and will still be classed as a tuple, but it's best practice to use the parentheses. Tuples also provide the methods count and index. I can do my_tuple.count and pass in the value strings; I click Run and get back a count of 1. What it does is look for the number of occurrences of that value within the tuple. Before I move on, I'll type clear into my terminal to clear the previous output so we can start fresh. The other method is index, which gives me back the index of where a value lies in the tuple. I'll change the print statement to look up the index of the float 4.5; when I click Run I get back 2, which means that 4.5 is at index 2 in the tuple. I can also loop over a tuple, that is, iterate through the values and print them out: I can write a loop, for x in my_tuple, and then print out the value of x. I click Run and I get back 1, strings, 4.5 and True, so all of the values of the tuple itself. The one key difference of a tuple over a list is that tuple values are what's called immutable, which just means they cannot be changed. I'll prove this and demonstrate how it works. Let's say I want to change the value of the very first item in the tuple, being the value 1; I'll use 0 to access it based on the index, and let's say I want to change it to 5. If I run this, I get back an error: TypeError: 'tuple' object does not support item assignment. That's because anything declared in a tuple is immutable. In this video you learned about tuples, including how to declare them and work with their contents.
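A short sketch consolidating the tuple operations just covered, assuming the values spoken in the narration:

my_tuple = (1, "strings", 4.5, True)

print(my_tuple[1])                    # strings
print(type(my_tuple))                 # <class 'tuple'>
print(my_tuple.count("strings"))      # 1  (number of occurrences)
print(my_tuple.index(4.5))            # 2  (position of the value)

for x in my_tuple:
    print(x)

# my_tuple[0] = 5   -> TypeError: 'tuple' object does not support item assignment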
In this video you will learn about sets and how they can help with storing certain types of data in different formats. First I declare a set: I start by declaring a simple variable called set_a, then equals, and use curly braces to define the set itself, with the values inside the braces; I put in 1, 2, 3, 4 and 5. I'll do a simple printout to prove that we have a set: I click Run and the values 1 to 5 are printed out. Sets differ slightly from lists in that they don't allow duplicate values. I can demo this by putting in another five; when I click Run I find that the second five is not printed out. Sets also have methods we can use. I can use the add method to add new content: if I use set_a.add(6), I can add in the number six. I click Run and find that the value 6 is added to the set. I can also use the remove method; I'll remove the number two, and when I click Run, the number 2 is removed from the set. There's also discard, which essentially does the same thing as remove: using discard and clicking Run, I get the same output. Let me clear the console before we go any further. There are also a few useful methods that can be used with sets to perform mathematical operations; let me demo some of them now. First I create a new set, set_b, with 4, 5, 6, 7 and 8, and I reset set_a to its original values. There are two ways I can write each of these mathematical operations. For instance, for a union join I can do set_a.union and pass in set_b, then click the Run button to see what happens: it joins the two sets together minus the duplicate values, like four and five. Union merges them into one, so you get a set of one, two, three, four, five, six, seven, eight. As the other option for union, I can use the vertical line, or pipe, symbol, and that works in the same way. Let me clear the console before we go on. Another operation I can use is the intersection: I can apply this to set_a by writing set_a.intersection and passing set_b as the argument. When I click Run I get back all the items that appear in both set_a and set_b, here four and five. The intersection can also be represented by the ampersand, and it works in the same way; when I click Run I also get back four and five. Let me clear the console again before we continue. Another mathematical operation I can use is the set difference. To use this I print set_a.difference(set_b), and this should give me back all the elements that are only in set_a and not in set_b; when I click Run I get the correct output of 1, 2 and 3. I can also represent difference using the minus symbol, and when I click Run I get back the same values, 1, 2 and 3. The last operation I'll discuss is what's called the symmetric difference, which is represented by the symmetric_difference function and is used in a similar way. When I click Run I get back 1, 2, 3, 6, 7 and 8.
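A consolidated sketch of the set operations just demonstrated, using the two sets from the narration:

set_a = {1, 2, 3, 4, 5}
set_b = {4, 5, 6, 7, 8}

print(set_a.union(set_b))                 # {1, 2, 3, 4, 5, 6, 7, 8}
print(set_a | set_b)                      # same result with the pipe operator

print(set_a.intersection(set_b))          # {4, 5}
print(set_a & set_b)                      # same result with the ampersand

print(set_a.difference(set_b))            # {1, 2, 3}
print(set_a - set_b)                      # same result with the minus sign

print(set_a.symmetric_difference(set_b))  # {1, 2, 3, 6, 7, 8}
print(set_a ^ set_b)                      # same result with the caret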
In other words, all of the elements that are present in set_a or set_b, but not in both sets. Symmetric difference can also be represented by the caret operator, and when I click Run I get back the same values. An additional important thing about sets is that a set is a collection with no duplicates, but it's also a collection of unordered items. Unlike a list, where I can print out content based on an index, if I try to print set_a, brackets, 0, to get the zeroth element in the set, I'll get an error. Let me clear my console before we attempt to print this output. When I click Run I get back a TypeError saying that the set object is not subscriptable; this means that the set is not a sequence, it doesn't contain an ordered index of the elements inside. Okay, that concludes our gentle introduction to sets, great job. So what is a dictionary in Python? In many ways it's very similar to how an actual dictionary works: in a normal dictionary, to locate a word you look it up by its first letter and then use the alphabetical ordering system to find its location. Likewise, Python dictionaries are optimized to retrieve values. You may remember how useful Python lists are for accessing an array of values; dictionaries access values based on keys, not on an index position, and are therefore faster and more flexible in operation. By the end of this video you'll be able to explain the purpose and function of dictionaries in Python and identify the performance benefits of dictionaries. With a Python dictionary, a key is assigned to a specific value; this is called a key-value pair. The benefit of this method is that it's much faster than using traditional lists: to find an item in a list you need to keep reviewing the list until you locate the item, but in a Python dictionary you can go straight to the item you need by using its key. A dictionary is also mutable, in that the values can be changed or updated; for example, you could declare the number one as the key and coffee as the item, and then change these to any other number or drink item. But how does this work? How do you access or locate the item you need within a Python dictionary? With the use of the keys. To demonstrate this, I'll access the coffee item within a Python dictionary. First I declare my dictionary, named sample_dict, then an equals sign with a series of key-value pairings, or keys and items, in a pair of curly braces, making sure to separate each pairing with a comma. I then type the print function followed by the name of my dictionary. I need to access the coffee item, which has been given a key of one, so I insert the number one in square brackets. I run the print statement and it returns coffee as a result, just as intended. I can also update a dictionary by replacing one item with another: I just need to use the key to reference it, while using the assignment operator, equals, to assign the new value. For example, I can change item 2 in my dictionary from tea to mint tea: I write a new line of code that starts with the name of the dictionary, followed by the key I want to change in square brackets, then an equals operator followed by the name of the new item. What the code says is to take item 2 in the sample dictionary and change it to mint tea, and when I run it, the item changes. I can also delete an item from the dictionary. To do this I write a line of code with the del keyword, followed by the name of my dictionary, then the key for the item I want to delete in square brackets; in this instance I want to delete item 3, juice. When I run this delete statement, it removes the juice value from my dictionary.
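A minimal sketch of the sample dictionary just described, assuming the key-value pairs named in the narration:

sample_dict = {1: "coffee", 2: "tea", 3: "juice"}

print(sample_dict[1])        # coffee  (values are looked up by key, not by position)

sample_dict[2] = "mint tea"  # assigning to an existing key updates its value
del sample_dict[3]           # del removes the key and its value

print(sample_dict)           # {1: 'coffee', 2: 'mint tea'}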
Finally, I can also use three different methods to iterate over a dictionary: the standard iteration method, the items function, or the values function. Let's explore these iteration methods and the other dictionary operations and functions in more detail. To create my dictionary, I start by declaring a simple variable called my_d, then use the assignment operator and curly brackets. This looks the same as a set, but by default empty curly braces are classed as an empty dictionary. I can prove that by using a print statement with the type function and passing in the my_d variable; clicking Run, the class comes back as type dict. Next I'll add some values into the dictionary, and I need to do it in two parts, because a dictionary holds what's called a key and a value. The key can be numeric or a string, and to signify the assignment I use a colon and then put in whatever value I want, in this case a simple string value of test. To show that keys can be of different types, strings or integers, I also put in a string key of name with the value Jim. I print out my dictionary using the print function, and I now have a basic dictionary set up with the keys one and name, where one maps to test and name maps to Jim. If I want to access a key in the dictionary, I just use the square brackets and pass in the key. In the case of the key one, I pass in the numeric one; in the case of the string key, I pass in the actual string itself, name. I click Run and get back both test and Jim, which are the values for each corresponding key. If I want to add a new key to the dictionary, I can simply do my_d with a new key in square brackets, two in this case, and assign it the value test two; I click Run and the key is added to the current dictionary. To update a key, I reference the key whose value I want to change: I update the first key, number one, with the value not a test instead of test, click Run, and it's updated on the screen. The other thing to note about a dictionary is that it doesn't allow duplicate keys: if I add another entry with the key one, say not a test, and click Run, the key is simply overridden with the latest value, so number one only appears once in the output; two entries with the same key aren't printed, because duplicate keys can't be set. If I want to delete a key from the dictionary, I use the del operator: I type my_d and specify which key I want to delete, in this case number one, and it's removed from the dictionary. With a dictionary I can also iterate. For example, I can use for x in my dictionary and print out the value of x; I click Run and I get one, because this form only prints out the keys. In a lot of cases I need access to both, and to do that I use a method called items. With that I gain access to both the key and the value, so I do a printout of key plus value, using some concatenation to print them together. I click Run, and I have to be mindful that I'm concatenating an integer with a string, so I wrap the key with the str typecasting function. I click Run again and I get the key and the value for each of the items in the dictionary. You should now understand the purpose and function of dictionaries in Python and their benefits in terms of performance. In this video you'll explore args and also kwargs, or keyword args; using these has the advantage that you can pass in any
number of non-keyword arguments and keyword arguments. To start with a short example, I'll define a sum_of function that accepts two parameters, a and b, and returns the addition a + b. If I add a print statement that calls the function sum_of with the two values four and five, I should get back the value of 9, which I do. That all works fine, but let's say I want to add in another value, 6 for example. If I click Run again I get back an error, and it tells me that the sum_of function takes two positional arguments but three were given. If I want a way around this, this is where args is useful. To define args I use the star symbol, and I call it args for naming purposes. Instead of accepting just two arguments, args allows n arguments to come through, any number of arguments. When dealing with more than one argument there may be many to iterate through, so to calculate the total sum I'll have a variable called sum, assigned to zero. Then I create a simple for loop and loop through the arguments that have been passed in, adding each value that comes in as part of args, which is assigned to the x variable, using plus-equals, and finally I return the value of sum. So again, if I run the statement I get back the value of 15. As I mentioned, you can pass in any number of arguments and the total sum is returned; in this case it's 30 with the arguments that have been passed through. That's a simple intro to args, so now I'll demonstrate kwargs. Let's clear the terminal and switch to my kwargs file, where I'll copy the code from the args file to start. Let's say, for example, you wanted to calculate a total bill for a restaurant: a user got a cup of coffee that was 2.99, then they also got a cake that was 4.55, and also a juice for 2.99.
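The args version carried over from the previous file looks roughly like this sketch; I've used total rather than sum as the accumulator name to avoid shadowing Python's built-in sum, which is an assumption on my part.

def sum_of(*args):
    # args collects any number of positional arguments into a tuple
    total = 0
    for x in args:
        total += x
    return total

print(sum_of(4, 5))           # 9
print(sum_of(4, 5, 6))        # 15
print(sum_of(4, 5, 6, 7, 8))  # 30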
The first thing I do is change the for loop: I change the argument to kwargs by adding another star, and then update the variables in the for loop. Next I get the key and the value by iterating over kwargs with the items function, and then I simply change the sum to add the values that are passed through, because adding the key makes no sense; it's just a string and won't give you the actual amount you intend to get. When I run this I get back a value of 10.53 with a bunch of extra decimal places, so I change the decimal places in the final return statement by using the round function, limiting it to two decimal places. When I click Run again I get back a total of 10.53. To summarize, with args you can pass in any number of non-keyword arguments, and with kwargs you can pass in any number of keyword arguments; that's a simple intro to both args and kwargs. In this video you'll learn about errors and exceptions, two very important aspects of learning Python as a new developer. You'll cover the difference between errors and exceptions and explore what happens when something goes wrong with your code. Errors are a part of coding, and they happen for many reasons. Let's start by exploring two types: syntax errors, which are caused by human error, and exceptions, which are known errors that need to be handled. Syntax errors are usually caused by the developer; they could be the result of a misspelling or a typo in the code. Generally these types of errors have minimal impact, because most IDEs, like Visual Studio Code, will warn the developer and give clues about how to fix them. A common error for new developers learning Python is not adding the colon at the end of conditions or statements. If you're using a code editor with syntax checking, errors like this may be highlighted at the point of the error; for example, a missing colon will be highlighted in the code. The output will indicate the file name and the line where the error occurred, with a caret character pointing to the error, and running the code will result in an invalid syntax error informing you that there is a syntax problem. Other common mistakes include indentation problems, which are also syntax errors: if there is an indentation problem, the error will be an indentation error. The more you learn Python, the less you'll have to deal with these types of errors, because you'll become better at creating and analyzing your code. Now let's move on to exception errors. They happen during code execution and can easily go unnoticed by the untrained eye, but exceptions need to be handled by the developer, who needs to deal with any potential issues in the code base to keep the application from failing. Let's explore an exception being thrown as an example. Your code can be syntactically correct, but if it attempts to divide five by zero, it doesn't make mathematical sense; therefore, when you run this program, the ZeroDivisionError exception is thrown. Luckily, by default Python includes many exception types that you can use to pick up potential issues in your code. In this video you explored the basics of errors and exceptions, which is a step in the right direction to becoming a better Python programmer. In this video you'll explore how to handle exceptions in Python: you'll learn how to change error messages and how to wrap your code within try and except statements. As an example, I will write a simple math function. I define a new function called divide_by and allow it to accept two parameters, a and b; the purpose of this function is to
return the value of the division of both numbers, so on the next line I type return a divided by b. Now I add a print statement for the divide_by function, and inside the print statement I add a new set of parentheses with the values 40 and 4; these are the values that will be divided by the function. I click Run and the value returned is 10, which is correct because 4 goes into 40 ten times. Now let's test what happens if I pass in the values 40 and 0. When I click Run I get an error, or an exception; the exception in this case says ZeroDivisionError: division by zero. It gives this error because in math you can't divide a number by zero. You might agree that getting cryptic errors could upset users, so the question is: how can you handle errors in a more user-friendly way? How can you prevent a user from seeing the actual exception being printed out? You do this with Python's try and except statements: simply type try and a colon, and on the next line except and a colon. You add the code that you want to run within the try statement, so I delete the print statement at the bottom and move the divide_by call into the try statement, typing ans equals and pasting the function call. Now, in the except statement, I add my own error message: a print statement for the string something went wrong. Let me just clear the terminal so you can focus on the output. I click Run and now the error message is printed. So what's happening? The try statement will try to execute the code that you added inside it; if an exception occurs, it triggers the except line and executes any code added underneath the except statement. But Python allows you to make the except statement more specific. If you want to trap the exception itself, you can add the base Exception class right after except; Exception is the base class from which Python's built-in exceptions derive. You can gain access to the exception information by using as e after Exception; the e variable acts as an alias for the exception, and I can use e to print out the exception in the print statement. So let's edit the print statement and add e at the end of the error message. I press Run, and what happens? Our custom message is printed out, but the contents of e are also printed, so this time it reads something went wrong, division by zero. In Python you can also get access to the actual type, or class, of the exception that occurred. To do this I add another print statement of e.__class__, and I run the code one more time. This time the output includes the class of the error as well, namely class ZeroDivisionError. Let me clear the terminal again. Let's take this one step further to provide even more specific feedback to the end user. In the except statement I replace the base Exception class with the actual error that was printed out, namely ZeroDivisionError. I also change the print statement so that it first prints the actual error, by adding e at the start of the statement, followed by some user-friendly text saying we cannot divide by zero. I click Run, and now the output is division by zero, we cannot divide by zero. Up to this point you've covered how to wrap your code in try and except statements and how to optimize the message that a user sees. But how can you handle more than one exception without knowing what they are ahead of time? Fortunately, you can chain except statements by adding another except statement: say the code doesn't trip the ZeroDivisionError in the first except statement, you could add another except statement that tests for a generic exception.
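The chained version being described looks roughly like this sketch; the messages are approximations of what the narration prints.

def divide_by(a, b):
    return a / b

try:
    ans = divide_by(40, 0)
except ZeroDivisionError as e:
    # the specific exception is checked first
    print(e, "- we cannot divide by zero")
except Exception as e:
    # generic catch-all for anything not handled above
    print("something went wrong:", e)
    print(e.__class__)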
Now I will add the base Exception class again, and I add a print statement with e and a message with some general information. I click Run, and in this case, because there is still a math error, the function still trips the first except statement, but this gives you a good idea of how you can test for more exceptions. Congratulations, you now know how to wrap code in try and except statements to handle potential exceptions in your code. File handling is an essential part of learning Python. Python has several built-in functions to create and manipulate files, and file handling includes opening, reading and writing files, amongst other operations. As a developer you'll probably work with large amounts of data, and file handling makes that easier, which is why it's important to learn how to work with files. Whether you're working with data on your computer, on the web or in the cloud, it will most likely be saved in some type of file. There are two key file handling functions in Python: open and close. Let's explore the open function first. The open function is used for reading, writing and creating files, and it accepts two arguments: the first is the file name and/or the file location, and the second argument is the mode. The mode indicates what action is required, such as reading, writing or creating, and it also specifies whether you want the file handled in text or binary format. Let's explore the modes of file handling you can use in Python. First you have r, which is used to open and read a file in text format, and rb, which opens and reads a file in binary format; you'll learn more about this later. r+ opens the file for both reading and writing, and w opens the file for writing; note that w will overwrite an existing file. Lastly, a opens a file for appending data. Next there's the close function, which is used for closing an open file connection; note that it does not take any arguments. There is one more way to open and close a file in Python, and that's with the with open statement. The advantage of using it is that it closes the file automatically; it will be demonstrated shortly. By now you understand how to open files for certain actions, but you might be wondering what the difference is between opening a file in text and binary formats. In Python you generally handle files in two ways: either in text or in binary format. The text format is more user-friendly because humans can read it; you won't be able to read files in binary format, but they are much more compact and therefore give better performance. Now let's cover how to specify the type of file handling in Python. Python uses text as the default format for file handling, so just passing in a mode for reading or writing a file will automatically set it to text format. To set the file handling to binary, you need to pass the letter b along with either the read or write option: for example, you write open with the file name and rb to read a file in binary format, or ab to append to a file in binary format. You'll now explore file handling in code. First I declare a simple variable called file and assign the open function to it to gain access to a file. But before I can use the open function, I first need to create a new file for testing; let's call it test.txt. Inside this text file I add a simple line of text: hello there. Good, let's go back to the Python file.
You'll now explore file handling in code. First I declare a simple variable called file and assign the open function to it to gain access to a file. But before I can use the open function I need to create a new file for testing; let's call it test.txt. Inside this text file I add a simple line of text: hello there. Good, let's go back to the Python file. Inside the parentheses of the open function I can now add the first argument, namely test.txt between quotation marks, since it's a string. For the second argument I type the word mode, an equal sign and then just r for read, also between quotation marks. So far the variable called file has access to the contents of test.txt, so let's actually read the file. You need to add a readline or readlines function: readline will return just the first line of the file, while readlines will output a list with multiple lines. Since there's only a single line of text in this file, I'll use the readline function. I type file.readline with parentheses and assign that line of code to a new variable named data, then I add a print statement to print the contents of data. Lastly I add a close function that will close access to the test.txt file; I simply call close with a set of parentheses. I click on Run and the content of the file is printed out, namely hello there. Next I'll demonstrate another way to gain access to a file in Python: I change the open function to the with open form. I'll just clear the screen. To assign a variable when using with open, you add as after the parentheses and then the variable name. Why would you use with open? It is better at exception handling and will automatically close the file for you. Just like before, I create a second variable, data, call readline, and then print the contents of data. I click on Run and, just like before, the contents of the file are printed. You've now covered how to work with files in Python, including the built-in functions to create and manipulate files and the functions to open, read and write files. In this video you'll learn how to create files in Python and explore methods of inserting content into a new file. Files are used to permanently store data: anything stored in the variables of your code exists in random access memory, or RAM, and since RAM loses its data when the computer is turned off, it's important to be able to create files so data is available for future use or as a permanent record. In Python we can create new files using the open function with the write mode enabled. Let's start with a short example. I'll use a with statement with the open function and pass in the following parameters: the file will be newfile.txt, and I set the mode by passing mode equals w. Now I assign this file to a variable by typing as file. A shorthand way to assign the mode is to enter a single character that represents the mode you need; in this case I can replace mode equals w with just the letter w and it means the same thing. Now that I have access to the newly created file as a variable, I can begin to add content to the file using the write function. On a new line I type file.write and add some simple text: this is a new file created. When I click Run, the Explorer pane on the left-hand side of VS Code shows that my file called newfile.txt has been generated as a new file. Clicking once on the new file to select it displays its content and confirms that the text I pushed through using the write function is now present in the file. If I want to write multiple lines of content to the file instead of a single line, I can use the writelines function. The writelines function accepts a list; a list in Python is represented by square brackets with a comma between each item. I edit my file.write to say file.writelines, then within square brackets I add a comma after the first sentence and type the next sentence: this is another line to be added. I click on Run and newfile.txt now
has the two lines created by my writelines function, but it's not exactly the way I need it to be. Python will add the contents of the list exactly as it's specified within the list, so if I want the content to break onto a new line I need to specify a newline by putting in a backslash and the letter n, with no space, just inside the opening quote of the second sentence. Now when I click on Run the content of newfile.txt is more readable, with each sentence on a separate line. One thing to note is that every time I run my script it replaces the current file. For example, if I insert the number 2 into the first line of text, click on Run and then check my text file, the previous file has been replaced with one where I just added the number 2 in the first sentence, overwriting the existing file with a fresh newfile.txt. On the other hand, if I want to add to the file instead of replacing it each time, I just need to change the mode: I replace the letter w with the letter a, which stands for append. Now I click on Run three times and check newfile.txt to find that the contents have been added; it now has multiple lines. However, it has not come out exactly the way I wanted it to, and the reason is that I don't have a newline specified at the very beginning, so I add a backslash n before my first sentence. Since I need to replace the file, I change the mode back to w to ensure I am overwriting the last file, click on Run, and that replaces the existing lines. Now I want to add to the file again, so I change the mode back to append by changing the w to an a and click Run three times. I check the file and confirm that the new lines were appended each time. The final part of my code will be to trap exceptions; always keep in mind the necessity of dealing with any exception by using the try and except statements. I add a new line above my existing code and type try and a colon. As an example I'll use FileNotFoundError, which is an error that occurs often. This needs two new lines added to the end of my code, so I type except FileNotFoundError as e and press Enter for a new line, then to print out the error I type print, the string error, a comma and e. Now, to force the error to happen, let's say I ask for a directory that I know doesn't exist in my current directory; the faulty directory is called sample, so I edit my code to read sample forward slash newfile.txt. I clear my terminal, run my script again, and I get the error generated by my print command: no such file or directory. So take care when you are creating files that the directory where you want to place the file actually exists; in cases like that you must ensure the directory already exists, or create the directory from within Python and then create the file inside it. In this video you learned about creating files within Python using the write and append modes and inserting single- or multi-line content into the file.
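Here is a sketch that condenses the writing steps above; the sentences and the sample directory name follow the walkthrough, but the exact strings are assumptions.

```python
# Create (or overwrite) the file and write two lines; '\n' forces the line break.
with open("newfile.txt", "w") as file:
    file.writelines(["this is a new file created",
                     "\nthis is another line to be added"])

# Append mode adds to the existing file instead of replacing it.
with open("newfile.txt", "a") as file:
    file.write("\nan appended line")

# Writing into a directory that doesn't exist raises FileNotFoundError.
try:
    with open("sample/newfile.txt", "w") as file:
        file.write("this fails if the sample directory is missing")
except FileNotFoundError as e:
    print("error", e)
```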
You already know how to handle files in Python, but do you know how to read the content of a file? Being able to read files is essential when working with stored data, and Python offers several built-in functions to make this easier. The three methods we'll explore in this video are read, readline and readlines. Let's start with read. The read method returns the entire contents of the file as a string containing all the characters; you can also pass in an integer to return only that number of characters from the file. The second method for reading files in Python is readline, so let's explore it. The readline function returns a single line as a string. If, for example, you have a file with two lines of text that say this is the first line and this is a second line, the readline function will return as output only the first line of text: this is the first line. The readline function can also take an integer argument to return a specified number of characters from a single line. Let's say you use the same test file but pass an integer of 10; your output will be the first 10 characters of the first line, in this case the words this is and the letters th, for a total of 10 characters. The third method for reading files in Python is readlines; let me demonstrate it. The readlines method reads the entire contents of the file and returns it as an ordered list, which allows you to iterate over the list or pick out specific lines based on a condition. If, for example, you have a file with four lines of text, the readlines function will return all the lines in your file, in the correct order, as a list. Files are stored in directories and they have paths. Reading files from the same directory is straightforward: you only need the name of the file. When working with different locations, however, it's important to know the difference between absolute and relative paths. Let's start with absolute paths. Absolute paths contain a leading forward slash or a drive label; an absolute file path includes all the information you need to locate a file, whether or not you are in that file's directory. Relative paths normally don't contain any reference to the root directory and are normally relative to the calling file; a relative file path only includes the information you need to locate a file in your current working directory. I'm now going to demonstrate how you can read files in Python. I start with a simple sample.txt file; it just has some text with a couple of lines that I'll use to demo some of the options for reading in files. I start by using with open and pass in my file name, which is sample.txt; I just want to read the contents, so I set the mode to r and assign it to a file variable. The first option is to print the entire contents of the file: to do this I use print with file.read and click on the Run button, and notice that the entire contents of the file are printed out as is. The second option allows me to print out only a certain section of the file. For example, let's say I only want to print the quick brown fox jumps over the lazy dog, which is 44 characters; I can pass a parameter to the read function that tells it to read only the first 44 characters. I enter the number 44 and, when I click Run, notice that it prints out only that first line. The way this works is that it starts at the very beginning, based on an index of zero, and 44 is the last character to be printed out; in this way I can control which sections are printed. The third option is to read in a line: the function I want is .readline, and it takes in only the very first line from the file. I click on Run and it prints only the first line of text in the file. The fourth option is to use the .readlines function, which returns a list of lines; I click on Run and you'll notice that the text in the file is now wrapped in square brackets. Lastly, because it's a list, I can assign it to a variable: for example, I can say data equals file.readlines and then write a for loop, for x in data, and print the value of x. When I click on Run you'll notice that the list items are printed out line by line. Something to note is that when you use with open and get the file variable with as file, the file object itself can be iterated over line by line, so I can just change the for loop to point to the file variable, and when I click Run the same output is returned. These are just some of the methods you can use in Python for reading in files. You should now be able to describe how to read files in Python and demonstrate how to output different formats using the read, readline and readlines functions.
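The reading options above, condensed into one sketch; sample.txt and its contents are assumed to match the demo.

```python
# Assumes a sample.txt whose first line is the sentence used in the demo.
with open("sample.txt", "w") as file:
    file.write("the quick brown fox jumps over the lazy dog\nand a second line\n")

with open("sample.txt", "r") as file:
    print(file.read())        # entire file as one string

with open("sample.txt", "r") as file:
    print(file.read(44))      # only the first 44 characters

with open("sample.txt", "r") as file:
    print(file.readline())    # just the first line

with open("sample.txt", "r") as file:
    for line in file.readlines():   # a list of lines; the file object itself is also iterable
        print(line)
```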
You've reached the end of this module on basic programming with Python. During this module you received an introduction to Python functions and data structures and explored how Python deals with errors, exceptions and file handling. Now it's time to recap the key points of this module. As with most programming languages, functions are the basis for creating actions in Python, and by completing the first lesson in this module you should now be able to declare a function in Python, pass data into a function, return data from a function and explain scoping at a basic level. In order to use functions efficiently across a project, it's important to determine their accessibility across different levels of code; in this lesson you also learned how to identify the four scopes, describe how functions control scope at different levels, explain data structures and describe the concept of lists in Python. Python has several built-in data structures to help you organize and store your data for easy access, and you've learned about the most common ones. On successfully completing the second lesson you should now be able to identify list methods, explain what types can be stored in a list, describe how to iterate over a list, and explain the main uses of tuples, sets, dictionaries and kwargs. Python file handling and exceptions were the topics of the final lesson in this module. Having completed this lesson you should be able to identify how to create and manipulate files with the open function, describe how to read files in Python, and demonstrate how to output different formats and store file contents in data structures. This module gave you the opportunity to get started with basic Python programming. Well done, that's one more step toward becoming a Python developer. Developers can structure their code in many different ways. Python allows for object-oriented, procedural and functional programming models, or paradigms as they are often called. In this video I'll focus on procedural programming, which is like writing step-by-step instructions that a program executes; it's an important stepping stone to object-oriented programming, so as a new developer it's important to learn more about it. The main purpose of a programming model is to structure your code; that structure makes it easier to update the code and to create new functionality within it. But there's no one perfect model that solves every coding problem, and sometimes a combination of approaches works best. Procedural programming structures code into procedures, sometimes called subroutines or functional sections of code. Because of this approach, the code is made up of logical steps to complete a specific task, for example adding two numbers to return their sum. I can add together the numbers 5 and 10 with a short piece of code. Now I want to add together the numbers 8 and 4; however, the code I wrote was specifically written to add 5 and 10.
For my new numbers I must create another, similar piece of code to do the calculation, and that would not be a very efficient way to code. Instead, I change the code into a function that accepts two numbers as arguments and returns their sum. With this function I don't declare the actual numbers as variables; instead I use the parameters a and b. Less code is needed, but something more important has happened: I now have a function called sum which can be reused as many times as I like with many different sets of numbers. In programming there is a principle called DRY, don't repeat yourself, and it's all about reducing duplication in code. The original code I wrote to add two numbers together is a good example of what not to do, because I had to write the code twice to accommodate the second set of numbers. A guideline to keep in mind is to create functions that can be reused throughout your application. Let's examine another example, this time for calculating the total of a bill and adding tax to it. The code will be presented in four sections to help you focus on what each procedure does, and a sketch of the full code follows at the end of this section. First, the bill total function accepts a bill as a parameter and loops through it to calculate and return the total of the bill. The calculate tax function accepts two parameters, the percentage and the bill total, and returns the total amount of tax to be added to the bill, rounded off to two decimal places. The food bill, which contains its items, represents a customer's bill; it is static here, but could also be changed to an input to accept data from the user and dynamically create a bill. The last few sections call the two functions to calculate the bill and the tax and then print each out along with the overall total. Could you identify the subroutines, or functional sections, of the code, and did you note how these sections reuse one another? Now let's put the four subroutines together and examine the ways in which the footprint of the code is reduced by procedural programming. It's best to inspect the code by starting at the end: tax total reuses food total, food total reuses bill total and food bill, calculate tax reuses bill total, and bill total reuses food bill. In summary, the advantages of the procedural paradigm are that it's easy for beginners to learn and get started with, procedures can be reused by other parts of the code, and code is easy to understand because each procedure is broken into specific tasks. Procedural programming does have some disadvantages, including that it can be harder to maintain and extend, in some cases it doesn't relate well to real-world objects, and data is exposed throughout the whole program. Procedural programming has both its advantages and disadvantages; as you learn more as a new developer, you'll be better able to decide whether it's the best approach to a specific piece of coding or not.
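As referenced above, here is a sketch of the reusable sum function and the four sections of the bill-and-tax example; the function names, food prices and the 7 percent tax rate are assumptions for illustration.

```python
# A reusable function instead of repeating the addition code (DRY).
# Note: naming it sum shadows Python's built-in sum, as in the narration.
def sum(a, b):
    return a + b

print(sum(5, 10))
print(sum(8, 4))

# Section 1: total up a bill passed in as a list of prices.
def bill_total(bill):
    total = 0
    for item in bill:
        total += item
    return total

# Section 2: work out the tax on a bill total, rounded to two decimal places.
def calculate_tax(percentage, total):
    return round(total * (percentage / 100), 2)

# Section 3: a static customer bill (could be replaced by user input).
food_bill = [10.99, 5.50, 3.25]

# Section 4: reuse the procedures and print the results.
food_total = bill_total(food_bill)
tax_total = calculate_tax(7, food_total)
print("Bill:", food_total)
print("Tax:", tax_total)
print("Total:", round(food_total + tax_total, 2))
```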
In this video you'll learn about algorithms. An algorithm is a series of steps to complete a given task or solve a problem. On a day-to-day basis you use algorithms all the time to complete tasks; one such example is following a recipe to make an egg omelette. First you have the list of ingredients to use in your omelette, which can be called your input; next is the method, the instructions to follow step by step to create your dish; finally you complete the omelette, your output. The steps to make the omelette are the same every time, and an algorithm in programming works in a similar way. In programming, algorithms are used to solve a multitude of problems that range from simple to very complex. The key to understanding and creating an algorithm is to break the problem into smaller parts, just like the egg omelette recipe; that way you build up the steps to complete the algorithm that will resolve the overall problem. Now let's explore a practical application of algorithms in coding. I'm going to demonstrate a particular algorithm that checks if a word is a palindrome. A palindrome is a word that can be spelled the same both backwards and forwards; for example, the word racecar is a palindrome because I can spell it forward as r-a-c-e-c-a-r and backwards it's still the same, r-a-c-e-c-a-r. To check if a word is a palindrome I need to use an algorithm, and as mentioned earlier an algorithm is a series of steps to solve a problem, so let me break the problem down. I know the string in my example, racecar, has an index, and I need to check if the character at the front of the string is equal to the character at the end of the string; in this way I can compare the values at the two indexes. So I print the character at index 0, because that's the first index, and I also print the character at index 6, because that's the last index; I can count that up to double-check: zero, one, two, three, four, five, six. Then I click on Run, and the output is the two values I need to compare, both of which are the letter r at the beginning and at the end of racecar. Now I'm going to break the problem down into smaller steps. First I need to check if the value at index 0 is equal to the value at the last index, 6, which in this case is r; then I need to check that the next, or second, character at index 1 is equal to the second-last character at index 5; finally I need to check if character 2 is equal to character 4. What I need to do is check whether these conditions are true or false, so let's see how I can write this out in code. I begin by creating a function, def is_palindrome, and I know it will accept a single parameter for the string, which I've entered. Now I want to get the starting index as well as the end index. I put the start index into a variable that equals zero, because every string always starts at index 0, and the end index is going to be the length of the string, so I enter end index equals the len function of the string and then minus 1; this is because a string always starts at zero and I have to account for that in the last index. Next I want to iterate through the string itself and compare the start index character with the end index character to validate that they are the same, so I create a for loop by typing for x in the string, and I'll make the comparison within the for loop. I could check if the first character is equal to the last character, and since the two characters r and r are the same it would continue to be true, but it's quicker to check if it's false, because then I'll know straight away if it's not a palindrome. So I write an if statement using the string passed in as the parameter: I use the start index to get the character and check whether it's not equal to the character at the end index. If this condition is met, the function returns False, which confirms that it's not a palindrome; but if the condition is never met, then outside the for loop it returns True, which confirms that it is a palindrome. Once I've done all the checks across the start index and the end index, it returns the condition of True to confirm that it's a palindrome. Now I'm going to test the algorithm to verify that it works. I use a print statement, call the is_palindrome function and pass in racecar, because I know that it's a palindrome. I click on Run and it returns the value of True. If I change racecar to racecars and run it again, the condition of False is returned. This is an example of creating an algorithm in code to solve a problem: it has a series of steps that have to be followed to resolve the problem and give back the condition of whether the string is a palindrome or not. Now you know how useful algorithms can be as a step-by-step way to solve a problem with code. An algorithm breaks a problem down into smaller, less complex parts, and once the steps of an algorithm are created they will execute the same way each time the algorithm is used.
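Here is a sketch of the is_palindrome function described above; the walkthrough doesn't spell out how the two indexes move inward on each pass, so the increment and decrement lines are an assumption added to make the comparison work.

```python
def is_palindrome(string):
    start_index = 0
    end_index = len(string) - 1          # last index, since indexing starts at 0
    for x in string:
        if string[start_index] != string[end_index]:
            return False                 # mismatch: not a palindrome
        start_index += 1                 # move the two indexes toward the middle (assumption)
        end_index -= 1
    return True                          # no mismatch found

print(is_palindrome("racecar"))   # True
print(is_palindrome("racecars"))  # False
```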
As a developer your main task will be to write code to suit business needs, and that code will have to go through what's called refactoring. This means that you rewrite or rework the code to make it easier to manage or to run more efficiently; refactoring is a standard part of the software development cycle. Making code easier to manage may be straightforward, but what about making it faster or making it perform better? To determine how to make code faster or perform better you must be able to measure it. Code is measured by time and space: time is measured by how long the code takes to run, and space is about how much memory it uses. Big O notation has different complexities, or categories, ranging from horrible to excellent, and it's used to measure an algorithm's efficiency in terms of time and space. Let's explore the different kinds of time complexity. First, constant time: this is an algorithm that will always run in the same time and space regardless of the size of the input. Take a dictionary, for example: to get the value of an item you need its key, and the key is a direct pointer to the value that does not require any iteration to find it, so it's considered constant. Second is a linear time algorithm, which grows depending on the size of the input. For example, if I have an array of numbers with a range of 100 it will run very fast, but if that's increased to a million it will take a lot longer to complete; the size in this case affects the running time of the code. Third, a logarithmic time algorithm refers to the running time of the input against the number of operations. I could take a linear approach to try to find a number out of 100; let's say the number is 97. In the linear approach it takes almost a hundred iterations to reach it, because it must check each item one by one until it finds the target value. Using a binary search I can drastically cut down the iterations and find it in under seven; this is measured as logarithmic time. The binary search works by splitting the list into two parts each time and checking whether the target is less than or greater than the value in the middle.
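To make the comparison concrete, here is a sketch of a linear scan versus a binary search over the numbers 1 to 100, counting the iterations each one needs to find 97; the step-counting code is an assumption added for illustration.

```python
numbers = list(range(1, 101))   # 1 to 100, already sorted
target = 97

# Linear search: check each item one by one.
linear_steps = 0
for value in numbers:
    linear_steps += 1
    if value == target:
        break
print("linear search steps:", linear_steps)

# Binary search: split the list in two each time and compare against the middle value.
low, high = 0, len(numbers) - 1
binary_steps = 0
while low <= high:
    binary_steps += 1
    mid = (low + high) // 2
    if numbers[mid] == target:
        break
    elif numbers[mid] < target:
        low = mid + 1
    else:
        high = mid - 1
print("binary search steps:", binary_steps)   # well under seven for this input
```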
Fourth, quadratic time refers to a linear operation on each value of the input data, squared; this is often a nested loop, as in this for loop. This for loop is considered quadratic time because the outer loop needs to iterate in a linear way 10 times, but it also has to run the inner loop the same 10 times for each single outer iteration, which means its total number of iterations is 10 times 10, or one hundred. Fifth and last is exponential time, an algorithm whose running time doubles with each addition to the input; a recursive Fibonacci sequence calculation is a prime example of this. Refactoring code can be a big task, but understanding algorithmic complexity and how it's calculated makes it easier to optimize code. Now that you know about constant, linear, logarithmic, quadratic and exponential time, you are one step closer to your goal of being a developer. Perhaps you've heard of functional programming. It uses a different paradigm than other models such as object-oriented, and it's particularly adept at processing large amounts of data at high speed. This video will get you started with what functional programming is; later in the lesson you'll explore topics such as pure functions, recursion, reversing a string, and useful Python functions such as map and filter. Let's start by exploring the role of a function. Functions take some input, process it and then produce some output. There are two types of functions, traditional and pure: pure functions will always do the same thing and return the same result no matter how many times they are called. There are several differences between traditional and pure functions, so let's list them. Traditional functions can access and modify variables in the global state, but pure functions cannot. Both traditional and pure functions can access variables in the local state. Traditional functions can change their arguments, whereas pure functions cannot. And lastly, the output of a traditional function does not depend only on its inputs, whereas the output of a pure function depends only on its inputs. Functional programming, in essence, is a programming paradigm that uses functions for clean, consistent and maintainable code. Compared to object-oriented programming, which you'll learn about later, functional programming differs by design: it does not change data outside the scope of the function. This simply means that a function should avoid modifying the input data or arguments being passed to it; instead it should only return the completed result of the intended function being called. Functions are considered standalone or independent, and this aids the clean and elegant nature of the code; in fact, many strongly typed, object-oriented languages have incorporated functional programming into their structure. In order to support functional programming, the language itself needs to allow functions to be passed as arguments and also to be returned to their caller. In Python, functions are what's known as first-class citizens, which essentially means they have the same status as strings and numbers: they can be assigned to a variable, passed as an argument, or returned to their caller. Let's explore a few examples of functions available in Python.
Take for instance the sorted function. The sorted function accepts a list of items and then returns that list in sorted order. You can use the sorted function to list items in alphabetical order: by passing a list of coffees to the sorted function, the return value is the list sorted alphabetically. The great thing about functional programming is that the logic behind certain tasks is already built in for you; functions are reusable and thus save a lot of development time. But did you know that you can also create your own functions, specific to your own requirements? Let's look at a simple example. Imagine you want to spell the names of the coffees backwards; this might not be entirely useful, but it's a good showcase of functional programming. You can create your own simple reverse function to do this. Define the function, let's call it reverse, with a parameter str assigned to it, and return the value of str with a slice; you'll learn more about the slice syntax later in the lesson. Then assign a variable to hold the result of the map function: the map function accepts as its first argument the reverse function and then the iterable, coffees, and it will automatically handle the iteration, going through each coffee and applying the reverse function to it. In this video you learned what functional programming is and were introduced to examples of built-in functions in Python. A good coder will try to keep code clean, make it easier to debug and ensure it's extendable; the great thing is that pure functions can help you do all of that. In this video you'll learn what pure functions are and how you can use them in functional programming. It's important to understand that there is a clear difference between traditional and pure functions: a pure function is a function that does not change or have any effect on a variable, data, list or set beyond its own scope. For example, if you have a list with global scope, a pure function cannot add something to that list or alter it in any way. Let's explore an example function and then determine whether it is a pure function or not. This code includes a list in the global scope and a function called add_to with a single parameter called item; the function is then called with the value of item set to 4, and the output is 1, 2, 3, 4. What do you think, is this a pure function? No, it's not a pure function, because it changes the global list by appending the item which is passed as an argument. In order to change it into a pure function, you need to think about how to extend the function to accept a list as an argument, how to add the item without modifying the original list, and how to return a new list with the newly added item. The solution is to create a new list and copy, or clone, the data from the original list. Let's revisit the code and make some changes: this time you make a copy of the original list, the new item is added to the new list, and then the new list is returned to the caller. Now that you have a better idea of what a pure function is, let's review a few benefits of pure functions. Firstly, with pure functions you always know what the outcome will be. Secondly, pure functions are consistent snippets of code that do exactly what they are intended to do. Thirdly, pure functions lend themselves to caching, since you know the return value is always going to be the same. Lastly, pure functions work well in multi-threaded programs: in multi-threaded programs more than one process can run concurrently, which creates many threads, and pure functions help prevent changes to the global scope, ensuring data stays reliable.
Now I think it's time to offer you a step-by-step demonstration in VS Code of how to alter a normal function into a pure function. Pure functions are especially useful because they are easier to read, better to debug and more consistent. I'll take you through a simple example: I'll start by creating a function that does not behave like a pure function, and then I'll tweak it until it is one. First I create a list called my_list and inside it I add three numbers: 1, 2 and 3. Then I add a simple function called add_to_list which takes a single variable called item; this function will return my_list and append the new item that is being passed through. Below that I call the function add_to_list with the value 4 for the variable item. Finally I add a print statement for my_list so that I can focus on the output in the console. I click on the Run button and the numbers 1, 2, 3 and 4 are printed out, which means my_list now contains the inserted number 4 as well, because the function appended it to the existing list. What do you think, is this a pure function? No, it's not, because the data has been manipulated at the global scope from inside the scope of the function. Let's try to turn it into a pure function. The first thing I change is how the function is being called: I want to capture a new value, so I type new_list equals add_to_list, keep the value of 4, and also print out new_list below the first print statement to compare the output. Now let's make some modifications to the function itself. I add a simple append to my_list which takes in the item variable, so I type my_list.append with item in parentheses, and then I return the list by typing return my_list. Now I click on Run, and the output in the console indicates that both my_list and new_list include the values 1 to 4. It's clear that the function is still not a pure function. Why? Because even though it's returning a new variable, it still has a reference to the my_list variable. Let's try something else to turn it into a pure function. This time I'll accept a parameter called lst before the variable item, so I type lst and a comma in front of item in the parentheses; I also change the append statement to lst.append with item, and change the return action to return lst. Finally I change the call by passing in my_list and a comma inside the parentheses, before the value of 4. Let's run that, and once again both lists contain the values 1 to 4. The reason is that the function is still using the original list as an argument, and that list is still being updated from within the function. So ultimately, in order to create a pure function, the problems I have to solve are how to create a new list, how to get all the values from the list that's being passed through into it, and how to return the new list back to the calling action. Let's give it another try. This time I create a new list by making a copy inside the function: I type the name of the new list, nl, equals lst.copy and a set of parentheses. Now, instead of appending the passed value to lst, I'll append it to the copy, so I type nl.append with item, and I also change the return action to return nl. I clear the console screen so that I can focus on the output and click on Run, and finally I get two different results: my_list is printed with the values 1, 2 and 3, but the second print statement, for new_list, includes the values 1 to 4. This function is now a pure function because it adds a value to a list without manipulating the original list outside the function.
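The finished version of the walkthrough above, as a sketch; the variable names follow the narration.

```python
my_list = [1, 2, 3]

def add_to_list(lst, item):
    nl = lst.copy()      # work on a copy so the original list is never touched
    nl.append(item)
    return nl            # pure: same inputs, same output, no side effects

new_list = add_to_list(my_list, 4)
print(my_list)   # [1, 2, 3]  - unchanged
print(new_list)  # [1, 2, 3, 4]
```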
In this demonstration you learned what a pure function is and what you need to do to change a function that's affecting a list in the global scope into a pure function that does not interfere with the original list. It's likely that you'll use pure functions regularly in your programming career, because pure functions keep your code cleaner, easier to debug and easier to extend. In programming, recursion is used for solving problems that can be broken down into smaller, repetitive problems; it's especially good for working on things that have many possible branches and are too complex for an iterative approach. One good example of this is searching through a file system. So what is recursion? Recursion is essentially a function that calls itself, creating a pattern of repeating itself over and over. So what does that mean from a coding perspective? In this example a function accepts a single argument, and inside the function it has some logic to deal with the problem it's trying to solve; the key part is the return, because in the code the return statement calls the same function again. Recursion is quite similar to a for loop: it will iterate, or in the case of a recursive function, call itself multiple times. But a warning: when you create a recursive function you must always consider the exit condition; if you don't, it will spin into an infinite loop and consume memory until the program eventually crashes or is terminated. Let's compare a looping and a recursive solution to finding the factorial of a number. We'll start with the looping solution. The looping function accepts a single integer called n as an argument and first checks if the number is less than zero; if it is, it returns zero, as you can't have the factorial of a negative number. The else condition sets the factorial to 1 and then loops through the range of the argument, which is 5 in this case; the loop calculates 1 times 2 times 3 times 4 times 5, which gives the answer 120, the factorial of 5. Now let's explore the recursive solution to the same problem. The recursive function is simpler and more compact, mainly because you no longer need the for loop to iterate over the n argument. The first line of the function checks whether the number is 1 and returns 1 if true; the else condition multiplies the argument n by calling the find factorial function recursively and passing in n minus 1.
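A sketch of both versions described above; the function names are assumptions based on the narration.

```python
# Looping version.
def find_factorial_loop(n):
    if n < 0:
        return 0            # no factorial for a negative number, as in the walkthrough
    factorial = 1
    for i in range(1, n + 1):
        factorial *= i      # 1 * 2 * 3 * 4 * 5 for n = 5
    return factorial

# Recursive version: the function calls itself with n - 1 until it reaches 1.
def find_factorial(n):
    if n <= 1:
        return 1
    return n * find_factorial(n - 1)

print(find_factorial_loop(5))  # 120
print(find_factorial(5))       # 120
```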
Recursion can be difficult to understand, so by way of explanation let's trace exactly what happens as the function calls itself. The function is being called over and over, and the part that changes is the value being passed into the function: each time, the argument n, or 5 in this case, is decreased by one, until it finally reaches 1. That stops the function from being called again and exits the recursive process. So how exactly did this code get the result of 120? This is provided by the return statement: each call multiplies n by the value returned from the call below it, and the final result is returned once every call has completed. Right, it's time to review the advantages and disadvantages of recursion. First the advantages: recursive code can make your code neater and less bulky, complex tasks can be broken down into easier-to-read sub-problems, and generating sequences can be easier to understand than with nested loops. But there are disadvantages: it can be harder to follow the logic in recursive code, recursive calls are expensive in terms of memory and sometimes inefficient, and it can be difficult to debug and step through the code. You should now be able to explain what recursion is and how it can be used to solve problems; I believe you'll benefit from using recursion in your code in the future. One of the basic ways to test a Python developer's problem-solving skills is to ask them how they would reverse a string; knowing how to do this is very useful in a production environment. Some programming languages have a built-in function to reverse a string; Python doesn't, but fortunately, due to the language's flexibility, there are several ways to do it. In this video I will show you two ways to reverse a string in Python. First I'll demonstrate how to do it with the slice syntax. To start off, I create a file called string_reversal.py. The format, or syntax, of a slice is that it always starts with the name of a string, an open square bracket, the start parameter, a colon, the stop parameter, another colon, and then the step parameter, followed by a closed square bracket. I add a hash symbol in front of this line to indicate that it is a note; this is called the extended slice syntax. The start and stop parameters are the indices between which the slice operates on the string, and the step parameter is the hops, or jumps, the slice makes while it traverses the given string. I will now first define a string, then manipulate the string with a slice, and finally print the string. I'll call the string trial and assign the word reversal as its value by typing trial, an equal sign, and the word reversal between double quotes. To manipulate the string I create a new string called new_trial, and I assign a value to new_trial with the slice: I type an equal sign, trial, and an open square bracket. To instruct Python to use the entire string I leave the start and stop parameters empty, so I simply type two colons, and then add the value of the step parameter as the number -1, followed by a closed square bracket. The negative value of the step parameter indicates that the string needs to be traversed from the right, one index position at a time, instead of the conventional direction starting from the left. Finally, I print the manipulated string to test whether it works: I type print and, between parentheses, the string name new_trial. I click on Run and, great, in the terminal the string has successfully been reversed. In summary, the entire string is traversed from right to left, one index position at a time.
This sliced object is then assigned to another string, which is then printed. It should be noted that you can apply the slice to the same variable; I only used a second variable in this example for clarity. The slice is the simplest way to reverse a string. I will now demonstrate another way you can reverse a string, this time using slicing together with recursion. I start by creating a new file and saving it as string_reversal_2.py. Next I define a function and pass a string variable to it, namely str: I type def, the function name string_reverse, and str between parentheses, followed by a colon. This function starts with a conditional if statement: I type if, len, open parenthesis, str, close parenthesis, two equal signs and the number zero, followed by a colon, and on the next line I return the value of str. Now let's add the else statement. The else statement will recursively call the function with a modified string every time: on the next line I add else and a colon, then on the next line I type return string_reverse, open parenthesis, str, but before I close the parentheses I add a slice by typing an open square bracket, the number 1 and a colon, followed by a closed square bracket. This time the string is traversed from the front, skipping the first character on every pass, and the first character that was skipped is appended to the remaining string, so I now add a plus sign, str, and the value 0 between square brackets. Outside the function I give str the value reversal, then I create a second variable that will store the value of the returned string; I call this variable reverse and assign to it the value of the function call. Finally I add a print statement for the variable reverse. Let's run the code. Success, the string displays in reversed order in the terminal. Essentially, the function calls itself, passing a shorter string on each recursion and appending the character it kept right after that call. In this video you learned two different methods of reversing a string in Python: the first using just a slice, and the second using a slice with recursion.
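Both approaches side by side, as a sketch; the parameter is named s here rather than str so the example doesn't shadow the built-in, otherwise the names follow the narration.

```python
# 1. Extended slice syntax: string[start:stop:step]; a step of -1 walks right to left.
trial = "reversal"
new_trial = trial[::-1]
print(new_trial)                   # lasrever

# 2. Recursion: reverse the rest of the string, then append the first character.
def string_reverse(s):
    if len(s) == 0:
        return s
    return string_reverse(s[1:]) + s[0]

print(string_reverse("reversal"))  # lasrever
```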
Let's say I want to generate a list using an existing list. The general process would involve applying some sort of operation to each element of the existing list and using those outputs to generate the new list. There are many ways you could do this in Python; in this video you will learn how to process a list with the map and filter functions. My file contains a list called menu, which holds various coffees. I want to filter this list for specific coffees, say all coffees that start with the letter c. I will do this by creating a function through which I will pass the list to compare each item to the letter c; then I will demonstrate how to get the output, first with map and then with filter. Before I start, let me talk you through the format of the map function, but keep in mind that the filter function follows the same format. To create a map I type map and then need to define its arguments. The map function accepts two arguments: the first is an actual function, in this case the function that I will use to match values based on a condition, and the second is the iterable that will be passed through that function, in this case the coffees from my menu list. Now let's create the function with the condition. I press Enter twice to move the map function down, type def and the name of the function, which is find_coffee, and then add a single parameter, coffee, between the parentheses, followed by a colon after the closing parenthesis. The coffee parameter will be each coffee from my list. I now need to check if the first character of an item in the list matches the letter c. To do this I create an if statement by typing if coffee and passing in 0 between square brackets to act on the first letter of the coffee variable; I then type the equal sign twice, followed by the letter c and a colon. I press Enter and on the next line type return coffee, which runs if the statement is true. To use the map function I assign it to a variable called map_coffee, followed by an equal sign and the map. Now I can pass in the arguments for the map function. Remember, the first argument is the function itself, so I enter the function name find_coffee; it's important to note that I am not calling the function, I'm just passing it in as an argument. I add a comma after find_coffee and then the second argument, the iterable, in this case menu. Finally I print out the value of map_coffee so you can focus on the results in the terminal. I click on Run and in the terminal I receive a map object as output. The next step is to iterate through the map object: I type for x in map_coffee, print the value of x, and click on Run again. Now I get the output of the map in the terminal: a list of values appears with a lot of Nones, except for cappuccino and cortado, and that's because cappuccino and cortado are the two matches for the letter c in the function. The great thing about the map function is that I did not have to create a for loop to go through the list: map takes the function as an argument and passes the menu list values into the function one by one, so it handles the iteration for me, which makes it quite useful. Next I'm going to demonstrate how to get the output with the filter function. To start, I comment out the section of the code related to the map function and clear my terminal. The filter function works in much the same way as the map function: I declare a variable called filter_coffee and assign the filter function to it, again adding the two arguments, namely the find_coffee function and menu. Then I print out the variable filter_coffee, click on Run, and receive a filter object as output. Now I iterate through the filter object just like I did with the map object: I type for x in filter_coffee, print the value of x, clear the terminal and click Run. This time only cappuccino and cortado are returned. Why is that? Let me explain the difference between map and filter. Map takes all the objects in the list and allows you to apply a function to them. Filter also takes all the objects in the list and runs them through a function, but it creates a new list and only returns the values for which the evaluated function returns true; that is why there are no None values displayed in the output for the filter function. You now know how map and filter work in Python and should be able to explain the difference between the two functions.
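Here is a sketch of the map-versus-filter demo; only cappuccino and cortado are confirmed above, so the other coffee names in the menu are assumptions.

```python
menu = ["espresso", "cappuccino", "latte", "cortado"]

def find_coffee(coffee):
    if coffee[0] == "c":
        return coffee          # implicitly returns None for everything else

map_coffee = map(find_coffee, menu)
for x in map_coffee:
    print(x)                   # None, cappuccino, None, cortado

filter_coffee = filter(find_coffee, menu)
for x in filter_coffee:
    print(x)                   # cappuccino, cortado (only truthy results kept)
```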
Programming languages are built upon certain models to ensure that code behaves predictably. Python primarily follows what is known as an object-oriented paradigm, or model, and as you'll soon discover, object-oriented programming, or OOP, relies heavily on simplicity and reusability to improve workflow. By the end of this video you'll be familiar with the object-oriented programming paradigm, and you'll also be able to identify the four main concepts that define object-oriented programming. Programming paradigms are a strategy for reducing code complexity and determining the flow of execution. There are several different paradigms, such as declarative, procedural, object-oriented, functional, logic, event-driven, flow-driven and more. These paradigms are not mutually exclusive, so programs and programming languages can opt for multiple paradigms; for example, Python is primarily object-oriented, but it's also procedural and functional. In simple terms, a paradigm can be defined as a style of writing a program. OOP is one of the most widely used paradigms today due to the growing popularity of languages that use it, such as Java, Python, C++ and more, but OOP's ability to translate real-world problems into code is arguably the biggest factor in its success. OOP has high modularity, which makes code easier to understand, makes it reusable, adds layers of abstraction and allows code blocks to be moved between projects. To help you better understand OOP, I'll first clarify some of its key components, which are classes, objects and methods. A class is a logical code block that contains attributes and behavior; in Python a class is defined with the class keyword. The attributes can be variables, and the behavior can be functions inside of it. You can create instances from these classes, which are called objects; in other words, a class provides a blueprint for creating an object. In more practical terms, let's say you want to record the attributes of employees at Little Lemon, such as their position and employment status; you could create a class called Employee and conveniently bundle those attributes in one place. Next let's discuss objects. As mentioned, an object is an instance of a class, and you can create any number of them. The state of an object comprises its attributes and behavior, and each one has a unique identifier to distinguish it from other instances; the attributes and behavior of the class are what define the state of the object. For example, you can create the object emp1 by calling the Employee class; once called, you can define the position and employment status attributes as shift lead and full time respectively. In code this would be written as emp1 equals Employee followed by Shift Lead and FT in parentheses; this is a case of instantiation, or creating an instance of a class. Finally there are methods, which are the functions defined inside a class that determine the behavior of an object instance. Let's say you want the Employee object to output a string that states the employee's position: you would first declare the function intro in the Employee class and then call it on an object to get the output; a short sketch of this Employee class follows at the end of this section. Now that you know about classes, objects and methods, let's explore the concepts that OOP hinges upon. The first one is inheritance, which is the creation of a new class by deriving from an existing one; the original is called the parent or superclass, while any derivatives are referred to as subclasses or child classes. The next concept is called polymorphism, a word that means having many forms. In the context of Python, polymorphism means that a single function can act differently depending on the objects involved; for example, the built-in plus operator works differently for different data types. In the case of integer data types the plus operator performs addition, such as three plus five equals eight; in the case of string data types, the plus operator performs concatenation, combining two strings together. This ability to modify functionality is called polymorphism. The third concept is encapsulation.
Broadly, encapsulation means that Python can bind methods and variables into a single unit of scope, such as a class, shielding them from direct access. Encapsulation helps prevent unwanted modifications, in effect reducing the occurrence of errors in outputs. And finally there is the concept of abstraction, which refers to the ability to hide implementation details to make data safer and more secure. Note that Python does not support abstraction directly and uses inheritance to achieve it; this is something you'll explore in more detail later. There are some other important concepts in OOP, such as method overloading, method overriding, constructors and more, which you'll also learn about in more detail later. In this video you became familiar with the OOP paradigm and the four concepts that support it: inheritance, polymorphism, encapsulation and abstraction. See you next time.
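As referenced earlier, here is a minimal sketch of the Employee idea just described; it uses an initializer, which is covered in more detail later in this section, and the attribute names and the intro message are assumptions.

```python
class Employee:
    def __init__(self, position, employment_status):
        self.position = position                    # attributes: variables on the instance
        self.employment_status = employment_status

    def intro(self):
        # A method: behavior defined inside the class.
        return "This employee works as a " + self.position

# Instantiation: creating an instance (object) of the class.
emp1 = Employee("Shift Lead", "FT")
print(emp1.intro())

# Polymorphism with the built-in plus operator:
print(3 + 5)              # addition for integers
print("Shift " + "Lead")  # concatenation for strings
```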
Classes have the ability to combine data and functionality, which is a very useful feature when you are coding. By the end of this video you'll be able to explain what classes, instances and objects are in Python; you'll also be able to create a class, instantiate it and access its variables and methods. You may have heard classes discussed in terms of attributes and behaviors: in general, attributes refer to variables declared in a class, while behaviors are associated with the methods in a class. Creating a class creates a new type of object from which you can create instances; an important thing to keep in mind is that everything in Python is an object, or derived from the object class. To demonstrate how this all works, I'll create a class that I can then derive objects from. In a new VS Code file I first type the keyword class followed by the name MyClass and a colon. I do need to take one more step so that Python doesn't throw an error, and that is to type the pass keyword on the next line; the pass keyword plays the role of a placeholder when nothing needs to be executed, and in practice it tells Python that I won't do anything with this class just yet. Next, let's create an object for this class: I create a variable called my_class and assign the class to it by typing equals MyClass followed by parentheses. If I run this code the output shows that it has executed without errors; however, just to check that it's working as expected, let's add a print statement to the class, print followed by the string hello in parentheses. When I run the code again, the word hello appears in the output. Let me clear the terminal before continuing. You may have noticed that I used the same name for both the class and its object, but the object name can really be anything; for example, if I change the object name to myc and run the code once more, it executes the same as before. Everything I've typed is part of the instantiation process in Python, which involves three key steps: one, class definition; two, creating a new instance; and three, initializing the new instance. Since everything in Python is an object, it makes sense to follow naming conventions to make things less confusing later; in this case I have MyClass for the class object and myc for the instance object. There is a third type of object, the method object, which you can use to call a method whenever it's needed. Classes mainly perform two kinds of operations: attribute references and instance creation. I've already written an example of the latter, so let's try building an attribute reference this time. First I create a variable a for the class object and assign it a value of 5. To print this variable I first need to refer to the class, so under the instance object I type print and then MyClass dot a; when I run the code it returns 5 in the output. To confirm that the class reference is necessary, I delete MyClass from the print statement and run the code again, and Python throws an error, so I correct the code and put MyClass back in. Let me clear the terminal quickly before continuing. So, you now know what happens if you reference a class object, but what if you reference an instance object? Let's find out by typing a print statement for myc dot a and running it; in the output I get 5, which shows that attribute references also work with instance objects. Finally, let's finish up by creating a method inside this class. I use the def keyword and follow it with hello, a pair of parentheses and a colon; on the next line I type a print statement for the string hello world. I'll also delete the first print statement to avoid confusion. To call this method I add a new print statement at the end of the document for myc.hello, which uses the instance object; this should work just as when I successfully called a variable through an instance object, right? Running the code results in an error, so methods are not quite that simple. Fortunately, I can resolve this by adding the keyword self within the parentheses of the method as defined in the class. Running the code again produces the words hello world in the output; you'll also find the word None printed below it, as there is no return value from the given function. That's a brief demonstration of classes, instances and objects: I created a class, then I was able to instantiate it and access its variables and methods. Code reusability is the use of existing code to build new software; reusability is a core programming concept. By the end of this video you'll not only be able to create a class and instantiate it with variables and methods, but you'll also discover how referencing the same variables and methods in separate instances can produce different outcomes, meaning that the code is reusable. I start by creating a new file called recipes.py, where I'll create a class called Recipe. Before continuing, let's also explore two special methods in Python. The first one is the new method, which is responsible for creating and returning a new, empty object. To write it I start with the def keyword followed by double underscore new; it then appears as a suggestion, so I click on it to fill out the rest. The cls here is not a keyword but a convention: it acts as a placeholder for passing the class as the first argument, which will be used to create the new empty object. The second method is the init method, which is similar to what's known as a constructor in some other programming languages; it takes the object you created using the new method, along with other arguments, to initialize the new object being created. I write it with def, double underscore init, and then choose the first suggestion that pops up. The init method takes the new object as its first argument; the self keyword here is another convention, with no function of its own, serving as a placeholder for self-reference by the instance object. So let's delete the two example methods and then write some code that demonstrates how to use the state of the object to your advantage. I begin with an init method which I then use to initialize a few values: I do this for the value dish by typing self.dish equals dish, and I then do the same for the values items and time. Before moving forward I want to check that the
code reusability is the use of existing code to build new software reusability is a core programming concept by the end of this video you'll not only be able to create a class and instantiate it with variables and methods but you'll also discover how referencing the same variables and methods in separate instances can produce different outcomes meaning that the code is reusable I'll start by creating a new file called recipes dot py where I'll also create a class called recipe before continuing let's also explore two special methods in Python the first one is the new method which is responsible for creating and returning a new empty object to write it I start with the def keyword followed by double underscore new it then appears as a suggestion so I click on it to fill out the rest the cls here is not a keyword but rather a convention it acts as a placeholder for passing the class as its first argument which will be used for creating the new empty object the second method is the init method which is similar to what is known as a constructor in some other programming languages it takes the object you created using the new method along with other arguments to initialize the new object being created I write it with def double underscore init and then choose the first suggestion that pops up the init method takes the new object as its first argument the self keyword here is another convention it has no function itself but serves as a placeholder for self-reference by the instance object so let's delete the two example methods and then write some code that demonstrates how to use the state of the object to your advantage I begin with an init method which I then use to initialize a few values I do this for the value dish by typing self.dish equals dish I then do the same for the values items and time before moving forward I want to check that the arguments in the initializer will match those of my instances to do so I add dish items and time after self imagine a real world scenario where a restaurant chef wants information about the recipes they have been using so let's write a class that will help them with that I have the variables dish items and time in which items will hold the recipe ingredients I now write a function to produce a string out of this information I type def contents and then self in parentheses on the next line I write a print statement for the string the plus self dot dish plus has plus self dot items plus and takes plus self dot time plus min to prepare here we'll use the backslash character to force a new line and continue the string on the following line for this to print correctly I need to convert the self dot items and self dot time references to strings by adding str at the beginning and encasing each reference in parentheses now that I have a class set up let's use it to create a pizza instance I write this as pizza equals recipe opening parenthesis the string pizza comma opening square bracket cheese comma bread comma tomato closing square bracket comma and 45 to represent the time followed by the closing parenthesis I also want a pasta object so let's copy and paste the code for the pizza object and change the object name to pasta the ingredients to penne and sauce and the preparation time to 55 now that I have a class and two instances let's see if I can access the instance attributes and methods I write two print statements for pizza dot items and pasta dot items when I run the code I find that despite passing the same function and variable items the two instances produce different contents so next let's try printing the instance method contents over pizza before we move forward let's clear the terminal so we can more clearly see what the output will be I type another print statement for pizza.contents and empty parentheses I run the code once more and the output uses the class method to print a line stating the pizza has cheese tomato and bread and takes 45 minutes to prepare that's a demonstration of creating a class and instantiating it with variables and methods then referencing the same variables and methods in separate instances to yield different outcomes
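here is a compact sketch of that recipe walkthrough the class and instance names follow the transcript and the exact print formatting is an approximation

```python
class Recipe:
    def __init__(self, dish, items, time):
        self.dish = dish      # name of the dish
        self.items = items    # list of ingredients
        self.time = time      # preparation time in minutes

    def contents(self):
        print("the " + self.dish + " has " + str(self.items) +
              " and takes " + str(self.time) + " min to prepare")


pizza = Recipe("pizza", ["cheese", "bread", "tomato"], 45)
pasta = Recipe("pasta", ["penne", "sauce"], 55)

print(pizza.items)   # ['cheese', 'bread', 'tomato']
print(pasta.items)   # ['penne', 'sauce'] - same attribute, different state
pizza.contents()     # the pizza has ['cheese', 'bread', 'tomato'] and takes 45 min to prepare
```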
let's try to solve a problem that may occur for the managers of two restaurants because the managers are busy running the restaurants they have limited time to deal with the needs of employees the current system for paying wages requires managers to update each other every time an employee requests a payment because this is cumbersome they would like to implement an automated approach so what can be done fortunately there's a way to reduce the number of steps using instances by the end of this video you'll be able to explain what instance variables and methods are you'll also know how to use them to change the state of an instance object so let's write some code to help those busy restaurant managers let's start a new file called payment info dot py in this file I'll create the class payslips and initialize three variables called name payment and amount I start by typing class payslips and then on the next line I create an init function with def double underscore init and then select the triggered suggestion for the variables I type each one in the format self dot variable equals variable next I'll create two functions one to display the status of the payments and another to update the status the first function is written as def pay with self in parentheses followed by self dot payment equals yes on the next line the second function is def status and contains an if else statement if self dot payment double equals yes return self dot name plus is paid plus self dot amount converted to a string with str the second part of the statement is else return self dot name plus is not paid yet finally let's create instances of this class for the employees I'll call them Nathan and Roger I type the first instance Nathan equals payslips and in parentheses Nathan for name no for payment and one thousand for amount for Roger I copy and paste this instance and set the values to Roger no and three thousand respectively I also need to make sure to pass these values inside the init method so I type name payment and amount after self now I'm ready to call the instance method status to check the status of the payments I write a print statement for Nathan dot status parentheses and for Roger dot status parentheses when I run the code the output all appears on one line which is not very presentable I add a new line character between the items in the print statement which is backslash n this time the output is much cleaner the output states that neither Nathan nor Roger have been paid but let's say that one manager decides to pay Nathan so I'll use the pay function to update the status remember that the pay function is set up to update the value of the payment variable I type Nathan dot pay parentheses and then copy and paste the last print statement above this line I type another print statement with the string after payment I run the code once more and it now tells me that Nathan was paid 1000 whereas Roger still has not been paid that's a demonstration of instance methods in action now I'll describe the code to you in more detail now let's discuss what happened in that coding example in more detail the two instance objects which are Nathan and Roger each have their own states you may have noticed that when the instance method pay was called to change the state of Nathan Roger was not affected this is because the method inside the class is not affected rather it provides a separate blueprint to each instance which can then be updated for that instance only in the coding example I didn't print the variable values after calling the pay function but if I did it would show that the payment instance variable for Nathan changed from no to yes while Roger remained no now let's imagine that this code is the basis for an online payment system it would allow either manager to click on the paid button for an employee which would then update that employee's status no more back and forth calls in this video you learned how to use instance variables and methods to change the state of an instance object without affecting any other instances
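to make that example concrete here is a minimal sketch of it the class described as payslips is rendered here as PaySlip and the string values are assumptions based on the walkthrough

```python
class PaySlip:
    def __init__(self, name, payment, amount):
        self.name = name
        self.payment = payment   # 'yes' or 'no'
        self.amount = amount

    def pay(self):
        self.payment = "yes"     # updates the state of this instance only

    def status(self):
        if self.payment == "yes":
            return self.name + " is paid " + str(self.amount)
        else:
            return self.name + " is not paid yet"


nathan = PaySlip("Nathan", "no", 1000)
roger = PaySlip("Roger", "no", 3000)
print(nathan.status(), "\n", roger.status())

nathan.pay()                 # changes nathan only, roger is unaffected
print("after payment")
print(nathan.status(), "\n", roger.status())
```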
when instantiating objects from a class you may find that the class is missing some properties that you use frequently in that case you could decide to make a new class that replicates the first one but also adds a few more properties it would be cumbersome to write everything from scratch but thanks to inheritance you don't have to by the end of this video you'll become familiar with inheritance in terms of child classes being derived from a parent class inheritance is a core concept in object-oriented programming generally and in particular in Python and it's a major part of code reusability you may know that everything in Python is an object but let's explore that idea more closely it specifically means that every class in Python inherits from a built-in base class called object which is found in builtins dot object in other words a class declaration such as some class with empty parentheses implies some class with object as its argument when speaking of class derivation the originating class is known as the parent class super class or base class the class which inherits from it is the child class subclass or derived class any name pairing is acceptable but the important thing to know is that the child class extends the attributes and behaviors of its parent class this allows you to do two things you can add new properties to the child class and you can modify inherited properties in the child class without affecting the parent so now let's explore an example of how this is done in Python here you have a parent class P which holds the variable a with a value of 7 then there is the empty child class C in which class P is passed as an argument and finally a lowercase c represents an instance of child class capital C if you write a print statement for c dot a and run the code the output is seven so even though C itself is empty it still holds the attributes inherited from P keep in mind that any changes in the parent class will also affect any child classes now that you have an idea of how inheritance works let's explore an example that demonstrates the flexibility it provides I begin by creating a new file called employment dot py and my first step is to create a parent class called employees where I'll define two variables for first and last names I do this by typing class employees colon and on a new line def double underscore init to trigger and select the init method suggestion for the first variable I type self dot name equals name on a new line and for the second I advance another line and type self dot last equals last I then add name and last to the init argument on line two after the word self next I'll create two child classes that both extend the employees class the first one I create is supervisors and to call the employees class I type class supervisors open parenthesis employees close parenthesis and a colon I then need to modify the init method of the supervisors class so that I can add another variable named password again I trigger and select the init method but this time it already includes the name and last variables by calling the employees class the super method has automatically been applied to access the variables there and initialize them within the supervisors class I proceed with adding the third variable password inside of the init method I then make it an instance variable with the line self dot password equals password now I'll write another child class called chefs again I extend the employees class by adding employees as an argument inside this class I want this one to contain a new function called leave request so I type def leave request and then self and days as the variables in parentheses the purpose of the leave request function is to return a line that specifies the number of days requested to write this I type return the string may I take a leave for plus str open parenthesis the word days close parenthesis plus another string days now that I have all the classes in place I'll create a few instances from these classes one for a supervisor and two others for chefs first I type Adrian equals supervisors followed by the values Adrian and a in parentheses I can then copy and paste this instance two more times to serve as a template for the chef's
instances the first chef is Emily and will hold the values Emily and E while the second chef Juno has the values Juno and J finally as an instance of the supervisors class Adrian needs another value for the password variable so I'll assign apple here next let's call the instance method over Emily and pass a value to it she wants to request three days off so I type print Emily dot leave request and the number three I'm also going to add another print statement that will check the value of the instance variable over the supervisor Adrian I type print Adrian dot password the third print statement prints the value of Emily's name variable now I run the code and get the following outputs the words may I take a leave for 3 days from the first print statement the word apple from the second one and the word Emily from the third print statement note that both the instance variables and methods inside the individual inherited classes are present along with the variables from the parent class in this video you've learned how inheritance in Python helps to make code reusable organized and less redundant
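pulling the two inheritance examples together here is a minimal sketch the class and variable names follow the transcript and the exact strings are approximations

```python
class P:
    a = 7

class C(P):          # empty child class still inherits a from P
    pass

c = C()
print(c.a)           # 7


class Employees:
    def __init__(self, name, last):
        self.name = name
        self.last = last

class Supervisors(Employees):                  # child class adds a password
    def __init__(self, name, last, password):
        super().__init__(name, last)           # initialize the inherited variables
        self.password = password

class Chefs(Employees):                        # child class adds a new method
    def leave_request(self, days):
        return "may I take a leave for " + str(days) + " days"


adrian = Supervisors("Adrian", "A", "apple")
emily = Chefs("Emily", "E")
juno = Chefs("Juno", "J")

print(emily.leave_request(3))   # may I take a leave for 3 days
print(adrian.password)          # apple
print(emily.name)               # Emily - inherited from the parent class
```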
in this video you'll learn about abstract classes and methods if you have an abstract class you can ensure the functionality of every class that is derived from it for example a vehicle could be an abstract class you can't create a vehicle but you can derive a car a tractor or a boat from a vehicle the methods we put in the abstract class are guaranteed to be present in the derived class because they must be implemented if a vehicle has a turn on engine method then we ensure that any method call to a derived class that is looking for turn on engine will find it this could be for reasons of interoperability consistency and avoiding code duplication in general in object-oriented programming the abstract class is a type of class for which you cannot create an instance python also does not support abstraction directly so you need to import a module just to define an abstract class furthermore methods in an abstract class need to be defined before they can be implemented with all these limitations one might wonder why you would use abstract classes at all one of the key advantages is the ability to hide the details of implementation without sacrificing functionality implementation in abstract classes can be done in two ways one is that as base abstract classes lack implementation of their own their methods must be implemented by the derived class another possibility is that the super function can be used but that's a topic for another time for now let's focus on the module for defining an abstract class you may not be familiar with modules right now but they will be covered in more detail later for now it's okay just to follow along the module is known as the abstract base class or ABC and needs to be imported with some code after that you can create a class called some abstract class and pass in the ABC module so that it inherits that class the next step is to import the abstractmethod decorator from the same module a decorator is a function that takes another function as its argument and gives a new function as its output it's denoted by the at sign you may not be familiar with decorators but for now it's enough to know that decorators are like helper functions that add functionality to an already existing function finally here you'll define an abstract method which cannot be called on an object of this class you will be able to call this method over objects of classes that inherit from this class similarly we can define abstract methods with the help of what we call an abstractmethod decorator present inside the same module any given abstract class can consist of one or more abstract methods however a class that has an abstract class as its parent cannot be instantiated unless you override all the abstract methods present in it first with that in mind imagine a scenario in which an employer wants to collect donations from employees for a charitable cause with your newfound knowledge let's write some code to make that possible first I import the ABC module and its abstractmethod then I create the employee abstract class and use the abstractmethod decorator to define a method called donate note that there's no implementation for this method here after that I create the class donation which derives from the abstract class note that this class also overrides the abstract method I write an implementation for the donate function which takes a user input stores it in variable a and returns it next I create two employee instances called John and Peter and call the function over each of them I also create a list amounts to which the returned values will be appended finally I have a print statement for amounts which will give the value of the total donations from both employees in this video you learned about abstract classes and methods and how to implement them in your code
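here is a sketch of that donation example the names follow the transcript while the int conversion and the final sum are assumptions since the walkthrough only says the returned values are appended and the total is printed

```python
from abc import ABC, abstractmethod

class Employee(ABC):
    @abstractmethod
    def donate(self):        # no implementation in the abstract class
        pass

class Donation(Employee):    # must override donate before it can be instantiated
    def donate(self):
        a = input("enter your donation: ")
        return a


john = Donation()
peter = Donation()

amounts = []
for employee in (john, peter):
    amounts.append(int(employee.donate()))   # assumed conversion to a number

print(sum(amounts))          # total donated by both employees
# Employee() would raise TypeError because the class is abstract
```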
up to this point you've explored class relationships that were relatively straightforward but what happens when things get complex how will you know which classes inherit from which fortunately method resolution order or mro provides rules that can help make sense of that by the end of this video you'll know how to explain the basic rules of method resolution order and how they apply to inheritance in classes explain the concept of code linearization with respect to multiple inheritance and deploy method resolution order functions in Python you've likely encountered some examples of single inheritance where a child class only inherits from a single parent class but it's important to know that python has many types of inheritance the categorization types are based on the number of parent and child classes as well as the hierarchical order including simple inheritance there are broadly four types of inheritance the first type is called simple inheritance which you've already dealt with there is also multiple inheritance which involves a child class inheriting from more than one parent next is multi-level inheritance which is inheritance taking place on several levels then you have hierarchical inheritance which concerns how several subclasses inherit from a common parent and finally you could say that there is a fifth type called hybrid inheritance which mixes characteristics of the others as these inheritance types demonstrate inheritance becomes increasingly complex as the number of classes in a project grows and the classes become more interdependent so how do developers solve this issue with the use of mro the mro determines the order in which the hierarchy of classes is searched when resolving a given method or attribute or in other words where it belongs the order of the resolution is called the linearization of a class and the mro defines the rules it follows the default order in Python is bottom to top and left to right when imagining the inheritance of these python classes in a tree structure let's take the simplest example of single inheritance the object is first searched in the class of that object and then in its superclass what about in an example where class Z is inheriting from two classes let's say Z is inheriting from classes X and Y in this instance the mro will be Z Y and then X in other words the mro works its way bottom to top and then from left to right but things become much more complicated when more levels are added to the hierarchy so developers rely on algorithms to build mros old style classes used a depth first search algorithm or DFS from python version 3 onwards python has moved to the new style of classes that rely on the C3 linearization algorithm the implementation of the C3 linearization algorithm is complex and beyond the scope of this lesson but for now here's an overview of a few rules that it follows the algorithm follows monotonicity which broadly means that an inherited property cannot skip over direct parent classes it also follows the inheritance graph of the class and the superclass is visited only after visiting the methods of the local classes this logic will make more sense later when you explore more complex class relationships in a future lesson next let's take a moment to explore some methods of finding the mro of a class first I'll begin with a demonstration of the mro attribute or function let's take a multi-level inheritance example comprised of three classes class A class B and class C class A is the parent class with B and C the respective child classes in other words B inherits from A and C inherits from B when I print the return for calling the mro function over class C the output indeed confirms that this is the order that is followed so why is this important well imagine that class A has a variable num with a value of 5 and then class B also has a num variable with a value of 9 here the mro function tells you quickly that class C will inherit the value 9 from class B finally let's examine one more function which is the help function if I take the code from earlier and replace the mro function in the print statement with the help function it provides a much more detailed output with mro information at the top it also contains information about the data descriptors and types used inside the code in this video you received a brief introduction to method resolution order and how it affects inheritance in different scenarios these are both very broad topics but hopefully it helps you understand the complexity of code that is possible in Python
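here is a small sketch of the multi-level example just described the variable values follow the transcript

```python
class A:
    num = 5

class B(A):
    num = 9

class C(B):
    pass

print(C.mro())   # [C, B, A, object] - the order in which the hierarchy is searched
print(C.num)     # 9 - resolved on B, the nearest class in the mro
# help(C)        # prints the mro plus data descriptors and other details
```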
well done you've reached the end of this module on programming paradigms in this module you explored procedural programming functional programming and object-oriented programming we started the module by saying that procedural programming is considered the easiest and a basic stepping stone to object-oriented programming and the first choice for new developers next you explored functional programming which in essence is a programming paradigm that utilizes creating functions for clean consistent and maintainable code lastly you learned that object-oriented programming is about creating objects that contain both data and methods these concepts should now make sense to you it's now time to recap the key lessons you learned and the skills that you gained with that in mind let's summarize the key points you learned in this module you should now be able to describe the concept of procedural programming describe what algorithms are and how they can be used to solve problems identify how algorithmic complexity is calculated and recognize how algorithmic complexity can help in improving performance you should also be able to describe Big O notation explain what functional programming is explain how pure functions are used in functional programming and explain recursion and how it can be used to solve problems what you have learned however did not stop there you should therefore also be able to use different methods to reverse a string in Python explain the difference between the map and filter functions explain object-oriented programming and the four concepts it is built upon and describe the relationship between classes and instances in Python finally having studied the remaining key points in this module you should now be able to create classes instantiate classes access their variables and methods and change the state of instance objects by using instance variables and methods this module gave a comprehensive introduction to different programming paradigms in Python this is essential knowledge that prepares you to be able to create even better programming code cars are a major part of our lives that make it easier to move around but if you needed your car to do more such as handle driving in the snow or carry large objects well you'd probably modify it by adding winter tires or hitching a trailer to it in a similar way python is a powerful language that allows developers to build amazing things but it can gain even more functionality with the use of modules in this video you'll learn about modules in Python and why they are used you'll also explore the different types of modules and be able to explain where they can be found now you may wonder what a python module is imagine that modules work like instructions to make a pie instead of trying to figure out what the steps are to create your pie you follow the instructions modules work in the same way they are building blocks for adding functionality to your code so you don't need to continually redo everything a python module contains statements and definitions so a file like sample.py can be a module named sample and can be imported modules in Python can contain both executable statements and functions but before you explore how they are used it's important to understand their value purpose and advantages modules come from modular programming this means that the functionality of code is broken down into parts or blocks of code these parts or blocks have great advantages which are scope reusability and simplicity let's delve deeper into these everything in Python is an object so the names that you use for functions variables and so on become important scoping means that modules create a separate namespace so two different modules can have functions with the same name and importing a module makes it a part of the global space in the code being executed reusability is the most important advantage of modularity so when you write a piece of code modules help you avoid the need to write all the functionalities that you may need duplication of code duplicates your efforts uses more computer memory and is less efficient let's say for example you want to import the math package you automatically get access to a lot of functionality such as factorial the greatest common divisor or gcd and so on that can be reused without defining them one other feature that using modules brings is simplicity when modules have little dependency on each other it helps achieve simplicity so each module is built with a simple purpose in mind modules are defined by their usage so you can also use the regular expression or re module for managing regular expressions
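as a small illustration of that kind of reuse here is a hedged sketch using the built-in math and re modules the sample values are made up for the example

```python
import math
import re

print(math.factorial(5))    # 120
print(math.gcd(12, 18))     # 6 - greatest common divisor

# the re module handles regular expressions
print(re.findall(r"\d+", "table 4 ordered 2 pizzas"))   # ['4', '2']
```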
Simplicity also helps in avoiding interdependency among these modules so if you're working on data visualization importing a single module like matplotlib is sufficient for visualizing your data there are different types of modules that exist in Python the main difference between these modules is the way the modules are accessed let's cover built-in modules some modules are already built into the standard python library when you use a statement like import math in your python code for example the interpreter first tries to find built-in modules so how do you import and execute modules in Python the first important thing to know is that modules are imported only once during execution if for example you import a module that contains a print statement print open brackets close brackets you can verify it only executes the first time you import the module even if the module is imported multiple times since modules are built to help you standalone modules hold all the functions but will most likely not contain functions that execute without being called it's only when the user executes the different functions inside that module that they will find the utility of those functions in the code the import is normally placed at the beginning of the code but you can place it at any point in the code since code execution in Python is serial you must import the module first before you execute any function inside of it modules can also be imported from within a function this means that the code inside that module can only be used once the function is executed in this video you covered modules in Python you learned about the different types of modules and how they can be used to save you time and make your work more efficient in Python you can access different types of modules such as built-in modules and user-defined modules from different locations think of the built-in modules as a house that you want to build using pre-built and packaged floors walls and a roof that you can just assemble this means you don't have to try and find hammers bricks plaster and tiles to build walls and floors saving you time and making your work more efficient accessing built-in and user-defined modules in Python works in the same way and helps to save time and build efficiency while you're coding remember that any python file can be a module the modules are searched by the interpreter in the following sequence first the current directory path second the built-in module directory third the python path an environment variable with a list of directories and finally it investigates the installation dependent default directory let's explore this in greater detail in this video I'll demonstrate how to access different types of modules such as the built-in modules and user-defined modules from different locations let's write some code and learn how to access some built-in modules I begin by creating a new file called my calendar dot py in visual studio code I then use sys dot path and store the values that I get from it in a variable called locations I finally print the values using the print function I now try running this code unfortunately this does not work as python has no idea what sys is to resolve this I'm going to try and import the built-in sys module I'm going to run the code again the print function returns all the possible locations that the interpreter is going to look in when searching for modules including the current working directory but this doesn't look very clean I know that I have a list of values so
I'll run a for loop that loops through every location in turn this returns a much cleaner result by printing each location on its own line in the terminal now it's always good practice to import all the required modules right at the beginning but I can do this in a different way I'll import a module here in the middle of the code I'll import another built-in module called calendar I'll now use a couple of functions that the calendar module has I'll now use a function called leapdays which has two inputs year one and year two and it will be returning an integer value so what I'm going to do is write the leapdays function with two input years and return the value in a variable called leap days I'm going to print the value of the variable I get a return value of 13 which means there are 13 leap days between 2000 and 2050 now I'll use another function this function is called isleap it takes one of the years as an input and returns a Boolean value it tells you if a given year is a leap year so let's try 2036 and return the value in another variable called is it leap this time I get the value of true because 2036 is a leap year if you decide to explore a little bit you can hover over calendar if you use a Macbook press the command key or if using Windows press the control key which will take you to the calendar file by clicking on it note how the calendar module itself has imported a few other modules and other than that it contains all the functionality I now find the location of calendar inside the python 3.9 package which is one of the locations listed in the terminal by the print locations loop I ran earlier you just learned how to access built-in modules and user-defined modules from different locations I encourage you to start using modules in your code to make your work more efficient
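here is a compact sketch of that my calendar demonstration the file name and variable names follow the transcript

```python
import sys
import calendar

locations = sys.path          # the list of places the interpreter searches for modules
for location in locations:
    print(location)           # one location per line, including the current directory

leap_days = calendar.leapdays(2000, 2050)
print(leap_days)              # 13

is_it_leap = calendar.isleap(2036)
print(is_it_leap)             # True - 2036 is a leap year
```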
in this video you will learn how to use import statements for accessing modules from different directories you will also learn how to install packages from the python package index using pip every python file which means any file with a dot py extension containing a script is effectively a module the check imports file I am currently creating is therefore a module the file whose code you are working with is generally called the main module in this case check imports is the main module present in the current working directory also called the scope of the project you can import any python file that is present in the current scope for example I can import the sample.py file by typing import followed by the file name without the extension I then click on run in the top menu the system returns a message in the terminal pane that the import was successful if I try to import a file with a dot txt extension the import will not be successful for example if I type import followed by sample text and click on run the system will return an error message in the terminal pane as it is not a python file python has a library of standard modules called built-in modules these modules are directly built into the python interpreter and don't have to be installed separately I can import a module like json by typing import json once I execute the command I can start using its functions directly the list of built-in modules can be found in the python standard library you can think of packages as the structuring of python modules as a collection special files called underscore underscore init underscore underscore dot py files are required for python to treat directories containing the file as packages python has a rich collection of community built packages that I can find on the python package index or PyPI pip or pip3 is the default package installer for Python and helps with the installation of packages from PyPI since I have already installed numpy I can import it directly in Python I do this by typing import numpy clearing my terminal and clicking run if I try to import a package that is not installed I will get an error message for instance if I type import seaborn and click run the message module not found error is returned if the package seaborn were installed I could run the command again in Python without any error messages to do this I would run pip install seaborn in the terminal to download the package from the PyPI index I can also import files I have created in one of the folders within the current working directory I have a folder called workplace containing a file called trial dot py the file consists of a list variable called names with two entries inside it I'm going to import this file and access its contents I start by importing the sys module next I use the insert function of sys dot path by typing sys.path.insert now I must enter the path name to my workplace package in the first index location to do this I right-click on the workplace directory and select copy path I enter this path name as the first index location when passing the path as an argument I must use single quotes and type the letter r in front of the path string the sys.path list now has a new directory where it will look for modules now I must import my trial file here by typing import trial and pressing enter a squiggly line appears below the word trial this is because the IDE does not know about the path I've added inside sys.path however I can still proceed as the interpreter will know about this path to print the output I type print followed by trial.names and click the run button to execute it the values of Adrian and Maria from the names list variable are printed in this video you learned how modules can be imported from anywhere within your system inserting the path name can however be very specific and often tricky and confusing don't worry about this too much for now it is more important to focus on importing files from your current directory it is nice to know that importing modules from other directories is an option if you need it it is good practice though to move the required files into the directory that you are working in
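a minimal sketch of that directory import follows the path below is a placeholder so swap in the path you copied to your own workplace folder which is assumed to contain trial.py

```python
import sys

# insert the folder at index 1 of the module search path
# the r prefix keeps backslashes in a Windows path from being treated as escapes
sys.path.insert(1, r'/path/to/workplace')

import trial                 # trial.py defines a list called names
print(trial.names)           # for example ['Adrian', 'Maria']
```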
let's explore how to use modules with the import statement I have already created a file called imports dot py I will now import the built-in math module by typing import math just to make sure that this code works I'll use a print statement I do this by typing print importing the math module after this I'll run the code the print statement has executed most of the modules that you will come across especially the built-in modules will not have any print statements and they will simply be loaded by the interpreter now that I've imported the math module I want to use a function inside of it let's choose the square root function sqrt to do this I type the words math dot sqrt when I type the word math followed by the dot a list of functions appears in a drop down menu and you can select sqrt from this list I pass 9 as the argument to the math.sqrt function assign this to a variable called root and then I print it the number three the square root of nine has been printed to the terminal which is the correct answer instead of importing the entire math module as we did above there is a better way to handle this by directly importing the square root function inside the scope of the project this will prevent overloading the interpreter by importing the entire math module to do this I type from math import sqrt when I run this it displays an error now I remove the word math from the variable declaration and I run the code again this time it works next let's discuss something called an alias which is an excellent way of importing different modules here I assign an alias called m to the math module I do this by typing import math as m then I type cosine equals m dot and select cos from the list of functions after which I add the number 0 in parentheses on the next line I'll print cosine and then I run the code the result is the cosine value of 0 which is 1 this is possible because I used the alias called m if I tried writing math dot cos it wouldn't work because the math module is now recognized as m instead let me remove this code from the screen and clear the terminal before we continue an alias can also be used for a function that's imported for example I can type from math import factorial as f to alias the factorial function now I assign f of 10 typed as f opening parenthesis the number 10 and a closing parenthesis to a variable called factorial 10 I'll print the variable and see if it works when we run the code we see that it works just fine using an alias in this way reduces the effort of typing factorial every time after I remove the alias I can import as many functions as I'd like from a given module I'm going to import log10 and sqrt I create a variable using the log10 function to find the value of log base 10 of 50 I do this by typing x equals log10 opening parenthesis 50 and closing parenthesis and again I print the variable on the next line and when I click run I'll see whether or not it worked once again it worked just fine now what if I want to import all the functions inside a given module I can remove the functions I added earlier and replace them with a star this basically translates to import all from the math module when I run the code again let's see if it works and it does however this practice of using a star is not the best approach in certain cases for example this is a small file and I know that the log10 function is present inside the math module but when you work with a large code base it could be difficult to track where the log10 function came from additionally when you're importing other modules it may get confusing importing packages is very similar to importing modules in Python just like you can have imported functions you could also import variables and classes from a given module now I replace the star with a variable called some variable which may be present inside the given module let me try to run the code again since the math module doesn't have such a variable it throws an error when I print it the interpreter is not able to import some variable from math in this video you explored different methods that can be used to import modules in Python using keywords like import from star and as this enables you to use the modular structure of python in object-oriented programming in general
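here is a short recap of those import styles in one sketch the values mirror the walkthrough

```python
import math
print(math.sqrt(9))              # 3.0 - module dot function

from math import sqrt
print(sqrt(9))                   # 3.0 - the function is now in the current scope

import math as m                 # alias for the module
print(m.cos(0))                  # 1.0

from math import factorial as f  # alias for a single function
print(f(10))                     # 3628800

from math import log10, sqrt     # import several names at once
print(log10(50))

from math import *               # import everything - harder to trace in large code bases
print(log10(50))
```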
now you should be fairly familiar with how modules work let's look at another related concept in Python namespaces and scopes the official python documentation defines a namespace as a mapping from names to objects and a scope as the textual region of a Python program where the namespace is directly accessible at this point the dictionary with its key value pairs serves as the ideal data structure for the mapping of names and objects you have also learned how every python file can be a module you can view the module as a place where python creates a module object a module object contains the names of different attributes defined inside it in this way modules are a type of namespace namespaces and scopes can become very confusing very quickly and so it is important to get as much practice with scopes as possible to ensure a standard of quality there are four main types of scopes that can be defined in Python local enclosed global and built-in the practice of trying to determine in which scope a certain variable belongs is known as scope resolution scope resolution follows what is known commonly as the LEGB rule let's explore these local is where the first search for a variable happens that is in the local scope enclosed refers to names defined inside an enclosing or nested function global refers to names defined at the uppermost level or simply outside functions and built-in refers to the names present in the built-in module in simpler terms a variable declared inside a function is local and the ones outside the scope of any function generally are global here is an example the output for the code on screen shows the same variable name used in different scopes there are three possible declarations of the variable at the global level inside the function b or inside the nested function c which is called from within b the id function is used here in the print statements which returns the identity of the objects you can make some observations from the output the id for the global variable alpha remains the same as defined after the code is completely executed the id for the local variable beta inside the function b remains unchanged before and after the execution of the nested function c the id for gamma is assigned only within the scope of the nested function and the id for all three variables is different even if they all have the same variable name variables in Python are implicitly declared when you define them that means unlike other programming languages there is no special declaration made in Python for the variable which specifies its data type what it also implies is that a given variable is local not global when it is declared unless stated otherwise this contrasts with most other programming languages where variables are global by default so when a variable is declared in a global space it is also local to that space this can be understood with a simple example if you look at the content of both of these dictionaries you can see how the value for the key country is different in both of the cases you have also used two special built-in functions called locals and globals that list the contents of the dictionary inside both of these scopes here you can see the output in this example you can see the global variable declared remains unchanged while global variables are acceptable they are discouraged for a number of reasons when you are working with production code the project structure can get complex and working with global variables can be hard to diagnose which can lead to what is called spaghetti code other concerns such as access modifiers concurrency and memory allocation are also better handled with local variables while you are just beginning your journey using python it is always a good idea to integrate good practices into your code there are two keywords that can be used to change the scope of variables global and nonlocal the global keyword helps us access the global variables from within a function nonlocal is a special type of scope defined in Python that is used within nested functions only on the condition that the variable has been defined earlier in the enclosing function
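before moving to the coding example here is a small sketch of how locals and globals expose those scopes the variable name country follows the example described above

```python
country = "global value"            # global scope

def show_scopes():
    country = "local value"         # local scope shadows the global name
    print(locals()["country"])      # local value
    print(globals()["country"])     # global value

show_scopes()
print(country)                      # global value - the global name was never changed
```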
now you can write a piece of code that will better help you understand the idea of scope for attributes you have already created a file called animalfarm.py you will be defining a function called d inside which you will be creating another nested function e let's write the rest of the code you can start by defining a couple of variables both of which will be called animal the first one inside the d function and the second one inside the e function note how you had to first declare the variable inside the e function as nonlocal you will now add a few more print statements for clarification for when you see the outputs finally you have called the e function here and you can add one more variable animal outside the d function this will be a global variable you can add a call for the d function and a print statement for the global variable you can save this file and run the code first the global animal variable gets assigned to camel then call this function and once inside it assign elephant to the local animal then declare the inner function e and proceed by printing before calling function followed by animal where the value of animal will be the local value which is elephant once you are inside the inner function e you use the nonlocal keyword to declare that you are going to use the animal variable and you change the value to giraffe and here you can see that the print statement will give inside nested function the value is giraffe which stays consistent even after you get out of the inner function so when you print after nested function the value still remains giraffe once the function is fully executed come out to see that the value of global animal will be camel which you had assigned at the beginning so you can see the changes that you have made inside are not going to affect the value of the global variable let's look at one last thing if you comment the local variable out this will throw an error you can see there is no binding present for nonlocal animal inside the d function which was required here
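here is a sketch of that animal farm walkthrough the print labels are loose approximations of the ones described

```python
animal = "camel"                       # global variable

def d():
    animal = "elephant"                # local to d - commenting this out breaks nonlocal below

    def e():
        nonlocal animal                # binds to the animal defined in d
        animal = "giraffe"
        print("inside nested function", animal)    # giraffe

    print("before calling function", animal)       # elephant
    e()
    print("after nested function", animal)         # giraffe

d()
print("global animal", animal)         # camel - the global value is unchanged
```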
in this video I'll cover the reload function that's used with import statements the reload function reloads an imported module in Python the only precondition is that the argument passed to it must be a module that has already been successfully imported within the program previously you learned how the import statement is only loaded once by the python interpreter but the reload function lets you import and reload it multiple times I'll demonstrate that first I create a new file sample.py and I add a simple print statement that prints hello world remember that any file in Python can be used as a module I'm going to use this file inside another new file and the new file is named using reloads dot py now I import the sample.py module I can add the import statement multiple times but the interpreter only loads it once if it had been reloaded we would have seen hello world several times however I can change this with the help of the reload function let me remove this code and add the importlib module where the reload function sits then I pass the module name as an argument to this function note that the sample module has been imported more than once and I could do it as many times as I want now to better demonstrate how the reload function can be used I'm creating another file called file changes.py this file is going to list the contents of a particular directory in the following code I will be updating the contents of the directory and be able to monitor the changes using a file that I will import since the interpreter loads the file only once the reload function will allow us to reload that import and effectively update the changes every time without stopping the execution of running code I begin by importing the built-in os module and I use a function called os dot listdir inside it next I pass the current path as an argument by right-clicking the files tab at the top and selecting copy path I paste this as an argument to the listdir function and add an r before the path because I'm looking for a directory and not a file I remove file changes.py from here I'll save the output from the listdir function into a variable called contents on the next line I'll add a print function for the contents variable before running our program I'll clear the terminal to make things clear the return value should list the files that are present in the given directory you'll notice that it indeed lists all the files present in this directory once printed I now go back to the using reloads dot py file and I clear the file then I once again import the importlib module after this I import file changes and create a function called changes as good practice I add a try block and use the reload function to pass file changes as an argument let me go back to the file changes.py file and create a function that will print the contents variable this is now complete but I'll add another print statement for clarity I save this file and then I go back to using reloads dot py I call this function that I just created inside the file changes module and because I want to make the try block work I add the except and just pass for now after this I execute the code using a for loop because I want to do it more than once I use the range function and call the function that I just wrote to take some control of the program I'll add an input statement here now the program will execute five times and every time it will load the file changes module and list the contents of the directory to make it more interesting I'm creating a few text files inside the directory now I've returned to the using reloads dot py file to run this code note that the content of the current directory is listed here but I'll now remove the text file called text3.txt when I execute the code again by hitting enter you'll notice that the particular text file has been removed now without changing anything else I'll execute the rest of the code if I also change the content of file changes dot py for example changing the print statement before the file names I can see the change reflected after I press enter again as I demonstrated the reload function can be used for making dynamic changes within your code with the help of import statements
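here is a hedged sketch of those two files side by side the function name show contents and the placeholder path are assumptions made for the example

```python
# --- file_changes.py ---
import os

contents = os.listdir(r'/path/to/project')   # placeholder path, evaluated on every reload
def show_contents():
    print("directory contents")
    print(contents)

# --- using_reloads.py ---
import importlib
import file_changes

def changes():
    try:
        importlib.reload(file_changes)       # re-import so edits and new files show up
        file_changes.show_contents()
    except Exception:
        pass

for _ in range(5):
    changes()
    input("press enter to check the directory again ")
```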
packages are bundled collections of modules in Python serving a specific purpose in Python there are currently tens of thousands of packages to choose from and in this video you'll learn about some of the most popular you can think of this collection of packages like a traditional real world library each package is a book or magazine and this library gets bigger every day in programming a package is a directory or folder and in the same way a module is a file or document you import packages in the same way as a module using the import statement and like with the import statement it's important to remember that unless defined correctly the import serves no purpose for example suppose you want to import a package named foo the import statement on its own will not serve any purpose it needs to be in a format like from foo import a where foo is the package and a is the module containing the functions you want exploring the package's directory structure or referring to code blocks online can save time to work with packages in Python it's important to know that pip is the default package manager and the python package index or PyPI is the package index where you can find and publish packages python has an extensive collection of packages as a developer starting out it can be overwhelming but it's important to understand what python is most widely used for today the major application areas for python are data science AI and machine learning web frameworks application development automation and hardware interfacing with this in mind packages can be grouped into categories for example built-in packages data science machine learning and AI web and GUI development let's explore each of these briefly now starting with built-in packages these are packages that don't need to be installed separately and can be used as soon as you've installed python almost every project uses one or more of these built-in packages so it's worth getting to know them well the most popular ones are os sys csv json importlib re math and itertools in the world of data science the most popular python packages are numpy scipy nltk and pandas these are all used for data exploration and manipulation other packages like opencv and matplotlib are used for image processing and data visualization within the world of machine learning or ML and artificial intelligence or AI the most popular packages are tensorflow pytorch and keras pytorch and keras are currently the most popular for deep learning and neural network implementation there are other packages such as scipy scikit-learn and theano choosing which package to use will depend on the scale and scope of the project and how familiar you become with the package in question okay let's move on to web development python today is primarily used for ML AI and web development the most popular packages are flask which is a lightweight micro framework and Django which is a full stack framework other popular web development packages include cherrypy pyramid beautiful soup and selenium there are also other packages for robotics game development and other specialized domains for any domain you want to work in you'll find several python packages relevant to it while no one package may be a perfect fit for your current project the open source community of python developers is working relentlessly to fill the gaps as a beginner python coder most functions you need will be met by one package to continue expanding your knowledge of python packages you should think of a project you'd like to create and experiment with the packages I've mentioned in this video in this video you learned about packages in Python you covered built-in packages and some of the most popular packages in use today python is one of the best languages to use for various data science projects and applications in this video you'll learn about some of the commonly used python libraries in data analysis and data science the last decade has seen exponential growth in all data science areas the demand for data analysts and scientists is continually increasing as it's a requirement for developers to incorporate scientific and data analysis into their code python has emerged as one of the most popular languages
with data scientists one of the main reasons for its popularity is the large number of different open source packages these have been developed by thousands of contributors collaborating to provide free usable resources many packages are top-rated because they are efficient and provide outstanding functionality in no particular order of preference these packages include numpy scipy matplotlib and scikit-learn as an example scikit-learn is used for predictive learning and is built on top of other popular packages it consists of various supervised and unsupervised machine learning algorithms for classification regression and SVMs modeling data is the primary focus of this library and it provides popular models such as clustering feature extraction and selection validation and dimensionality reduction pandas is short for python data analysis and this is a data analysis and manipulation tool it's used primarily for working with data sets and provides functions for cleaning analyzing and manipulating data using it I can compare different columns and find the arithmetic mean max and min values the primary data structures used in pandas are series and data frames while series are single dimensional and can be compared to a column in a table data frames are multi-dimensional and can potentially store tables efficiently they are agnostic to the data types being stored pandas' most common applications are reading CSV files and JSON objects and using them within python code for faster retrieval pandas is known to bring speed and flexibility to data analysis the pandas library is normally imported with the code import pandas as pd numpy stands for numerical python and is a powerful library forming the base for libraries such as scikit-learn scipy plotly and matplotlib python scientists use the abilities of numpy especially when working in scientific domains such as signal and image processing statistical computing and quantum computing numpy carries out the calculations needed for algebraic areas such as Fourier transforms and matrices the backbone data structure in numpy is called ndarray or n-dimensional array which substitutes the conventional use of lists in Python and is a much faster solution than lists the dimensions in numpy are called axes and the number of such axes is called a rank conventionally numpy is imported with import numpy as np matplotlib is the visualization library used in Python it can be used to create static interactive and animated visualizations many third-party tools such as ggplot and seaborn extend the functions of matplotlib these functions are located inside the pyplot subpackage matplotlib is imported with import matplotlib dot pyplot as plt such libraries can also be used together for instance matplotlib and numpy can display a graphical representation of students in a class or the distribution of their scores to recap what you've learned in this video you should now know about the most commonly used python data analysis and data science packages
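to make those descriptions a little more tangible here is a small hedged sketch it assumes pandas numpy and matplotlib are already installed with pip and the sample data is made up

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

scores = pd.DataFrame({"student": ["Ana", "Ben", "Chen"],
                       "score": [72, 88, 95]})
print(scores["score"].mean(), scores["score"].max(), scores["score"].min())

arr = np.array([[1, 2], [3, 4]])               # an ndarray with two axes
print(arr.ndim, arr.shape)                     # 2 (2, 2)

plt.bar(scores["student"], scores["score"])    # simple view of the score distribution
plt.show()
```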
artificial intelligence or AI is broadly about making machines think like humans data science primarily focuses on the management and exploration of data which may include media such as text audio images and video machine learning or ML is a subsection of AI and deals with algorithms for training and generating insights from data many fields utilize machine learning some of the most widely used areas are natural language processing deep learning sentiment analysis recommender engines computer vision and speech recognition with the amount of text image and video data available today data science and AI in particular are in greater demand than ever python is one of the most popular languages used in these domains the reasons are syntactical efficiency and readability flexibility with different languages frameworks and operating systems a welcoming and large community of developers the ability to build ML models without having to understand the intricacies user-friendly debug and testing tools and modular structure these have promoted the development of many primarily open source libraries and frameworks it's important not to get confused with the terms package library and framework a package is a collection of modules and both library and framework are often used interchangeably with package libraries can also be a collection of packages with a specific purpose whereas the term framework is usually used where a certain flow and architecture is involved it's important to remember that all of these pieces of python code are used with the help of import statements some of the most popular ML libraries in use today are in the areas of deep learning and neural networks computer vision and image recognition natural language processing data visualization and web scraping it's important to understand that these are broad categorizations most of the libraries associated with them are not restricted to a particular field every project is unique and should be treated as such the right selection of the library can save precious time when coding in this video you learned about machine learning libraries many fields use machine learning and those fields like deep learning and neural networks rely on open source machine learning libraries that make developers' work easier these libraries are collections of packages and the selection of the right library can save you time when coding so in the future think carefully about which library you should pick for a project to make sure that it suits your needs web frameworks are software applications designed to provide us with a standard way to build deploy and support web applications that we can use on the web they help developers to focus on application logic and routines by automating redundant tasks which helps cut development time they also provide easy structuring and default models within a reliable stable and easily maintainable environment saving time and effort web frameworks are primarily written in high level code which removes the overhead required for understanding concepts such as sockets threading and protocols as a result time is better spent working on application logic instead of routines python is a popular language in web development thanks to several features such as good documentation abundant libraries and packages ease of implementation code reusability a secure framework and easy integrations the different frameworks in Python are efficient and make it easy to handle tasks such as form processing routing requests connection with databases and user authentication they also provide debugging and testing tools to handle profiling test coverage test automation and so on there are mainly three types of web frameworks in Python these are full stack micro frameworks and asynchronous let's explore each briefly now full stack frameworks are considered a one-stop solution and usually include all the required functionalities this can include form generators and validators template layouts HTTP request handling wsgi interfaces for connection with
web servers and database connection handling some of the most popular full stack python frameworks are Django Web2py and Pyramid micro frameworks are a lighter version of full stack frameworks that do not offer as many patterns and functionalities they are usually used in smaller web projects and building apis Flask Bottle Dash and CherryPy are some of the popular micro frameworks as the name suggests asynchronous framework types are used to handle a large set of concurrent connections they are mainly built using asyncio networking libraries Growler AIOHTTP and Sanic are some of the names you'll encounter choosing a framework can depend on many factors this can include things like available documentation scalability flexibility and integration while this categorization is pretty broad it's important to remember that each framework in Python has its own unique set of features and functionalities this can make certain frameworks more suitable than others for a specific project two of the most widely used are Flask and Django let's explore each briefly now Django is a high level framework that encourages clean design and rapid development it's a full stack framework that's rich in features and libraries it's secure and has templating systems and third-party support it primarily gained popularity due to its rapid deployment speed you can quickly build scalable apps without extensive knowledge of low-level programming Flask is a micro framework better used for smaller projects it's easy to learn simple to use and has a large library of add-ons in this lesson you learned about web frameworks and the different types you also learned about the different web frameworks in Python such as Flask and Django testing is an essential component in quality assurance that ensures our software applications and websites work as expected for example suppose you've built your own website and it has a few hundred visitors every day one day an article you've published goes viral suddenly a million people are visiting your site and the website crashes another scenario is online forms we've all faced situations where we fill out a form and a prompt appears telling us that we have made a mistake for example accidentally entering letters in the space provided for credit card numbers or missing special characters in passwords this type of data validation is even more critical especially in the domains of banking and finance in this video you'll learn about testing and its importance in the software development life cycle but what exactly is testing well software testing is a process of evaluating and verifying the various software applications and products in terms of performance correctness and completeness it helps identify bugs gaps in the product defects and missing requirements compared with set expectations in the early days of computers software developers relied heavily on debugging a process for detecting and removing potential errors after the 1980s as software grew in size several different testing types and products also grew in parallel depending on the requirements testing was primarily done in the later stages of the software life cycle now it's evolved to be integrated at the early stages as well the efficiency of any testing type is dependent on how well written it is the ideal testing scenario is to have the fewest tests written to find the largest number of defects while software testing is important in any scenario the real test of the product comes when it's launched to market there it's judged by stakeholders and users
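To make the micro framework idea mentioned above a little more concrete, here is a minimal sketch of a Flask application. It is purely illustrative and not part of the course demos; the route and returned message are invented for this example.

from flask import Flask

app = Flask(__name__)          # create the application object

@app.route("/")                # map the root URL to a view function
def home():
    return "Hello from a Flask micro framework app"

if __name__ == "__main__":
    app.run(debug=True)        # start the built-in development server

A full stack framework such as Django would generate much more of this structure for you, which is exactly the trade-off described above.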
we live in the internet age products with bugs especially in the early stages make consumers lose interest very quickly as many alternatives are available this is where testing plays an important role and here are a few reasons why it can help testing helps detect poor design changes inefficient flow or functionality address scalability concerns and find security vulnerabilities testing helps provide A/B testing to find the most suitable options address compatibility with platforms and devices provide assurance to stakeholders and a better experience for end users there are a few good practices that must be followed in testing to achieve optimal results test code should allow reusability of tests tests must be traceable to the requirements set and tests written must be purpose driven efficient and allow for repeatability these testing techniques can then follow a procedural approach according to the type of testing used the testing life cycle in general can be broadly described as planning preparation execution and reporting the steps involved in achieving this can include writing scripts and test cases compiling test results correcting defects based on them and generating reports from our test results okay so you've already learned about test cases they are a general set of actions containing steps data pre and post conditions written for a specific purpose this purpose can be improving functionality or flow or finding defects a well-written test case eventually provides good coverage reusability better user experience reduces costs and increases overall satisfaction as the tech industry is ever growing several testing categories types tools and products have evolved which are tailored to best meet the requirements of the software in question for example a web page will have different testing needs than an Android based game even among web pages a social media page will differ from say a financial management page testing can be categorized by several different factors for example depending on the amount we know about the internal implementation we can call it black box or white box testing there are also many testing types used in practice these include compatibility ad hoc usability and regression testing don't worry too much about these terms for the moment you'll learn more about them later for now I just want you to know that with testing there is no one-size-fits-all solution when testing products it's also important to understand when to stop as no application will ever be 100 percent perfect otherwise a developer may feel the product is well tested but realize it's full of bugs and flaws as soon as it's released to the end users a few metrics can be established for this purpose given that there are well written test cases in place these include a certain number of test cycles the percentage of passing test cases time deadlines and time intervals between subsequent test failures testing in software development can be seen as the anchor of a ship or insurance for your vehicle you can hope that everything operates smoothly but often it does not and while you can aim for perfection there is always potential for human error quality assurance today has become an important component in the software development life cycle much of the credit goes to the development of testing tools and techniques the question is what type of testing should you use in this video you'll learn about the types of testing including the four main levels or categories of testing which are unit integration system and acceptance testing there are
different ways in which you can categorize the different test types there are white box and black box tests white box testing is where the tester has knowledge of the code design and functionalities black box tests function with no such information and the tester has no idea about the internal implementation there are also other ways to categorize different tests as functional non-functional and maintenance tests let's explore these functional tests are based on the business requirements stated they determine if the features and functionalities are in line with the expectations non-functional tests are more complex to define and involve metrics such as overall performance and quality of the product maintenance tests occur when the system and its operational environment are corrected changed or extended but there are also manual and automated testing methods that are dependent on the scale of the software the most broadly accepted categorization is in terms of the levels of testing as you move ahead in the software life cycle let's delve deeper into these levels of testing the four main levels of testing are unit or component testing integration testing system testing and acceptance testing the four types of testing levels build on each other and have a sequential flow let's explore these now in unit or component testing the programmer tests specific individual components by isolating them the components are low level which means that they are closer to the actual written code they often involve use of automation for continuous integration given their small size so you usually write these tests while writing the code for example if the code is in Python unit tests can be written with packages such as pytest integration testing combines the unit tests and tests the flow of data from one component to another the keyword here is interface this means that you test if the data is correctly fetched from a database within the python code and if it is sent correctly to the web page there are different approaches to it such as top-down bottom-up and sandwich approaches your approach depends on which code level interfaces you attempt first it builds on unit testing and the tests that come before it next is system testing which tests all the software you test it against the set requirements and expectations to ensure completeness this includes measurements of the deployed components such as reliability performance security and load balancing it also measures operability in the working environment such as the platform and the operating system this is the most important stage handled by a team of testers it's also the most critical stage as the shipping of software to the stakeholders and end users happens after this phase the final type of testing is acceptance testing when the product arrives at this stage it's generally considered to be ready for deployment it's expected to be bug free and meet the set standards the stakeholders and a select few end users are involved in acceptance testing it normally involves alpha beta and regression testing one way of approaching this is to give pre-written scenarios to the users you use the results for improvements and try to find bugs that were missed earlier all the different testing levels are designed to optimize software at different stages the key to testing is testing early and testing frequently while each of the testing phases is important early detection saves time effort and money as the code gets increasingly complex mistakes become harder to fix
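As a rough illustration of the difference between the unit and integration levels described above, here is a small pytest-style sketch; the functions and data are hypothetical and not taken from the course.

# hypothetical components
def get_price(menu, item):
    # low level component: look up a price in a menu dictionary
    return menu[item]

def price_with_tax(menu, item, rate=0.1):
    # higher level component that depends on get_price
    return round(get_price(menu, item) * (1 + rate), 2)

# unit test: isolates a single component
def test_get_price():
    assert get_price({"pizza": 10}, "pizza") == 10

# integration test: checks that data flows correctly between components
def test_price_with_tax():
    assert price_with_tax({"pizza": 10}, "pizza") == 11.0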
it doesn't necessarily mean that unit testing will happen only at the beginning and acceptance at a later stage there are many testing cycles where these levels are approached iteratively a typical example is the agile model here you release different versions of the product iteratively and you perform acceptance testing every few weeks in this video you learned about some of the types of testing such as unit testing integration testing system testing and acceptance testing it's important to remember that the purpose of these testing methods is to build a systematic approach for testing and identify faults and improvements as early as possible this results in improved overall performance and experience well done with advancements in technology and an increasing drive towards code automation in this video you'll learn about test automation packages and the importance of automated testing in the past machines have substituted human effort in making goods which helps us save both time and effort in programming the tests chosen for automation are the ones that have high repeatability and volume predictable environment and data and deterministic outcomes there are a number of testing types that can be automated these include unit regression and integration testing an ideal test code must form a bridge between the programming calls and the test cases python does a fine job in achieving this in addition to its clean concise way of coding there are some well-written frameworks in Python and some are more well accepted than others the ideal steps involved in test automation are usually preparing the test environment running the test scripts and analyzing the results okay let's now examine some important python testing frameworks that have grown in popularity over the years first let's explore the built-in testing package PyUnit or unittest the unittest framework supports test automation independent testing modules and aggregation of tests into collections next is pytest a native python library that is simple easy to use and reasonably scalable pytest will be demonstrated later in this course it can handle several functional test types such as unit integration and end-to-end there is support for parameterized testing which enables us to execute unit tests multiple times with different parameters passed it can run parallel tests and generates HTML XML or plain text reports you can also integrate it with other frameworks like PyUnit and Nose2 and web frameworks like Flask and Django while primarily used for testing apis it's also widely used with UIs database connections and other web applications easy creation and quick bug fixes are why pytest is the most popular testing framework for automation next is Robot which is popular primarily for its keyword driven development capabilities these keywords are used in test cases and can be predefined or user defined Robot is very versatile and used for acceptance testing robotic process automation or RPA and test driven development it can be used for many domains including Android apis and mainframes Selenium is another open source testing framework that has gained popularity over time and is primarily driven towards web applications it has support for the majority of web browsers and operating systems there are browser-specific web drivers that enable testing functionalities like logging in button clicks and filling in forms it allows the tester to select the speed and execution of tests and has an option to run specific tests or test suites apart from the popular frameworks
pytest Robot and Selenium there are many more it's important to know that a number of these testing frameworks are often used with other tools such as plugins widgets extensions test runners and drivers these tools help integrate the software pieces being tested and add functionality sometimes more than one framework is employed over the code being tested in this video you learned about test automation packages let's recap quickly automation testing is an important reason why the software industry is able to move ahead swiftly and more smoothly manual testing provides focused attention and the ability to handle nuances and complex problems with more sophistication this kind of testing can't be replaced by automated tests yet it's still some time before test scenarios can be fully automated but the development of all these frameworks is in line with that endeavor in this video I will demonstrate how to use pytest to create simple tests for unit testing pytest is one of the most popular modules for unit testing in Python this is because it allows you to do simple tests with minimal effort and it also has simple clean code with good documentation first I create a file called addition.py next I add a function and pass two variables a and b inside it I'm just going to do a simple calculation that will return the sum of these two variables similarly I create another function called sub which will perform the subtraction between the two variables second I create another file called test_addition.py in which I'm going to write my test cases now I import the file that consists of the functions that need to be tested next I'll also import the pytest module after that I define a couple of test cases with the addition and subtraction functions each test case should be named test underscore then the name of the function to be tested in our case we'll have test_add and test_sub I'll use the assert keyword inside these functions because tests primarily rely on this keyword it checks for conditions in your code and expects a Boolean value of true or false when the return value is true the test passes when it is false the test fails let's add assert statements to our tests in our first test we'll assert that the addition of four and five is nine and in the second test we'll assert that the subtraction of 4 and 5 is negative 1. next I make a split screen so that I can see both files now I run pytest and I specify the file over which I'm going to do the testing to do this open a new terminal and enter python -m pytest and the name of the test file test_addition.py I ran the code and both tests passed this means that both the assert statements have been confirmed to be true four plus five is nine and four minus 5 is negative 1. these two dots after test_addition.py in the terminal also indicate that both tests passed now I will intentionally make one of these tests fail I do this by changing the negative 1 answer to negative 2.
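The two files built in this demonstration would look roughly like the sketch below; the name add for the first function is an assumption, since only the second function sub is named explicitly in the narration.

# addition.py
def add(a, b):
    # returns the sum of the two variables
    return a + b

def sub(a, b):
    # returns the subtraction of the two variables
    return a - b

# test_addition.py
import addition
import pytest        # imported as in the demo, though plain asserts do not strictly require it

def test_add():
    assert addition.add(4, 5) == 9

def test_sub():
    assert addition.sub(4, 5) == -1

# run from the terminal with: python -m pytest test_addition.py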
I make sure that I have saved the file clear my terminal and I'll run the test again note that the first test passed but the second one didn't also note that where there were previously two dots there is now only one dot and an F to indicate that the second test failed the Es at the start of the lines show where the test failed and it supplies the possible reason as to why it failed I can also write these tests without the assert statement and just add pass this passes the test regardless of any errors when I run the code again it indicates that both tests passed you should note that I used an equality operator here but I could have used less than greater than or keywords such as is in or not in all that matters is that the assert statement gets a Boolean value I can also add multiple assert statements in a single function so if I write assert true that should return the result and when I run the code again it passes both tests but if I make it false it will show that one test has failed this indicates that all the assert statements within a given function should return a true value for the test to pass note that using the test underscore prefix for both the file name as well as the function name is good practice now I'll restore my code and save the file if I want to run my test over a specific function I just add a double colon at the end of the file name and then I write the function name I'll clear my terminal first then run my code note that only the function I specified has run congratulations you now know how to write simple test functions in this video you learned that you can use pytest to implement unit testing and how to create and use simple tests for unit testing testing has been a relatively recent entry in the software development lifecycle but its importance has been growing as time passes software development is time sensitive and in the process developers often find testing gets squeezed into the time remaining after the code is written this doesn't leave enough time to test and can lead to the software containing bugs that need to be dealt with over time test driven development or tdd is an alternative programming practice in which the tests are written first and the code is written so that the tests will pass this differs from the convention of first writing the code and testing the application progressively tdd follows an iterative approach beginning with writing the test cases the initial work requires feature and test planning by the team with slight variations let's explore the standard steps step one you write a test for a feature that fails in step two you write code in accordance with the tests step three requires that you run the tests expecting them to fail in step four you evaluate the error and refactor the code as needed and finally in step five you rerun the process this process is also called the red green refactor cycle red implies the failed tests and green shows the passing tests after refactoring the whole point of following this cycle is to fail the tests and rewrite until you don't have to a feature is complete when everything is green and you no longer need to rerun you can use a package library such as pytest when automation becomes a priority pytest only requires writing functions while unittest requires classes this means that pytest has the advantage of being easier because it requires less effort okay now let's explore some of the advantages you gain from tdd writing tests first and refactoring code based on it ensures the tests cover the code
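To illustrate the point that pytest only needs plain functions while unittest needs classes, here is a small hedged comparison of the same check written both ways; it is illustrative only and not taken from the course demos.

# unittest style: a test class is required
import unittest

class TestAddition(unittest.TestCase):
    def test_add(self):
        self.assertEqual(4 + 5, 9)

if __name__ == "__main__":
    unittest.main()

# pytest style: a plain function with an assert is enough
def test_add():
    assert 4 + 5 == 9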
you can now write tests with a specific feature and outcome in mind the need for such forecasting provides clarity from the beginning the forecasting also plays a role in integrating different components where you add new features and interfaces in accordance with the components that are already there working in cycles over the code gives a developer confidence to easily refactor in terms of additional changes overall smaller code with early bug fixes code extensibility and eventual ease of debugging are the primary reasons tdd is growing in acceptance finally let's briefly explore some of the differences between tdd and traditional testing the main difference is that with tdd the requirements and standards are highlighted from the beginning making it purposeful modern day development often employs a combination of both these forms of testing depending on the different parts and stages of the software development cycle there are several subtypes and variations of test driven development these include behavior driven acceptance test driven scaling and developer test driven development these are all options that can be used in the software development process congratulations in this video you learned about the process of test driven development in this video you'll learn about how to apply the test driven development methodology in conventional testing you follow the process of writing code and then writing test cases to ensure the integrity of that code in test driven development or tdd the approach is the other way around and test cases are where you must begin your thinking the steps involved are as follows write test cases with some functionality in mind write code in accordance with the test cases ensuring that they pass and refactor code in case the tests fail let me demonstrate an example to design a test case that checks student enrollment with data stored in a database the test needs to check the integrity of the names entered first I'll demonstrate how to design the test case and then write some code let's say I'm checking student enrollment for a class exam against a list of names that I already have to keep things straightforward I'm going to use a python list with the names instead of a database I want to make sure that the names I enter are on the list and I also want to ensure data integrity which means that I must be sure that the names are entered in the correct format I've created two files the first is test_findstring.py which is my testing file and the second findstring.py is my main file I already have the pytest package installed and because this is test driven development I'll write my test function first I begin by importing the curses module that will help me check the ASCII characters that are present then I import the pytest module as well as the findstring module which is my main file I define a function named test_is_present and I add an assert statement to check if the is_present function works because I'm going to use it to validate my data entry contrary to the conventional approach of writing code I first write test_findstring.py and then I add the test function named test_is_present in accordance with the test I create another file named findstring.py in which I'll write the is_present function I define the function named is_present and I pass an argument called person into it then I make a list of names written as values after that I create a simple if else condition to check if the passed argument is present
inside the list so the function called is_present will check if the name passed is present in the list let me test my code note that the test has passed because the name Al is in the list but this doesn't ensure the integrity of entries I may add for example I might not want numeric characters in the names to address this issue I write another function named test_no_digit I'm going to update some of the code in my main program findstring.py in accordance with the newly added test to do this I create a function called no_digit that matches my test and again I create a simple if else condition I run the code and you'll note that one of the tests has passed and the other one has failed so the name Al passed because it's on the list of names but the value N7 didn't pass because it contains a numeric character I could also add more test cases and modify my code so that it's suitable for the test cases and I can repeat the cycle until I have no more failed tests congratulations you've now explored the test driven development methodology you've reached the end of this module on python modules packages libraries and tools great work during this module you learned that python is a powerful language that allows you to build amazing things but it can gain even more functionality with the use of modules libraries and tools you may remember that we started with modules and learned that they are the building blocks for adding functionality to your code so you don't need to continually redo everything next we explored some of the commonly used python libraries in data analysis and data science and lastly you discovered that testing as a tool is an essential component in quality assurance that ensures your software applications and websites work as expected it's now time to recap the key lessons you learned and the skills that you gained with that in mind let's summarize the key points you learned in this module you should now be able to explain what modules are in Python and why they are used identify the different types of modules and explain where they can be found explain how to access built-in and user-defined modules from different locations and use import statements to access modules from different directories you should also be able to install packages from the python package index using pip import modules using import statements and explain and use the reload function in Python you have now learned that python has an extensive collection of packages and should be able to describe typical module use cases differentiate between built-in and user-defined python packages list some popular python packages and list some common python libraries used in data analysis and data science in the module you also learned about libraries frameworks and testing you should therefore also be able to recognize popular python libraries used in machine learning and artificial intelligence explain big data and analysis with python define python web frameworks and list different types of web frameworks describe testing and explain the different types of testing list the four main levels or categories of testing describe testing packages in Python such as pytest Selenium and Robot and explore the importance of automated testing and you should now be able to explain the test driven development tdd methodology and list the features of test driven development this module was an introduction to python modules libraries and tools this knowledge enables you to extend the ability of your programming code in this course you were
introduced to the foundations of python development let's do a brief recap of what you covered module one was getting started with python in this first module you learned about the different ways developers use python in the real world and discovered the rationale for the existence of python you checked out your hardware and software by running Visual Studio Code and carrying out operating system environment checks identifying any required dependencies you explored variables and data types and worked with strings casting and data files this led you to the section on control flow and conditionals where you got to use Python operators and build looping and flow controls into your code in module 2 you moved on to some core programming skills with python including exploring functions and data structures scopes lists tuples sets and dictionaries and kwargs with all that code built it was time to check for errors you finished module 2 by examining errors exceptions and file handling and considering approaches to error handling nearly halfway through the course in module 3 you discovered all about the paradigms of functional and object-oriented programming and associated logical concepts we also had an introduction to algorithms and to python classes and instances nearly at the course end in module 4 you learned how you could boost your coding environment by using modules libraries and tools in Python you also learned about the different types of testing and how to write a test well done on completing this course recap it's almost time to put your knowledge to the test in the graded assessment are you ready to display all your python abilities congratulations on completing the programming in python course you've worked hard to get here and acquired a lot of important skills during the course you should now have a great foundation of back-end web development skills this is the base for you to continue building on in the future and you've also demonstrated your skill sets in the graded assessment following completion of this first course you should now be able to complete basic programming with python distinguish between the programming paradigms of procedural functional and object-oriented programming demonstrate how to use modules packages and libraries and work within a test driven development environment so what are the next steps well this is one course in the back end developer professional certificate while you've established a good foundation so far there's still more for you to learn so if you've enjoyed this course and want to discover more why not enroll in the other courses throughout each of these courses you'll continue to develop your skill sets whether you're just starting out as a technical professional or student this program will equip you with the knowledge of back-end development practices as used in many business areas such as web development artificial intelligence machine learning data analytics and many other applications you'll have written a portfolio of python code that will demonstrate your skills to potential employers this shows employers that you are self-driven and innovative it also speaks volumes about you as an individual and your drive to continue your growth once you've completed all the courses in this certificate you'll receive a Coursera certification for backend developer these certifications provide globally recognized and industry endorsed evidence of mastering technical skills congratulations once again on reaching the end of this course it's been a
pleasure to embark on this voyage of discovery with you best of luck and do continue to follow your learning journey welcome to the next course in database engineering the focus of this course is on database clients let's take a few moments to review some of the new skills that you'll develop in these modules you'll begin the module by learning about the MySQL python connection and you'll learn about using pip to install packages or software you'll then learn how to install a front-end python client and connect it to a back-end mySQL database you'll then explore how to establish communication between Python and MySQL to perform crud operations once you've established a connection you'll then access a cursor object once you access the cursor object you'll create a mySQL database and table using python you'll then learn how to commit changes in a mySQL database using python in the third and final lesson of module one you'll explore the concept of a cursor in a mySQL database you'll learn how a cursor works in Python and MySQL you'll also review the key characteristics of cursors and discover that they're read-only non-scrollable and asensitive you'll then learn that the cursor class is used to translate communication between python and a mySQL database and you also learn how to identify different cursor classes and you'll also review the basics of interleaving requests the second module focuses on performing create read update and delete or crud operations in a mySQL database using python you'll start the module by learning how to create and read records in a database you'll review the steps for this process and discover how python communicates with a database to carry out these actions you'll then explore how to perform MySQL update and delete operations using Python and you'll learn how to commit the changes to the database you'll then complete this first lesson by performing a series of lab exercises in which you demonstrate your ability to carry out crud operations in a mySQL database using python in the second lesson of module 2 you'll review advanced queries in a mySQL database using python the first of these queries involves filtering and sorting data in a mySQL database using python you'll recap the basics of MySQL filtering and sorting techniques from earlier courses and learn how these same techniques are applied in python next you'll learn how to perform a range of different join operations to combine data from different tables in a mySQL database using python you'll receive an opportunity to test your ability to perform advanced queries in a mySQL database using python through a series of labs module 3 focuses on advanced database clients the first lesson in this module begins with an overview of how to use MySQL functions with python you'll begin by learning how to identify the importance of MySQL functions and you'll review the different types of functions available in MySQL once you've finished recapping the basics of MySQL functions you'll then learn how to implement or access MySQL functions using python you'll also explore date time functions in Python and learn how to make use of these functions to update a mySQL database using python you'll then demonstrate your ability to make use of these functions in lab exercises in the second lesson in this third module you'll explore using MySQL stored procedures with python you'll recap the basics of stored procedures learn how they differ from functions and how they're created in a mySQL database using python you'll then learn
how to access stored procedures through python with the use of the callproc method and you'll also review the use of delimiters the third and final module focuses on connection pools you'll begin by developing an understanding of the concept of database connection pooling you'll learn how database connection pooling works and you'll find out how to identify the advantages of database connection pooling you'll then review the steps for creating a connection pool for a database including the process for implementing the MySQL connection pool module you've reached the end of this course introduction it's now time to begin the next chapter of your database engineering journey good luck there have been instances in my career where they were interdependent and I had to release them both at the same time and so the only solution was to release code in the middle of the night when we had the lowest traffic and hope that for the five minutes we were down with one while the other one was updating that only a few users ran into the issue and sometimes it's the only thing you can do [Music] I'm a software engineer at meta in the Menlo Park office I work on donations products so making sure that we are tracking who is donating to which fundraisers to which charities it is very important that we don't get this wrong we want to make sure that we're storing the right data and getting the right data to display to the right user and what we mean by that is we want to ensure data validity we want to ensure that we are making sure that we are only getting the data that the specific user should see that we are saving the right data these are very important concepts when designing a database that you want to be reliable you want to make sure the relationships make sense and we need to make sure that we're not accessing any data that a specific user shouldn't see the integrity and quality of a database and its design is one of the first and primary steps to ensuring that our data is protected our data is being stored in a safe way and our users can trust that their data is not being mishandled this requires us to have a well-designed database that thoughtfully considers the different relationships that data can have with each other as well as having some plan for changes in the future the process I generally follow when designing a database is to first come up with the core data model that I need for a product to work I start with that because for me it's the easiest to conceptualize how we're going to access the data we need from a product perspective and then after that initial data model is built I start drilling down into specific privacy and validation details that we need so should certain data be encrypted should certain data be only accessible via certain servers these are all different questions that we are going to have to ask and depending on the type of data you store which could be credit card information or user information there are going to be even more checks that you're going to have to integrate into your database and database design in order to accommodate that the core considerations for a well-designed database at meta are to first and foremost respect user privacy and user safety a user should know where their data is and have access to it and they should know that it's not being used in any products that they are not comfortable with the other thing we need to ensure is scalability it does not matter if you have a really well thought out data model if
it cannot scale up with the billions of users that meta products see every day I think one of the most common challenges when you're working with a database isn't the initial time you've built a database because you have this concept in your model of a relationship and then something comes along and changes it the first thing to do is really be thoughtful about how you can change your database in the future how it will scale not only with volume of users but with types of products the other thing that is very important to think about is how to modify an existing database this is going to be a very challenging thing and the majority of the work around databases is a lot of this stuff we have this old data model it doesn't suit our needs today how are we going to overcome this and that requires just a lot of thinking and planning on your part how to migrate the data these are big tasks that take time to kind of figure out but that is some of the ways that you become better at managing databases and building better data models in the first place as a database engineer you're going to have access to a lot of data and you really need to keep in mind how integral that is to user trust or the trust of anyone trusting you with this data even if you think it's kind of trivial data there's a lot of trust being put in you to make sure that you're being responsible I really want you to walk away knowing that database design and database engineering are critical to the overall product experience and integral to user trust in the product that you're building many of the web applications that you use every day rely on a mySQL database to send and store data and many of these same applications are developed in Python database engineers can send and store data from a python-based application to a mySQL database using a MySQL python connection in this video you'll explore the basics of this important connection and learn how it works over at little lemon the restaurant needs to connect a python application with its mySQL database they need to form this connection so that they can perform basic MySQL tasks using Python and they also need your help with these tasks but first you need to understand how the connection between MySQL and python works so let's take a few moments to explore the connection between python and a mySQL database a connection is established between python and a mySQL database using an application programming interface or API an API is also commonly known as a driver or a client the API is a written set of programs or software that acts as a bridge between the front-end python application and back-end mySQL database this connection can be created using different apis like SQLAlchemy mysqlclient and MySQL connector python MySQL connector python is the most common API and the one that you'll focus on in this course a useful way of understanding the connection between a mySQL database and python is to visualize it picture a diagram with a python application on one side and a mySQL database on the other in between these two elements is the MySQL connector python API in a typical interaction between these elements the front-end python application sends a connection request to the connector API the connection request is the python application asking for permission to access and retrieve information from the database using python the API forwards the request to the back end mySQL database the database accepts the connection it then sends a message to the python application back through the API confirming that
the connection has been established in other words MySQL gives the API permission for the python application to access the database now the python application is connected to the database once the connection is established you can then instantiate an instance of the cursor from the connector class and when a cursor object is created you can then execute SQL queries in the mySQL database using python let's look at little lemon's database as an example little lemon need to check what time a guest is arriving for dinner using their python application the date and time data for the guest's booking is stored in the backend database so the python application uses the execute module from the cursor object to carry out the customer's request the records are returned through the cursor object in the form of tuples that show the booking slots of each guest once the request has been fulfilled the cursor object and database connection can be closed you should now understand and be able to explain how a connection works between a Python application and a mySQL database I look forward to teaching you more about the MySQL python connection in other videos as a database engineer you'll frequently work with python to perform crud operations in a mySQL database but before you can work with python you first need to install and configure python software on your system so that you can create a connection between Python and MySQL let's look at the installation and configuration process for creating this connection the first step is to download the most recent version of python from the python.org website follow the site's installation instructions once you've installed python you then need to open the application select the search icon and access the command prompt type python --version to identify which version of python is running on your machine if python is correctly installed then Python 3 should appear in your console this means that you are running python 3.
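If you prefer to confirm the version from inside Python rather than from the terminal, a quick sketch like this works; the exact version numbers printed will depend on your installation.

import sys

print(sys.version)                      # full version string, for example 3.x.y
assert sys.version_info.major == 3, "Python 3 is required for this course"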
there should also be several numbers after the three to indicate which version of Python 3 you are running make sure these numbers match the most recent version on the python.org website if you see a message that states python not found then review your python installation or the relevant documentation on the python website now that you've installed python you need to choose an IDE or integrated development environment to run your code on this is software that you can use to display your code this course uses the Jupyter IDE to demonstrate python so it's probably best if you also use the Jupyter environment to install Jupyter type python -m pip install jupyter once Jupyter is installed type python -m notebook this opens a new instance of the notebook for you to use within your default browser now you can set up your working environment select the new button in Jupyter then choose a new folder this action generates an unnamed folder rename the new folder to mySQL python course content and then access it you can now save your projects and other files in this location now select new again and choose the Python 3 ipykernel option this opens a new tab in which you can enter your code you now need to connect python to your mySQL database you can create the connection using a purpose-built python library called MySQL connector python this library is an API that provides lots of useful features for working with mySQL the MySQL connector python needs to be installed separately using a package installer called pip the pip package is included with the python software you just installed rename the notebook instance to configuring MySQL connector now you need to use pip to install the MySQL connector python to install the connector type an exclamation mark and then pip to call the package type the install command then type the name of the library which is mysql-connector-python finally press shift and enter or select run to execute the code the output of this code is that the installation steps have now been performed as required and a list of libraries has been installed python can now access the functionalities of all these libraries you can import libraries in Python by typing the import command the name of the library and an alias for example to import MySQL connector python in a cell in your jupyter notebook just type import mysql.connector as connector the import syntax tells python that there is a library you want to import and make use of the mysql syntax refers to a subfolder that pip has installed which hosts the connector the dot before connector is known as an access operator it tells python that you want to access the connector subfolder using the dot operator finally the use of as is a method of renaming the import using aliasing this is like the aliasing that you encountered in previous database courses when working with joins aliasing is a common practice in Python development you can use custom names but best practice is to make use of common aliases that other developers are familiar with now that you've typed the code it's time to run it if there's no output in the console when you run the code then this means that the connector has been successfully installed you can now communicate with your database you should now be familiar with the process of installing MySQL connector python to create a bridge between your Python and MySQL environments and you also know how to install and configure a python environment great work a python-based application needs to be able to
communicate with mySQL databases to perform database operations so this means that you need to create a connection between Python and MySQL for example little lemon need to create a connection between their website which relies on Python and their mySQL database so that customers can view data like menus and booking slots let's take a few minutes to find out how a connection is established between Python and MySQL at this stage you may be familiar with the MySQL python connector API software package this API facilitates the connection between Python and MySQL but first you need to import it into your Python program to import the connector API type the import command followed by its name mysql.connector then select run however typing mysql.connector each time you need to work with the API can be tedious so let's create an alias instead you can create an alias for mysql.connector called connector you can then make use of this shorthand to make your coding more efficient to create an alias you can type import mysql.connector again or use the existing code but this time include as connector within your statement the as keyword instructs python to recognize import mysql.connector using the connector alias in all future code now each time you need to use the MySQL python connector API you just type its alias which is connector but make sure that you have installed the connector API first otherwise you'll encounter a module not found error you've now successfully imported the connector API also called the package or software now you can begin to make use of its modules and functionality using the access operator or the dot for example you can help little lemon to make use of the connector to establish a connection between their python-based website and mySQL database first create a variable to access the connector from the connector module call it connection and assign it the value of connector.connect next pass the keyword arguments to the connect module within a pair of parentheses these arguments are the database username and password only an authorized user can access the database so in this instance the user is Mario and the password is Cuisine by default a connection is established between Python and the database installed locally on your machine this database is called localhost it can also be accessed through its IP of 127.0.0.1 you'll learn more about the localhost and other arguments at a later stage in this course for now you should be able to create a connection between Python and MySQL nice work you might be asking yourself what actions can I take once I've established a connection between a Python client and a mySQL database well some actions you can perform include creating databases and tables in this video you'll explore the process for creating databases and tables in a MySQL backend database using python little lemon uses the MySQL connector python client or API to communicate with their mySQL database they need to communicate with the mySQL database to create a database and tables in which they can store data let's see if you can help them out the connection between little lemon's existing mySQL database and their python application has already been established within a connection object so the first step is to create a cursor object that lets you communicate with the mySQL database you'll learn more about cursors in a later lesson for now you just need to know that a cursor object can be created by invoking the cursor module from the connection object a cursor
object points the python application to the location in the mySQL database where the required data is stored once you have a cursor object you can run queries to the mySQL database the cursor accesses an execute module that carries the SQL queries as python strings to the mySQL database you'll learn more about cursors at a later stage in this course now it's time to create a new database create a SQL statement as a python string and pass it to a variable called create database query the statement must create a database called little lemon you can use triple quotes to change your statement to a python string you could also use single quotes to create a string but the advantage of triple quotes is that you can use them to split a SQL query into multiple lines and it's much easier to read and manage a SQL query over multiple lines you now have a SQL query that you can run using the execute method from the cursor pass the variable create database query as an argument to the execute method on the cursor object execute this code to create the little lemon database next you need to set the database for use the first step is to create your SQL query as a python string and pass it to a variable called use database query the query lets you make use of the little lemon database through the use command and the name of the database then pass the variable as an argument to the execute method you now have a new database ready to use the next step is to create tables for the database create a variable called create menu item table for your SQL query then create your SQL query as a python string so that you can pass it to the variable create the query using the create table command to create a table called menu items The Columns that little lemon need in this table include item id name type and price the item id column must hold the ID for each item on the menu it's assigned an integer data type and rendered as Auto increment this means that a new ID is assigned to each item in numeric order the name and type columns display the name of each item in the menu and the type of Cuisine that it's associated with both the name and type columns are assigned a data type of varchar and character limits of 200 and 100 respectively the price column must display the price of each item in the menu it's assigned an integer data type the item id column is assigned the table's primary key then execute the create menu item table by invoking the execute module from the cursor object you can use the same method to create further tables within the database just update your SQL query as needed little lemon now have a new database and table in their mySQL database and you should now be familiar with the process for creating a database and table in a MySQL backend database using python while developing a python-based front-end application when accessing a MySQL backend database using a python front-end client your python application needs to know where the data required to complete your query is stored a cursor indicates where this data is positioned on the database over the next few minutes you'll explore the concept of cursors their key characteristics and learn how they work let's look at an example of cursors from Little lemons database little lemon need to retrieve a guest's booking details they can carry out this task with the SQL select query using python however the python front-end client needs to know where the data is stored within the backend mySQL database the mySQL database can use a cursor object to point to the 
records that little lemon needs this cursor helps the python client locate the required data this example offers a good understanding of what database engineers mean by the term cursors a cursor is a pointer that directs the python client to the results of your SQL query within the mySQL database the cursor indicates the location of the queried data by identifying specific rows or records you can use a cursor to read retrieve and move through individual records within the results of your query cursors have several key characteristics or features that are particularly useful to database engineers for example cursors are read only so you can't update the data that they are associated with the results can't be modified they're preserved by the cursor cursors are also non-scrollable they fetch records in order which helps to keep track of your position when processing individual records you can't skip or jump between records or fetch them in reverse order and cursors are also asensitive this means that they point to the original data within the mySQL database instead of a copy this is faster than using insensitive cursors which take longer to return results because they can only point to a copy of the data so how can you use a cursor in a mySQL database query the first step is to declare the cursor use the declare statement and assign your cursor a custom name followed by the cursor keyword then use the for keyword and a relevant SQL select statement to determine the purpose of the cursor next you need to open the cursor type the open command and call your cursor's name to establish the result set now the cursor is pointing to the set of results from your select statement the next step is to fetch or retrieve the results of your statement type the fetch command and the name of your cursor then type the into keyword followed by the name of the location the results need to be transferred to for example the results can be transferred to a local variable to be used in your python application the final step is to close the cursor type the close command followed by the cursor name closing a cursor is always good practice to release the memory associated with it as you learned earlier a cursor is non-scrollable it works through the result set in order so once it reaches the last result it no longer needs to remain open so close the cursor to free up the memory it uses now that you're familiar with how cursors work let's return to little lemon's query little lemon can use a cursor to retrieve the booking data for their guest first they declare a new cursor called guest booking details this is followed by a SQL select statement that targets the guest's data from the little lemon mySQL database they then open the cursor next they fetch the data and store it in a variable within their python client called booking data you should now be able to explain what cursors are in a mySQL database describe their key characteristics and explain how they work you'll explore cursors in much more detail as you progress through this course so this is a great start to your MySQL python journey at this stage of the course you've explored how cursors can be used to point to the location of the data you require in a mySQL database however it's also important to understand how python makes use of the cursor class the cursor class converts MySQL records to more python friendly code with python you can also change or alter the behavior of your cursors using cursor subclasses in this video you'll explore the cursor class and
at this stage of the course you've explored how cursors can be used to point to the location of the data you require in a MySQL database however it's also important to understand how python makes use of the cursor class the cursor class converts MySQL records to more python friendly code and with python you can also change the behavior of your cursors using cursor subclasses in this video you'll explore the cursor class and its subclasses and develop an understanding of how they work little lemon need to find out how much one of their guests spent on their meal they can query this data from their MySQL database in a more efficient manner by using cursor subclasses to create their python strings let's take a few moments to find out more starting with what database engineers mean by the term cursor classes cursor classes are a method of translating communications between python and a back-end MySQL database python sends SQL statements to a MySQL database in the form of string objects cursor classes take these python string objects and parse them into MySQL friendly commands and data types that can be understood by the database python then uses the cursor class when retrieving the results to parse them into python friendly code the cursor class contains several subclasses which can be used to parse string objects in different ways depending on your needs at this stage of the course you may have seen some examples of subclasses in action in the form of attributes and methods for example the column names cursor attribute returns the column names of a result set from a SQL statement row count returns an integer that represents the number of rows affected by a select insert or update statement or there's the execute method the execute method is the most common cursor function it binds the parameters of a python string argument to a MySQL query statement so that it can be executed on a MySQL database cursor subclasses inherit the properties of the parent cursor class and in turn the subclasses vary the parent class to improve the efficiency of the code let's explore some examples of cursor subclasses one example is the cursor raw subclass this subclass returns the results of your query without pre-processing them into more python friendly types so it uses less processing power and leaves you free to create custom conversions of the results however the disadvantage is that it requires more coding to process the targeted values little lemon can use the cursor raw subclass to create their own custom data type conversion this could save time if the initial conversion type is not the one required another example of a subclass is the MySQL cursor dictionary class this returns each row as a dictionary which helps with accessing values you can access them by using their column names directly little lemon can make use of this subclass to return a result set in the form of a dictionary this lets them use the actual column names of the database columns instead of working through a list of unnamed tuples and finally there's the buffered cursor class which takes a subset of data and stores it in buffered memory the advantage of this subclass is that your code doesn't need to repeatedly request each row from the server the disadvantage is that the data needs to be stored in local memory so you can only use this subclass to return small data sets little lemon can use a buffered cursor class to retrieve data this lets them make interleaving SQL requests interleaving of SQL requests is when you take part of a SQL query result and use it to make a subsequent request from a database let's take an example where little lemon need to find out how much a guest spent on a meal they can interleave a SQL request to carry out this task for their first query they create a MySQL query as a python string that retrieves the guest's booking ID once they have the result of the first query they then create a
second or subsequent query that uses part of the first result which is the booking ID to find the cost of the meal in other words little lemon can use part of their first query within their second query to make a subsequent request from the database a database can return multiple results from the first query if you use the first result within your subsequent query before all other results are returned then MySQL encounters an error called unread result found so it's best practice to finish your loop and let all results print from the first query before you make any subsequent queries however you can avoid this if you first buffer the results using a buffered cursor the buffered cursor fetches all rows up front while a standard cursor requires you to retrieve each result before sending another query now that you're familiar with the different cursor subclasses let's look at the syntax for instantiating them the syntax is very similar across all subclasses you just pass a keyword argument to the cursor that alters its behavior in a particular way to create a standard cursor you create the cursor as an object with no arguments and to instantiate an instance of a cursor subclass you add the subclass as a keyword argument that alters the behavior of the cursor for example you can pass buffered as the keyword to create a buffered cursor or pass raw as the keyword to create a raw cursor or dictionary to use a dictionary cursor little lemon can use a cursor subclass to request all data from the orders table in the MySQL database they can create two cursor instances one buffered and the other a standard implementation they pass their SQL select statement as an argument to both cursors once executed the cursors return all items from the orders table you should now understand the concept of cursor subclasses and how they can be used to alter the behavior of a cursor great work
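as a rough sketch instantiating the different subclasses only differs by the keyword argument you pass the connection object and the orders table columns below are assumed to exist from the earlier examples

    # a sketch of instantiating cursor subclasses via keyword arguments,
    # assuming the connection object created earlier and an Orders table
    buffered_cursor = connection.cursor(buffered=True)
    buffered_cursor.execute("SELECT * FROM Orders;")   # rows are fetched and buffered immediately
    print(buffered_cursor.rowcount)                    # so the row count is known straight away

    dictionary_cursor = connection.cursor(dictionary=True)
    dictionary_cursor.execute("SELECT BookingID, BillAmount FROM Orders;")
    for row in dictionary_cursor.fetchall():
        print(row["BillAmount"])                       # access columns by name rather than position

    raw_cursor = connection.cursor(raw=True)           # returns values without conversion to python types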
congratulations you've reached the end of the first module in this course you should now be familiar with the basics of how to interact with a MySQL database using python let's take a moment to recap some of the key skills you've gained in this module's lessons in the first lesson you received an introduction to the course and developed an understanding of how a connection is formed between python and a MySQL database using an application programming interface or API also known as a driver you now know that a front-end python application sends a connection request to the connector API which forwards it to the back-end MySQL database a cursor connection can then be established and once the connection is established data can be sent between python and MySQL you also learned about the different APIs that can be used to create this connection and that as a database engineer you'll rely on the MySQL Connector/Python API during this first lesson you also learned how to install and configure the software you need you learned how to download python make use of pip and import the packages that you require and how to use aliasing to create custom names that make it easier to work with the database you then explored a working example of how to connect to a MySQL database using a python client you saw how the API is imported into a python program and given an alias how to make use of its modules and functionality using the dot operator and how to pass arguments to the connector module like usernames and passwords you then ended this lesson with an overview of the process for creating tables in a database using python you learned that you need to create a cursor object that you can use to communicate with the MySQL database the cursor object provides an execute method that carries queries as python strings to the MySQL database and once your connection is set up you can then create databases and tables in MySQL using python in the final lesson in this module you learned about cursors you learned that cursors indicate where data is positioned in a MySQL database so that it can be accessed by a python client the cursor lets you read retrieve and move through individual records within the results of your query you then explored the key characteristics of cursors that are particularly useful for database engineers you learned that cursors are read only so the results they return can be read but not modified you discovered that cursors are non-scrollable they fetch records in order which helps you to keep track of your current position when processing individual records and you also found out that cursors are asensitive which means that they point to the original data within the MySQL database instead of a copy you then explored the code required to use a cursor in a MySQL database you can now use commands like declare to declare a cursor open to establish the result set fetch to retrieve results and close to close the cursor and you explored an example of this process from little lemon's database in the next part of this lesson you explored different cursor subclasses and learned how they can be used to change the behavior of your cursors you discovered that cursor classes are a method of translating communications between python and a MySQL database they take python string objects and parse them into MySQL friendly commands and data types that can be understood by the database you also explored some common examples of cursor subclasses which inherit the properties of their parent cursor class like the cursor raw subclass the cursor dictionary subclass and the buffered cursor class you also learned about interleaving SQL requests which involve taking part of one SQL query result to make a subsequent request you then explored the syntax for creating and using subclasses and an example of subclasses from the little lemon database you should now be familiar with the basics of how to interact with a MySQL database using python including establishing a MySQL python connection and working with cursors great work I look forward to guiding you through the next module in which you'll learn how to perform queries in MySQL using python as you're aware working with a database in MySQL involves crud operations working with databases through python also involves crud operations the key difference is that the SQL statements used to carry out your operations must be processed in python as strings in this video you'll discover how to execute create and read operations in a MySQL database using python little lemon is populating the restaurant's database with the records of upcoming bookings little lemon also need to retrieve or read this data from their database so they know which guests are attending for dinner you need to help little lemon create and read data in their MySQL database using python but first you need to understand how to create and read data in a MySQL database using
python let's get started with creating data so far you've learned how to create data in a database using an insert statement like an insert into command for example little lemon must use an insert into statement to add the names of customers and the time slots they've booked to the customer name and time slot columns in a table called bookings in their database however there are a few more steps to this approach when working with python create your SQL statement as normal then use quotation marks to convert it to a python string argument this string argument is passed to a MySQL database through a connector which parses it into a format that MySQL can understand so the first step in the process is to write your SQL statement then add a pair of quotation marks to convert it to a python string argument finally create a variable in which you can store the query as a python string next python sends the string to the database through the cursor and the statement is then executed on the database little lemon can use python to add booking data from guests to their MySQL database using the MySQL insert query variable the SQL data just needs to be passed in string format as you should know by now you can also retrieve or query data from a database using a select statement and a SQL select query is also the first step to complete when reading data using python just like with your insert query you create a variable in which the query can be stored as a python string and you write your select query making sure that it's written within quotation marks to convert it to a python string little lemon can use a read data query string object to retrieve all data from their bookings table they just need to write the select query as a python string then use the execute method from the cursor object now that you know how to create and read data using python let's see if you can help little lemon for the purpose of this demonstration a connection has already been established between python the API and the MySQL database through the MySQL Connector/Python API so your first task is to instantiate the cursor object from the connection using the cursor method the next step is to execute the MySQL insert query by passing it as an argument to the execute method once the query is executed commit the change to the database using the commit method of the connection object now each new instance of customer data is added to the bookings table in the database through the MySQL insert query
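a minimal sketch of those create steps might look like the following with the read query that the next part walks through included at the end the table and column names and the sample values are placeholders

    # a sketch of the create and read pattern, assuming the connection and
    # cursor from earlier and a Bookings table with these placeholder columns
    insert_query = """
    INSERT INTO Bookings (CustomerName, BookingSlot)
    VALUES ('Anna Iversen', '19:00:00'), ('Joakim Iversen', '19:30:00');
    """
    cursor.execute(insert_query)
    connection.commit()                 # commit so the new rows are stored in the database

    read_data_query = "SELECT * FROM Bookings;"
    cursor.execute(read_data_query)
    results = cursor.fetchall()         # a list of tuples, one tuple per row
    print(cursor.column_names)          # column names, in the same order as each tuple
    for row in results:
        print(row)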
for the next stage of development little lemon needs to read or retrieve the data in their database some sample customer data has been added to the database to test the read functionality you need to develop the functionality and retrieve this data as you learned earlier the first step is to create a SQL statement as a python string that you can pass to a variable in this case you need to create a SQL select statement that retrieves all data from the bookings table in the database and pass this statement as a string to the python variable called read data query now you need to pass your query to the execute method on the cursor just like you did when creating data once the query is executed you need to retrieve the results using the fetchall method on the cursor create a new variable called results then pass the results of your query to this variable through the cursor object the results variable is a list and each item is a tuple so the results variable is essentially a list of tuples and each tuple is a single extracted row from the bookings table in this instance the items in each tuple in the results variable are ordered in the same way as the columns in the bookings table they are ordered this way because you're reading all records from the table you can retrieve the column names by creating a new variable named columns and calling the column names attribute from the cursor object these values are then stored in the columns variable for later use don't forget that it's also good practice to close the cursor object and connection when you no longer need them you now know how to execute create and read operations in a back-end MySQL database using a front-end python application that's a great start I look forward to guiding you through more crud operations in python updating and deleting records in a MySQL database are routine crud operations and they are also common operations for a python-based application that interacts with the MySQL database over the next few minutes you'll learn how to execute update and delete operations in a back-end MySQL database using python little lemon needs to update and delete the records of the restaurant guests in the bookings table in their database using python you need to help little lemon with this task but first let's take a quick look at the bookings table this table holds the booking data of each guest this includes their booking ID table number first and last name booking slot and the ID of their waiter little lemon must update and delete data within this table using python let's begin with updating records as you know from previous courses data is updated in a database using update statements little lemon needs to update the booking data of a guest let's see if you can help them out you can start by writing a SQL update query that uses a where clause to let little lemon update a guest's table number the update query is created as a string and it's then passed to a string object called update bookings little lemon just needs to update the specific values of each query let's test this query by updating the booking information for guest 6 Diana Pinto who has been moved to table 10. use the execute method to run the query and update the records in the bookings table in the database then commit these changes to the database using the commit method to check that the data was updated correctly you can run the print cells of the Jupyter notebook the guest with the booking ID of six is now assigned to table 10.
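the update just described might look something like this sketch again the table and column names are placeholders

    # a sketch of the update for guest 6, who has moved to table 10
    update_query = """
    UPDATE Bookings
    SET TableNo = 10
    WHERE BookingID = 6;
    """
    cursor.execute(update_query)
    connection.commit()                 # commit so the change is saved in the database

    # re-run a quick read to confirm the new table number
    cursor.execute("SELECT BookingID, TableNo FROM Bookings WHERE BookingID = 6;")
    print(cursor.fetchall())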
the update query has worked there are also times when guests cancel their bookings when this happens little lemon needs to delete their records from the database to carry out this task write a SQL delete statement that deletes the record of a specific guest from The bookings table your statement can use a where Clause of booking ID the delete statement is created as a python string it's then assigned to a string object called delete booking ID you can test this query by deleting the booking information from Marcos Romero who was assigned the booking ID of four use the execute methods to run the query then commit the changes to the database now it's time to check that the query worked with another printout you can rerun the cells with the initial select query from The jupyter Notebook the printout changes if the database has been altered there's no data in the table for Marcos Romero this means that your delete statement was successful you can also amend the where clause in your delete statement to check for null values next to table and employee IDs if the value is null then you can set the bookings to be deleted you now know how to perform update and delete operations in a mySQL database using python great work when you query records in a database your query can often return hundreds thousands or even millions of results depending on how much data there is but you might only require a fraction of these results as you saw in earlier courses you can use filtering and sorting techniques to Target only the data you need from these results when querying a mySQL database using python you can apply these same techniques to ensure that your query targets specific data over the next few minutes you'll recap some basic filtering and sorting techniques and learn how to filter and sort MySQL data using python over at little lemon the restaurant is currently querying the bill records of each customer in its database they need the records of all bookings with a final bill greater than or equal to forty dollars they also need the bookings to appear in ascending order with respect to the total bill amount filtering and sorting techniques are a great way to carry out this task let's find out how these techniques work with python and then help little lemon one filtering technique that you should be familiar with is the use of the where Clause let's begin with a quick recap of the where clause as you should know by now you can use this Clause to filter and extract records that satisfy a specific condition for example little lemon can use the where Clause to create a SQL select statement that targets the booking ID and bill amount columns in their orders table the statement returns all values in the bill amount column equal to a value of 40. the where Clause helps to filter the records from the database but not to the extent that little lemon require as you saw earlier little lemon need the records of all bookings with the final bill greater than or equal to forty dollars you can use comparison operators to specify the exact data you require from a database for example little lemon can replace the equals operator in their statement with the greater than or equals to comparison operator this operator targets all records from a database where the bill amount is greater than or equal to a value of 40. 
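as a sketch the filtered query might look like this in python it also includes the order by sorting that is recapped next and the table and column names are placeholders

    # a sketch of filtering with a comparison operator, plus the ORDER BY
    # sorting recapped next; table and column names are placeholders
    mysql_query = """
    SELECT BookingID, BillAmount
    FROM Orders
    WHERE BillAmount >= 40
    ORDER BY BillAmount ASC;
    """
    cursor.execute(mysql_query)
    for booking_id, bill_amount in cursor.fetchall():
        print(booking_id, bill_amount)  # only bills of 40 or more, smallest first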
this narrows down the records even further returning much more specific results another technique is the order by clause as you should know the order by clause is an optional clause that can be added to a SQL select statement it helps to sort data in ascending or descending order so to filter their records even more efficiently little lemon can add the order by clause to the end of their statement they target the bill amount column and then type the ASC keyword once executed the query returns all bill amounts greater than or equal to forty dollars in ascending order now that you've recapped some examples of filtering and sorting techniques your next question might be how can I use these techniques in python to query a MySQL database well as you saw earlier little lemon need to query the records in their MySQL database using python let's help them create this query using the filtering and sorting techniques that you just explored to recap little lemon need the records of all bookings with a final bill greater than or equal to forty dollars and they need the bookings to appear in ascending order with respect to the total bill amount first write a SQL select query as a python string stored in a variable called MySQL query the query must target the booking ID and bill amount columns from the orders table use the where clause and a greater than or equal to comparison operator to target all records with a value greater than or equal to 40 then use the order by clause and the ASC keyword to order all records from the bill amount column in ascending order the next step is to establish a connection through the MySQL Connector/Python API between the front-end python application and the back-end MySQL database then get the cursor object from the connection using the cursor method and run the query using the execute method from the cursor object once the query is successfully executed you can fetch all the results that satisfy the conditions given in your query into a new variable called results just use the fetchall method on the cursor object to grab the query results all the retrieved data is in results as a python list of tuples you can display the list on your dashboard for little lemon little lemon now have the data they need and you should now be able to explain how to make use of filtering and sorting techniques to target records in a MySQL database using python great work at this stage of your database engineering journey you should have experience of using join operations to extract data from multiple tables there are also instances in which you'll need to perform a join operation on a MySQL database using python in this video you'll explore the process for executing join operations using python little lemon are adding a new menu feature to their website this feature lets customers view menu items their prices and the type of cuisine each meal is related to the data that the feature requires is in two different tables in their MySQL database menu and menu items little lemon need to use a join to combine the data from these tables to create their menu feature let's quickly recap the concept of a join a join is created using the SQL join clause it targets a common column between the two target tables these common columns are used to join the tables together and extract the required records some examples of joins used in MySQL include left join right join inner join and outer join all these joins can be used with python let's explore an example of the syntax and process for
creating an inner join first a SQL query is created using join as a python string that can be passed to a variable the query is then run using the execute method from the cursor object once the query is executed you can retrieve its results into another variable use the fetchall method on the cursor object that holds the results of the join the data is retrieved as a list of tuples for example little lemon can create a join query as a string object in Python that combines the menu items and menu tables this join is performed using an inner join on the item id column which is common to both tables when executed using python on a mySQL database this statement returns all required results now that you're familiar with the process for joining data from different tables using python let's see if you can help little lemon to create this query as you learned earlier little lemon need to use a join query to extract the data for the new menu feature on their website they begin by creating the join statement as a string object in Python store it in a variable called my join query the join query must Target the name type and price columns from the menu items table and it must also Target the cuisine column from the menu table an inner join is created on the item id column which is common to both tables let's assume that the MySQL connector python API is running a connection between the front and back end and that the cursor object is also available this means that you can now execute your query using the join statement run the query using the execute method from the cursor object when the query is executed you can fetch all the results from the cursor object in a new variable called results using the fetchall method the results are retrieved as a list of topples one for each row little lemon can view the order of each individual entry they just call the column names attribute on the cursor to return the names of the column in order at this stage of your database engineering Journey you should have experience of using joins to extract data from multiple tables and you should now be familiar with extracting data from a mySQL database using joins and python well done congratulations you've reached the end of the second module in this course you should now be familiar with how to perform queries in a mySQL database using python let's take a few moments to recap some of the key skills that you've gained in this module's lessons in the first lesson you learned how to perform create read update and delete or crud operations in a mySQL database using python you started this lesson with a recap of how to create data in a mySQL database using an insert statement you then learned how data is inserted into a mySQL database using python first it's created as a python string argument this argument is then passed to a mySQL database using a connector the connector parses it into a format that MySQL can understand you then explored an example of the syntax used to render a mySQL database query as a python string argument and you explored an example of creating and reading data using python from the little lemon database next you recapped how to delete and update records in a mySQL database using update and delete operations you learned how to create these queries as python string arguments and explored some examples from the little lemon database you then undertook a series of lab exercises in which you received the opportunity to perform crud operations in your own mySQL database using python and you tested your 
knowledge of these Topics by completing several quizzes in the second lesson of this module you learned how to perform Advanced queries in a mySQL database using python you began the lesson by learning how to filter and sort records in MySQL using python you recapped the basic filtering and sorting techniques that you learned in other courses these include the use of the where Clause to satisfy one or more specific conditions utilizing the order by Clause to sort data in ascending or descending order and the inclusion of comparison operators to specify the exact data required you then discovered how these techniques are used in Python by exploring several examples from the little lemon database the next part of this lesson focused on joining data from different tables in a mySQL database using python you recapped the basics of the join clause and how it can be used to Target a common column between two target tables you then learned how join can be used with python to extract data from a mySQL database a SQL query is created using join as a python string the string is executed using the execute module on the cursor object the results are then retrieved from the mySQL database using another variable that satisfies the query's conditions and the fetch all method you also explored an example of this process from the little lemon database and just like the first lesson you also completed a lab exercise in this lab exercise you performed a join operation using python to extract data from a mySQL database you then tested your knowledge of the process in a quiz you should now be equipped with the skills and knowledge required to perform queries in a mySQL database using python well done I look forward to guiding you through the next module in which you'll explore the topic of advanced database clients there are many different types of tasks that you need to perform when working with a mySQL database functions are a great way to store the syntax that you need for these tasks as reusable blocks of code these code blocks mean that you don't have to retype your code each time you need to use it over the next few minutes you'll recap the basics of functions and learn how they're used with python little lemon's mySQL database holds a lot of data on different aspects of the company like customer behavior and sales revenue little lemon can use functions to perform specific operations on their database and return results they can then use the data from these results to improve the restaurant's performance so let's quickly recap the basics of MySQL functions as you probably already know a MySQL function is a piece of code that performs a specific operation and returns a result in other words it's a task that combines a set of instructions and produces results in the form of an output MySQL functions provide a lot of advantages for database engineers some MySQL functions accept parameters or arguments While others don't they're great for manipulating data in a database you can also create custom functions that combine several tasks in a block of code you can then store these functions within your database and invoke them when needed and as you've just discovered MySQL functions are reusable so they can be used to complete repeat tasks at this stage of your database engineering Journey you've probably made use of many different types of functions the most common built-in functions available in MySQL include string functions numeric functions and date and time functions there's also comparison and control 
flow functions let's take a moment to recap the basics of these different types of functions and look at how little lemon can make use of them let's begin with string functions these functions are used to manipulate string values like adding strings together or extracting a segment of a string an example of a string function is concat which combines data from two separate fields into one string little lemon can use a concat query on their database records to extract information on each customer like how much money they spent you can also use numeric functions in MySQL numeric functions include aggregate and math functions these functions are used to carry out common tasks on numeric data sets for example little lemon can use the average or AVG function to determine the average dollar amount that each customer spends with the business another example of MySQL functions is date and time functions date and time functions can be used to query a MySQL database to extract date and time values in a range of different formats depending on the query over at little lemon they often extract date and time data to analyze the behavior of their customers they can use this data to find out how long guests spend at the restaurant and which days are the busiest next up are comparison functions you can use these functions to compare values within a database and they can be used with many different types of values like numerics strings and characters little lemon make use of comparison functions to identify the best and lowest selling items on their menu by using the greatest and least comparison functions on their sales data and finally there are control flow functions control flow functions are used to evaluate conditions and to determine the execution path or flow of a query for example as you learned previously the case function runs through a set of conditions within a case block it then returns a value once a condition is met or a null value if no conditions are met little lemon often rely on control flow functions to determine which items on their menu are loss making and which items have turned a profit now that you've recapped the different functions available in MySQL let's find out how they work with python let's take the example of numeric functions and see how little lemon calculate the mean or average bill for each customer they can create a select query that uses the average function to determine the average bill amount this query is passed to the database to be executed and once the query is executed little lemon can access the results and see the average dollar amount each client spent with the business this is just a basic recap of MySQL functions and a short demonstration of how they work in python the rest of this lesson will explore methods for accessing MySQL functions using python in more detail I'm looking forward to exploring this topic with you
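before moving on here's a short sketch of that average bill query as it might look in python the table and column names are placeholders

    # a sketch of calling a built-in MySQL function (AVG) from python,
    # assuming the connection and cursor from earlier
    cursor.execute("SELECT AVG(BillAmount) FROM Orders;")
    average_bill = cursor.fetchone()[0]          # a single row with a single value
    print("average bill amount:", average_bill)

    # other built-in functions follow the same pattern, for example CONCAT
    cursor.execute("SELECT CONCAT(GuestFirstName, ' ', GuestLastName) FROM Bookings;")
    print(cursor.fetchall())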
as a database engineer you can use date and time functions to extract time and date values from a database and you can perform similar tasks using the functions available in python's native datetime library in this video you'll learn about the different datetime functions available in python and how you can make use of them little lemon has received several bookings from guests for tonight however the restaurant has encountered a scheduling conflict so they need to push each booking slot forward by one hour python's datetime functions are a great way to solve this problem find out how datetime functions work then help little lemon to update their booking slots let's begin with an overview of datetime datetime is a python class with several built-in functions that can be used to format and change time and date variables it's native to python so you can import it without requiring pip let's review the functions that python's datetime library offers the datetime now function is used to retrieve today's date and time you can also use the date function to retrieve just the date or the time function to call the current time and the timedelta function calculates the difference between two values now let's look at the syntax for implementing datetime to import the datetime python class use the import keyword followed by the library name then use the as keyword to create an alias of dt you can now use this alias to call the library instead of typing datetime every time you need to use a function you now have a datetime object created within your python environment so let's find out how to make use of its functionality using the datetime now function begin by creating a variable called current time next type the dt alias as the module name then use it to call the datetime now function finally instruct python to print the current date and time values execute the code to print the time and date of your location python returns the date and then the time the date is displayed in year month day format and the time is displayed in hours minutes seconds format but what if you just need to know the current time or maybe you just want today's date you can use the same code again but this time give python two separate print instructions the first instruction tells python to print the current date and the second instruction tells python to print the current time when the code is executed python displays each value separately let's look at a slightly more complex function timedelta when making plans it can be useful to project into the future for example what date is this same day next week you can answer questions like this using the timedelta function to calculate the difference between two values and return the result in a python friendly format so to find the date in seven days' time you can create a new variable called week type the dt module and access the timedelta function as an object instance then pass through seven days as an argument finally instruct python to print the result of the variable when executed python returns the value of the date one week from now
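those calls look roughly like this in code assuming the dt alias described above

    # a sketch of the datetime calls just described
    import datetime as dt

    current_time = dt.datetime.now()     # today's date and time
    print(current_time)                  # year-month-day then hours:minutes:seconds
    print(current_time.date())           # just the date
    print(current_time.time())           # just the time

    week = dt.timedelta(days=7)          # a difference of seven days
    print(current_time + week)           # the same moment one week from now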
now that you know how datetime works let's see if you can help little lemon as you learned earlier little lemon have encountered a scheduling conflict and to resolve it they need to push each booking slot forward by one hour you can carry out this task by instructing python to retrieve the data from the bookings table and then adding one hour to each booking let's assume that little lemon have already passed through their login details created a new cursor instance and pointed the cursor at their database your first task is now to import the datetime library so that you can work with datetime use the import keyword and import the library using the alias dt this alias is used for greater efficiency next write a SQL select statement to return all data from the bookings table and pass the statement to the execute method from the cursor as a string argument before entering the loop instruct python to print the column names from the bookings table so that you can view each item in the row you can assign these values to variables and create a new variable called new booking slot that holds the values for the updated time slots to add an hour to each time slot pass an argument of one hour to the timedelta function and then add the result to the booking slot value finally instruct python to print the values for the new booking slots in the form of a text string this text string details the values of each booking ID along with its respective old and new booking slots loop through the results from the query and extract the rows from the booking ID and booking slot columns as the results show booking ID is the first value and booking slot is the fourth value you should now be familiar with the different datetime functions available in python and how you can make use of them working with datetime functions can be difficult but you've made a great start towards mastering this topic
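the loop just walked through might look roughly like this sketch it assumes the connection and cursor from earlier and that the booking slot column is a TIME value which Connector/Python returns as a python timedelta and the column positions follow the layout described above

    # a sketch of pushing every booking slot forward by one hour
    import datetime as dt

    cursor.execute("SELECT * FROM Bookings;")
    print(cursor.column_names)                   # to see the position of each column
    results = cursor.fetchall()

    one_hour = dt.timedelta(hours=1)
    for row in results:
        booking_id = row[0]                      # booking id is the first value here
        booking_slot = row[3]                    # booking slot is the fourth value here
        new_booking_slot = booking_slot + one_hour
        print("booking", booking_id, "moves from", booking_slot, "to", new_booking_slot)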
stored procedures provide database engineers with a useful way of storing and recalling code when needed over the next few minutes you'll receive a quick recap of MySQL stored procedures and how they work let's begin with a look at how stored procedures can help little lemon little lemon checks the online bookings in its database every morning for a list of customers attending the restaurant that day they rewrite the same code each morning to extract this data but they could instead invoke a stored procedure that extracts the required data without the need to rewrite large blocks of code every morning before you find out how they can do this let's recap the basics of stored procedures a stored procedure is a block of code or one or more pre-prepared queries that can be stored in your database you can then invoke or call the stored procedure as required as you might already know this is similar to how a function works but don't forget the key difference between the two concepts functions can only have input parameters but a stored procedure can have both input and output parameters there are three main steps to complete when using stored procedures you should already be familiar with these first you
create a stored procedure then you call the stored procedure and finally you can drop or delete the stored procedure let's begin with a recap of the benefits of stored procedures your code is more consistent the same code block is used each time it's invoked so you know exactly what to expect from it your code is reusable you can use it as many times as you need across all your database tasks and your code is easier to use and maintain it's stored as one block that can be invoked edited or dropped as required next let's quickly recap the syntax for creating a stored procedure to create a stored procedure begin with the create procedure command then write the procedure name and a pair of parentheses to hold the parameters make sure that you include all required parameters within the parentheses finally write the rest of the procedure logic then when you need to invoke the procedure you just type the call command followed by the procedure name don't forget to include the parentheses and if you need to remove or drop a stored procedure from your database then just type the drop procedure command followed by the procedure name in this instance you don't need to include any parentheses little lemon can use this code to create a stored procedure that extracts the details of the customers due to visit the restaurant they begin with the create procedure command then they name the procedure daily_customer_details and add the parameters finally they write the logic of the procedure now each morning they just need to type the call command followed by daily_customer_details and the stored procedure extracts the required customer data from the database so now that you've recapped the concept of stored procedures you might be asking yourself how do stored procedures relate to python stored procedures increase the performance of python applications and reduce traffic between the application and the MySQL database the application only needs to send the name of the stored procedure and its parameters to the database instead of a large block of SQL statements you're now familiar with the benefits of stored procedures and how to create invoke and drop them in a database you're now ready to learn how to perform these actions using python great work
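as a preview of how those three actions look when driven from python here's a minimal sketch the procedure name body and column names are invented for illustration

    # a sketch of creating, calling and dropping a stored procedure from python
    create_procedure = """
    CREATE PROCEDURE daily_customer_details()
    BEGIN
        SELECT CustomerName, BookingSlot
        FROM Bookings
        WHERE BookingDate = CURDATE();
    END
    """
    cursor.execute(create_procedure)                 # store the procedure in the database

    cursor.callproc("daily_customer_details")        # invoke it whenever it's needed
    results = next(cursor.stored_results()).fetchall()
    print(results)

    cursor.execute("DROP PROCEDURE IF EXISTS daily_customer_details;")   # remove it again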
you should be familiar with creating stored procedures in a MySQL database but how can you make use of stored procedures using python in this video you'll learn how to access stored procedures using python little lemon restaurants have a new promotional campaign where they give vouchers to all guests who spend fifty dollars or more on a meal to find out which guests qualify for vouchers the team need to retrieve the guest names booking IDs and bill amount data from the bookings and orders tables in the database you can complete this task using a join operation but performing a separate join operation for each guest is time consuming a better solution for little lemon is to create a stored procedure they can call using python let's see if you can help little lemon to build a stored procedure using python the first step is to create the stored procedure as a python string stored in a variable called stored procedure query next type the create procedure command and then the name of your procedure which is get customers and bills then create a begin end block and type the logic of your stored procedure within this block using a SQL select statement the statement concatenates the required data from both tables with the use of an inner join for all customers who spend fifty dollars or more don't worry about setting the delimiter before and after the procedure the cursor executes the entire python string as one MySQL statement so unlike with a traditional MySQL query there's no need for a delimiter in python the stored procedure is passed as a single python string and it can also include multiple SQL statements when executing MySQL statements through an API the required closing semicolon is automatically appended to the end of the string when using a stored procedure you first need to store the block of code on the MySQL database so that you can invoke it when required the cursor carries the stored procedure as a string through its execute method and stores it in the MySQL database so you are now ready to execute the stored procedure statement and store it on the MySQL database using python call the execute method from the cursor object and pass the procedure to it as an argument if executed successfully the procedure is stored in the MySQL database now you can call this procedure first you need to call or invoke the procedure you can do this using the cursor object's callproc method pass the name of the procedure to this method as an argument next you need to retrieve the procedure's results you can make use of python's built-in next function to complete this task pass the cursor's stored results method to the next function and store the outcome in a python variable called results the next function is used to return the next item from the stored results iterator the entire result set from the MySQL server is then buffered in the results variable now you can invoke the fetchall method on the results variable and save the output as a data set once the code has been run successfully the data set is a python list of tuples and each tuple is an individual record or row from the stored procedure you can index the data set or run a for loop to print all records the results of the procedure show that there is one guest who has spent fifty dollars or more with little lemon and qualifies for vouchers you now know how to access stored procedures using python great work a MySQL database needs to provide access to many users at once and each connection must be secure and stable so that only authenticated users can gain access and there's no risk of their connections failing but managing secure and stable connections requires many resource heavy actions so how can you perform these actions efficiently the answer lies in database connection pooling in this video you'll explore the concept of database connection pooling investigate how it works and learn about its advantages little lemon's website has a python-based application that lets guests book time slots with the restaurant the application sends the guests' bookings as SQL statements to the little lemon database however the website needs to provide each guest with a secure and stable connection to the little lemon database so that they can input their booking data without any risk of connection failure little lemon can manage these connections with the use of database connection pooling let's find out more database connection pooling involves creating and managing a pool of connections to run faster more efficient and optimized connections between clients and a MySQL database to gain a better understanding of how connection pooling works let's explore a visual example visualize a pool of four open connections to
a mySQL database two of the connections are currently being used by clients to access the database the other two connections are free and ready to use by any new user a new user can arrive and request access to the database the connection pool then assigns this new user one of the open connections shortly after the new user is assigned their connection the first two users complete their tasks end their sessions and leave the pool even though the users have left the pool the connections remain open technically speaking the connections aren't closed they're just placed back in the connection pool where they remain available for new users there are now three free and ready-to-use connections that can be assigned to new users but what if all four connections are in use and a fifth user wants to join there are only four connections available so how can you serve the fifth user's access request to avoid this situation the best approach is to create multiple pools with a specific number of connections assigned to each pool this means that different users can be assigned to different pools so there's always an available Connection in at least one pool for new users you can also program the system to create a new connection within a pool if an appropriate connection isn't available in other pools little lemon can use this approach to manage connections to their mySQL database they can create a series of connection pools that their guests can use to record their bookings in the database there are a few key advantages to connection pooling connection pooling makes efficient use of available resources it reduces the time and effort required to establish connections connection pools simplify programming models and they increase the performance of the Python application when connecting to the mySQL database you should now be familiar with the concept of database connection pooling be able to explain how it works and to describe its advantages there's lots more to learn about connection pooling in this lesson but you're off to a great start database connection pools are a great way of providing secure authenticated connections to a database for multiple users and the MySQL connector python API provides a useful method for developing connection pools in the form of the MySQL connection pool module in this video you'll learn how to create a connection pool for a database using the MySQL connection pool module little lemon need to provide secure authenticated connections to their database for their team of web Developers they've decided that the most efficient way of providing these connections is through connection pooling let's take a few minutes to find out how to build a connection pool then use your new skills to help little lemon build a secure connection pool a database connection pool is managed and maintained using a module called MySQL connection pool the module is held in a directory called mysql.connector.pooling you can import the module into your working environment and access its functionality using the import syntax and the from keyword is used to identify the module's location in other words the code is instructing python to access the subdirectory and return the MySQL connection pool Library the MySQL connection pool module has many useful functions and attributes that you can make use of let's look at a few of these the pool main class attribute is used to identify the name of the pool to be used in the connection if you don't specify a pool then one is automatically generated instead you 
can create as many pools as you need the pool size attribute States the number of connections that have been created for a pool the default number of connections is 5 but you can create up to 32 connections for a single pool and finally there is the connection ID attribute this is a unique ID assigned to each Connection in the pool there are also many class methods available in the MySQL connection pool module you can use the get connection method to request a connection the pool then assigns a free connection if one is available if no connection is available then you'll receive a pool exhausted error instead the is connected method is a Boolean function that returns either a true or false value depending on whether a connection has been made this is a useful way of avoiding errors finally the close method informs the connection pool that a user has completed their session the user no longer needs the connection so the connection can be placed back into the pool as an available connection for any new users who need it now that you're familiar with the module connection pool let's see if you can help little lemon as you learned earlier little lemon want to create a connection pool to provide users with efficient access to their database before you can create a connection pool you first need to import the MySQL connection pool Library using the MySQL connector python API once you've imported the connector the next step is to make a connection to the database call the pool little lemon pool then use the pool size attribute to specify four connections use the Local Host as your host and place the pool on the little lemon database then type the username and password all this code is passed as arguments or parameters to the MySQL connection pool module and assigned to the pool next you need to create a python list of users for the connection pool you can call this list users populate the list with members of little lemon's guest list you then need to create a SQL select statement this statement must accept an integer the integer must correspond with different ID requests accessing different data points as database users now you need to use the for Loop you can use it with the range function the range function is used with the pool size attribute this means that no matter how big the pool is the loop always runs to the end the next step is to set up the applications connection write a statement that checks if a connection was successfully made within the pool this statement also avoids any errors appearing in your code then write a statement that instantiates a new cursor from the existing active Pool connections this action must occur for each new live connection that's successfully made with the pool next create a print statement that displays the following information on screen the user requesting information from the database the unique connection assigned to this user and the unique booking ID that they're requesting the print statement is formatted using curly braces these symbols take the specified variables in the order that they are specified from the format function so this means that the information is printed in the order you specify the next line of your code is a parameterized select statement this statement uses the initial SQL select statement from earlier and combines it with the incremented I to assign a different booking ID to each user then you can use the fetch all methods to return all information that corresponds with this SQL select statement and print the information on 
you should now know how to create a connection pool for a database using the functions and attributes available with the MySQL connection pool module congratulations you've reached the end of the third module in this course you should now be familiar with how to make use of functions and stored procedures in a MySQL database using python along with creating and managing database connection pools let's take a few moments to recap some of the key skills that you've gained in this module's lessons in the first lesson of this module you learned how to make use of MySQL functions using python you began with a quick review of MySQL functions by reminding yourself of the advantages that functions offer their main advantage is that they eliminate the need to carry out repetitive tasks you also familiarized yourself with many of the common MySQL functions that you explored in earlier courses these included string numeric and date and time functions as well as comparison and control flow functions you recapped how string functions are used to manipulate string values you learned how to make use of numeric functions to perform tasks on numeric data sets you saw how you can extract date and time values from a database using date and time functions you received a reminder of how to compare values within a database using comparison functions and you reviewed the process steps for using control flow functions to evaluate conditions and determine the execution path or flow of a query you then learned how to access MySQL functions using python by exploring examples from the Little Lemon database and you also demonstrated your new skills and knowledge in the lab and quiz activities in the second lesson of this module you recapped the basics of stored procedures you learned once again that a stored procedure is a block of code that can be stored in your database and invoked as required stored procedures offer several advantages like consistent code reusable code and the ability to maintain code as one single block you then recapped the syntax used to create stored procedures including the create procedure call and drop procedure commands and the begin end block and you examined the advantages of using stored procedures with python you saw that they increase the performance of python and reduce traffic between Python and MySQL in the next item in this lesson you learned how to access stored procedures using python you explored some examples of stored procedures from Little Lemon like the inner join operation you saw how they made use of these stored procedures in Python to query a MySQL database you also learned how to call or invoke a procedure in Python using the callproc method retrieving results using Python's next function and you saw an example of the use of the fetchall method once a stored procedure has been successfully run using python the data is returned as a list of tuples you can index the data set or run a for loop to print all records you then demonstrated your ability to use stored procedures in Python in a lab environment and you tested your knowledge of python and stored procedures in a quiz
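As a quick code reminder of that stored procedure workflow, here is a small sketch. The procedure name GetBookingSummary and the connection details are assumptions made for illustration, and the sketch assumes the procedure returns a single result set.

```python
# Sketch of invoking a stored procedure with Connector/Python (names are assumptions).
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="admin1",
                               password="password", database="little_lemon_db")
cursor = conn.cursor()

cursor.callproc("GetBookingSummary")        # call or invoke the procedure
results = next(cursor.stored_results())     # Python's next function retrieves the result set
rows = results.fetchall()                   # the data comes back as a list of tuples

for row in rows:                            # index the data set or loop over it to print all records
    print(row)

cursor.close()
conn.close()
```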
in the third and final lesson of this module you learned about connection pools during this review you learned that connection pools are resource efficient provide faster connectivity simplify programming models and increase the performance of python applications you then learned about python MySQL connection pools you learned that a database connection pool is managed and maintained using a MySQL connection pool module this module is imported into your working environment so that you can access its functionality the module also offers many useful functions and attributes that you can make use of this includes pool name pool size and connection ID and there are also several class methods available like get connection is connected and close these methods are useful for managing and maintaining the module you then explored an example of how to create a MySQL connection pool using the MySQL connector python API you also explored the concept of database connection pooling you learned that database connection pooling involves creating and managing a pool of connections to run faster more efficient and optimized connections between clients and a MySQL database connections are managed between clients and users can drop in and out of sessions by using active connections and you can create multiple pools with specific numbers of connections so that there's always an available connection for every user you then completed a lab in which you worked with connection pools and you tested your knowledge of connection pools in a quiz you should now be equipped with the skills and knowledge required to work with MySQL functions and stored procedures using Python and you should also now be able to create and manage database connection pools well done I look forward to guiding you through the next module in which you'll work with a database client in this course you learned about database clients let's take a few moments to recap the key lessons that you encountered in this course you started the module by learning about the MySQL connector python API and how to make use of the pip package you then learned how to install a front-end python client and connect it to a back-end MySQL database you then explored how to establish communication between Python and MySQL to perform crud operations once you established a connection you then accessed a cursor object once you had access to the cursor object you created a MySQL database and table using python you then committed changes in a MySQL database using python in the third and final lesson of module 1 you explored the concept of cursors in a MySQL database you learned how cursors work in Python and MySQL you also reviewed the key characteristics of cursors and learned that they're read-only non-scrollable and asensitive you then learned that the cursor class is used to translate communication between python and a MySQL database and you also learned how to identify different cursor classes and you also reviewed the basics of interleaving requests the second module focused on performing create read update and delete or crud operations in a MySQL database using python you began the module by learning how to create and read records in a database you reviewed the steps for this process and discovered how python communicates with the database to carry out these actions you then learned how to perform MySQL update and delete operations using python and you learned how to commit the changes to the database you completed this first lesson by performing a series of lab exercises in which you demonstrated your ability to carry out crud operations in a MySQL database using python
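To tie that recap together, here is a minimal sketch of the connect, cursor, crud and commit steps. The database, table and sample values are assumptions chosen purely for illustration.

```python
# Minimal CRUD sketch with Connector/Python (database, table and values are assumptions).
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="admin1", password="password")
cursor = conn.cursor()

cursor.execute("CREATE DATABASE IF NOT EXISTS little_lemon_db")
cursor.execute("USE little_lemon_db")
cursor.execute("""CREATE TABLE IF NOT EXISTS Bookings (
                      BookingID INT AUTO_INCREMENT PRIMARY KEY,
                      GuestName VARCHAR(100) NOT NULL,
                      TableNo INT)""")

# create and read
cursor.execute("INSERT INTO Bookings (GuestName, TableNo) VALUES (%s, %s)", ("Anna", 12))
conn.commit()                                   # commit the change to the database
cursor.execute("SELECT * FROM Bookings")
print(cursor.fetchall())

# update and delete
cursor.execute("UPDATE Bookings SET TableNo = %s WHERE GuestName = %s", (7, "Anna"))
cursor.execute("DELETE FROM Bookings WHERE GuestName = %s", ("Anna",))
conn.commit()

cursor.close()
conn.close()
```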
in the second lesson of module 2 you learned how to perform advanced queries in a MySQL database using python the first of these queries involved filtering and sorting data in a MySQL database using python you recapped the basics of MySQL filtering and sorting techniques from earlier courses and learned how these same techniques are applied to python next you learned how to perform a range of different join operations to join data from different tables in a MySQL database using python you then received an opportunity to test your ability to perform advanced queries in a MySQL database using python through a series of labs module 3 focused on advanced database clients the first lesson in this module began with an overview of how to use MySQL functions with python you began by learning how to identify the importance of MySQL functions and you reviewed the different types of functions available in MySQL you also learned how MySQL makes use of functions using python once you've finished recapping the basics of MySQL functions you then learned how to implement or access MySQL functions using python you also explored date time functions in Python and learned how to make use of these functions to update a MySQL database using python you then demonstrated your ability to make use of these functions in lab exercises in the second lesson in this third module you explored how to use MySQL stored procedures with python you recapped the basics of stored procedures and learned how they differ from functions and why they're used with python you then learned how to access stored procedures through python with the use of the callproc method and you also reviewed the use of delimiters the third and final lesson in this module focused on connection pools you began by developing an understanding of the concept of database connection pooling you learned how to explain database connection pooling and you learned how to identify the advantages of database connection pooling you then reviewed the steps for creating a connection pool for a database including the process for implementing the MySQL connection pool module you've reached the end of this course recap it's now time to try out what you've learned in the graded assessment good luck you've reached the end of this course you've worked hard to get here and developed a lot of new skills along the way you're making great progress on your MySQL journey and you should now understand database clients you were able to demonstrate some of this learning along with your practical MySQL skill set in the lab project following your completion of this lab project you should now be able to interact with a MySQL database using python perform queries in MySQL using Python and make use of MySQL functions procedures and connection pools the graded assessment then further tested your knowledge of these skills however there's still more for you to learn so if you found this course helpful and want to discover more then why not register for the next one you'll continue to develop your skill set during each of the database engineer courses in the final lab you will apply everything you've learned to create your own fully functional database system whether you're just starting out as a technical professional a student or a business user the course end projects prove your knowledge of the value and capabilities of database systems the lab consolidates your abilities with the practical application of your skills but the lab also has another important benefit it means that you'll have a fully operational database that you can reference within
your portfolio this serves to demonstrate your skills to potential employers and not only does it show employers that you are self-driven and innovative but it also speaks volumes about you as an individual as well as your newly obtained knowledge and once you've completed all the courses in this specialization you'll receive a certificate in database engineering the certificate can also be used as a progression to other role-based certificates depending on your goals you may choose to go deep with advanced role-based certificates or take other fundamental courses once you've earned this certificate thank you it's been a pleasure to embark on this journey of discovery with you best of luck in the future welcome to the next course in database engineering the focus of this course is on advanced data modeling let's take a few moments to review some of the new skills that you'll develop in these modules you'll begin the course with an introduction to the topic of advanced database modeling you'll learn that a data model provides a visual representation of different data elements and shows how they relate to one another you'll also discover how data modeling is used at meta you'll then explore database modeling in more detail by learning about different kinds of data models you'll discover that there are three levels of database models there's the conceptual data model the logical data model and the physical data model you'll also review different kinds of models that you can use to design your database next you'll learn how to structure your tables to remove anomalies using database normalization these include the insertion anomaly the update anomaly and the deletion anomaly you'll also explore an example of a data model and design a database model in an exercise in the next lesson of this module you'll receive an introduction to MySQL workbench you'll learn that MySQL workbench is a unified visual tool for database modeling and management it offers a range of useful features for creating editing and managing databases you'll then discover how MySQL workbench is used to build a data model diagram and you'll then learn how MySQL workbench's forward engineer feature is used to turn this model into a database schema in MySQL you'll also learn how you can use MySQL workbench to reverse engineer a model this means you can create a data model or ER diagram from an existing database this is essentially the opposite of the forward engineer feature and you can print the model share it or apply changes and push it to the database using the forward engineering method you'll also complete this lesson with a quiz item and an exercise in which you'll design your own database model in MySQL workbench in the next module you'll explore the topic of data warehousing in this module you'll learn about the architecture of a data warehouse and build a dimensional data model you'll begin with an overview of the concept of data warehousing you'll learn that a data warehouse is a centralized data repository that loads integrates stores and processes large amounts of data from multiple sources users can then query this data to perform data analysis you'll then discover that a data warehouse is defined by four key characteristics it's subject oriented it's integrated a data warehouse is also non-volatile and finally data warehouses are time variant you'll also review the different forms of data that a data warehouse handles including structured data semi-structured data and unstructured data you will then explore the architecture of
a data warehouse and learn that it includes the following components data sources the data staging area which includes the ETL process the data warehouse itself and data marts once the data has been loaded from the data sources it is then integrated and stored in the data warehouse it's then organized in data marts where users can perform data analysis and present their findings these components control the flow of data from different sources they also process and integrate this data so that the user can perform data analysis you'll also explore a case study of a real world data project in the second lesson of this module you'll explore dimensional data modeling the lesson begins with an overview of the fundamentals of dimensional data modeling you'll learn that it's based on dimensions and facts and it's designed using star and snowflake schemas you'll then explore some examples of dimensional data modeling in practice and learn that there are four key steps to follow when creating a model choose the business process to explore choose the grain or level of granularity choose the dimensions and choose the facts once you've made all these choices you can then create the schema finally you'll undertake an exercise in which you'll create your own dimensional model in the third module of this course you'll explore data analytics in the context of dimensions and measures and you'll learn how to perform visual data analysis using an advanced analytics tool you'll start with an overview of data analytics you'll recap the basics of data analytics and the key types of data analytics that you've made use of at other points in your database engineering journey you'll also learn that there are two generic types of data you'll deal with quantitative data which refers to numerical data and qualitative data which refers to non-numerical data when you've determined what kind of data you need you can process and analyze it using four measurement scales the nominal scale the ordinal scale the interval scale and the ratio scale next you'll learn about the topics of data mining and machine learning you'll learn that data mining is the process of detecting patterns in data while machine learning is the process of teaching a computer how to learn machine learning makes use of data mining models to process data like classification analysis association rule analysis clustering analysis and regression analysis you'll then learn about data visualization you'll learn that when visualizing your data you must consider your audience and the information they're looking for you'll then need to choose an appropriate chart that best communicates this information finally you'll conclude this lesson with a discussion around what kind of data analytics reports you make use of in the final lesson of this module you'll review the topic of data analytics and learn how to make use of data analytics tools like Tableau as part of your introduction to Tableau you'll learn what its key features are and how they help you perform data analytics you'll then learn how to use Tableau to analyze data you'll learn how to download launch and navigate Tableau load and prepare data for analysis filter and visualize data and create an interactive dashboard finally you'll complete an exercise in which you'll perform data analysis in Tableau now that you've reached the end of this course introduction it's time to get started on your advanced data modeling journey good luck the way the industry has developed with the internet web3 the metaverse you
are going to be working on systems that connect to a database there are very few jobs that won't require this so this is one of the most important fundamental skills that you can learn for a successful career [Music] thank you [Music] hi my name is Moxie I use data and pronouns I am a software engineer at meta in the Menlo Park office there's not a one-size-fits-all process for data modeling the process for developing a new data model for a new product is very different than retrofitting and old products to add a new feature or to comply with the new regulation and so you will have to adjust your process accordingly to the needs that you're encountering and that's why these skills are so important to learn data modeling is conducted by different people depending on the context we are changing or creating a data model for if we are building a new product and we already have the business case this will be led by a lead engineer uh generally discussing with the high level needs of the product are if we are changing data model for user privacy or for a specific feature the person leading that discussion may be the individual engineer or could be someone involved in the regulatory processes one thing that's very important about meta is that every engineer is empowered to add to the discussion and change things and so even though it may be a lead engineer that's designing the first data model and bringing it Forward every engineer is expected to be able to talk about it and bring forward their ideas to improve our data models for our user needs and our products one of the challenges at meta around data modeling is making sure that we are using our user data properly and not when we are accessing new data that we are being responsible with that data this is a big challenge because there's a lot of data that we have access to and we want to make sure we're only using the specific needs that we have and getting uh actually approvals for all these things and this can be a very arduous process because we do have to justify where we're getting our data from how we're using it how we're storing it how long we're storing it for there's a lot of details and you have to be very prepared for those meetings and very prepared to justify why or why not you're including some data managing changes to a database is a very complicated process and there's a lot of teams at meta dedicated to ensuring that the data is stored securely that's reliable that there's fallbacks there's a lot of considerations you have to take into from an engineering perspective but from an overall infrastructure there is a lot of teams managing and deploying the code and the database changes in order to make this work so something like meta or Facebook with its billions of users is not going to be maintained by a single individual and so I think it's important to understand that because a database is a common point for all the products that we have to take great care when making changes and we have to coordinate a lot of different other teams if you really want to get good at data modeling you have to really think about why are you getting your data how you're storing it how you're protecting it so you really have to think about the trust that is being put into you when doing that especially if you have user information there are a lot of considerations that you're going to need to build a data model and the questions you need to ask are sometimes tough questions and you need to think about trade-offs I hope you walk away from this video 
learning that databases are very complex systems that require a lot of coordination and even though One Singular data model may be designed by one person you're still going to be able to coordinate with lots of other Technical and cross-functional partners in order to build a successful database When developing a database system you need to make sure that it operates efficiently and that you can extract information from it quickly the best way to create such a system is to First design a data model with the data model you can plan how data is stored and accessed within your database before you create the database system in this video you'll explore the concept of data modeling and review different levels of data models the jewelry store mangata and Gallo or m g are in the process of Designing and building a database system to store data on customers products and orders but their current design is very inefficient however if m g first focuses on creating a suitable database model then they can design a more simplified and logical database system explore the basics of database modeling then see if you can assist m g let's begin with the term data modeling a data model provides a visual representation of data elements and shows how they relate to one another in other words it demonstrates how your database system is structured this structure helps you to understand how data is stored accessed updated and queried within the database and it also ensures a consistent structure in high quality data data modeling is used to develop all kinds of databases particularly entity relational databases these databases are planned with the use of an entity relationship diagram there are three different levels of data modeling conceptual data models logical data models and physical data model let's take a few moments to explore these different types you might already be familiar with conceptual data models from previous courses a conceptual data model consists of high abstract level of data elements called entities the relationship between the data elements or entities links related records of data within your database system the purpose of a conceptual model is to present a high level overview of the database system through a visual representation of the entities it contains and their relationship to one another m g can make use of a conceptual data model to create their database system they can present their customers products and orders as entities then document how these entities are related the conceptual model provides the basis for the logical data model again you should have a basic familiarity with examples of a logical data model from previous courses The Logical data model Builds on the conceptual model by providing a more detailed overview of the entities and their respective relationships it identifies the attributes of each entity defines the primary keys and specifies the foreign Keys m g can build on their conceptual data model by using it to create a logical data model their logical data model must include all attributes required for each entity like a list of the attributes each entity contains it then needs to Define which of these columns serve as the primary key for each table for example the client ID column is the primary key for the clients table an M G's logical data model also specifies the foreign Keys they're using to create relationships between the tables in the current model the client table is connected to the orders table through the client ID foreign key a physical data 
model is used to create the internal SQL schema of the database which is implemented in the database management system the physical data model must outline features like the data types constraints and attributes for example m g need to define a specific data type for each attribute like varchar for the full name attribute in the client's table or integer for the contact number attribute they also need to apply relevant constraints they can impose a constraint of not null for each column in the client's table to make sure that each one contains data there are also a range of tools available to generate and execute the internal schema of a physical data model you'll cover these tools in later lessons you should now be familiar with the basics of data modeling and the importance of the role that it plays in the development of a database system you should also be able to differentiate between different levels of data models and explain how each one contributes to the creation of a database system great work when creating a database system you first need to design a data model but there are many kinds of data models that you can choose from so how can you determine which one is best for your database system in this video you'll learn how to choose between different types of database models and find out how they can be used to create databases mangata and Gallo or mng need to build their database system so that it meets the needs of their business so they need to choose a data model that fulfills their data requirements explore the different types of data models including their advantages and disadvantages and see if you can help m g figure out which model is best for their business there are many kinds of models that can be used to build a database system in this video you'll explore the following data models the relational data model The Entity relationship model and the hierarchical data model you'll also review the object-oriented model and the dimensional data model let's begin with a look at the relational data model you might already be familiar with the relational data model from previous courses it's a popular and widely used database model it represents the database as a collection of relations each relation is presented as a table that stores information in the form of rows and columns a key advantage of this model is that it's much simpler to use than other models you can quickly identify and access data but the relationships between the data in this model can become more difficult to navigate with complex relational database systems and you might also need to structure and organize the data differently when performing data analytics next is the entity relationship model this model is similar to the relational data model the key difference is that you can present each table as a separate entity by assigning each one its own set of attributes the model also covers many different types of relationships between entities such as one to one one to many and many to many relationships for example m g can use an entity relationship model to visualize the relationship between their clients and orders tables the two entities are connected through the client ID column using a one-to-many relationship in other words one or more orders belong to a specific client there's also the hierarchical data model the hierarchical data model organizes data in a tree-like or parent-child structure each record of data has a parent node and it can also have its own child node the main disadvantage is that it 
can only be used to record one-to-many relationships between nodes each child node can only have one parent node M and G can use this model to depict the relationship between their orders and clients entities clients are connected to their root node and each order is connected to the related client while each client can be connected to many orders mng can continue to add nodes as required another option for database developers is the object-oriented model this model is based on the object-oriented concept this is where each object is translated to a class that defines the object's characteristics and behavior a key advantage of this model is that you can define different types of associations between objects like aggregations compositions and inheritance this makes object oriented databases suitable for complicated projects that require an object-oriented approach this model also relies heavily on the inheritance feature this is where one class inherits its attributes from another you can create a parent or super class also called a base to hold the common attributes each child class that follows inherits the attributes of the parent class however if you do make use of this model then you need a good understanding of object-oriented principles and related programming skills m g can make use of an object-oriented model to retain attributes between classes they can create a base or parent class called person entity that contains attributes and operations the staff and client classes then inherit these attributes and operations from the person entity class so each staff member and client is a person finally there's the dimensional data model this model is based on two key concepts dimensions and facts facts are measurements obtained from a process for example sales facts obtained from M G's business data dimensions define the context of these measurements like a specific sales period so sales facts measure how many quantities of a particular product m g sold in each week the key advantage of this model is that it optimizes the database for faster data retrieval and restructures data for more efficient data analytics you'll explore the dimensional data model in more detail later in this course you should now be familiar with the different types of data models that can be used to build a database system and some of their key advantages and disadvantages you're making great progress on your database modeling journey
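To make the object-oriented model a little more concrete, here is a minimal Python sketch of the person entity, staff and client inheritance example just described. The class and attribute names are assumptions for illustration.

```python
# Illustrative sketch of M G's object-oriented model: Staff and Client inherit
# the common attributes of a PersonEntity base (parent or super) class.
class PersonEntity:
    def __init__(self, full_name, contact_number, email):
        self.full_name = full_name
        self.contact_number = contact_number
        self.email = email

class Staff(PersonEntity):
    def __init__(self, full_name, contact_number, email, role):
        super().__init__(full_name, contact_number, email)
        self.role = role

class Client(PersonEntity):
    def __init__(self, full_name, contact_number, email, client_id):
        super().__init__(full_name, contact_number, email)
        self.client_id = client_id

# every staff member and client is a person
goldsmith = Staff("Adrian Lopez", "555-0101", "adrian@example.com", "goldsmith")
customer = Client("Maria Kowalski", "555-0202", "maria@example.com", 1)
```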
at this stage of your database engineering journey you should be familiar with the concept of database normalization when working with database tables you can often encounter anomalies that can lead to inconsistent data you can solve these anomalies by applying the normalization process over the next few minutes you'll recap the importance of database normalization and the methods for applying it within your databases mangata and Gallo or m g are building a database that holds data related to product orders they've built some tables that contain order product and client data but these tables also contain several challenges around anomalies review the database normalization process then help m g to resolve these anomalies within their database tables let's begin with a quick recap of database normalization normalization is an important process used in database systems it involves structuring tables in order to reduce data duplication avoid data modification implications and simplify data queries from the database as you learned earlier database tables that don't follow the normalization process often give rise to anomalies the most common of these anomalies include an insertion anomaly this is when new data is inserted into a table which then requires the insertion of additional data an update anomaly this occurs when you attempt to update a record in a table column only to discover that this results in further updates across the table and a deletion anomaly this is when the deletion of a record of data causes the deletion of more than one set of data required in the database so let's quickly recap how the three levels of data normalization can be used to help resolve or avoid these anomalies first normal form sometimes referred to as 1nf enforces data atomicity and eliminates unnecessary repeating groups of data in database tables in other words there must only be one instance of a value per field repeated groups of data cause data redundancy and inconsistency for example M G's products table stores the engagement and diamond ring products in the same cell of the item column this violates the atomicity rule there should only be one instance of a value per column you can resolve this issue by creating two new tables first create a products table that holds all data related to the product entity assign the table a product ID column to identify each unique record then create a clients table that holds all data related to the client entity and once again create an ID column to identify each unique record this solution removes all unnecessary repeated data from your tables next let's look at second normal form for a table to meet second normal form or 2nf it must already be in first normal form it also cannot contain any relationships built on functional or partial dependency the table must be defined with a composite primary key for example the delivery status table from m g has a composite primary key that consists of the order ID and the product ID to comply with the second normal form you must identify if there are any non-key attributes that depend on one part of the composite key the order date in the delivery status table is a non-key attribute it can be determined by using the order ID column only this is called partial dependency this isn't permitted in second normal form because all non-key attributes must be determined by using both parts of the composite key this can be fixed by removing the order date attribute from the delivery status table in other words keep the order date column in the orders table your table now meets second normal form all non-primary key attributes depend only on the primary key value finally there's third normal form third normal form or 3nf removes unnecessary data duplication this ensures data consistency and integrity again a table must adhere to first and second normal form before you can apply third normal form third normal form resolves issues of transitive dependency this is when non-key attributes are dependent on one another for example the city and zip code in the m g orders table are non-key attributes however it's possible to determine the city value based on the ZIP code value and if you change the ZIP code value you need to change the city name value this means a non-key attribute depends on another non-key attribute which violates the third normal form to solve this you can split the table into two tables an orders table with all related data and a city table with two columns the zip code and city name all non-key attributes are now determined only by the primary key in each table so the tables now meet the requirements of 3nf
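As a rough sketch of where M G's tables could end up after applying first, second and third normal form, the statements below create one possible normalized schema. The table names, column names and data types are assumptions rather than the course's exact solution, and the mg_schema database is assumed to already exist.

```python
# One possible normalized schema for M G, executed through Connector/Python.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="admin1",
                               password="password", database="mg_schema")
cursor = conn.cursor()

ddl_statements = [
    # 1NF: one value per field, with separate products and clients tables
    """CREATE TABLE Products (
           ProductID INT PRIMARY KEY,
           ProductName VARCHAR(100) NOT NULL)""",
    """CREATE TABLE Clients (
           ClientID INT PRIMARY KEY,
           FullName VARCHAR(100) NOT NULL,
           ContactNumber VARCHAR(20) NOT NULL)""",
    # 3NF: zip code determines city, so city lives in its own table
    """CREATE TABLE ZipCity (
           ZipCode VARCHAR(10) PRIMARY KEY,
           City VARCHAR(100) NOT NULL)""",
    # 2NF: order date depends only on the order ID, so it stays in the orders table
    """CREATE TABLE Orders (
           OrderID INT PRIMARY KEY,
           ClientID INT NOT NULL,
           OrderDate DATE NOT NULL,
           ZipCode VARCHAR(10) NOT NULL,
           FOREIGN KEY (ClientID) REFERENCES Clients (ClientID),
           FOREIGN KEY (ZipCode) REFERENCES ZipCity (ZipCode))""",
    # the delivery status table keeps only attributes that depend on the whole composite key
    """CREATE TABLE DeliveryStatus (
           OrderID INT,
           ProductID INT,
           Status VARCHAR(50),
           PRIMARY KEY (OrderID, ProductID),
           FOREIGN KEY (OrderID) REFERENCES Orders (OrderID),
           FOREIGN KEY (ProductID) REFERENCES Products (ProductID))""",
]

for ddl in ddl_statements:
    cursor.execute(ddl)

conn.close()
```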
applying the three fundamental forms of normalization is a good way to resolve any anomalies that could arise within your database you can resolve any issues of data redundancy and inconsistency by modeling your database so that it's easy to use access and query you should now be familiar with the process of normalization and how to apply it to your database great work as a database engineer you need to create implement and manage a database system that meets the specific requirements of your business or organization these can be complicated tasks to carry out but there are a range of tools and technologies you can use to support your work one example of the tools that you will make use of is the MySQL workbench tool in this video you'll explore the basics of the MySQL workbench tool you'll also learn how the tool can be used to help model and manage your databases over at M G the company is developing a new MySQL database management system the database system must follow some key requirements particularly in relation to operating systems data migration and editing tools mng can build a database that meets these requirements using MySQL workbench take a few minutes to review the basics of MySQL workbench then see if you can help them out let's start with an overview of MySQL workbench MySQL workbench is a unified visual tool developed by Oracle for database modeling and management it contains several key features that are useful for creating editing and managing databases let's review some of MySQL workbench's key features MySQL workbench is open source and cross-platform it can be used with multiple operating systems it simplifies database design and maintenance and it offers a visual SQL editor and other tools that support developers it provides auto complete and highlighting features for writing SQL statements and it facilitates data migration between different versions of MySQL and between MySQL and other relational database systems you'll make use of MySQL workbench in this course to model and manage data in your MySQL database but first you need to download install and set up MySQL workbench on your operating system download a copy of MySQL workbench from dev.mysql.com downloads make sure that you download the correct copy for your specific operating system once you've downloaded a copy you then need to double-click the file to install it on your machine next follow the installation wizard with the custom setup when you run the wizard make sure that you install the following software MySQL server MySQL workbench and MySQL shell if you encounter any difficulties read the MySQL workbench installation file for guidance next let's open the MySQL workbench and find out more about how you can use it to establish connections once you've downloaded a copy of MySQL workbench you need to set it up launch the program and view the MySQL workbench home screen the home screen contains a welcome message links to documentation blogs and discussion forums and provides access to various tools and features you can use the home screen side panel to access MySQL connections models and the MySQL workbench migration wizard select the connections option to view a list of connections to local and remote instances of MySQL you can use connections to load configure group and view information about each of your MySQL connections models displays the most recently used models each entry lists the date and time the model was last opened along with its associated database you can also select
the plus sign to add a new model select the folder button to browse and open saved models and select the more button to access additional commands you can also open the migration tab to display an overview of prerequisites for using the wizard starter migration process open the odbc administrator or view documentation let's look at the process steps for creating a new user creating a new user is the most secure way to connect to your mySQL database because you can manage user roles and privileges make sure MySQL connections is selected first log into the MySQL server using the root user enter the root user password you set when installing MySQL save the password for future reference if required next select users and privileges under the management menu to view a list of current database users select add account to add a new user this opens a new window in which you can enter the new user details name the new user admin 1 enter a password confirm the password you can also use this window to control user privileges let's review these privileges account limits is used to limit a user's maximum number of queries updates and connections the administrative roles tab lets you assign a role to a new user or assign them associate privileges in this case select DBA that grants the right to perform all tasks schema privileges lets you control new user access privileges select the apply button to create the new user the next task is to create a new MySQL connection from the MySQL workbench home screen select the plus icon to open the setup new connection form fill in the form to create a new server instance you can now use the following values use test server as the server instance name in the username text field type admin 1. you can use the default settings for all other parts of the form finally make sure your host name is 127.0.0.1 and the port number is 3306. 
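For reference, the add account steps described here correspond roughly to standard SQL statements like the ones below, shown being run through Connector/Python as the root user. The user name admin1 and the passwords are placeholders, not values from the course.

```python
# Hypothetical sketch: SQL roughly equivalent to creating the admin 1 account in the GUI.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", port=3306,
                               user="root", password="root_password")
cursor = conn.cursor()

cursor.execute("CREATE USER 'admin1'@'localhost' IDENTIFIED BY 'strong_password'")
# a DBA-style administrative role: all privileges on every schema
cursor.execute("GRANT ALL PRIVILEGES ON *.* TO 'admin1'@'localhost' WITH GRANT OPTION")

cursor.close()
conn.close()
```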
click the test connection button to check that the settings work if required enter the password you set for the admin 1 user if you set it up correctly then MySQL workbench should confirm that the connection was successful if not return to the form and check that you've entered the information correctly select ok to save the connection your new MySQL connection is added to the home screen you can now use this connection to begin working with database schemas and SQL queries you should now be familiar with the basic features of the MySQL workbench tool and know how to use it to help model and manage your databases you're well on your way to understanding advanced data modeling as a database engineer you'll frequently need to create complex and robust database systems this can be a difficult task but luckily you can use tools like MySQL workbench to create database systems quickly and efficiently in this video you'll learn how to use MySQL workbench to create databases and tables and view insert and select data over at M G they need to create a database system to manage staff records they've decided to create this new database using MySQL workbench because of its SQL editor GUI and other useful features let's help mng to create their new database using MySQL workbench the first task is to create a new database schema choose a MySQL server instance and select the schema menu to create a new schema select the create schema option from the menu pane in the schema toolbar this action opens a new window within this new window enter mg underscore schema in the database name text field select apply this generates a SQL script called create schema mg schema you are then asked to review the SQL script to be applied to your new database click on the apply button within the review window if you're satisfied with the script a new window appears asking if you'd like to execute the create schema statement select the finish button to create the mg schema the schema has now been successfully created and is listed in the schema menu you might need to select the refresh icon from the menu to view new schemas to view information on the mg schema select it and click the information icon this action brings up a new window that contains several options like tables columns triggers and more you can also double-click the schema name to view a sub menu of all created tables views procedures and functions if you want to delete the schema right-click the name and select the drop schema option the next task is to create a new table inside the mg schema to hold the staff information right click the tables option in the sub menu select create table from the list of options that appear this brings up a new table form enter staff in the table name text field use the default settings for all other fields fill the column details in the middle window as required change the name of the first column to staff ID define the column as integer and set it as the primary key using the check boxes add the following remaining columns using the same method full name contact number role and email set each column's data type then declare each column as either null or not null as required finally click the apply button to generate the relevant SQL statement you should now be able to review the SQL statement that creates the staff table review the SQL statement and click apply to execute the statement then select finish to save your changes you can now view the staff table in the mg schema database
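The script shown in the review window would look roughly like the sketch below if you ran it yourself through Connector/Python. The data types and lengths are assumptions, and MySQL workbench may generate slightly different statements.

```python
# Sketch of roughly what the generated schema and table script does (types are assumptions).
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", port=3306,
                               user="admin1", password="strong_password")
cursor = conn.cursor()

cursor.execute("CREATE SCHEMA IF NOT EXISTS mg_schema")
cursor.execute("""CREATE TABLE mg_schema.staff (
                      staff_id INT NOT NULL,
                      full_name VARCHAR(100) NOT NULL,
                      contact_number VARCHAR(20) NOT NULL,
                      role VARCHAR(50) NOT NULL,
                      email VARCHAR(100) NOT NULL,
                      PRIMARY KEY (staff_id))""")

conn.close()
```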
select the information icon to view the table structure the information window appears and shows options for columns indexes and other table elements click the columns tab to show the column structure another method is to type describe staff in the SQL editor then click the run button to execute the statement this displays the details of the staff table structure your next task is to create a virtual table in the schema called staff view first right-click the view submenu of the mg schema select the create view option to open the SQL editor type a create view SQL statement to create the virtual table create a basic view to show the staff full names and contact numbers click the apply button to bring up the review window you'll see some SQL code with suggestions that you can either accept or amend as required for now amend the table by creating aliases for the columns so that they're easier to view when querying the table finally click the apply button then click finish to create the view you can now view the virtual table in the mg schema submenu next mg needs you to populate the staff table with data to insert data in the staff table you'd usually use the insert SQL statement in the SQL editor however with MySQL workbench you can populate the table grid directly first right-click the staff table then select rows from the list of options that appear enter the mg staff records into the table click the apply button to generate an automatic insert into statement then click the apply button again once you've reviewed the statement in the review window to execute the statement finally select finish the staff records are now stored in the staff table your final task is to query data from the m g database you can query the database using MySQL workbench's SQL editor write a select query that extracts all data from the staff view table this query outputs all data that exists within the staff view table into a table grid m g have now created their staff table and populated it with the required data and you should now be familiar with how to use MySQL workbench to create databases and tables as well as view insert and select data great work at this stage of the course you understand the importance of database models but how do you create these models you can create database models using professional data modeling tools such as MySQL workbench in this video you'll learn how to use MySQL workbench and make use of the forward and the reverse engineer features mng need to develop a basic database to maintain information about their customers and orders they can use MySQL workbench to create the model then they can use MySQL workbench's forward engineer feature to transform the data model into a SQL schema and implement it automatically into MySQL let's help m g create their database using MySQL workbench in the MySQL workbench home screen click the models view from the left sidebar then click the plus icon next to the models to display a new window this action creates a new schema called mydb double-click the schema name and change it to mangata underscore Gallo the next step is to create the data model diagram this diagram is essential for using the forward engineer feature you can create the data model in MySQL workbench and then transform it into a SQL schema that can be implemented automatically in MySQL first double-click add diagram to create the ER diagram this action opens the ER diagram designer
page now you need to create the tables click the add table icon and then click a square within the view this action creates a table entity double-click The Entity to load the table editor change the default table 1 name to customers now you need to add columns to the customers table click a cell this creates a default ID customers column change its name to customer ID keep the data type as integer check the primary key not null an auto increment boxes then add three more columns full name contact number and email set the data types as required and Mark all three columns as not null follow the same steps to create the orders table and set the data types as required you also need to create the orders table foreign key Define the table's customer ID column as the foreign key using the foreign Keys tab at the bottom of the window type customer ID FK in the foreign key text field double-click the corresponding field in the reference table then select the customers table check the customer ID referenced column then Mark it as on update Cascade and on delete Cascade you now have a visual representation of the mg schema ER diagram with the customers and orders tables save your work by clicking file save as and name it mangata underscore Gallo underscore model now that you've created the data model you can synchronize it to the MySQL server using the forward engineer feature select the database tab then the forward engineer option from the menu this opens the forward engineer to database wizard select the connection that you created earlier to connect to the MySQL server leave the default setting as is Click next the wizard lists some Advanced options you can ignore these for now just click next a new window appears called select objects to forward engineer with a series of options check the export MySQL table objects box then click next The Next Step displays the SQL script to be executed on the MySQL server to create the internal schema review the script to ensure that it creates the schema as required click next to forward engineer the SQL script a message appears stating forward engineer finished successfully click close to close the wizard the m g database has now been created in MySQL you can confirm this by examining the schema list in the Navigator section or executing a show databases statement inside the workbench SQL editor m g also need to use MySQL workbench to reverse engineer a data model this means building a data model ER diagram from an existing database the first step is to go to the database tab then select the reverse engineer option once you're happy with the connection details click next each connection must be configured appropriately to connect with mySQL server if you're not happy with the existing connection you can choose another one and click next a message appears stating execution completed successfully click next a list of available schemas on the server is displayed select the database schema you want to reverse engineer then click next a message appears on screen stating retrieval completed successfully click next a new screen appears in which you are presented with the option to select all objects the screen confirms that all objects have been retrieved click execute once the retrieval process has been successfully executed a message is displayed which states operation completed successfully the selected objects have now been reverse engineered successfully click next again a final screen is displayed which shows a summary of the import click finish to complete 
the process MySQL workbench creates the new ER diagram from the internal MySQL schema you can print the data model as a PNG image share it with others or apply changes and push it to the database using the forward engineer feature mng have now developed a basic schema in their database using MySQL workbench and you should now know how to make use of the reverse engineer feature in MySQL workbench to develop a data model diagram well done congratulations on reaching the end of the first module in this advanced data modeling course in this module you learned how to design a suitable database model resolve any anomalies and then implement the model in your database using MySQL workbench let's take a few minutes to recap some of the key skills you gained in this module's lessons in the first lesson you reviewed the concept of data modeling and learned that a data model demonstrates how your database system is structured you also discovered that there are three different levels of data modeling the conceptual data model presents an abstract overview of the database system through a visual representation of the entities and their relationship to one another then there's the logical data model this model identifies the attributes of each entity and defines the primary and foreign keys of each table and the third and final level is the physical data model this model provides the detailed level required to implement the internal schema in the database management system you then explored different types of data models available to database engineers you reviewed their advantages and disadvantages to determine which is best for your needs some of the models that you reviewed included the relational data model the entity relationship model and the hierarchical data model there's also the object-oriented model and the dimensional data model you also recapped the topic of database normalization you discovered that normalization is the process of structuring tables to resolve anomalies such as the insertion anomaly the update anomaly and the deletion anomaly you also recapped the three levels of data normalization that are used to resolve these anomalies first normal form or 1nf which focuses on the issue of data atomicity there's second normal form also called 2nf this involves fixing any relationships built on partial dependencies and finally there's third normal form or 3nf this is a method of resolving transitive dependencies in this lesson you also explored an example of a data model you then demonstrated your new skills by designing your own database model in an exercise the second lesson in this module introduced you to MySQL workbench MySQL workbench is a unified visual tool for database modeling and management it offers a range of useful features for creating editing and managing databases as part of your introduction to this tool you learned how to download and install it on your operating system you also saw how to use the tool to create a new user and establish a connection to a MySQL database you then discovered how MySQL workbench can be used to manage databases you saw how to use the tool to create and navigate a database schema and you learned how it can be used to create and view tables including virtual ones and query their data the next topic in this lesson provided an overview of database modeling in MySQL workbench with MySQL workbench you can create new database schemas you can also build a data model diagram using MySQL workbench's forward engineer feature this process involves creating a
data model in MySQL workbench and then transforming it into a SQL schema that can be implemented in a mySQL database system MySQL workbench can also be used to reverse engineer a data model by building a data model ER diagram from an existing database you can print the model share it or apply changes and push it to the database using the forward engineer feature you then put your new skills to the test and an exercise in which you were challenged to design your very own database model in MySQL workbench you should now be familiar with the basics of data modeling and management I look forward to guiding you further through Advanced Data modeling in the next module a regular database collects stores and processes data from transactions in real time but what if you need to Aggregate and analyze data from multiple sources in these instances a data warehouse is the perfect solution it can aggregate data from a range of sources and analyze it using different tools over the next few minutes you'll learn what a data warehouse is explore its main characteristics and review the different types of data that can be used in data analytics the online e-commerce platform Global Superstore has seen a significant drop in sales recently they want to perform data analytics to identify the reasons behind this downturn the company has large amounts of data from multiple different sources like online transactions social media interactions and website data analyzing all this data requires powerful tools a data warehouse is the perfect solution let's explore the concept of data warehousing in more detail and find out how Global Superstore can make use of it a data warehouse is a centralized data repository that Aggregates stores and processes large amounts of data from multiple sources it separates the data analysis workload from the standard transaction workload of a regular database management system users can then query this data to perform data analysis this type of database is called online analytical processing or olap a regular database focuses on collecting storing and processing data in real time it's also known as online transactional processing or oltp there are four key characteristics of a data warehouse they're subject oriented integrated non-volatile and time variant let's explore these characteristics starting with subject oriented when building a data warehouse you need to choose one or more subject areas to explore for example Global Superstore can build a data warehouse that focuses on sales they can then use the warehouse to find all relevant information on their sales processes like best and worst selling products integrated means that a data warehouse integrates data from a range of different sources this data must be integrated in a consistent format integrated data must also resolve issues such as naming conflicts and data types Global Superstores data warehouse integrates data from online purchases website interactions and social media the next characteristic is non-volatile non-volatile means data should not be deleted once it's loaded to the data warehouse the purpose of a data warehouse is to analyze data as it exists the more data you have the better the results of your analysis so the data that Global Superstore integrates must not be deleted the final characteristic is time variant a data warehouse Aggregates data over a long period so that it can measure changes in data over time this helps users to discover Trends patterns and relationships between data elements for example Global 
Superstore can use data from the last few years of sales to find out why their profits have declined now that you're familiar with the characteristics of a data warehouse let's look at the different forms of data that it encounters there's structured data semi-structured data and unstructured data let's start with a look at structured data this is data that's presented in a structured format within a well-defined data model the relational database model is commonly used for structured data the organized tables help users to access manage and search for data using SQL a data warehouse typically uses structured data this data type is organized for a specific purpose so it's easier to gain insights from and uncover answers to specific questions semi-structured data is data that's only partially structured it requires more effort to perform data analysis an example of semi-structured data is an email message it can contain structured data like a sender and subject but the body is unstructured and can contain several different kinds of data like text images and videos the final type of data is unstructured data this data type doesn't adhere to any specific predefined data model it can include any kind of data like text video or audio this data can be collected and stored without applying any form of data model but analyzing unstructured data requires Advanced data analytics mechanisms like machine learning and data mining you'll explore these techniques later in this course semi-structured and unstructured data are more suited to a data Lake this is like a data warehouse but it can handle unstructured data data lake is used more widely by data scientists businesses prefer working with structured data in data warehouses because of its accuracy you should now be able to explain what a data warehouse is outline its main characteristics and the different types of data that can be used in data analytics that's great progress I look forward to guiding you further through these topics at this stage of the course you should be familiar with the concept of a data warehouse but you might still have questions like what does a data warehouse look like and how does it work in this video you'll explore the architecture of a data warehouse and understand how its components work together to facilitate data collection integration and Analysis over at Global Superstore they've begun building a data warehouse that can aggregate integrate and analyze data to help inform their business activities as a database engineer it's important that you understand the architecture of a data warehouse so let's explore the architecture of global Superstores data warehouse and discover how it works let's begin with a quick overview of the purpose and basic composition of a typical data warehouse's architecture a data warehouses architecture must be constructed so that it can control the flow of data from different sources it needs to be able to process the data it encounters and integrate it in a consistent format this is so that the users of the data warehouse can perform data analysis and extract useful insights to facilitate this process the architecture of a data warehouse is comprised of several different components each of these components plays a key role within the data warehouse to support data analytics these components include data sources data staging area the data warehouse itself and data Marts once the data has been collected and integrated Within These components the data warehouse users can then perform data 
analysis and present their findings. Let's explore these components and find out more about how they contribute to the data analysis process. The first component of a data warehouse's architecture is the sources of data that it relies on for its insights. These include external sources like Global Superstore's online surveys or social media data, internal sources like information collected within the company database on customers and products, and operational data produced by day-to-day business activities like customer orders. Data sources can also include flat files, which are files without an internal structure, like records of customers' online behavior or data log entries. Make sure that the data sources are accurate so that you can avoid irrelevant or poor data analytics. The next component is data staging. The data staging area includes a set of processes known as the ETL, or extract, transform and load, pipeline. You'll explore these terms in more detail later in this lesson. Now that you've sourced and staged the data, the next stage is to store it. Data is stored in the data storage component. This is a central database repository that serves as the foundation of the data warehouse, and it organizes data in relational databases. It also includes a metadata repository that holds different kinds of information about the data, like where it was sourced from, the features of the data, and the tables the data is stored in along with their attributes. What does metadata mean in the context of a data warehouse? Metadata is essentially a table of contents for the data in the data warehouse. It helps database engineers manage and keep track of the changes within their source systems, methods and processes. For example, Global Superstore's metadata contains information like where the data was sourced from. It also shows when each file was created, who created it, and other important information. The next component in the data warehouse is data marts. These are subject-oriented databases that meet the demands of a specific group of users. Each mart contains a subset of data that focuses on particular parts of the business or organization. For example, Global Superstore's data marts relate to specific departments and business functions. They can use these marts to perform focused analytical processes on specific parts of the business. Finally, once the data is ready, you can perform data analytics. Data analytics is performed using different analytics techniques, like data mining. Once you've analyzed the data, you can then present it in the form of reports, like interactive reports, analytics reports or static reports. Global Superstore's data analysts can analyze the data within their repository using different techniques. They can then produce reports that provide information on sales, profits and other important aspects of the business. Now that you're familiar with the components of a data warehouse, let's take a quick look at some best practices to follow when creating and working with its architecture. First, always separate the analytical and transactional operations. Make use of scalable solutions so the data warehouse can process increasingly larger amounts of data, and build a flexible architecture that can incorporate and implement new functionality. There are also several other best practices you should follow. For example, make sure your architecture contains data security features, develop a simple and flexible architecture that can work with different forms of data, create a data warehouse that's easy to understand, implement, use and manage, and document the development of the data warehouse. This makes it easier to incorporate new functions. You should now be familiar with the architecture of a data warehouse, and you should also be able to explain how its components work together to facilitate data collection, integration and analysis. Great work.

Very large databases are very hard to get data from, and so ETL pipelines are some of the critical ways to ensure that different products get quicker access to the data they need. [Music] Hi, my name is Moxie Herrera, I use they/them pronouns, and I'm a software engineer at Meta in the Menlo Park office. ETL stands for extract, transform and load. This is one of the common ways that data will be transferred to particular areas, so you will have some sort of data source, then perhaps a staging area for the data, and then a consumer of the data. Splitting it up into different data consumers allows you to do two things: have the raw data stored and backed up in a warehouse, and then have it extracted and transformed into the data that you need, so that it can be loaded by the consumers that need that data at the time of use. The purpose of an ETL pipeline can vary depending on your uses, but fundamentally the point is either to bring together a whole bunch of different data sources or to have very large data sources abstracted away from the consumers of the data. The extractors bring together all these data sources, transform does the data validation, the scrubbing and the cleaning, maybe encryption, and then finally the loading is where the end consumers actually take the data. The exact usage depends on the case, but what this means is that an ETL pipeline is a very common process that's used to solve many different data problems. Part of the point of an ETL pipeline is to take all these different sources, which may be built under different systems, and bring them into one system that specific consumers can use. This allows for parallelization, and so the decisions are often made around what the end consumers need, where you are getting the data from, how you are organizing it, and what would lead to the most performant approach. One of the most common problems when dealing with data pipelines is handling the volume of data and the varied sources of data. This can make it very difficult to ensure that your pipeline is up to date and has the data that is needed at the time of use. Understanding the delay and how these pipelines work is critical to ensure that you are not expecting data consumers to be able to grab data that's not actually available to them. A change in the database may then trigger a need for a change in the data pipelines that you have built, depending on the needs of the consumer and what the database change is. What this requires is a strong understanding of the need of that pipeline, what its goals are, and a strong sense of ownership by the stakeholders of that pipeline. These kinds of updates happen all the time, and they can trigger all sorts of different changes in the product team, so this requires a lot of ownership, responsibility and understanding of ETL pipelines for when those changes occur and what changes you need to make to accommodate them. There's a lot of data in the world, in fact too much data to store in a single database, so ETL pipelines are absolutely fundamental in this world of big data, cloud computing and the metaverse.
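To connect these ideas to the SQL you already know, here is a minimal sketch of what one small extract, transform and load step could look like in MySQL. It is illustrative only: the table and column names (staging_orders, dw_sales_fact and so on) are hypothetical, and real pipelines at this scale are normally built with dedicated tooling rather than hand-written statements.

-- Extract: raw rows copied from a source system into a staging table.
CREATE TABLE staging_orders (
  order_id     INT,
  order_date   VARCHAR(20),   -- arrives as text from the source file
  product_name VARCHAR(100),
  quantity     INT,
  unit_price   DECIMAL(10,2)
);

-- Target table in the warehouse that analysts will query.
CREATE TABLE dw_sales_fact (
  order_id     INT,
  order_date   DATE,
  product_name VARCHAR(100),
  quantity     INT,
  revenue      DECIMAL(12,2)
);

-- Transform and load: validate, clean and convert the staged data,
-- then insert it into the warehouse table.
INSERT INTO dw_sales_fact (order_id, order_date, product_name, quantity, revenue)
SELECT
  order_id,
  STR_TO_DATE(order_date, '%Y-%m-%d'),  -- convert text to a DATE
  TRIM(product_name),                   -- basic scrubbing
  quantity,
  quantity * unit_price                 -- a calculated value
FROM staging_orders
WHERE quantity IS NOT NULL
  AND unit_price IS NOT NULL;           -- simple validation

The pattern is the same regardless of scale: raw data lands in a staging area, is validated and reshaped, and only then is it loaded into the tables that data consumers rely on.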
So far in your database engineering journey, you've studied and worked with different data models like entity relationship and object oriented, but these models are built for real-time transactions. When working with a data warehouse or data analytics, you need a model that can optimize data access and queries for specific analysis. In this video you'll explore the fundamentals of dimensional data modeling. This is a model used to build databases in a data warehouse for data analytics. Global Superstore are in the process of creating their data warehouse. Their next step is to design a data model for their database system that can handle data analytics, and the dimensional data model is a good fit. Let's find out more about this model and see how Global Superstore can build it into their data warehouse. A dimensional data model is a data model based on the two key concepts of dimensions and facts. Let's take a closer look at these two concepts. The dimensions represent the different elements of your data and define a context or perspective for your measures. Good examples of dimension data elements in Global Superstore's database are time and location: they can measure sales and other aspects of their business in the context of time and location. The term facts represents quantifiable data held within a database. Good examples of Global Superstore's facts include the number of sold products and the profits that they have made. There are two kinds of measures in the fact table: stored measures and calculated measures. Stored measures are aggregated measures stored in the data warehouse, like sales data and product price. This data is loaded from the data source and stored in the data warehouse repository. Calculated measures are calculated from other measures. For example, Global Superstore can calculate their profit by deducting the sold product's cost from the sold price. These measures are performed through queries that rely on calculation rules programmed in the data warehouse database. Next, let's look at the structure of a dimensional data model. A dimensional data model consists of fact and dimension tables. The dimension table includes the dimension's data elements and can be structured as a hierarchy of data. This facilitates different levels of data analysis: you can navigate through the hierarchy to find the data you need, for example by drilling down or rolling up through the data elements. The fact table includes the measures data. For example, Global Superstore can use this structure to find their average sales at specific points in time. They can explore data in different dimensional contexts and drill down through different levels. They can use the time and location dimensions to explore the data for average sales per year or by city, then drill down through this data to find average sales per month, and even average sales per week or day. There are several best practices that you should follow when designing a data model. Before designing a dimensional data model, you need to be clear about which business activities you want to examine, and you need to know which dimensions provide you with the most meaningful and useful context. Also make sure you organize data in a way that's easy to understand, access and query. A common method for designing a dimensional data model is with the use of schemas. One of the most widely used schemas in a data warehouse is the star schema. The star schema is a common model for designing databases in a data warehouse. It's a simple dimensional data model that consists of fact and dimension tables organized as a star: one or more fact tables sit in the middle of the schema, connected to one or more dimension tables. Global Superstore can use a basic star schema to organize their dimensional data model. The sales fact table is in the center of the diagram, and it's connected to several dimension tables: suppliers, customer, product, time and location. Another schema that you can use to design your dimensional data model is the snowflake schema. It's called a snowflake schema because the schema diagram resembles a snowflake. When working with a snowflake schema, you should normalize your dimension tables to eliminate data redundancy. The best approach to normalization is to group dimension data into multiple simple sub-dimension tables. The disadvantage of this schema is that it increases the number of dimension tables and requires more foreign keys to connect the tables, so more complex queries are required to join records when performing data analytics. For example, Global Superstore can use the schema to normalize their product dimension table into three tables: a products table, a subcategory table and a category table. You should now understand the concept of dimensional data modeling, and you should be able to explain how the star and snowflake schemas work. Well done.
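As a rough illustration of the schemas just described, the following MySQL sketch shows one way a simplified version of Global Superstore's star schema could be declared. The table and column names are hypothetical and kept deliberately small; a snowflake variant would go one step further and split dim_product into separate product, subcategory and category tables.

-- Dimension tables: the context for the measures.
CREATE TABLE dim_date (
  date_id   INT PRIMARY KEY,
  full_date DATE,
  month     INT,
  year      INT
);

CREATE TABLE dim_product (
  product_id   INT PRIMARY KEY,
  product_name VARCHAR(100),
  subcategory  VARCHAR(50),
  category     VARCHAR(50)
);

CREATE TABLE dim_location (
  location_id INT PRIMARY KEY,
  city        VARCHAR(50),
  country     VARCHAR(50)
);

-- The fact table sits in the middle and references each dimension.
CREATE TABLE fact_sales (
  date_id      INT,
  product_id   INT,
  location_id  INT,
  units_sold   INT,
  sales_amount DECIMAL(12,2),
  cost_amount  DECIMAL(12,2),
  FOREIGN KEY (date_id)     REFERENCES dim_date(date_id),
  FOREIGN KEY (product_id)  REFERENCES dim_product(product_id),
  FOREIGN KEY (location_id) REFERENCES dim_location(location_id)
);

-- A calculated measure such as profit can be derived at query time
-- from the stored measures, for example:
-- SELECT SUM(sales_amount - cost_amount) AS profit FROM fact_sales;

Notice that the fact table holds the measures plus a foreign key to each dimension table, which is exactly the star arrangement described above.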
At this stage of the course you should be familiar with the dimensional data model and many of the key concepts related to it. But how do you build a dimensional data model? The process for building a dimensional data model revolves around four key steps, known as Kimball's dimensional data modeling approach. In this video you'll explore the approach and review each of these four steps in detail. Let's begin with a look at Global Superstore and their use of the dimensional data model. Global Superstore wants to perform data analytics to understand their recent sales figures. This requires building a dimensional data model that will help them understand their business and the factors that impact their sales and profits. Before you explore Global Superstore's process, let's quickly recap the purpose of a dimensional data model and take a high-level look at the four key steps. A dimensional data model must focus on particular aspects of a business or organization in order to address specific problems. The model is created using a systematic approach that revolves around four key steps: the business process, the grain, the dimensions and the facts. Each of these steps is a choice. You need to choose a business process that your dimensional model must investigate, and you then need to choose the facts and dimensions that can provide the answers you need. Let's work through each of these four steps, or choices, and understand how they contribute to the process of building a dimensional data model. When building a dimensional data model, the first step is to identify or choose the specific business process to be addressed. Once you've identified the process, you can then determine the grain of data in the data model. Global Superstore have decided that the business process to be addressed is their sales activity. Once you've decided on the process, you then need to choose the level of detail required. This is referred to as the grain. What granularity or level of detail is required for the data warehouse to address your process problem, and what's the lowest level of detail required to address the issue? For example, Global Superstore need to analyze their sales data at both a yearly and a daily level. They also need to investigate this data at the global and local level.
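To see why the choice of grain matters, consider the hypothetical fact and dimension tables sketched earlier. If the grain is one row per day, the same fact table can be rolled up to any coarser level, so Global Superstore's yearly and daily requirements can both be met. The queries below are only an illustration of that idea, reusing the hypothetical fact_sales and dim_date names.

-- Average sales per year (rolled up from the daily grain):
SELECT d.year, AVG(f.sales_amount) AS avg_sales
FROM fact_sales f
JOIN dim_date d ON d.date_id = f.date_id
GROUP BY d.year;

-- Drill down to average sales per month within each year:
SELECT d.year, d.month, AVG(f.sales_amount) AS avg_sales
FROM fact_sales f
JOIN dim_date d ON d.date_id = f.date_id
GROUP BY d.year, d.month;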
The next step in the process is to choose the dimensions. In this step you need to choose the relevant dimensions; in other words, in what context do you need to explore your business activity? As you already know, Global Superstore need to analyze their sales data, and they need to analyze this data in the context of products, customers, time and locations. So now that you've identified the business process, the grain and the dimensions, it's time to establish the facts. This is basically answering the question of what you want to measure. You need to select the measures that contain numeric data and populate your fact table with these attributes. For example, Global Superstore need to explore their facts using the dimension tables location, product and time. They can demonstrate how each of these dimensions impacts the sales, and they can also include relevant attributes that provide useful information about each dimension. Once you've decided what aspect of your business process you need to investigate and chosen the grain, the related facts and the dimensions, you can then create your schema and arrange your dimensions and dimension tables. Global Superstore can arrange their dimensions and measures in a star schema. Their schema examines the performance of their sales activity in the context of four different dimensions: customers, products, locations and time. And within each dimension is a set of relevant attributes that target the required data. Global Superstore have identified their dimensions and measures step by step based on their business requirements. They can now perform different forms of data analysis to achieve their goals. You should now be familiar with using a systematic approach to build a dimensional data model, and you should also be able to identify and explain each of the four steps in which you must make your decisions around data. You've made great progress on your advanced data modeling journey.

Congratulations on reaching the end of the second module in this Advanced Data Modeling course. In this module you explored the architecture of a data warehouse and learned how to build a dimensional data model. Let's take a few minutes to recap some of the key skills you've gained in this module's lessons. You learned that a data warehouse is a centralized data repository that aggregates, stores and processes large amounts of data from multiple sources. Users can then query this data to perform data analysis. You then discovered that a data warehouse is defined by four key characteristics. First, it's subject oriented, which means that it provides information on chosen subjects or topics. Data warehouses are also integrated, in that they integrate data from a range of different sources. The next key characteristic is non-volatile: data is maintained in the state in which it was loaded into the data warehouse. And finally, data warehouses are time variant: they aggregate data over a long period of time to measure change. You also reviewed the different forms of data that a data warehouse encounters. There's structured data, which is data presented in a well-defined structured format that's easy to access, manage and search through. Semi-structured data is data that's only partially structured; it's more flexible but also requires more effort to analyze. And finally there's unstructured data. This can include any kind of data without any predefined model, but it's much more difficult to analyze unstructured data than structured
and semi-structured data once you review the basics of data warehousing you then moved on to explore the architecture of a data warehouse a data warehouse's architecture focuses on the design of the components that aggregate integrate and analyze data in the data warehouse it illustrates the flow of data from different sources it then processes and integrates this data so that users can perform data analysis the architecture of a data warehouse consists of the following components the first is data sources which consist of the data that the organization relies on for its insights next is the data staging area this is where data is prepared for analytics through the ETL or extract transform and load process then there's the data warehouse itself where data is stored and finally there's data Marts these are subject-oriented databases that meet the demands of specific users once the data has been collected and integrated Within These components the data warehouse users can then perform data analysis and present their findings it's also important that you follow best practice when creating and working with the architecture of a data warehouse and make sure that you document the development of the data warehouse so that you can incorporate new functions into the architecture as needed you then explored a case study of a real world data project and tested your new knowledge in a quiz item in the next lesson you learned about dimensional data modeling this lesson began with an overview of the fundamentals of dimensional data modeling you learn that a dimensional data model is a model based on dimensions and facts facts represent the measures these are quantifiable data Dimensions Define the context in which you can explore the measures there are stored measures which include aggregated measures that can be stored in the data warehouse and there's calculated measures which focus on data that's calculated using data from other measures you also review the structure of a dimensional data model a dimension table can be structured using a hierarchy of data this structure allows for different levels of data analytics you can drill down or roll up through the data elements to find the data you need dimensional data models are also designed using schemas like a star schema you can also use a snowflake schema you then explored some examples of dimensional data modeling in practice you learned that there are four key steps that must be followed when creating a model first you need to identify the business process to be addressed once you've decided on the process you must choose the grain then choose the relevant dimensions and decide the measures in the facts table you then undertook an exercise in which you created your own dimensional model using the skills and knowledge that you gain throughout this module you should now be familiar with the basics of a data warehouse and its architecture and the fundamentals of dimensional data modeling great work I look forward to guiding you through the next module in this course in which you'll learn about Advanced data analytics in the context of data modeling your databases collect and store an endless stream of data from a variety of different sources and as you should know by now the true value of this data is what you do with it data is most valuable when it generates insights that help improve Services make plans and minimize risk all these insights are generated through data analysis and advanced data analytics over the next few minutes you'll recap the 
basics of data analytics and explore different types of data measurements over at Global Superstore they've been collecting large amounts of data and storing it in their databases this data represents an important asset which the store can use to understand and improve their business activities and performance to take advantage of this data they need to perform different types of data analysis and measure their data appropriately let's find out how Global Superstore can make the most of their data and start with a recap of data analytics and the types of data analytics they can use as you should know from previous courses data analytics involves analyzing data to derive useful information and valuable insights you can make the most use of data analytics using data analytics tools and data analysis there are several key types of data analysis that you've encountered so far and made use of in other courses let's briefly recap these descriptive data analysis presents data in a descriptive format exploratory data analysis is used to establish a relationship between different variables and inferential data analysis focuses on a small sample of data to make inferences predictive data analysis identifies patterns and data to make predictions about future performance and causal data analysis explores cause and effect between variables before you can engage in these types of data analysis you first need to understand the type of data that you're dealing with and ask what kind of measurements should you apply to it another key question to ask of your data is if it's quantitative or qualitative quantitative data refers to numerical data this is data that can be counted or Quantified in the case of global Superstore this includes the average number of customers who make purchases each day or the average cost of each Purchase made qualitative data refers to non-numerical data this is textual and descriptive data like information about the quality attributes of a product for example Global Superstores qualitative data includes category names or descriptions of products like furniture or office supplies once you've determined what kind of data you're dealing with you then need to organize identify and analyze your data you can perform these actions using four different measurement scales the first of these measurement scales is the nominal scale this scale describes the identity property of non-numerical data it's purely descriptive which means it just identifies the data in the case of global Superstore they can use this scale to identify products in their stock like a chair or a desk each product is one nominal unit of data the next form of measurement is the ordinal scale this is a qualitative data type scale which places data in a specific ranked order however it doesn't include decisive criteria to determine the difference between the data elements for example Global Superstore can rank chairs using ratings values so they can use a value of 1 for top quality products two for very good products and three for good products and so on however there's no precise criteria that determines the measurement between each value there's also the interval scale this scale includes properties of the nominal and ordered data scales its key feature is that the difference between data points can be clearly identified using specific criteria the scale can also contain both positive and negative numbers and zero does not represent an absolute true value Global Superstore can use the interval scale to provide feedback 
on products from 10 to -10. Finally, there's the ratio scale. This scale is a quantitative data type that includes properties from the nominal, ordinal and interval scales of measurement. It defines the identity of the data, classifies the data in order and marks clear intervals; however, on this scale zero represents an absolute, true value. Over at Global Superstore, they can use the ratio scale to mark the weight of products. For example, a small table is 20 kilograms, a medium-sized table is 40 kilograms, while a larger table weighs a total of 60 kilograms. In this instance there's a clear order between variables and an equal distance of 20 kilograms between each measurement, so all data points can be measured accurately. You should now be familiar with the basics of data analytics, be able to identify different types of data, and explain different types of data measurements. Great work.

The more data you collect and store, the more difficult it becomes to analyze and make sense of. So database engineers who run large databases rely on advanced data analytics methods like data mining and machine learning to discover patterns, paradigms and trends in data. Over the next few minutes you'll explore these methods and learn how they help businesses and organizations to understand performance through data and make predictions and actionable plans. Global Superstore has been in operation for 15 years. During that time they've collected huge amounts of data on all aspects of the business, like sales, customers and marketing. However, the more data they collect, the more difficult it is for them to analyze and understand it. Several years ago they began using advanced data analytics methods like data mining and machine learning to help make sense of their data. The terms data mining and machine learning are often used interchangeably. While they're both useful methods for analyzing data, they operate very differently. Data mining is the process of detecting patterns in data. You can then gain insights, make judgments and deliver predictions based on these patterns. Global Superstore used data mining to identify associated patterns between the sales of certain products. For example, many customers who buy tables also buy chairs, so this data suggests that the store might benefit from advertising or selling these products together. Machine learning is the process of teaching a computer how to learn. Specifically, it involves teaching a machine to determine probabilities and make predictions. There are two main methods of machine learning: supervised and unsupervised machine learning. Supervised machine learning involves classifying data based on given labels. For example, Global Superstore can label the available images of chairs as CH and desks as DK. The computer can then learn to recognize, classify and group these product images based on their labels. Unsupervised machine learning is when data is classified based on shared characteristics but without the use of labels, so the machine learns to recognize and categorize images of objects like chairs, tables and desks based on the shapes in the images. Machine learning makes use of many kinds of data mining models when processing data. Let's take a few minutes to explore some examples of these models. The first model is classification analysis. This model assigns data items into categories or classes of data. You can then use this data to predict your target class for the items. For example, many Global Superstore customers purchase low-priced office products, so they can be classified as low budget customers and targeted
with advertisements for low budget products there is also the association rule the association rule is a model that identifies the relationship between different data elements it determines if there is a correlation between these elements based on certain criteria many Global Superstore customers who purchase cell phones also buy items like phone chargers and battery packs this data suggests that these products should be advertised and sold together the next model is outlier detection outlier detection is a model that reveals unusual data within a particular data set in other words it detects data outliers or anomalies that don't conform to the expected pattern for example a group of global Superstore customers with a shared history of purchasing low price products suddenly begin purchasing expensive products in this instance the company needs to reclassify these customers and Target them with advertisements for more expensive items another model you need to be aware of is clustering analysis the clustering analysis model searches for similarities within a data set it then separates the data into clusters of subsets based on the similarities it finds or the common characteristics within the subsets the model Works in a similar manner to the classification analysis model however that model is initially assigned to pre-defined groups not newly discovered ones Global Superstore can use the model to classify low and high budget customers based on similar types of navigation behavior in the company's online store the final model you need to know about is regression analysis the regression analysis model considers the different factors that impact data it then determines the relationship between these factors this model can help Global Superstore to develop a better understanding of their sales patterns the data shows that each time the store discounts certain products it leads to an increase in sales so the store can conclude that discounts impact sales data you should now be familiar with different methods of data mining and machine learning you should also be able to explain the different data mining models that machine learning can make use of great work database analysis isn't just about extracting and analyzing the information you need from the database how you present your data is also just as important good data visualization helps decision makers to interpret the data and make the right choices around it in this video you'll review the different factors that inform data visualization and you'll also explore the different methods that you can use to present your data over at Global Superstore the company's data analysts are performing Advanced data analytics to help improve the store's performance they now need to present their findings to the company the best way for the data analysts to make sense of the information and present their findings is by visualizing the data let's explore some different methods of data visualization that Global Superstores data analysts can use to present their findings let's start with a quick overview of what database analysts and Engineers mean by data visualization the term data visualization refers to presenting or visualizing data in a way that lets decision makers interpret the information quickly and easily Your Role is to remove the noise from the data and present the important elements like Trends paradigms and outliers in a user-friendly way in other words how can you tell an informative and engaging story through your data there are four factors 
to consider when deciding what type of visual data to represent the first of these is your target audience who are you presenting this data to what's their background what's their level of understanding of the issues or topics to be investigated Factor these questions into your presentation you also need to carefully consider what information your visualization includes what information answers your audience's questions and what information is redundant another issue to consider is time how much time do you want your audience to spend examining each chart should they be able to understand a chart after just a few minutes of observation or should it take them longer and finally think about the level of accuracy your audience is looking for does your audience just need to understand the data in a general sense or do they need to drill down into finer levels of detail once you've answered the key questions identified your audience and determined what kind of data you need to show it's time to decide on your data visualization charts each type of chart has a different purpose and each chart can be used to solve a different type of problem or communicate a different kind of message what's most important is to select a chart that tells your audience the story of your data in the most appropriate way let's explore a few examples of commonly used data visualization charts and find out what kind of message each one communicates one of the most frequently used charts is a bar chart this is a comparison chart type it helps audiences to recognize differences or similarities between data values you can present data horizontally or vertically to show numerical comparisons across categories Global Superstore uses a bar chart to depict the sale of products against profit ratio this shows how much profit is made in each product category the line graph is another common chart a line graph shows quantitative data over a continuous interval or time period line graphs work by showing connections between the data points on a Cartesian coordinate system over at Global Superstore they use line graphs to show the trend and profits over the last number of years there's also the bubble chart a bubble chart shows the relationship between numeric variables each variable is assigned its own bubble audiences can understand what a bubble chart is saying by comparing the sizes positions and colors of the bubbles for example the larger bubbles in global Superstores bubble chart indicate that the Departments each bubble depicts are more profitable than the Departments assigned to the smaller bubbles next is the map chart a map chart presents data in geographical areas each data variable can be reflected in the map using a variety of different methods like colors and labels Global Superstore use map charts to visualize sales across different Global regions and finally there's the scatter plot graph this graph plots variables as points on a Cartesian coordinate grid you can then use these data points to search for correlations between variables you can also add trending lines for each data group within category labels to show the performance of Select categories Global Superstore used this approach to depict sales and profits across different categories or departments you should now be able to explain the different factors that inform data visualization and take these factors into account next time you visualize your data and you should also know how to select the charts or graphs that best communicate the story that you want 
to tell through your data. Excellent work.

Data analytics is a complex process, and the task is beyond the capabilities of traditional database management systems. That's why data analysis is performed using data analytics tools. These tools make it possible for users to view and understand large amounts of data using artificial intelligence. In this video you'll review some examples of well-known analytics tools and learn about their key features. You'll also explore the Tableau tool, which you'll make use of later in this course for your own projects. Global Superstore's data analysts are performing advanced data analytics to generate data insights that will help inform their business decisions. This approach requires the use of powerful data analytics tools. Using these tools, Global Superstore can use their data to identify new business opportunities, grow their sales and improve their services, alongside other benefits. Let's find out more about how these tools work and discover how Global Superstore makes use of them. Data analytics tools help database users to perform data analysis. The results of the data analysis generate insights that inform the development of businesses and other organizations. These tools make use of artificial intelligence, machine learning and data mining, and they provide tools for visualizing data to help you understand and communicate your findings. There are many kinds of analytics tools that you can make use of. The most common tools that database analysts rely on include Tableau, SAS Business Intelligence and Microsoft Power BI. There are several key features these tools offer which make them useful for dealing with large databases: they can deal with massive amounts of data, they can work with data in many different formats, and they can interact with many different data sources and database systems. In addition, each tool uses advanced data analysis techniques to generate insights, and they provide advanced data visualization tools. These features make it much easier for users to view and understand their data. Tableau is a widely used data visualization tool. There's a free 14-day trial, and it offers a one-year license for tutors and students at accredited academic institutions. There are several key features that Tableau users can take advantage of when visualizing their data. It stores data in the form of different data types like string, date and time. It can connect to a wide range of data sources like MySQL, Microsoft SQL Server, MongoDB BI and Oracle DB, and it can interact with many different spreadsheets and file types like Excel, JSON and PDF. In addition to these features, Tableau can also generate interactive dashboards that present data in real time, support scripting in the Python and R programming languages, and complete tasks using interactive UI tools like drag and drop. Tableau comes in both desktop and cloud versions. In this course you'll make use of Tableau Desktop. You can download the software directly from tableau.com. Now that you're familiar with the Tableau tool, let's explore it in more detail and find out how organizations like Global Superstore make use of it. First, launch Tableau by clicking on the Tableau Desktop icon. This opens the Tableau launch page. The launch page offers several different options: open existing workbooks using Open a Workbook, work from sample workbooks in the Accelerators section, and access useful learning resources in the Discover section. You can also connect to your data sources using the Connect pane. For example, Global Superstore can choose a Microsoft Excel
document as their data source and load the file's data into their Tableau workspace select the Tableau icon to switch from the start page to the authoring workspace once you're connected to a data source the source connection and related fields appear in the data pane you can then use the authoring workspaces user interface elements to create a visualization of your data add data to your View using the marks card or the row and column shelves for example Global Superstore can drag measures and dimensions around the workspace as required this is useful for comparing data and categories access commands and navigation tools quickly and easily in the toolbar Global Superstore often makes use of the Sorting icons to arrange bars in ascending or descending order or use the dashboard to view and work with multiple sources of data at once you can even create an interactive dashboard that combines different sheets to present relative information to the audience you can also use story a tool that's similar to the dashboard Story presents sequences of worksheets and dashboards to tell your data story you should now be able to identify well-known analytics tools and describe many of the key features that they share you should also be able to access and make use of the Tableau work environment to visualize your data you're making good progress in developing your understanding of advanced data analytics at this stage of the lesson your next question might be how do I use tableau in this video you'll learn how to connect Tableau to your data sources then clean and prepare your data for analysis Global Superstore needs to use Tableau to analyze the records of a large Excel file but they first need to clean and prepare the existing data Let's Help Global Superstore to connect their Excel data source to Tableau then clean and prepare the data for analysis to perform data analysis in Tableau you first need to establish a live connection to a data source to establish a live connection first open Tableau on the connection page under the connect tab on the left hand side click Microsoft Excel this opens a dialog box that you can use to navigate to the Excel file on your machine select and open the file the Excel file name appears on the left hand side of the screen the data from the Excel file is displayed in the data pane with a live connection you can make sure that any updates to the original Source are automatically reflected in your database however it's faster to process a data extract particularly when dealing with large amounts of data on the left side of the data pane is the metadata grid this shows relevant information about the different data fields you can keep or hide the metadata grid by clicking the related button now that you've connected to the data that you need you can begin to clean it and prepare it for analysis this process involves fixing errors in the data and shaping it so that it's easier to understand and analyze you can do this by performing different types of operations like filtering sorting and renaming the data you'll cover these operations in more detail in a later video the top of each column specifies the column's data type along with a suitable symbol you can change these data types as required let's take the order date column as an example select the small Arrow next to the symbol above the column click describe on the list of options that appear this action shows key information about this field of data click the ABC data type and select the date data type the data type has 
now been changed. You can also hide irrelevant table details so that you can focus only on the necessary data, and you can change the number of rows displayed within the data pane. Let's hide the order date column. Select the small arrow again, then select hide. The column is now hidden. To show the data again, click the Tableau settings icon, then select the show hidden fields option. This action displays faded versions of the fields to indicate what has been hidden. To restore these fields, click the small arrow and select unhide. Your next task is to split the customer name column into two separate columns: one for each customer's first name and another for their last name. Click the small arrow and then select the split option. This automatically creates two new fields. You can rename them as required: click the corresponding small arrow, select rename, and call the columns first name and last name. Global Superstore also need you to create a new data field for their returns. The field must include the final date by which each product can be returned under the company's returns policy, which is 15 days from the date of purchase. Select the small arrow in the order date column and click create a calculated field. Name this new field return date. In the calculation editor, enter the following basic formula: Order Date + 15. This formula adds 15 days to each order's order date value, creating a new returns column populated with the relevant data. When finished, click OK. The new calculated field is added to the data pane. Global Superstore's data has now been cleaned and prepared for analysis, and you should now be familiar with how to connect to data sources in Tableau and clean and prepare your own data for data analysis. Great work.

Once you've imported your data into Tableau, you then need to prepare it for analysis. However, the process is more efficient if you focus only on the data you need to analyze. With Tableau, you can focus on relevant data using the software's filtering and visualization features. In this video you'll learn how to filter data and create a data analysis chart in Tableau. Over at Global Superstore, they're preparing to launch a new marketing campaign in Canada, but first they need to analyze their sales data to optimize their campaign. They can use data filtering techniques to arrange and exclude data so that their records are focused only on Canada. This provides a more relevant, reliable and accurate level of information. You can help Global Superstore to complete this task by using Tableau. In Tableau, you can filter data in either the data source page or the worksheet. However, filtering data directly in the data source page limits your data analysis and all worksheets to the filtered criteria only. For example, if Global Superstore filter their categories to include only office furniture, then they can't perform data analytics in other categories in the worksheet. Let's begin by applying data filtering in the data source page. Click the add option under filters. This opens a dialog box that lists all filtered fields in the data source. Click add again to add a new filter field. Next, select region and click OK. Using the select from list option in the general tab area, check Canada, then click OK. The data pane now shows filtered records from Canada only. You can repeat this process to add more data source filters if required. To remove a filter, click edit filter, select Canada, then remove and OK. You can also filter data in the worksheet. Open the sheet tab at the bottom of the page. In the worksheet, the columns from your
data source are displayed as fields on the left side of the data pane the data pane contains a variety of fields the fields above the Gray Line are Dimension Fields the fields below the Gray Line are measure fields Dimension Fields hold categorical data in the case of global Superstore this includes product categories types and dates measure Fields hold numeric data like sales profit and quantity Global Superstore want to compare sales of all category products sold in Canada to help them with this task drag the category field from the dimension section of the data pane to the rows in the Shelf area then drag the sales field from the measure section in the data pane into the columns in the Shelf area the horizontal bar chart that appears shows information about different product categories you need to filter this data to show only products sold within Canada drag the region Dimension from the data pane to the filter card this opens a pop-up window in the general tab select Canada then click apply and ok your data is now filtered to show sales in Canada only you can also take further steps to make your chart easier to read and understand click the swap Icon to change the horizontal bar to a vertical chart click the descending order icon to filter data from Maximum sales to minimum sales drag sales to the color part in the mark section of the screen to change the bar colors based on sales drag profits from the data pane's measure section to the label mark this shows the profit of each category in the chart you can also provide a title for the sheet in this instance you can call the sheet sales in Canada a lot of the information in the chart is generic it might be better to add the subcategory to the view to provide more detail around the sales and profits drag the subcategory field from the dimension section of the data pane to the columns in the Shelf area then click on the descending order icon to filter data from the maximum to the minimum values you can now view all categories and subcategories you can even focus on one category like furniture drag the category Dimension from the left data pane to the filter card then use the filter categorical data option to retain the furniture category while unchecking the technology and office supply categories click apply then ok you can also filter categories according to best-selling items drag the subcategory to the filter card then select the top tab tick buy field and enter a numerical value of 2 to view the top two categories then click apply the wildcard can also be used to filter data based on specific patterns for example you can type chair in the text field to include subcategories that contain this text value in this instance the data Returns the chair subcategory another filtering technique is condition filtering you can use condition filtering to select a field of data and define specific rules to be applied for example you can select the sales field then specify a sum value greater than 500 000. click apply and then OK this returns all values within your data greater than 500 000. 
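If it helps to relate these Tableau operations back to SQL, the statements below are rough equivalents of the filters applied above. They are not what Tableau generates internally; they simply assume a hypothetical orders table with region, category, subcategory and sales columns.

-- Keep only records from Canada:
SELECT * FROM orders WHERE region = 'Canada';

-- Sales by category within Canada, largest first (the bar chart's data):
SELECT category, SUM(sales) AS total_sales
FROM orders
WHERE region = 'Canada'
GROUP BY category
ORDER BY total_sales DESC;

-- Top two subcategories by sales (the "top" filter):
SELECT subcategory, SUM(sales) AS total_sales
FROM orders
WHERE region = 'Canada'
GROUP BY subcategory
ORDER BY total_sales DESC
LIMIT 2;

-- Wildcard filter, keeping subcategories that contain "chair":
SELECT DISTINCT subcategory FROM orders WHERE subcategory LIKE '%chair%';

-- Condition filter, keeping only data whose total sales exceed 500000:
SELECT subcategory, SUM(sales) AS total_sales
FROM orders
GROUP BY subcategory
HAVING SUM(sales) > 500000;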
to remove all subcategory filters just right click the subcategory and click remove you can also remove the category and keep the region to focus on Canada you can now view all sales data related to the furniture category and subcategories in Canada Global Superstore has now filtered the required data for data analysis using Tableau and you should now also be able to apply different filtering techniques to your data using Tableau you're making great progress on your data analytics Journey once you've analyzed your data in Tableau you then need to determine the most informative way to present it to your audience with Tableau you can present data visually in the form of an interactive dashboard over the next few minutes you'll learn how to create a basic dashboard with multiple views and interactivity at Global Superstore they want to create an interactive dashboard that shows profits and sales by country and indicates how profits are trending over time they can then use this dashboard to compare sales and profits in each country help Global Superstore to complete this task by creating worksheets in Tableau as follows one worksheet to show sales and profits in each country and another worksheet that shows trending profits you then need to combine these worksheets in One dashboard where they can interact with one another based on the needs of the user this tutorial assumes that you already connected to the data source and that all necessary data has been cleaned and prepared for analysis let's begin with the map chart click the sheet Tab and change the default title to profit and sales map by country double-click country in the data pane the map view is automatically created because the country field is a geographic field drag the profits field from the data pane to the color on the marks card change the color so that it's easier to identify this data next drag the sales field from the data pane to the tooltip on the marks card select Maps then background Maps in the background pane click the normal option roll over each country using your mouse to display the name profits and sales data your next task is to create another view that shows the global Superstore profit Trends over time create a new worksheet and change the title and Sheet name to profits trends drag the order date field from the data pane to the column Shelf make sure that the order date has been assigned a data type Drag The Profit fields from the data pane to the rows in the Shelf section Tableau automatically generates a trending chart for profit you can change the trend lines color set by dragging the profit to the color Mark select a new color set to differentiate it from other data then drag the profits field from the data pane to the label mark this shows the profits made each year you've created two worksheets that communicate important information you now need to combine the two sheets within one dashboard to show how profits are trending within each country the first step is to set up your dashboard click the dashboard tab then the new dashboard option you can also use the dashboard icon at the bottom of the page call your new dashboard profits dashboard to the side of the dashboard pane you can access the sheets you've already created drag the map chart to the empty view within the dashboard and drag the profits chart below the map chart the dashboard now has two related charts Global Superstore can use it to compare profit and sales by country however it would also be useful to add some interactivity to this chart 
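For reference, the two worksheets you just built are essentially visualizing aggregations like the ones sketched below, written against the same hypothetical orders table, here assumed to have country, profit, sales and order_date columns. Tableau performs these aggregations for you; the SQL only makes explicit what each view summarizes.

-- Profit and sales map by country:
SELECT country, SUM(profit) AS total_profit, SUM(sales) AS total_sales
FROM orders
GROUP BY country;

-- Profit trend over time (one point per year):
SELECT YEAR(order_date) AS order_year, SUM(profit) AS total_profit
FROM orders
GROUP BY YEAR(order_date)
ORDER BY order_year;

With the underlying data clear, let's return to adding that interactivity to the dashboard.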
For example, you can add interactivity to view profit trends by clicking on each country. Select the map from the dashboard, then click the use as filter icon. Select a country within the map, like Argentina. This shows the sales and profits in the map chart, and it also shows the trend in profits within the selected country. You can repeat these actions for each country. Thanks to your worksheets, Global Superstore can now compare their sales by country, and you should now be able to create a basic interactive dashboard with multiple views and interactivity. You're making great progress.

Congratulations on reaching the end of the third module in this Advanced Data Modeling course. In this module you explored data analytics in the context of data modeling and learned how to perform data analysis using a visual analytics tool. Let's take a few minutes to recap some of the key skills you've gained in this module's lessons. You began the first lesson in this module with an overview of data analytics. You learned that data analytics involves converting and processing aggregated data into useful and meaningful information. You also explored the topic of data analysis. There are several key types of data analysis that you've encountered throughout your database engineering journey: descriptive data analysis presents data in a descriptive format, exploratory data analysis is used to establish a relationship between different variables, and inferential data analysis focuses on a small sample of data to make inferences. Predictive data analysis identifies patterns in data to make predictions about future performance, and causal data analysis explores cause and effect between variables. You also learned that there are two types of data you'll deal with: quantitative data, which refers to numerical data, and qualitative data, which refers to non-numerical data. Once you've determined what kind of data you're dealing with, you then need to organize, identify and analyze it. You can perform these actions using four measurement scales. The nominal scale is used to label data without assigning any quantitative value or order, and the ordinal scale places data in a specific ranked order. The interval scale identifies clear differences between data points using specific criteria, and it can also represent negative values. And finally, the ratio scale defines the identity of the data, classifies the data in order and marks clear intervals; it cannot represent negative values. You then explored the topics of data mining and machine learning. You learned that data mining is the process of detecting patterns in data, while machine learning is the process of teaching a computer how to learn. This can be done through either supervised or unsupervised machine learning. Machine learning makes use of several different kinds of data mining models when processing data: classification analysis assigns data items into categories, the association rule identifies relationships or associations between different data elements, outlier detection is a model that detects data outliers or anomalies that don't conform to the expected pattern, clustering analysis searches for similarities in data sets and then separates them into clusters, and finally regression analysis considers the different factors that impact data and then determines the relationship between these factors. In the next part of this lesson you learned about the importance of data visualization. This means that you must present or visualize your data in a way that lets decision makers interpret the information quickly and
easily when visualizing your data you must consider the following questions who's your audience what information do they need to know how much time should they spend examining this information and what level of accuracy do they require once you've answered these questions you can then choose an appropriate data visualization chart there are many different types of charts to choose from including a bar chart a line graph and a bubble chart you can also use a map chart or a scatter plot graph each chart serves a different purpose what's most important is to select the chart that best informs your audience you then concluded this lesson with a discussion in which you considered what kind of data analytics reports you engage with and how they help you in your tasks in the next lesson of this module you reviewed the topic of advanced data analytics you learn that data analytics tools help database users to perform data analysis the results of this data analysis inform the development of their businesses or organization the data analysis tool you worked with in this course is Tableau its key features are as follows it stores data in the form of different data types it can connect to a wide range of data sources and it can interact with many different data sheets and file systems in addition to these features Tableau can also generate interactive dashboards support scripting in multiple languages and it also offers interactive UI tools like drag and drop you first learn how to download launch and navigate Tableau then you learned how to import and prepare data in Tableau this involves setting up a live connection to your data source or importing data into the tool and cleaning and preparing your data for analysis the Second Step often involves actions like filtering irrelevant data splitting data for greater accessibility creating new data fields as required and fixing data types once you've connected to the data source or imported your data you can then filter analyze and visualize the data in Tableau you can filter data using either the data source page or the worksheet Tableau also lets you filter data using conditions or by adding subcategories you then learned how to create an interactive dashboard using Tableau worksheets and finally you undertook an exercise in which you perform data analysis in Tableau you should now be familiar with data analytics and data analysis software that's great progress I look forward to guiding you through the next module in which you'll undertake a data modeling project in this course you explore the topic of Advanced Data modeling let's take a few moments to recap the key lessons that you encountered in this course you began the course with an introduction to the topic of advanced database modeling you learned that a data model provides a visual representation of different data elements and shows how they relate to one another you then explore database modeling in more detail by learning about different levels and types of data models you discovered that there are three levels of database models there's the conceptual data model The Logical data model and the physical data model you also reviewed different types of data models that you can use to design your database like entity relationship and object oriented next you learned how to structure your tables to deal with the data anomalies using the three main forms of database normalization these include the insertion anomaly the update anomaly and the deletion anomaly you also explored an example of a data 
model and designed a database model in an exercise in the next lesson of this module you were introduced to mySQL workbench you learn that MySQL workbench has a unified visual tool for database modeling and management it offers a range of useful features for creating editing and managing databases you then discovered how MySQL workbench is used to build a data model diagram using the software's forward engineer feature you also learned how you can use MySQL to reverse engineer a model this means you can create a data model from an existing mySQL database schema this is essentially the opposite of the forward engineer feature and you can print the model share it or apply changes and push it to the database using forward engineering you also completed this lesson with a quiz item and an exercise in which you designed your own database model in MySQL workbench in the next module you explore the topic of data warehousing in this module you learned about the architecture of a data warehouse and built a dimensional data model you began with an overview of the concept of data warehousing you learned that a data warehouse is a centralized data repository that Aggregates integrates stores and processes large amounts of data from multiple sources users can then query this data to perform data analysis you then discovered that a data warehouse is defined by four key characteristics it's subject oriented it's integrated a data warehouse is also non-volatile and finally data warehouses are time variant you also review the different forms of data that a data warehouse encounters including structured data semi-structured data and unstructured data you then explored the architecture of a data warehouse and learned that it includes the following components data sources the data staging area which includes the ETL process the data warehouse itself and data Marts once the data has been aggregated from the data sources it is then integrated and stored in the data warehouse It's then organized in data Marts where users can perform data analysis and present their findings these components control the flow of data from different sources for data analysis and Reporting they also process and integrate this data so that users can perform data analysis you also explored a case study of a real-world data project in the second lesson of this module you explore dimensional data modeling the lesson began with an overview of the fundamentals of dimensional data modeling you learned that a dimensional data model is based on dimensions and facts and it's designed using star and snowflake schemas you then explored some examples of dimensional data modeling in practice and learn that there are four key steps when creating a model choose the business process then the grain followed by the dimensions and finally choose the facts finally you undertook an exercise in which you created your own dimensional model in the third module of this course you explore data analytics in the context of dimensions and measures and you learn how to perform visualized data analysis using an advanced Analytics tool you started with an overview of data analytics you recap the basics of data analytics and the key types that you've made use of at other points in your database engineering Journey you also learned that there are two generic types of data you'll deal with quantitative data which refers to numerical data and qualitative data which refers to non-numerical data when you've determined what kind of data you need you can process and analyze 
it using four measurement scales the nominal scale the ordinal scale the interval scale and the ratio scale next you learned about the topics of data mining and machine learning you learn that data mining is the process of detecting patterns and data while machine learning is the process of teaching a computer how to learn machine learning makes use of data mining models to process data like classification analysis the associate rule clustering analysis and regression analysis you then learned about data visualization you learned that when visualizing your data you must consider your audience and the information they're looking for you then need to choose an appropriate chart that best communicates this information finally you concluded this lesson with a discussion prompt that revolved around what kind of data analytics reports you make use of in the final lesson of this module you reviewed the topic of data analytics and learned how to make use of data analytics tools like Tableau as part of your introduction to Tableau you learned what its key features are and how they help you perform data analytics you then learn how to use Tableau to analyze data this included the following steps download launch and navigate Tableau load and prepare data for analysis filter and visualize data and create an interactive dashboard finally you undertook a lab exercise in which you perform data analysis in tableau you've reached the end of this course recap it's now time to try out what you've learned in the graded assessment good luck congratulations you've reached the end of this course you've worked hard to get here and developed a lot of new skills along the way you're making great progress on your Advanced Data modeling journey and you should now possess an advanced understanding of database modeling you are able to demonstrate some of this learning in an exercise following your completion of this exercise you should now be able to design a database model in MySQL workbench understand the role of the data warehouse in the data analytics process create a dimensional data model using a data warehouse and perform data analysis using Tableau and present your results using data visualization techniques the graded assessment then further tested your knowledge of these skills however there's still more for you to learn so if you found this course helpful and want to discover more then why not register for the next one you'll continue to develop your skill set during each of the database engineer courses in the final project you'll apply everything you've learned to create your own fully functional database system whether you're just starting out as a technical professional a student or a business user the course and projects prove your knowledge of the value and capabilities of database systems the project consolidates your abilities with the practical application of your skills but the project also has another important benefit it means that you'll have a fully operational database that you can reference within your portfolio this serves to demonstrate your skills to potential employers and not only does it show employers that you are self-driven and Innovative but it also speaks volumes about you as an individual as well as your newly obtained knowledge and once you've completed all the courses in this specialization you'll receive a certificate in database engineering the certificate can also be used as a progression to other role-based certificates depending on your goals you may choose to go deep with 
Advanced role-based certificates or take other fundamental courses once you earn the certificate thank you it's been a pleasure to embark on this journey of Discovery with you best of luck in the future welcome to the Capstone course you're now within reaching distance of the end of your database engineering journey in this final course you need to prove your new skills by helping little lemon complete a series of database related tasks these tasks include setting up a database in MySQL workbench using a MySQL server instance creating an entity relationship or ER diagram and implementing it in MySQL workbench and you need to commit the project using git you also need to help them create sales reports from the data in their database build a table booking system generate Data Insights using data analytics and create a database client let's take a few minutes to review the processes and tools that you'll use to complete these exercises in the first set of tasks you'll help little lemon to build a relational database system by designing a well-structured entity relationship model or ER diagram that conforms to the three fundamental normal forms you'll design the ER diagram using MySQL workbench a unified visual tool used for database modeling and management a key feature of MySQL workbench that you'll make use of is the ability to transform your data model into a physical database schema in a MySQL server once you've created the little lemon database you'll then commit your project using git the Version Control System you'll also make use of GitHub to store your git repositories your next task is then to create sales reports from the data in the little lemon database you'll create these sales reports using database queries procedures and prepared statements let's look at these in more detail you'll use Virtual tables to make use of data that exists in other tables and simplify data access and queries you'll also make use of different kinds of join Clauses to link records of data between one or more tables based on a common column you'll help little lemon to use stored procedures to create reusable code that can be invoked and executed as required and you'll also rely on prepared statements that can be used repeatedly without the need for compiling or using valuable MySQL resources another task you'll assist little lemon with is building a table booking system in their database that they can use to keep track of guests visiting the restaurant this task mainly consists of using SQL queries and transactions let's review some examples of the SQL queries and transactions that you'll use you'll create data using standard insert into statements you'll change data in the database using update statements you'll also delete or drop data using delete statements and finally you'll read your data using read queries like select statements you'll also make use of triggers to store a set of actions in the form of a stored program that you can then invoke automatically when certain events occur once you're confident that your code is correct you'll commit your progress to git in the next task you'll help little lemon use their data to generate business insights you'll carry out this task using Tableau the data visualization tool let's review the process steps that you'll follow to complete this task you'll first connect your data sources to Tableau you'll then prepare your data for analysis and focus on the most relevant data the next step is to create a visualization of your data using its UI elements finally you'll use Tableau
to produce interactive real-time data visualizations in the form of dashboards these process steps will help to provide clear and relevant answers to little lemon's important business questions your final task is to help little lemon create a database client so that they can interact with their database using a python-based application to begin you'll first need to identify which version of python is running on your machine once you've confirmed that you're running the most recent iteration of python you'll need to install the Jupyter IDE to run your code on you can then open a new instance of the Jupyter notebook and use it to connect python to the little lemon mySQL database you can establish this connection using the python Library MySQL connector and the pip software package once you've set up your python environment you can begin working with your database client so now that you're familiar with the tasks you need to complete it's time to get started don't worry I'll be here to provide you with guidance along the way you can also refer to the relevant learning material from previous courses if you need more help best of luck in this lesson you need to help little lemon set up their database project there are three key steps required to set up the project set up the database in MySQL workbench using a MySQL server instance create an entity relationship diagram and implement it in MySQL workbench and then commit the project over the next few minutes you'll recap these topics and learn how you can make use of them in the lesson to help little lemon build their relational database system you'll first need to design a well-structured entity relationship data model or ER diagram you'll need to make sure that the diagram conforms to the three fundamental normal forms by conforming to these forms you'll ensure the Integrity of your database and avoid the insertion update and delete anomalies as you've discovered in previous courses there are many professional tools that you can use to design an ER diagram in this project you'll work with MySQL workbench you should be familiar with MySQL workbench from other courses so for now let's just quickly recap the basics MySQL workbench is a unified visual tool that's used for database modeling and data management its key advantages are that it's open source cross-platform and provides support for a visual SQL editor it also lets you transform your data model into a physical database schema in a MySQL server if you haven't already installed MySQL workbench on your operating system then you can download and install a copy from dev.mysql.com/downloads once you've downloaded a copy of MySQL workbench run the installer file and make sure to install the following MySQL server MySQL workbench and MySQL shell the installation process is relatively straightforward however if you do face any challenges you can refer to the installation material in the previous courses or visit the Oracle website for a detailed set of instructions once you've created the little lemon database you then need to commit your project you can commit a project using git git is a free open source distributed Version Control System you can use it to manage all source code history you can keep a history of your commits revert to previous versions and share code to collaborate with other developers you can download and install git from the URL git-scm.com/downloads your git repositories are typically stored on GitHub GitHub includes the source Control Management features of Git
along with other useful features these features include project management support ticket management and Bug tracking you can also use it to share access and store repositories including backups to sign up to GitHub and get started visit the official site at github.com now that you're familiar with the required technology you can begin helping little lemon to develop their database system you can set up the database in MySQL workbench create an ER diagram and commit your model if you need more information on any of these topics then you can review The Learning material from previous courses good luck in this lesson you helped little lemon set up their database project there are three key steps that you carried out to set up the project you set up the database in MySQL workbench using a MySQL server instance you created an entity relationship or ER diagram and implemented it in MySQL workbench and then you committed the project let's take a few minutes to recap how you completed these tasks in the module to help little lemon build their relational database system you designed a well-structured entity relationship data model or ER diagram that conformed to the three fundamental normal forms by conforming to these forms you ensured the Integrity of your database and avoided the insertion update and deletion anomalies there are many professional tools that can be used to design an ER diagram in this module you worked with MySQL workbench MySQL workbench is a unified visual tool that's used for database modeling and data management there are several key advantages of the tool that you made use of in your project it's open source cross-platform and provides support for a visual SQL editor it also lets you transform your data model into a physical database schema in a MySQL server you were able to download and install a copy from dev.mysql.com/downloads once you downloaded a copy of MySQL workbench you ran the installer file and made sure to install the following MySQL server MySQL workbench and MySQL shell once you created the little lemon database you then committed your project using git git is a free open source distributed Version Control System you used it to manage all your source code history keep a history of your commits revert to previous versions and share code to collaborate with other developers you downloaded and installed git from git-scm.com/downloads you were able to store your git repositories on GitHub GitHub includes the source Control Management features of git along with other useful features these features include project management support ticket management and Bug tracking you were also able to use it to share access and store repositories including backups now that you've reached the end of this module summary the next stage of this module is to complete the module quiz and then review the additional resources when you've completed these tasks you can then progress to the next module little lemon need to create sales reports from the data in their database you can help them to produce their sales report by querying their data you can query their data using virtual tables joins stored procedures and prepared statements recap the basics of these tasks and then see if you can help little lemon little lemon can query their database using a virtual table as you should know from previous courses a virtual table makes use of data that exists in other tables it doesn't physically store any data it's more like an interface that provides access to data in the database
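To make this concrete, here is a minimal sketch of what a virtual table looks like in practice, run from the kind of python client used later in this capstone. The connection details and the Bookings and Customers tables and columns are placeholder assumptions for illustration only, not the official little lemon schema.

```python
# Minimal sketch: creating a "virtual table" (a view) that joins two tables.
# Assumptions: a local MySQL server, a database named "little_lemon", and
# Bookings/Customers tables with these columns -- all placeholders for illustration.
import mysql.connector

connection = mysql.connector.connect(
    host="localhost", user="your_user", password="your_password",
    database="little_lemon",
)
cursor = connection.cursor()

# The view stores no data itself; it simply exposes a joined result set.
cursor.execute("""
    CREATE OR REPLACE VIEW BookingsOverview AS
    SELECT b.BookingID, c.FullName, b.BookingDate, b.TableNumber
    FROM Bookings AS b
    INNER JOIN Customers AS c ON b.CustomerID = c.CustomerID
""")

# Query the view exactly as you would a base table.
cursor.execute("SELECT * FROM BookingsOverview LIMIT 5")
for row in cursor.fetchall():
    print(row)

cursor.close()
connection.close()
```

Because the view stores no rows of its own, redefining or dropping it never touches the underlying base tables.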
there are several benefits to these tables they simplify data access and queries they can be used to create a join from virtual and base tables you can use them to efficiently manipulate and filter data and they support database security when creating virtual tables you'll make use of joins to build a view from multiple tables joins are used to link records of data between one or more tables based on a common column you might use a join to find information about a specific activity or object within the database or you might need to find where the relevant information exists in more than one table there are several types of joins that you've explored Within These courses these include inner join left join and right join there's also the self join and the full outer join you can make use of these joins to query little lemon's database and retrieve the information they need you'll also need to help little lemon to use stored procedures the main purpose of stored procedures is to create reusable code that can be invoked and executed efficiently this makes your code more consistent reusable and easier to use and maintain so instead of typing the same code repeatedly you can save your blocks of code as stored procedures that you can then invoke when required you can create as many procedures as you need and they can include multiple parameters your code can also include various types of SQL code just make sure each one has a unique name remember that how you create a stored procedure depends on the task that you need to achieve you'll also assist little lemon with the use of prepared statements each time you create SQL statements they need to be compiled and parsed by MySQL before they can be executed a more efficient method is to create a prepared statement that only needs to be compiled once and can then be used repeatedly in other words you can create a prepared statement that MySQL compiles and parses just once before it's executed so each time the statement is invoked MySQL knows that it's ready to use and safe to execute prepared statements are a much more efficient and optimal way of executing statements without using valuable MySQL resources you should now be familiar with the techniques and methods that you can use to create a sales report for little lemon if you need more information on these topics then remember that you can review The Learning material from previous courses well done little lemon need to build a table booking system within their database they can then use this system to keep track of guests visiting the restaurant you can use your knowledge of SQL transactions to help them create this system you can help them to create the system using SQL queries transactions and CRUD operations developing a table booking system requires the use of SQL transactions or queries as you know by now transactions are statements that are executed within a database the main types of statements that you need to use include create read update and delete queries these are also known as CRUD operations let's run through the basics of how you can use these queries to complete the tasks in this lesson you can help little lemon to develop and populate their table booking system by creating data in the form of new bookings you can create data using a standard insert into statement just be sure to identify the following within your syntax the table you want to create the data within the columns that must be populated and the values that they need to contain execute this statement to create the data within your database
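Here is a minimal sketch, under the same placeholder assumptions about the Bookings table and connection details, of how a new booking might be created with a prepared (parameterized) statement so that the INSERT is parsed once and then reused.

```python
# Minimal sketch: inserting new bookings with a prepared (parameterized) statement.
# Assumptions: the little_lemon database and a Bookings table with these columns
# are placeholders for illustration; adjust names and credentials to your schema.
import mysql.connector

connection = mysql.connector.connect(
    host="localhost", user="your_user", password="your_password",
    database="little_lemon",
)
# prepared=True asks the connector for a server-side prepared statement,
# so MySQL compiles and parses the INSERT once and reuses it each execution.
cursor = connection.cursor(prepared=True)

insert_booking = """
    INSERT INTO Bookings (BookingDate, TableNumber, CustomerID)
    VALUES (%s, %s, %s)
"""
new_bookings = [
    ("2022-10-10", 5, 1),
    ("2022-10-11", 3, 2),
]
for booking in new_bookings:
    cursor.execute(insert_booking, booking)

connection.commit()  # make the new rows permanent
cursor.close()
connection.close()
```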
there might also be instances in which the data that you originally created needs to change perhaps someone wants to update their booking or maybe they've canceled their booking so their data needs to be deleted from the table you can carry out these actions using update and delete statements use an update query to alter information within the table identify the following information within your query the name of the table to be updated the columns to be updated and the new values to be added to these columns if you're deleting or dropping information from the table then you'll need to use a delete query your delete query must contain the following information the name of the table that contains the data to be deleted and any conditions related to that data you can enact these conditions using a where Clause once you've created updated or deleted data within the booking table you'll need to run tests to make sure that your queries have been executed successfully you can carry out these tests by reading the data a read query returns all information that matches the criteria within your statement an example of a basic read query is a select statement in this case you must make sure that the select statement contains the following the name of the table and columns that hold the data the values you require and any conditions required to help you target the data you can also enhance your transactions with the use of triggers a MySQL trigger is a set of actions available in the form of a stored program the set of actions is then invoked automatically when certain events occur you can use triggers for different types of events like CRUD operations to use a trigger you first need to create it using the create trigger statement then define the trigger type is it an insert update or delete trigger and should it be executed before or after the event you also need to define the trigger's logic specify which table it is assigned to and how it should be applied to the table
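As an illustration of the kind of trigger just described, here is a minimal sketch; the Bookings table and the BookingsAudit log table are invented placeholder names rather than the official little lemon schema.

```python
# Minimal sketch: a trigger that records every cancelled (deleted) booking.
# Assumptions: the Bookings table and a BookingsAudit log table are placeholders
# invented for illustration; create BookingsAudit first in your own schema.
import mysql.connector

connection = mysql.connector.connect(
    host="localhost", user="your_user", password="your_password",
    database="little_lemon",
)
cursor = connection.cursor()

cursor.execute("DROP TRIGGER IF EXISTS LogCancelledBooking")

# AFTER DELETE trigger: each time a row is removed from Bookings,
# the old values are copied into an audit table automatically.
cursor.execute("""
    CREATE TRIGGER LogCancelledBooking
    AFTER DELETE ON Bookings
    FOR EACH ROW
    INSERT INTO BookingsAudit (BookingID, BookingDate, CancelledAt)
    VALUES (OLD.BookingID, OLD.BookingDate, NOW())
""")

cursor.close()
connection.close()
```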
you've explored many different examples of read queries within this course the important thing to remember is to make sure that you include conditions within your statement that Target the exact data you need this is good advice to follow for all types of operations once you're confident that your code is correct you can commit your progress to git it's also a good idea to enact Version Control this way you can keep track of snapshots that show the project in different stages of development you can then roll back to previous versions if required you should now be ready to make use of SQL queries and transactions to help little lemon develop a table booking system within their database if you need more information on these topics remember that you can review The Learning material from previous courses good luck in this module you helped little lemon to create sales reports from the data in their database you also helped them to build a table booking system let's take a few minutes to recap the tasks processes and tools you completed or made use of in this module in the first task you created a sales report for little lemon by querying the data in their database using virtual tables joins stored procedures and prepared statements you helped little lemon to query their database using a virtual table to make use of data that exists in other tables the benefits of the virtual table are that you were able to use it to simplify data access and queries create a join from virtual and base tables efficiently manipulate and filter data and support database security you also made use of the join Clause to link records of data between one or more tables based on a common column there are several types of joins that you were able to make use of in this module including inner join left join and right join you also had the opportunity to use the self join and the full outer join you also helped little lemon to use stored procedures stored procedures are used to create reusable code that can be invoked and executed efficiently by using these procedures you made your code more consistent reusable and easier to use and maintain so instead of typing the same code repeatedly you were able to save your blocks of code as stored procedures that you could then invoke when required you were able to create many stored procedures and you could include multiple parameters in each one you were also able to include a range of syntax like SQL statements variables and control structures while making sure that each one had a unique name and how you created your stored procedures depended on the tasks that you needed to achieve you also assisted little lemon with the use of prepared statements each time little lemon creates SQL statements they need to be compiled and parsed by MySQL before they can be executed you showed them that a more efficient method is to create a prepared statement that can be used repeatedly without the need for compiling prepared statements are a much more efficient and optimal way of executing statements without using valuable MySQL resources MySQL compiles and parses a prepared statement just once before it's executed in the second lesson of this module you helped little lemon to build a table booking system in their database that they could use to keep track of guests visiting the restaurant you were able to help little lemon develop and populate their table booking system by creating data in the form of new bookings you created data using standard insert into statements while identifying the following within your syntax the tables you wanted to create the data within the columns to be populated and the values that they needed to contain you then executed your insert into statements to create the data within your database there are also instances in which the data that you originally created needed to change you were able to carry out these actions using update and delete statements you used an update query to alter information within the table while identifying key information in each of your queries when you were deleting or dropping information from the table you used a delete query again making sure to identify key information in your query once you created updated or deleted data within the booking table you ran tests to make sure that your queries were executed successfully you carried out these tests using read queries like select statements in this case you made sure that your select statements contained the following the name of the table and columns that held the data the values you required and any conditions required to help you target the data you also made use of triggers to store a set of actions in the form of a stored program that you could then invoke automatically when certain events occur to use triggers you first created them using the create trigger statement then you defined the trigger type for example you specified if they were insert update or delete triggers and if they should be executed before or after the event you also defined the logic of your
triggers specified which tables they were assigned to and how they should be applied to the table once you were confident that your code was correct you then committed your progress to git and enacted Version Control you have helped little lemon to create sales reports and a table booking system using database queries procedures and prepared statements well done I look forward to providing you with more guidance in the next module little lemon need to perform Advanced data analytics to generate Data Insights that can help to inform their business decisions they can then use these Data Insights to identify new opportunities for growth or improve their services this task requires the use of powerful data analytics tools like Tableau with Tableau little lemon can generate insights using its data analysis features they also need to connect their data source to the software prepare the data for analysis and present their insights using worksheets and interactive dashboards in this video you'll recap the key process steps and features of the tool and find out how you can make use of Tableau to help little lemon generate their business insights Tableau is a widely used data visualization tool there are several key features it offers that users can take advantage of when analyzing data for example with Tableau you can connect to a wide range of data sources process large amounts of different data types and create visualized data charts you can also generate interactive real-time dashboards script in Python and R and complete tasks using interactive UI tools the dashboard interface offers many useful features for analyzing data you can connect your data sources to Tableau using the connect pane in the launch page once you've connected to a data source the related fields appear in the data pane you can then use the authoring workspace's UI elements to create a visualization of your data using worksheets dashboards and story you can also use the worksheets to add data to your View using the marks card or you can analyze and visualize data using the row and column shelves you can make use of tableau's other useful features to access commands and tools in the toolbar menu work with multiple sources of data in the dashboard view arrange data in ascending or descending order using sorting icons and you can use story to present worksheets and dashboards once you've loaded data into Tableau you then need to prepare it for analysis there are several steps involved in this process including splitting data for greater accessibility creating calculated data fields fixing data types and filtering data with Tableau you can focus on relevant data using the software's filtering and visualization features by filtering data you can focus only on the data you need you can also drill down roll up and filter data to show it from different perspectives or in different levels of detail Tableau can also be used to filter data using either the worksheet or the data source page however filtering data directly in the data source page limits your data analysis in all worksheets to the filtered criteria only one of tableau's key features is its ability to produce interactive real-time data visualizations in the form of dashboards a well-organized dashboard can help to provide clear views and relevant answers to little lemon's important business questions once you've analyzed data in tableau's worksheets you can then combine data from multiple sources you can add filters or drill down or roll up into
specific information you've now recapped the key features of the Tableau tool you should now know how to make use of this tool to create worksheets and interactive dashboards that can help little lemon to generate business insights from their data if you need more information on any of these topics then remember that you can review The Learning material from previous courses good luck little lemon need to create a database client so that they can interact with their database using a python-based application you can help them by completing the following tasks review the version of python installed on your machine install a suitable integrated development environment or IDE for Python and connect python to the little lemon mySQL database let's take a few minutes to review these tasks as you just saw the first task is to identify which version of python is running on your machine open the command prompt and type python --version to check which version of python is running on your machine if python is correctly installed then Python 3 should appear on your console screen this means that you're running python version 3 there should also be several numbers after the three to indicate which iteration of Python 3 you're running make sure that these numbers match the most recent version on the python.org website if you search for Python and see a message that says python is not recognized as an internal or external command then review your python installation or the relevant documentation on the python website once you've installed python or confirmed that you're running the correct version you then need to choose an IDE to run your code on an IDE is software that you can use to display your code in this course you'll use the Jupyter IDE to demonstrate python to install Jupyter type python -m pip install jupyter within your python environment then follow the Jupyter installation process once you've installed Jupyter type jupyter notebook to open a new instance of the Jupyter notebook to use within your default browser the next task is to connect python to your mySQL database you can create the connection using a purpose-built python Library called MySQL connector this library is an API that provides useful features for working with mySQL the MySQL connector must be installed separately using a package installer called pip the pip package installer is included with the python software that you installed create a new notebook instance and name it configuring MySQL connector then install the connector using pip to install the connector type an exclamation mark and pip to call the package then type the install command next type the name of the library which is mysql-connector-python make sure you type python with a lowercase p then press shift and enter or select run to execute the code the final step is to check that your environment has been correctly configured type import mysql.connector as connector and click run if there's no output in the cell then the library has been imported successfully you should now know how to install and configure your environment to help create and connect a database client to little lemon's database if there are any parts of this lesson that you need more guidance on then you can review the specific learning material in previous courses great work
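The steps above can be summarized in a short sketch; the connection credentials below are placeholders, and the shell and Jupyter commands are shown as comments for context.

```python
# Minimal sketch of the environment check and connection described above.
# In a terminal:        python --version
# In a Jupyter cell:    !pip install mysql-connector-python
# The credentials below are placeholders; substitute your own MySQL user details.
import mysql.connector as connector

connection = connector.connect(
    host="localhost",
    user="your_user",
    password="your_password",
    database="little_lemon",
)

cursor = connection.cursor()
cursor.execute("SELECT DATABASE()")   # confirm which schema the client is using
print("Connected to:", cursor.fetchone()[0])

cursor.close()
connection.close()
```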
in this module you helped little lemon to perform Advanced data analytics to generate Data Insights to inform their business decisions you also helped them to create a database client that they could use to interact with their database using a python-based application let's take a few minutes to recap the tasks processes and tools that you used in this module in the first lesson you used Tableau to help little lemon generate insights using the tool's data analysis features Tableau is a widely used data visualization tool there are several key features it offers that you were able to take advantage of when analyzing data for example with Tableau you were able to connect to a wide range of data sources process large amounts of different data types and create visualized data charts you were also able to generate real-time interactive dashboards script in Python and R and complete tasks using interactive UI tools you were also able to make use of the dashboard interface features to analyze data you connected your data sources to Tableau using the connect pane in the launch page once you connected to your data source you then used the authoring workspace's UI elements to create a visualization of your data using worksheets dashboards and story you also used the worksheets to add data to your View using the marks card and analyzed and visualized data using the row and column shelves you then prepared your data for analysis using process steps like splitting data for greater accessibility creating calculated data fields fixing data types and filtering data with Tableau you also focused on relevant data using the software's filtering and visualization features so you could focus only on the data you need you also used the drill down roll up and other filtering features to show the data from different perspectives or in different levels of detail you also used Tableau to produce interactive real-time data visualizations in the form of dashboards your well-organized dashboards helped to provide clear views and relevant answers to little lemon's important business questions in the next lesson you helped little lemon to create a database client so that they could interact with their database using a python-based application you first identified which version of python was running on your machine using the command prompt once you confirmed that the correct version of python was installed you checked which iteration of Python 3 you were running and you checked that it matched the most recent iteration on the official python website once you confirmed that you were running python you then chose an IDE or integrated development environment to run your code on in this course you used the Jupyter IDE to demonstrate python you followed the Jupyter installation process and then typed jupyter notebook to open a new instance of the Jupyter notebook to use within your default browser you then connected python to the little lemon mySQL database you created the connection using a purpose-built python Library called MySQL connector you were able to do this using the pip software package then you checked that your environment was correctly configured to ensure that the library was imported successfully once you set up your python environment you were able to begin working with your database client you completed an exercise in which you added or implemented query functions using python to query the little lemon database finally you committed your progress to git you have helped little lemon to perform Advanced data analytics to generate Data Insights to inform their business decisions and you have also helped them to create a database client that they can use to interact with their database using a python-based application well done
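As an example of what such a query function might look like, here is a minimal sketch that wraps a stored procedure; the procedure name, table and columns are invented for illustration and are not the official little lemon schema.

```python
# Minimal sketch of a reusable query function like the ones mentioned above,
# built around a stored procedure. Names, columns and credentials are placeholders.
import mysql.connector

connection = mysql.connector.connect(
    host="localhost", user="your_user", password="your_password",
    database="little_lemon",
)
cursor = connection.cursor()

# Define a small stored procedure once; it can then be invoked whenever needed.
cursor.execute("DROP PROCEDURE IF EXISTS GetBookingsForDate")
cursor.execute("""
    CREATE PROCEDURE GetBookingsForDate(IN booking_date DATE)
    BEGIN
        SELECT BookingID, TableNumber, CustomerID
        FROM Bookings
        WHERE BookingDate = booking_date;
    END
""")

def bookings_for_date(cursor, booking_date):
    """Call the stored procedure and return its rows as a list of tuples."""
    cursor.callproc("GetBookingsForDate", (booking_date,))
    rows = []
    for result in cursor.stored_results():
        rows.extend(result.fetchall())
    return rows

print(bookings_for_date(cursor, "2022-10-10"))

cursor.close()
connection.close()
```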
I look forward to providing you with more guidance in the next module congratulations you've almost reached the end of this Capstone course and your database engineering journey in this final module you'll need to demonstrate your knowledge in a peer review exercise and graded assessment but before you begin let's recap the tasks that you helped little lemon to complete in this course these tasks include setting up their database in MySQL workbench using a MySQL server instance creating an entity relationship or ER diagram and implementing it in MySQL workbench and committing the project using git you also helped little lemon to create sales reports from the data in their database build a table booking system generate Data Insights using data analytics and create a database client let's take a few minutes to review the processes and tools that you used to complete these tasks in the first set of tasks you helped little lemon to build a relational database system by designing a well-structured entity relationship model or ER diagram that conformed to the three fundamental normal forms you designed the ER diagram using MySQL workbench a unified visual tool used for database modeling and management the key feature of MySQL workbench that you made use of is its ability to transform your data model into a physical database schema in a MySQL server once you created the little lemon database you then committed your project using git the Version Control System you also made use of GitHub to store your git repositories your next task involved creating sales reports from the data in the little lemon database you created these sales reports using database queries procedures and prepared statements let's take a quick look at some of the different types of queries that you used you used Virtual tables to make use of data that exists in other tables and to simplify data access and queries you also made use of different kinds of join Clauses to link records of data between one or more tables based on a common column you helped little lemon to use stored procedures to create reusable code that they could invoke and execute as required and you also relied on prepared statements that could be compiled just once and then used repeatedly another task you assisted little lemon with involved building a table booking system in their database that they could use to keep track of guests visiting the restaurant this task mainly consisted of using SQL queries and transactions let's review some examples of the SQL queries and transactions that you used you created data using insert into statements you changed data in the database using update statements you also deleted or dropped data using delete statements and finally you read your data using read queries like select statements you also made use of triggers to store a set of actions in the form of a stored program that you could then invoke automatically when certain events occurred once you were confident that your code was correct you committed your progress to git and enacted Version Control in the next task you helped little lemon use their data to generate business insights you carried out this task using Tableau the data visualization tool let's review the process steps that you followed to complete this task you first connected your data sources to Tableau you then prepared your data for analysis and focused on the most relevant data the next step was to create a visualization of your data using its UI elements finally you used Tableau to produce interactive real-time data
visualizations in the form of dashboards these process steps help to provide clear and relevant answers to little lemon's important business questions your final task was to help little lemon create a database client so that they could interact with their database using a python-based application to begin you first identified which version of python was running on your machine once you confirmed that you are running the most recent iteration of python you installed the Jupiter IDE to run your code on you then opened a new instance of the Jupiter notebook and used it to connect python to the little lemon mySQL database you establish this connection using the python Library MySQL connector and the PIP software package once you set up your python environment you then began working with your database client so now that you've recapped the tasks you completed it's time to begin the peer review project don't worry you've worked hard to make it this far so I'm sure you'll do your very best in the project best of luck congratulations you've reached the end of this Capstone project course you've worked hard to get here and developed a lot of new skills along the way you made great progress on your MySQL Journey this course and all you have achieved is really a culmination of all the previous courses you've completed in this database engineering program you understand the basics of database engineering and MySQL syntax you have a solid foundation in database structures and management and you're familiar with Advanced MySQL you're also familiar with the basics of python you understand and can Implement Advanced Data modeling techniques and you demonstrated your skill set in this final course by designing a database project with this course you are able to reinforce and demonstrate the learning and practical development skill set you've gained throughout this program this was achieved through Hands-On guided practice around the creation of a fully functioning database system for a little lemon the graded assessment further tested your knowledge of database engineering now that you've completed the final project it's a great time to pause and reflect on your journey you can reflect on the completed course from several Vantage points you could consider the links between this course and the previous ones you've completed or you could reflect on the process of completing the project for example what were the hardest parts of the project what were the easiest what experience did you gain from working on the project and would you benefit from revisiting previous courses whether you're just starting out as a technical professional a student or a business user this course and project proves your knowledge of the value and capabilities of database systems the project consolidates your abilities with the practical application of your skills but the project also has another important benefit it means that you have a fully operational database that you can reference within your portfolio this serves to demonstrate your skills to potential employers and not only does it show employers that you are self-driven and Innovative but it also speaks volumes about you as an individual as well as your newly obtained knowledge you've completed all the courses in this specialization and earned your certificate in database engineering the certificate can also be used as a progression to other role-based certificates depending on your goals you may choose to go deep with Advanced role-based certificates or take other 
fundamental courses certifications provide globally recognized and Industry endorsed evidence of mastering technical skills you've done a great job and you should be proud of your progress The Experience you've gained shows potential employers that you are motivated capable and not afraid to learn new things thank you it's been a pleasure to embark on this journey of Discovery with you best of luck in the future hello and welcome to this coding interview preparation course this course will help prepare you for the unique and challenging aspects of a potential coding interview including some of the approaches to problem solving and computer science foundations that you may need to be aware of or apply let's take a moment to preview some of the key Concepts and skills that you'll learn in the first module you'll start by discovering what a coding interview is what it can consist of and the types of coding interviews that you might encounter you'll also explore how you can prepare yourself for a coding interview including a focus on communication such as explaining your thought process handling mistakes and the STAR method you'll also learn about how to work with pseudocode to demonstrate how you might reach a solution some important tips that might help with any practical solution design and how to test your Solutions next you'll get an introduction to computer science starting with the fundamental concepts of binary and how binary relates to real-life hardware and computing you will explore memory and the key components of computer memory random access memory RAM and read-only memory ROM and how your computer uses memory to perform its tasks process information and store data next you will take a dive into time complexity and the key concept that underpins this Big O notation and you'll discover some of the types of Big O notation and how this applies to algorithmic processing you will explore space complexity which is essentially the space required to compute a result in the second module you will learn about data structures and how each one comes with certain benefits and limitations so understanding each of these can be really important when designing a solution you will start with basic data structures by addressing the implementation and capabilities of data structures between various programming languages and the similar patterns of the overarching architecture you will explore the main basic data structures strings integers booleans arrays and objects you will go on to examine some collection data structures starting with lists and sets then you will learn about stacks queues and trees before moving on to some Advanced data structures namely hash tables heaps and graphs in the third module you will get an introduction to algorithms including the types of algorithms available to you and how best to work with them to sort and search your data you will start by exploring sorting algorithms and how working with sorted data or having the ability to sort your own data can result in significant Time Savings and you will explore the three main types of sorting selection sort insertion sort and quick sort and you will learn that each approach has its trade-offs and is more effective in some environments than others next you will discover searching algorithms and how each type provides its own framework for problem solving you will also gain insight into time and space complexity in both searching and sorting algorithms you will take a deep dive into the processes and underlying mechanisms
involved with divide and conquer recursion dynamic programming and greedy algorithms finally in the last module you will get the chance to recap on everything you've learned throughout the course before taking the graded course quiz which will test you on all of the key Concepts and skills you have learned throughout the course in this video you have had a broad overview of the course specifically you have discovered how this course will help you prepare for the unique and challenging aspects of a potential coding interview including some of the approaches to problem solving and computer science foundations that you may need to be aware of or apply now let's get started it takes approximately 39 months to find software engineers and developers in Tech Hub cities in the US the interviews that you will go through sometimes call for skills that you don't normally use in your day-to-day job it's not just about how well you can program it's also about how well you're displaying a lot of interpersonal skills how well you're able to drive projects how well you're able to collaborate with others you don't need to worry about showing up in a suit you really can just kind of wear whatever be yourself and focus on the technical aspects of the interview [Music] hi my name is Julie and I am a software engineer on the IG shopping team at meta New York hi my name is Moxie Herrera I use they/them pronouns I'm a software engineer in the social impact work at meta and I work at the Menlo Park office my name is Chanel Johnson I work remotely in Maryland for meta and I'm a software engineer for the Facebook app core architecture team where we work on infrastructure for the Facebook mobile app my name is Mari batalando I am a software engineer for the web 3 monetization team within meta and I work on different ways creators and influencers can make a living off of the Facebook platform using web3 Technologies like NFTs and cryptocurrency I think it can be broken up into three general areas one is the application process one is actual interviewing and one is the calibration process when they discuss your packet and give you an offer in terms of the actual interview process it's broken up into technical architecture and behavioral for the application Phase a lot of it's recruiters kind of screening your resume and your work experience the recruiter then will meet with you and talk with you to once again get a better idea about your skills your experiences what you're looking for to make sure that you're a good fit for the role phase two is the technical aspect of it there can be a range of one to four or five or even more interviews there so this could be things like a CoderPad interview where you're on the phone or over a voice chat with a recruiter or an engineer to kind of go through some technical challenges you'll have some behavioral interviews where people are just kind of gauging what it's like to work with you how do you solve problems and then oftentimes there are architecture interviews as well where you build kind of an end-to-end product to discuss the full architecture for that phase three the Final Phase is when everyone involved in the process kind of gets together and discusses how you did throughout the phases and if they should extend an offer to you if you decide to take on the offer you go through what is called the boot camp process where you get to learn the ropes of how it is working at meta and also sit with the teams that you're interested in so that you know what choice you end up
making and you get to choose ultimately what team you go in if you are interviewing for a specific pipeline like iOS Android or ml or AI you should expect some questions that deal with that specific domain so for example in my case I was interviewing for the iOS pipeline so in addition to those algorithms and data structure questions that I got asked I also got asked some iOS questions or some questions that I deal with in my day-to-day work as an iOS engineer so you can imagine similar things as if you were an Android engineer or if you're an AI or ml engineer I think in the application Phase is when a lot of candidates get screened out what can really help there is crafting your resume and focusing on real experience that will really help when it comes to software engineering roles so if you don't have job experience totally fine work on side projects and that will show both you know concrete experience and a drive to that you're actually interested in working on the role that you're applying for I will say the coding interview portion is where a lot of people get disqualified there can be many reasons for it but the biggest reason I see is that sometimes uh when you're writing out your code the person being interviewed is not explaining their thought process what's going on they're not asking clarifying questions um sometimes the interviewer gives them a hint they're not listening so communication skills is where I also see a lot of people getting um disqualified in the coding interview portion some candidates I see do not have a structured process for answering problems they may not have heard or seen before and they kind of just uh go Cowboy coding and just start coding without even knowing what they're supposed to solve so I think having a robust problem-solving process for questions you may or may not have seen before is very important in doing well in in these coding interviews one common problem is people get stuck and they get so wrapped up in it that they're unable to kind of take feedback what interviewers are looking for are a holistic approach to the problem so it's not just about solving the problem it's about how you go about it it's about collaboration with the interviewer so that's a really important piece to remember the biggest thing that I've learned is to really be myself I found it much more successful and way less stressful to just be who I am unapologetically but it also means whenever I come come forward as like a candidate like I am presenting myself as I am and not trying to hide anything given that this is who I want to be at work this is who I want to be in my life and I think having that mentality has made it so much easier regardless of the outcome of the process think really hard about what you want and choose companies that resonate with what you value there are other things other than compensation and just The Prestige of a company that will uh that matters in how happy you are with the company so make sure you make a list of those and consider that whenever you're choosing companies my biggest piece of advice is just going through as much practice as possible so two main ways to do that I think is one while you go through lead code problems or mock interviewing doing the whole thing end to end even if you're unable to solve the problem you get stuck you don't know what to do finishing it out as if it's a real interview it'll give you really good practice for when you actually get to the interview and then the second thing is apply to as many places as 
possible it gives you the best chance of getting interviews and then once you're in a real life interview situation it's a lot easier if you've already had experience doing that before it really helps calm the nerves and it helps you get more practice in there's not all just only like one particular person we're looking for when it comes to Tech we want different backgrounds and different perspectives and experiences because that's the only way we're going to make our products better we need that it's such a rewarding process to uh make an impact and actually for the better um and just meet with people and keep learning you will never get bored at this job never as you start to interview you will face many ups and downs and these are all experiences you can learn from this if you keep pushing keep learning you will eventually get to a role where you can start an amazing career in technology the amount of preparation you're going to do is going to lead you to having a really big impact in the world because software is just everywhere and it's used by millions and billions of people so I think it's well worth the preparation the amount of effort it takes to land a job at a very influential and impactful uh tech company a technical interview is where you demonstrate your competency to code normally you would have completed a screening call and demonstrated that your soft skills are suitable for the company soft skills relate to your ability to conduct yourself socially this includes communicating clearly having a good work ethic and that your presentation aligns with the company values the technical interview is to determine that you are technically capable of the responsibilities of the role in this video you will learn how to approach the technical interview when going for a coding interview it will help you to keep the following steps in mind prepare to succeed solve the problem conceptually first employ appropriate tools and lastly optimize the solution doing a deep dive into these Concepts will help you understand how to apply the method firstly prepare to succeed many candidates might feel some trepidation at doing a technical interview what happens if a question is asked and my mind blanks fortunately there are steps that you can take to prepare for Success failing to prepare is preparing to fail solve the problem conceptually first before employing a solution it is a good idea to First have a clear picture of the question and what the answer will look like take some time to ensure that you are clear on what is being asked an interviewer will have no issue with you seeking clarification from the onset if there is a whiteboard then use it jot down the major points of the problem before outlining a potential solution here is an excellent opportunity to show your ability to reason a problem using pseudocode before writing a single line of code demonstrating your ability to reason out a problem is halfway to it remember one can always be taught to code an ability for problem solving is a much sought after ability be vocal as you assess the problem and show the interviewers how you engage with problem solving and why you elected one approach over another it can be a big help if you can equate the problem with one you already know later in the course there is a video on the practice of divide and conquer this is a good opportunity to employ that breaking the problem into smaller ones can help to solve a seemingly complex problem if there is an additional time constraint and you exceed the 
allowed time you will still be able to show functional chunks of code employ appropriate tools the types of problems presented in a coding interview will have to be completed during the interview time thus the solutions will not be excessively complicated in nature they are designed to test your problem-solving ability on a microcosmic level and your awareness of the available tools consider the classical count the socks problem you are given an array that represents sock colors yellow socks are represented by one blue socks by two red socks by three green socks by four and lastly orange socks by five sock colors equal the numbers as explained namely one two two one one three five one four and four determine how many pairs of the same color socks exist so there are four ones which equates to two pairs of yellow socks three and five represent odd socks although it is a red and an orange sock they don't have matching socks to form pairs there are two twos and two fours which each represent a pair of blue and green socks to solve this problem succinctly you can utilize an appropriate data structure later in this course you will review data structures one video outlines how a dictionary stores key value pairs a solution would be to use the sock colors as keys and the count as values then iterate over the dictionary and retrieve all odd numbers which indicates the presence of any odd socks while there are many programmatic ways of solving this the use of existing structures minimizes the code required and demonstrates familiarity with fundamental building blocks when possible always utilize existing approaches rather than attempting to implement manual solutions in addition to familiarizing yourself with staple structures review common sorting and searching algorithms before engaging in any technical interviews it is good practice to optimize your code that means writing or rewriting code so a program uses the least possible memory or disk space and minimizes CPU time or network bandwidth coding the solution is a good step towards a respectable answer ensure you make time to optimize your code another concept you'll meet in this course is time and space complexity can you demonstrate to the interviewer that you understand these crucial Concepts put simply it is a way of measuring how fast and how much space your solution will take when presenting your answer outline your solution's time and space complexity and then see if you can improve identify any repeated or overlapping code demonstrate that you can modularize this code into a function that is callable repeatedly and reuse code when possible an often repeated principle for good programming is dry don't repeat yourself the idea is to only say a thing once in code and reuse it as often as needed additionally if there are portions of your code that are no longer required as a result of modularizing or as a result of an avenue of thought that was not completed remove them avoid excessive compiler calls if you are searching for a value in an array terminate the loop when the item is found a very achievable optimization on your code is to include a return statement when a value is found or to use a loop that is dependent on a Boolean as soon as a result is found the loop can be terminated this increases overall efficiency and reduces time complexity space complexity is all about being clever with memory usage whenever you can avoid creating more variables than needed in this video you learned about some approaches that can be used regardless of the challenge presented.
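to make the dictionary approach to the count the socks problem concrete, here is a minimal Python sketch; the function name count_sock_pairs and the variable names are illustrative only and do not come from the course material

# a sketch of the dictionary-based sock counting described above
def count_sock_pairs(sock_colors):
    counts = {}                       # sock color -> number of socks seen
    for color in sock_colors:
        counts[color] = counts.get(color, 0) + 1
    pairs = 0
    odd_socks = []
    for color, count in counts.items():
        pairs += count // 2           # every two socks of a color form a pair
        if count % 2 == 1:            # an odd count means one unmatched sock
            odd_socks.append(color)
    return pairs, odd_socks

# the example from the video: 1 2 2 1 1 3 5 1 4 4
print(count_sock_pairs([1, 2, 2, 1, 1, 3, 5, 1, 4, 4]))   # (4, [3, 5])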
even if you're not familiar with the problem or don't achieve a result in the time allotted always strive to demonstrate your reasoning and best practice approaches prepare for technical interviews by doing practice solutions to online problems and when possible employ a similar methodology to each challenge so that regardless of the challenge faced you are working from a comfortable framework the coding interview can appear like a daunting task there will always be elements of the unknown involved and your desire to succeed may add some pre-interview nerves just stay calm and think logically good luck the success of an interview is almost fully dependent on how you communicate with the interviewers you wish to convey your suitability and the company would like to find a candidate that is appropriate for the role in this video you will learn about verbal and physical or non-verbal communication never underestimate the power of first impressions it is important that every interaction with you as a potential employee reflects the capabilities that you will bring to the organization the first non-verbal sign that you can show is punctuality it is good practice to arrive at least 10 minutes before the scheduled meeting is supposed to start particularly if you are unsure of the specific venue where the interview will be conducted it takes time to navigate a building and you wish to appear composed and ready for the interview not out of breath and flustered over the course of the interview ensure that you maintain eye contact and actively listen to the questions that are being asked dress appropriately for the meeting generally a job interview calls for you to wear professional or business attire make sure your clothes are clean and neat that shows respect and reflects positively on yourself finally maintain a good posture and refrain from squirming and needlessly touching the face or wringing the hands while being nervous about attending a meeting is understandable these gestures may unintentionally convey a sense that you don't feel up for the task a great way to settle the nerves is to have done your due diligence prior to the meeting make sure you understand what the job entails and what the company does and stands for although it is important to understand the importance of non-verbal communication verbal communication is equally important you need to be able to speak to your interviewers a good indicator of how to conduct yourself in an interview is to observe the interviewers listen carefully they will lead with questions to see if you fit the required skills and personality profile typically an interviewer will aim for the 80 20 rule speaking for 20 percent of the time and allowing you to present yourself for the other eighty percent so allow the interviewer to direct the question to you completely before answering use clear and concise language in your answers the temptation particularly if you have done diligent preparation is to try and respond to a question with everything you know on the topic this may lead to some rambling a better answer is one that stays on topic and allows for the opportunity for further questions a good interviewer will follow up with related questions so this allows for the conversation to flow refrain from exaggerating your abilities or being negative towards yourself be careful not to use emotive terms that can convey negative attitudes about yourself for example rather than saying I failed at that task you could say that task was challenging but
provided me with some ideas for future areas of research to explore additionally avoid excessive slang cursing or inappropriate humor a good methodology to follow when conducting an interview is the star method initially the interviewer will attempt to make you feel welcome by giving you an opportunity to talk about yourself what is on your CV what do you know about the company or the role as the interview progresses the discussion will focus more on your abilities and suitability for the role it is important that you can convey why you are a good fit typically questions will focus on the business needs either technologies that are being used or problems that have had to be overcome the interviewer wants to know how you would respond to issues that arise when engaged with the job therefore try to answer questions using the star method include the following four points when answering a question the situation the task the action and lastly the result here are some examples to demonstrate the method more clearly what is the context of the situation what is the project and what are the challenges faced looking at the task what would your responsibilities and assignments be what actions will you take to rectify or address the challenges what are the results or outcomes of your actions how did taking this approach impact the result using this approach as a template for an answer will give depth to your responses it provides a workable framework for an answer it also gives the interviewer a chance to respond with more related questions on areas you feel comfortable discussing let's recap what you have learned in this video an interviewer will be on the lookout for candidates that can clearly convey a concept your first task is to communicate why you are suitable for the role this is done verbally and non-verbally finally the star method is a very efficient framework for engaging with technical questions that will arise over the course of an interview in this video you have learned that verbal and non-verbal communication is key during an interview in any given role in a company you may have to deal with the stakeholders whether it be complications with the role or why a given solution is the optimal path to take so take what you have learned about communication and apply the principles with confidence almost half of software developers according to data from a popular employment website find the coding interview portion the most stressful portion out of all the um technical interview you are going to be asked a lot of technical problems a lot of architectural problems organizational problems they really want to know who you are as a person and what you value when I was interviewing one way that I practiced was I had an actual stuffed animal that I would talk to and this just really forced me to practice the interview end to end as if I was actually talking out loud I think the interviewers at meta have been trained very well to make the interviewee feel um welcome and comfortable in solving problems together [Music] thank you [Music] my name is Mari batalando I am a software engineer at meta and I work in the FB web 3 monetization team where I help creators and influencers make a living by using the Facebook product my name is Moxie Herrera I use datam pronouns I'm a software engineer in the social impact work and meta and I work at the Menlo Park office hi I'm Julie I'm a software engineer on the IG shopping team at meta New York my name is Chanel Johnson I work remotely in Maryland for meta and 
I'm a software engineer for the Facebook app core architecture team where we work on infrastructure for the Facebook mobile app there are generally three different types of interviews technical which is your regular LeetCode interviews architecture interviews and then behavioral the technical interviews the LeetCode-style questions you'll typically have two to four of these in the interview process one or two will be as a screen and then there'll be a couple more after you pass the initial screen and these are just going to be 20 or 30 minutes per question just your classic LeetCode questions the architecture interview is going to be about a 45 minute to hour long interview where you'll get a question of how to build kind of an end-to-end feature so this can either be more product oriented like build Tetris or it can be more back-end oriented kind of focused on how data flows and how to scale this to many users and then finally for the behavioral interviews you can expect questions like about your experience working with other people collaborating challenges you've faced exciting projects that you've worked on questions like what is an experience you have had when working on a team where things did not go well and the main reason I ask this is because at Meta you're gonna have to be working with a lot of people it's not just a solitary um job so you need to learn how to communicate and how um to really learn and grow with people when I was interviewing I was asked a lot of general questions that I would expect as an iOS engineer so for example how would I build the news feed surface in the Facebook app you know what objects will I make how will those objects talk to each other what networking apis do I expect and being able to at a high level describe how I would tackle these challenges and how I would structure the app as an iOS engineer so that's a very classic question one tip I can give in order to um practice for interview questions is to talk throughout your solution when you're whiteboarding it so one thing I would do is like with friends or colleagues when you write out your solution explain your thought process on what's going on are you considering time complexity are you considering um how to make this faster how to reduce space these are things that we also want to see as interviewers because when you're coding on the job these are things you need to think about so this is a good practice for you to explain when you go through your code of like hey this is why I'm doing this because of X Y and Z and I'm also considering this as an interviewer I am actually rooting for you to do well because what I want as an interviewer is to not sit there and stare at the screen for 45 minutes watching you struggle I would much rather see you succeed and for us to work together on a problem so that I can gather as much signal from you as I can so that I can make a good decision whether or not you'd be a fit for meta in terms of dress for the interview the key is to really just be yourself wear something that you're comfortable with no one expects you to show up in a suit I get a lot of interviewees that are wearing just a T-shirt and jeans I am really looking for a candidate that is willing to explain their thinking willing to engage very deeply and show a high level of confidence and knowledge so the most common mistake I see people making is they might be a really talented iOS engineer but their general software skills might be lacking so make sure you're
preparing for those General algorithms and data structure type of questions because they will come up when you are interviewing regardless of the pipeline that you're interviewing for sometimes in the technical challenge you'll get stuck and the worst thing you can do is say nothing because I can't read your mind and I don't know what you're doing so you really want to explain what you're thinking for how you're stuck why you're stuck because even if you don't get the answer right if you can show that you have a good understanding of the problem that can be enough to get you to the next round what really impresses me about candidates is when they come prepared for the interview and you should obviously come prepared to the interview but when they've done their research on the company they know the values and they know how they apply to their values and what they can bring to meta what really impresses me about a candidate during an interview is when they can take feedback from me so even if they are on the right track already sometimes if I can see that and push them a little bit more that's a really good sign that they can hear my feedback and work with me to solve the problem whatever problem is at hand it gives a good indication that it's someone that I would want to work with in real life interviewing is really about showing Who You Are it's really about you know showing what you know how you grow how you learn and there's not a single trick to it you really have to kind of figure out a little bit about yourself and how you display these things and make it work for you a candidate that's very enthusiastic excited to be there and has a really positive demeanor that just really shows it to someone I want to work with someone that's going to be passionate about the work they do and that was that is what I think leads to success is that passion on the other side of the interviews even though it can be a really long process it is really rewarding to start to actually work on the products that people use every day at a place like meta where things scale so large it's really cool to see one of the features you build being used by millions of users and you get to work with the smartest people I've ever met in my life and that will help you grow as an engineer and you would be shocked at how much you will learn and grow within a short time here thanks for watching hope that you were able to learn some good tips at interviewing at meta and good luck on the rest of your journey in this video you will learn about binary numbers what they are and how computers use them to represent human language you will learn how positional encoding can turn a limited set of numbers into an Infinity size representation of values lastly you will learn how Computing the power of a number can be applied to determine how many states the simple representation can hold traditionally you count using 10 different digits 0 to 9. this stems from the early development of maths it was a natural progression resulting from humans having 10 fingers and 10 toes counting with the use of 10 digits is referred to as base 10. 
base 10 means you have 10 different numbers to use before you have to add another digit and reuse numbers each time you exhaust the range you add a new digit on the left and reset the number on the right to zero this new digit has to be 10 times greater than the digit to its right the number on the right is then reset and the count begins again the use of the position of the number to denote a progressive increase in value is called positional notation when you consider it it is an early implementation of an algorithm to allow for the recording of an infinite number of values simple in implementation but very powerful in effect binary works using the same positional notation approach it is another common counting approach that employs base 2. this means that all of the values are represented with either a one or a zero computers store information as bytes each byte is made up of eight bits that can either be one or zero as you have now learned in decimal the count would come to nine and you would add another digit and reset in binary the same thing happens but in this case only two digits are used to progress the count you move the one to the left until all configurations of ones and zeros have been used at which point another one is added on the left and at this stage all the other numbers are reset to zero apart from that single one at the beginning let's explore it step by step start to count with a zero then add one to get to two start back at zero again but add a 1 on the left to get to three fill the right digit with a one again as soon as all the ones are full start back at zero again and add one to the number on the left but that number is already at one so it also goes back to zero and one is added to the next position on the left binary has many uses in Computing it is a very convenient way of translating electricity into computer code if a signal is present a one is displayed otherwise zero is used the binary counting system allows these base two signals to amount to a significant amount of information transportation and storage this is the same way as Boolean values are stored a Boolean value is either one for true or zero for false some powerful applications can be built using this simple information representation ASCII the American Standard code for information interchange is a map of binary to character encoding or a mapping from binary to text there is a binary number reserved for each digit and character as well as for a number of special characters like a question mark brackets full stop and even the space character it was already mentioned that a byte is made up of 8 bits each bit can take the value of 0 or 1. so that raises the question how many different values can be represented in each byte here we would use exponentiation or computing the power of a number an example would be 2 to the power of three that is 2 multiplied by two multiplied by two which equals eight now consider that you have a lock with four different digits each digit can be a zero or a one how many potential pass codes can you have for the lock the answer is 2 to the power of four or two times two times two times two equals sixteen you are working with a binary lock therefore each digit can only be either zero or one so you can take four digits and multiply by two for each digit and the total is 16.
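as a quick hedged illustration of positional notation and of computing the power of a number, here is a small Python sketch; the built-in bin function and the ** operator are simply used to mirror the counting described above

# counting in binary follows the same positional notation idea as base 10
for n in range(6):
    print(n, bin(n))    # prints 0 0b0, 1 0b1, 2 0b10, 3 0b11, 4 0b100, 5 0b101

# the binary lock with four digits, each either 0 or 1:
# two multiplied by itself four times gives every possible combination
print(2 ** 4)           # 16 possible pass codes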
each time you add a potential digit you increase the possible permutations so the same lock with five digits would have two to the power of 5 or 32 different combinations now coming back to our original question how many different representations can there be in a byte it was already mentioned that a byte is made up of eight bits which can be either a zero or a one eight bits would have two to the power of 8 or 256 different combinations in this video you learned about binary numbers the language of computers while at first glance it seemed quite limited to on or off you learned that through the use of positional encoding it could be used to represent a much larger number set you learned how computers can use electricity to store and read numbers and how exponentiation or computing the power of a number relates to counting unique states and how to use it to count the number of possible combinations a number lock could have binary is the language of computers understanding how it is used to store information will give you a greater understanding when discussing data and the structures that hold it in a previous video bytes were introduced each byte consists of eight bits a bit is the simplest form of computing memory in this video you will learn about the central processing unit or the CPU and the roles and functions of the different types of memory typically a computer will be made up of a series of memory blocks which contain both information and instructions on how this information needs to be processed memory capacity then refers to the number of bytes that a computer can hold there are different types of memory that need to be considered namely cache memory main memory and secondary memory firstly to better understand the various layers of memory it is important to pause and consider how a computer works a computer functions around the central processing unit or CPU this takes both information and some instructions on how this information is to be processed all this information exists as bytes or a series of ones and zeros that are determined by a small electrical current the CPU can work faster than information can be transferred to it often a CPU will be working on a number of different tasks near simultaneously the switching between tasks can allow information to be transferred into the cache for processing and the results to be stored in the appropriate location the proximity of a memory cell to the CPU can reduce the time it takes to load the information therefore quicker and more expensive memory is always found near the CPU so an important concept to consider when discussing memory is the transfer rate this relates to the speed at which a computer can transfer memory into the cache for processing now that you better understand the processing part let's explore the different types of memory and start with cache memory cache memory is the most expensive form of memory and lives close to your CPU chip when the CPU receives an instruction to process some information it first checks the cache to see if the information is there if the information is available in the cache it is processed if it fails to find the required information there the CPU then queries the larger slower main memory and loads this information into the cache for processing storing recently accessed information in the cache can improve the effectiveness of your system by reducing the search and transfer time of regularly used data much like a metro in a large metropolis cache memory
is organized in zones of importance the most readily required information is in zone 1. each subsequent zone is of lesser importance and is numbered zone 2 3 4 and so on next you will learn about main memory a computer's main memory consists of random access memory RAM and read-only memory ROM main memory holds only the information that a computer is currently working on it can be volatile or non-volatile volatile memory stores information actively so if the computer loses power it is lost non-volatile memory retains its information when the power is cut ROM as the name suggests is read only meaning the information cannot be overwritten this memory is programmed once at the factory and cannot be altered typically one will find instructions and data that are critical for a computer's function here ROM is busiest when the computer starts and information on the required application is loaded RAM is programmable it can retain information and instructions RAM holds the current data and instructions that are in current use the amount of RAM your computer has is directly correlated to how fast it can go this is because of the transfer rate large amounts of RAM mean that the system does not need to transfer information constantly instead it can hold and run a number of applications at once using RAM all the memory needed to operate these applications needs to be available from your RAM having too many programs open will affect the performance of your system by exhausting your RAM there are a number of algorithms for reading and storing these memory addresses that fall outside the scope of this course now let's explore secondary memory in more depth secondary memory relates to memory that can be plugged in externally and used to increase the storage capacity of your system accessing secondary memory is slower and requires transferring all required information and instructions into RAM examples of secondary memory would include cloud storage external hard drives and memory sticks in this video the various components of memory have been discussed you have learned how all memory allocation revolves around the CPU which oversees the reading processing and storing of information on the computer you have also learned how there are different types of memory that vary in speed and importance this informs their proximity to the CPU with quicker more expensive memory cells found nearer the source this information should assist you in understanding how your computer works much better when developing applications or evaluating efficiency it is always useful to have a metric or a lens through which you can evaluate fitness for function evaluation in computer science will often consider two aspects namely time and space in this video you will learn how to evaluate time efficiency or gauge performance by the time taken to complete a task you can refer to this as the time complexity of a task an application must return information within an acceptable time frame these days people expect an instantaneous response when clicking on a website there might however be some extra scope afforded to more complex queries depending on the user's needs and expectations Big O notation is a metric for determining an algorithm's efficiency put simply it gives an estimate of how long it takes your code to run on different sets of inputs in this video the amount of time an algorithm will take is considered some of the Big O notations you will encounter include the following o of one o of log n
o of n and so on so how do you measure the quickest possible time that something can be computed you make use of a constant time algorithm where it takes o of one time to compute put simply it means that no matter what is entered into a system it will only take one computation a simple example to illustrate this is to consider printing the first item in an array in this instance no matter how many values exist in the array the approach has a big O of one things can get more complex if you need to do a search consider that you have an array of 10 items and you wish to know if a certain value is in this array you might apply a loop and check each item to see if the value exists in this example the complexity is said to be o of n this is called linear time the search is going to be equal to the length of the array passed the larger the array the more time is required to search it so if in place of 10 items you have a hundred items then the search will take 10 times as long let's explore an example each operation comes at a time price for time complexity so o of 1 means it costs one single computation and o of n means it costs n computations for example you wouldn't say it takes 45 seconds you would say the complexity is n so for every n items that is plus one on the final count of computations if n equals 100 it is 100 checks the complexity is still o of n only n is now 10 times larger this means that your application speed depends on the size of the data being processed print array at position n is an example of an o of one operation that means print the item at whatever position n is it doesn't matter how big n is the cost is always one let's continue to o of log n this search is less intensive than o of n but worse than o of one o of log n is a logarithmic search so it will increase as new inputs are added but these inputs only offer marginal increases a good example of this in action is a binary search imagine you are playing a guessing game with the following prompts too high too low correct given a range of 1 to 100 you may decide to approach the problem systematically first you guess 50 and it is too high then you guess 25 and it is still too high you then decide to go 12 or 13.
it is still too high what is happening here is that you are halving the search space with each guess so while the input to this function was 100 using a binary search approach you should come upon the answer in at most six or seven guesses this solution would be said to have a time complexity of o of log n even if n the range of numbers entered is made 10 times bigger it will not take 10 times as many guesses let's move on to o of n squared o of n squared is heavy on computation this is a quadratic complexity meaning that the work grows with the square of the number of elements in the array a good way to visualize this is to consider that you have an array of arrays the first loop will equal the number of elements inputted namely n the second loop would also look at the number of input elements n so the overall complexity of running this approach can be said to be n times n which is n squared so how would you visually represent the problem a graph of time complexity can show it the x-axis relates to the number of inputs and the y-axis relates to the time taken notice that as the number of inputs increases it has a different impact on the gradient of the line for all cases but o of one in this graphical representation of how n relates to the number of computations taken the best time to aim for is o of one o of log n is still very good o of n is acceptable and o of n squared is not great of course it is not always possible to tell how long an approach is going to take let's return to the example of looking for something in a loop well you could say that to search a loop takes o of n time this might not always be the case consider that the item being searched for is the first in the array then the return will be in o of one time pretty good equally the element might be missing so every item must be searched o of n time the middle case would be that it is found around the middle of the loop o of n over 2.
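here is a hedged Python sketch of the two kinds of search discussed in this video, a linear o of n scan and a binary o of log n search over a sorted list; the function names linear_search and binary_search are illustrative only

def linear_search(items, target):
    # o of n: in the worst case every element is checked
    for index, value in enumerate(items):
        if value == target:
            return index            # best case: found first, o of one
    return -1

def binary_search(sorted_items, target):
    # o of log n: the search space is halved with every guess,
    # just like the too high / too low guessing game above
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

numbers = list(range(1, 101))        # the range 1 to 100 from the guessing game
print(linear_search(numbers, 73))    # up to 100 checks
print(binary_search(numbers, 73))    # at most around 7 checks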
when evaluating an approach there are three definitions used best case worst case and average case to conclude in this video the notion of time in relation to complexity was introduced you have been given something to consider when implementing a solution to a problem a good question to ask yourself before you start is how many computations does my solution employ and is there a better way now that you have a metric to evaluate your solution to a given problem you can start thinking of its efficiency in relation to time complexity this is not the only way to consider a solution and in the next video the focus will be placed on space complexity in a previous video you learned that the time spent on an algorithm depends on the problem's complexity and data structure another consideration when evaluating suitability is space how much memory will a given solution take this is often a trade-off with time the selection of a data structure will pivot on what your priority is speed or compactness some algorithms like the hash tables you learn about later in this course provide very fast lookups in o of one time however to work efficiently they must have a lookup for every element stored this results in a space complexity of o of n the Big O notation for space complexity is the same as for time o of one o of log n o of n and so on in all these notations n refers to the size of the input this is often measured in bytes different languages have different memory costs associated with them in Java for instance an integer requires four bytes of memory a blank array will consume 12 bytes for the header object and an additional four bytes for padding thus if n refers to an array of integers of size 4 then the total memory requirement is 32 bytes of memory when discussing space complexity you have to consider what effect the increase in input size has on the overall usage the space complexity of a problem can be broken into two sections namely auxiliary and input space auxiliary space is the space required to hold all data required for the solution it refers to the temporary space needed to compute a given solution input space refers to the space required to add data to the function algorithm application or system that you are evaluating consider when you were learning long division you may have been taught using a methodical approach that involved breaking each computation into simpler steps to achieve this you would create a table to hold the temporary calculations some complex problems require the same additional allocation of space to hold their workings temporarily while the solution is being calculated big o space complexity accounts for the auxiliary space required for coming upon a given solution so it can be said that space complexity equals input space plus auxiliary space that is the space required to compute a result remember that you calculated the space complexity where an integer requires four bytes of memory you added the 12 bytes of the header object and the four bytes for the padding the total was 32 bytes however consider that the size of the array is doubled to eight integers space complexity is now computed the same way and the total will be 48 bytes.
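a small Python sketch of the arithmetic just described, treating the Java figures quoted in the video (4 bytes per integer, a 12 byte header and 4 bytes of padding) as given assumptions; the helper name java_int_array_bytes is purely illustrative

def java_int_array_bytes(n):
    # figures quoted above for a Java integer array:
    # 12 byte header + 4 bytes of padding + 4 bytes per integer
    return 12 + 4 + 4 * n

print(java_int_array_bytes(4))   # 32 bytes for an array of four integers
print(java_int_array_bytes(8))   # 48 bytes when the array doubles to eight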
the space complexity is higher adding additional input did not increase the size of the auxiliary space so when Computing the Big O you can discount the auxiliary size if it is not impacted by increasing the input size knowing that each decision made in Computing a solution requires memory it is worth noting the aspects that can increase memory usage some common memory actions could be assigning variables these can be temporary variables when Computing a solution as with the long division analogy before creating a new data structure some solutions require that a new array be created to contain the values or a duplicate array that retains index locations creating a new data structure instance has an o to n auxiliary memory cost function calling and allocation also have additional memory overheads it is worth bearing in mind how space is being used when designing an application creating a new variable to contain a value in place of overwriting an existing one will impact your space efficiency this impact is greatly increased if you needlessly copy arrays or complex data structures with high data overhead additionally writing functions that use complex structures when simpler less intensive structures will suffice can incur a penalty particularly if these structures need to be duplicated in Computing a solution in conclusion in this video the concept of Big O was expanded from one focused on time consideration to one that includes space complexity it was highlighted how there is often a trade-off between speed and memory efficiency additionally there were some observations on the efficient use of space when designing a solution that is worth bearing in mind well done you've reached the end of the introduction to the coding interview module let's take a few moments to review what you learned during this module you began the module with a course introduction where you were informed of the content of the coding interview Prep course you then moved on to the coding interview lesson your first lesson focused on the technical coding interview primarily to determine that you are technically capable of the role's responsibilities you learned about the steps that you must keep in mind when this interview is conducted you were taught that using the appropriate tools is always important and that you have to keep time constraints in mind you then learned about code optimization and should be able to write or rewrite code so a program uses the least possible memory or disk space and minimizes CPU time or network bandwidth to summarize you learned about some approaches that can be used regardless of the challenge presented even if you are not familiar with the problem or don't achieve a result in the time allotted always strive to demonstrate your reasoning and best practice approaches prepare for technical interviews by practicing solutions to Online problems and when possible employ a similar methodology to each challenge this will assist you in the future so that regardless of the challenge faced you are working from a comfortable framework you then focused on communication and the importance of First Impressions you were introduced to verbal and non-verbal communication and the importance of both you learned about the star method and how to use it to your benefit when communicating with interviewers you should now be able to look at the context of a situation and the challenges faced the responsibilities around the tasks involved the actions required to address the challenges and lastly the outcomes 
that need to be achieved to summarize you should now be able to clearly convey a concept during an interview communicate why you are suitable for the role in a verbal and non-verbal manner finally use the star method for engaging with technical questions that will arise over the course of an interview you then moved on to the next lesson where you were introduced to computer science you started with binary where you learned about the difference between base 10 and base 2. you then discovered positional notation this is the use of the position of the number to denote a progressive increase in value you were then introduced to how a computer stores data as bytes and that each byte is made up of eight bits that can either be one or zero you were also given some examples you've examined the concept of exponentiation or computing the power of a number this was followed with examples where a lock with a different number of digits was used to explain the concept you should now be able to apply this knowledge and understand that binary is the language of computers next you explored memory the first concept you learned about was memory capacity which refers to the number of bytes that a computer can hold you also learned about the different types of memory that need to be considered namely cache memory main memory and secondary memory you should know by now that to better understand the various layers of memory it is important to pause and consider how a computer works you learned about the transfer rate or the speed at which a computer can transfer memory into the cache for processing you then explored cache and secondary memory and should be able to describe the differences you were then introduced to the concept that a computer's main memory consists of random access memory RAM and read-only memory ROM you should be able to describe the role of the main memory and distinguish between RAM and ROM you should now be better positioned to work with memory you then moved on to explore time complexity where you learned how to evaluate time efficiency or gauge performance by the time taken to complete a task you discovered Big O notation which is a metric for determining an algorithm's efficiency thus it gives an estimate of how long it takes your code to run on different sets of inputs or it considers the amount of time an algorithm will take you were given some examples and should have a solid idea of how to measure time complexity you then learned about space complexity it is not just the speed of an algorithm that is important but also how much memory a given solution will take to understand space complexity you were introduced to the concept of auxiliary space which is the space required to hold all data required for the solution also referred to as the temporary space needed to compute a given solution the other concept was input space which referred to the space required to add data to the function algorithm application or system that you are evaluating to summarize space complexity equals input space plus auxiliary space that is the space required to compute a result you have done some quizzes on all the topics mentioned that's a great start to your learning journey in this course and all this content should empower you to have excellent coding interviews in the future having a knowledge of data structures is useful for any coding interview you may encounter from basic data structures like strings booleans or arrays to more advanced data structures like collections graphs and heaps understanding the
data you're working with and the most appropriate structure to use can be very beneficial in this video you will be introduced to data structures and the two main types mutable and immutable you will also learn what to look for when considering a given data structure in your own applications a data structure models an object so that it can be stored and organized easily in computer memory it can be a simple immutable structure that does not change after creation or it can be a mutable structure that facilitates operations to be performed on the contents operations might include updates and queries to be performed on the contents of the structure on the surface it may seem that a mutable structure should always be used however mutable structures require time and effort to model and some objects are very complex and not easily modeled other concerns such as space may be a factor understanding the underlying mechanics of data structures can be a great advantage because decisions to use a particular data structure can have far-reaching implications on a project's progress while the implementation and capabilities of a data structure can vary between programming languages the overarching architecture generally follows similar patterns here is a universal classification of data structures that categorizes the different types of structure into two main branches linear and non-linear this relates to how the elements are stored within the data structure a linear structure relates to how the information is stored the elements of the structure are arranged one after another or sequentially reflecting the order that they were inputted examples of linear structures are arrays queues stacks and lists and it implies that each element is attached to the element that precedes it some languages will demand that only similar types of data are stored in the same structure therefore you will have integer lists or string arrays other languages will allow for mixed arrays this would mean that storing an integer and a string in the same array is not prohibited this easy approach can come at the cost of error handling down the line for example imagine you have created an array and want to find the sum of your values only to discover that the total is three pineapples and one apple once a simple structure has been created such as a list or array it will contain an index an index is a way of accessing elements that may not necessarily be the first or last instances generally use of an index is done through appending square brackets and the location of the item as an integer so array[4] would indicate that the required element is the fourth item of the array however programming languages are predominantly zero based which means that the count will start at zero therefore array[4] would actually be the fifth item in the array accessing an array through the use of an index can throw an error if index location 8 is requested but there are only seven elements in the array a common feature of these structures is that most languages have a built-in length method that will inform you as to how big an array is an example of this would be calling array.length in Java or placing it inside a len function in Python while the mechanism of how to retrieve the length varies it is possible in most programming languages.
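here is a brief, hedged Python sketch of the zero based indexing and built-in length check mentioned above; Python's len is shown here, whereas Java would use array.length, and the fruit values are just sample data

fruits = ["apple", "banana", "cherry", "date", "elderberry"]

print(fruits[4])     # zero based, so index 4 is the fifth item: elderberry
print(len(fruits))   # 5 elements in total

# asking for an index that does not exist raises an error
try:
    print(fruits[8])
except IndexError:
    print("index location 8 requested, but there are not that many elements")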
arrays and lists are typically first-class objects this means that all functionality that is available to other variables is available to them this definition generally indicates that a data structure can be passed as a parameter to a function returned as a result or assigned to a variable when passing a list or array to a function care should be taken that the structure is actually passed and not just a reference to the structure this can be a memory saving device used to prevent copying the information however such instances can cause an error if a change in the structure inadvertently affects the array in the calling environment in this example a string has been added to a list of integers and because the new list points to the initial list the initial list is also changed therefore it is better to make a copy of the array and pass the copy to the function another memory related issue to be mindful of is a memory leak memory as previously mentioned can be arbitrarily allocated if this memory is not used then it is good practice to deallocate the memory location as a result of careless programming or other issues it is possible that a program makes repeated calls that result in excessive memory being allocated and not then deallocated over a prolonged time or through repeated calls this can cause the application to run out of memory and crash most modern language runtimes have sophisticated algorithms for detecting and deallocating memory to avoid this issue in contrast to linear structures there are non-linear instances such as trees or graphs these structures do not allow you to traverse the data in one smooth motion instead you can investigate certain paths the makeup of these structures means that they can include natural sorting which makes querying for specific data very quick you will learn about different types of sorting later in the course in this video you had a general overview of data structures including their two main types linear and non-linear you have also learned about some of the considerations that should be made when deciding the type of data structure you should use as you progress through this module you will explore these structures further and learn about some of their individual strengths and weaknesses have you ever needed to store some data but were unsure about what sort of data structure to use it's a common coding problem in this video you will discover two important data structures that could be used lists and sets both are very useful data structures with their own strengths and weaknesses lists and sets are common in many programming languages let's get started by exploring lists in most programming languages lists are represented as objects this means that in addition to storing data they also have their own inbuilt methods here an inbuilt sort method is used to arrange the numbers in a list as with arrays it is common to find lists that are declared as either a string an integer or a float type in some programming languages you can have lists with mixed element types a list is an abstract concept that refers to a container of elements a typical implementation of a list is done using either an array or a linked list an array-based list is an ordered collection built using arrays as the underlying data structure as such they are subject to the same strengths and limitations associated with arrays a key concern for array-based implementations is the initial sizing rather than simply pointing to another node as with a linked list some languages require that you initially determine how big a structure will be while others allow for dynamically growing structures it should be noted that this freedom is somewhat surface level for many dynamic structures there is an initial size automatically configured at instantiation.
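returning to the earlier point about passing a copy rather than a reference, here is a minimal Python sketch of that pitfall; the function name append_marker and the sample values are illustrative only

def append_marker(values):
    values.append("done")         # mutates whatever list object it was given
    return values

numbers = [1, 2, 3]
append_marker(numbers)            # only a reference to the list was passed...
print(numbers)                    # [1, 2, 3, 'done'] - the caller's list changed

numbers = [1, 2, 3]
append_marker(numbers.copy())     # pass a copy to protect the original
print(numbers)                    # [1, 2, 3] - the caller's list is untouched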
when this limit is reached the array will copy itself into a new structure with a larger size allocation therefore the decision not to arbitrarily allocate space at the onset may come at a cost at runtime when such data structures may have to expand multiple times during the execution of other operations consider the computation cost of a list dynamically growing while performing operations in a loop in this case it would help to set the initial list size to be larger rather than dynamically growing which can be costly due to having to create and copy over values into increasingly bigger lists a linked list works differently a linked list contains two pieces of information the data and a pointer to the next list item a linked list begins with an empty list and can grow dynamically by introducing new cells to the list to grow a linked list you simply have to add a new node and point the list at its location this makes them very fast for storing large amounts of data the flexibility of linked lists is achieved by including some additional storage requirements notably in each node there must be some reference to the nodes around it there is also a head and a tail the head is a unique node that indicates that it is the start of the list and the tail indicates where the list ends this approach to growing the size of the data structure is very powerful and can lead to very large but manageable data sets so what do sets entail a set is very similar to a list however a set will store its elements in an unordered way though there are some possible implementations of ordered sets sets have some unusual tendencies a set will only hold unique elements so adding an element that already exists to a set will make no difference to the data stored there the unordered process in which sets store their information means that printing out a set will not necessarily reflect the order in which the elements were added to the set once a value has been added to a set it cannot change instead you would have to delete it and add a new value instead sets are exceptionally fast to search this is because of their internal mechanisms a set uses hash tables to determine where to store the elements of a set therefore each value that is passed to a set will have a hashing function applied to it a hashing function can be defined as an algorithm that takes in some data and maps it to a fixed size value the value is theoretically unique and every time the function is applied to the data the same value is returned this means that searching a set can be done in o of one time this is due to the mechanism that is used to save values in a set you will learn about hashing functions in more detail later in the course an o of n approach would be to iterate over the entire data structure to check for the presence or absence of a value sets instead apply the mapping function to the input data and check the resulting output to see if a value exists there if it does then the value is returned if it doesn't exist in the set then the data was not stored in the set and hence a false will be returned while sets can perform an exceptionally quick search performance degrades when dealing with very large data sets this is due to the nature of the hashing function the more values retained the more risk there is of clashing clashing also known as a collision is when the hashing function returns the same mapping for two different values the larger the data set used the more likely clashing is to happen.
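to see the set behaviour described above in a few lines, here is a small Python sketch; the colour values are just sample data and the exact printed order of the set is not guaranteed

colors = set()
colors.add("red")
colors.add("blue")
colors.add("red")            # duplicates make no difference to a set
print(colors)                # unordered, e.g. {'blue', 'red'}

print("blue" in colors)      # membership uses hashing, roughly o of one -> True
print("green" in colors)     # a value that was never stored -> False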
so there we are in this video you have explored two very important and useful data structures lists and sets and learned about the strengths and weaknesses inherent in both you should now have a greater sense of when to use each depending on the storage needs of the solution so what is the difference between a stack and a queue and what does it mean to use one of these data structures over another well in this video you will learn about stacks and queues the difference between the two and why you might choose to use one over another depending on the requirements of the solution stacks and queues are abstract data structures that have many different implementations depending on the programming language the unique principles that are common to both are how elements are added and removed while lists and arrays allow for random access stacks and queues employ sequential access this limited approach to holding data can be very useful when you want to control how the data is accessed let's start by exploring stacks in a little more detail stacks are linear data structures with strict ways of adding and removing items as the name suggests a stack is a collection of elements that are stacked on top of one another what this means is that it is impossible to pull items from the middle instead a stack works on a strict first in last out or filo basis this can also be phrased as last in first out or lifo it's a simple yet powerful concept that informs you that items can only be retrieved from the top of the stack which determines the order in which you can retrieve them an example of this principle in action is hitting Ctrl Z in a Word document or any coding environment Ctrl Z undoes the very last action hitting it again will undo the previous action and so forth to extend the analogy Ctrl Y will redo the action or push it by adding it back to the stack stacks tend to have very few methods push pop is empty is full and peek the functions of these methods correlate with their names push will add an item to the stack and pop will remove one while is empty checks that the stack contains nothing and is full is a Boolean that will return true if there is no more room in the stack you might have heard of the popular computer question and answer platform named after this very issue namely stack overflow so popping an item takes it from the top of the stack and calling pop again will return the next item in the stack pop can be called until there is nothing left in the stack push then will place an item on the top of the stack it is worth noting that by calling pop or push you are changing the stack you have now learned about all the methods except peek so what does that entail to have a look at the contents one would call peek which allows you to view the top item without removing it from the stack so calling it will not change the state of the structure unlike pop or push which permanently alter the stack some implementations will include a search feature for looking through the stack though this won't always be the case now let's explore an example imagine that an application generated a deck of cards you could create a stack of 52 playing cards and each time a card is dealt it is removed from the top of the stack just as in a real deck using a stack in this way would simplify the code required for maintaining the state of the deck.
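a minimal sketch of the deck-of-cards idea above, using a Python list as a stack; the card names are sample data and the list methods append and pop are standing in for push and pop

deck = ["2 of hearts", "queen of spades", "ace of clubs"]

deck.append("7 of diamonds")   # push: place a card on top of the stack
print(deck[-1])                # peek: look at the top card without removing it
print(deck.pop())              # pop: deal the top card, last in first out
print(len(deck) == 0)          # an is empty style check -> False, cards remain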
Again, the name is a good indicator of how the structure works. As an example, imagine you have a line of people waiting to get a burger at a fast-food restaurant: the first person to enter the queue gets served, and each subsequent customer stands behind the one in front and is processed in turn. As with the stack, a queue will pop the selected item from the structure, though different languages have different implementations for this. The element that is removed from the queue is the one at the bottom, in other words the least recently added item, or the first to join the queue. Using a real-world IT example, a server balancing system usually uses a queue to retrieve tasks: the structure holds each task in order of insertion, and when a server becomes available to process a task, the first task entered into the queue is removed and passed to that server. In this video you have learned about stacks and queues and the differences between them. These are very useful tools to have in your programming toolkit, and knowing them will be an advantage when dealing with problems that require a structured way of accessing and inserting data.

In previous videos you have learned about data structures like lists, stacks and queues. Another data structure you have not yet learned about is trees. So what exactly is a tree in the data structure context? Trees are a powerful data structure that gives you great flexibility in adding and searching values. The inherent structure of a tree can tell you a lot about the relations between the data stored, which can save a lot of time and code when extracting information from the data. In this video you will explore the general structure and inherent features that trees provide. You will also learn about some of the different types of trees and the advantages of using a tree data structure. So let's get started.

A tree is a complex data structure that resembles a tree in design. It consists of nodes that are linked with one another. A node can be a parent or a child node; a parent node may have a connected set of children nodes, and nodes with no children are referred to as leaf nodes. As with a real tree, nodes can branch off in different directions, allowing for powerful search and storage features. Generally, you can look at a tree as a graph-like structure that has nodes containing data and edges that model how each node relates to the others. When discussing trees it is important to know some of the terminology. The top-level node is referred to as the root. Each node connected below another is referred to as a child node. Nodes that have the same parent are referred to as siblings and are considered to be on the same level. You might picture a chapter of a book, where the subsections correspond to connected nodes: the theme of these nodes will be of a very similar nature, while other branches would be other chapters that still fall under the general theme of the book but cover different topics. A path refers to a series of connected nodes. You might assess the connection between two nodes by determining the shortest path, that is, the quickest way you can move from one node to another; intuitively, nodes with shorter paths between them will have more in common. The depth of a node refers to the number of edges between that node and the root. The height of the tree refers to the number of edges between the topmost node and the deepest node in the structure. And finally, the size of a tree refers to the total number of nodes within the tree.
There are many variants and implementations of trees, such as binary trees, B-trees and B+ trees; there are also quadtrees and AVL trees, to name a few. While all of them follow the general structure outlined here, their use and implementation differ slightly depending on the type of tree being applied. There are many advantages to storing your data in a tree-like structure. The connections between the nodes indicate a relationship that is inherent in your data. Trees can store information in a hierarchical fashion, where the topmost content is stored in the upper nodes and more in-depth information can be retrieved by traversing a given branch of the tree. They are also very efficient for inserting and deleting data, due to the flexible way in which they are implemented. The non-linear nature of a tree means that there are many ways of traversing the data.

In binary trees this feature can be very useful when storing data: a left node holds a lesser value, while the right node indicates a greater value. Let's demonstrate that with some data values. The first value to arrive is 23. Then a 4 is added; because it is less than 23, it goes to the left. Following that is a 1, also less than 23 and also less than 4, so it too goes to the left. The next number is 30; because it is larger than 23, it goes to the right. A 24 is added, and because it is less than 30 it goes to the left of it, but the 56 that is added next goes to the right of the 30. Again, one can traverse a tree in a depth-first or breadth-first manner: a depth-first method involves following each branch from top to bottom before moving on to the next one, while a breadth-first method involves visiting every node on the same level before descending to the next level, repeating until every node has been visited. More benefits of trees are that they can be used to model file systems on your laptop, class hierarchies like those found in Java, or reporting hierarchies in organizations. In this video you explored the general structure and inherent features that trees provide. You also learned about some of the different types of trees and the advantages of implementing a tree data structure.
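To connect the ordering rule above with code, here is one possible sketch in Python of inserting the example values into a binary search tree; the Node class and insert function are illustrative and not taken from the course materials.

```python
class Node:
    """A binary tree node: smaller values branch left, larger values branch right."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    """Walk down the tree, going left or right by comparison, until a free slot is found."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

# The sequence from the example: 23, 4, 1, 30, 24, 56
root = None
for number in [23, 4, 1, 30, 24, 56]:
    root = insert(root, number)

print(root.value, root.left.value, root.right.value)  # 23 4 30
```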
At this point of the course you have been introduced to several different data structures, and you've discovered that there's no single perfect way of storing information; instead, there is a wide variety of approaches, each of which is an appropriate solution depending on the problem. In this video you'll learn what a hash table is, its structure and inherent features, and how it works. You will also explore some of the advantages of using hash tables and discover what is meant by collisions in hashing. A hash table contains several slots, or buckets, to hold key-value pairs, and it requires a hashing function to determine the correct bucket to place the data into. A hashing function is any algorithm or formula that is applied to a key to generate a unique number. Each data item to be stored must have a key and a value: the key is taken and the hashing function is applied to it such that it is reduced to a fixed-size value. There are a variety of hashing functions one could apply. You may be familiar with a similar idea in relation to compression: when you want to send information over the internet, you might first compress it to a manageable number of bytes, send it, and then decompress it on the other side. Hashing works in a comparable way, reducing the key to a small, manageable size which then acts as the index. What information is used to generate the index depends on the application: it might be the data itself if it is small enough, it might be the last four digits of an employee ID number, or it might be a key in a dictionary. Most programming languages have built-in hashing functions like MD5, SHA or CRC32, so implementing a hashing function is a straightforward job.

When discussing Big O notation, the idea that speed and space are often at odds with one another was introduced. This means that you can reduce the time taken to retrieve an item, but in doing so you add overhead to your application. Hash tables prioritize speed over space and can retrieve an item in O(1) time. Recall the discussion on arrays: when you want to check whether a value exists, a search must be executed that checks each element of the list and compares it with a target value. In the worst-case scenario this takes O(n) time; in other words, if the element is at the end of the array, n checks must be made. Hash tables offer an alternative approach to storing and searching data through the use of an index. To achieve this, you implement an algorithm that takes in a key and maps it to the index where the value is stored. Then, when a key is presented, the algorithm need only run the same function to determine where in the index the value lies. Much like an index in a book, this drastically speeds up the time it takes to identify the location of some data. You are likely to find hash tables used in caches, dictionaries, database indexes and sets.

Consider a scenario where you have an array of 10 keys, which are the numbers 0 to 9. You elect to employ a hashing function to decide where in memory to store these numbers, and you opt for a simplistic approach of applying modulus 20 to each key. So for each key from 0 to 9 you apply your hashing function: 0 mod 20 equals 0, 1 mod 20 equals 1, 2 mod 20 equals 2, 3 mod 20 equals 3, and so forth. In this way you would generate ten unique values, which are used to represent where in memory the data associated with those keys is placed. This example is simplistic, but it illustrates the mechanism behind creating hash maps. The issue arises when the number of keys to be stored grows beyond 20.
Remember, 1 mod 20 equals 1, but 21 mod 20 also equals 1. Let's move on to collisions in hash tables. What are collisions? A hashing function applies a clever algorithm to reduce the size of the key to a manageable size, and some approaches are more intricate than others. So what happens if the result of hashing two keys is the same? To expand on this idea it is worth pondering the birthday paradox, attributed to Richard von Mises. Due to probability, sometimes an event is more likely to occur than we believe it to be: if you survey a random group of just 23 people, there is actually about a 50-50 chance that two of them will have the same birthday. This is known as the birthday paradox. Say there are 24 employees in a company, and a clever hashing function has been applied that takes the day and month of their birthday and uses this as an index. With only 24 employees and a hash table of 365 index slots to hold a reference to them, you might think the probability of any two employees sharing a birthday is low; in fact, it has been shown to be over 50 percent. Next time you're at a party, check whether any two attendees have the same birthday to see this for yourself. What this illustrates is that duplicate hashes will be generated when the hashing function is applied to the keys, and that allowances must be made for them. There are a few solutions to the issue. One is to grow the table every time a collision occurs, then increase the complexity of the hashing approach, redistributing the values to new addresses; in this way a table grows organically to match the size of the data required. Another is to create a linked list at the point of collision and simply store the additional values there, so in the event of a collision, instead of storing a single value you store a linked list of values. In this video you discovered what hash tables are, their structure and features, and how they work. You have also learned that hashing is a very clever approach to achieving O(1) searches using a hashing function and an index. You explored collisions and how they can be used to inform the size of the table, and you even learned that if you are at a party with more than 24 guests, it is more likely than not that at least two will share the same birthday.
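As a rough sketch of the ideas above, the following Python class assumes a toy table of 20 buckets, a simple modulus hashing function and chaining (a list per bucket) to handle collisions; it is illustrative rather than production code.

```python
class HashTable:
    def __init__(self, size=20):
        self.size = size
        self.buckets = [[] for _ in range(size)]   # one chain (list) per bucket

    def _hash(self, key):
        # Toy hashing function: reduce the key to a bucket index with modulus.
        return key % self.size

    def put(self, key, value):
        bucket = self.buckets[self._hash(key)]
        for pair in bucket:
            if pair[0] == key:          # key already present: update it
                pair[1] = value
                return
        bucket.append([key, value])     # new key, or a collision: chain it

    def get(self, key):
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None                     # decide up front what "not found" returns

table = HashTable()
table.put(1, "first")
table.put(21, "collides with key 1")    # 1 % 20 == 21 % 20 == 1
print(table.get(1), table.get(21))
```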
A heap may not sound like a particularly promising name for a data structure; however, it is a very important organizational tool that combines features and benefits of other data structures. In this video you will learn about the structure and features of heaps. You will also discover how heaps can be used to organize elements from least to most important, and how, by limiting the functionality of heaps, productivity can be increased. So let's get started with heaps. A heap is a specialized data structure that is modeled like a tree but behaves in a similar way to a queue, with the notable difference of assigning priority to some elements. Each element in a heap has a key value, and the priority can be placed on either the smallest or the largest key value. Heaps that place priority on the lowest-valued key are called min heaps, and ones that place priority on the maximum value are called max heaps. Heaps were first introduced as a means of storing and searching data efficiently, but since then it has been recognized that there are a number of very useful operations a heap can be applied to. A heap has a few select core operations that it can perform: insert, find_min and delete_min for a min heap, and insert, find_max and delete_max for a max heap. For the rest of this video the discussion will revolve around min heaps, but you can reverse everything said and it will apply to a max heap; the only difference between the two is where the priority is placed. As with many of the data structures discussed in this course, these methods are the fundamental elements that constitute a heap. Different implementations in different languages may have additional methods, an instance of which is decrease_key, where the value of a key is changed because the real-world priority of that key has changed.

When discussing trees, it was mentioned that binary trees sort values in order of size: if the value is less than the node, go down the left path; if the value is greater than the node, go down the right path. Because of this underlying architecture, heaps are often built using binary trees, though another approach is to make an array act in a way that mimics the behavior of a binary tree. The minimum value is placed at the root node, and each subsequent value is placed in the hierarchy where its value dictates. This means that retrieving the minimum value from a heap is O(1), because it is always stored at the root. Unlike a stack, retrieving a value does not cause it to be removed from the tree; instead, a delete_min method exists that can be called if the intent is to remove items as they are processed. Typically, a heap does not support operations such as deleting items other than the priority element. The reason is that a heap is built for a specialized purpose: identifying the most important item and returning it in the shortest time possible, then queuing up the next item of importance. Deleting arbitrary items in the tree would require restructuring the tree, and this would lead to a degradation in performance; if you are looking for a data structure that can act in this way, you might consider structures other than a heap. Insertion into a min heap is done through propagation: the new item is inserted and then repeatedly compared and swapped with the neighboring value until there is no greater value above it and no lesser value below it. Insertion in a heap can be achieved in O(log n) time.

Having examined the underlying mechanisms that power a heap, you may now have some idea of how this data structure can be applied. Considering that its inherent structure prioritizes a particular value from a group of elements, the natural application is scheduling; this could apply to CPUs, routers or packet handling. Additionally, such a structure would be useful in prioritizing certain tasks, like interview scheduling, where the key used to store a candidate may relate to what stage of the interview process they are at, or what priority the role has within the organization. Having a process that automatically applies schedules based on importance can be a huge time-saving device. In this video you have gained a greater understanding of heaps and how they can be used to organize elements from least to most important. You have been shown that by limiting functionality, productivity can be increased. As with the selection of any data structure, it is important to find the right tool for the right job.
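One way to see these operations in practice is Python's built-in heapq module, which maintains a min heap on top of a list; the task names below are invented for illustration.

```python
import heapq

# A min heap of (priority, task) pairs: the smallest key is always at index 0.
tasks = []
heapq.heappush(tasks, (3, "send report"))
heapq.heappush(tasks, (1, "restart server"))
heapq.heappush(tasks, (2, "rotate logs"))

print(tasks[0])              # find_min in O(1): (1, 'restart server')
print(heapq.heappop(tasks))  # delete_min: removes and returns the priority item
print(heapq.heappop(tasks))  # the next most important item: (2, 'rotate logs')
```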
When considering a given problem in computer science, it is always important to consider what operations might be required to solve it, and through this reflection choose an appropriate data structure to hold your data. Consider that you might work for a large internet company that wants to store a directory of locations and their connectedness to one another. In this illustration, cities are plotted in relation to one another. Notice how every possible detail need not be recorded: say, for instance, you want to know how far Chicago is from Boston; you can easily deduce this from the way in which the data is organized. The same approach could be used to model internet destinations, relationships between words, or people on a social network. This approach to saving information is a graph-based approach, and in this video some terminology and advantages of the approach will be outlined.

The structure in the illustration is a graph. It is made up of nodes, which denote destinations, and edges, which show how each node relates to another. The presence of values between the nodes means that this is a weighted graph, and there are no arrows present, which means that this is an undirected graph. In contrast to a directed graph, an undirected graph has no order of precedence; one way to think about directed and undirected graphs is like one-way and two-way streets. Sometimes it helps, when ordering data, to highlight some progression, while in other instances the edges are there just to show association. A path, then, is a sequence of two or more nodes connected by edges. A connection in a directed graph is considered weakly connected if the edge only goes one way; however, if there are two connections going either way between two nodes, then they are said to be strongly connected. At this point you may be thinking that a graph resembles a tree in some ways; you could say a tree is a simple graph. Notably, a tree has a starting point and models a hierarchy with parents and children, whereas a graph is a far more complex structure that has no beginning or end. Two nodes bordering one another are called neighbors, and nodes that are connected through a neighbor are said to be adjacent.

Graphs, like trees, can be traversed breadth first and depth first. Recall that a breadth-first search involves visiting every node on the same level before going lower, and a depth-first search involves drilling into the end of every branch before moving on to the next one. A breadth-first approach involves choosing a given starting location and iterating over all the neighboring nodes; each neighbor has its own collection of connected nodes, which can be added to a data structure already mentioned, the queue. In this way you can systematically visit every node. To achieve a depth-first search you can employ a stack. Recall that a stack processes elements differently than a queue: while the queue prioritizes first in, first out, a stack works on a last in, first out basis. So by placing all of the neighbor nodes systematically on a stack, you would ensure a depth-first traversal.
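A minimal sketch of both traversals follows, assuming the graph is stored as an adjacency list (a dictionary mapping each node to its neighbors); note that the only difference between the two functions is swapping the queue for a stack.

```python
from collections import deque

graph = {                      # a small, undirected example graph
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def breadth_first(start):
    visited, frontier = [], deque([start])   # a queue gives level-by-level order
    while frontier:
        node = frontier.popleft()
        if node not in visited:
            visited.append(node)
            frontier.extend(graph[node])
    return visited

def depth_first(start):
    visited, frontier = [], [start]          # a stack drills to the end of each branch
    while frontier:
        node = frontier.pop()
        if node not in visited:
            visited.append(node)
            frontier.extend(graph[node])
    return visited

print(breadth_first("A"))   # ['A', 'B', 'C', 'D']
print(depth_first("A"))     # ['A', 'C', 'D', 'B']
```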
Graphs are a much-studied data structure and are the basis of many algorithms that have been developed to establish importance between nodes, regardless of the element stored in each node. One notable example is the shortest path: what is the quickest way to get from node A to node B? The edge weights inform you of the cost of choosing each path. This approach is used when routing packets on the internet or when calculating a journey on Google Maps. Another common graph-based challenge is the traveling salesman: a salesman has a select few nodes to visit, so what is the best route to plot that hits all the nodes in the shortest time? This would be used in package routing: given X destinations and Y vehicles, plot out the most efficient route so that all packages get delivered with the least expenditure of resources. In this video you learned how graphs give you the opportunity to model data in a flexible way that facilitates inferring information from how the data is stored. This versatile approach retains only the minimum of information: the distance from Chicago to Boston is not stored anywhere, but it can be deduced. It's easy to query different questions without changing the makeup of the data; calculating the best time when walking can easily be substituted in place of driving with minimal fuss. There is a whole field of statistics devoted to inferring information from node placements, which can be leveraged to make inferences about any data stored there.

Well done, you've reached the end of the introduction to data structures module. Let's take a few moments to review what you learned. You started the module with a lesson on basic data structures. This ranged from basic data structures like strings, Booleans and arrays to more advanced data structures like collections, graphs and heaps. Understanding the data you are working with, and the most appropriate structure to use, can be very beneficial. You learned about simple immutable structures that do not change after creation and mutable structures that allow operations to be performed on their contents. You then took a deep dive into the types of data structures. To refresh your memory, here is a universal classification of data structures that categorizes the different types into two main branches: linear and non-linear. Examples of linear structures are arrays, queues, stacks and lists, and linear means that each element is attached to the element that precedes it. You learned about these structures in detail and should now be able to describe each of them. You then moved on to focus on non-linear data structures. In contrast to linear structures, there are non-linear instances such as trees or graphs; these structures do not allow you to traverse the data in one smooth motion, but instead let you investigate certain paths.

You then moved on to the next lesson, where you were introduced to lists and sets. You learned that, as with arrays, it is common to find lists that are declared as either a string, an integer or a float, and that in some programming languages you can have lists with mixed element types. You also learned that some languages require you to determine up front how big a structure will be, while others allow for dynamically growing structures. That was followed by a section on linked lists and how they work; remember that a linked list item contains two pieces of information, namely the data and a pointer to the next list item. You then learned about sets and how they work: a set is very similar to a list, but a set stores its elements in an unordered way. Following sets, you were introduced to hash functions and their role in searching in sets, which makes sets exceptionally fast to search. You then moved on to the video on stacks and queues in the same lesson. To refresh your memory, stacks and queues are abstract data structures that have many different implementations depending on the programming language, and the unique principles common to both are how elements are added and removed. You learned that stacks and queues employ sequential access, and that methods such as isEmpty, push and pop are used to add and remove items. You also learned about the FILO (first in, last out) and FIFO (first in, first out) principles.
When you visited queues, you learned that a queue is very similar to a stack in that it tends to have the same methods: it can create, insert, remove and check the state of the queue. Unlike a stack, a queue works on a first in, first out (FIFO) basis; again, the name is a good indicator of how the structure works. The last video in this lesson focused on trees. Trees are a powerful data structure that gives you great flexibility in adding and searching values, and the inherent structure of a tree can tell you a lot about the relations between the data stored, which can save a lot of time and code when extracting information from the data. You learned about tree structures and how data moves in a tree.

In the next lesson you were introduced to advanced data structures. First you learned what a hash table is, its structure and inherent features, and how it works. You also explored some of the advantages of using hash tables and discovered what is meant by collisions in hashing. Let's quickly revisit what this entailed. You were introduced to the hash function and learned that the key is taken and the hashing function is applied to it in such a way that it is reduced to a fixed-size value. You learned about a comparable idea, compression, with an everyday example: when you want to send information over the internet, you might first compress it to a manageable number of bytes, send it, and then decompress it on the other side. This was followed by an explanation of how hash tables offer an alternative approach to storing and searching data through the use of an index: to achieve this, you implement an algorithm that takes in a key and maps it to the index where the value is stored.

The next video in this lesson focused on the structure and features of heaps. You discovered how heaps can be used to organize elements from least to most important and how, by limiting the functionality of heaps, productivity can be increased. You learned that heaps that place priority on the lowest-valued key are called min heaps, and ones that place priority on the maximum value are called max heaps. A heap has a few select core operations it can perform, namely the insert, find and delete of items. Following this, you learned that deleting arbitrary items in the tree would require restructuring the tree, and this would lead to a degradation in performance. To summarize the video about heaps: you gained a greater understanding of heaps and how they can be used to organize elements from least to most important, and you were shown that by limiting functionality, productivity can be increased; as with selecting any data structure, it is important to find the right tool for the right job.

Finally, you focused on graphs, and the scene was set as follows: when considering a given problem in computer science, it is always important to consider what operations might be required to solve it, and through this reflection choose an appropriate data structure to hold your data. Consider that you might work for a large internet company that wants to store a directory of locations and their connectedness to one another. An illustration of cities plotted in relation to one another was used to explain concepts such as a weighted graph and an undirected graph, and that, in contrast to a directed graph, an undirected graph has no order of precedence. Following that, you learned that a connection in a directed graph is considered weakly connected if the edge only goes one way; however, if there are two connections going either way between two nodes, they are said to be strongly connected.
In this video you learned about the key concepts and topics covered throughout this module. You have completed quizzes on all the topics mentioned, and you are becoming better equipped for your future. Good luck with the next module.

Sorting a set of data might sound like a straightforward task given what you have already learned throughout this course; however, it can be surprisingly challenging when you get into the details. In this lesson you'll explore sorting algorithms and the different sorting methods that are available to you. You'll be introduced to some of the various approaches to searching, such as linear and binary, discover the steps involved in implementing both of these approaches, and explore the advantages they offer. You'll also learn about the steps required for implementing selection, insertion and quick sort, and discover the strengths and weaknesses of each sorting approach. Several algorithms have been developed for this challenge, and some data structures that have previously been discussed, like binary trees and heaps, have been designed with the aim of retaining data in a sorted manner. Working with sorted data, or having the ability to sort your own data, can result in significant time savings. A data set of elements that can be ordered is therefore fundamentally necessary. This order could be alphabetical, sequential, chronological, by size or shape, or by hue of color; the actual metric used is less important than the fact that the elements can be arranged in ascending or descending order. A second consideration is whether the ordering is permuted, meaning reordered in place, or accomplished by creating a copy while keeping the original list.

Selection sort is an early approach to sorting that mimics how a human might approach the problem. The underlying principle is very straightforward: you start by searching through the list to identify the smallest element, then switch it with the first element, so the smallest element is placed at the top. The previous occupant of the top spot is switched into the vacated spot in the list. This is repeated for every element in the list until the list is ordered from the smallest through to the largest. Let's explore this in an example. You'll see in this diagram element 35 at index location 0. In a selection sort, a comparison is made between the element at index 0 and each element in the array until the lowest is found. Equally, element 46 in the next location is compared to each element, and in this case switched with 6. Next is element 36, found at index 2. You'll notice that element 9 at index 3 is deemed the smallest; however, the entire array must still be searched. This process continues until every element is ordered by size, smallest to the left, largest to the right.
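A short Python sketch of selection sort follows; the diagram described above only names some of the values, so the sample list here is partly assumed.

```python
def selection_sort(items):
    for i in range(len(items)):
        smallest = i
        # Scan the rest of the list for the smallest remaining element.
        for j in range(i + 1, len(items)):
            if items[j] < items[smallest]:
                smallest = j
        # Swap it into the next sorted position.
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([35, 46, 36, 9, 6]))  # [6, 9, 35, 36, 46]
```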
Another straightforward approach to sorting is insertion sort. Rather than searching through all the elements, this approach begins by examining the first two elements in a list; the smaller of the two is then moved to the front. This is repeated for every element, each one being compared to the element on its left, and a switch to the left is made if it's found to be smaller. So element 2 is compared with element 4; it's found to be smaller, so a swap happens. Next, element 2 is compared with element 1; it's found to be larger, so no more comparisons are made. Then element 3 is first compared with element 4; it's found to be smaller, so a swap occurs. Next, element 3 is compared with element 2; it's larger, so no further comparisons are made. Let's explore an example of this on screen. You'll notice an array of numbers. The first element, 35, has nothing to its left, so it remains where it is. Then element 46 is compared and is also left where it is. Next you see element 36: this is compared with location 1, and since it's smaller than 46 they are swapped; a check against location 0 shows that no further swaps need to occur for this element. At step 3 you'll notice element 9: this is compared with 46 and is therefore swapped into location 2; it is further compared with locations 1 and 0 and swapped again. Next, the element found at location 4 is compared with location 3 and swapped; it's further swapped with location 2 and location 1, and is also compared with location 0, but as it is greater, no further movement is made. The process continues, each element being compared leftward in turn, until the entire array is sorted.
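Here is a comparable sketch of insertion sort in Python, sorting in place by repeatedly comparing each element with the one on its left; the sample values are again only illustrative.

```python
def insertion_sort(items):
    for i in range(1, len(items)):
        j = i
        # Compare with the element on the left and swap while it is smaller.
        while j > 0 and items[j] < items[j - 1]:
            items[j], items[j - 1] = items[j - 1], items[j]
            j -= 1
    return items

print(insertion_sort([35, 46, 36, 9]))  # [9, 35, 36, 46]
```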
Both insertion and selection sort are straightforward approaches working on a simple paradigm. Quicksort is a more sophisticated approach that is more complex to implement; however, it shows far greater efficiency. Quicksort operates on the principle of pivots: the algorithm selects an element in the array as the pivot, then all items larger than this value are moved to the right of the pivot and all elements less than the value are moved to the left. This process is repeated for both sides of the pivot until all the items are sorted. Let's explore this in an example. Here, element 9 is selected as the pivot point. Using quicksort, all items that are less than 9 are swapped left and all items larger than 9 are swapped right, so the smaller elements have now been moved left after this first split; in this example, these smaller elements are 6 and 3. Applying the same procedure again to the resulting sub-array terminates when 3 is found to be the only remaining element, so there is nothing left to split. Now, taking the values that are greater than the originally selected pivot, you select a new pivot; in this case 36 is selected, and a further swapping of elements is performed. Finally, the remaining unsorted index locations are swapped in relation to a new pivot, and once all elements have been sorted, the algorithm terminates.
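A compact Python sketch of the pivot idea follows; real implementations usually partition in place and choose the pivot more carefully, so treat this as an illustration of the principle rather than a reference implementation.

```python
def quick_sort(items):
    if len(items) <= 1:
        return items
    pivot = items[0]                                  # pivot choice varies by implementation
    smaller = [x for x in items[1:] if x < pivot]     # everything less than the pivot goes left
    larger = [x for x in items[1:] if x >= pivot]     # everything else goes right
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([35, 46, 36, 9, 6, 3]))  # [3, 6, 9, 35, 36, 46]
```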
There are many additional sorting approaches that can be used, with some even forming hybrids of these existing ones. In practice you probably would not write your own implementations, as there are excellent implementations in every language; the goal here is to show how they operate under the hood so that you can choose the best one when faced with a given problem. As with data structures, there is not one sorting algorithm that provides the best result in every given scenario: each approach has its trade-offs and is more effective in some environments than others. You will learn more about the efficiency of these approaches when you compare them using Big O notation soon. In this video you've explored sorting algorithms and the different sorting methods that are available to you. You have also learned about the steps required for implementing selection, insertion and quick sort, and discovered the strengths and weaknesses of each sorting approach.

In the previous video you explored sorting and were introduced to several sorting approaches that can be used on a data set. However, what if you need to search this data for a specific element? In this video you'll be introduced to some of the various approaches to searching, such as linear and binary; you'll also discover the steps involved in implementing both of these approaches and explore the advantages they offer. In computer science, searching is a fundamental operation. When provided with a collection of data, there may be a need to identify specific elements within it; however, the exact description of the element leaves room for some interpretation from the outset. You might consider the question: given a hash table, is there a key-value pair that matches this key? This is a simple like-for-like comparison that produces either the absence of a matching key or the return of a unique key. Other considerations when making a search might include finding the largest number in an array, or the smallest, or returning the median number from a collection of numbers. However, what if the value does not exist, and what should be returned then? Returning a null value can interfere with an application's ability to run afterwards, so when doing a search you need to consider what safeguards should be put in place when there is no value to return. You should also consider whether the search is supposed to return the first instance of the value or the last. In the additional reading at the end of this lesson there is a link to a talk from Tony Hoare, the inventor of null, who refers to it as his billion-dollar mistake.

The simplest search that you can implement is a linear search. If you have an array of elements, a linear search begins at the start of the array and searches through it until an appropriate element is found or there are no more elements to check. In this approach the best-case scenario is O(1) and the worst case O(n), as each element would have to be checked before it's possible to say that the target element is absent. In relation to data structures, it has been shown that some have inherent sorting tendencies, such as a heap or a binary tree; you can also take any data structure and apply a sorting algorithm to it before applying a searching approach. Using a binary search halves the search space at each iteration. On screen is a data list. A binary search first checks the halfway point and determines whether that element is greater or smaller than the target. If the middle element is less than the target, the left half of the list is discarded and the right half becomes the focus. Now only that right half is queried for its middle value; again, if it's less than the target element, the left portion is once again discarded and the right half of that filtered list is examined. In this way the algorithm halves the search space at each iteration. This approach is quicker than a linear search, but it does require the data to be sorted before beginning the search. That may not seem like a reasonable requirement, but if your data is read more often than it is updated, such a solution might be an appropriate implementation. Again, as with the linear search you covered earlier, the best possible outcome is that the element is found on the first attempt, O(1). However, the worst-case scenario is less optimistic: after the first comparison the list is halved; if this iteration is not successful, it is halved again, and again after that, and so on. Therefore it can be said that after k iterations the remaining search space is n / 2^k, or in other words the search takes O(log n) time. This is considerably more efficient than a linear approach; however, it is worth bearing in mind that any perceived gain in time needs to be offset against the time taken to sort the list, and if the list is updated regularly this can become a costly process. In this video you explored binary and linear search functions, the steps taken to complete these searches, and how they work. You also learned how the application of Big O can be used to estimate the efficiency of both, and you've even seen how, through some clever adaptations to a standard approach, it's possible to seriously improve performance.
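To summarize the two approaches in code, here is a minimal Python sketch of a linear and a binary search; the convention of returning -1 when nothing is found is an assumption, one of several reasonable ways to avoid returning null.

```python
def linear_search(items, target):
    """Check each element in turn: O(1) best case, O(n) worst case."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1          # decide explicitly what to return when nothing is found

def binary_search(sorted_items, target):
    """Halve the search space each iteration: O(log n), but the list must be sorted."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1          # discard the left half
        else:
            high = mid - 1         # discard the right half
    return -1

data = [3, 6, 9, 35, 36, 46]
print(linear_search(data, 36), binary_search(data, 36))  # 4 4
```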
Keep up the great work. In the next lesson you'll start learning how to work with algorithms.

The divide and conquer paradigm offers a useful framework for thinking about how to solve a given problem. It encompasses two principles discussed in this module, namely recursion and breaking problems down into smaller problems. In this video you will learn about the divide and conquer paradigm and how it offers a framework for problem solving. You will also learn about the mandatory and optional steps involved in divide and conquer, and what advantages this paradigm brings to computers. So how does a divide and conquer algorithm work? It comprises two steps with an optional third: divide and conquer, which are the mandatory steps, and combine, which is optional. In the divide step the input is split into smaller segments that are processed individually. In the conquer step every task associated with a given segment is solved. The optional last step, combine, is combining all the solved segments; this will not happen in every instance, but it will in the example provided. The following is an example of the divide and conquer paradigm. When discussing sorting approaches it was shown that there are many ways to solve a problem, so, using sorting as our example, let's discuss another sorting approach that can be implemented with divide and conquer. Merge sort is a sophisticated approach for sorting an array. It starts by halving the array; these two halves are then halved, and halved again, and this process is repeated until there is only one element remaining in each part. Then the process reverses, and each smaller list is sorted before rejoining the part it was halved from. This solution is based on the idea that by breaking a problem into smaller problems, it is easier to complete the overall task.

To gain greater intuition on how divide and conquer applies to merge sort, let us explore a real-world example. Consider that you and three housemates have decided to do the shopping together, and after compiling an extensive list you all go to the supermarket. One solution might be that you all walk around the supermarket together and pick up each item from the list. A better approach might be to break the list into four parts and each take a section; this would reduce the overall time spent in the shop, though it might cause an amount of overlap between parties. A further optimization might be to first sort the list so that all similar items are together, for example all the beverages, the fruits, the meats and so on, and then assign each member a given area of the supermarket. This would be an even more efficient approach to completing the task; they say a problem shared is a problem halved. So how does this work on computers? There are two immediate advantages, namely parallelization and memory management. Parallelism is when you have different threads or computers working on the same problem at the same time to complete it more quickly, and a benefit of employing divide and conquer solutions is that you can then employ parallelism when coding. Now let's explore memory management with the merge sort example. Consider that each sub-array can be sent to a different core or server, depending on the architecture of your organization, and the results are then returned. It might be that the data being processed is too large to hold in memory and must be processed in chunks. Additionally, your boss may have provisioned access to cloud computing, so the solution can involve accessing an online server and offloading some of the problem from the company servers. All of this contributes to managing your available memory. In this video you were introduced to the paradigm of divide and conquer through the example of merge sort, and how it offers a framework for problem solving. Some of the terminology associated with it was demonstrated, as well as how this approach lends itself to real-world computer optimization. You also learned about the mandatory and optional steps involved in the divide and conquer paradigm and what advantages this paradigm brings to computers.
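As a sketch of divide and conquer in code, here is one possible merge sort in Python; the divide step splits the list and the combine step merges the sorted halves back together.

```python
def merge_sort(items):
    # Divide: split until only one element remains in each part.
    if len(items) <= 1:
        return items
    middle = len(items) // 2
    left = merge_sort(items[:middle])
    right = merge_sort(items[middle:])

    # Combine: merge the two sorted halves back together.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([35, 46, 36, 9, 6, 3]))  # [3, 6, 9, 35, 36, 46]
```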
In a previous video you learned about the divide and conquer paradigm. In this video you are going to learn about recursion and how to implement the requirements for a recursive solution. One of the basic tenets of any given language is its ability to perform loops. Loops enable us to perform actions repeatedly until the desired output is achieved, and unlike humans, computers never tire of performing the same mundane task over and over. An alternative to solving a problem with a loop is recursion, the practice of having functions call themselves, which is the focus of this video. Recursion is when a function calls itself with a smaller instance of a problem, repeatedly, until some exit condition is met. So what is required for recursion? There are three requirements for implementing a recursive solution, namely the base case, the diminishing structure and the recursive call. Let's look at an example relating to binary numbers to better illustrate these three requirements.

Consider a challenge where you are tasked with finding the exponent of a number. Recall that calculating an exponent of a number is to determine how many potential permutations can be derived from it; this was discussed when demonstrating how binary can be used to represent a range of characters. The base case ensures that the function will not continue to call itself and eventually ends. Line 1 outlines a function that takes two arguments, x and n. The base case is if n equals zero; in this instance the function terminates. Line 4 is the second part of the conditional statement: if the termination point has not been reached, call the function again with a reduced structure. In this instance the goal is to multiply x by the result of that call in order to find the total number of potential states that could exist for the binary number. Reducing the input value is as important as establishing a base case; this way the function will eventually reach the base case and cease to call itself. The third component of a recursive function is to include a call to itself, which happens on line 5, where the exponent function is called with the diminished structure. The structure can be said to be diminished because its size has been reduced from one call to the next. Each time the function is called, a new instance is created on the call stack. Calling the above function with x equal to 2 and n equal to 3 will result in three further instances being created and placed on the call stack. This increases computational cost, as resources are required to make a function call; however, the computation from each call is retained on the call stack, which can be useful when computing hierarchical problems, or problems where one can benefit from knowing which steps resulted in a given outcome, like traversing a graph.

Let's explore another example of the use of recursion. Consider the video on binary search: a binary search function accepts a list and a target value. First, the middle point of the list is checked against the target element to determine which half of the list to search next, and this process is repeated until the target element is found or deemed not to be there. You might consider solving this problem through a loop or recursively; the input to the recursion would be a list and a search element, and the recursive function would call itself until the target endpoint is reached. So why use recursion when a simple loop will do? Some problems lend themselves well to recursive calls. Consider calculating the Fibonacci number at a given position: Fibonacci is a sequence of numbers where the first two numbers are zero and one and every other number is the sum of the previous two numbers in the sequence. Calculating the result involves passing a number, calculating the output, changing the number, then calling the function again with the new integer input. Writing the code this way means that you can simply call the function with a different integer and it will return a breakdown of the required steps. Readability is a strong plus for recursion: sometimes, when a problem requires many checks, a loop can quickly become unwieldy, while recursive solutions reduce the amount of code required to solve a problem and can be easier to read and understand. Finally, one would employ a recursive approach as part of a divide and conquer solution, where the problem is broken into smaller steps that are repeated to come upon the optimum solution. In this video recursion has been introduced. You have learned that while recursion can add some computational overhead to a problem, it can also result in elegant, easily read code, and that recursion epitomizes a divide and conquer solution, breaking the problem into its smallest components and solving those.
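The exact code shown on screen isn't reproduced here, so the following Python function is only a plausible reconstruction of the exponent example, with the three requirements marked in comments.

```python
def exponent(x, n):
    # Base case: stop calling ourselves once n reaches zero.
    if n == 0:
        return 1
    # Recursive call with a diminishing structure: n shrinks by one each time,
    # and x is multiplied by the result of the smaller problem.
    return x * exponent(x, n - 1)

print(exponent(2, 3))  # 8; each call adds a new frame to the call stack until n reaches 0
```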
In the build-up to dynamic programming, you have learned about the divide and conquer paradigm and recursion. In this video you will learn about the concepts of memoization and dynamic programming. Dynamic programming is a programming paradigm that promotes solving problems by breaking them into smaller problems and solving these; the solutions are then stored in an appropriate data structure for later use. The advantage of this is that if these sub-problems need to be computed again, one only looks up the answer instead of computing the problem again. The technique of solving sub-problems and storing them to save time on a potential future lookup is known as memoization. Dynamic programming relates to two concepts already encountered in the previous videos, so let's have a quick refresher. The first is divide and conquer, that is, taking one large problem, breaking it into a smaller set of sub-problems and then solving these. The second is a subset of this known as recursion: the practice of coding a solution that avoids running loops and instead uses multiple self-calls to come upon a solution. Dynamic programming is an extension of these approaches which, in addition, involves keeping a record of the results generated from running the sub-problems each time they are newly run. In subsequent runs, instead of recomputing results, a lookup is queried for the last time the question was asked. As said, this approach is called memoization, and to reinforce the concept: this is when the results of previous calculations are stored and used in place of re-running the calculations whenever the program identifies that the computation has already been run for a previous task.

To exemplify this, consider the question posed in the video about binary numbers: how many combinations are possible with a binary lock of six digits? In a previous video it was shown that you can discover this through exponentiation, or finding the power of a number, so a lock with six digits has 2 to the power of 6, or 64, combinations. 2 to the power of 6 equals 2 × 2 × 2 × 2 × 2 × 2. Alternatively, you can divide these into two groups, where you calculate 2 × 2 × 2 and again 2 × 2 × 2; that results in 8 × 8 and again gives the same result. Applying a divide and conquer approach to computing this efficiently using memoization would reduce the computations: the first 2 to the power of 3 would be computed, then reused for the second bracket, reducing the overall computation required.
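A minimal sketch of memoization in Python follows, using functools.lru_cache on the Fibonacci calculation mentioned in the recursion video; the cache means each sub-problem is computed only once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)        # memoization: every sub-problem result is stored
def fibonacci(n):
    if n < 2:
        return n                # base cases: fib(0) = 0, fib(1) = 1
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))            # 55, with each sub-problem solved only once
```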
So what sorts of problems are good fits for a dynamic solution? The dynamic programming approach is commonly applied to combination or optimization problems. One example of a combination problem already mentioned is the Fibonacci sequence. Another instance you may encounter in an interview is the knapsack problem, which is both a combination and an optimization problem. Say, for a planned camping trip, you can fill a knapsack with required items. Each item has a weight cost: a torch is one kilogram, water is two kilograms and the tent is three kilograms. Additionally, each item has a value: the torch is one, water is two and the tent is three. In short, the knapsack problem outlines a list of items that weigh different amounts and have different values, and you can only carry so much in your knapsack. The problem requires calculating the optimum combination of items you can carry if your backpack can hold a certain weight: the goal is to find the best return for the weight capacity of the knapsack. To compute a solution for this problem you must select the set of items that fits within a given weight and maximizes the value, and the weight that can be carried will change. This problem can be applied to resource allocation, where you have so much CPU power and X tasks to run; just like the capacity of a CPU dedicated to completing tasks, sometimes the weight limit will be 7 kg and other times it might be 10 kg. Dynamic programming involves saving the computations used to come upon a given solution, so if you have computed an optimum selection for 7 kg and the limit is raised to 10 kg, you will not have to rerun the initial computations again; this can be a real time saver. When computing dynamic programming solutions you must first determine the objective function, that is, the description of what the optimum outcome is to be. Next you must break the problem into smaller steps; one approach already discussed for achieving this is the use of recursive functions, that is, functions that call themselves repeatedly until a solution is come upon. They should be written in such a way that you can change the outcome without altering the code for the methods already written. To conclude, in this video it has been shown that dynamic programming is an approach that looks to optimize solutions to a given problem. It uses the principles of memoization and overlapping sub-problems to identify when an objective function can be achieved quickly, optimizing the computation steps required.

In this video you will learn how you can solve complex problems by using greedy algorithms. There is a philosophical principle called Occam's razor, which states that the simplest solution is almost always the best one; this problem-solving principle argues that simplicity is better than complexity. In our case, the greedy algorithm is the simple solution. It is an alternative approach to dynamic programming, as it seeks to present an immediate solution for a task and favors local optimization over a more holistic global approach. When engaging with a problem subdivided into segments, utilizing a dynamic programming approach would find a globally optimal solution, so that each sub-problem is solved and the best subset is selected and implemented. A greedy approach would instead look at the list of solutions and implement a local optimization: usually, the currently most rewarding option is chosen. To make this clearer, let's take the instance of a CPU that has a list of tasks to be completed. Applying the dynamic programming approach would entail selecting a subset of activities that could be completed within a given time and executing these tasks. With the knapsack analogy, this would involve determining what subset of items to pack that would maximize the value; a greedy algorithm approach would be always to select the most valuable item and place it in the bag, giving no thought to what other items this would exclude from the process. Thus, in our CPU example, a greedy approach would involve selecting first the shortest-running program, then the next shortest, and so on. While this might not lead to a globally optimized solution, it reduces any overhead in calculating the most efficient subset of items.
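A small sketch of the greedy idea, assuming the camping items and values from the knapsack example above and a hypothetical capacity of 4 kg; it always grabs the most valuable item that still fits, which is locally optimal but not guaranteed to be globally optimal.

```python
def greedy_pack(items, capacity):
    """Greedy knapsack sketch: always take the most valuable item that still fits."""
    chosen, remaining = [], capacity
    for name, weight, value in sorted(items, key=lambda item: item[2], reverse=True):
        if weight <= remaining:
            chosen.append(name)
            remaining -= weight
    return chosen

# Items from the camping example: (name, weight in kg, value)
items = [("torch", 1, 1), ("water", 2, 2), ("tent", 3, 3)]
print(greedy_pack(items, 4))   # ['tent', 'torch'] - locally best picks each time
```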
To better understand how these two approaches differ, let us consider the problem of the shortest path. The image shows a map with nine different nodes: A, B, C, D, E, F, G, H and S. Each node is connected to another node by a weighted path, and this weight reflects the cost that would be incurred by selecting that path. You are now faced with making a journey from E to F and want to plot the most effective route. A dynamic programming approach would involve creating a table and calculating each potential node reachable from E; from there it would introduce the next set of nodes and calculate the accumulated cost. This approach would undoubtedly arrive at the most efficient solution, and by using memoization, after the initial calculations were made they would be saved, so subsequent journeys would benefit from a quicker computed time. This is a bottom-up, global approach to the problem. A greedy approach differs in its methodology. Instead of trying to find an optimal subset of connected routes, it would begin at node E and look at each available connection; thus it would have a selection of weights, namely 5, 3, 2, 4 and 12, which correspond to nodes C, B, D, G and H. The lowest value in that array is 2, which corresponds to node D, so, following the greedy principle, it makes this selection and progresses to the next node. Assuming that the data structure is a directed graph, it would be presented with a further three nodes, A, F and G, which have the values of 7, 5 and 6 respectively. Since F is the final location, it selects F and arrives at its destination, happily having amassed a total penalty of seven for travel time: going from E to D and then to F carries a weight of two plus five. Visually you can see that this is the most efficient approach, and it was arrived at without creating an exhaustive table of combinations and computing all routes. However, had the edge between G and F carried a penalty of two, the greedy approach would have selected a less optimal solution. This is the trade-off when choosing a greedy approach over a dynamic one: while the overhead for a greedy algorithm is low and coding a solution is quite straightforward, it will not always guarantee that the best option is returned. You now have a greater understanding of the greedy algorithm approach. In addition, you have seen how it compares with a dynamic solution, and thus deepened your grasp of the strengths and weaknesses of this alternative approach. Next time you are plotting a route in Google Maps, consider the selection of routes that are offered and think about how those routes might have been calculated.

Well done, you've reached the end of the working with algorithms module. Let's take a few moments to review what you learned during this module. You began the module with a lesson on sorting and searching. First you learned about why sorting is important, explored the three main methods for sorting, namely selection, insertion and quick sort, examined the steps each method uses to sort data, and explored the strengths and weaknesses of the three sorting approaches when choosing an algorithm for a given solution, importantly, that there is not one sorting algorithm that provides the best results in every given scenario.
Well done, you've reached the end of the Working with Algorithms module. Let's take a few moments to review what you learned during this module. You began the module with a lesson on sorting and searching. First, you learned why sorting is important, explored the three main sorting methods, selection sort, insertion sort and quick sort, and examined the steps each of these algorithms uses to sort data. You also explored the strengths and weaknesses of the three sorting approaches when choosing an algorithm for a given solution, and, importantly, that there is no one sorting algorithm that provides the best results in every scenario.

Next, you learned about searching algorithms, which are a fundamental concept in computer science, and some of the various methods that algorithms use for searching. You explored two core approaches to searching: linear and binary. Linear searches progress through every item in a given data structure until a specific item is found, whereas binary searches halve the search space at each iteration. You also learned the steps involved in implementing both approaches and some of the advantages each offers, and you took a deep dive into time and space complexity for both searching and sorting algorithms.

You then moved on to the next lesson, where you were introduced to working with algorithms and learned about different approaches to working with them. First, you explored the divide and conquer paradigm: in the divide step, the input is split into smaller segments that are processed individually; in the conquer step, every task associated with a given segment is solved; and in the optional last step, combine, all the solved segments are combined. You also discovered how the divide and conquer technique offers an effective framework for problem solving and the various benefits that it provides. Next, you explored another important algorithmic approach: recursion. Recursion is when a function repeatedly calls itself with a smaller instance of a problem until some exit condition is met, and you learned that there are three requirements for implementing a recursive solution, namely the base case, the diminishing structure and the recursive call.

You were then introduced to dynamic programming, a programming paradigm that promotes solving problems by breaking them into smaller problems. You explored the concept of memoization, the technique of solving subproblems and storing their results to save time on a potential future search, and you examined the process involved in computing a dynamic programming solution. Essentially, this can be outlined as first determining the objective function, that is, the description of what the optimum outcome is to be, next breaking the problem into smaller steps, and then deciding which dynamic programming approach you would like to apply to achieve your desired outcome. Finally, you learned about greedy algorithms. In comparison to the dynamic programming approach, a greedy approach looks at the list of available options and implements a local optimization, usually choosing the currently most rewarding option. You explored how a greedy approach could be implemented to reach a solution and that there is a trade-off when choosing a greedy approach over a dynamic one: while the overhead for a greedy algorithm is low and coding a solution is quite straightforward, it will not always guarantee that the best option is returned.

With all the knowledge you have acquired, all that is left is to complete the final quiz for this module before moving on to the final module, where you will complete the graded assessment, and then you've really made it. You're so close. Good luck and enjoy the rest of your journey.
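Before moving on, here is a small sketch that ties together two of the ideas recapped above: the three requirements of a recursive solution, the base case, the diminishing structure and the recursive call, combined with memoization. Fibonacci numbers are used purely as a familiar stand-in problem; the course's own exercises may use different examples.

```python
from functools import lru_cache

@lru_cache(maxsize=None)      # memoization: store solved subproblems for reuse
def fib(n: int) -> int:
    if n < 2:                 # base case: the smallest instances are answered directly
        return n
    # diminishing structure and recursive call: the function calls itself
    # on strictly smaller instances of the problem
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed with only 31 distinct subproblem evaluations
```

Without the cache, the same call would recompute overlapping subproblems exponentially many times, which is exactly the cost that memoization avoids.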
In this course you have learned a range of concepts and skills as you prepare for a coding interview. Let's take a few moments to recap the key topics that you learned about. In the first module, you started by discovering what a coding interview is, what it can consist of, and the types of coding interview that you might encounter. Your first lesson focused on the technical coding interview, whose primary purpose is to determine that you are technically capable of the role's responsibilities, and you learned about the steps to keep in mind when this interview is conducted. You were taught that using the appropriate tools is always important and that you have to keep time constraints in mind. You also explored how you can prepare yourself for a coding interview and the importance of first impressions, including a focus on communication, such as explaining your thought processes and handling mistakes. You learned about the STAR method and how to use it to your benefit when communicating with interviewers, how to work with pseudocode to demonstrate how you might reach a solution, some important tips for practical solution design, and how to test your solutions.

In the next lesson, you got an introduction to computer science, starting with an overview of binary, where you learned about the difference between base 10 and base 2. You then discovered positional notation, the use of the position of a digit to denote a progressive increase in value. You then moved on to explore key components of computer memory and how it works; you should now better understand the various layers of memory and be able to describe the differences between them. You learned about the transfer rate, the speed at which a computer can transfer data from memory into the cache for processing. You then moved on to time complexity, where you learned how to evaluate time efficiency, or gauge performance by the time taken to complete a task, and you discovered Big O notation, a metric for describing an algorithm's efficiency. You explored space complexity, which is essentially the space required to compute a result, and learned that decisions are not just based on the speed of an algorithm but also on how much memory capacity a given solution will use; there will always be a choice to prioritize speed or compactness.

In the second module, you learned about data structures. This ranged from basic data structures like strings, Booleans and arrays to more advanced data structures like collections, graphs and heaps, and how each one comes with certain benefits and limitations. You explored the different types of data structures and how they are classified into two main branches, linear and non-linear. Next, you were introduced to stacks and queues, abstract data structures that both have specific characteristics around how elements are added and removed. When you visited queues, you learned that a queue is very similar to a stack in that it tends to have the same methods: it can create, insert, remove and check the state of the queue. Unlike a stack, however, a queue works on a first in, first out, or FIFO, basis. Finally, you found out that trees are a powerful data structure that gives you great flexibility in adding and searching values.

Following this, you went on to examine some advanced data structures, namely hash tables, heaps and graphs. You discovered how heaps can be used to organize elements from least to most important and how, by limiting the functionality of heaps, productivity can be increased. Finally, you examined graphs. The structure illustrated is a graph made up of nodes, which denote destinations, and edges, which show how each node relates to another. The presence of values between the nodes means that this is a weighted graph, and because there are no arrows present, it is an undirected graph. In contrast to a directed graph, an undirected graph has no order of precedence, and you learned that a connection in a directed graph is considered weakly connected if the edge is only one way; however, if there are two connections going either way between two nodes, then it is said to be strongly connected.
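To tie the graph terminology above to code, here is a small sketch of how such structures are often represented with adjacency lists in Python. The node names and weights are arbitrary placeholders, not the graph from the course illustration; the key point is that an undirected edge is stored in both directions, while a directed edge stored only one way gives you the weak connectivity described above.

```python
# Weighted, undirected graph: every edge appears under both of its endpoints.
undirected = {
    "A": {"B": 3, "C": 1},
    "B": {"A": 3, "C": 7},
    "C": {"A": 1, "B": 7},
}

# Weighted, directed graph: A and B are only weakly connected (one-way edge),
# while B and C are strongly connected (edges in both directions).
directed = {
    "A": {"B": 3},
    "B": {"C": 2},
    "C": {"B": 4},
}

def neighbours(graph, node):
    """Return the nodes reachable in one hop, with the weight of each edge."""
    return graph.get(node, {})

print(neighbours(undirected, "A"))  # {'B': 3, 'C': 1}
print(neighbours(directed, "B"))    # {'C': 2}
```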
In the third module, you had an introduction to algorithms, including the types of algorithms available to you and how best to work with them to sort and search your data. You started by exploring sorting algorithms and how working with sorted data, or having the ability to sort your own data, can result in significant time savings. You discovered why sorting is important and explored the three main sorting methods: selection sort, insertion sort and quick sort. Next, you went on to discover searching algorithms and how each type provides its own framework for problem solving. You explored two core approaches to searching, linear and binary: linear searches progress through every item in a given data structure until a specific item is found, whereas binary searches halve the search space at each iteration. You also gained insight into time and space complexity for both searching and sorting algorithms.

You then moved on to the final lesson, where you were introduced to working with algorithms and the different approaches to doing so. First, you explored the divide and conquer paradigm: in the divide step, the input is split into smaller segments that are processed individually; in the conquer step, every task associated with a given segment is solved; and in the optional last step, combine, all the solved segments are combined. Next, you explored another important algorithmic approach, recursion, which is when a function repeatedly calls itself with a smaller instance of a problem until some exit condition is met, and you learned that there are three requirements for implementing a recursive solution, namely the base case, the diminishing structure and the recursive call. You were then introduced to dynamic programming, a programming paradigm that promotes solving problems by breaking them into smaller problems, and you examined the process involved in computing a dynamic programming solution. Essentially, this can be outlined as first determining the objective function, that is, the description of what the optimum outcome is to be, next breaking the problem into smaller steps, and then deciding which approach you would like to apply to achieve your desired outcome. Finally, you learned about greedy algorithms. In comparison to the dynamic programming approach, a greedy approach looks at the list of available options and implements a local optimization, usually choosing the currently most rewarding option, and you looked at an example of how a greedy approach could be implemented to reach a solution. While the overhead for a greedy algorithm is low and coding a solution is quite straightforward, it will not always guarantee that the best option is returned, so there is a trade-off when choosing a greedy approach over a dynamic one.

Well done, you have covered so many important concepts and approaches throughout this course. It is a real achievement, and it should also serve to prepare you for any potential coding interviews that you may go on to attend. All you have left to do is to take the final course quiz before wrapping up the course. Good luck.
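Since the recap above leans on the contrast between linear and binary searching, here is one last minimal sketch of both approaches side by side. It assumes a sorted Python list of integers; the function names are illustrative, not taken from the course exercises.

```python
def linear_search(items, target):
    """Check every item in turn: O(n) comparisons in the worst case."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    """Halve the search space each step: O(log n) comparisons, but the data must be sorted."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1      # discard the lower half
        else:
            high = mid - 1     # discard the upper half
    return -1

data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(linear_search(data, 23), binary_search(data, 23))  # 5 5
```

The trade-off mirrors what you learned in the module: binary search is dramatically faster on large collections, but only once the cost of sorting the data has been paid.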
You've reached the end of this coding interview preparation course. You've worked hard to get here and accumulated a lot of knowledge along the way, and you've made great progress on your developer journey. You should now understand the unique and challenging aspects of the coding interview. Specifically, you should be well prepared with interview soft skills that will help you to be ready when you go for your coding interview, the foundations of computer science, and some problem-solving methods to apply to any challenge you may face in an interview scenario. Following your completion of this course in coding interview prep, you should now be able to prepare for the interview process, apply strategies and tips for successful interviewing, and openly discuss the emotional components of the process. The key skills measured in the graded assessment revealed your knowledge and understanding of data structures in the context of coding interviews, the concepts and usage of algorithms, how to visualize an algorithm, and how to combine new and previously learned coding patterns to solve problems.

Congratulations, you've successfully completed all of the courses in this program. At this stage, you may want to consider registering for another course, specialization or certificate pathway. Certifications provide globally recognized and industry-endorsed evidence of mastering technical skills, whether you're just starting out as a technical professional, a student or a business user. The courses you have completed and the range of practical projects you have in your portfolio will prove your knowledge and ability as a developer. They can serve to demonstrate your skills to potential employers, and not only do they show employers that you are self-driven and innovative, they also speak volumes about you as an individual as well as your newly obtained knowledge. You've done a great job so far and you should be proud of your progress. The experience you have gained will show potential employers that you are motivated, capable and not afraid to learn new things. Again, congratulations on finishing this course, and good luck with the rest of your educational journey.