Transcript for:
4-Mastering Test Documentation - How to Write Bug Reports

Yeah, new but not so new here. You might have met Alex before. So, first things first: if you haven't joined the Slack yet, please go ahead and do that. I think our co-host can throw some links in the chat if possible. It will be really vital for bouncing ideas off of each other, and for communicating with the people who help you through the internship, if you've signed up for that. Today we're starting on lesson four. We're going to be talking about test documentation. The first couple of lessons here are to build up your QA skills: understanding the terms, how we actually go about testing, and what information we're trying to pull from these documents so we can start crafting something and start testing.

A little bit about me, since I'm new here. My name is Patrick. I'm another instructor here with Careerist. I've been doing QA for a little over eight years now. I've worked at places like Intel and Google, and now Meta, with some other places in between. When I originally joined, it was mostly VR; there was a lot of VR testing. And you'll see this as you go through your career: the IT industry moves through different niches. Right after VR, it was all about cloud computing. Everybody had to get on the cloud while we were all still wondering what the cloud was, right? Apple took good advantage of that, and so did Google. Everything we do now is pretty much on the cloud, which just means things are hosted elsewhere: you go into Google Drive or Google Sheets, and it's not living on your computer. Now the big thing is AI, of course. I'm working on AI, and you might be too, depending on what industry you get into.

Also, a lot like you all, I did not start my career in QA. I was headed in a totally different direction out of school. I wanted to do animation and art, making things move, motion pictures, Pixar. I grew up around someone who did the storyboards for big action movies, The Mist and The Hulk, and recently even Star Wars. But as I was nearing the end of my degree, I looked into the industry I was getting into and realized maybe it wasn't for me. It was going to be a lot of work, a lot of hours, and the starting pay was not as great as I thought it would be, for something that required a lot of technical skill that couldn't be replicated by AI.

Anyone else having audio issues, by the way? If so, you might need to check your Zoom and see what your output source is in the settings. Okay, great. Anyway, I found myself wanting to do something a little different. I put my resume out there, I put a lot of networking in, and I did it the old-fashioned way, which was really difficult; there was no boot camp for QA. I eventually found my way into Oculus managing the dogfooding program. Dogfooding is a type of testing program we run, usually in IT, where everybody else in the company gets to test the product.
If we look at the software development life cycle as we've covered it so far, the only people looking at the product that's going to be sold are the engineers and QA: two departments of people. That leaves out anyone else who could have a voice in the product: the CEO, the president, HR, marketing, sales, all the other positions. So we give them the product to use and we gather their feedback to see what can be improved, coming from someone who hasn't been staring at the application every day for weeks or months, or gosh forbid a year.

That's when I discovered there was QA: this whole other department that you didn't need a full computer science degree to get into, just some background knowledge, right? From there I started applying, I did a lot of interviews, and now I'm bringing some of the lessons I learned personally to you, things I wish I could have said at the time.

The big foundation we have in QA is reading certain types of documentation. So we're going to review a couple of different types of documentation that get given to us, and then we're going to end up writing our own. I would say half of QA is actually testing and doing things, and the other half is writing it down: documenting, marking each test case as passed or failed, and then writing a report at the end about what happened and how many bugs you found. It gives some validity to the job you're doing, so that your lead, your boss, your managers, and the engineers can see that you're actually doing something instead of just taking your word for it.

Do I have a QA channel where I talk about QA? I do not, but I think some other instructors might, if you ask around. So for today, we're going to look at requirements documentation, we're going to see how we work with documentation and without it, and we're going to actually write some test cases together. And if you want to give input on cases you come up with, any good ideas, any ideas at all, let me know; we'll write them all together. I like being interactive and chatting with you, and I think that's part of a great experience here. If you have questions, just let me know while we're on the slide and I'll get to them. If you have other general QA questions, I'm happy to answer them at the end. I like talking about QA, so if you've got any worries about what you'll do once you get there, I can help answer some of that.

Our goals: we're going to learn about requirements documentation, understand how we work with and without it (there's a good exercise on that today), and then practice writing some test cases. Writing the test cases is pretty quick; the hardest part is coming up with them. Writing them down is super easy.

Just a little question for you. There's no right or wrong answer, but I'm curious where your mind is at: when QA engineers test software or a feature, how do you think we differentiate between expected behavior, what's supposed to happen, and a bug, what's abnormal, unusual, not supposed to happen? How do we figure that out when we're opening an application for the first time?
Maybe you're new to TikTok, you've never seen TikTok before, and you look at it: how do we figure out what's a bug and what isn't? Let me know.

Some good ideas here: maybe describing things as functional or not. We can look at developer notes, or things that have been passed to us in a meeting. Requirements, right? Where do we get these requirements from? We can collaborate with a designer, someone who designed the UI, the user interface, and ask them what it's supposed to look like. I see a lot of UI bugs come up where things overlap each other; those can be kind of obvious. But what happens if a color is just totally different? Designs change. How do we know if a design has changed? There are a lot of documents for this, and these are all really good ideas, all methods you could use.

The very first thing we do to determine whether something is a feature or a bug is our own quick research. If you have questions, try reading a little to see if you can answer them yourself. Usually we'll be given the holy grail of documents: the product requirements document, the PRD. We're going to remember this acronym forever now, because it's going to be in every project we get, and it's our source of truth for what is and isn't supposed to be in the software and how it's supposed to behave. But it's kind of a gloss; it doesn't cover every single scenario, so you might still have questions. After reading the PRD, you might ask your QA lead; maybe they have more information, maybe they've been in other meetings. If you still can't get your question answered, you might ask the engineer working on the feature directly. Since they're building it, they should be able to answer, because they'll need to write the logic for however it's supposed to work. We can also have meetings with them, and we can talk to the design engineer, as someone mentioned. Then there's usually some sort of bug bash we do together, where the engineers also start testing to see how it's performing, and you can clarify questions there. But our very first step, bringing it all the way back, is the PRD.

So, at the very beginning of a project, let's imagine we're in an Agile Scrum environment: we have sprints going on, each feature gets worked on for a week or two, and the developers, or maybe the product manager or product owner, will give us this document to read so we can understand how the feature works. It's usually shared in something like Google Sheets so anyone can collaborate on it. It's usually made by the product managers, product owners, and business analysts, people who are not QA and not engineers, though the developers will usually give their input at some point after they've read it. It's created by the people on the business side who are the idea makers. The product manager is someone who's responsible for the overall success or failure of the project; it's a pretty high-up position.
You'll see them holding daily or weekly meetings between everybody in the group working on the project, mostly the heads of each department, so they can align on what's going on, what needs to be worked on, whether there are any blocking issues preventing the application from being made, whether people leaving the company need to be accounted for, or whether someone needs access to data or a tool. The product manager is there to facilitate that, or to delegate it to someone else, usually the product owner. So the product manager is usually the good-idea fairy: they come up with "hey, maybe we should include comments in our product," or some sort of reel feature where you can have 30 seconds of video; and the product owner's job is to actually go through and implement it, create all these documents, and work out how the developers need to do their job. The business analysts are there to forecast the position of that feature: whether it's going to be a success, whether it's worth dumping money into. There's a lot of money being dumped into AI right now, and features being made that never reach market; it's just research, right? They're trying to understand whether they can even do it. We usually get this document provided to QA somewhere in the planning phase.

Let's check out an example. As junior QA, we might not be the first to look at it. When it gets handed to QA, it probably goes to our QA leads, who usually manage a whole feature, and they'll pass it down to us after they've reviewed it, so they can answer some questions for themselves first. And I like this PRD here. It may not be one that was actually used, but it's very similar to what I see every day, because I'm working on new features at least once or twice a week. I'll make it a little bigger for us. It's broken down into several sections. Usually we'll have an introduction describing what the product is for and who it's for. It answers some questions about why we're doing this, because the most successful products answer a question, right? We often go astray trying to create something cool to sell to people when nobody has a use for it. We need somebody who wants to buy this, some sort of need. Then it goes through and actually describes the features and how they work.

Someone's asking if these are slides for another course. This should be lesson number four; sometimes I get off by one, but it should be test documentation. Look for number four when in doubt.

I, as QA, will be going through this from my perspective, as if we're about to start writing test cases for it. It's our job to test based off of whatever this document says. Since we're in the planning phase, we don't have an actual working piece of software yet to look at and play around with, so we don't have a firm grasp of how it works or how each transition is supposed to happen as you go between pages. So usually they'll give a glossary of sorts describing all the features inside it. And right off the bat, I notice I have at least four different types of features. So now I'm starting to think: how many test cases am I going to need to write to test each one of these?
That's at least four: we can post, comment, reply, and vote. This looks like a feature for some sort of blog or website, a community where people can post things, comment, and vote, similar to Reddit or Hacker News. And then we have different views. Just from having used social media for so long, I know you can usually do the same actions from different views, right? So I'll say we need to test everything again in each view, because we don't want to take it for granted. Just because you can post something one way doesn't mean every way works. On Instagram, say, you can make a post from the homepage, you can make a post from your profile page, you can send something to your story from three different areas, also known as entry points, and they can break independently: one path might work and the other two might not. So we want to test them all in these different views. That's at least four test cases for the index view, another four for the detailed view, and four more for the profile view, so twelve here.

Then it introduces us to user types: non-registered users, registered users, contributors, and admins. Sorry, there are four here; one isn't bolded. We'll want to gauge these different user types based on their capabilities. Non-registered users probably can't do much except view things. Registered users have the ability to vote, but that's all they can do. Very odd; they can't post or comment. And contributors can do all the normal things we'd expect: post, comment, upvote, all four features. A funny thing is that our product manager wants to call these contributors "hunters." So from then on, to avoid confusing anybody, when we start filing bugs and writing test cases, we'll also refer to contributors as hunters. We'll have a whole other slide, if we haven't covered it already, about staying consistent with feature names, like the kebab menu versus the hamburger menu and so on.

The index views section explains each of the index views in detail. This is where I can actually understand how I need to test the index view. We need to show posts chronologically and segmented day by day; chronologically meaning newest on top to oldest. You wouldn't want a social media page with the oldest stuff at the very top and the newest at the bottom; you'd be scrolling forever. And then we get a CTA. Does anybody know what a CTA is, what it stands for? Anyone heard this? Yeah, nice. It's a call to action. It can be anything; it's usually a popup of some sort, big or small. I most commonly think of a shopping website, maybe a clothing site, where a popup appears in the browser saying, "sign up now for 10% off your next order; enter your email address." That's a call to action. Or when an app introduces a new feature and wants you to click on something; like when Instagram introduced a way to share filters, a little button pops up somewhere, and that's also a CTA. So we'll need test cases for all of these; that's at least three more.
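Coming back to those feature-by-view combinations for a second, here's a tiny sketch of how you could enumerate every pair so nothing gets skipped. The feature and view names come straight from this PRD; the title wording is just one way to phrase it:

```python
from itertools import product

# Names taken from this PRD: four actions, three views.
features = ["post", "comment", "reply", "vote"]
views = ["index view", "detailed view", "profile view"]

# One candidate test case per (feature, view) pair -- 4 x 3 = 12 titles,
# because any one entry point can break independently of the others.
for feature, view in product(features, views):
    print(f"Verify a hunter can {feature} from the {view}")
```

Each printed line is a candidate test case title; the point is that every entry point gets its own case.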
These hunters, the contributors, are going to be highlighted, so their profile name should probably be displayed in some special way. We don't know what that looks like yet, so we'll need confirmation from the designers, but we know they should stand out somehow.

Oh yeah, another good example of a CTA: when you're about to buy something and they suggest two additional items. Amazon does that all the time, and other shopping sites too: things you'll enjoy, or things that go along with your purchase. Say you buy a toy, and something pops up saying, "do you want to buy batteries for this? It requires batteries."

For the detailed view, we're going to need to make test cases for the information displayed in the post. We don't have the exact details on that, so it's probably something to write in our notes to ask the developers or product owner. The page should also include who's currently in the conversation and who's voted, so a list of names. Posts are going to contain all of these different elements. This is what I like to see, when we start getting details about how the feature actually works. Take the name of a post: how can we test the name of a post? We can test how many characters long the name can be, or how short. Can you make a post with no letters, numbers, or characters in it? Can you make a post with just special characters, like the caret, asterisk, and pound sign? What happens if you put profanity in there? Is there going to be a profanity filter? These are test cases we'd want to include, and then get confirmation on whether that behavior is actually supposed to be there.

Here we do get some information about the tagline. It's usually a short, catchy phrase underneath, less than 60 characters it looks like. So that's a good maximum value to test against. We'd want maximum values for the name too: how many characters you can put in. The URL should include the www for whatever page you're linking; we don't know whether we need to include the https:// part, so we can ask about that. "Submitted by" is going to show the name of the poster with their profile picture, so that should show up, and we'll want test cases for when their profile picture changes, because users do that: does it update, or does it stay the same until they post again? And I'm also thinking: can we click on their name or profile picture to go to their profile? Does that work? It doesn't say so here, but that's usually how I'd expect a social media site to work. Then votes: usually you can vote up or down, or give it a number. Can you change your vote? That's another question I have to write test cases for. And then the comments: the number of comments for that post. How many comments are displayed by default before they start being truncated, where you have to click a button to expand what was collapsed to save room? You wouldn't want a post with 200 comments on the front page where you have to scroll through all of them just to get to the next post. At some point the comments should collapse; we just don't have that information here.
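Taking that 60-character tagline limit as an example, here's a minimal boundary-value sketch in pytest. The validator is a hypothetical stand-in, since the real rule lives in the application; the "non-empty" assumption is mine, not the PRD's:

```python
import pytest

def validate_tagline(tagline: str) -> bool:
    """Hypothetical stand-in for the real validator: the PRD says taglines
    are less than 60 characters, and presumably can't be empty."""
    return 0 < len(tagline) < 60

# Classic boundary values: just inside and just outside the limit.
@pytest.mark.parametrize("tagline,ok", [
    ("a" * 59, True),    # longest tagline that should pass
    ("a" * 60, False),   # one character over the limit
    ("a",      True),    # shortest non-empty tagline
    ("",       False),   # empty tagline (assumed invalid -- confirm with devs)
])
def test_tagline_length_boundaries(tagline, ok):
    assert validate_tagline(tagline) == ok
```

The same left-and-right-of-the-boundary pattern works for the post name, once we get its limits confirmed.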
Usually in a full-blown PRD there will be some reference answering these questions; if not, the answers will be in another type of document we'll talk about shortly.

All right, the comments section. Great, this is more information. I'll just skim the rest, but each of these parts defines how we go about testing it, and we check whether it answers any of our questions. Email notifications are pretty interesting; I want to think about how we can test this, because it's kind of unique, and we can't do it quickly right off the top of my head. So: it lists all the posts for the day, it can be changed, and it's like a weekly digest; it sends to your email and you can review it. So now we need to create some test accounts and probably some test data. We need to populate posts on the web page before we release it to anybody; we're the only ones on this social media site, and it feels kind of barren. So we as testers have to create the content to test. Maybe the developers can add posts for us, and then we'd have to wait for the email to show up. That's a long time. What you can do instead is ask the developers to create a tool, some sort of button to click that just makes a bunch of random posts; it doesn't matter what the posts are. And another that sends the digest to your email whenever the tester needs it. So maybe two buttons, two things we can click: one that makes content for us and one that sends the email, because that's all we're really testing. We're not testing the quality of the posts; we're just seeing whether those two functions work. We can do that with a lot of other features as we come across them: we can ask developers to help test for us, or to build tools into our testing so we don't have to do everything completely manually. I came across this before in a VR app: we needed to see how long it took for pages to switch after you clicked the submit button, for the video to start loading. You could take a stopwatch and try to do that manually, or you can ask the developers to include a tool that gives a little readout of how long it took those two pages to switch. And they can totally do that.
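As a sketch of what those two dev-provided helpers might look like for our digest example, here's one possibility. The base URL and endpoint paths are completely made up for illustration; the real tool would be whatever the developers expose:

```python
import requests

BASE_URL = "https://staging.example.com"  # placeholder test environment

def seed_random_posts(count: int = 20) -> None:
    """Button one: ask a (hypothetical) dev-provided endpoint to generate
    filler posts. We aren't testing post quality, just that content exists."""
    resp = requests.post(f"{BASE_URL}/test-tools/seed-posts",
                         json={"count": count})
    resp.raise_for_status()

def trigger_digest_email(account_id: str) -> None:
    """Button two: fire the weekly digest immediately instead of waiting
    a week for the real schedule."""
    resp = requests.post(f"{BASE_URL}/test-tools/send-digest",
                         json={"account": account_id})
    resp.raise_for_status()
```

Two calls, and you can verify the digest arrives without waiting a week or hand-writing twenty posts.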
And I'm glad someone here fixed their audio issues; welcome back. Then we have our registration page. We probably want some error handling here to check names: if a name has already been registered, it shouldn't let two people register with the same name. We want to reject naughty names, so maybe some sort of filter for that. Profile pictures, same thing. A lot of social media sites don't let you upload your own picture; you just pick a default one, because otherwise that creates a lot of moderation work.

And here at the end of the PRD, we usually get what's called the brainstormed ideas, the P3s and P4s. P3, P4 meaning priority. The features we've seen so far are priority zero and priority one: features that must be in the minimum viable product, the MVP. Not the most valuable player, but the minimum viable product, which is our social media website in a state we can launch, with the features it's intended to have. It might not be everything, but it's the minimal amount you need to make it work. I hope that makes sense. These other ideas we can add in later. Usually there was a brainstorming session about all these features, and they ranked them from most important to least important. Things that might be fun to have but don't actually add value, monetarily, for the company; they'd just be fun for users; can come later.

Then they might include some competitors, some product inspiration. Big companies do this all the time: they take whatever is best out there and implement it in their own product. A lot of companies built off what Vine was doing with their 10-second videos, or whatever it was, and that's now in Snapchat and TikTok and Instagram, right? Same with the VR app I was working on: we already had a competitor, and they did some things well, but we wanted to change some things, so we loosely based what we were doing on them.

And now the best part, because I'm someone who's visual: we have this area called mockups. Mockups, or wireframes, or Figmas; you're going to hear these three terms used interchangeably. All this is is the designer, the UI/UX person (user interface, user experience), making a fake mockup of our website based on everything they've read here, following all these features, and deciding what it should look like. They probably had a couple of meetings with the product managers and owners to get the color scheme aligned, but for the most part they read what we just read and based it all off of this. So we as QA can now take the design and ensure the implementation matches it. We can create test cases to ensure this button over here is going to be orange; the color theme seems to be black, white, and orange; the profile pictures line up like this; the vote counts and click counts are aligned as such. And it looks real; it looks really convincing, like some of the things I've seen before for various apps. It looks like you can click on it, but it's all fake: they've created these assets, usually following a design standard for the company, and just plugged everything in. At Instagram, for example, everything they make goes through a design specialist and a whole department to ensure it all looks very Instagrammy, right? You wouldn't want to create a new feature that looks disjointed, totally different from the rest of the app.

We might also have something called tech notes. For us in manual QA this isn't really relevant; it's there for the developers, and maybe QA automation, so they can create test cases and start working on them, because they need to know the names of each button. Everything that's a button is going to have some sort of ID attached to it; if you get into automation, you'll see how that works. And then maybe there's their understanding of the go-to-market plan: when this is going to be released.
So now we can make a schedule based on how long we have before launch, or before they decide to do a code cut, where they freeze everything so you can't make changes. But that'll be more for your QA lead or QA manager to determine what kind of testing you're doing.

Yeah, sorry: it looks like this lesson should be labeled "Mastering Test Documentation," as one of the comments points out, but it's mislabeled as "Web Application Testing Fundamentals," just FYI. If it's not fixed now, it'll be there tomorrow.

All right. We do have another example of a PRD here, but we're going to use that one after our break at the bottom of the half hour; we'll come back, review it, and start creating our own test cases from it.

Our next document is the TRD, the technical requirements document. This is produced by the developers: where the product owners and managers created the PRD, the engineers read it just as we did and then create their own documentation. Now they need to figure out: how long should all these pages take to load, at maximum? What minimum bandwidth, what kind of internet connection, does this website need to work correctly? Can someone with really low internet speeds use it, or does it require high speeds? Are we going to need a big data center to host all these pictures, profiles, and accounts? What kind of security do we need? And it might answer some of the technical details I was looking for around comments and features: how many comments are in the tree before it expands or collapses, how many characters can go into the post name. That should all be answered here. It's usually created by the lead engineers and architects, in the design phase, after the UI designers have taken their look.

Let's check out this one, and we'll notice it's vastly different. The previous PRD was probably made for an Agile Scrum environment, where everything goes fast, maybe a two-week sprint for everything. But we also have waterfall environments, where everything goes through procedure, through each department, before it moves on. And I can tell this is one of those, because it's over 90 pages, and it's got borders and logos and such. It's probably made for a very long-term project. This TRD doesn't correspond to our earlier PRD; it's not related, just a different example.

We have a list of team members here, for us to contact in case something goes wrong or we need to assign a bug to somebody. This is helpful because different parts of the application usually have different developers associated with them, based on what they're working on. So if we come across a bug and need to hand it to the developers, we can usually find out who the correct developer is to assign it to. For example, say you're working on Instagram with someone who works on stories and video, but you find a bug related to capturing pictures with the camera. It wouldn't be right to send it to them; you need to find the point of contact for the pictures portion. Then there's a table of contents. Like I said, it's very big.
So you need to be able to find your particular area. There are only a couple of parts here I want to emphasize; a lot of this is not for us, not for manual QA anyway, but we can pull some relevant information out.

Oh, this is the part I really like: the system environment. In previous slides we talked about environments, right? Bugs occur partly because we're switching between different environments. The real-life environment the users are in is different from our environment as QA, which is vastly different from the developers'. They're using things like NetBeans and Grails and JUnit and Visio and Apache Tomcat 6; we don't use any of that software or those hardware capabilities in manual QA. So you can see why there might be a disconnect. That's just a small caveat.

The UI design section: there might be additional UI for us to look at here so we can get our bearings about where everything will be located, the color scheme, and some of the functionality behind it. And if we want to understand what's happening behind the scenes of our application, we have something called a high-level view. A high-level view is not something very detailed; it's just a brief summary. So if someone asks you, and you might hear it fairly often, "hey, can I get a high-level view of this app or this documentation," they're just asking for a summary. And when it describes an application, it's usually drawn as containers, squares, and arrows showing the flow of information: users access a web app, which is just your browser, there's some sort of UI, and then it has to go to the server and back and forth.

What isn't going to be in the PRD is this part, which I think we should pay more attention to: the exception handling. These are the negative test cases. The positive cases are how our application is expected to handle things, and the negative test cases are for when things go wrong. Say we're trying to upload a video somewhere, but the video is 500 megabytes, really big. What kind of error do you expect to get? What kind of notification should there be? Usually there should be something that tells the user "you can't do this, and here's what's wrong." Sometimes it's just a code, like a 404 when you visit a web page that's no longer there, or another one for when you don't have permission. If there's no error, sometimes the application just crashes with no indication why. So there will usually be a list of the errors that occur for certain situations. You see a lot of this with AI: if you try to enter something inappropriate, like adult content, it might give you a specific error for that, or it might return a different error if you try to use a well-known celebrity or political figure for something.
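Here's what one of those negative cases could look like sketched as an automated check. Everything concrete here is an illustrative assumption: the URL, the 500 MB limit, the 413 ("Payload Too Large") status code, and the wording of the message would all come from the TRD's exception-handling section in practice:

```python
import requests

def test_oversized_upload_returns_clear_error():
    """Negative case: the app should refuse a too-large upload with a
    useful message, not crash silently. URL, size, and status code are
    assumptions for illustration, not from the actual TRD."""
    oversized = b"0" * (500 * 1024 * 1024)  # ~500 MB dummy payload (slow!)
    resp = requests.post(
        "https://staging.example.com/upload",
        files={"video": ("big.mp4", oversized)},
    )
    assert resp.status_code == 413            # rejected, not crashed
    assert "too large" in resp.text.lower()   # the user is told why
```

The pattern is what matters: provoke the failure deliberately, then verify the documented error appears instead of a crash.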
And if you've got any hands up, just let me know in the chat here. I'm pretty sure this is the correct lesson, because I've got lesson number four on my schedule for what you've been doing. You've gone through three classes with Alex, right? Correct. Okay. Yeah, the slides: we'll get that corrected for you. If it's labeled as the web application one, there's just a switch to flip; no big deal, they'll get it fixed, and you'll have it for review probably by the end of today.

All right. The TRD might also describe how users are leveled in the application, what permissions they have; really similar to our other PRD, with its unregistered users, registered users, and admins. Another high-level diagram. A lot of this doesn't mean anything to me; look at all this. We just want to find the tidbits that help us create test cases. I think that's actually on page 80. These are some test cases, probably unit tests, that the developers would use, because they're actually inputting code and getting a result. But if we scroll down enough, it describes the non-functional testing requirements, so everybody can align on those, because not every aspect of the application needs to be tested by manual QA. There might be sections the developers cover, sections QA automation covers, and things another manual QA team covers; I see this most often with localization, where they might use an entirely different team that's fluent in another language or culture.

You might also see inside this TRD that they've added all the bugs they found first. Developers are often the first ones to test everything; before they send anything over, they do their unit tests. They often don't put what they find into a bug report, into our test case management system or bug-reporting software; they'll just put it in a spreadsheet, get back to it, and label it fixed whenever they do. That's one pet peeve of mine, because if there are bugs they've already found, we want to track them so we're not submitting something they're already aware of. But they just write it down on some random document.

So now we have the two major documents that we don't create: the PRD and the TRD. The TRD is a bit more complex, and there's only so much information you're trying to pull out of it, mostly how the feature actually works; anything else in there is usually not directed at us. But once we have that, we can start creating our own documentation. One of the first things we need to do is create an outline that shows our strategy. We need to understand what we're testing and be able to explain that to our leads, our managers, and the engineers who will be looking at this and relying on QA to find bugs in their software. This test plan includes things like our approach; our resources, such as what software tools we need and how many testers; and our scope, which is everything we're responsible for testing. Like I just said, we might not be responsible for everything; maybe we don't do localization testing.
So we'll be sure to include that note, that we are not doing localization, just so there are no unknowns. Otherwise the engineers might look at this and say, "oh great, you're testing everything," when, had they read it, we aren't responsible for certain sections. And then a testing schedule: how often we'll test; every day, every other day, once a week, whatever that feature requires. That's really dependent on the project; every project I've been on has been different. At Google, we were testing the same thing every single day. At Meta now, we test maybe once a week for a particular feature, sometimes every day, but it lasts no longer than maybe two months. This is created by us, with our input, and usually the QA leads, who might have more eyes on it.

Let's take a look at this plan. We want to introduce it because you could end up creating a test plan yourself. When I first started working at Intel, it was a very small team: one engineer, one manager, and a whole other team doing other features. With just one QA, which was myself, I had to create the test plan, because I was the QA lead; the default QA lead. So I looked through previous test plans and came up with my own. We should have an introduction that describes what we're including, such as our risks and schedules, our objective, and what tools we'll be using. Some of this will make sense a little later, but the key points to include are our team members and their roles. Usually we'll have a team of maybe two to five per application, and we might switch up what we're testing now and then, but usually each person is dedicated to a particular portion, because some applications are just too big for any one person. For example, we were testing the new version of the Google Android operating system, for the Pixel 7 and 8 a while back, and I can't do it all. I can't test the phone and text messaging and the native applications like the clock and the calendar. So they had me doing just the camera, and another person took the role of testing the phone capabilities, the internet, text messaging, and the native applications built into Android, all on their own; because the camera alone was almost a full day's work.

Our scope includes all the things we're responsible for: the functional cases, the non-functional ones, and everything that goes with them. Assumptions and risks: this is something you learn about when you get into project management. We can assume we have all the information we need, and we can keep a document updated about what's going on. But our risks are things that could impact the project negatively from the QA point of view. What sometimes happens in Agile Scrum is scope creep, where people keep adding new features all the time; every now and then, once a month, they come up with something new, the design changes, and that's another test case we have to add.
So maybe we started off with 50 test cases, and now we have 150 at the end. I still experience this. It triples the amount of work we initially planned for, which changes our original test plan and how many resources we need.

Another big one that actually happens a lot: we tend to release software during the holidays. But what comes with the holidays, especially in the United States? We have Thanksgiving, and we usually get some days off for it; then the Christmastime holidays, or other religious holidays at the end of the year; then New Year's; and some of us like to take the first week of January off. So there aren't many working days at the end of the year: the developers aren't working and QA isn't testing as much, but they might release the software anyway. I mention this scenario because it still happens quite a bit, and here's how it goes wrong. There's a very famous simulator out there, Microsoft Flight Simulator, the newest 2024 version. They released that product too early, in early November, with many, many bugs; I personally wouldn't have released it. Then there was Thanksgiving, so people took Thanksgiving off. They had a couple of weeks to work on it, and then half the company left for the holidays, and they had tons of angry people using software that I would consider very much broken, and it wasn't fixed until two or three weeks, even months, after. Then there's also globalization: we work with teams in Asia, in Europe, in the Middle East, almost 24/7, but they also take holidays at specific times; Asia takes most of the beginning of January off, so there's really not a lot going on during that time. All of this can impact our testing.

Then we have our testing approach. We briefly spell out what kind of testing we're going to do. The first couple of days we might do exploratory testing, which is just noodling around with the product: seeing how it works, how the transitions work, the colors, the design; getting a feel for it. Then we create an approach where we sort our functional tests from the non-functional ones: security, performance, localization; we figure all that in here.

We might also include automation in our test plan; looks like it's at the end of this one. Automation is not a bad thing, guys; don't worry about it as a manual QA tester, because we need a bit of both. The risk with automation is that it takes longer to build: many days or weeks to make 100 test cases or more, where for manual QA, as we'll see, each one takes only maybe a couple of minutes to write.
Running them, on the other hand, is really fast with automation; but automation doesn't capture everything. If it tries to determine whether a picture looks wrong, it can't; it's not there yet. It can only check whether something is correct or not: does a button have a certain color or not, is it in the correct position or not. And we're usually not going to have a million automated test cases, but they should cover the key critical areas we test over and over again, the kind of boring stuff like logging in to see that normal credentials work. We do automate that. Eventually we'll want to automate most of our test cases, because we'll go through our phases: we'll run smoke tests every day or every other day, eventually our feature will launch, and then we'll be given a new project, because that's how you make a successful company: you keep making stuff. Now we have to manually test the new thing, but we're still responsible for the other project. It goes into the maintenance phase, it gets periodic updates, and updates to the services that connect to it might break it. We don't want to retest all of that by hand; we let automation test everything in the maintenance phase. That's where the big help from automation comes in. I wouldn't want to run a thousand test cases manually every day; eventually, when you have five, six, seven launched features, you need to cut down on all those test cases.

All right. Once we have this test plan, which is usually created sometime during the planning or design phase, we can move on to creating our smallest unit of documentation: test cases. I think this is what we're most familiar with by now, from the last couple of courses and thinking about how QA does things. Test cases are just a title and a couple of steps describing how we verify whether a feature is broken or not. And all the validity of our test cases is based on the PRD or the TRD; that's what lets us write them. Sometimes I just copy the expected behavior and paste it into the test case.

There are four basic areas of a test case, four lines we need to fill out to test something very simple, like logging in with a valid ID and password. We want a title, and keep in mind when we write titles that they should describe the feature we're testing under a specific condition. So: we want to be able to log in with valid credentials, right? Because we can do positive and negative testing, and you might be able to log in from many different areas; maybe log in from the homepage, so you can throw that in there. We want titles that read well when you imagine a thousand test cases lined up in a spreadsheet; you don't want to spend too much time reading or comprehending each one, because that's exactly the situation you'll be in. Eventually your test cases will be reviewed by the engineers, who will approve them or ask you to change them; they're white-box testers, so they understand how the application works internally. We can write many, many test cases. We might even write cases for behavior we don't know to be true yet; maybe it is, and we can ask the developers to confirm whether it's correct. That saves us some time down the line; it's easier to take away test cases than to start writing them later.
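Before we walk through each field, here's the overall shape: the four parts of a test case as a small data structure, filled in with the login example from this slide. The class name and layout are just one way to represent it:

```python
from dataclasses import dataclass, field

@dataclass
class ManualTestCase:
    """The four basic areas of a manual test case."""
    title: str                 # feature under test + the specific condition
    precondition: str          # what must already be true before step 1
    steps: list[str] = field(default_factory=list)   # sequential, numbered
    expected_result: str = ""  # pulled from the PRD/TRD; pass/fail criterion

login_case = ManualTestCase(
    title="Verify the user is able to log in through the homepage "
          "with valid credentials",
    precondition="An account is already registered",
    steps=[
        "Open the website login page",
        "Enter a valid username",
        "Enter a valid password",
        "Click the Login button",
    ],
    expected_result="User is directed to the homepage and recognized "
                    "as logged in",
)
```

Keep this picture in mind; the next few minutes walk through each of those four fields in turn.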
So, we've got a nice, simple title: easy to read, describing what it is and how you're testing it. Our precondition is anything we need to execute before we start. For a login, we need an account to log in to, so you need to have registered something; our precondition could be having a registered account. Then we list the steps in sequential order: 1, 2, 3, 4, 5. It could be up to maybe 15 steps; past 15 is excessive, that's a lot.

I really like this test; maybe some of us have done it in a communication class, or a high school writing or English class: the peanut-butter-and-jelly-sandwich directions test. Has anybody done this? You're asked to write the directions for making a peanut butter and jelly sandwich. You do your best, step by step: open the jar, spread some jelly onto a piece of bread, spread peanut butter onto another piece, put them together. Then someone else pretends they don't know anything about anything; they're just a baby, right? They follow your directions literally, and it's kind of comical, because they end up with peanut butter in their hair and the bread held vertically. Suddenly your directions don't seem so specific. Having somebody try to reproduce your directions highlights how exact you should be: naming every step you take, everything you tap on, everything you type in. It's pretty fun.

Then we have our expected result. This is the portion that tells you, once you've gone through the steps: if the outcome matches the expected result, the case passes; if it doesn't match whatever we've written here, we mark it as a fail. There really shouldn't be much judgment involved; it shouldn't be a mystery, no wiggle room. We just follow the steps and compare against the expected result. And the expected result I just grab from the PRD or the TRD; it tells you exactly how the application works, so we don't have to guess. The hardest part, for me, is coming up with good titles; that comes with practice if you're not used to writing shorthand.

Looking at this example, it's something super simple for login, because we've all had lots of experience logging in somewhere; you had to log into Zoom to get here. I'll grade this title. "Verify that user is able to log in successfully": I'd call that a B title. We want an A+. "Verify the user is able to log in through the homepage with valid credentials" makes it better, because now we've given a spot where you log in and said what kind of test it is, positive or negative. Our precondition is pretty good: the account is registered already. And our steps: we open the website login page, and notice how it doesn't include the URL, the www-whatever-page-login.com. That's because in development the URL tends to change quite a bit; usually a placeholder name is used until the site fully launches.
If you have 50 test cases and the name changes, you'd have to go through all 50 and update them, and that takes time. To avoid it, we just leave the URL ambiguous. Step two, we enter a valid username; I like that. We enter a valid password, and we click the login button. I think most features we look at will be somewhere between one and eight steps; one to four is the average I like to write. And then our expected result is that we're directed to the website's homepage and recognized as logged in. We can have multiple things to verify in an expected result, but I personally like to verify just one thing per case and keep them separate. So I might write one test case that just checks we're redirected to the homepage, and another that checks we're recognized as logged in.

The LMS is the learning management system, right? That's just my take on the acronym. Oh, okay. We've got this checklist here. It's interesting to cover, but we've already hit an hour. So let's take a quick five minutes, then we'll review a little more, and then we'll look at how we'd test a pen and a toaster and start writing our own test cases. Let's be back at 6:40 PST, that's 9:40 Eastern, and I'll get some stuff prepared for you. I'll see you shortly; just five minutes. Thank you.

Okay, we should be coming back now; just a quick little break. We last left off on this checklist. It isn't mandatory; it's just a helpful idea to put in your own toolbox if you feel so inclined. I don't see many people do this, but it is helpful to have a reference to come back to, to make sure you're checking all the boxes for what you're testing. When you first create your test plan, sure, you say you'll do some functional testing and UI testing and compatibility testing; but what does that actually entail? For compatibility testing, if you know what your product is supposed to work on, like Firefox or Safari, you can add that to this checklist. Then, when you create test cases for logging in on Chrome versus Firefox versus Safari versus Edge, you can mark the checkbox off, and you know you've covered everything. This is what's known as having good coverage: being able to test almost every aspect of the application. Now, you won't test every portion of the application to its full potential; you'd have far too many test cases. The majority should focus on what the user will do the most. Will they use Chrome the most? Will they log in; is that a common feature they'll use? Then you can move to the aspects they don't use as much, like the "contact us" section, or viewing the terms and conditions; still very important, just not accessed as often. The heavily used paths are known as critical user flows, or critical user paths, or the user journey.
People say it different ways, but it's the critical portion of testing. This next part I'm going to skim in the interest of time, because there are other parts I want to really hammer on: we have this thing called the test matrix, or the requirements traceability matrix, the RTM. This is something that can help you with your checklist and with creating test cases to ensure good coverage. We can check it out here. You'd put in your environment: which version of the software, which operating system or browser; then you'd go through and write the expected results first, the things you need to test. We grab these expected results from the PRD, from the TRD, from meetings about how it's supposed to work, from the wireframes. Then we load in our steps and write a description or title for each. This ensures we do indeed have a test case associated with everything we pulled from the PRD; that's the very least it does.

Then there are edge cases: scenarios that will happen but aren't mentioned in the PRD. We can log in through one browser, sure; but the edge case is: what happens if someone has already logged in on Chrome, minimizes that browser, and then tries to log in again on another browser, like Firefox or Safari? What happens then? That really depends on whatever the developers decide, because different software handles it in different ways depending on how it's monetized. We know Netflix won't let you log in from two different geographical locations unless you pay a lot extra for it; that's a good edge case, and for them it's probably actually a critical user journey. And we can break this matrix down by sprints if we want, because different sprints will have different test cases as new features get finished. The cool thing about this template, which you can use, is that you can actually do some testing in it: you can mark pass or fail and note the bug number. It's a good way to do some quick testing. The alternative to Google Sheets is something we'll talk about shortly: a test case management system, a TCMS. It's just a way of organizing all this. It's easier to edit things in Google Sheets than in some of these TCMSs, so get it down in Google Sheets first, edit it there, and then put it into your TCMS.
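Just to make the RTM idea concrete, here's the core of it in a few lines. The requirement IDs and test case IDs are made up; the real entries would be pulled from the PRD:

```python
# Hypothetical PRD requirements traced to hypothetical test case IDs.
rtm = {
    "REQ-01 Posts display chronologically, newest first": ["TC-101"],
    "REQ-02 Tagline is under 60 characters":              ["TC-102", "TC-103"],
    "REQ-03 Hunters are visually highlighted":            [],  # gap!
}

# The minimum bar the RTM enforces: every requirement pulled from the
# PRD should trace to at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements with no test case yet:", uncovered)
```

Whether it lives in a spreadsheet or a script, the point is the same: any requirement with an empty row is a coverage gap staring back at you.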
A little more on test cases, because they're so important to us: some best practices. I don't like words like "successfully" or "expected functionality," because they don't really describe anything to me or to someone else. Expect that your test case titles will be read and executed by someone else. The way you write them might make sense to you, but the reason we do that peanut-butter-and-jelly-sandwich exercise is that, more often than not, someone else is going to execute these test cases. You might write them all, then move to another project, or your QA lead might put you onto a different feature, and someone else will backfill that position and run them. How they interpret your writing will only be as good as how you write it, so the more descriptive you are, the better.

For titles, we don't want just one or two words, because they don't describe anything. We want a subject, an action, and a condition. Bringing it back to school: the object (or subject), the action, and the conditions. That's how you write really good test case titles. As long as you have all three elements, the title is easy to read, and we know we're testing a specific part of the functionality under specific conditions, because there are many conditions we can vary for the same action and object. For example: "Verify a user is able to log in with valid credentials." The user is our object, logging in is our action, and valid credentials is the condition. The last one here: "A user is taken to their account page after providing valid credentials." Again the same pattern: object, action (it works like a verb), condition. Some other B-plus-and-beyond titles: "A user is unable to submit a form without providing a valid zip code." That's a negative case, which I'll explain in a second; we're confirming something can't be done without something else, here a zip code, and defining what happens then. Number four: "Validate the system successfully processes orders after submitting all the details." I'd still accept that, because the order-processing form might be ten fields long and you don't want every field in the title; think of an Amazon-style checkout where you enter your billing information, shipping information, location, and so on. "Clicking the submit button takes users to a feedback form page" also sounds fine: the object is still the user, the action (though a little out of place) is clicking the submit button, and ideally we'd add a condition.
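Here's a throwaway sketch of that object / action / condition pattern; the pieces are illustrative, not from a real PRD.

```python
# A throwaway sketch of the object / action / condition pattern.
# The pieces are illustrative, not from a real PRD.
def title(obj: str, action: str, condition: str) -> str:
    return f"{obj} {action} {condition}"

print(title("A user", "is able to log in", "with valid credentials"))
print(title("A user", "is unable to submit the form", "without a valid zip code"))
print(title("A user", "is taken to their account page", "after providing valid credentials"))
```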
The precondition, though, is the part that tends to get overlooked among the four areas of a test case. Again, it's everything we need to do before running our steps. Before we make that peanut-butter-and-jelly sandwich, what do we want in place? Probably a clean table, clean knives and forks, maybe a cutting surface like a cutting board: all the things that have to exist beforehand. We can also do other things with preconditions, like wrapping extra steps up into them. Say we're deep in an application now, something like Facebook, and our requirement is to test the dating feature; they've got Facebook Dating on there. To test that, what do you need first? Go to Facebook, create an account, add all your account details, confirm your email: probably 15 to 20 steps before you even reach the dating feature. But we can wrap all of that up into the preconditions by just saying: have a valid registered account with a profile picture and whatever other prerequisite details are needed. Then step one is simply clicking the Dating button, and you're straight into whatever you're testing. (And I really appreciate you all sending the link to the slides in the chat; I know some of us like to read along.)

On steps and expected results: each company might do this a little differently. Some actually do want the URLs I told you not to put in. Just to get everybody on the same page, we'll abstain: no URLs. If a company wants them, great, we can always change it; it's just good housekeeping not to include them. If we're clicking through menus and buttons, we want to use the correct terms for those buttons and navigation menus. If there's a menu on the left side, call it the navigation menu or the left-side menu, and call icons by their proper names. There are all kinds of interesting names inside the Meta universe; that little plus attachment icon has an official name, and so does everything that opens out of it. Usually there's a repository of information about all these names that you can read, to get everyone in the company on the same page, because it's not just you: marketing, HR, and sales all need to know what's being talked about. I'll explain where that lives.

Once we have our steps down, the expected result should just state the correct behavior, and again, I usually pull that straight from the PRD. If it says we need to test the maximum number of characters for a name at registration, and the PRD says the most is 60, I'll just copy that into the test case. We don't have to worry about plagiarizing anything; we're just taking data from one place and putting it in another.

So, I did touch on positive and negative test cases. We've talked a little about techniques like boundary testing: the far left and right, zero versus 100, say for an age verification. You can do the same thing with positive and negative cases. The PRD usually describes how the feature is supposed to work as a positive test case: log in with valid credentials, and it takes you to the next page, your profile page. But it might not explain what happens when you enter the password wrong, or the email or username wrong. We usually find that in the TRD, the test requirements document; that's where I find most of my negative cases, often under an exception-handling section. It will tell you the login errors that are supposed to occur, like "Oops, we couldn't find that username," or "Your password is too short" when you're registering, or "You forgot to enter your password or username." Sometimes the message is deliberately ambiguous and won't tell you which field you got wrong, purely from a security point of view.
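Here's a small sketch of those TRD-style negative cases as one parametrized test. The `attempt_login` stub and the exact error strings are placeholders of mine, standing in for the real application and its exception-handling list.

```python
# Sketch: TRD-style negative login cases as one parametrized test.
# The attempt_login stub and exact error strings are placeholders
# standing in for the real app and its exception-handling list.
import pytest

KNOWN_ACCOUNTS = {"demo_user": "s3cret"}

def attempt_login(username: str, password: str) -> str:
    if not username or not password:
        return "Please enter your username and password"
    if KNOWN_ACCOUNTS.get(username) != password:
        # Deliberately ambiguous, for the security reason above.
        return "Oops, we couldn't find that username or password"
    return "OK"

@pytest.mark.parametrize("username,password,expected", [
    ("demo_user", "wrong",  "Oops, we couldn't find that username or password"),
    ("nobody",    "s3cret", "Oops, we couldn't find that username or password"),
    ("demo_user", "",       "Please enter your username and password"),
    ("",          "",       "Please enter your username and password"),
])
def test_login_negative_cases(username, password, expected):
    assert attempt_login(username, password) == expected
```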
Negative tests cover things that are not supposed to happen, and there won't always be an error message attached; we just need to know what's supposed to happen when you try. Say you upload a video where only pictures are accepted, or a picture in a different format. JPEG is the most common, but there are PNGs, GIFs, TIFFs, RAW files that might not be accepted. So what's supposed to happen? It might show an error. It might tell you that you can't upload that right now. It might crash; ideally, not crash. It might gray those file types out so you can't even select them in the first place. (I'll sketch this file-format case in code at the end of this run of examples.) Positive test cases are pretty easy to come up with, but these edge and negative cases take more thought. You're not going to capture every single one, so you'll keep adding negative test cases as you go along, as you come up with good ideas and get feedback from other QA engineers and your QA lead.

Some good examples. Say we're verifying a user can submit an order with an item in the shopping cart on a site like Amazon. That's the positive case: add items and purchase them, something we expect to be able to do. But what happens if you try to submit your order when the cart is empty? Or if you filled the cart and then deleted some items: there was previously stuff in there, so does it still show a total? Can you still click the shipping button? A positive test case for an age-gated site might be that access is granted to people 18 or older; the negative case is the restriction itself. What happens if you click "No, I'm not 18"? Does it show an error? Does it kick you back to the previous page? And a more device-oriented example: verify user A can make a call to user B when both are connected to Wi-Fi. That's the positive case: regular Wi-Fi rather than a cellular connection. But what if one of them isn't? Maybe one is on the cell network and the other on Wi-Fi; if this is a walkie-talkie-style app where both have to be on Wi-Fi, it looks like they can't make that call. Here's another negative case I can think of because Spotify does it to me all the time: Spotify assumes you'll stay on your cell network or your Wi-Fi, and it didn't know how to handle the edge case of switching off Wi-Fi in the middle of a song. They might have fixed it recently, but it used to not switch over right away; it said you'd lost connection and you had to restart the app. That's exactly the kind of edge negative case to think about, because users will be listening to music in the house, then leave the house, and then what happens? You get bugs.
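And here's that promised file-format sketch: a small parametrized negative-case matrix for uploads. The accepted-format list and the `upload` stub are assumptions for illustration, not a real spec.

```python
# The promised file-format sketch: a small negative-case matrix for
# uploads. The accepted-format list and the upload stub are
# assumptions for illustration, not a real spec.
import pytest

ACCEPTED = {"jpg", "jpeg", "png"}   # assumed spec

def upload(filename: str) -> str:
    """Stand-in for the real upload call: refuse unsupported types."""
    ext = filename.lower().rsplit(".", 1)[-1]
    return "ok" if ext in ACCEPTED else "error: unsupported file type"

@pytest.mark.parametrize("filename", [
    "photo.tiff", "scan.raw", "clip.mp4", "animation.gif",
])
def test_unsupported_upload_shows_error(filename):
    # Negative case: the app should refuse gracefully, not crash.
    assert upload(filename).startswith("error")
```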
Some other recommendations. Number one we've covered: try not to use URLs. Number two: test cases should be independent, not overlapping with each other. We can combine a little in our login test case, verifying both that it takes you to the next page and that your profile picture is displayed somewhere to show you're logged in. But you wouldn't want many more test cases tied to that; you don't want five test cases that all mention the profile picture changing after login, because many features connect to each other in this webbed way. We can also omit some obvious things. Once you get really comfortable writing test cases, you can start excluding extra detail: you can just say "click this button," you don't have to say "click the button next to the tray on the left-hand navigation menu." Say explicitly what's needed and keep it short and simple. And instead of writing "you are able to log in," keep it in the third person: the user, the customer. We want to stay in the realm of professional documentation writing; it's not about you and me, it's always in reference to the user or customer. I don't even write "customer," I just always write "user": the user should be able to log in, the user can click the button, the user can enter their credit card information in the billing section.

Okay, a little check on learning; it's been an interesting ride. Let me know in the chat what you've got. Number one: what's the name of the document that describes how the product should function and what features it should have, the first thing we talked about? Two: what's the name of the smallest yet most fundamental element of documentation we discussed? I might give a pass on that one since I only mentioned it once, but I think you're pretty smart. And three: what does RTM stand for?

Yes, thank you, we're seeing PRD: the product requirements document. It's usually a living document; it will change as the product goes on, in the testing phase and the development phase, so you want to keep referring back to it. What sometimes happens is that we create our test cases really early, in the planning and design phase, and then changes come in that the developers don't tell us about. Hopefully they update the PRD. Then we go to test with our test cases and find the behavior doesn't match anymore; maybe it did last week, but this week it's different. We usually mark that as a bug, because it's no longer following our standard, it's not the expected result anymore, and we update once they do change the PRD. That's one of QA's big gripes, at every company: working together to keep documentation updated. Lately they've been pretty good at it.
I've been at other companies where they didn't often publish what the changes were, so every day was a big surprise. Coming back: number two was test cases, correct, thank you. And three was the requirements traceability matrix. Again, you're not really going to see the matrix on the job; it's just a tool. You can use that document as a template, if you like, to help keep track of creating test cases for coverage.

And touching on it briefly ("what is the Matrix, anyway?" I love that): I mentioned the test case management system. We could write all our test cases in that RTM in Google Sheets as a way to keep track of them, and that can be a good way to validate them with the engineers. Ideally what happens is: we get the PRD, we read it, we create our test plan, we look at the designs and mockups, and then we create our test cases. I like to put them into Google Sheets first because it's easy to edit, with my steps, titles, preconditions, and expected results, all four elements. Then the developers read them and sign off. This review-and-approval step is a very common part of the QA test case creation process, and once they approve, we can begin testing.

Now, we can manage all these test cases in something called TestRail, which is the most common TCMS in the corporate environment for small to medium-sized companies. Each company pays a license fee to subscribe, and you'll end up using it during the internship if you signed up. There's also a free trial period that you can keep renewing, but we want you to get some exposure to TestRail. It looks like the RTM in some ways, in that you can click pass or fail, but it also provides statistics: it shows how many cases passed, how many failed or are blocked, which sections are failing, trends over time, and it builds a lot of nice graphs for your QA lead and manager to analyze. Google Sheets doesn't do that on its own, which is why most companies don't use it for this. The larger companies, though, the FAANG-type ones like Google and Facebook and Microsoft and Cisco, end up building their own version of TestRail. Those in-house tools all look very much the same when you get there, so if you can say you have experience with TestRail, which you will, it's not a big deal; they don't expect you to know how their internal test case management system works or looks, because it all looks about the same. When you get access to the slides, go ahead and check out the video; it's pretty neat to see. We'll end up using TestRail for the internship.
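TestRail also has a REST API, so results don't always have to be clicked in by hand. Here's a hedged sketch of posting a result through its v2 `add_result_for_case` endpoint; the instance URL, IDs, and credentials are placeholders, and you'd want to confirm the details against the TestRail docs for your version.

```python
# Hedged sketch: posting a test result to TestRail over its REST API
# (the v2 add_result_for_case endpoint). The instance URL, IDs, and
# credentials are placeholders; confirm the details against the
# TestRail docs for your version before relying on this.
import requests

TESTRAIL = "https://yourcompany.testrail.io"    # placeholder
AUTH = ("you@example.com", "your-api-key")       # placeholder

def report_result(run_id: int, case_id: int, passed: bool, note: str = ""):
    # In a default TestRail setup, status_id 1 = Passed, 5 = Failed.
    payload = {"status_id": 1 if passed else 5, "comment": note}
    resp = requests.post(
        f"{TESTRAIL}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}",
        json=payload,
        auth=AUTH,
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()

# report_result(12, 345, passed=False, note="Error message missing")
```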
Then there's Jira, which is a related component, tied to Confluence. You've talked about Jira, right? Jira is project management software made by Atlassian, where you add your bugs, as we discussed, and create your bug numbers and bug reports. The same company also makes Confluence, so the two talk to each other. Because think about it: we have all our test plans, our PRD, our TRD, but where do they live? How do I access them if I work from home? There are no paper copies anywhere. So companies usually set up Confluence, where you can add documents and sections for each department: QA, engineering, sales, HR, and put any information you want there. This is where QA would usually add the test plan, and the results of our testing can go on separate pages that key personnel subscribe to, so they're made aware when a new page with updates is posted. Confluence works a lot like Wikipedia; I think it's actually based on that idea. Some people get authorization to edit and create pages. And the really cool thing is that this wiki integrates with Jira, so you can have dashboards that show in real time how many bugs are open, how many were closed over the last week or month, how many were closed as fixed versus closed because they weren't actually bugs. Those are the kinds of metrics you'll end up looking at as you move toward QA lead or manager. Again, go ahead and check out this other video too, time permitting. Some companies, even big ones, still also use Google Sheets and Google Drive, the whole cloud-service thing; most of my documentation is in Google Sheets. They had something similar to Confluence before and switched over, so that works for some larger companies, while for mid-size and small companies Confluence works well too.

If you want to answer these, since it was such a short visit: what's the name of the most popular test case management system? It starts with "Test" and ends with "Rail." And what's the name of the documentation tool we're probably going to end up using? Yes, TestRail. There is also a free alternative to TestRail; it's called something else, and I'm blanking on the name, but it's great for companies that aren't very big at all, and you can use it on your own time too. And yes, Confluence. Again, you'll end up using TestRail; it's such a universally accepted and used tool. The bigger companies don't use TestRail because they have engineers who can just build their own. If you're a big company with 10,000 employees and it's just a tool, you can make your own tool.
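Since Jira is where the bugs live, here's a hedged sketch of filing one through the public Jira Cloud REST API. The site URL, project key, and credentials are placeholders; the field names follow the standard v2 create-issue endpoint, but verify them against your own instance.

```python
# Hedged sketch: filing a bug through the public Jira Cloud REST API
# (the v2 create-issue endpoint). Site URL, project key, and
# credentials are placeholders; verify fields on your own instance.
import requests

JIRA = "https://yourcompany.atlassian.net"   # placeholder
AUTH = ("you@example.com", "your-api-token")  # placeholder

def file_bug(summary: str, description: str, project_key: str = "QA") -> str:
    payload = {"fields": {
        "project": {"key": project_key},
        "summary": summary,
        "description": description,          # steps / expected / actual
        "issuetype": {"name": "Bug"},
    }}
    resp = requests.post(f"{JIRA}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]                # e.g. "QA-217"

# bug = file_bug("Login error message missing",
#                "Steps: ...\nExpected: ...\nActual: ...")
```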
Now, this is the part I really wanted to get down to. I'll first review this page so you can get access to the slides; each version of this presentation should have it at the end, as far as I'm aware. We want to understand how to go about testing things like a pen, a toaster, a vending machine. I would say checking out these links is required, because in almost every interview I've had, they ask: "How do you test a pen?" Let's check this link out. When we get asked this, suddenly we don't have a PRD to draw from, no documentation at all; they just ask how you'd go about writing test cases for a pen. The idea isn't to come up with some perfectly documented answer, but to demonstrate that you can put test cases into specific categories, correctly: functional and non-functional, UI, localization, performance, negative scenarios, and that you can rattle off as many test cases as you possibly can for each.

The pen is the most common question; you might well get it, especially as a junior QA. If you go to more advanced interviews, like Apple's, it's something like a seven-hour interview where they even evaluate you during lunch. They'll ask about your hobbies, and you might say, as I would, that you like flying, simulators, RC planes, and you start talking about drones. Then they ask: "Okay, how do you test a drone?" So be careful what you share about your personal hobbies if you're not ready to answer that kind of question. After this lesson, you should be able to come up with test cases for just about any object and see how you stack up.

So, the pen. The first thing you might ask is: is there a PRD? They're always going to say no. The next question you can ask: what kind of pen is it? A fountain pen, a ballpoint, a quill pen? Because that makes a difference. Then just start thinking about what kinds of cases you can write. Imagine the pen is in a dark bag and you're playing Guess Who: what tests could you run to prove what it is? For functional, we can verify the pen writes on a normal flat surface; that's a positive test case. Then write in different orientations: on a wall, upside down, maybe in space. Then different environments: very hot, very humid, maybe underwater. Does the pen have the ability to change inks? Some do. Does it bleed if you change the ink color? Is the ink itself waterproof? What happens when you write on surfaces beyond what it's meant for? Paper, okay; what about cardboard, or different types of paper like wax paper, glossy paper, watercolor paper, or glass and other materials? Review this page; there are all kinds of scenarios, and the page itself isn't exhaustive. You can search for other pen test cases, but I like this one because it breaks the cases out into categories. It's a good starting point, and there are different parts of the pen you can test, including its UI; it gets really involved. Now we can begin to see how to test something without a PRD, without requirements documentation telling us how it's supposed to work. We can create these scenarios even though they might not all be applicable, and that's fine: it's always okay to write more test cases than you need, and it's okay to fail some, because ideally the developers review them and confirm which test cases actually apply to the feature. Sometimes we write test cases that don't match how it works at all.
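If you want a systematic way to brainstorm those combinations rather than free-associating in the interview, here's a quick sketch that crosses a few condition categories; the condition lists are just examples.

```python
# A quick sketch for brainstorming pen scenarios systematically:
# cross a few condition categories instead of free-associating.
# The condition lists are just examples.
from itertools import product

surfaces = ["paper", "cardboard", "glossy paper", "glass"]
orientations = ["flat on a desk", "against a wall", "upside down"]
environments = ["room temperature", "very hot", "very humid"]

for surface, orientation, env in product(surfaces, orientations, environments):
    print(f"Verify the pen writes on {surface}, {orientation}, in {env} conditions")

# 4 x 3 x 3 = 36 candidate cases; you would then prune down to the
# ones that matter most rather than running every combination.
```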
Other good ones are the toaster and the vending machine. The vending machine is pretty interesting because it showcases that a lot of objects have a debug menu. There's a debug menu in vending machines: if you hit the buttons in a certain order (I used to do it as a kid), you'd go into a menu where you can do things like see the total amount of sales, or change the prices if you have the password; I didn't do that. Plus a couple of other interesting features. These smart devices nowadays, like refrigerators, have them too: you can go into the debug menu and change what's displayed on the menu, defrost your refrigerator if you need to, and many other options.

Okay, actually, just one second here; let me grab a sheet, I'll stop sharing for a moment. How are we feeling so far? Apologies for the confusion with the slides being a little different, but they should be pretty similar. All right, we're doing fine.

So, this should look familiar: it's my RTM sheet, just a template I like to use where I can quickly write down my test case titles, steps, expected behavior, and preconditions. I can even add things like priorities, and I can do a little testing over here, all before anything goes into TestRail, so I can edit things easily and see them all at a glance. Now I want to bring our attention to this PRD we've got here. We're going to review it and start writing test cases for the login requirement. Nicely, it groups the login and registration requirements together, and it tells us exactly how they work. The login page should be the first page the user sees, with two text fields: one for the login, one for the password. Now, some scenarios. If either field is left blank, we get an error. If both fields are filled in but there's no record of that username and password, we also get an error. Users who haven't registered have to click the Register button first, and if we enter an unknown login name or password, we get that error. Then the registration page: this is the design it's going to follow. It looks like there should be three text fields: a username, and Password 1 and Password 2. That's already raising questions for me. In QA, we can write the test case for Password 2 as specified, but we can also file a bug afterward saying it should be relabeled "Re-enter password" or "Retype password," because when I saw this for the first time, I thought it wanted me to enter two different passwords, and if I'm going to make that mistake, users will make it too. And again, there's another error: if any of these fields is left blank, it's reported to the user. Here's some of the exception handling as a picture, a mockup; I would usually see this in the TRD as well.
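Just to pin down those registration rules as I'm reading them from this mockup, here's a tiny sketch. The error strings are my placeholders rather than the PRD's exact wording, and the passwords-must-match rule is my assumption about what Password 2 is for.

```python
# A tiny sketch of the registration rules as I read them from this
# mockup: no field blank, and (my assumption about what Password 2
# is for) the two passwords must match. Error strings are placeholders.
def validate_registration(username: str, pw1: str, pw2: str) -> list[str]:
    errors = []
    for label, value in (("username", username),
                         ("password", pw1),
                         ("password confirmation", pw2)):
        if not value:
            errors.append(f"The {label} field cannot be blank")
    if pw1 and pw2 and pw1 != pw2:
        errors.append("Passwords do not match")
    return errors

assert validate_registration("new_user", "abc123", "abc123") == []
assert validate_registration("new_user", "abc123", "abc124") == ["Passwords do not match"]
assert validate_registration("", "abc123", "abc123") == ["The username field cannot be blank"]
```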
So now we can start throwing down some test case titles and deciding how we're going to test this thoroughly. If you have ideas, titles, or aspects you want tested, let me know in the chat. I usually start by just verifying the entire design: when you go to this page, this is the first thing you see, two input fields and two buttons, and then we'll go into the functionality. I want to do my non-functional first, so a good title, or at least an okay one, is: "Verify design of login page." That's our subject, and our condition is "according to mockups." It's not the most exact; there may be a couple of different elements we need to verify, but I can also link the design here as a reference: take a picture of the mockup and link it, so if I'm executing this and need to know what's supposed to be there, I can click it. I'll fill in the steps, expected behavior, and precondition in a second.

The next thing is to verify we can actually log in, that we can press that Login button. We don't have to say "verify" every time, either; we can just give the action: "Log in from login page with valid account credentials." That's our positive test case. There isn't a lot we can do positively here, but we have a lot of edge cases: we can try to make each of those errors happen. So let's start flipping it around. "Log in from login page with a valid username and invalid password": that's our negative test case. Flip it again: "Log in from login page with an invalid username and a valid password." So we've covered valid one-or-the-other. Maybe we're missing something; I'm just going to copy and paste, actually. What happens if we're missing our password? And what happens if we're missing the username but enter the password? So we've got this covered: we've met all the conditions on this page. Maybe add one with no username and no password, to get both of those errors at once. I think that satisfies the blank-field errors, the invalid-username-or-password error, and going to the registration page.

What are some other edge cases for login, things that aren't so common but have happened? One that comes to mind, even though the PRD doesn't say anything about it: what happens if the username has capitals in it? Is our username case-sensitive? "Log in with uppercase and lowercase letters in username with valid password." Another scenario: "Log in with valid credentials on two separate browsers at the same time." That's kind of pushing it, but these things happen. So that's the login area; now let's talk about registration. Positive: "Create an account with a valid password and unique username." And the negative we can do here: what happens if the username is already taken? "Create an account with a valid password and taken username." What else can go wrong on the registration page? What happens if we use a really short password, or make a duplicate account? Oh, that's right: can we create an account with the same credentials? "Create an account with the minimum number of password characters." "Create an account with the maximum number of password characters." And some silly things happen too: what if we submit the registration and, while the spinner is going, we close the page and come back really quickly? Does that complete the registration process? Sometimes that can goof things up.
There's other stuff we want to consider that's not on here, too. I like that idea: a button where you can find your account. That's not listed, and it's a good one. Tapping the registration button multiple times, too. All right, now we can get down to the steps. We might run about five minutes over today, guys, but bear with me; I like doing these practical demo portions. Our first one is pretty easy: navigate to the login page (no URL here, remember). For the next one, we need to log in: step one, go to the page; step two, enter a valid username; step three, enter a valid password; step four, tap Submit. I like to use "tap" now, because I don't know whether this is written for people on a desktop or people on their phones, and most internet traffic to websites nowadays comes from phones, probably through a browser like Chrome or the Samsung browser. Now I can just copy and paste these to make it go faster. For the invalid-password one, I just change the steps to match my title (oops, if I could type today): invalid password. And this one says an invalid username and no password. So: navigate to the login page, and then you can either say "enter nothing for your username and password" or just say "tap Submit." I prefer being really clear without adding extra steps, so we'll just go to the page and tap Submit. Oh, actually, we need our password here; I'm getting ahead of myself. Enter a valid password; change that to step three. Cool.

Then we can enter our expected behavior. "Page loads according to mockup designs." The PRD doesn't explicitly tell us whether we get moved to a dashboard or redirected where, but we can assume: "Page redirects the user to the profile homepage." Then: "Error displayed when we have an invalid password," and that error is "Incorrect name or password." I hope that makes sense; it's the same error as before. They usually don't tell you which field is wrong, because that would let attackers confirm they've guessed a password or a username correctly, so you want to keep it as ambiguous as possible. For "log in from the login page with invalid username and no password," it might actually show two errors: "Incorrect name or password" and "A password is required." There's some interesting logic there: from the error messages, you could start to deduce which field was wrong if you'd entered a correct password for somebody, unless the system only ever gives one blanket generic message. And then our preconditions. For some of these we don't need anything: not applicable. Anywhere we're using valid information, like a valid username, the precondition is just a registered account. Here we're using a valid password, so we do need the registered account for that one too. And it only took us maybe six or seven minutes to create these test cases.
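To recap the demo, here's a sketch of a few of those cases written out with all four elements; the wording mirrors what we put in the sheet, and this is the shape you'd then carry into TestRail.

```python
# Recap sketch: a few of the demo's login cases written out with all
# four elements. Wording mirrors what we put in the sheet.
demo_cases = [
    {"title": "Log in from login page with valid account credentials",
     "precondition": "A registered account exists",
     "steps": ["Navigate to the login page",
               "Enter a valid username",
               "Enter a valid password",
               "Tap Submit"],
     "expected": "Page redirects the user to the profile homepage"},
    {"title": "Log in from login page with valid username and invalid password",
     "precondition": "A registered account exists",
     "steps": ["Navigate to the login page",
               "Enter a valid username",
               "Enter an invalid password",
               "Tap Submit"],
     "expected": "Error displayed: 'Incorrect name or password'"},
    {"title": "Log in from login page with no username and no password",
     "precondition": "N/A",
     "steps": ["Navigate to the login page", "Tap Submit"],
     "expected": "Both required-field errors are displayed"},
]

for case in demo_cases:   # quick readability pass before TestRail
    print(case["title"], "->", case["expected"])
```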
So it doesn't take very long at all. The hardest part is coming up with the titles, figuring out what actually needs testing in the first place, and then getting them written down. Don't worry; this is just a short introduction, and you don't have to capture all of it now. You'll be doing some of this for homework later on, and you'll be able to refer back to the slides. Just keep in mind that a title should be short and sweet and describe what it is you're testing.

So, to finish off the evening, let's go over some interview questions: things that have been asked of me and of others, and that might get asked of you. This is probably the most important part. Can we name some common types of test documents used in software testing? We went over a few documents today, but this asks for test documents. We covered test plans, which is a full written document, and test cases. One we didn't go over, and it's not really a document, is the test set: a test set is just a collection of test cases, that's all it means, kind of like a folder. Think of a set as a folder. But test plans and test cases are the most common and most important.

Then: how do we write an effective test case, and how do we ensure a test case is effective? It's kind of rewording the same thing, but we want to think about the elements of a test case, what goes into it. The whole industry uses these four sections; there's no regulatory body for QA engineers like there is for lawyers or doctors, but the four parts are universal. The title, which gives a good indication of your object, your action, and your condition: what you're testing, where you're testing it, and how. The precondition: the things that need to be in place before you execute, or the numerous steps you'd otherwise have to do to get to whatever you're trying to test. The steps, in sequential order, describing how to interact with and navigate the application to test that particular feature. And the expected result: what's correct, what's supposed to happen, what the developers and product managers intended, which we can pull from the PRD most of the time. Those four elements make an effective test case.

And then our test plan, which is maybe something we'll end up writing, but just be aware of what it includes: our introduction; our test approach, meaning what kinds of functional and non-functional testing we'll be doing; our scope, the area we're responsible for testing, because we might not have to test every aspect of an application (on something like a phone, we might be responsible for just the calling portion while another team does the internet portion); our environments, the tools and software we'll be using, with software versions; our risks, whatever might impact our testing; and our team members, who's on the team and what role they have, because our engineers will inevitably contact us to test some portion of it or to clarify bugs or test cases.
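As a fill-in-the-blanks view of those test plan sections, here's a small sketch; the field names follow the list above, and the sample values are invented.

```python
# A fill-in-the-blanks sketch of the test plan sections listed above.
# Field names follow that list; the sample values are invented.
from dataclasses import dataclass

@dataclass
class TestPlan:
    introduction: str
    approach: list[str]      # functional / non-functional testing types
    scope: str               # the area this team is responsible for
    environments: list[str]  # tools, OS/browser versions, software
    risks: list[str]         # anything that might impact testing
    team: dict[str, str]     # name -> role, so engineers know who to ask

plan = TestPlan(
    introduction="Login and registration feature, release 1.2",
    approach=["functional", "UI", "compatibility"],
    scope="Login/registration only; checkout is another team's scope",
    environments=["Chrome 124", "Firefox 125", "Android 14"],
    risks=["PRD still changing during the sprint"],
    team={"Alex": "QA lead", "Patrick": "QA engineer"},
)
print(plan.scope)
```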
That was quite a bit today, guys. Any other questions? Anything you want me to recap or clarify, or just QA questions in general, anything I didn't answer before? Cool, I think we're looking pretty good. Again, make sure you're on Slack, and there should be a check-on-learning quiz in your LMS where we review what we covered today. Good question: how do you get on Slack? The support folks here can help you with that. Otherwise, I'll be back in the next couple of lessons, where we'll probably end up talking about Android and testing Android applications, which is really what I like doing. All right, thank you guys, and I will see you next time. Thank you.