Alrighty, good evening everyone and welcome to the Understanding Behavior live stream. I'm so glad you could make it today. We are wrapping up experimental design tonight with part three. We're going to go over our multiple baseline designs and their variations, we're going to talk about changing criterion designs, we'll do a little quiz, we'll talk about internal and external validity, and we'll even talk about experimental analyses towards the end. How we doing? How we doing? Yeah, it looks like the consensus in chat is that experimental design is a scary topic, but I've got all the right info for you, and hopefully I'm making this easier to understand and you're getting more comfortable with it as we wrap up with part three tonight. If not, you can always watch the streams again on YouTube; they're available immediately after we finish up. And if you have the Beast Slayer course, you can also get the more refined versions that get you right to the point. Those have a few extra details and more questions for you to practice with as well. So check those out if you're feeling it. But yeah, how we doing tonight? Any new faces here? Anybody testing soon? Should be a fun one tonight. I'm excited to wrap up experimental design. We used to do this in two parts, but recently it's been taking three, and I think the extra time is good because this topic is hard for sure. All right, we've got a couple people testing soon. We've got Ally on the 16th, Julie on the 26th, Casey on the 24th, Jacqueline testing Wednesday. Good luck. Coral also Wednesday. Good luck. All right, lots of people testing this month. Couple in October, couple in November. Aloha, maha. Good to see you here. All right, lots of October. Oh, we've got Brianna Moore testing tomorrow. Hope to hear some good news. Ah, good luck to everybody who's testing soon. You guys are going to kill it. I promise. Well, I can't promise, but you guys are going to kill it. And if not, that just means you're going to hang out with Nick longer. All right. Well, if you haven't yet and you're looking for an extra boost in your preparation, make sure you check out understandingbehavior.shop. We've got all of the best products for you, including affordable mock exams that give you really good feedback and teach you what your mistakes are. We also sent out an error analysis form this week in our emails. If you didn't get that, make sure you're subscribed to our email list, and I'll post it somewhere else too. This is a really awesome error analysis form that you can use with your mock exams. Basically, it's a way for you to identify how you're erring on your mock exams so you can target those errors more specifically. With the feedback from the mocks plus this error analysis, you're going to get the most value possible out of your mock exams. So make sure you check it out. If you're subscribed to our email list, you should have gotten the link this week. I'll probably send it out in our weekly email next Monday as well, so make sure you're signed up for our emails so you can get it. And if you're not signed up, you can go to our shop, click here, put your email in, and you'll be on our list. Sweet. Yeah, Julie got it and is going to use it.
Yeah, it's a great tool, and you can't really fix your problems unless you know what your problems are. Same thing with our theory of behavior analysis, too: we have to do assessment first before we can choose an intervention, because if we just try to intervene blindly, we're probably not going to be as successful as if we made a function-based intervention that targets the reasons why the errors are occurring. So make sure you check that out, make sure you sign up to our email list if you're not on it already, and we'll send it out again on Monday. All right, cool beans. Like I said, we're going over experimental design part three today, and we're starting off with our multiple baseline design. I'm going to give you guys a quick little probe on multiple baseline designs. First thing I want to ask: how many interventions can we use in a standard multiple baseline design? Is it one intervention, or more than one? What do you think? We'll start there. So, first question again: in a multiple baseline design, can we use one intervention or more than one intervention? Oh, I also forgot to mention we're going to do some giveaways of some mock exams later, and to make those giveaways happen a little bit faster, make sure you're hitting that thumbs up button on the video. All right, cool. The consensus is looking pretty good here. The multiple baseline design allows us to use just one intervention. We're going to test out a single intervention and look at how it affects three different behaviors. Okay, cool. I'm going to probe you one more time before we go over all of our content for the multiple baseline design: how does a multiple baseline design show experimental control? Ooh, this one's going to require a lengthier answer, but that's our next question. What do you guys think? Let's see what we got. We've got Chris saying consistent steady levels. Okay, so when do we need to see steady levels, and is it going to be steady the whole time? All right, Crystal is saying baseline logic. Can we break that down a little more? Not wrong, but it could use some more detail. Okay, Danielle's got a few more details: "application of intervention to other settings, individuals, or behaviors with baseline logic." Okay, on the right track. I like where we're going with these. Oh, Shante is absolutely killing it with her explanation here: replicating AB relationships across different dependent variables. I honestly couldn't say it better myself. That's great. What else we got? We've got Gungeon: changing the IV shows its effect across different participants, settings, and situations. I like that. That's good. Yeah, Ally is talking about baseline logic, so we can see verification and replication in each condition. Okay. Yeah, Riann, I really like that: replicating AB relationships across many different DVs. Good. Yeah, Annie Leva, looks good. Great answers here, guys. Love it. Love it. Love it. All right, I think Shante just about stole the words off the slide. She absolutely killed it with that one.
So, to show experimental control in a multiple baseline design, what we're going to do is replicate an AB relationship across either different settings, people, or behaviors. Some of you also labeled what these different settings, people, or behaviors are: they're our dependent variables, which, remember, always refer to the behaviors we're tracking. In this graph, what we have is the behavior of Rachel, Sarah, and Jackie. These are our three different dependent variables, so this one runs across different participants. Good. A way to visually identify a multiple baseline design is to look for stacked graphs, but more importantly, look for the phase change line that creates a staircase pattern. That staircase pattern shows that the dependent variables have different-duration baselines, which is a requirement for our multiple baseline design. We can see here that for Rachel we took five baseline data points, for Sarah we took 10, and for Jackie we took 15. So let's go back to our baseline logic real quick. Why would we extend this baseline here? Is this to show prediction, verification, or replication? So, we took baseline data for Rachel, we implemented the intervention, but we stayed in baseline for five more sessions with Sarah. Is this to demonstrate prediction, verification, or replication? Getting back to our baseline logic: if you need a review on that, check out part one. Actually, I think we reviewed it in part two last week. I might be wrong, I forget. Just watch them all; you'll get it. All right, we've got a mixed bag of answers here: about half saying prediction, half saying verification. Okay, okay, okay. So let's talk through this once again. Remember, a prediction is predicting that a trend is going to continue. We can make a prediction about Rachel's data path here: we have a pretty clear zero trend at a level of about 40. So our prediction would be that any additional dependent variables are going to stay around that 40 level with no trend. So when we have the intervention going with Rachel and we're continuing baseline with Sarah, what we're able to demonstrate is verification. Verification is when we prove that our prediction about the baseline condition is accurate. So right here would be our prediction, and we prove that prediction is accurate with verification. If you answered verification there, you are spot-on correct. Awesome. We also test for verification later on. We can make this same prediction that the zero trend is going to continue with Sarah, and we'd predict that it would continue with Jackie, our third dependent variable, and we can prove that when we actually extend that baseline with her, showing verification again. So remember, verification occurs when we prove that our prediction about a baseline condition is accurate. We had no trend at a level of about 40 for Rachel; we predicted that the zero trend would continue; and when we continued the baseline with Sarah, we demonstrated verification, in that we proved this prediction was pretty accurate.
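If it helps to see that prediction-then-verification logic spelled out, here's a minimal sketch in Python. The numbers are made up to resemble the Rachel and Sarah example, and the tolerance value is just an assumption for illustration, not a real standard.

```python
# A minimal sketch of "prediction, then verification" with hypothetical data.

def predict_level(baseline):
    """Predict that a stable, zero-trend baseline continues at its mean level."""
    return sum(baseline) / len(baseline)

def verifies(prediction, extended_baseline, tolerance=5):
    """Verification: the still-in-baseline DV stays near the predicted level."""
    mean = sum(extended_baseline) / len(extended_baseline)
    return abs(mean - prediction) <= tolerance

rachel_baseline = [41, 39, 40, 42, 38]   # Rachel: no trend, level around 40
sarah_extended  = [40, 38, 41, 39, 42]   # Sarah stays in baseline 5 more sessions

prediction = predict_level(rachel_baseline)      # about 40
print(verifies(prediction, sarah_extended))      # True -> prediction verified
```

If Sarah's extended baseline had drifted away from the predicted level, that check would fail, and our confidence in the baseline prediction would be weakened.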
We did that once again with Sarah and Jackie's data. We predicted that if we were to stay in baseline again, we'd have no trend at a level of about 40, and then we proved that prediction accurate with Jackie when we continued her baseline and saw a very similar pattern. So that was our verification. We can also make predictions about our treatment conditions. We'd look at the trend of any part of the data path here. We see that it's slightly ascending and then levels off, maybe around 80%, maybe even 85 to 90. So we'd make the same kind of prediction: hey, if we were to try this phase with a different dependent variable, we're going to see a pretty similar pattern, slightly ascending at first and then leveling off around 80%. That's our prediction, and we show that it's accurate when we implement the intervention with our second dependent variable, which in this case is Sarah. Remember, the phase change line separates our baseline from our intervention phase, and we see a very similar pattern: slightly ascending at first, then leveling off. It didn't climb quite as high as Rachel's data path; it looks like it's just a little bit lower, but it's pretty similar. With Jackie, we also see a similar pattern, though it's more gradually ascending throughout. Again, it shows a similar change: that ascending pattern with our intervention, then leveling off around 80% or so. If we had a few more data points with Jackie, that would make this even stronger. So remember, to show experimental control, we want to see replication across our different dependent variables. We had DV1, DV2, and DV3, and in this one we had pretty good experimental control. And remember, experimental control is a spectrum: it goes from really weak to really strong. In this one, we showed those replications pretty decently. They're not perfect, but we did see a pretty significant increase in all of our dependent variables with our intervention, so we'd say we have a decent amount of experimental control here. Cool, cool, cool. You guys hanging with me? I know, I know it got a little confusing, but we're hanging in there. Just to recap one more time: to show experimental control in a multiple baseline design, we need to replicate this AB relationship across our different dependent variables. We can make predictions about both conditions. We want to look at the change from our baseline to our intervention condition: do we replicate that change across our different dependent variables? The more our data do so, the better our experiment is; the less they do, the weaker our experiment is, or the weaker the experimental control we can claim. Cool. Looking groovy. Awesome, awesome, awesome. So look for the phase change pattern, where intervention is implemented with one dependent variable while the next one stays in baseline.
So again, we want to look at when each phase changes and note that baseline continues for the following dependent variables once we implement the intervention with the previous dependent variable. And again, just to recap, we demonstrate experimental control by replicating that AB relationship across many different dependent variables. If we have two, that shows some experimental control; the more we have, the better. Three is pretty common, but you might see a multiple baseline with five dependent variables. It all depends. All right, let's talk about the utility of our multiple baseline design. This is our most commonly used design, and the reason why is that it's relatively easy to run; it's easy to get some good data with a multiple baseline design. With that said, it's also the weakest demonstration of experimental control compared to our other designs. Does anybody remember, from week one or week two, what feature the withdrawal design has that a multiple baseline doesn't, the feature that makes withdrawal such a strong demonstration of experimental control? Yeah. Good, good, good. So the withdrawal design is our strongest form of experimental control because we have that reversal back to our initial condition: whether we started with baseline and revert back to baseline, or, in a BAB design, we reverse back to the intervention condition. In general, when we have a really clear removal of an intervention and data that correspond well to it, that shows really strong experimental control. In our alternating treatments design, we didn't have a complete removal back to a baseline condition, but we had removal and replacement. So withdrawal sets the standard for how much experimental control a design can show; the alternating treatments design is a slight step below that, because we don't reverse back to our initial baseline condition; and the multiple baseline design is significantly weaker, because it doesn't have removal of an intervention at all. Multiple baseline designs in general are really nice for irreversible behaviors. Going back to the features of this design: since we don't need to do a reversal to show experimental control, the multiple baseline design is perfect for irreversible behaviors, because we wouldn't expect to get the same baseline-level data if we were to remove our intervention. So it's really great for irreversible behaviors. Overall, it's a pretty flexible design. You can use it with a wide variety of behaviors: skill acquisition targets and behavior reduction targets alike. It's just not the best for severe or dangerous behavior. The reason is that for severe or dangerous behavior, we'd be requiring the behavior to stay in baseline for really long periods of time, and that's not great. If we identify that an intervention is effective, we'd want to implement it immediately; it would maybe be unethical not to do so. Having those extended baselines really makes this design not so great for severe and dangerous behaviors.
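Before we move on, here's a rough sketch of how you might check that "replicated AB relationship" idea across the three dependent variables. All the data are hypothetical, and the 20-point threshold for a "meaningful" level change is an invented stand-in for what you'd really judge by visual analysis:

```python
# A rough sketch: did the AB level change replicate across each DV in a
# multiple baseline design? Data and threshold are made up for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def ab_effect(baseline, intervention, min_change=20):
    """Did the level change meaningfully from A (baseline) to B (intervention)?"""
    return mean(intervention) - mean(baseline) >= min_change

# One staggered record per DV: (baseline sessions, intervention sessions).
dvs = {
    "Rachel": ([40, 42, 39, 41, 40], [65, 75, 82, 85, 84]),
    "Sarah":  ([38, 41, 40, 39, 42, 40, 41, 39, 40, 41], [60, 70, 78, 80]),
    "Jackie": ([40] * 15, [55, 65, 72, 78]),
}

replications = sum(ab_effect(a, b) for a, b in dvs.values())
print(f"AB effect replicated across {replications} of {len(dvs)} DVs")
# The more DVs that replicate the change, the stronger the experimental control.
```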
What is better for irreversible behaviors, multiple baseline or alternating treatments? It really depends on what you're trying to show. If you're trying to test multiple interventions with an irreversible behavior, then you'd need an alternating treatments design. But if you're just trying to test one intervention, then multiple baseline would be better. So we'd need a little more info; both can be used, though. All right, those are the basics of our multiple baseline design. We've also got a couple of variations of the multiple baseline design. We're going to start off with the multiple probe design. A lot of students miss a lot about these, so I'm going to give you the good information you need on them. Multiple probe designs are used specifically for irreversible behaviors. The features that make a behavior a good candidate for a multiple probe design are that the behavior will not be learned in baseline and the behavior will not be unlearned after our intervention. A probe is essentially just a quick test: we're quickly testing whether a skill is present or not. We can see this on our graph: we do this quick test here, three data points. Okay, cool, they don't have the skill, and then we can move on to our next phase. Looking at our second dependent variable, what they did is take three data points to test whether the skill is there. Nope, it's at zero. Then they took a lot of time off; they didn't collect any data on dependent variable 2. And then, right before they implemented the intervention, they went back to probing, like, hey, let's just check and make sure they still don't have the skill. Cool. So again, a probe is just a quick test, and in a multiple probe design we do lots of these quick tests at certain points, with lots of space between our probes. One thing to pay attention to: when we implement our intervention, we're not taking probe data; we're taking data the entire time. Look at this instruction-with-CTD phase: every data point is recorded. We don't skip any data points here. In this one, we had two dependent variables, independent responses and prompted responses, and we got the percentage for both. What we wanted was to get them to 100% independent responses, and once they did that for three consecutive data points, they went back to probing the skill. Think about it: if we taught them this skill, they've learned it to 100%, and then we do three sessions of probing and they're still at 100%, I don't need to test them every day to ask, hey, are you still at 100%? These probes become sufficient. So again, this becomes really important in how we choose our targets: we want targets that are not going to be learned in baseline and not going to be unlearned after our intervention. It's not just about saving time; it's about saving time when the data collection is not going to be useful for us. Hey, this extra data wouldn't be useful. If we were to keep taking data here, we'd just be applying that trick where trending data are likely to continue to trend.
We'd just see hundreds: 100, 100, 100, over and over. So those data were completely unnecessary to take. It saves time because we can make a really accurate prediction about what's going to happen: yeah, they've still got it. So then we go back to probing, and we do one final probe at the very end of the experiment to make sure they've still got it. Cool. Note that we also do this for behaviors that are not going to be learned in baseline. Same kind of thing: if we don't teach them, we just make the same prediction, hey, it's going to be zero the whole time. We don't need to keep testing them on it; it's a waste of time, because the data are very predictable. And then we continue that with our probes. So that's our multiple probe design. Again, we only use it for irreversible behaviors. Cool, cool, cool. Before our next variation, Danielle's asking a quick question: what happens if the probe data isn't 100%? Would 80% be appropriate? It really depends. If you trained it to 100% and got steady data, it's probably not just going to drop to 80%, and you might want to look at whether a different factor was affecting that 80%. But let's say, for example, you got 100, 80, 100. That would affect our data a little bit. What we might do is extend the probe phase a little longer until they get three 100s in a row, and then we'd phase change and go back to probing. So we could see, hey, was there a fluke in that one session? Wait for those three 100s. And if it's consistent, like 80, 100, 100, that might show some weakness in our intervention, at least in maintenance, so maybe we should have had a stronger mastery criterion. It provides information either way. Good question. All right, that's our multiple probe design. I'm going to move to our next graph; let me clear this off real quick. Next up is the delayed or non-concurrent multiple baseline design. These are actually a little different from each other; I meant to update this slide. Essentially, a delayed multiple baseline design is when our baselines don't necessarily overlap with each other: some baselines take time before any data collection starts at all. The non-concurrent version is specifically when we run the legs at totally different times. It might be March when we take baseline data with client one, and then in June we start client two and begin taking baseline data with them. Those legs are occurring at totally different times. Delayed just means a delayed start to our baseline data collection. So look for those: they're not too different from each other, but delayed will specifically show that space before baseline data collection starts for some of the dependent variables. Non-concurrent might not even be displayed on the graph, but essentially it's when we start the experiment with the participants at different times. Very similar; the difference is very nuanced. Don't worry too much about it. And yeah, cool.
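Quick aside before we take questions: if it helps to see that multiple probe logic of "only collect data when it tells you something new" spelled out, here's a loose sketch. The phases, session numbers, and probe schedule here are all invented for illustration, not from any real study.

```python
# A loose sketch of multiple probe scheduling with invented sessions/phases.

def should_collect(session, phase, probe_sessions):
    """Record every session during intervention; otherwise only probe,
    since baseline/maintenance data are highly predictable."""
    if phase == "intervention":
        return True
    return session in probe_sessions

# Probe bursts: at the start, right before intervention, and a final check.
probe_sessions = {1, 2, 3, 14, 15, 16, 30}
intervention_window = range(17, 25)        # hypothetical teaching phase

for session in range(1, 31):
    phase = "intervention" if session in intervention_window else "probe phase"
    if should_collect(session, phase, probe_sessions):
        print(f"session {session:2d}: collect data ({phase})")
```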
Yeah, we have a couple people asking: why would we do this? Why would we use participants that were added later? Sometimes, if you're doing research, you just don't have a proper candidate yet. Say you started your research with two candidates, and then three months later a great candidate pops up. You get them into the study later, and sweet, we have them in, and now we're using this design. That's the most common reason for using a non-concurrent multiple baseline: you haven't identified the right participant yet, but you want to include them in the study. Good question. All right, that's all we've got for multiple baseline designs and their variations. Hopefully those are making sense. Yeah, for sure, if you get new clients in and think, yeah, this guy's a great participant for this, let's get him in, that's a great time to do it. And if you open a new clinic, you might get some new participants there, too. Sweet, sweet, sweet. All right, I almost forgot, I have a quick question for you. A researcher is interested in comparing the effectiveness of two different teaching methods on the acquisition of new vocabulary words by children with developmental disabilities. Which experimental design would best suit this purpose? Let's see those answers. Let's see those rationales. What have you got in your rationale? Make sure you identify the key details that lead you to your answer. All right, I'm liking the rationales coming in. There are multiple things to consider for this question, so make sure you highlight more than one thing in your rationale. I like it, Gungeon's got both of those. Excellent. Good. Yeah, Danielle's got both, too. There you go, Paige. Good, Amy. Nice job. Yeah, there you go, Lena, nice job getting both of those. EA, too. Nice. Good. Yeah, Carlen, that looks great. Nice job here, crew. Awesome work. Let's go ahead and review it. Our best answer is going to be B, the alternating treatments design. There are a couple of different things we needed to look at. First, we want to compare two different teaching methods. This rules out our multiple baseline design, because the multiple baseline design only allows us to test one intervention; the other three options can all be used to test multiple interventions. The next thing to look at is the dependent variable: the acquisition of new vocabulary words. This is a behavior that is not going to be easily reversed, which means our reversal variations are not going to be very useful here, since we'd fail to get verification. So our alternating treatments design is best: we're just flip-flopping between the two methods and watching the progress to see which one makes the biggest climbs. Good. Nice job there, Karina. A lot of you got both parts of that; if you did, give yourself a huge pat on the back. You nailed it. All right, our last experimental design is the changing criterion design. Changing criterion designs are the most unique, I would say, out of our main experimental designs. We generally use this design to gradually change a behavior that's already in the individual's repertoire.
So, a behavior they already know how to do, they're already doing it, and we're just gradually changing the rate of that behavior. The way we do that is with a goal line, or horizontal prediction line, to shape the behavior. On this graph, the dashed lines are where our goal lines are set. So let me probe you guys one more time: how do we tell if we have experimental control in this design? I'm going to give you a hint, because I'm going to rule out the most common answer: it's not due to a bidirectional change. Ooh. All right, there's your question: how can we tell that we have experimental control within this design? And your one hint is that it's not going to be due to a bidirectional change. All right, I'm seeing some good answers here. Yeah, Della, Amy, on the right track. Good, Carly. Yeah, Crystal, I like where you're going. Christine, I need a little bit more there. Good. Yeah, Riannan's on it. Annie Lea's on it. Good, Olivia, I love that, great explanation. Good, Mirth. Good, Shante. Awesome. These are looking awesome, crew. I can tell you guys have been studying and learning this the right way. Yeah, EA, on the right track. Yeah, Hie, on the right track. Perfect, perfect, perfect. So, this design shows experimental control when we see the level of the behavior conform to the level of the goal line. What we want to see is the data points sitting right on top of that goal line; the better they align with it, the stronger the experimental control we can say we have. We'll talk about that more in a second; I thought it came earlier in our slides, but it doesn't. Again, the main thing is that we want to see the level of the behavior conform to the level of the goal line, staying right on that goal line. That's when we have really strong experimental control. If it's above or below that, it shows somewhat weaker experimental control. All right. When we set our goal line, or horizontal prediction line, there's usually some consequence associated with meeting the criterion of that goal line. It might be, hey, if you complete 10 math problems in this phase, then you get a Snickers after the session, or whatever. So there's usually some kind of consequence, good or bad, associated with meeting it or not meeting it. Changing criterion designs can be used for behaviors targeted for increase or decrease. The only caveat, again, is that we're not really going to use it for severe or dangerous behavior. Those are usually behaviors we want to rapidly decrease rather than gradually decrease, so for anything really severe or dangerous, this is not the design to go to. But if a gradual change is fine, we can use it for increase or decrease targets. Good. All right, let's talk a little more about experimental control. Again, this design shows experimental control when the level of the behavior conforms to the level of the goal line: we see those data points line right up with the goal line, and the better they do, the stronger the experimental control we say we have. Farther away from the goal line equals less experimental control. So let's say we set the goal line at 10, but we saw the behavior go to 12, and it just stayed at 12.
Clinically, that's a great outcome: sweet, they're doing more math problems than we even expected. But remember, in a good experiment, when we set something at a level, we want the behavior to actually be at that level; that's a really strong relationship between the independent variable and the dependent variable. So if it's above that line, it's not quite as strong, because, hey, we set it at 10 but it's at 12, and that's just farther away from our prediction. So this would still be a great clinical outcome, but a weaker experimental outcome. There are a bunch of different ways we can enhance experimental control, and they all involve manipulating the criterion changes. We can do more criterion changes: in this one, we did quite a few. It looks like we did one, two, three, four, five, six changes. That's a lot. We can also vary the size of the criterion jumps. For example, in this one we increased it from 10 all the way to 15, but in this one here we increased it from about 23 to 25. So we did small jumps and big jumps, and the more we vary those, the stronger the experimental control we have. Last up is what's known as a bidirectional change. We don't have a bidirectional change on this chart, but a bidirectional change is essentially when we move the goal line back to a previous level. Let's say we're at this phase, and we change the goal line back to our previous one. If we saw the behavior conform there, that's a way for us to flex on experimental control: hey, even when we changed it in the opposite direction, the behavior still conformed to the goal line. Yep, this intervention really does the thing. So the bidirectional change is just another way to enhance experimental control. But if we do a bidirectional change and the behavior is still all over the place, that still doesn't show good experimental control. So the main thing to focus on is that the level of the behavior conforms to the level of the goal line; the bidirectional change is just an extra element we can add. Same with the other elements, doing more criterion changes or varying the size of the jumps: they're additional ways to really say, yeah, this worked well. Okay, let's go ahead and take a look at this graph pulled from JABA. This one is a changing criterion design, and what they did in this experiment is they had these different participants on recumbent bikes, just stationary bikes, and the bikes were equipped so that whenever the participants met a certain reinforcement schedule, pedaling about 80 times in this phase, a little light would go on, and the light signaled that they'd get some kind of reinforcer at the end of their cycling session. So let's take a look at one of these charts. For the first one, they set the reinforcement schedule at VR 80. After about 80 pedals, they would see that little light go on, which, again, signaled that some kind of reinforcer was coming later.
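Before we dig into Scott's data: if you want to put a number on that "conformity to the goal line" idea, here's a quick sketch. The deviation measure and all the data points are hypothetical, not from the JABA study; real visual analysis is the standard, and this is just one way to think about it.

```python
# A quick sketch of quantifying "conformity to the goal line" in a changing
# criterion design: average distance from each data point to its criterion.

def conformity(data, criterion):
    """Mean absolute deviation from the goal line; smaller = tighter control."""
    return sum(abs(x - criterion) for x in data) / len(data)

phases = [
    (10, [10, 11, 10, 9, 10]),   # criterion 10: data sit right on the line
    (15, [18, 19, 18, 18, 19]),  # criterion 15: consistently above the line
]

for criterion, data in phases:
    print(f"criterion {criterion}: mean deviation {conformity(data, criterion):.1f}")

# Phase 1 shows strong control (deviation ~0.4); phase 2 might be clinically
# great, but experimentally weaker, since the behavior overshoots the prediction.
```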
So when they set it at VR 80 for Scott, our first participant, we see that he was actually way over that reinforcement schedule. Our dependent variable here is how many revolutions per minute they're doing. We set it at VR 80 and he stabilized at over 100. So for the next phase, they just increased that VR schedule; essentially, they asked, hey, if we set the VR schedule at a higher requirement, will he go faster? And we saw the behavior conform a lot closer to that goal line. Then they moved it up again, and again the behavior occurred much faster than where the goal line was set. We had a question about how to set the criteria for this one. I think they just took the highest baseline data points, or maybe an average of the high data points, and set it slightly above there. It looks like his highest point in baseline without the light going on was at about 80, so they set the first schedule at VR 80 for this first participant. Again, it's really going to depend on what you're doing, but essentially you want to set it at a level that's attainable for them. For the second participant, the first criterion was set a little lower than the highest data points; it was more about the average data points. Actually, this graph is kind of weird. It looks like his first data point here was at about 80, but the dashed line looks like it's more at about 70, and the next is around VR 85. These are different participants; that's why we have different baselines. In this one, they actually also did a changing criterion with a reversal. After a few different schedule changes, they tested: hey, what if we just don't use the light? Is it necessary? Are they still going to pedal fast? And for all of their participants, we saw a descending trend when they reverted back to baseline. So it's like, yep, okay, it is the light and this reinforcement schedule that are causing that increased rate of pedaling. So yeah, this one does have a reversal element in it, a little combination design there. All right, cool. Now I want you guys to look at this graph; we'll just say participant one, two, three, and four. Tell me which phase or phases have really good experimental control. I'll make it a little bigger so you guys can see, but tell me one or more phases with really good experimental control. Label the participant and which phase it was; there are multiple correct answers here, for sure. So which phases had really strong experimental control for our changing criterion design? Again, label the participant (just use one, two, three, and four) and which schedule it was on. Yeah, Martha, you can use a changing criterion design for decrease targets. If we want to decrease the number of cigarettes someone smokes each day, we could set that goal line lower and lower. All right, cool. Let's take a look at a few of these answers. We have Scott, VR 115. Let's take a look at this one. Yeah, this one looks really great: really strong experimental control, where most of the data points are pretty much exactly on that goal line.
It increases slightly above towards the end, but overall it looks really great there. Same with Scott, participant one, at VR 130. Very similar pattern, where most of the data points conform to the goal line and it just goes slightly above. That slightly-above might be a good indicator that we're either ready to increase the goal or to do a phase change, like they did here; they wanted that reversal element. All right, let's see what else we've got. Magdal says Peter, VR 95. Let's look at Peter, participant four. This one, not too bad. Looks pretty good; most of these data points are pretty close to that goal line. Not the strongest one, but not bad at all. I'll give you that one for sure. Let's take a look at another one. Somebody is saying P3, VR 125. Oh yeah, this one is gold. Looking at this one, the behavior is pretty much right on that goal line the whole time. We've got those two little dips there, but they're very insignificant. This one has really strong experimental control in this phase. Good. Baseline is technically phase one, I guess, but there's no criterion set in baseline, so it's way more likely to vary, and we saw that. Let's see, we've got Delana saying participant two, VR 115. Let's look at this one. Not too bad. The others I'd give a grade of A for experimental control; this one I'd say is a little weaker, maybe a B. It took a few sessions for them to adjust, and then we see it stabilize pretty well. So we'll give it a B, just because of that little warm-up period at the start of the new phase before it increased. But overall, not too shabby. What about participant one at 130? Yeah, participant one at VR 130 also looks really great. I'd give that one another A; that's killer experimental control there. A little weaker on this one, where it's just consistently way over. Good. Phase four for P1? It depends how you're counting, but the fourth criterion would be this VR 130. Again, a little weaker, since it's always above. This one's all above, too, and this one went above after a few data points. So yeah, I think we've got most of the important ones with really good experimental control. Definitely these two, this one and this one, show very strong experimental control; we see the data points right on that line. Cool, cool, cool. Yeah, for Scott at VR 130: if we're looking at this last phase, we potentially could have set the reinforcement schedule even thinner and might have gotten faster responding out of him. I think if we continued this experiment, that would be the move: set the next criterion at, say, 150 and see if he performs even faster. And I would bet he does. I would bet he does, for sure. Cool. And yeah, Danielle is saying she bets they all have different athletic abilities. Absolutely, and we see different performances, too. Scott was just a maniac on this thing, pedaling 140 revolutions per minute, while even with the highest schedule for Peter, his highest is maybe 120 or 130.
So that individual ability is definitely going to affect the results for each participant. But think about what we're testing here: essentially, does this goal line change the rate of their pedaling? And it absolutely does, and we show that consistently with each participant. That's the most important thing. And then we prove it with this reversal to baseline, too: yeah, this is not a fluke. It's not just that they're used to pedaling faster now. Nope, this goal line and the light that signals the reinforcer are really crucial to their increased speeds. They're not going to go as fast if they're not getting the reinforcers, and we show that very clearly with this extra reversal. Cool beans. Nice job, guys. That looks great. I've got a couple questions for you. Let's check these out, and then we'll go into our next topic. You are working with a client who exhibits three different types of disruptive behaviors in the classroom: shouting out without raising a hand, leaving their seat without permission, and distracting other students. You plan to introduce a token economy system to address each behavior one at a time. Which experimental design would allow you to easily demonstrate the effectiveness of the token economy on each type of disruptive behavior? What do you guys think? Let's see it. Where are those rationales at? You know I'm asking for them. All right, I like where you're going with that, Casey. Looks good. So it looks like multiple baseline design is a pretty common answer here. Can we use multiple baselines with behaviors targeted for decrease? What do you guys think? Some great rationales coming through. I like where we're going. Think about that: can we use a multiple baseline design for behaviors targeted for decrease? I'm seeing a lot of yes on that. Okay, okay, okay. All right. Yeah, Casey, I like those details. Taylor: "now you have me second-guessing my entire life." Well, remember, to get over that anxiety, make sure you have a rationale and confidence in your rationale. If you're going to switch your answer, make sure you have a better rationale for the answer you're switching to than for your previous one. If you struggle with anxiety or with switching your answers, rationales are the way to counteract that. Good, Danielle. All right, I'm taking final answers. You've got five seconds. All right, let's do it. Okay, a couple people are putting their final answers in. Cool. Good, Olivia. Awesome. All right, the best answer here is the multiple baseline design. Nice job. So I asked that trick question: can you use a multiple baseline design for behaviors targeted for decrease? Absolutely. The only exception is that we're not going to use it for severe or dangerous behaviors. So let's take a look at our dependent variables. We've got shouting out without raising a hand; I've never heard of that hurting anybody. Leaving their seat without permission: also not too dangerous. And distracting other students: not very severe or dangerous either. So we checked that box; a multiple baseline design could work here. We also want to see how many interventions we're looking at. It looks like we're using one, a token economy system, and we want to address each behavior one at a time.
So the multiple baseline design gives us the perfect format to do that. We could do it with other designs, but multiple baseline is going to be the easiest here, and sufficient to give us everything we need. Sorry, Taylor. You know, the exam is more brutal than I am in my class, so get used to it. Toughen up, learn from your mistakes, and you'll be grooving. Alrighty, let's check out one more. Ronaldo has three different forms of self-injury: headbanging, scratching, and wall charging. Due to the intense severity of these behaviors, an effective intervention needs to be discovered and implemented immediately. Which of the following would be the worst experimental design for this situation? All right, I'm going to point this out so you don't miss it: we're looking for the worst experimental design. I would highly recommend using the play-it-out strategy for this question. If you're not familiar with that strategy, you run through in your head what each of these situations would look like, and you find the reason why one would be worse than another. I'm already seeing a huge mix of answers in chat. Looks like we're pretty split between A and C, with a couple of D's in here, too. So again, for your rationale, I really want you to use that play-it-out strategy: run through how each of these would work, figure out the least intrusive way we could do each of them, and then decide which one is the worst. We have lots of bad options here, but which one is the worst? I like it. I can tell the thinking caps are on. This is a difficult question; it takes a lot of thought. Oh, we've got Ally: "tricky, tricky, but I'm sure it's C." I like the confidence. Okay, Dusella is changing to C, with a good rationale for the change. Yeah, Randon, take your time. So again, a lot of these are not great, but which one is worse? In your rationale, don't just tell me why your answer is bad; tell me why it's worse than the other answers. A good rationale is really thorough in that way. Don't just label one reason why something is bad. There are three really bad designs here, so you need to compare them. Which one is worse? Think about not just why one is bad, but why it's worse than the other options. That's really important to hit in your rationale; otherwise, you're going to miss it. If you're not too sure, go use that play-it-out strategy: run through in your head what each of these situations would look like. Danielle, okay. I love all the thinking here, all the rationales coming through. Yeah. Oh, Olivia, I really like where you're going with that. All right, time to nail those final answers. All right, we're going to go ahead and review this one. Thanks for all of your participation. Let's check it out. Our answer is C: the multiple baseline design is the worst experimental design for this situation. Let's go through the rationale using our play-it-out strategy. The first one we can rule out easily is the alternating treatments design. We know alternating treatments designs are our best design for severe or dangerous behavior, because we can implement an intervention immediately and try out different interventions until we find an effective one. So that's our best design; we can easily rule it out. Let's go ahead and play out our other designs.
First, our reversal design. This is going to be a B-A-B-C. What's kind of nice here is that we can do our B phase first, and we'll probably see the behavior decrease pretty quickly. That would be great. The reversal back to baseline is not so ideal, but the reversal might only need to be one data point. We could take that one data point and then get right back into intervention, so they have something effective in place. Still not the most ethical, because we had to take that one data point to show that the reversal in behavior is due to our intervention, but it's only one data point, and then we can try out C. We can say, okay, C is okay, not quite as great, and then we just go back to B. So there's only one data point where they don't have an intervention in place. Not great, because it is severe, dangerous behavior, but not the worst either. So it's not going to be our reversal design. Our changing criterion design, this one gets a little sketchy. We can skip baseline and have our intervention in place immediately, and we're going to gradually change the behavior. Let's say we look for a 50% reduction at first, and the behavior conforms to that. Cool. Then we could do a big jump, from that 50% down to zero, and see if that's effective. If it is, sweet; we could try that pretty fast. We might not even need three data points per phase; we could probably do it in two. Then we move the criterion down lower and see if the behavior decreases, and cool, if it does, we've got an effective intervention in place. That's awesome. So that's what our changing criterion design would look like. Not the most ideal; we'd have to make some pretty big jumps and hope those jumps actually work. But it could possibly decrease the behavior pretty quickly. Now let's look at the multiple baseline design. This was our worst design. We've got our three different dependent variables, which are three dangerous behaviors. And remember, for a multiple baseline design, we need to show verification by extending the baselines for those three different behaviors. So what happens here is that we take a couple of baseline data points (and these would be high, since the behaviors are occurring), then we implement our intervention with the first behavior, and it works. Sweet. But then we need to continue our delayed baseline data collection with our other dependent variables, and that extends baseline for very significant amounts of time. That extended baseline without an intervention is what makes this the worst design. Compared to a B-A-B-C, where we only have that one data point of no intervention, this design requires lots more data points of no intervention. The changing criterion design might be a little awkward, but it could potentially be more effective than our multiple baseline design. Cool. Yeah, the alternating treatments design would be the best option here, because we can skip baseline, go right into an intervention, and then alternate between different interventions until we find one that is effective. Cool beans. If you got that one and nailed the rationale, give yourself a big pat on the back. This is a really difficult question.
But use this play-it-out strategy when you get difficult questions like this, because it gives you a much clearer picture of what is actually happening, and once you have the information about how these designs work, it's much easier to compare and contrast them. If you had this information laid out for you right here, you could easily say, yeah, it's obviously the multiple baseline design. But if you're just looking at the bare terms in front of you, your judgment might be clouded. So use the play-it-out strategy, get the right information, and make comparisons to the other answers. Again, your rationale should include not only why your choice is a good answer, but why it's a better answer than the other answer options. Alrighty. We're going to talk about complex experimental designs real quick. This is essentially when we combine more than one experimental design together. We actually already looked at an example of this: that changing criterion design with a reversal. We're not going to spend a whole lot of time on this, but most experiments, like in JABA, do use complex experimental designs. Essentially, the reason we do it is so we can get the advantages of more than one design at once. This one doesn't have the clear staircase pattern, but it's actually a multiple baseline with a reversal. What we have here is the extended baseline for our second dependent variable, and then we do reversals between our baseline and intervention conditions. Kind of cool. This allows us to test across our different subjects, settings, or behaviors, but it also enhances the experimental control. We saw that the weakness of the multiple baseline design is that it doesn't show very strong experimental control, but if we add that reversal element, it enhances the experimental control very significantly. Most commonly, we're just adding reversals to other designs. Here's another really cool one: they did an alternating treatments design with a reversal, so we can test different intervention strategies within different conditions. In this one, we had instructional method A and instructional method B, but we tested them under different conditions: high density practice, then low density practice, then high density practice again. So essentially what we have here is four different independent variables: method A with high density, method B with high density, method A with low density, and method B with low density. We're testing four different independent variables, but we do it in kind of a neat reversal style. So that's another cool way to combine our experimental designs. All right, who can tell me which independent variable was most effective in this one? Remember, we had four different independent variables, so label both components. Which method was most effective in this chart? Give me a little chart analysis practice. Your answer should include method A or method B, and it should also include high density or low density. Good. Yeah. Nice, Elizabeth and Olivia. Good, Jordan. Good, Paige. Awesome. Good, Sedonna. Yeah, now we're rocking. Good. Yeah, Crystal's got a rationale. Super cool. There you go, Kelly. Perfect. All right. Awesome.
So the best intervention here for increasing correct responses is instructional method B with high density practice. That's our open circles, but only with the high density practice. Method B was still not great with low density practice; neither method was really that great with low density practice. We see that high density practice is really the crucial thing that makes B better, but A still wasn't very good even with high density practice. Cool. Nice job there. So yeah, complex experimental designs are going to take a little more effort to read. This one, what do we have here? This is another withdrawal with alternating treatments, but they don't use a baseline. Okay, wait, what do we actually have here? SR mag. They do alternating treatments; you can see the different data points across sessions, and we have different phases. I don't think they ever use a withdrawal. I think this is just an alternating treatments design with different phases, and they don't actually do a reversal, because there's no baseline. So this is a super weird one. I don't even know why I have it on here. It's an alternating treatments design with different phases. Yep, super confusing. This is actually not a compound design; it's just a really confusing graph. All right, anyway, we'll save that one for later. Actually, not for later; we just won't look at it. All right, let's talk about internal versus external validity real quick. Internal validity essentially refers to how tight an experiment is. There are lots of different things that contribute to internal validity. First is choosing the appropriate design, and we know that some designs are stronger at showing experimental control than others, so that's important too. Next is having relevant participants: if your participant doesn't struggle with the issue you're working on, that weakens your experiment. We also want the design to be timely, meaning we complete it within a short block of time. This controls for maturation. Maturation is when our client matures and just gets better because they're older; essentially, because they've aged, they get better at the skill. If I'm looking at something to increase jumping height and my experiment lasts three years, from age two to five, well, just naturally aging is going to get them to jump higher. We need to consider that. Strong IOA is also important for stronger internal validity, as is strong procedural integrity; procedural integrity is basically how well your procedure is implemented relative to how you described it. And lastly, more generally (this covers a huge breadth of things), there's how well we actually control for confounding variables. There are a million different things that can confound an experiment; the more of those we control, the more we isolate that one independent variable, and the stronger our experiment is. So for internal validity, think of it as everything inside the experiment: how well did we run it? How tight is it? How confident can we be that these results are due to the intervention?
External validity, on the other hand, looks at how widely our results extend. We can look at how widely the results extend to different people or populations. For example, if you did something only with autistic 5-year-olds, would it extend to autistic 10-year-olds? Would it extend to neurotypical 5-year-olds, or neurotypical 10-year-olds? The more people the research is applicable to, the stronger the external validity. It can also extend to different concepts. Let's say we use an extinction procedure and we find a particular result: would this also extend to a punishment procedure? If it does, we have stronger external validity. And it can extend to similar procedures. Say we're doing attention extinction: would this also work with an escape-maintained behavior, using escape extinction? There's some overlap between these, whether it's different concepts or similar procedures, but either way we're asking: will these results extend or apply to those different things or not?

So here's an easy way to remember this: internal validity is looking inside the experiment, at how tight everything is; external validity is looking at the results and how widely they apply. Yeah, we can think about it as generalizability, like the results generalize to other populations or to other concepts or procedures. You're on the right track with that, but it's different from generalization as we usually use it. We usually look at generalization from an individual standpoint, but this is more like generalization of the results. So don't think of it as generalization only; think about it as generalization of the results of the experiment.

Yeah, Katarina said she had a question that asked about two groups and had a choice about maturation versus something similar, but she can't remember the other choice. So, yeah, maturation is specifically your client getting older, and that causing a confound in how confident we can be about the intervention's effectiveness. I would need more information to pin down that exact question, Katarina. Cool, man.

All right, the last thing we're going to talk about is experimental analyses, and I have a couple questions for you here. I realize we haven't done any giveaways yet, but I'm seeing the like counter at a cool 69 right here. Nice. Anyway, we'll give away a couple mock exams once we finish this. I think we've just been so engrossed in experimental design tonight that we forgot; nobody even reminded me. So I'm glad you guys are excited about learning, and I'm excited about how much you've been learning and participating tonight, too. We'll do a nice giveaway at the end of this. [Laughter]

All right, let's talk about experimental analyses. Experimental analyses are essentially different ways to design an experiment, but they don't necessarily relate to any particular design. The first one we're going to look at is a comparative analysis. In a comparative analysis, we're comparing the effects of either two different treatments or two different conditions. So what are the two different ways we can do these comparisons? Does DRA or NCR reduce the aggression more?
That's comparing two different treatments as our two conditions. Or we could look at a single treatment against a baseline condition: does DRA reduce aggression over our baseline condition? Either way would be a comparative analysis, because we're comparing conditions. More often than not, you'll see this as a comparison of two different treatments, but that's not always the case. They used to have the term non-parametric analysis on the task list, which was more about the single intervention versus baseline, but in the sixth edition they just combined it into comparative analysis. So either way is a comparative analysis; don't worry about it too much. A comparative analysis is this versus that: either this intervention versus that intervention, or this intervention condition versus that baseline condition. Either way works.

Next up we have our component analysis. This is the analysis that asks what parts we need; it analyzes how important the components of a treatment package are. A treatment package is essentially when more than one intervention is put in place at the same time. Here's the process for doing a component analysis. Oftentimes we put a treatment package with multiple interventions in place, or sometimes we come into a case where they already have, like, three different things going on, and we say, okay, let's do this analysis and find out what is actually effective, what is working, and what we don't have to do. The first thing we do is remove one of the interventions from the treatment package. If the behavior stays at a similar level, then okay, cool, this component of the treatment package is not necessary, and we can just toss it out; it was not useful. If the behavior does change significantly, then we've identified that, yep, this one is useful, it's necessary to change the behavior the way we wanted, so we're going to put it back in.

Here's a quick example. If both DRA and NCR are in place to reduce aggression, I can remove NCR and note whether aggression increases. If it does increase, then okay, this NCR is necessary, and I'm going to put it back in. If it doesn't increase, then I'm just going to leave it out: okay, this NCR is not necessary, let's stop doing it. The reason we would stop doing it is that clients have the right to effective behavioral treatment, and one of those rights is the right to the least restrictive intervention possible. Think about it: if we're adding NCR, that's another intervention, another restriction required to get this behavior down. So if we can eliminate our need to intervene, then we should; let's reduce the restrictions. That's pulled from the Van Houten et al. article, and I would say that article is a must-read for ethics and for the BCBA exam. I'm going to see if I can find it real quick and post a link for y'all. But essentially, if we don't need an intervention, then we shouldn't keep it in place, so we're going to remove it. There's that article if you want to check it out. It's a quick read, and I'd say a must-read to nail down ethics for the BCBA exam, so make sure you're reading that one.
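If it helps to see that remove-one-and-check logic written out, here's a minimal Python sketch. The measurement function and the change threshold are hypothetical stand-ins; in a real component analysis you'd be running actual sessions and judging the change from graphed data.

```python
# Hypothetical component analysis of a two-part treatment package.
package = ["DRA", "NCR"]

def aggression_per_hour(components):
    # Stand-in for running sessions and measuring behavior;
    # here we pretend DRA is doing the work and NCR is not.
    rate = 1.5
    if "DRA" not in components:
        rate += 6.0
    return rate

full_package_rate = aggression_per_hour(package)
for component in package:
    without = [c for c in package if c != component]
    if aggression_per_hour(without) - full_package_rate > 1.0:
        print(f"Aggression jumps without {component}: necessary, put it back in.")
    else:
        print(f"No real change without {component}: drop it (least restrictive).")
```

Running this prints that DRA is necessary and NCR can be dropped, which mirrors the DRA/NCR example above.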
We could potentially repeat this with the DRA, too. I might say, hey, this DRA is necessary, and since it's the reinforcement-based piece, we'd probably want to just keep it in. If we thought we could be successful without it, we could try removing it, but we probably want the DRA in anyway. I just posted a link to that article if you want to check it out. It's Van Houten et al., "The Right to Effective Behavioral Treatment." Super good article, and it's four pages long, so it's not going to drain your whole Saturday. Make sure you read it and become familiar with these rights, because they are on the exam, and it's just a great framework for ethics anyway.

All right, our last one is the parametric analysis. With this analysis, we're asking: how much of this do we need? We're looking at a single intervention and how much of that intervention we need, so we're comparing different values of that intervention. We can also think about it as the dosage of one treatment. Quick example: what happens to aggression with an NCR 60-second schedule versus an NCR 120-second schedule? This is the same exact intervention, NCR; we're just looking at different values of it, 60 seconds versus 120 seconds, and finding out which one is more effective at decreasing the behavior. So that's our parametric analysis: we're taking the same intervention and just comparing different dosages.

Yeah, I had a question about non-parametric analysis. Non-parametric analysis is the single intervention versus baseline. I believe they took it off the sixth edition, though, and just combined it into comparative analysis. So don't worry about that one too much; I think they just folded it into comparative.
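Before the practice question, here's one more Python sketch, this time of the parametric analysis idea: same intervention, different dosage values. The aggression counts under each NCR schedule are made up for illustration.

```python
# Hypothetical parametric analysis: same intervention (NCR),
# different values of the schedule parameter.
aggression_by_schedule = {
    "NCR 60 s":  [3, 2, 2, 1],   # aggression counts per session
    "NCR 120 s": [6, 5, 6, 5],
}

def mean(values):
    return sum(values) / len(values)

for schedule, counts in aggression_by_schedule.items():
    print(schedule, "mean aggression:", mean(counts))

best = min(aggression_by_schedule, key=lambda s: mean(aggression_by_schedule[s]))
print("More effective dosage:", best)  # -> 'NCR 60 s'
```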
All right, I've got a couple questions here for you, and then we'll do our giveaway to wrap things up. A behavior analyst is studying the effectiveness of a comprehensive behavioral intervention package designed to reduce disruptive behaviors in the classroom; it includes a token economy system, a response cost procedure, and scheduled teacher praise. What type of experimental analysis should be used to confirm whether or not the intervention package is effective? Let's see our answers. Let's see some rationale here. Make sure you're reading carefully. Hint hint: read carefully. Provide a rationale; it's always necessary, promise. Seeing a lot of answers, not a whole lot of rationale here. Paige, is that what they asked about? All right, going to give you guys maybe 20 more seconds on this one. I'm liking the rationales coming through. Paige is switching her answer; she's got a better rationale. Alrighty. Uh-oh. Looks like I'm going to crush a lot of people's days with this one. Let's check it out.

The question was asking what type of experimental analysis should be used to confirm whether or not the intervention package is effective. The best way to do this is B, the comparative analysis. This is looking at our treatment package versus a baseline condition: whether or not the intervention package is effective at all. We were not looking at which components are most necessary here; that would be a component analysis. In this one, we're just looking at whether the treatment package is better than baseline: that this-versus-that comparison. This one took a lot of careful reading, so a big pat on your back if you got it.

This one is a mean question, but again, it's there to emphasize how important careful reading is: looking at what the question actually asks about. This one is specifically asking whether the whole package is effective over baseline, and remember, that would be a comparative analysis. A component analysis would be taking one component out at a time: is each component necessary for this change in behavior or not? But that's not what we're doing here. So yeah, kind of a mean question, and it was kind of designed to throw you off. But hopefully it emphasizes that careful reading is necessary, and that just because you see certain details in a question, like the treatment package here, that doesn't mean the question is asking about the different components of it. We can still do a comparative analysis with a treatment package. So again, that really requires careful reading for sure. I know it's a tough lesson, but it's better to mess up with me in class than to mess up on the real exam. So hopefully this lesson sticks if you didn't get that one, and I'll try to make the boo-boo a little bit better with a giveaway right now.

All right, that's going to wrap up tonight's session for us, but we're going to do a little giveaway to end the stream on a higher note. If you haven't yet, make sure you're checking out our stuff on understandingbehavior.shop. We've got the best mock exams that are super affordable and come with great rationales. If you want to spend 16 hours with me on video, check out our Behavior Beast Slayer course. It comes with videos split up into 5-to-20-minute segments, easy to navigate, so you can get right to whatever you want to study. It also comes with a bunch of cool graphics; we're actually going to put more infographics in for you guys within the next week. The videos are also interactive, so they have questions built into them that you can respond to. And it really works. We've had people that hadn't passed after seven attempts, people that hadn't passed after ten attempts, and they used the Beast Slayer course and passed. So check it out; it works. Yeah, a lot of people in chat saying they love it and it works, too. Good, good, good. I'm so glad you guys are loving it. So yeah, check it out if you're looking for some thorough study content.

And yeah, we'll do a nice little giveaway tonight, too. We'll pick two winners, and, excuse me, super burpy, they get to choose whichever product they want; we'll give them five choices. So you can get any of the Beat the Beast mocks: Beat the Beast one, two, or three. You can get the fluency package, or you can get our mini mocks. The mini mocks come with our toughest questions, split up into 25-question quizzes, and yeah, it's really great. You can also try it for free. So we'll pick two winners, and if you're a winner, you can choose any of those five products. Let's go ahead; I'm going to pull up a random number generator. How big should we make our range tonight? Let's do 1 to 750. Sounds good. So yeah, you can enter a number now.
You get one guess per person, and the two closest people are going to win their choice of product.

We have a question: realistically, how long does it take to go through the whole course? The whole course takes, I would say, about 25 to 40 hours; that's a good estimate. The information in the course is pretty dense, so there's a lot of stuff you're going to want to review multiple times. Here are the whole contents of the course. Video-wise, it's just over 16 hours, and we're actually going to add some more videos early next week for you guys too, so it'll be around 16 to 17 hours of video. There are 65 or more quizzes; actually, I should probably get an exact count, I don't know how many there are, but there are at least 65 quizzes and activities for you to engage with. They're broken up by section. It gives you everything you need; the information is precisely what you need. There are some good supporting articles, and there are infographics. And yeah, you get all of that for $325. You can use the discount code UB nerd to get 10% off. And you'll love it. I promise. It's a really great course.

For the Beat the Beast mocks, you get one attempt at each. For the fluency questions, you get to take as many quizzes as you want, and the quizzes will switch up every time you take them. And for the mini mocks, you get two attempts on each quiz. Yeah, the code UB nerd will get you 10% off. That works on our courses, and it also works on our bundles. If you want the course and some mock products too, our complete beast mode bundle comes with everything. It comes with $614 worth of products, you can get it for $475, and you can use our 10% discount code on top of that, too. So it's a really good deal on all of that stuff, and it's not much more than paying for another exam fee. So just avoid that fee and get the stuff.

All right, cool. We're going to pick our number. Let's see: 256. All right, help me out here, crew. We have two winners, so the two closest to 256. Let's see what we got. I see 289 currently; that's not too bad. I see 280; that's even closer. That's Claudia. Okay, so that's like 24 off. Oh, we had Samira at 258. Samira is definitely going to be one of our winners. And then we had Natalie at 253. Oh, Amy was 259. Oh my gosh. Natalie's 253 is only three off, and Samira's 258 was only two off. Oh, wait, we had both 253 and 259. Oh, man, they're both three off: Amy and Natalie. Uh-oh. I think we have a roll-off here, crew. I think we've got a roll-off. All righty. So, Samira is definitely our winner. You know what? I'm not going to do a roll-off; we'll be really nice and give Amy and Natalie both a win as well. So, shoot me an email at understanding behavior
[email protected]. I really appreciate all of your amazing participation tonight, everybody. We'll give it to all three of our winners tonight. So, Samira, Natalie, and Amy, I almost threw you off with the roll-off, huh? It's late, and we've been on stream forever, so we'll just give our stuff away, because it's fun. All right. Congrats to our three winners tonight. You guys rule. Thanks so much for all of your amazing participation. Check out our shop for all the great stuff, and sign up for our email list so you'll get that error analysis sent to you on Monday. I hope you guys have a really lovely weekend. If you're testing soon, best of luck; I hope you go crush it. Keep it cool, keep it confident, and read the questions. Read the questions. Use your strategies: compare and contrast, play it out, read the questions. You're going to be great, I promise. All right, thanks so much, everybody. Have a great night, and I'll see you next time. Bye-bye.