- [Instructor] Alright,
welcome back to ABA exam review and a very exciting day
as today we are going to begin our review of our 6th Edition BCBA Exam Study Guide. We're gonna go through each
task list item one by one. This video is going to
cover A, B, C, and D. And then the subsequent videos, we're going to cover the
rest of the task list. So with the change to the 6th Edition, the content is mostly the same. In a lot of ways, it's
actually been simplified. What's changed, however,
is some of the formatting, some of the wording, and some of the locations of certain items. But if you are already
familiar with the content or if you're just getting
into it, it's okay. Don't be too worried about
the change from the 5th to the 6th Edition. What we're gonna do is we're
gonna simplify it as much as possible and try to explain
it in a way that's going to help you increase your fluency as fluency is the most
important thing for your exam. So without further ado, let's jump right into our
6th Edition Study Guide. Now starting with A, we're
gonna start with behaviorism and philosophical foundations. These are exactly what they sound like, the foundations upon which our science is built. So if we start with A-1, the
goals of behavior analysis as a science, that includes description, prediction, and control. And our goal of behavior analysis is to describe events that happen, try to find out why they're happening and then control those events
for increase or reduction. So if we start with description, descriptions are simply facts
about an event or a behavior. You're simply describing what happened. You're not making predictions,
you're not hypothesizing, you're not experimenting, you're just describing what happened. So what does the behavior look like? What were the antecedents? What were the consequences? For example, let's say
you go out one night, and you tell your friend about the night. All you're doing is stating facts. Description is a statement of facts. We then move to prediction
where we're taking some of those facts and some
of those observations and trying to hypothesize what is causing those things to occur. So now we're drawing correlations. So correlations between two events. So after repeated observations,
we're trying to determine what two events are correlated. We're not yet experimenting here, but we are making hypotheses
about why things may happen. For example, if you present
your client with a task demand, they will attempt to elope from the room. That is your prediction. How are you going to determine that? Well, we are going to try to
go to our last goal of control. And control is the experiment phase where we are actually manipulating and introducing variables
to try to control the behavior of interest. Once we have that
control over the behavior and we've established
a functional relation, we've achieved our
highest goal of control. So for example, reinforcement
is a very common independent variable that is introduced. And then if we can
reliably increase behavior with reinforcement, we have control. We've established that
functional relationship. A-2, philosophical assumptions
underlying the science of behavior analysis. With our assumptions, or attitudes, this is how we believe the world works. And on the exam you wanna
be very precise about what assumption you are choosing as far as answer choices go. Let's start with selectionism. Selectionism is just the basic idea of how behaviors persist. Behaviors are chosen based on
environment and consequences. The first type of
selectionism is phylogeny or the phylogenic selection. This is natural selection. This is the evolution of
our species over time. Us as humans, animals,
plants all develop traits and behaviors that help us survive. The traits and behaviors that don't help us survive
are eventually weeded out thanks to consequences. Ontogenic selection, or ontogeny, is where we typically work, because this is the individual's learning history. So if you think of phylogenic selection as selection over long periods of time, decades and centuries, versus ontogeny, which is each individual's learning history, that's how you're going to discriminate between those two ideas. And then third is cultural selection, where behavior is
passed based on imitation and modeling. So a mom passes down to a son or people in a certain tribe
pass behaviors to one another. Determinism says the universe
is lawful and orderly. Things don't happen accidentally; things don't happen for no reason. Whenever a parent says they do such and such behavior and there's no explanation why, that is violating our deterministic nature. Being deterministic leads us to not taking the easy way out when trying to figure out why behavior happens. There's always an explanation
for why behavior occurs. Empiricism, being empirical. When we talk about
using empirical studies, empirical data, that means data or studies that are proven based on objective observation
and data collection. When you think empiricism,
the main thing you want to think about is observation. Are you observing the event
from an objective standpoint and measuring that event using data? Being empirical is crucial
for us making decisions. When we do assessments, we
do have indirect assessments. We should never make decisions
without an empirical, direct observation. Continuing with parsimony. Parsimony is something I preach when I'm training technicians. Too often we can get caught up in trying to figure out just a very
convoluted and complicated explanation for why things happen. Parsimony says we want to do the opposite. We want the simplest and most
logical explanation first. Always rule out the
simplest explanations first because if you're right and it is the simple explanation, you saved yourself a ton
of time and resources. You wanna be parsimonious,
you wanna be simple. Pragmatism. Pragmatism is making choices based on your anticipated outcomes. You're being practical
about the choices you make, and you're making those choices
based on prior knowledge of the choices and the outcomes and what you think is going
to benefit someone the most. Just like empiricism, being
pragmatic is being objective, and being pragmatic just
means you're outcome driven. We are making decisions
based on results both past and what we anticipate
those results to be. And then finally, philosophical doubt. Philosophical doubt just says
we are questioning everything. Our science of behavior,
what we all agree on as the science of behavior, your
own intervention results, the results of others. So when you go to conferences and people are presenting their studies, you want to just engage in doubt in a way where you're continuing to question the validity of these findings. Replication is huge to get
rid of some of the doubt, but doubt keeps you growing,
it keeps you curious, and it keeps you from
just becoming dogmatic or married to one idea or one theory. So we're always engaging
in philosophical doubt. A-3, explain behavior from the perspective of radical behaviorism. Radical behaviorism is what B.F. Skinner coined. This is his idea. It followed methodological behaviorism, and it adheres to the S-R-S contingency, this three-term contingency
where we have our antecedent, our behavior, and our consequence. Even more important than
that though is this idea of private internal events. Private events such as emotions, thoughts and feelings are
internal, unobservable events. What is the only difference between private and public events? The observability. Private events are influenced by behavioral ideas and techniques and interventions, just
the same as public events. The only reason I say we do
not use private events in ABA is because they're very hard to consistently observe and measure. But with radical behaviorism, we do take private events into consideration when analyzing behavior. They need to be part of our analysis so we're not neglecting the emotions, thoughts, and feelings of our clients, but we are trying to
work within public events because we can routinely
observe and measure those. Then you have this idea of mentalism, which are these hypothetical
constructs which we use or I should say the
general population uses, to explain behavior. And the issue with these mentalisms is we get away from environmental explanations, and we just start talking about states of being. So a big one is, let's say, ego. Ego is just this state of being. It's just a construct. It doesn't mean anything relative to the environment. You have a big ego, you don't have an ego, it doesn't mean anything. It's very nebulous. When we use ego as the cause of a behavior, that becomes an explanatory fiction, which we want to avoid. With circular reasoning,
we're using faulty logic, and the most common type, one we want to avoid at all costs, is blaming the diagnosis, let's say the autism
diagnosis for the behavior. So if you say they engage in
this behavior because of autism and then you say they have autism, therefore they engage in
the behavior or the cause is the effect and the effect is the cause. And we've gotten no closer to figuring out a
solution for the behavior. A-4, distinguish among behaviorism, the experimental analysis of behavior, applied behavior analysis and professional practice guided by the science of behavior analysis. These are quote unquote our
four branches of behavior analysis. Very easy to discriminate
between these four and understand the difference. Let's start with behaviorism 'cause we're gonna group
that as its own thing. Behaviorism is a guiding philosophy. When we talk about radical
behaviorism, that's a philosophy, it's an idea, it's a guiding theory on what behavior is and isn't. Now behaviorism is kind of an umbrella 'cause behaviorism isn't an
experiment, it isn't practice, it's just a theory. When we think of experimental
analysis of behavior and applied behavior analysis, this is our research. Now we're actually going
in and experimenting. The main difference between
experimental analysis of behavior and applied behavior
analysis is this applied concept where we are doing
studies in applied settings, offices, clinics, schools, on human subjects. With experimental analysis of behavior, we are typically doing the experiments with animals in very controlled, non-applied settings. Think of Skinner when you think of experimental
analysis of behavior. What did he do with his pigeons? Think of yourself
implementing treatment designs and experiments when you think of ABA. That is the main difference. Controlled with animals over here and then more human based and in applied natural settings for ABA. So where does that leave practice guided by behavior analysis? Well, when we take all
this research we've done, and we write a treatment plan and our technicians implement
that treatment plan, that is practice guided
by behavior analysis. All the results of our studies
we put into use with humans. A-5, identify and describe dimensions of applied behavior analysis. So our dimensions are our guiding lights. Everything we do should be
guided by these dimensions. They've been around pretty much since the creation of applied behavior analysis. We have seven dimensions in total; however you choose to remember them is your choice. Some people use BAT CAGE to remember them. All we want to be sure of is that we are fluent in each one of these, just like our assumptions, 'cause you're gonna have to be very specific on the exam. Let's start with applied. When we talk about the applied in applied behavior analysis, what are we talking about? We're talking about making positive and socially significant
change in a human's life. We want change that is meaningful. So when we talk about applied,
we're talking about what kind of goals are we setting? What kind of goals are
we looking to achieve? Are they meaningful to our clients? Analytic. Analytic goes back to control. Do we have control over the behavior? Are we establishing a
functional relationship between our intervention and
the behavior of interest? When we are analytic and
when we control the behavior, we can change the behavior. Behavioral, the obvious one,
behavior must be observable and must be measurable. We want to choose behaviors we can define and that we can consistently
observe and measure. That one's pretty straightforward. Conceptually systematic. Also pretty straightforward. You want to be consistent
with behavior principles. This means if you're an ABA practitioner and your expertise is ABA, we don't wanna start using
let's say cognitive behavior therapy or talk therapy or occupational therapy
ideas in our interventions because we are ABA providers. Now, if you are qualified
in multiple areas, that's a different story. But we still have to distinguish
between our ABA practice and our other practices, and we still need to be
conceptually systematic in all we do when targeting behaviors. Effective. Effective means we are making a significant and important level of change to behaviors. Effective means different
things for different people. Going from one to two for
some clients isn't a big deal. Going from one to two for let's say a highly
impacted client could be an extremely big deal. So effective is relative to the needs of our clients. What's the difference between
applied and effective? Well, we can be applied, we
can set out to make changes that are positive and
socially significant. We can set goals and target behaviors that are socially significant. Effective means we're
actually achieving our goals. We're actually making the change, not just setting out to make the change. Generality. The target behavior should
change not only in the learning environment, but outside
of the learning environment as well. It's exactly what it sounds like. This is all about generalization. Generalization is incredibly important when dealing with human behavior. If we're only teaching to the point where our clients can do
the behavior in the learning environment, we're not doing enough. Generalization has to occur. With technological, think replication. Can your intervention
be replicated by others? You wanna write and design
interventions that are replicable because we always want to expand the available technologies in our field. On to B, concepts and principles. These are our basic
concepts, basic principles. When we talk about being
conceptually systematic, we wanna adhere to these
concepts and principles. So let's start with B-1. What identify and distinguish
among behavior response and response class? What is the difference
between these three things? Because it seems very
overly specific, right? And in a lot of ways it is. But on the exam we wanna
be as precise as possible 'cause even though we often use behavior and response interchangeably,
there's a difference. A behavior is anything an organism does. So anything an organism
does, we can classify it as a behavior. We can define that behavior, and we can make a goal out of a behavior. Behaviors are actions, right? So we always wanna be active, okay? We want the organism doing something. When we think of a response, that's a single instance of a behavior. So if we consider, let's say, addition: doing addition problems is the behavior, and the response is answering four in response to two plus two. Doing addition is the overall behavior. Each time an addition problem
is done is a single instance of a behavior and that's
the true difference in a behavior and a response. Now a response class is simply a group or set of responses that
serve the same function or have the same impact
on the environment. So go back to our math example: doing math is the behavior. Answering four is the response. Then writing, saying, or showing four in response to two plus two is a response class. These are all responses that
serve the same function. They're all part of the same class. Quickly, when we think of pivotal behaviors and behavior cusps, don't get too overwhelmed or confused by these. They can be very difficult to distinguish between. Pivotal behaviors are behaviors that lead to new untrained behaviors, so functional communication training and joint attention. Think of these as almost pre-learner skills. Behavior cusps are behaviors
that allow learners to contact new reinforcers
or parts of the environment. So reading and learning
to use transportation often rely on pivotal behaviors,
but they're more complex and lead to more access
to the environment. B-2, identify and
distinguish between stimulus and stimulus class. So just like we have our
responses, we have our stimulus and remember our contingency:
stimulus, response, stimulus. So now we're focused on the
antecedents and consequences. A stimulus is a change in the environment, and they evoke a functional reaction. So a stimulus evokes a response. The entire class is talking loudly until the teacher walks in the class. What is the stimulus here? Well, the teacher walks in the class. The response is the class
stops talking loudly. Any change in the
environment is a stimulus. A stimulus class, just like
a response class is a group or set of stimuli that shares
similar characteristics, and there's different types of classes. We have our topographical class, or physical/formal/feature: stimuli that look and sound alike, so red objects. Vegetables often look alike or have a similar topography. Functional: stimuli that affect behavior the same way. Different music makes you dance; stop signs, red lights, and someone saying stop are all stimuli that make you stop. They all have the same function. And you can see that
saying stop are all stimuli that make you stop. They're all having the same function. And you can see that
stimulus classes can have multiple types of descriptions, right? You can have a physical and
functional stimulus class. Temporal class. When the stimulus occurs,
are the stimuli antecedents or are they consequences? And then arbitraries antecedent,
stimuli evoke the same response but do not resemble each other. Kit Kats and Dr. Pepper
don't have similarities, but they evoke the response. They contain sugar. These are arbitrary stimuli, but part of a stimulus class, they're having the same
effect on the response. And then probing. Probing is just asking a client to perform a task to
assess whether they can perform the task. This is relevant to our
stimuli and responses because probing is going
to act as a stimulus where we're probing
out a desired response. B-3, identify and distinguish
between respondent and operant conditioning. Our main focus is operant conditioning. Let's not forget that, but you still need to be aware
of classical conditioning or respondent conditioning. How are we going to work
through respondent conditioning? You're going to identify
where the stimulus are, the stimuli are and where the reflexes are because with a conditioned or an unconditioned stimulus
in respondent conditioning is going to elicit a reflex. For example, you're reading
a magazine which is neutral. You hear a loud bang,
which is unconditioned and unconditioned stimulus. It makes your heart rate
increase in unconditioned reflex, which develops into a condition reflex and a condition stimulus. That is the main idea behind
respondent conditioning. This stimulus response
contingency where a stimulus elicits a reflex, and we pair those stimuli
to elicit new reflexes or condition reflexes in
the presence of new stimuli. What we're more concerned
with is operant conditioning. And the primary difference
is consequences. Operant conditioning
is based on consequences. Consequences affect the future probability of behavior occurring or not occurring. That's key, right? We are worried about future behavior. So reinforcement and punishment
are the primary ones. We can also undo right
with operant extinction, which we'll go over later. But this is the stimulus
response, stimulus contingency, and we are evoking a response. Operant conditioning is what
we mainly deal with in ABA. B-4, identify and
distinguish between positive and negative reinforcement contingencies. When we think reinforcement,
what are we thinking? We are thinking about behavior increasing. Remember, reinforcement
increases behavior, right? So anytime you are wondering
if you have a punishment, or a reinforcement or even extinction, you
have to ask yourself, is the behavior increasing or decreasing? If that behavior's increasing,
it is reinforcement. Punishment decreases behavior. Positive reinforcement, a stimulus presented following a response or behavior that will increase
or maintain that response. So we know it's gonna increase, and it is presented, it's positive. Negative means a stimulus is
removed following a response. Reinforcement means it
will increase our response. How do we start to work through these reinforcement
contingencies? Well, we establish if-then statements. If you complete your homework,
then you get a reward. The behavior is completing homework. The reward is hopefully reinforcement. If it acts as reinforcement,
homework is going to increase in the future. There's this idea of automaticity where the behavior is
modified by consequences whether the person is
aware of the consequence or not. Meaning you don't have to even know that a consequence is happening, that reinforcement is
happening or not happening in order to be reinforced by it. That's a very important idea, especially with self-management. Or let's say you have a parent who thinks their child won't
be aware of the reinforcement. It doesn't matter. If it's reinforcing, it
will change the behavior. B-5, identify and
distinguish between positive and negative punishment contingencies. The new task list was nice enough to have a better flow straight from reinforcement to punishment. What is the difference? Punishment decreases. Remember, we're not worried
about the topography when deciding between
reinforcement and punishment. It is the effect on future behavior. So whenever someone tells
you timeout is punishing or candy is reinforcing, you say, how does it affect the behavior? Positive is still a stimulus added. Negative is still a stimulus removed. A contingency is still
an if-then statement. Very core ideas, understanding
reinforcement increases, punishment decreases. Positive is is added, negative is removed. B-6, identify and distinguish
between automatic and socially mediated contingencies. What do we mean by socially mediated? We mean another person is involved. And when we are in the practice of ABA, most of what we do is
socially mediated, right? Because we are delivering
the consequences. When we talk about automatic,
another person isn't required. Consequences are produced without needing another individual. There's a specific function,
automatic behavior: think stimming, engaging in
self stimulatory behavior. Think scratching an itch. The main difference here is the social component. There's a second person or third person or however many people involved
when it's socially mediated. When there's no other person involved and the consequence is
automatic, think alone. B-7, identify and distinguish
among unconditioned, conditioned and generalized reinforcers. Let's start with
unconditioned reinforcement. Unconditioned reinforcers
are primary reinforcers with no learning history. Food, water, sleep,
sexual activity, warmth, all these things people
need without conditioning. You need these from the beginning. These are natural human traits. Condition reinforcement or
when we take a neutral stimuli or a stimuli with no reinforcing
properties and we pair it and it becomes reinforcing. The two most common are
token boards and money. But also think of when you have
an edible reinforcer, right? Let's say candy. Candy, which is food, or food reinforcement, can be very reinforcing. How are we gonna create a
conditioned reinforcer? We're going to present the food and the, let's say, token at the same time. In other words, we are pairing the unconditioned reinforcement
with the neutral stimuli and eventually that
neutral stimuli is going to become conditioned. That's how tokens get their value. We start trading tokens for reinforcers and over a period of time, those tokens become
conditioned through pairing. Then generalized reinforcer. Now social praise and attention
can also be conditioned. But when we talk about
generalized reinforcement or generalized reinforcers,
these are reinforcers that have impaired with other reinforcers. It can be used in a variety of context. Tokens are also a great
generalized conditioned reinforcer. These are just things that
are easy to transport. They're easy to use
for multiple behaviors, multiple settings. And it doesn't have to be
specific to one setting or one behavior. B-8, identify and distinguish
among unconditioned, conditioned and generalized punishers. Again, making it much easier on you 'cause we go straight from
reinforcement to punishers. Ideas don't change, right? Only the difference is what? Reinforcement increases,
punishment decreases. Unconditioned means the same thing. Primary punishers, no
learning history needed: pain, excessive heat, electric
shock, excessive cold. These are things that don't
need to be conditioned. Conditioned or secondary punishers, again, are neutral stimuli that become punishers through learning. Timeout and reprimands are paired with unconditioned punishers and eventually develop
punishing properties. And then a generalized punisher, just like a generalized reinforcer that can be used in a variety of context for a variety of behaviors. B-9, identify and distinguish
among simple schedules of reinforcement. Yes, we have to know simple
schedules as well as compound. When you think of
continuous reinforcement, you think of FR1. FR1 is the only continuous schedule. On a continuous schedule,
reinforcement is provided for every occurrence of the behavior. Every single one. FR1 is the only continuous
reinforcement schedule. Everything else is considered intermittent because when we talk about
intermittent reinforcement, we mean reinforcement that is not delivered every single time. There's going to be some sort of gap. How do we best distinguish between basic schedules? We'll break it down, right? You have fixed versus a variable schedule, and then you have a
ratio versus an interval. So when we talk about
fixed, what do we mean? We mean unchanging. So if you look at a fixed schedule here, we have a set number and
a set amount of time. When we talk about variable, we're talking about a
changing number or an average. Now when we talk about ratios, we're talking about responses. When we talk about interval,
we talk about time. If you ever get lost on simple
schedules, break them down. Am I changing my reinforcement? Is my schedule changing
or is it not changing? And am I reinforcing based on responses or after a certain amount of time?
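To make the fixed versus variable and ratio versus interval split concrete, here is a minimal Python sketch. It's only an illustration: the function name is made up, and the variable schedules are approximated by redrawing a requirement around the average each time we check.

```python
import random

def reinforcement_due(schedule, value, responses, seconds):
    """Return True if reinforcement is due under a simple schedule.

    schedule  -- "FR", "VR", "FI", or "VI"
    value     -- the requirement: number of responses (ratio) or seconds (interval)
    responses -- responses counted since the last reinforcer
    seconds   -- seconds elapsed since the last reinforcer
    """
    if schedule == "FR":   # fixed ratio: a set number of responses
        return responses >= value
    if schedule == "VR":   # variable ratio: a response count varying around an average
        return responses >= random.randint(1, 2 * value - 1)
    if schedule == "FI":   # fixed interval: first response after a set amount of time
        return seconds >= value and responses >= 1
    if schedule == "VI":   # variable interval: first response after a varying amount of time
        return seconds >= random.uniform(0, 2 * value) and responses >= 1
    raise ValueError(f"unknown schedule: {schedule}")

# FR1 is continuous reinforcement: every single response meets the requirement.
print(reinforcement_due("FR", 1, responses=1, seconds=0))    # True
# FI60: a response before 60 seconds have passed is not reinforced.
print(reinforcement_due("FI", 60, responses=3, seconds=45))  # False
```

Either way, the question to ask is the same one as above: is the requirement a count of responses or an amount of time, and is that requirement fixed or varying around an average?

B-10, identify and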
distinguish among concurrent, multiple, mixed, and chained
schedules of reinforcement. They actually simplified these as well; we used to have two additional schedules, but we're only gonna focus on what they have on our list here. Now, complex schedules. Complex schedules are two or more basic schedules
operating at the same time. That's why they're called complex. Let's start with concurrent. Concurrent is choice. Why is it choice? Because we have two or
more schedules for two or more behaviors
operating at the same time. So behavior one has one reinforcement and the other behavior has
the other reinforcement. And you can choose what
behavior to engage in. How is that different from
the rest of our schedules? Well the rest of the schedules, we're not choosing the behavior. Concurrent is unique 'cause it
has to do with matching law. It has to do with choice. Where we are trying to
produce the quickest or best reinforcement
by engaging in behavior that matches the reinforcement. So behavior that gets FR1
continuously reinforced is going to happen according
to matching law five times as much as an FR5. Behavior is going to happen proportionately. Let's look at the
remaining compound schedules or complex schedules. We have multiple schedules
and mixed schedules. They have the same type
except multiple has an SD; mixed has no SD. So we have one or more behaviors with two or more basic schedules in
an alternating sequence. For example, ED receives
a break after FR3. He receives a break after VI10. What is the SD for FR3? The worksheet. What is the SD for VI10? It's cleaning. Mixed. We have one or more behaviors with no SD signaling a schedule. So reinforcement for doing
math problems can occur on a VR3 or VI4. When you think multiple
mixed, think alternating, often random. And then chained where we have two more
basic schedule requirements that occur in a row. For example, we sprint for 30 seconds, we walk for 90 seconds,
we receive reinforcement. It has to be done in a certain order to contact the reinforcement. Don't get intimidated by these compound and complex schedules. Okay? Just familiarize yourself with the idea that we have two more
schedules operating on behavior, and with multiple and mixed, the multiple has
an SD while the mixed does not. Chained is a schedule where our basic schedules need
to happen in a certain order, just like task chains. Concurrent schedules
have to do with choice and matching law. B-11, identify and
distinguish between operant and respondent extinction
as operations and processes. Now this is the first task list item that I think the wording is
just needlessly confusing. Just keep it simple. What's the difference between operant and respondent extinction? Operant extinction,
withholding a consequence from a previously reinforced behavior. This is the extinction we
are typically going to do, and the extinction we're gonna talk about. Respondent extinction is
unpairing a conditioned stimulus from the stimulus it was
previously paired with. So for respondent extinction,
if pepper makes you sneeze, so let's say pepper is the
unconditioned stimulus, and sneezing is the unconditioned reflex. We pair pepper with the Corvette. The Corvette becomes conditioned. Now we need to present the
Corvette without the pepper to undo the conditioning. Operant extinction is the one
we are most concerned with because this is the one
we're most familiar with and the one we're going
to be using in practice. In operant extinction, we
are withholding reinforcement for a previously reinforced behavior. The whole idea is discontinuing or withholding reinforcement. That's what separates
extinction from punishment. With punishment, we're
adding or removing something. With extinction, we are
just withholding it. We're not giving it anymore. Side effects of extinction. We have what we would
call an extinction burst. So if we have extinction
and our behavior is here and we start extinction, we anticipate behavior
to increase at first. That is our extinction burst, a predictable temporary increase. Then we'll hit a point
and behavior will go down and hopefully go extinct. Now once a behavior is extinct, we can have what's called spontaneous recovery, where it slightly recovers. It's a sudden reemergence of
a previously extinct behavior. Pretty straightforward, pretty simple. Just like if you are an RBT, it's the same thing you would do, the same knowledge you
have for extinction. That doesn't change. One thing added as a BCBA
is this idea of resurgence, right? And resurgence is very similar
to spontaneous recovery because an extinct behavior comes back. But with resurgence it comes back because the replacement
behavior was put on extinction. In other words, both
behaviors are on extinction. The old and the new. With spontaneous recovery, the previous behavior
comes back outta nowhere. With resurgence, we can
blame it on the fact that the replacement behavior
was put on extinction. So for example, if we
taught a child who would cry for attention to ask for attention, and we put crying on extinction
and reinforced asking, and then after a while they
only asked and didn't cry, but we stopped reinforcing asking and the crying came back,
that would be resurgence. Finally, response blocking
is not an effective means of extinction. Why? With response blocking,
we're doing exactly what it says, we're preventing
the response from occurring. But if you think about extinction, we need to withhold
reinforcement for a response. If we block the response, we
can't withhold reinforcement. So typically response
blocking is not going to be an effective way to
put something on extinction. B-12, identify examples
of stimulus control. Stimulus control occurs when behaviors or responses occur more often or less often in the
presence of a stimulus. In other words, if a stimulus is present, is a behavior occurring
more or less often? If so, we could say it
has stimulus control, and we want stimulus control typically, so we have control over the behavior. When we talk about things like
having stimulus control over our client, when we walk in, hopefully the client's behavior can change and adjust based on our presence. Another example, when your
college friend comes in town, you tend to drink and party more. That behavior of drinking and party more is under stimulus control of your college friend. When you see a red light, you stop. When you see a green
light, you accelerate. The green light has stimulus
control over the response. Why? How does it occur? 'Cause the green light
is an SD for the response because the green light
signals reinforcement is available or going. So we create stimulus control by reinforcing in the
presence of a response. B-13, identify examples of
stimulus discrimination. Don't overcomplicate
stimulus discrimination. Stimulus discrimination,
simply identifying the difference between stimuli. If I have a square and a circle, if I can tell the difference between the square and the circle,
that's discrimination. Very straightforward. Differential reinforcement
leads to discrimination. How? Well, because you are
reinforcing a target, putting the other behavior on extinction. If I tell you point to circle and you point to square, I put
that behavior on extinction. I only reinforce when you point to circle. Eventually what's going to happen? Well, when I say point to
circle, you're going to be able to discriminate between these two objects. Differential reinforcement
leads to discrimination, and we'll talk about more
complex discrimination later on. B-14, identify and
distinguish between stimulus and response generalization. One of the trickier ideas, I think, in B-14 is stimulus
and response generalization. Let's again refresh our ideas or refresh ourselves on
what generalization is. Remember generalization and
generality is one of our key dimensions, and
generalization can occur across settings, people,
materials, behaviors, time. Generalization is so important. It does us no good if we can
only produce the behavior under very certain conditions. We just talked about stimulus control. We don't want behavior to be under such tight stimulus control that it only occurs in very select places or in front of people or settings. Very typically you see in
young kids, if they're around, let's say their parents, and they can answer a lot
of different questions, they can talk in convers,
they can have conversation, then they get around strangers and all of a sudden all those behaviors are gone. Stimulus control is different;
there's no generalization. So what's the difference between stimulus and response generalization? With stimulus generalization,
we have a situation where a stimulus evokes a response and then the same response
is evoked by other stimuli that share similar physical properties of the controlling stimulus. In other words, the same
response occurs across multiple similar stimuli. Think about it like this. We have a stimulus class, right? All these different
stimuli are in a class, and they're all evoking the same response. That response is generalized
across those stimuli. So a child screams the response
when he sees a white rat and stuffed animals. Compare that to response generalization, which is more frequently talked about with things like response induction. When a person performs a variety of responses in the
presence of the same stimuli or different behaviors
with the same function occur across one stimulus. So that would look like this where you have a single stimulus evoking multiple responses. Again, kind of a counterintuitive
thing to wrap your head around, but very important because it comes up a lot, a lot, a lot, especially later on with some
of our other task list items. B-15, identify examples
of response maintenance. Maintenance is actually a sub component of generalization, but generalization maintenance
are often considered very one and the same. If behavior maintains,
then it's persisting after intervention has stopped. So maintenance occurs when
a learned response continues once teaching has stopped. We want responses to maintain. Think about if you were
five or six years old and you took piano lessons, and
you were very good at piano. Now let's say you quit when you were 10 and then when you were 30, you tried to play again
without any teaching, and you just forgot everything. You did not maintain that behavior. With our clients, we look at much simpler
skills than becoming very good at piano as important for maintaining, especially if you're working
with very young learners or learners who are very
low developmentally, making progress and maintaining progress and those learning skills
is very, very important. Now, how can we mediate
generalization and maintenance? And we are gonna be more specific later, but we can train across
multiple settings, people and stimuli. You can use a variety of
reinforcement schedules. You can teach self-management, and you always wanna
reinforce generalization when it happens. B-16, define and provide examples
of motivating operations. I want you to think of
motivating operations as making you want or not want something. When you're motivated to do something, you want to do something. When you're not motivated,
you don't want to. And you might say, well,
that sounds very simple. It is. But the key for MOs is to
tell a difference between an MO and an SD, right? Because if we have our
three term contingency: SD, behavior, consequence;
MO will be number four. This should be our four term contingency where the motivating
operation is affecting the value of this. And then temporarily evoking the behavior. The SD is signaling the
consequences available. So think about the MO as making you want or not want something. The motivating operation alters
the value of a consequence and then alters the frequency
of a behavior temporarily. So you have two types. You have a value altering
effect where we are increasing or decreasing the effectiveness of a reinforcer or a consequence. And then you have a
behavior altering effect where we are evoking or abating behavior. Now establishing operations and evocative effects are
typically thought of as one and the same because if the effectiveness
of a reinforcer is increased, we're gonna want that reinforcer, which is going to make us act in a way to gain that reinforcement. An abolishing operation, if the value of a reinforcer goes down, then we're not going
to be as likely to act to gain that reinforcement. Then also think about the idea
of deprivation and satiation. If we're deprived of a reinforcer,
if it's withheld from us, typically the value goes up. That's an establishing operation. If we're satiated, if we
have too much of an item, typically the value is going to decrease. Think about if you really like pizza, but you have pizza three nights in a row. More often than not, the value of pizza is
temporarily going to decrease. So think about motivating
operations as temporary events that increase or decrease
the value of a consequence and make you temporarily
act one way or another. B-17, distinguish between
motivating operations and stimulus control. This was a new task list item. It's not my favorite task list item. I think it almost needlessly
complicates things. What they're saying here is exactly what we described before. What is the difference between
an MO and an SD, right? That is the difference. What is the difference
between an MO and an SD? That's what they want you
to distinguish between. The MO, like we just said, temporarily alters the
value of a consequence and the likelihood of a behavior. The SD signals something as available. So you can be motivated, you
can want to do something, there can be value in doing something, but until an SD signals the availability, there is no reinforcement available. That's the main difference. With an MO, we can be motivated, we can want to do something. The SD has stimulus
control over the behavior 'cause it actually signals
reinforcement is available. Think about if you're
watching a commercial and your favorite Mexican
restaurant comes on, right? A commercial for your
favorite Mexican restaurant. That might increase the value, right? Or temporarily evoke a
behavior of going to get Mexican, right? Because now Mexican food
has increased in value. Until there's an SD signaling
that food is available, you can't have it, right? So only what the SDs where the behavior truly start to change. You can want something
but it not be available. The MO changes your want; the
SD signals the availability. B-18, define and provide
examples of rule-governed and contingency shaped behavior. These are our two types
of operant behavior. Most of the time, we're gonna
be dealing with contingencies. Now, rules are exactly
what it sounds like. Behavior under control
of a verbal contingency. If you tell a group of students that every time you hold your hand up, they have to hold their hand
up, then that's a rule, right? That's a rule that's under
control of the verbal behavior. And with the verbal behavior, if you say, if I hold my hand up, then
you hold your hand up. You want to specify what happens. If I hold my hand up and
you hold your hand up, then you won't be penalized
by me taking away your recess. That's obviously a punishment contingency, but it's stating what needs to happen. What is the antecedent,
what is the behavior, and then what is the consequence
of doing or not doing said behavior. That's a rule. We're not actually
contacting the contingency. So for example, you do
not eat expired food because you know you could get sick. That's the rule, right? Even if you've never eaten expired food, you never are going to eat expired food 'cause when you were little,
you were told if you eat expired food, you will get sick. You wear a collared shirt
to the fancy restaurant because the sign says no T-shirts. The sign says no T-shirts
and so you follow that rule 'cause in the past you've
been told if you don't follow that rule, right? If you don't follow,
if you wear a T-shirt, you won't be able to eat
at the fancy restaurant. Maybe you've never experienced that, but you're still doing
it because it is a rule. Contingency shaped
behavior is behavior under the control of consequences. You've actually contacted the contingency. So let's say someone said
don't eat expired food because you'll get sick. Let's say you drank expired milk anyway, and you got really sick. Now your behavior's under
the control of consequences. You've actually experienced
the consequence. Or you arrive to work at 8:00 AM, and you found fresh coffee brewed. You're now getting to
work at 8:00 AM every day. So contingency-shaped
behavior is what we operate with much more often 'cause we're always
setting up contingencies. If you do this, then you get this. But be aware of these verbal
statements of contingencies that are acting as rules. B-19, verbal behavior and verbal operants. My biggest piece of advice, verbal behavior and verbal operants. Know four things. What evokes the verbal
behavior or reinforces it? Is there point-to-point correspondence? And is there formal similarity? Let's start with a mand. A mand is a request. You can be requesting an item,
information, whatever it is, you're making a request. It's evoked by a motivating operation. You want information;
that's very important. The mand is evoked by an MO. You want a snack; you're hungry, you're deprived of something. A mand being evoked by an MO is very important to remember. It's reinforced by the requested item. Point-to-point correspondence and formal
similarity don't necessarily apply here because it is evoked by
that motivating operation. A tact, the speaker labels something. It's evoked by a nonverbal SD. So right off the bat, if
you know what evokes a mand or a tact, you can very simply answer any question thrown at you. Tacts are reinforced by the generalized condition reinforcer. So you see a cow on a road
trip, and you say cow. You're not responding
to any verbal statement. You're simply labeling something
you see in the environment. An impure tact is when the
response is evoked by an MO and a nonverbal stimulus. And we'll talk about
multiple control later. And this is under multiple control where we have both the
motivation to get something and the nonverbal stimulus in play. So if you're at a buffet, and
you see the king crab legs, and you say king crab
legs because you're hungry and you see those, that's an impure tact. Now an echoic, the speaker
repeats what they hear. It's evoked by our verbal SD. This is when this becomes important. It has point to point correspondence, it has formal similarity. That's important so we can
distinguish between echoic and intraverbals. Interverbals are also evoked by a verbal SD, but there are no point
to point correspondence. So if I say, what's your name? You say, what's your name? That's an echoic, evoked by
a verbal SD, point to point. If I say, what's your name? And you say, Timmy, that's an intraverbal, evoked by our verbal
SD, no point to point. Very important, but very straightforward these little things we need
to know about verbal behavior. A textual, you're just reading, right? Reading a sign, reading a book. It's evoked by a verbal SD. A lot of times written. We have point to point 'cause you're reading
exactly what was written. There's no formal similarity, right? 'Cause you're reading
something that's written. Formal similarity means
the form is the same. I speak, you speak. I sign, you sign. No formal similarity
is: you write, I speak. Textuals are, for example, a stop sign or a passage in a book. And then a transcription: writing down something that is spoken. So when you transcribe, it's evoked by a verbal SD, with point
to point correspondence 'cause you're writing
exactly what's being said and no formal similarity. Very simple if you
remember those key ideas. You should have no issue. If you don't remember anything else and you decide that you just can't get it. Number one thing, what evokes each one? Number two, do you have point
to point correspondence? And then an autoclitic
modifies other verbal behaviors or your own
verbal behaviors, right? I think, I see, I hear. B-20, identify the role
of multiple control in verbal behavior. Now this is a newer idea with
our 6th Edition task list. All multiple control says is that a single response is
influenced by multiple variables or a single variable
affects multiple responses. Don't overcomplicate this too much. Let's think about what we mean by convergent multiple control. This occurs when one
response is controlled by more than one antecedent. So saying I'm hungry is
influenced by an antecedent of an empty stomach and
the sight of a restaurant. So two antecedents are
controlling a singular response. Divergent multiple control, one antecedent evokes multiple responses. So hearing, tell me about your
trip might evoke things like, it was amazing. I saw a waterfall, or I went hiking. They might consider it divergent, right? Because we have a single antecedent with all these behaviors
diverging from it. With convergent, we have
all these antecedents affecting a singular behavior. What is the role? What's the point? It just helps explain
how different sources of control can influence verbal behavior. We always wanna look at
like we were talking about, what is evoking? What's controlling the verbal behavior? B-21, identify example of processes that promote emergent relations
and generative performance. When we think about emergent relations and generative performance, let's start to think about response induction and stimulus equivalents. We are looking for these
untrained relationships between stimuli, these relationships that emerge just from prior teaching. So relationships that
aren't explicitly taught, but that emerge due to other knowledge. So when a stimulus relationship is formed, that was untrained, we're talking about stimulus equivalence. Now, generative performance
occurs when novel and untrained responses are
demonstrated based on previously learned skills and concepts. So here with emergent relations and stimulus equivalents,
we're focused on the stimuli. With generative performance, we're focused on the responses. So for example, generative performance. If you learn red car and blue ball, and you can now label red
ball without training, that performance has developed based on these past skills and concepts. Now that's a very simple
explanation, right? But just think about
generative performance as almost a form of
response generalization. You start to form these new
responses based on what you know and what you've already learned. Then we have reflexivity, symmetry, and transitivity, which of course are
stimulus equivalence ideas. The best way to check for
stimulus equivalence is through matching to sample. Reflexivity, A equals A. So blue square to blue square. Symmetry, A equals B and B equals A. The word dog to a picture of a dog, and then the picture of
a dog to the word dog. And then transitivity
A equals B, B equals C. Therefore A equals C. It's the highest level
of stimulus relations or stimulus equivalence. So for example, the word,
dog (A) to picture of dog (B), picture of dog (B) to real dog (C); therefore, word dog (A) to real dog (C). And for all of these, for true emergent relations or stimulus equivalence, the relationship has to be untrained. So you can't teach A equals C, because then it is not real
stimulus equivalence. You can teach it, it's just
not real stimulus equivalence or real emergent relations.
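Here is a small Python sketch of that idea. It only derives the one-step relations for a three-member class, and the stimulus names are just the dog example from above, so treat it as an illustration rather than a procedure.

```python
def derived_relations(trained):
    """Given trained stimulus relations (pairs), return the untrained relations
    we expect to emerge: reflexivity, symmetry, and transitivity."""
    stimuli = {s for pair in trained for s in pair}
    trained = set(trained)

    reflexivity  = {(s, s) for s in stimuli}                    # A = A
    symmetry     = {(b, a) for (a, b) in trained}               # trained A = B, so B = A
    transitivity = {(a, c)
                    for (a, b1) in trained
                    for (b2, c) in trained
                    if b1 == b2 and a != c}                     # A = B and B = C, so A = C

    # Emergent means untrained, so remove anything we explicitly taught.
    return (reflexivity | symmetry | transitivity) - trained

# Train only two relations: word "dog" -> picture of dog, picture of dog -> real dog.
trained_pairs = [("word dog", "picture of dog"), ("picture of dog", "real dog")]
for relation in sorted(derived_relations(trained_pairs)):
    print(relation)
# Output includes ("word dog", "real dog") from transitivity, the reversed pairs
# from symmetry, and each stimulus matched to itself from reflexivity.
```

That last pair, word dog to real dog, is the untrained A to C relation the item is asking about.

B-22, identify ways behavioral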
momentum can be used to understand response persistence. Another needlessly complicated with the way it's worded item. Behavioral momentum. Just think high probability
request sequence. I've also listed the
premack principle here, which we'll talk about in a second. Let's focus first on high
probability request sequence. We are building up momentum how? With multiple easy requests, we want to get a yes chain
going essentially, right? Clap your hands. Okay. Stomp your feet. Okay. Rub your head. Okay. Now let's clean up the room. We're building up momentum with these high probability request into the low probability request. So you want your learner
to answer two plus two. You say clap your hands, touch your nose. What's two plus two? High, high, low. Now, importantly, behaviors in a
high piece sequence should be in the learner's repertoire already. We don't want to consider
high probability request something the learner doesn't know. High probability request
should be requests that we are expecting to happen. Behavior momentum describes
the rate of responding and resistance to change due
to reinforcement conditions. In other words, the
more behavior momentum, the higher the rate of responding and the higher resistance to change in reinforcement conditions. I put the premack principle 'cause it's essentially the opposite of the high probability request sequence, and I don't want you
getting them mixed up. With the premack principle, we're offering access to
highly preferred behavior as a reinforcer for a
non-preferred behavior. So for example, you know your client wants to play with paint. Well, you tell them you can paint as long as we first mow the yard. We're using access to the highly preferred
behavior as a reinforcer for a non-preferred behavior. B-23, identify ways the
matching law can be used to interpret response allocation. We talked about matching
law when we talked about our concurrent complex schedule
or compound schedule. Matching law explains
why behavior occurs more often, or why behavior goes to
certain reinforcement. Matching law says responses
are proportionate to the amount of reinforcement
available across different choices or opportunities. If a kid is praised every
five minutes while playing with toys and every 10 minutes
while working on homework, matching law says the kid's going
time playing with toys. Responses are proportionate
to reinforcement available. Keep in mind, keep this in mind when you are targeting skills. You wanna be sure that
responses that you want to see happen more are
proportionately ideally receiving more reinforcement
at least in the beginning. Matching law can help predict how behavior will be
distributed across different opportunities based on the
availability, magnitude and immediacy of reinforcement. In other words, you've
got multiple behaviors on different schedules. Matching law says those
behaviors are going to happen in proportion to the amount of reinforcement that is available.
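As a quick sketch of that proportion, here is the simple matching law worked out in Python with the toy and homework numbers from the example above; the function name and the per-hour figures are just for illustration.

```python
def matching_law_allocation(reinforcers_per_hour):
    """Predicted share of responding for each option under the simple matching law:
    B1 / (B1 + B2) = R1 / (R1 + R2)."""
    total = sum(reinforcers_per_hour.values())
    return {option: r / total for option, r in reinforcers_per_hour.items()}

# Praise every 5 minutes while playing with toys (12 reinforcers per hour),
# praise every 10 minutes while doing homework (6 reinforcers per hour).
print(matching_law_allocation({"toys": 12, "homework": 6}))
# {'toys': 0.666..., 'homework': 0.333...} -- about double the time on toys.
```

B-24, identify and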
distinguish between imitation and observational learning. So we've introduced
observational learning here and the only real
difference between imitation and observational learning is immediacy. Now with imitation, we have a model. So you want to think about
modeling with imitation. The model demonstrates
the skill or ability. The imitator copies the skill or ability immediately, right? We have to think of
imitation as immediate. The primary difference between
modeling and imitating, or I should say imitating and observational learning is imitation is the immediate replication of a model while observational learning
happens through observation of another without immediate performance. So observational learning, the learner requires new
behaviors by observing others. It doesn't have to be planned, and it isn't immediately replicated. That's it, right? So don't overcomplicate this. Imitation is exactly what we remember. I or you or the learner is
imitating a model immediately. Observational learning, you watch somebody do something, a child watches their
brother beat a video game. The next day, the child
beats the game the same way. The brother was not a planned model and the the behavior did
not happen immediately. The child simply observed or
learned through observation. C, measurement data
display and interpretation. Let's talk about C-1. We're gonna create operational
definitions of behavior 'cause with C, we're gonna see how each one goes into one another. First things first, with
behavior, we have to define it. We need to describe exactly
what we intend to measure because we want operational
definitions to be observable and measurable and repeatable. Think about what the behavior
looks like, the topography. Why does the behavior happen? Is it escape avoidance? Is it attention? Is it tangible? Is it automatic? Don't be subjective. The client felt angry. That doesn't tell us anything, not what it looks like,
why it's occurring. If you and I both went
to observe a behavior and the only thing we
knew was the behavior is the client feeling angry, you and I are going to
measure more than likely pretty different things. You need to be as precise as possible. So Johnny hit his brother
five times instead of Johnny was aggressive. Think about how much easier it is to understand Johnny hitting
his brother five times instead of Johnny was aggressive. Now we can define that even further. We can define hitting. We can define non-examples. You can be as detailed and
thorough as you want to get. What you want to avoid in
operational definitions is any subjectivity whatsoever. Somebody, a naive observer,
needs to be able to read that definition and know exactly what you are trying to measure. C-2, distinguish among direct, indirect, and product measures of behavior. Simply, these are the three
different ways we can record our behavior data. Direct is going to be the most important, where we are observing the
target behavior as it happens. We should never write a treatment plan without first directly observing behavior. Indirect includes things like
interviews, checklists, rating scales, and surveys. It's more subjective. You're not observing the
behavior as it happens. So interviewing a parent
is going to be indirect. Taking frequency is direct; you're directly observing the behavior. And then with product measurement, or permanent product, you're simply measuring the result, the outcome, or the product of a behavior. You don't necessarily need to observe the behavior occurring, but you are measuring the effect
it had on the environment. So a clean room, a completed
test, a hole in the wall. With product measurement,
we need to be very careful because the behavior needs to produce a reliable
product in order for us to use permanent product. If we're measuring, let's say self-injury, and the self-injury doesn't
always leave a product, then we might need to
measure it a different way. C-3, measure occurrence. So occurrence includes
things like count, frequency, rate and percentage. When we think of occurrence, this is how many times a behavior occurs, how often something happens. Repeatability, count/frequency, probably the easiest type of measurement. We're just counting how many
times something happens. Rate is frequency with
the time component added. It's frequency over time equals rate. Now do people use frequency
and rate interchangeably? They do, but there is a distinction. Frequency is the count. You ate 10 peanuts. Rate, we're adding a time component. You ate 10 peanuts per minute. That is the big difference. And then a percentage,
you just have to know how to find percentages on your exam. You might get a question, but
in real life, percentages are extremely important for us as analysts. A percentage is a ratio: a number or amount per hundred. Let's not overcomplicate that, right? You have 10 shots, you made six, you made 60%.
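If numbers help, here's a tiny Python sketch of count, rate, and percentage; all the values are made up to mirror the peanut and free-throw examples:

```python
# Count, rate, and percentage with hypothetical numbers.

count = 10                         # frequency: 10 peanuts eaten
minutes_observed = 5
rate = count / minutes_observed    # 2.0 peanuts per minute

correct = 6
total_trials = 10
percent_correct = 100 * correct / total_trials   # 60.0 percent

print(rate, percent_correct)
```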
Percentages, very simple, right? How many did you get right? How many trials were correct over the total number of trials? Then convert that to a percentage. C-4, measure temporal
dimensions of behavior. Temporal dimensions of behavior
include duration, latency, and interresponse time. Our two temporal dimensions are extent, which is duration, and locus, or where the behavior occurs at a certain point in time, which covers latency and interresponse time. Duration: how long the behavior
lasts from onset to offset. So duration, you're
measuring a single response. Your trip took four hours,
your order took 10 minutes. You started the order, we
measured it, you ended the order. That is our duration. Duration is the measure of
how long a response takes. Now when we think about latency and interresponse time,
think about this chart here. Latency is the time in between
the SD and the response. Interresponse time is the
time in between responses. So latency, your alarm goes off, takes you three minutes to get outta bed. SD was the alarm. Three minutes is latency to get outta bed. Your wife tells you to
pick up the kitchen, takes you 10 minutes to get up. Wife telling you to pick
up the kitchen is the SD. Latency is 10 minutes. Interresponse time, time
in between responses. Two hours passed between
your last cigarette and the next. Two minutes passed between bite number one
and bite number two. Think about this if you get
confused between latency and interresponse time: both are measuring the time in between something. Latency is SD to response 1; interresponse time is response 1 to response 2.
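Here's a small sketch with hypothetical timestamps in seconds. One common convention, which I'm assuming here, measures interresponse time from the end of one response to the start of the next:

```python
# Duration, latency, and interresponse time from hypothetical onset/offset times.

sd_time = 0.0                       # e.g., the alarm goes off
onsets = [180.0, 400.0]             # when each response starts
offsets = [240.0, 430.0]            # when each response ends

latency = onsets[0] - sd_time                               # SD to response 1: 180 s
durations = [off - on for on, off in zip(onsets, offsets)]  # [60.0, 30.0]
irt = onsets[1] - offsets[0]                                # response 1 to response 2: 160 s

print(latency, durations, irt)
```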
C-5, distinguish between continuous and discontinuous measurement procedures. Now, continuous measurement
procedures are almost always going to be more accurate. Why is that? Because we are recording every instance of the target behavior. With discontinuous measurement,
we're only taking a sample of behavior within the observation period. So when we can, we want to use continuous
measurement procedures. We wanna use frequency, duration, latency, interresponse time, rate. When's a time you might not be able to use continuous measurement? Let's say you're watching too many kids. You don't have a lot of time, you don't have a lot of resources. There are a lot of behaviors
to look at at once. In that case, you might have
to discontinuously measure. What we mean by that is
continuous measurement: if we have three-hour sessions, we are taking data the full three hours, with measurement occurring the whole three hours. Discontinuous measurement: we might take that three hours and say, okay, for 30 minutes, I'm going to record data,
and I'm going to break that even further down into
let's say ten second intervals. That is the difference. Things like partial interval data, whole interval and
momentary time sampling are all discontinuous. Continuous measurement
provides a complete picture. Discontinuous provides an estimate. C-6, design and apply discontinuous
measurement procedures. So let's think about what
we're actually looking at with discontinuous measurement procedures. Interval recording. An interval is a specific length of time when data will be taken. Let's say we have five
minutes of data recording, and we break it into 20 second intervals. We're now looking at each interval. With partial interval recording, we score an interval if the behavior occurs at
all during the interval. So at any point in the 20 seconds that behavior occurs, it's a response. If we have 20 second intervals and the behavior happens for
five seconds, it's a response. If we have ten second intervals, the behavior doesn't happen, no response. The behavior just has to happen briefly. Whole interval, same idea. Let's say we have five
minutes, 20 second intervals. If the behavior occurs the
whole interval, it's a response. So 20 second intervals,
behavior happens for 20 seconds. It happened the whole time. It's a response. Ten second intervals, behavior
happens for nine seconds. Did not happen the whole time. No response. Time sampling. We're still taking interval data, right? So let's say we have five
minutes, 20 second intervals, but with time sampling, we're looking at the end of the interval. So just at the end of
the 20 second interval. Is the behavior occurring? Is it not? For example, 20 second intervals, behavior happens at the 20
second mark, it's a response. Ten second intervals, behavior happens at the eight second mark, no response. A PLACHECK or planned
activity check is just the group version of
momentary time sampling. So with momentary time sampling, we have one person; with a planned activity check, we're recording the activity of all participants who are involved.
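Here's a rough sketch of how the three discontinuous methods would score the same hypothetical one-minute observation. The episode times are invented, and it checks the behavior second by second just to keep the logic simple:

```python
# Scoring partial interval, whole interval, and momentary time sampling
# for one hypothetical 60-second observation with 10-second intervals.

interval_len = 10
session_len = 60
episodes = [(3, 8), (18, 40), (55, 60)]   # (start, end) times the behavior occurred

def occurring(t):
    """True if the behavior is happening at second t."""
    return any(start <= t < end for start, end in episodes)

for i in range(session_len // interval_len):
    lo, hi = i * interval_len, (i + 1) * interval_len
    partial = any(occurring(t) for t in range(lo, hi))    # occurred at all
    whole = all(occurring(t) for t in range(lo, hi))      # occurred the whole interval
    momentary = occurring(hi - 1)                         # occurring at the interval's end
    print(f"interval {i + 1}: partial={partial}, whole={whole}, momentary={momentary}")
```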
C-7, measure efficiency. So we've changed this a little bit. Efficiency includes things
like trials to criterion, a cost benefit analysis,
training duration. How efficient are we, right? How efficient is our intervention. With trials to criterion, how many opportunities does
it take to reach success? Ideally, we don't want a
lot of trials to criterion because we want to reach success quickly. For example, if your mastery level is six and it takes your client 10
tries to get six matches, trials to criterion is 10. How many opportunities does it take to reach mastery?
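A trivial counting sketch, with made-up trial outcomes, just to show what that number is:

```python
# Trials to criterion: how many opportunities it takes to reach mastery.
# Hypothetical trial outcomes; mastery criterion = 6 correct matches.

outcomes = [True, False, True, True, False, True, False, False, True, True]
criterion = 6

correct = 0
for trial_number, outcome in enumerate(outcomes, start=1):
    correct += outcome
    if correct == criterion:
        break

print(trial_number)   # 10 -> it took 10 tries to get 6 correct
```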
A cost benefit analysis: you're comparing the benefits
you're comparing the benefits of implementing an
intervention versus the cost. So the benefits might
be it's socially valid, you can help your client
contact reinforcement in the natural environment,
you're gonna make progress, but it might cost you time,
it might cost you resources, and there might be an
ethical consideration. It's up to you to weigh the costs and the benefits and make a decision. So example, one math
intervention is proven to work, which is great, it's effective, but it's going to take a
lot of resources compared to a program that has
shown incredible outcomes, but utilizes punishment
and is time intensive. So in both cases, we have interventions that work and that are costly, but one utilizes punishment. Which one are we going to choose? That's the decision you have
to make based on your analysis. And then training
duration, how long it takes to achieve a desired behavior
change or skill acquisition. Does it take a week? Does it take a month? Does it take six months? This is important because we never put timelines
on our interventions, but if you're routinely taking
six to eight months to a year to see progress, you might wanna reevaluate just what you're doing. C-8 and C-12, evaluate the validity and reliability of measurement procedures. C-12, select measurement procedures to obtain representative
procedural integrity data that accounts for relevant dimensions and environmental constraints. I combine these because
this is all about good data. Data should be accurate,
it should be valid, it should be reliable, it should also be believable,
but we'll get to that. Let's start with accuracy. Accuracy says the collected data truthfully reflect what was measured, meaning you are recording exactly what happened. If you're measuring blueberries eaten, the client ate 10 blueberries, and you record a 10, the data are accurate. If you record 12 blueberries eaten but the client ate 10, that is inaccurate; the data are not truthful. Validity: the collected data
is taken for the correct or intended behavior. You're measuring what you're supposed to. So you wanna record the
length of time it takes for your client to complete a worksheet, but instead you record how long it takes for them to start the worksheet. That data's not valid, we can't use it. You're measuring the wrong thing. Then reliability. You can produce the data repeatedly. If your client eats 10
blueberries every day, you can record 10 data points every day. Data are reliable. Just because your data are accurate doesn't make the data valid. You can measure the wrong
thing and be accurate. You want all three. Just 'cause you're reliable,
doesn't make the data valid, doesn't mean you're
recording the right thing. Again, you want all three
of these to have good data. What do we talk about when we say dosage? We want to know how much of
the intervention is delivered and what is the best amount. Think of parametric analysis. So we'll talk a little
more about this later. And then we went over accuracy and then environmental constraints. We're gonna talk more
about this ethically, but when you are selecting
a measurement procedure, you really want to focus on measurement that you can reliably pull off, right? Think back to when we talked about continuous versus discontinuous: if you can't produce the
same thing over and over and over again, that's
accurate and that's valid, then your environmental constraints might be preventing you from
using what you'd like to. And so you need to think
about how can I measure data and collect data to the point
where it's accurate, valid, and reliable based on
the environment I'm in. C-9, select a measurement system to obtain representative
data that accounts for the critical dimension of the behavior and environmental constraints. Here we go again with
environmental constraints. What does that mean? What does that mean for
environmental constraints? Well, if you're working
in a clinic setting, you might have only a
limited amount of space, limited amount of time,
limited amount of help, or you might be overwhelmed, you might be very busy. If you're working in a client's home, the home might be limited on
resources, it might be dirty. You might have a poor workstation. There might be siblings or animals running around. All these constraints need to be factored in when you're choosing a measurement system, because you want measurement that's going to collect data that are accurate, valid, and reliable. You need to choose the
most appropriate system that will record accurate, valid, and reliable data based on
circumstances surrounding the behavior and the behavior itself. Just a quick refresher,
continuous measurement, we're capturing every
instance of behavior. The best form, however,
not always possible. Discontinuous is capturing a sample, so not as true, right? And when we say true, we mean typically not as complete a measurement procedure, but it's often better in group settings, for short periods of time, or when our environmental
constraints prevent us from continuous measurement. And then event recording will typically happen during assessments. You're just measuring how
many times a behavior occurs. A very simple count. C-10, graph data to communicate relevant
quantitative relations. This is our graphing task list item because in ABA, we should
be graphing everything. Graphing is how we make
decisions about trends and changes in level and variability. And it's an easy way to demonstrate
to stakeholders progress that is or isn't occurring. So it's an essential
part of visual analysis, and visual analysis is our primary method
simply looking at our graphs and making decisions
about what is going on with our data in our sessions. So let's start with a line graph. Line graph is the most
common form of graph in ABA, and it's the one that you're going to be using the most often because you're going to be
graphing pretty much all your data using a line graph. It's based on the Cartesian plane, which you may or may not know, but just so you're aware: the x axis represents the passage of time. So sessions, days, however you're tracking your
data, the x axis will be time and then Y is going to
represent the behavior. So whatever your behavior is
is going to go on the Y axis. Then you can see your
data points are connected and it's going to form a data path. So frequently what you might see is a line graph that looks like this. We have time, we have behavior, we have baseline and connect our points, and then we change our
condition to intervention, and it looks something like that. And then maybe we enter
a maintenance phase and it goes something like this. So you can see the data
points are connected within the conditions, but they're separate once the condition changes. Line graphs, very straightforward; don't overthink graphs.
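Here's a minimal plotting sketch, assuming matplotlib is available; the session numbers and values are invented. The point is just that each phase gets its own data path, so points aren't connected across the phase change:

```python
# Hypothetical ABA-style line graph: baseline and intervention plotted
# as separate data paths with a phase change line between them.
import matplotlib.pyplot as plt

baseline_x, baseline_y = [1, 2, 3, 4, 5], [8, 9, 7, 9, 8]
treatment_x, treatment_y = [6, 7, 8, 9, 10], [6, 5, 4, 3, 2]

plt.plot(baseline_x, baseline_y, "ko-")            # baseline data path
plt.plot(treatment_x, treatment_y, "ko-")          # intervention data path
plt.axvline(x=5.5, color="black", linestyle="--")  # phase change line
plt.xlabel("Sessions")                             # x axis: passage of time
plt.ylabel("Responses per session")                # y axis: the behavior
plt.title("Hypothetical line graph")
plt.show()
```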
Now, bar graphs are not really
that commonly used in ABA, especially day to day. If you're creating a report, though, and you need to, let's say, demonstrate different frequency totals compared to each other, that's where they work well. So for instance, you can see in our graph
here, we have children who prefer red, blue,
green, yellow, and pink. It's a quick way to compare totals. So if you have five total durations and you wanna compare those,
five total frequency counts and you wanna compare those,
a bar graph is a great way to represent total data and compare that total
data against each other. Then cumulative records,
think B.F. Skinner. This is what B.F. Skinner
used when he was doing his experiments with his pigeons, where they would just continue to accumulate responses over time. And you can see it's a continuous and ever-increasing data path
because it never goes down. It's simply accumulating
total data over time. So if we take a look at
our cumulative record here, you can see responses are only going up. They never go down. You can see there are some
very steep points here where responses were quite rapid. And then when you get
to a flat level here, you can see there were no responses. It's just stuck at 300. And so for cumulative records, you really wanna understand
one, it's just an accumulation of data over time so it never decreases. And then two, what do
the different paths mean? If we have something that's very steep, then responding was quite rapid. If it's flat, then there was no responding at all.
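Here's a quick sketch of turning hypothetical per-session counts into a cumulative record; zeros show up as flat stretches, and the record never decreases:

```python
# Building a cumulative record from hypothetical per-session response counts.
from itertools import accumulate

responses_per_session = [5, 12, 0, 0, 30, 8]
cumulative_record = list(accumulate(responses_per_session))
print(cumulative_record)   # [5, 17, 17, 17, 47, 55] -- flat where counts were 0
```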
A scatterplot is a distribution of data points across a data set. In other words, how are X and Y related? The most obvious example is
you wanna find out what time of day behavior occurs the most. Our scatterplot here is
city miles per gallon versus highway miles per gallon. So it's not like a line
graph where you have behavior over a span of time. We're simply looking at
how X and Y are related to each other. Again, it's typically used if
you want to figure out what point in the day behavior happens. You might have a scatterplot
between 9:00 AM and 5:00 PM, and whenever that behavior
happens, you're gonna put a dot or a check mark at that
certain amount of time. Or let's say you work in a school and you have different classes and you wanna figure out when do behaviors happen in what classes? You could create a scatter plot and find out behavior
relative to each class. So scatter plots are very
useful, but just like bar graphs and cumulative records, they
really have their place. For the most part, we're
just using line graphs. And then semi-logarithmic graphs, standard celeration charts. You likely will not be asked this, but just be aware that precision teaching uses these semi-logarithmic
graphs, and it charts fluency. Now, interpreting graph data. When we are visually analyzing graph data, there are a few things we are looking for. Visual analysis again
is how we read graphs and it's how we explain what
is happening with the behavior. We're looking at things like
level, variability and trend. So let's look at our
first idea here of level. These are where data points
are relative to the Y axis. So if we look at A, you can see our level if we go zero to 10 is quite high. We have quite a high level. ABAB, this is our baseline. Once we go to intervention,
what happens to the level? Well, it goes down; there's a sharp decrease in level. What does that say? Well, if the level has dropped, then the behavior has dropped, because the level is simply the average of all our data points. We go back to baseline, and we go back up. The level returns to baseline, A-1, or you could say the level returns back to where it was prior to intervention. The intervention goes back in. What happens to the level? Behavior decreases, and so does the level. It represents a change in the
height of the data points. It's a good idea of how behavior overall has
increased or decreased. Then let's look at variability. It's the amount of variation
between data points, the range of data points
around the average of the data points. Variability can be high or low. Essentially, variability
is looking at range. Okay, so if we have our level right? Let's just say our level is here. Where is our variability? Well, we have a nine, we have a seven, an eight, a nine, about
an eight or nine-ish, and then about an eight-ish, right? Truthfully, the biggest
range is from what looks to be maybe seven to
nine, nine and a half. Not a lot of variability, right? Everything's kind of centered around our average data points. We would probably classify
this as low variability. Now what about this? Well, we have a three, a little less than three, a four, a two; the lowest is maybe right below two, and then the highest is right around four. Again, not extremely variable. Same with A-1 or A-2. Same with B-2. A high variability would
probably look more like this where you have a situation
where we have data point here, data point here, here, here, here, here. And you can see once we
connect our data path, just how much this is changing, right? All over the place,
there's a large difference between the highest and lowest data point. And these visual analysis tools are going to give you an idea of what is
happening with your behavior. Finally, trend, the direction the data path is heading on the graph. This is typically the easiest
way to explain behavior to naive observers. With trend, we can be increasing, decreasing, or have no trend. If our trend is increasing, that behavior is typically moving upwards. A decreasing trend,
typically moving downwards, and then no trend would
be more or less flat. If we look at A, what is our trend? Well, we're going down
and back, up, up, up. But down again. There isn't a whole lot of trend going on. And you might ask, well, how
many data points make a trend? It just depends because
had we stopped here, then our trend looks relatively upward. That's why more data are always better. Here we've gone relatively
decreasing, right? If you draw a trend line,
relatively decreasing, here, it's relatively flat
and here it's relatively flat. Don't confuse trend for level. Level is just the general average or grouping of the data points. And it's always gonna be a straight line. Trend is what direction are
those data points moving? And so if we again graph this trend, it more or less looks something like this, a slightly decreasing trend. Everything else is relatively flat. Now visual analysis is
not a perfect science. So it's going to be up to you to make your best judgment
call to explain the data. Just remember, we want to be as honest and forthright as possible. Even if the data aren't
good, you need to be honest about how you see the data, and you need to explain
that to the stakeholders. Again, visual analysis is not a perfect science.
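As a rough aid only (real visual analysis is a judgment call), here's a tiny sketch with made-up data that puts numbers on level, variability, and trend:

```python
# Rough numbers behind visual analysis: level, variability, and trend.
# Hypothetical data points from one condition.

data = [9, 7, 8, 9, 8]

level = sum(data) / len(data)           # mean level: 8.2
variability = max(data) - min(data)     # range: 2

# Crude trend estimate: second-half mean minus first-half mean.
half = len(data) // 2
trend = sum(data[-half:]) / half - sum(data[:half]) / half   # 0.5 (slightly up)

print(level, variability, trend)
```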
Alright, last part of part one, we are going to go through
experimental design, which seems to give people quite a bit of trouble. Now, I will say I've noticed, personally and anecdotally, that preparation through your courses, whatever college or classes you've attended towards your masters and towards your BCBA or your BCABA, could have done a better job teaching experimental design. It's not that challenging. What makes it challenging is that we don't technically do research in the traditional sense if we're clinical behavior analysts, right? 'Cause we're working one-on-one. But you still wanna understand
the different designs because you never know when you might use or need a certain design. Now let's start here. Okay? This is again as our research
portion of our task list. First thing that we need to
understand is the difference between an independent
and a dependent variable. What are we manipulating? We are manipulating the
independent variable. This is what we're changing,
removing, introducing. So you want to figure out
the ideal amount of salt to add to your recipe. Salt is gonna be the independent variable. You want to increase behavior, so you're trying different
reinforcement schedules. The reinforcement schedules
are the independent variables. The independent variable is
what you are manipulating, what you are changing,
typically your intervention. Now what are we targeting? Well we're targeting behavior. And the behavior is
the dependent variable. The dependent variable is dependent on the independent variable. The dependent variable is dependent on the independent variable, right? And that's if we have functional control. Ideally, whatever we're
introducing is affecting the dependent variable. So if we're introducing salt, salt is going to affect the soup if the soup is the dependent variable. If we're introducing reinforcement, which is our independent variable, that is going to affect the behavior. Think of your behavior as
dependent on your intervention, dependent on your independent variables. Now what can get in the way
of perfect functional control? Functional control or
experimental control, right, is when we control the behavior. Whatever we're manipulating,
whatever we're changing, whatever we are introducing, we want to control the behavior. Now we have all these other
variables that are extraneous: anything we are not investigating or controlling in the environment is extraneous. Once those extraneous variables start to impact our dependent
variable, they become confounds. And we need to control for confounds because confounds are interrupting
our experimental control. So if you break it down
into almost a list, right? You're going to have, one, your independent variable: this is your intervention. You're going to introduce
the intervention to change the dependent variable or the behavior. You're then going to try to
control for extraneous variables to prevent confounds. Think about it like that. You introduce your independent variable to change the dependent variable, you control for extraneous
variables to prevent confounds. And then if we just wanna finish it off so we can establish functional
or experimental control. Now we have different types of analyses when we are experimenting. This is very similar to our goals. Remember we had description
prediction control. So if you think of descriptive analysis as description, right? Because this is just measuring behavior under a single condition, right? Let's just say you're
taking frequency data, and you're just taking a count. You're just describing what's happening. You're not manipulating
anything, no manipulation. Correlational would be
our prediction, right? Now we're trying to figure out, okay, behavior happens in situation A a certain way, but in
situation B it happens a different way. So we have two conditions. For instance, we give reinforcement in one condition, and we don't in another. How is behavior changing? I give a certain level of
feedback in one condition and a different level of
feedback in another condition. How is feedback correlated to behavior? That's a correlational analysis,
it's related to prediction. Then experimental analysis
is related to control. Now we are actually manipulating things, manipulating conditions, variables, introducing our independent variables and trying to establish
experimental or functional control. When we are experimenting
or doing research or even working in practice,
we want two things. We want to establish internal validity, and we wanna establish external validity. External validity is
easy: it just means the results of our experiment are generalizable. Do our behavior change procedures and our behavior change results generalize to other subjects, settings, and behaviors? If we're not generalizing,
we're not being effective. So external validity just
means whatever we're doing, whatever changes we're
making are generalizing. Internal validity says we are controlling the dependent variable or the behavior. So any changes in the
dependent variable are a result of our intervention or manipulation. This is what we talk about when we control for extraneous variables and confounds. It's the reason things
like withdrawal designs are so effective: if we go baseline and then introduce an intervention and the behavior increases, then we withdraw our intervention and the behavior decreases again, and we introduce, we withdraw, and each time we introduce and withdraw the behavior changes reliably, we can reliably say we
have internal validity because we are controlling the behavior. So internal validity has to do with our own functional control. External validity has to
do with generalization. What are some threats
to internal validity? And I've added a few extra here just so you're familiar with them. These are not difficult. These are all considered
potential confounds that can skew our data. They can mess up our data. And typically, these
are much more relevant when dealing with research
participants, right? But you wanna be aware of it too when you're dealing in
one-on-one settings, okay? 'Cause we are dealing with people and so a lot of these apply to everybody. So let's start with one, right? Remember, internal validity
is we are controlling the behavior and the behavior change. Nothing else is affecting it, but something like history can affect that. History refers to outside events that occur during the intervention. When we think of history, we try to think of the
learners' past, right? Or what the learner is experiencing as we are going through the intervention. So if you have a math intervention and the learner is self-studying and you're not controlling
the self-studying, that is a potential confound. Maturation is the natural change
in participants over time. Think about a participant aging. Think about the natural skill improvement as someone gets older. This can be very important to
be aware of in young children. Think about how quickly a child's language naturally develops. So if you're doing a
language intervention, you've got to be aware of the maturation. Testing, repeated exposure to assessments influences performance. Just by nature of doing
the same thing over and over again, people
tend to get better at it. If I'm running the same
multiplication tables over and over and over and over again,
I'm doing the same testing, just by the nature of doing
it repeatedly is likely going to influence performance. Why does it matter? Because if you're running an intervention and they are
just practicing, practicing, practicing, practicing, even
without the intervention, maybe they're already getting better. Instrumentation means a change in how you measure, or a change in the tools you're using. So if you're using a different rating scale or a different measurement tool, that can affect how the data look. And then regression to the mean. This one's a little more
complicated to think about, but let's say a good student
receives a D minus on a paper. Let's say that good student
averages A's and B's. Well over time, the good
student will not continue getting D minuses if they're
an average A and B student. They're gonna regress to their mean, which means even if
they perform really high or really low, both are going to go back to the average eventually. Meaning we can't get too high,
and we can't get too low. We've always gotta think about
what is typical performance. And then finally, attrition is just loss of participants over time. So if you are running a
study that has 10 people and you lose five, the
five that dropped out, that would be related to attrition. Again, most of these are
very much research related if you're actually running research studies. But be aware of these
within your own clients if you are simply a clinical BCBA. Now, D-4, defining features of single subject experimental designs. We use single subject designs
in ABA almost exclusively. Can you use other designs? Sure, but for the most part, single subject is the norm. What are some defining characteristics? Steady state responding. When we are taking baseline
data, we want to look for what we call steady state responding. And we talked about variability earlier. Steady state just means we're getting responding that is not variable, especially in baseline, 'cause it gives us the right of way to go to intervention. So a steady state strategy is how we try to reach steady state responding. We're going to expose a learner
to the same condition over and over again while controlling
for extraneous variables until we get that steady state. That is our goal. Other ideas, individuals
serve as their own control. Meaning I compare this person's baseline to their intervention,
back to their baseline, back to their intervention. Each condition is compared against all their other conditions. They are their own control. We don't have a control group. What are the ideas behind baseline logic? We have prediction,
verification, and replication. Prediction says we are going to predict what the outcome will be when measured. So if I have something like this and a baseline here, I predict that if I don't intervene, my baseline is going to continue that way. Verification is when we
verify our prediction. So if I intervene, if this is baseline, this is intervention, this is baseline, and let's say we intervene. If we go back to baseline and we verify our prediction
here, that's verification. Replication would be going
back to the intervention and now replicating the intervention. That's baseline logic. That is what single subject
experimental designs are so useful for. D-5, identify the relative strengths of single case experimental
designs and group designs. This is a new task list item. Group designs are typically
used in psychology studies where we are trying to take a sample and then generalize to a population. That means when you see results of studies that say, sunscreen is effective for preventing sunburn in adults 30 to 40. Well, they didn't go to
every adult between 30 and 40 and test sunscreen. They took, let's say 200
participants, ran a study and then used those results to
generalize to the population. That's a group design. With a single case design, we're typically dealing
with 1, 2, 3, maybe four, maybe five people at a time, right? We're not really
generalizing to a population, we are focused on the individual. So single case designs
are much more sensitive to individual changes because you're looking much
more at the individual person rather than 50 to a hundred to 200 people. We can demonstrate much more
clear functional control 'cause of the individual. It's flexible, and it's
great for small sample sizes. Now all those advantages
are fantastic, right? But there are strengths
behind group designs. Group designs are easier to generalize to broader populations just because we're using a bigger sample size, and we're using statistical analyses. Remember with single
subject, it's typically going to be much more visual analysis related, just 'cause of ABA. More control for confounds
through random group assignments because we're randomly
selecting our groups instead of just looking at the one person and them acting as their own control, much easier with larger samples and then more efficient at
identifying group trends. Essentially what we're saying is that single subject is fantastic for getting an idea of an individual's performance and what's effective for that individual, while group designs are better for generalizing ideas about larger populations and larger groups. Now I've combined D-6, D-7, and D-9, where we're gonna critique
and interpret data from single case experimental designs, distinguish among the different designs and then apply the designs. I can only cover so much in a study guide. I recommend if you're going to use Cooper to practice your own
interpretation of the designs. Cooper has a ton where they explain it. And so go into the book, pick a random design and interpret it. That's how you're going to get better. Distinguishing among them
should not be that difficult once you've done it a few times. They are pretty clear cut and have their own
strengths and weaknesses. Applying it is really
going to be based on need. And this is going to
come much more in terms of questions, right? Practice questions. So each has its own way of studying. Again, I suggest going into Cooper and interpreting these designs yourself. But let's just go over a general knowledge or general overview of each design. Let's start with the reversal
or withdrawal (ABAB) design, the most common by far, right? We all know it. We have a baseline, we have a treatment, a baseline, and a treatment. You can see our prediction here, right? Prediction. This is our verification,
this is our replication. Why is the reversal so good? It's great at demonstrating
experimental control. It's fantastic for experimental control because we are quite literally
reversing what we've done. So if you see this: behavior's low, trends upwards, we remove the intervention, it goes back down, we intervene again, it goes up. There's a very, very good chance functional control exists. What are some disadvantages? Well, some behaviors can't be reversed. If I teach you to read or write or spell or get dressed, once you learn to do that, it's hard to undo that learning. Ethical concerns: if I'm trying to tackle
a challenging behavior that's causing harm or that's dangerous, do I
really wanna reverse that design and remove intervention? And then sequence effects. The order of conditions may matter. So the impact of the prior
condition on the following condition can affect data. So if we look at this
graph, let's interpret it. Baseline A is in a steady
state with no trend. So we got our steady
state strategy, no trend. The treatment was then introduced, which led to a change in level and an increasing trend. The treatment was removed, which led to this decreasing trend and a return to the initial level. And then the intervention was reintroduced. Again, a change in level. Simple as that. The ABAB design's not that complicated. Let's talk about a
multiple baseline design. Multiple baselines are used to analyze effects across
settings, behaviors, and participants. And you can see this is the classic layout of a multiple baseline design where we have baseline
and then intervention. And you can see in number
two, baseline goes on longer, baseline goes on longer. We're trying to demonstrate
experimental control here. What we wanna see is
baseline remaining steady until intervention's introduced. If we introduce intervention in tier one and the baseline in tier two starts to change, that's an issue. So advantages: no withdrawal. We don't have to worry about withdrawal in a
multiple baseline design because we're running multiple designs... We're taking baseline on
multiple things at once, right? We're not having to worry
about withdrawing anything. We can examine multiple
dependent variables all at once, across subjects, settings, or behaviors. Disadvantages: no experimental
control demonstration. Now I use that broadly, right? You can get an idea of
experimental control, but it's much more difficult to demonstrate experimental
control than with, let's say, a reversal design. What's a multiple probe design? A multiple probe design is just
like multiple baseline design, but only certain data points are collected during baseline. So it might look something
like this where we have kind of the same pattern and try
to do this roughly, right? And we have our baseline,
baseline, baseline. And then once we intervene here, let's say we intervene here, and we go down and then
we go here and then here. Probe, instead of going all the way solid, we're gonna have our
data point, data point. And then there'll be a gap, right? And we're just probing, right? Instead of continuing on with baseline. And this can be used when
we're low on resources or we don't believe
baseline is going to change. We also have a delayed multiple baseline where initial baseline begins, but other baselines are
staggered and delayed. Typically, we're gonna
run baselines concurrently across subject settings or behaviors, but sometimes that isn't possible. So again, let's interpret this graph. Baseline 1 demonstrates
this decreasing trend. We introduce intervention
and level changes slightly. Baseline 2 and 3, once
intervention one is introduced, remain relatively the same. When we introduce intervention
2, level does change, same with three, level does change. So it looks like the intervention
was relatively effective even though it might be a small change. Alternating treatment designs
or multiple element designs. There are several different
ways to run these. You can run them with baselines, you can run them with maintenance phases. They're rapidly alternating conditions, presented in a random or semi-random order. As you can see right here, we have story mapping, and
we have no intervention. And you can see they're alternating. We're trying to figure out, okay, which one is gonna be more
effective at increasing number correct? You try to give each condition an equal opportunity to be present during measurement. This is great. Again, we don't have to withdraw anything. We can test multiple
independent variables rapidly. So we have multiple interventions
we wanna try all at once. We can reduce sequence effects 'cause the conditions are rapidly alternating,
and no baseline is needed. You can run baseline,
but it's not necessary. Disadvantage: there could be carryover between alternating independent variables. So for example, let's say that no intervention is not effective, but as we introduce story mapping, you can see behaviors
getting a little better and subsequently no
intervention is also improving. Think about that learning effect, right? Just because we're learning
doesn't mean we're gonna reverse when the teaching is removed. So we have to be careful of
things like multiple treatment interference, carryover. But alternating treatment designs or multi element designs are
effective at rapidly trying to identify what intervention
may be most effective. So our interpretation,
story mapping appears to be slightly more effective
than no intervention, but there might be some carryover. You would probably say yes,
story mapping is more effective, but only just slightly. And then the changing criterion design, probably the least used type of design. It can be a difficult
design because you have to have the skill already
in the repertoire. And typically we think about
teaching skills, right? Or decreasing behavior. Here you've already gotta have
whatever you're looking at in the repertoire. So after baseline, treatment
is delivered in a series of ascending or descending phases meant to increase or decrease a behavior already in the learner's repertoire. Now there's some rules here,
the length of the phase. So our phases you can see are here. These are individual phases, okay? Each phase should be long enough to achieve stable responding. We want some steady responding. How big should the criterion change be? It can vary to demonstrate
experimental control. Meaning sometimes we
go, let's say 11 to 20. Here they jump, let's say 20 to 30. This, they go almost from 30 to 45. So the criterion changes. And when we say criterion
changes, look at the lines. Those are our criteria. They're changing, but not consistently. That's how we're gonna
start to demonstrate experimental control 'cause ideally, behavior
needs to sit right along each criterion line. What you don't want to see, and it's a little counterintuitive, but you don't wanna see
us change the criterion and then behavior shoot up 'cause now the criterion is
not controlling the behavior. Now the more criterion changes you make, and with each one where
the behavior sticks to that criterion change, the better. So some advantages: only one
target behavior required. You do not have to reverse it. Disadvantages, target behavior must be in learner's repertoire. It is not that appropriate for shaping. And you might hold back your
learner sometimes depending on what you're trying to achieve. Because again, it's a
little counterintuitive. If I set my criterion here, I don't wanna see my behavior way up here 'cause I want it to be
around my criterion. That's how I'm going to show
I'm controlling the behavior. So this interpretation,
if we look at minutes of exercise on each day, exercise increased with each criterion change. So functional control
does seem to be evident as behavior was consistent
with the criterion changes. D-8, identify rationales for conducting comparative, component, and parametric analyses. Three common analyses that you're going to be using day in and day out. The first, the comparative analysis: you're comparing two, or more, different types of treatment. So this might look like, if I draw it, A versus B, right? Component analysis,
you're analyzing what part of a treatment package
is impacting behavior. What is the difference
between a comparative and a component? With a comparative, these
are not part of a package. Component analysis, they are. So you've gotta remember that. These are part of a package; here they're not. If we conduct a dropout analysis, the entire treatment package is presented and then we remove them systematically. So we present A and B, and then we just do A. With an add in, we would
analyze A, we would analyze B, and then we would introduce
the whole treatment. The major difference between comparative and component: a comparative analysis is not a treatment package, a component analysis is. And then a parametric analysis: you are simply looking at an intervention and asking how much. How much is appropriate? 1, 2, 5, how much should I use? It's our measure of dosage. Okay, so that's gonna end part one. Be sure to head to part
two, where we're going to continue on with the task list. We're going to make, like we did last time, one whole video for ethics alone. So be on the lookout for that. As always, subscribe
for all of our updates. We do three BCBA videos a week. You don't wanna miss that content. Leave a comment below if
you have any questions. Let us know when you pass. Work hard, study hard. See you soon.