[MUSIC PLAYING] DR. WAYNE FUQUA: Welcome, Brian. Would you be so kind as to introduce yourself to the viewers? DR. BRIAN IWATA: Sure. My name is Brian Iwata, and I'm a professor of psychology and psychiatry at the University of Florida. WAYNE: Great. Brian is a nationally known expert in some really important methodologies in behavior analysis, applied behavior analysis, that are oftentimes referred to as Functional Behavior Assessment and Functional Behavior Analysis. I was going to ask Brian if he would describe to the practitioners this range of technologies and methodologies, so they can understand how that might be applied and relevant to their practice. BRIAN: Sure. Research conducted over maybe the past 20 to 30 years has shown that in large part most problem behavior is learned. That is it's acquired as a result of experience with the individual's environment. So for instance, as this slide shows, a great deal of research has indicated that most problem behavior is maintained by fairly straightforward contingencies. For instance, in a number of cases, problem behavior may be maintained by inadvertent social positive reinforcement delivered by parents, teachers, and so forth. In other cases, problem behavior may be maintained by the opposite, and that is social negative reinforcement. That is escape from task demands. And finally, it's possible that problem behavior could be maintained by automatic reinforcement. That is the behavior produces its own sensory reinforcers. So that's sort of like the general account of the functions of problem behavior. And the purpose of a functional behavioral assessment is to identify which of those is responsible for maintaining problem behavior at this time. WAYNE: Brian, can you describe a little about the conditions under which you should do a functional behavior assessment or behavior analysis? BRIAN: Well, that turns out to be somewhat of a judgment call. This general process does take some time, and occasionally it may actually pose some risk, because you have to see some problem behavior. So ordinarily one would consider doing this if the problem behavior is fairly dangerous, if it's extremely disruptive to the point where it would require a formal behavior intervention plan. If it's just a matter of being slightly annoying, then I don't think we would recommend typically doing a functional behavioral assessment under those conditions. WAYNE: So it sounds like as the danger or severity of the problem behavior increases, the need to do a functional behavior assessment or analysis sounds like it increases also. Is that correct? BRIAN: Yes. It's kind of a cost-benefit sort of thing. That is before implementing an effortful program that will require a good deal of staff training, it's good to know that you're implementing a correct program. And so under those conditions we would consider doing a functional analysis. WAYNE: OK. Great. Can you describe the range of techniques that are used in a functional behavior assessment, Brian? BRIAN: Sure. There have actually been many developed. Most of them can be subsumed under three general categories. First we have what are known as indirect or verbal report assessments. They are called that because there is no direct observation. And in fact, clinicians simply ask questions or administer a rating scale. And the respondent replies by trying to indicate the circumstances under which a problem behavior may occur. 
From there we move to what are known as descriptive analyses, and those do involve direct observation, usually of problem behavior and the circumstances under which it occurs in the natural environment. And of course, the third approach is known as the functional or the experimental analysis, and that involves controlled exposure to suspected variables that may maintain problem behavior. WAYNE: So it sounds like there are three major methodologies. Can you talk about each of them in a little more detail in terms of how you implement them, what their strengths and weaknesses might happen to be? And let's start with what sounds like the simplest, and that would be the verbal assessments or interviews or questionnaires. BRIAN: Sure. The rating scale, as you know, has a long tradition in not just our field but in many fields of psychology. And beginning in, I guess, the late 1980s, people developed a number of rating scales focusing on conditions under which problem behavior may occur. And so the typical rating scale consists of a number of questions that ask about these conditions. And depending upon the respondent's answers, the general responses may suggest that problem behavior could be maintained by, let's say, positive or negative reinforcement. And I suspect that maybe about a dozen of these scales have been published at the current time. Some good examples are the Motivation Assessment Scale published by Durand and Crimmins. It's a 16-item scale. The second one would be the Questions About Behavioral Function published by Johnny Matson and Tim Vollmer. And the third, probably the most recently published scale, is the Functional Analysis Screening Tool, which we published last year. To give you an idea of what one of these scales looks like, here is an example of the Functional Analysis Screening Tool. As you can see, it is simply a one-page questionnaire. And on the left-hand side, a series of questions are posed about the nature of the problem behavior, the respondent who is providing information, and general open-ended questions about when behavior may be more or less likely to occur. Over on the right-hand side, there are 16 questions aimed at specific environmental conditions under which problem behavior may occur at high or low rates. And at the bottom of the right-hand side, there is a little summary where one could sort of organize all of the respondent's answers. And most indirect rating scales are pretty much designed that way. Now, as you can see, the obvious strengths of these scales are that, first of all, they're fairly simple to implement. They don't require a great deal of skill. And the second is that they're rather efficient. They're very quick. Probably the most involved scales require maybe 30 minutes to complete. But typically these can be done in about 10 to 15 minutes. WAYNE: And are they done by a parent or by a trained observer? Who would be filling these out, Brian, typically? BRIAN: Well, typically, it would be the clinician or therapist administering the scale, usually to someone who knows the client-- the person who engages in the problem behavior-- well, such as a parent, both parents, teachers, therapists, and so forth. So those would be the primary respondents. WAYNE: And do they typically provide pretty accurate reports? They aren't trained behavioral observers, but they certainly know the clients well. How accurate are the reports on these indirect measures of this nature? BRIAN: Well, that's the problem.
Although the scales are easy to use, and they're efficient, the major problem is that all of the information is subjective in nature. So for instance, if we were to take one of the questions off one of these scales-- does behavior typically occur under these circumstances? It's an easy question to ask. It seems reasonable. Well, what does it require to answer? You have to recall the last however many times you observed the individual under those circumstances, and then, out of those times, on what proportion of them did the person engage in the problem behavior? So for each question you need to basically have access to a great deal of historical data, and then you need to be able to make probabilistic statements about whether the behavior was likely or unlikely to occur. And that's repeated over and over again for just about every question. No one has access to any of that information, and as a result, people simply guess. And just as you guess, I will guess. And when we compare our answers, it's unlikely we will have the same guesses. So the big problem with these scales is that due to their subjective nature, they have historically poor reliability. And of course, if they have poor reliability, then they have questionable validity. Because who's correct? And as a result, the field has generally come to the conclusion that the use of these scales would be inadequate for the purpose of conducting a functional behavioral assessment. WAYNE: OK. What are the alternatives to that? How do we move up the level of methodological rigor and, hopefully, to things that have greater treatment validity? BRIAN: Great question. Well, if the indirect approaches have poor reliability because they don't involve any direct observation, then of course the next option would be to conduct some direct observation. And so the second approach-- the descriptive analysis-- has a long tradition in our field. It involves objective, direct observations of behavior out in the environment where problem behavior occurs. And of course, we're not just interested in looking at the problem behavior, but we're actually interested in using problem behavior as the index to see what goes on around it. What antecedent events precede behavior, and what sort of consequent events follow behavior? So for instance, as an example, this slide shows the prototypical A-B-C, or Antecedent-Behavior-Consequence, form the way it has existed in our field for about the past 50 years. And if you notice, it consists of a bunch of rows and columns. What one does is to first identify the target behavior, and then when an instance of the target behavior is observed, one quickly notes the event that preceded the problem behavior, the antecedent, and then the event that followed the problem behavior, the consequent. And one can do that for a series of episodes of problem behavior and then organize the information and try to group the antecedent and the consequent events in such a way that you can formulate a hypothesis about what maintains problem behavior. Now, this A-B-C form has undergone a number of variations throughout the years, and here is a more current version of it. It involves a series of boxes. So that instead of being an open-ended form, it's more closed-ended. And what the observer simply does is to check boxes to indicate the type of event that was occurring, the general context, the event that preceded behavior, or the event that followed behavior, and so forth.
And then one can summarize the information by actually organizing the antecedent and the consequent events in a series of columns. WAYNE: And is this something a trained observer would typically do-- a BCBA, a Board Certified Behavior Analyst? Or could it be done by anybody? Is it simple enough to train teachers? BRIAN: Well, in theory, it could be done by anyone. But whoever does it will need to have some formal training. For instance, in being able to define the target behaviors operationally, so that they will be able to tell instances from non-instances. Not only defining problem behaviors, but also defining legitimate antecedent and consequent events that precede and follow problem behavior. And then they will need some practice actually conducting these observations and some checking with another observer to make sure that they are collecting data reliably. So in theory it could be done by anyone, but the person will need some formal training. WAYNE: Brian, you've described a couple of ways that people do an A-B-C analysis-- one pretty much open-ended, the other a little more structured. What do you do with the information? How do you organize the information from a descriptive analysis of this nature? BRIAN: Well, unfortunately, not a lot has been published about how one would organize antecedent and consequent events, and what sort of organizations might imply certain functions of problem behavior. But if we consider that problem behavior is maintained by certain types of consequences, then the problem behavior is likely to be preceded by certain kinds of antecedent events. And so just generally speaking, one could sort of put together a list of probable antecedents and consequences relevant to particular behavioral functions. Now, as this slide shows, if problem behavior is maintained by access to social positive reinforcement, which is typically going to be attention delivered by a parent or a teacher, or perhaps access to leisure items, edibles, or something like that, then the relevant antecedent event that someone is likely to observe is deprivation, or no access to whatever that particular reinforcer is. And so we might observe that an individual is not receiving attention, or someone was delivering attention and had just walked away. Or that a request for attention or access to tangible items has been denied, or possibly nothing happening at all, because that also involves deprivation from positive reinforcement. And just as these antecedent events should precede behavior maintained by social positive reinforcement, delivery of the relevant reinforcers should follow behavior. That is, problem behavior should be followed by the delivery of attention, access to tangible items, those sorts of things. Now, interestingly, if problem behavior is maintained by social negative reinforcement, we would see almost the opposite. That is, if problem behavior is maintained by the termination of ongoing events, then the antecedent event should be the occurrence of these events. Typically it would be something that most people would subjectively describe as aversive, such as a work requirement, perhaps even a requirement for social interaction, or even something very specific, such as a particular provocation. So these sorts of things would be the antecedent events preceding behavior. And of course, if behavior is maintained by negative reinforcement, then the consequent events should be the removal of these things. And then finally we have problem behavior that might be maintained by automatic reinforcement.
That is, quote, "self-stimulatory" behavior. And unfortunately, because behavior is not sensitive to the social environment in this particular case, stereotypic or self-stimulatory behavior could be preceded by literally anything or nothing, and it could be followed by anything or nothing. The anything or nothing being irrelevant to the occurrence of the behavior. And so that's the sort of typical way that people would organize these antecedent and consequent events. WAYNE: Now, when you use a recording form of that nature, you typically see multiple occurrences of behavior. Do you usually see a real consistent pattern, Brian, from instance to instance? Or do you see kind of a mixed message that comes through, with sometimes one antecedent or consequent being prominent and in other situations a slight variation on that? BRIAN: Well, that turns out to be one of the limitations of this particular approach. The descriptive analysis was basically designed to answer the question-- what is happening? So it provides a structural view of environment and behavior. And by structural I mean what is going on in the situation. What it doesn't tell you is why anything is going on. And in most typical situations, we have behaviors occurring, usually not just the target behavior but other behaviors, and these behaviors are usually preceded by a multiplicity of antecedent events, some of which are influential. That is, they may actually influence whether behavior occurs, but most of them are somewhat incidental. They just happen to be going on. And the same thing, perhaps, is true of consequent events. Something is delivered as a consequence, and occasionally there may be an influential consequence, but quite often they may be incidental. And so one of the difficulties with conducting a descriptive analysis is that we will get a lot of information, but it's hard to figure out how to determine which events are relevant and which are irrelevant. And in fact, more recent research is showing that, in particular, descriptive analyses of severe problem behavior may be biased. So for instance, suppose you have a child in a classroom banging his or her head on the table. Or another child aggressing against a victim. Or a third child throwing chairs out the window. You are not going to see a teacher or a parent doing nothing. They will all stop those behaviors, because they simply have to. When conducting a descriptive analysis, you must score those as the occurrence of a social interaction, which will be attention. And so just in recent years we are beginning to find that the more severe the problem behavior, the more likely it will look as though it's maintained by attention in the context of a descriptive analysis, because somebody must respond. Now, again, that's purely a descriptive view of things. We don't know whether that response is actually a functional reinforcer. WAYNE: But in theory, those responses from the teachers or the parents, those were part of the ongoing ecology of that behavior. So you still should allow those people to engage in those typical management behaviors. Is that correct? BRIAN: Of course. Because the entire logic of a descriptive analysis is to capture what happens naturally. And then based on the probabilities of events that you see, you then weigh them, so to speak, and come to a hypothesis about what is maintaining problem behavior.
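(To make the idea of organizing and weighing A-B-C data concrete, here is a minimal sketch, in Python, of how records from a closed-ended checklist form might be tallied. The category labels and example records are hypothetical rather than taken from the interview; the point is only that each episode of the target behavior contributes one antecedent and one consequence, and the relative frequencies suggest, but do not confirm, a hypothesis about function.)

```python
from collections import Counter

# Hypothetical A-B-C records from a closed-ended checklist form. Each record
# is one observed episode of the target behavior, together with the antecedent
# and consequence categories that were checked for that episode.
abc_records = [
    {"antecedent": "attention diverted", "consequence": "reprimand / attention"},
    {"antecedent": "task demand",        "consequence": "task removed"},
    {"antecedent": "attention diverted", "consequence": "reprimand / attention"},
    {"antecedent": "nothing happening",  "consequence": "reprimand / attention"},
    {"antecedent": "task demand",        "consequence": "task removed"},
]

total = len(abc_records)
antecedent_counts = Counter(r["antecedent"] for r in abc_records)
consequence_counts = Counter(r["consequence"] for r in abc_records)

print("Antecedents preceding the target behavior:")
for event, count in antecedent_counts.most_common():
    print(f"  {event}: {count}/{total} episodes ({count / total:.0%})")

print("Consequences following the target behavior:")
for event, count in consequence_counts.most_common():
    print(f"  {event}: {count}/{total} episodes ({count / total:.0%})")
```

(As the discussion above cautions, such a tally only shows what is happening around the behavior; a frequently scored consequence, attention in particular, may be incidental, or an artifact of caregivers having to respond, rather than a functional reinforcer.)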
WAYNE: For those that want to use a descriptive analysis of some sort, Brian, are there any special bits of advice to reduce the bias or to make it more naturalistic? Do you need to do surreptitious recording? Do you need to arrange for recording when the behavior is highly likely to occur? Any bits of advice for how to maximize the benefit of a descriptive assessment? BRIAN: Well, although descriptive analyses have a very long tradition in our field, they are most appropriately used when asking structural questions. That is, when asking what is happening. And so if the purpose of a descriptive analysis is to define the target behavior more clearly, or to identify its general context, then that's its proper use. In terms of using the descriptive analysis to identify function-- although these data are fairly recent in nature, they are clearly indicating that the descriptive analysis should not be used in an attempt to identify function. Because all you will get is an answer to the question-- what's happening? And in fact-- this may seem somewhat ironic and counterintuitive-- the indirect approach, which most people denigrate because it involves subjective data, at least focuses on function. So the questions being asked are-- under what conditions is the behavior likely or unlikely to occur? What is your general view about the function of the problem behavior? And the difficulty is that all the information is subjective. The descriptive analysis, by contrast, is very objective, but it just gives you a picture of what's happening without really being able to identify the functional characteristics of what's happening. WAYNE: So all sorts of irrelevant things might occur as antecedents or consequences but not be functionally related to the behavior. BRIAN: Exactly. WAYNE: So are there other limitations you want to mention for descriptive assessments, Brian? BRIAN: Well, another limitation that people often don't consider is the fact that descriptive analyses take much longer than most people might think. I have a suspicion that some practitioners out there have the opinion that-- let's say they receive a request for a consultation. They go visit a student in the classroom. They observe that student for maybe 15 or 30 minutes, and that's a descriptive analysis. Well, it is, but it's really not a descriptive analysis that anyone would consider to be adequate. If you consider the general logic of a descriptive analysis-- what is happening?-- it requires observing behavior under a wide range of conditions. Which means we can't simply observe behavior for 15 or 30 minutes, because we will only capture one context. So now we have to ask, what are the other contexts in which behavior might occur? So we need to sample maybe three or four of those. And then, of course, the question is-- did we see a representative sample of what we would like to see in another situation? And so we have to repeat those. So if you have three or four different contexts, each of which is being observed for 15 or 30 minutes, you would then have to repeat that several times. The recommended descriptive analysis, in order to provide a thorough set of results, is somewhere between four and six hours. And most people do not understand that that's about how long it takes.
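(As a rough check on that four-to-six-hour estimate, here is a small back-of-the-envelope calculation. The specific numbers of contexts, observation length, and repetitions are illustrative assumptions consistent with the figures mentioned above, not prescriptions.)

```python
# Illustrative time budget for a descriptive analysis, using figures consistent
# with those mentioned above; the exact values are assumptions for the example.
contexts = 4          # distinct situations in which behavior is sampled
minutes_per_obs = 30  # length of each observation
repetitions = 3       # repeated observations per context, for representativeness

total_minutes = contexts * minutes_per_obs * repetitions
print(f"{total_minutes} minutes, about {total_minutes / 60:.1f} hours")  # 360 minutes, about 6.0 hours
```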
So aside from the fact that you need some training, the fact that the data really show a great deal of randomness, if not bias, and the fact that they take a long time, suggest that they're not really well-suited for answering functional questions about behavior. Great for answering structural questions but not functional questions. WAYNE: Interesting. So given those limitations of that and some of the limitations of the indirect report methods, what other alternatives should we be looking at? BRIAN: Well, the third approach that has evolved over the years and has come to be the gold standard is known as the functional analysis-- not functional behavioral assessment but functional analysis, which some people call the experimental analysis. And the logic of the functional analysis is to take the environment the way it presents itself and begin to segregate that environment into little separate bins. And so it doesn't really involve presenting anything new, it just involves sort of a reductive view of the complicated environment. And the way that the environment is organized in a functional analysis is according to the kinds of conditions that have been shown repeatedly in research to maintain problem behavior. So we simply take the complicated environment-- the part of it that might influence behavior maintained by, let's say, attention or social positive reinforcement-- and we isolate that part of the environment. And we expose the individual to that part and see what happens. And then expose the individual to another part and see what happens. And so we're not faced with a multiplicity of antecedent and consequent events. They remain constant, and we simply see in which environment more or less behavior occurs. WAYNE: Well, it sounds like the functional analysis or experimental analysis is really the gold standard. Could you describe in more detail how one actually goes about doing a functional analysis? BRIAN: Sure. Well, the general requirement is the use of the experimental model to answer questions about function. And all that's required is basically isolating some environmental event in a condition, exposing behavior to it, and seeing what happens. Now, of course, if we were to expose behavior to something and it occurs, that might lead us to conclude that that's influencing behavior. But of course, the problem is that we might have seen just as much behavior if something else was happening. And so most functional analyses include at least two conditions. One that contains a variable that we suspect might influence behavior, and the other removes that variable. And if we see differences between when the variable is present and when it's absent, then we conclude that that influences behavior. And if we don't see any difference, then obviously we've picked the wrong variable. Now, a problem behavior may have multiple functions, and so a number of conditions have been designed to sort of isolate these more common functions. Now, in this slide what I've listed are some of the more common test conditions, as we call them, and control conditions. And each condition is correlated with a particular signal. In behavioral parlance we call those discriminative stimuli. And what they do is facilitate discrimination of something in particular happening in a situation. Now, they also include what we call an establishing operation. That is, an antecedent event that's designed to make a particular reinforcer valuable.
And then they also involve the delivery of that particular reinforcer. So for instance, one condition that we might call the attention condition, which is conducted in setting one by therapist one or what have you, is the test condition for the effects of social positive reinforcement, which means the antecedent event will be the absence of social positive reinforcement. WAYNE: Do you call that an establishing operation or a motivational operation, one of those two essentially? BRIAN: Yes. And so essentially we have a therapist present who is explicitly ignoring the individual, not delivering attention. And the only time that attention will be delivered is as a consequence for problem behavior. That therapist may issue a mild reprimand, a statement of concern, brief, comforting physical contact, and then would go back to the antecedent event, that is, the unavailability of attention. And that kind of condition was shown early on by Lovaas and colleagues in a series of studies to bring out problem behavior maintained by attention. Now, another test condition might examine the influence of social negative reinforcement-- that is, escape. And of course, that would be conducted in a different setting by a different therapist. And the antecedent event in that condition would be the presence of something. Something that people might call an everyday aversive event, such as a requirement to conduct certain kinds of work, an instructional demand, so to speak. And so the therapist would present those, and if the individual engages in the problem behavior, those are terminated for a brief period of time. And so what we have is contingent escape for the occurrence of problem behavior. And the influence of that kind of condition was illustrated in a series of studies conducted by Ted Carr many years ago. Now, the third condition is sort of an unusual condition. It's the test condition for behavior maintained by automatic reinforcement, or as we call it, self-stimulatory behavior. Now, that behavior is behavior that produces its own reinforcers. And so in a way it's insensitive to the social environment around it. Which, of course, means we can't actually bring that behavior out by doing anything, because it simply occurs when nothing is going on. And so the attempt in that condition is not to actually demonstrate the effects of automatic reinforcement, but to remove the effects of social reinforcement. And so we observe behavior when absolutely nothing is happening. If it continues to occur, then it's unlikely that it would have been maintained by, let's say, contingent attention, because no attention was ever available. It's unlikely to have been maintained by escape from task demands, because those aren't available either. To the extent that it's a learned behavior, it probably was maintained by its own reinforcers. And so that's sort of like a rule-out test condition. And the final condition one might consider would be what one calls the control condition. And in that condition we try to eliminate all the possible influences that might produce problem behavior. So unlike in the attention condition where attention is unavailable, it's freely available in that play condition. Unlike in the demand condition where a therapist is presenting challenging work requirements, there are no work requirements. And unlike in the alone condition where the individual is somewhat deprived of sensory stimulation, there are lots of neat things to do.
And so one would expect to see the lowest rates of problem behavior in that condition relative to all others. And that's sort of like a general model for conducting a functional analysis, if one has no idea what the function of problem behavior is to begin with. WAYNE: So there are four different test or challenge conditions there, Brian. Any special advice as to how you orchestrate the specifics of those test conditions? Because you have to pick out something from the environment that you think is relevant for the escape condition or something of that nature. Any special advice in terms of how you select and actually orchestrate those conditions? BRIAN: Well, the attention condition, or the test for social positive reinforcement, is relatively straightforward. We're not really sure what kind of attention from an adult or caretaker might serve as reinforcement for a problem behavior. And so typically we deliver several different kinds. As I've indicated, it might be a mild reprimand, a statement of concern, some comforting physical contact. And we're not sure which of those might be the reinforcer for problem behavior, but those are the most common things that we see following problem behavior. So we just deliver them. WAYNE: And would you try to emphasize those that you observed in the environment specifically or not? BRIAN: Yes-- if we have. WAYNE: So you try for some ecological validity to the degree possible, it sounds like. BRIAN: Right. Now, it's possible that problem behavior may be maintained by a particular reinforcer, such as access to a favorite toy, edibles, and things like that. And we would usually only test for that if the attention test came up negative. So if behavior is maintained by attention, then anything correlated with the delivery of attention is likely to produce problem behavior also. And so if we come up with uniformly negative results of a functional analysis, then we might suspect there is something idiosyncratic about the form of social positive reinforcement. Now, in terms of the test condition for social negative reinforcement, the demand condition, what we typically would do would be to consult with teachers. And to try to identify relatively challenging tasks for which there is a generally low probability of compliance. Because these will, by definition, be the more effortful tasks, and these are the ones we would like to see presented. WAYNE: Well, it sounds very reasonable. After you've selected your test conditions and orchestrated the stimuli, tell me more about how you actually conduct the functional or experimental analysis in terms of the sequencing, how long you expose people to different conditions, what sort of data you collect, and how you interpret the data, Brian? BRIAN: Well, that's a very good question. It turns out that if the only requirement is an experimental comparison, there are numerous ways to conduct it. And so over the years people have made somewhat arbitrary decisions in how they've sequenced things and how they've defined sessions. But in general, things have gotten refined over the years. And so at the current time, more often than not, a session-- that is, one exposure to a particular condition-- might last, let's say, 10 minutes. And 10 minutes has been shown to be long enough so that if the establishing operation influences behavior-- that is, deprivation from attention or repeated requirements to perform work-- we are likely to see behavior.
And it allows behavior to contact its consequences enough so that it would reliably occur during that session. So 10 minutes turns out to be a good estimate for how long a session will be conducted. And then one will conduct repeated exposures to these. Now, how many exposures is up to someone to decide. We typically have a sort of informal requirement of three, which is just based on the stupid rule that you need two data points to draw a line, and then the third one confirms. And so typically our functional analyses will consist of three exposures to any set of conditions that we are including. And if the data look fairly clear at that point, we're finished. And that could usually be easily done within a day. WAYNE: That sounds very reasonable. And do you typically run these sessions right after each other, or do you do anything special to help the participants discriminate different session conditions? BRIAN: Well, I'll take the second one first. The discrimination-- it used to be many years ago that we would actually have different rooms in which we conducted these sessions. And each room was painted a different color. Now, of course, we don't have access to that anymore, and most clinicians don't either. And we also don't have different therapists that we can keep switching in and out. But one thing we found that works pretty well is different colored t-shirts. And so independent of who's conducting the session, we select a particular t-shirt color for, let's say, the attention condition, and a different t-shirt color for the demand condition. And that may seem fairly hokey, but it is a reliable cue that's correlated with the particular condition. And we find that that facilitates discrimination. WAYNE: So the question is, do you run most of these sessions in contiguity right after each other? Do you put breaks in? Do you try to run all of your comparison conditions in one day, or do you spread them across days, Brian? BRIAN: Right. Well, the purpose of assessment is to answer some questions quickly, so that you can get on to treatment. And so we know that time is somewhat of the essence here. So ideally what one would do is compress the assessment over the briefest amount of time possible. Now, then you have the problem of running one session after another of a different condition. And then you would have perhaps sequence or carryover effects. And so what we typically do is to conduct a session. And following each session there would be a break. So that wherever that session is being conducted, the person will be removed from that room or that setting for a few minutes, will be given a drink of water or a bathroom break or something like that, and then will be brought in for the next session. Now, the only real requirement is that we don't begin the next session with problem behavior occurring, because that suggests that whatever is influencing it is basically something that had gone on previously. And so usually we have a five-minute break between sessions, as long as problem behavior is not occurring, and we just keep on going until we're finished. And under those conditions, usually you can get done in under a day. WAYNE: That sounds reasonable. Brian, you've described the different test conditions, each of which typically has some sort of antecedent preparation and some consequences. Do you have many challenge or test conditions that you build that have different sorts of test variables in them? Some sort of antecedent things that may not have a linked consequence potentially.
Do you have any other suggestions in terms of the things that people might look at in their test conditions? BRIAN: Well, occasionally yes. The test conditions that I've described are those that are probably the most common in terms of variables that maintain problem behavior. But of course, behavior is also subject to idiosyncratic influences. And so it may be the case that there is a particular kind of attention, or attention delivered only by a particular person. And under those conditions, you might not see very clear results in a functional analysis. And so then you would start to suspect that behavior is being influenced by something idiosyncratic. And under those conditions, a descriptive analysis may actually help. Because, of course, that asks the question-- what's happening? And so if you don't see good, clear results, then you might spend some time conducting a brief descriptive analysis. Now, interestingly, the purpose of the descriptive analysis would not be to answer the question-- what's going on most often? But what sort of unusual things are happening that I might see occasionally, which you can then turn into an operational variable and present repeatedly in a functional analysis, and the behavior should emerge relatively quickly. Now, what some of those things are-- it just sort of depends. Like I already mentioned, certain kinds of attention, attention from particular people. Certain types of task demands may be more effortful than others, such as novel demands versus demands that have been presented for a period of time. Or length of the instructional session may be another one. Some individuals can work for relatively short periods of time but can't work for long periods of time. And so one may need to sort of adjust session length, things like that. WAYNE: OK. Well, it sounds like a pretty straightforward methodology. What do you collect in terms of data? And what do you look at to interpret those data, Brian, after you've done an experimental or functional analysis? BRIAN: Well, each session will result in some quantitative summary about what's happened in that session. And this slide illustrates some examples of outcomes from functional analyses. So the slide shows three graphs going from top to bottom. And let me sort of orient the people who are watching to what these data actually show. Each data point represents a session, and the session would be, let's say, a 10-minute session-- or, back when we collected these data, the sessions were actually 15 minutes in length. So each data point represents a 15-minute session. The sessions are sequenced from first, over on the left, to last, over on the right. And each data point represents the mean number of responses-- that is, problem behaviors per minute-- across the 10 to 15 minutes of the session. Now, what I did was I color-coded the data points to illustrate the different conditions. So the data shown in red represent the attention condition. The data points shown in black represent the alone condition. Blue is the demand condition, and green is the play condition, which is the control. So in the graph at the top, there is clearly a higher rate of problem behavior in the attention condition. So we would say that this problem behavior is maintained by social positive reinforcement in the form of attention. Now, with the same graphing conventions, the graph in the middle simply shows a different outcome.
The highest rates of behavior are occurring in the demand condition. So we would say that the function of this behavior is social negative reinforcement, that is, escape from task demands. And the data on the bottom graph show yet a different outcome. The highest rate of problem behavior is occurring in the alone condition. So we would say that that behavior is maintained by automatic reinforcement, or it is quote "self-stimulatory" behavior. Now, I selected these three graphs obviously because they show relatively clear results. But another interesting feature of the data, which is not shown in the graphs per se, is that these three sets of data all show exactly the same response. So structurally we're looking at the identical behavior of head banging. So to ask the question-- what's the best treatment for head banging?-- doesn't make a whole lot of sense. Because here we have head banging that will require three different interventions based on its source of reinforcement. WAYNE: Those data look like they're pretty clear to me, Brian. When you do an experimental or functional analysis, do you typically get displays that have that much response differentiation? Or do you sometimes get cloudier pictures? And what do you do with the cloudier pictures, Brian? BRIAN: Well, usually we do get relatively clear results. And I believe the reason is because we pay a great deal of attention to the use of these correlated stimuli. That is, discriminative stimuli to facilitate discrimination. And we do a good job also of controlling the establishing operations, that is, access to attention, delivery of task demands, and things like that. Now, occasionally one will get unclear results for perhaps a number of different reasons. One possibility is that the individual simply doesn't discriminate the conditions very well. And one would see that when you observe results that are basically showing data crisscrossing all over the place. So probably the interpretation there is poor discrimination. And the solution there would be to simply conduct the assessment exposing the individual to only one condition at a time rather than to all conditions concurrently. Because that further facilitates discrimination. Another possibility is that the sessions simply do not allow long enough exposure to the contingencies that maintain behavior. So you may need to increase session time. WAYNE: So for example, a demanding task may have to go on for a period of time before it functions to motivate escape behavior essentially. BRIAN: Exactly. WAYNE: OK. Very good. Is there also a potential, by the way, Brian, that automatic reinforcement would produce elevations in everything also? Is that one other feature that would lead to non-differentiation of responding? BRIAN: That's true. The graph I showed had a higher rate of problem behavior in the alone condition when nothing was happening. And that is a very clear interpretation: you're likely to get the most problem behavior in the presence of nothing. Behavior is correlated with deprivation. The other outcome that would lead to the same interpretation is problem behavior occurring at relatively high rates in all conditions. And it simply shows that the behavior is completely insensitive to the environment and nothing competes with the reinforcer that's produced directly by the response. So both of those patterns-- interestingly, they are very different patterns-- suggest the same outcome. WAYNE: Brian, it sounds like a pretty straightforward methodology.
Are there any limiting conditions or concerns you would like to alert people to? BRIAN: Yes. The field, having conducted these kinds of assessments for a number of years, has basically identified several conditions under which assessment may be problematic. And I'll just mention all of them briefly, and then go over each of them in a little bit of detail and describe some variations in assessment that have been developed to handle these limitations. The first one is complexity. That is, it does take some skill to be able to conduct these kinds of assessments. The second one is time. Now, that may not be an issue in most settings. But when assessments are being conducted in out-patient clinics-- for instance, when clients and their parents are going to be there for only an hour or two-- then it may be very difficult to conduct a functional analysis. The third one would be control over the setting. If you read most of the research on this approach to assessment, those studies are conducted under conditions, even clinical conditions, that somewhat resemble laboratory-like situations. We control what's going on in the environment. Which raises the question of whether you can really do this in the home. Or can you do this in the school? That is, the classroom where the behavior occurs. Then the next question or limiting condition, which turns out to be a fairly significant one, is risk. Supposedly we reserve these forms of assessment for behavior that is dangerous and potentially high-risk. Well, then how do we conduct an assessment of a behavior that we don't want to happen very much? So each of these has been identified as one limiting condition. WAYNE: Those sound like all very reasonable limiting conditions. Can you elaborate on each of those, Brian, and offer people advice as to what to do with those limits? BRIAN: Sure. Let's start with the complexity issue. And the way that has gone is that some authors have written that if a client is referred for a problem behavior, you could use these indirect approaches, the questionnaires, or maybe take some observational data, the descriptive analysis. But when it comes time to do a functional analysis, you need to call in the expert. Because only experts have been trained in how to conduct functional analyses. And when you consider that, it sort of makes sense. This does require some technical skill. But if you look at it another way, that doesn't seem to be a very good argument. For instance, you can ask the question-- what does it take to conduct a functional analysis? And there is a long answer one can give or a short answer. And the short answer would be-- you need to be able to follow some instructions on how to deliver some antecedent events and some consequent events. And if you can't do that, then you can't conduct a functional analysis. Let's skip that. We'll call in the expert. Then the expert leaves some recommendations, and you now have to implement this intervention program. So what kind of skills are required to implement the intervention program? You need to follow some instructions on how to deliver antecedent events and consequent events. And so the point is that although learning to conduct a functional analysis does require some skill, it is no more skill than that required to implement any behavior intervention plan. And if you can't conduct a functional analysis, you probably can't implement any plan either. Well, that's sort of like the glib answer. And we typically require data in our field.
A series of studies has been conducted in which it has been shown that varied trainee populations-- ranging from undergraduates who have never had exposure to any individual with a disability, to teachers in various stages of training, and even to individuals who are not in the same city and are being trained by way of teleconferencing-- can acquire the skill to conduct a functional analysis at a level of a minimum of 90% accuracy in under two hours. Now, that's not to say that they could conduct a functional analysis, revise it several times, and develop a complicated plan. But to simply implement procedures, sort of the way I've described them-- you should be able to pick up that skill in under two hours of training. WAYNE: That sounds very promising in terms of exportability actually. BRIAN: Sure. Yes. Now, the next issue is time, and that turns out to be somewhat significant, particularly in out-patient clinics. Now, David Wacker up at the University of Iowa does most of his work in out-patient clinics. And a number of years ago he began to consider adopting functional analysis methodology. And he found that when he read the reports published in the literature, he really couldn't apply those assessment methods simply because he didn't have enough time. And so he developed what has come to be known as the brief functional analysis. Which was first reported in an article published by one of Dave's students, John Northup. Now, what is the brief functional analysis? It involves, as you might suspect, brief exposure to assessment conditions. So that a session, instead of lasting for 15 or 10 minutes, lasts for only five minutes. So it's sort of like a very brief exposure to a condition, and there's no replication. So it would be, let's say, one attention condition, one demand condition, one play, one alone. And if you have time, you might repeat one of these conditions. But results have shown that with that particular approach, even if you count all the setup time, the break time, the take-down time, it can be easily accomplished within the space of an hour. And so we have actually used these brief functional analyses to some extent in our own out-patient clinics. Now, let me show you a graph of what one of these brief functional analyses looks like. And as you can see, there are different data points representing each of the conditions, but there is only one data point per condition. And each data point represents only a five-minute session. So there is, first, one alone session, followed by an attention session, followed by a play session, and finally we have the demand session. And I have an arrow pointing to that one demand session, because this is what one is hoping to find. That is, an outlying data point in which problem behavior is higher during one condition than the others, implying that, let's say, in this case problem behavior is maintained by escape from task demands. Now, another one of Dave's students, Mark Derby, published a summary of 79 cases going through the University of Iowa out-patient clinics, and they showed that they were able to identify the function of problem behavior in right about 59% of the cases. Now, some people consider that and they go-- well, 50%, that's not that great. But the alternative is to do what? To conduct an interview and make a guess and say good luck.
And so these data tend to show that within the space of an hour you could get something that very closely resembles a functional analysis of problem behavior. WAYNE: And when you do the brief functional analysis, do you get results that are fairly similar to the results you get with a more extended functional analysis, Brian? BRIAN: Well, as I indicated, Mark Derby's data show that they were able to get interpretable data in about 50% of the cases. Whereas if you were to conduct, let's say, longer sessions and repeated measures of those sessions, we found that you can get interpretable results in well over 95% of the cases. And so there is a loss resulting from both the reduction in session number and in session duration. You're just simply not sampling behavior for a long enough period of time. WAYNE: Well, that sounds like a major time-saving methodology, doesn't it? BRIAN: It is. Now, there's another way to save time. And that is to not test for every possible function of problem behavior. Now, recall that we have these indirect rating scales that are unreliable. But if you were to conduct them with, let's say, one or two respondents, and you get some reasonable concordance, that is, correspondence between what the respondents are saying, then you might have some information ahead of time that behavior is suspected to have a particular function. And what you can then do, perhaps, is to combine the outcome of an indirect assessment with what we then call a single function test. So if we have the teacher and the parent, or two parents, or the teacher and the aide, suggesting that problem behavior is maintained by escape from task demands, then we will just run that one test condition, compare it to a control, and we've now cut the assessment time by 50%. So here are examples of several different single function tests. So again, I have three graphs in this slide. And the top graph shows results for an individual whose assessment consisted of the attention condition and the play condition. Now, very clearly there's a higher rate of problem behavior in the attention condition. What we can't say is whether this behavior is at all maintained by escape from task demands, because we didn't test that condition. We can say, however, that problem behavior is attention maintained, and we can develop an intervention based on that hypothesis. And if that intervention is effective, then we're done. And we only need to come back and repeat the assessment if the intervention is ineffective. Now, the second graph shows an individual whose assessment consisted of the demand and the play conditions. And there is a higher rate of problem behavior in the demand condition. We can say that this behavior is maintained by escape from task demands and move on to treatment. Now, the last graph shows something different. The top two graphs show the behavior of one individual exposed to two conditions-- a test and a control. The bottom graph shows results for two people, each exposed to one condition, and that's the alone condition. Now, as you can see, one person's problem behavior-- that is, client number one's-- maintains. Which suggests that that behavior is maintained by automatic reinforcement. We're on to treatment. So that's a positive test. The second person's problem behavior decreases and, in fact, basically disappears. We would say that the behavior extinguished. So whatever is responsible for maintaining the problem behavior is not in that condition.
Which would then imply that the behavior is maintained by either social positive or social negative reinforcement, implying that we need to go back and repeat some part of the assessment. But nevertheless, at the outset, that's another good way to cut the assessment time in half. WAYNE: So it sounds like targeted functional analyses are pretty darn efficient. BRIAN: Well, they haven't been done very often, simply because people tend to sort of repeat the old general model of this is the way to do things. But if you actually consider selective assessment-- assessment for particular aims, that is, efficiency and so forth-- then this kind of approach seems to make sense, and we're doing more of this. WAYNE: And the key element is that you have to test out the hypothesized function and have a control condition. BRIAN: Yes. WAYNE: OK. Good deal. In addition to that, are there any other time-saving advances that you would like people to know about? BRIAN: Well, those seem to be the two. That is, greatly abbreviate the assessment to the point where it looks like a brief functional analysis, or figure out how to dismiss or subtract parts of the assessment based on information that you may have from indirect approaches. WAYNE: How about some advice, Brian, in terms of doing functional analyses in specific settings? What should the practitioner know there? BRIAN: Well, in essence, research has shown that if you can simply control what goes on in the setting-- that is, in the home or the school-- it really doesn't matter, and you can conduct functional analyses the way I've been describing them. All you have to be able to do is to control extraneous variables. And so in many cases you simply could conduct a typical functional analysis. Now, that raises the question of-- what happens if you can't conduct a typical functional analysis? That is, if the setting is one that is inherently noisy, or rather one that changes very rapidly. What would you do then? And the answer to that is to try to greatly reduce the amount of time that you have to control things in the setting and then repeatedly probe over and over again to see whether or not you get consistency. And one method that has been developed for that purpose is known as the trial-based functional analysis. And the reason why it's simply called a trial-based functional analysis is that everything is so short that you really can't call it a session, so to speak. So the idea of a trial-based functional analysis is we will go in, we will take advantage of what's happening in that particular environment. We'll sample behavior very quickly. We will actually get a control sample and a test sample. And then we'll just let whatever goes on in the environment happen. And then later on, when we have an opportunity again to take advantage of what's going on, we'll get another little trial and another little trial. So we can simply conduct a number of different kinds of trials over a period of time, organize them in one big pile, and see what proportion of trials produces the largest amount of problem behavior. Now, this slide sort of summarizes a trial-based functional analysis. So let's say, again, that we want to determine if problem behavior is maintained by attention. Well, we would like to first observe the fact that behavior is not occurring. So we know that under certain conditions it won't happen. And the best way to not see attention-maintained behavior is simply to have someone stand there delivering a lot of attention for free.
And so anytime there is a situation in which a teacher is present, we can have the teacher engaging in friendly conversation with the student. And we just do that for a minute or so, and we count-- does problem behavior occur, or does it not? And of course, it shouldn't. What we then do is abruptly have the teacher turn away and maybe attend to someone else. And now the establishing operation is in effect to perhaps make attention more valuable. And the question is-- did that transition from free attention to no attention occasion problem behavior? And so we would simply run that pair over and over again a number of times and see whether or not we're getting problem behavior when we go from the lots-of-attention part to the no-attention part. Well, then we can run some trials in an attempt to identify if problem behavior is maintained by task demands. And of course, we want to start off showing that you don't get escape behavior. And the way to not get escape behavior is to not present any work. So we can simply start off taking advantage of the fact that work is not being required right now, observe the student for a minute, and see no problem behavior. And then, again, have the teacher abruptly walk up and start to present difficult learning trials and see if that change from no work to effortful work produces problem behavior. And we can repeat those trials over and over again until we get a good number of them. And then finally, if we suspect that problem behavior might be maintained by automatic reinforcement, then perhaps we can find a time of the day where simply nothing is happening. And we can observe that student for several minutes and just record whether or not problem behavior is occurring. Now, this sort of assessment is not designed to be time-saving. But of course, that's not the problem here. The problem here is setting control. And this assessment has been specifically designed for classrooms, where setting control is very difficult. And so the therapist who consults with the teacher could conduct these trials over a period of several days. And then simply look at those trials and determine what types of trials typically occasion more problem behavior. WAYNE: And is your dependent variable in this typically the percent of trials with problem behavior? BRIAN: Yes. That would simply be it. Because there are no sessions, you're just conducting a number of trials. It's what proportion of trials yielded problem behavior. WAYNE: OK. It sounds pretty straightforward. What else should we be looking at in terms of limiting conditions and concerns? BRIAN: Well, the next one is risk. And that turns out to be a fairly significant one, because sometimes problem behavior is likely to produce a great deal of damage. Now, there are several things that have been recommended for reduction of risk. Of course, first of all, one should have conducted a sort of general assessment of the risk of the behavior to determine whether any of these procedures might be warranted. But for instance, several simple suggestions have been along these lines. One is to reinforce every occurrence of the problem behavior. Whenever we think about delivering consequences, sort of a corollary question is-- how often should we deliver consequences? And of course, there is reason to believe that in the real world, behavior is not always followed by a reinforcing consequence. And so we may be tempted to use sort of intermittent schedules of reinforcement in a functional analysis.
Now, that may mimic more closely what happens in the real world, but of course, intermittent schedules also produce higher rates of problem behavior. And all we want to see is a rate that's higher than when the contingency is not present at all. So you use continuous schedules of reinforcement. Next, you deliver consequences not only for the target behavior but for any reasonable approximation to it. Because what we're looking for is an increase in a topography, that is, the frequency of a topography. We're not looking for an increase in the intensity. So if you withhold consequences and don't deliver them for mild approximations, then you may see more target behaviors, but you also may see more severe target behaviors. WAYNE: So it sounds like you're hypothesizing kind of a response chain. BRIAN: Sort of. WAYNE: Can you give us an example so people will understand how you actually orchestrate that, Brian? Let's say a child who would typically show lots of verbal protest before engaging in self-injury or aggression. How would you take that into consideration so as to reduce risk? BRIAN: Well, what you might conduct are some observations of that student engaging in problem behavior. And of course, the tricky part is you need to watch them before they're engaging in problem behavior to look for any subtle cues that problem behavior may follow. And of course, we would call these precursor behaviors-- behaviors that precede and predict the occurrence of later behaviors. And if you observe any of these precursor behaviors, what you might do is actually conduct a functional analysis of the precursor behaviors. And if you deliver the consequences for mild forms of the behavior, or what are called precursor behaviors, then it's been shown in research, in particular by a study conducted a number of years ago by Smith and Churchill, that the so-called functions of the precursor behaviors typically match those of the more severe target behavior. And in addition to that, you tend to see fewer target behaviors than precursor behaviors, because the person didn't have to engage in the target behavior to get the consequence that maintains both of them. WAYNE: It sounds like a good strategy to lower risk. BRIAN: It is. Now, another possibility might be to use what are known as protective devices. So for instance, individuals who engage in self-injury might be fitted with equipment that doesn't restrict them from engaging in the behavior. So it's not restraint, but it's simply equipment that protects them from the consequences of their own behavior, like padded helmets for individuals who engage in head banging, or protective arm gear for people who engage in punching and things like that. And of course, then you've got the protective devices for the therapists in the case of aggression. WAYNE: That sounds good. BRIAN: Now, another possibility for reducing risk is simply trying to figure out how to reduce the frequency of problem behavior. And there is a way to do that that turns out to be somewhat interesting. It involves simply understanding something about some fundamental dimensions of behavior that we've known about for a long time but somehow haven't recognized very well. So when we look at behavior to see the effects of the environment on it, we expose someone to a changed environment, and we typically see a change in the frequency. Does it go up or does it go down? Well, another dimension of behavior that seems to be sensitive to environmental influence is the latency to the first occurrence.
And the logic goes sort of like this-- if we have a given amount of time and behavior occurs at a high rate across that time, then the behavior must have a relatively short latency, because it must start relatively soon in order to accumulate a high rate. But what about behavior that has a low frequency? It doesn't need to have a short latency. In theory, it could occur at any point in time in the session. And so the possibility exists that if behavior occurs at short latencies, then you might be able to measure the latency to the occurrence of the first response and use that as the index of sensitivity. And the functional analysis session might include only one response. Now, I've got an example of what a latency functional analysis might look like in this graph. So we have a graph here, and it actually shows two functional analyses for the same individual. And this was taken from a study that Jessica Thomason conducted in which she was comparing the results of latency functional analyses versus typical functional analyses, because we didn't know whether the former would be representative of the latter. And so what she did was conduct latency functional analyses first, so that the individuals would not have a history of exposure to any assessment, and then conduct the more typical functional analysis, so that she could compare rates of behavior. Now, this graph shows the more typical functional analysis. Here we have each data point showing rates of problem behavior across conditions. And as you can see very clearly, there is a higher rate of problem behavior in the attention condition. So there's no question that problem behavior is maintained by attention. Now, what's not as easily discernible is the amount of behavior that occurred over the entire course of the assessment, because again, each data point simply shows the mean number of responses per minute across the session. Well, of course, we have the raw data, so we can actually calculate the total number of responses. And the functional analysis shown in the bottom of the graph required 108 instances of problem behavior, and in this case, it's aggression. So the therapist is being struck 108 times across all the sessions of the assessment. Now, the top graph shows the latency functional analysis. And the main difference is that the y-axis does not illustrate rate of problem behavior but latency to occurrence of a problem behavior. And we arbitrarily put the data point at the maximum session length, 300 seconds, simply to illustrate that the session ran out and problem behavior never occurred, because if problem behavior never occurs, there is no latency to record. So if you look at the green data points in the top graph, you'll see that they all land at the maximum value, which means the session clocked out and we never observed behavior. If you look at the red data points, and those represent the demand condition, three of the four sessions ran to the end without our seeing any behavior. There is only one session, the second session, in which an episode of aggression was observed, and it occurred somewhere between 115 and 120 seconds into the session. The only sessions in which aggression reliably occurs are the attention sessions. In the first attention session, problem behavior occurred about 130 seconds into the session, and thereafter it's occurring within the first 15 to 30 seconds. Now, it's very easy to see exactly how many aggressions occurred in the latency functional analysis.
And those represent simply the number of data points that don't land at the maximum value. So if you counted them up, you would see five. So the latency functional analysis resulted in five episodes of aggression; the more typical functional analysis-- 108. And so there is a huge difference in the amount of responding simply by changing the measurement procedure. WAYNE: And they yielded essentially the same answer to the question in terms of the controlling variables. BRIAN: Right. In which condition does behavior occur more quickly? And it turns out that, based on these data, the condition in which it occurs more quickly is also the condition in which it's likely to occur more often. WAYNE: That sounds like a major advance in terms of safety for the participants and the person doing the functional analysis. Is there a downside to using that particular model, Brian? BRIAN: It turns out there is. And again, it has to do with discrimination. So people discriminate what's happening based on their experience, their exposure to various things. And in this case, we see rapid, clear results in a functional analysis when the person has longer exposure to each condition and when those conditions are more clearly different from one another. Now, in the case of the latency functional analysis, there are two problems. One is that there is only one opportunity to encounter the consequence associated with that condition, and then the session is over. And then of course the other problem is that the sessions don't last very long. And so you've got rapidly alternating sessions, only one exposure, and the session stops when you engage in problem behavior. And so one of the things we observed was that initially you may get unclear results for a longer period of time than you would during a typical functional analysis, simply because it takes longer to discriminate. But of course, the problem here is not time. The problem here is risk. And so it turns out that there are all kinds of limitations to doing functional analyses. And each of them has a solution, but that solution may compromise another part of the assessment. So if your problem is time, sure, there are abbreviations. That's going to compromise clarity, precision. If your problem is risk, then, sure, there are ways to reduce risk, but it's going to take more time. WAYNE: Well, Brian, you've provided a marvelous overview about this whole range of methods for doing a functional analysis or functional assessment. What is your synopsis? Why do we do this sort of thing? BRIAN: Well, the entire approach focuses on determinants of problem behavior. And as I indicated before, the third approach, the functional analysis, has been deemed the gold standard. And the question might then be-- why? Well, it's been shown that if you want to identify the influence of something, the best way to do it is to conduct a little experiment, which is why the procedure sort of derives its name, experimental analysis. It's not experimental in the sense that we don't know whether it's of any use, as in experimental medicine; it's just that the method of assessment borrows from the experimental model of exposing behavior to a variable and then removing it, exposing it again, and then removing it again. Now, we can ask people about the conditions under which behavior occurs. And they will guess, and sometimes they may guess correctly. But of course, we won't know that. Or we could observe behavior in a natural environment, as in the descriptive analysis. But as I indicated, that won't answer the question-- what's happening?
And there are so many things happening that usually we can't really isolate which of these things is incidental versus which of them is influential. What we then do is isolate those events in a functional analysis. And you can very quickly figure out which ones are incidental and which ones are influential. And so it's the one way that clearly indicates the function of a problem behavior. WAYNE: So you've described the methodology. You actually arrange these methodologies to identify the function of a problem behavior. Why is that so important, Brian? How does it connect to treatment and outcomes? BRIAN: Well, that's a very good question. It used to be, many years ago, that the answer would be-- I have no idea. The reason is that our approach to treatment many years ago was to simply grab a procedure and see whether it worked. So it was sort of a blind, empirical way to select interventions. Now, probably the operative model back then was the least restrictive approach to intervention. Which, of course, as you know, involves lining up all the procedures according to some perceived level of intrusiveness and always starting at the top. Which is actually kind of a structural approach to intervention: we focus on what the procedures look like. And there are many procedures that have been developed over the years. The problem is that we don't know in advance which procedure will affect behavior, because it actually depends upon the function. And so currently-- or more recently-- we have realized that if you develop an intervention plan for a problem behavior, it ought to take into account the environmental influences that are currently responsible for its occurrence. And so if you can design interventions that focus specifically on the environmental circumstances that produce behavior and maintain it, those interventions will and should have a greater likelihood of being successful. So for instance, if you just consider ways to reduce behavior using reinforcement-- not considering names of procedures, but fundamental principles-- it turns out that there are three major underlying mechanisms. One is to eliminate the behavior's establishing operation. If the reinforcer is less valuable, people are less motivated to behave. How do we identify the establishing operation? We do a functional analysis. So once we've conducted a functional analysis, we can identify the establishing operation that influences behavior-- the deprivation of attention, or let's say, the presentation of task demands. We know what needs to be changed in terms of antecedent events to make that behavior less likely. The second way you can change behavior with reinforcement is to simply get rid of the reinforcement: you terminate the contingency that maintains behavior. And of course, we know that as extinction. Well, how do you terminate the contingency? You have to know the function of behavior. So once we identify the function, we can determine what kind of extinction is necessary to reduce that behavior. Do we need to eliminate the attention or the tangibles that are being delivered as consequences for behavior? Or do we need to stop terminating ongoing events when problem behavior occurs? Or do we need to interrupt part of the response chain, so that it would be less likely to produce its own sensory reinforcement? And of course, the third way that one can reduce a behavior is to put another behavior in competition with it. And of course, we call that differential reinforcement.
Now, how do we know what kind of reinforcer to use to establish that alternative response? Or what kind of alternative response do we select? And again, the answer lies in identifying function. If problem behavior is maintained by access to attention or tangible items, it means that the individual probably does not have a very effective way to recruit those consequences from the environment. So we want to give the person a new way to get attention or to get access to tangibles. And of course, the consequences that we need to use to establish those behaviors are the same consequences that maintain problem behavior. Likewise, if problem behavior is maintained by escape, it means that that person probably doesn't have a socially acceptable escape response. And so we want to teach that person a new way, so to speak, to communicate to others that things are aversive and that he or she would like them to stop. And of course, the reinforcer for that is termination of ongoing activities. And then finally, if problem behavior is maintained by automatic reinforcement, well, other people aren't responsible for the maintenance of that behavior. But the question still becomes-- what kind of alternative response should we teach? And it should be a response that puts the individual into contact with a highly enriched environment, so that it will maintain on its own as a new, socially acceptable, automatically reinforced response. And so when you consider the fact that there are basically three fundamental ways to reduce behavior with reinforcement-- eliminate the establishing operation, eliminate the maintaining contingency, establish a competing response-- information about function is critical to all three of those. WAYNE: That sounds phenomenal. So on the sensory reinforcement side of it, you aren't changing the sensory reinforcement, you are simply changing the behavior that contacts that source of sensory reinforcement to something more adaptive, it sounds like. BRIAN: Right. Or giving the person access, so to speak, to what we hope will be better sensory reinforcement by engaging in an alternative response. When you look at many of these behaviors that are self-stimulatory-- the flipping and the twirling and the flapping-- many individuals seem to be entrenched in those behaviors to the point where they don't do anything else. But if you consider the consequences for those behaviors, they're not too great. Somehow these people got stuck on these behaviors for some reason. And if you like the twirling and the flipping so much, then I should be able to find something that competes with it-- like multi-sensory visual and auditory stimulation combined with kinesthetic stimulation. There should be other consequences that compete with it. And it's a matter of putting you into contact with them, and then giving you a response that will reliably produce them. And then you're actually simply building new self-stimulatory behavior, but it's behavior that's hopefully better than the current response. WAYNE: Socially acceptable and potentially adaptive. BRIAN: And incidentally, that puts the person in a position where they are now manipulating the external environment rather than simply their own body, so they may become more receptive to training to teach them other new replacement behaviors. WAYNE: So the process of doing a functional assessment or analysis leads to a number of different treatment strategies. What's the take-home point about the treatment strategies? Are they more effective, efficient, less intrusive?
Are there any advantages to those types of treatment strategies based on function, Brian? BRIAN: Well, I think there are several things. First of all, if you understand that those are the basic mechanisms for behavior change with reinforcement, then what you can do is design an intervention plan that includes all three of them, because that's going to be the maximally powerful intervention. We make the reinforcer less valuable, then we make it unavailable, and then we make it available for a different response. Well, under those conditions, the only adaptive thing to do is to stop doing one behavior and start engaging in another. So that provides sort of a complete approach to a behavior intervention plan, and we would recommend that every behavior intervention plan contain at least one procedure that accomplishes each of those. Now, the next thing it will do is make it clear when you have deficiencies in your intervention. And that may sound sort of vague, so let me provide an example. What about the case where someone engages in a very dangerous behavior to the point where you cannot allow it to occur-- simply can't? An individual engages in very dangerous head-banging behavior. We know it's maintained by attention. We know that the way we ought to extinguish it is to walk away. Well, no one can walk away and leave somebody bleeding. We have to stop that behavior. Well, we now know that one of the three ways to reduce behavior with reinforcement is unavailable. So whatever program we implement will be partially defective, and that might give us an appreciation for the fact that our intervention will rest on the other two strategies, so we need to make them stronger than they would be ordinarily. It might also allow us to design an intervention that will be minimally disruptive. For instance, we have to stop behavior. Well, but we don't have to have a five-minute conversation with the person about why that behavior was wrong or deliver a great deal of comfort. Although that may seem like a nice thing to do, it may simply be delivering reinforcement. And so what about the possibility of just blocking the behavior? Now, that does constitute attention. But if that's the only attention that's delivered, and if we deliver a much better quality of attention for the alternative response, then the fact that extinction technically is absent may not be as bad. WAYNE: Well, Brian, you've described strategies for doing functional assessment and functional analysis, and treatments emerging from that. How about the bigger picture? Why is all this stuff so important in terms of the clinical and social outcomes for people with problem behaviors, children especially? BRIAN: Well, I think there are several benefits. One of them has to do with efficiency. Like I said before, we could simply grab a procedure and see if it works. Well, sometimes it will be correct, and sometimes it will be incorrect. And I think a much more precise way to design an intervention is to select treatments that at the outset have a higher probability of success, which means that the treatment course should be shorter than it would be if we simply selected interventions at random. The second is that the use of a functional model of intervention has shown that it allows us to make our reinforcement-based procedures more powerful than they would be otherwise, because now they're individualized. We're not simply arbitrarily grabbing procedures, grabbing behaviors, and arbitrarily trying to change things.
But the responses that we select as replacements for your problem behavior, and the reinforcers we use to establish them, are based on your individual history of maintenance for your problem behavior rather than being selected arbitrarily. Now, what's been shown is that this has increased the general effectiveness of reinforcement-based approaches to treatment, so that over the long haul we will be less likely to have to use more intrusive forms of intervention. So for instance, data have been published by Pelios and Axelrod in which they looked at the intrusiveness of interventions over the years. These data are correlational, but they basically showed that with the more widespread use of functional analysis procedures as the basis for intervention plans, there has been a concomitant decrease-- a tremendous decrease-- in the use of punishment in our field. WAYNE: Well, that sounds absolutely admirable. Do you also get better maintenance of treatment gains when you use a function-based delivery model? BRIAN: You might. And the problem is that very little research has examined that. Most of the emphasis has been on-- do the interventions work more quickly in a higher proportion of cases? Now, the whole maintenance issue is an important one to examine, but so far not many people have really looked at it. WAYNE: OK. Are there any other benefits you would like to convey about function-based treatment? BRIAN: I don't think so. I hope I've been able to make the case that there's a good deal of evidence that problem behaviors just don't pop out of nowhere. They're acquired as a result of experience. And if we spend some time studying the person's experiences and isolating them in a variety of ways, we'll be able to figure out which ones reliably produce problem behavior. And if we do that, then we have a systematic way to design intervention plans. And I think if the people who watch this video get that point, then I've been successful. WAYNE: Great. Well, thank you very much, Brian. Wonderful information. I do appreciate it. BRIAN: Well, thank you, Wayne. BRIAN: In this simulation, we will be seeing a series of conditions from a functional analysis, in this case, of aggression. Jennifer Haddock will be playing the role of the client or student. And her target behavior is aggression, which consists of punching the therapist, whose role is being played by Travis Jones. Now, we'll begin with what we call the no interaction condition. In the interview, I described an alone condition, which is the condition that we would use for problem behavior suspected to be maintained by automatic reinforcement. And the alone condition would be used for, let's say, self-injurious behavior or stereotypic behavior. Now, that condition is irrelevant for aggression, because there can be no aggression in the alone condition, which is why we replace it with what we call the no interaction condition. Now, in that condition, the client has access to leisure materials and can do what he or she pleases. No behavior produces any consequences from the therapist. So as we begin this condition, we see Jennifer, the client, playing appropriately with the toy. As you can see, she has engaged in an episode of aggressive behavior, but that produces no reaction on the part of the therapist, nor does any other behavior she exhibits. JENNIFER: Will you play with me? Please? This puzzle is hard. Fine. Come on. Travis, play with me.
BRIAN: And so as the session continues, Jennifer engages in various responses-- some consisting of appropriate play, some of property destruction, some of aggression. Regardless of what the behavior is, the therapist does not deliver any sort of attention. So this is basically a test condition to see if aggression maintains when nothing is happening. Now, this condition is an example of what we call the attention condition. It's the test condition to see if problem behavior is maintained by social positive reinforcement, usually in the form of attention. Now, recall that the antecedent event in this condition is that no attention is available from the therapist for any behavior except as a consequence for the target behavior, which in this case is going to be aggression. Now, the way we typically start this session is to simply indicate to the individual that we, the therapist, will be busy, and then completely ignore the individual from then on. TRAVIS: Jennifer, you've got some toys there. Play with your toys. I've got some work to do. Ow. Jennifer, that's not nice. That's not how you make friends. BRIAN: Now, Jennifer engaged in an episode of aggression. She struck the therapist. He delivered attention. Next, she engaged in another inappropriate behavior, which is sort of property destruction, throwing things. That's not the target behavior for this assessment, so it produces no social consequences. JENNIFER: Travis, are you going to play with me this time? I figured out how to do the puzzle. BRIAN: Neither does appropriate behavior-- also no consequences. TRAVIS: Jennifer, that is not how we treat people. JENNIFER: Can you help me, please? Come on, Travis. Help me, please. I really want you to play with me. BRIAN: So in spite of the fact that Jennifer is actually engaging in some appropriate social behavior, the therapist does not deliver any consequences. Again, the purpose of this condition is to see whether those consequences are reinforcers for the target problem behavior. TRAVIS: Jennifer, how many times do I have to remind you hitting people is not nice? JENNIFER: Travis, play with me. I broke the puzzle. Can you help me? Please, Travis? The dinosaur lost its head. See? Play with me. TRAVIS: Hey, Jennifer. You really can't hit people. BRIAN: This condition illustrates what we call the play condition, which is the control condition. Unlike the attention condition, the individual is not deprived of attention. Unlike the demand condition, which you will see next, there are no work requirements present. And unlike the no interaction condition, the therapist does deliver social interaction frequently throughout the session, simply on a non-contingent basis, and does not deliver any consequences for problem behavior. TRAVIS: Nice playing. You're doing great. That's right. That's how you match Eeyore. JENNIFER: I've got so many Eeyores. I'm glad you decided to play with me finally. TRAVIS: That's right. You do have so many Eeyores. Nice work. [TOY PLAYING MUSIC] Oh, wow. Let's play music together. This is really fun. [MUSIC CONTINUES] Wow. The music is great. I like how you matched all those Eeyores. You've got some Poohs over here. BRIAN: Now the client has engaged in a couple of episodes of aggressive behavior, but that produced no differential response from the therapist. [TOY PLAYING MUSIC] TRAVIS: Wow. You are quite the musician, Jennifer. Keep it up. [TOY BANGING] JENNIFER: I think it's broken. TRAVIS: It sure is a beautiful day out today. Look at these two Piglets I found. Nice job playing.
I like these two Winnie-the-Poohs you found, two Tiggers. JENNIFER: Break it. TRAVIS: Wow. I like how you match. Good work. Jennifer, isn't it a beautiful day out today? BRIAN: Now, although Jennifer is engaging in some aggression during this condition, we would expect that with subsequent exposures, that is, if we were to repeat this condition several times, aggressive behavior would decrease, because it is not producing any differential response from the therapist. This segment illustrates what's known as the demand condition. And again, that is the test condition for behavior maintained by social negative reinforcement-- escape, usually escape from task demands. And so if you recall, the antecedent event in this condition is that the therapist presents a series of learning trials that have been selected because they are either boring, repetitive, or effortful. The therapist continues to deliver instructions to perform tasks. If the individual complies, the therapist delivers praise. However, if the individual engages in the target behavior, which, again, is aggression, then the therapist implements what appears to be a timeout, but really it's escape from the task demand. TRAVIS: Jennifer, put the pencils in the bin. Put the pencil in the bin like me. You do it. Great work. Put another pencil in the bin. Put another pencil in the bin like me. You do it. BRIAN: Now, if the client is not engaging in the problem behavior, the therapist will go through a routine of successive prompts. TRAVIS: Hand me the red chip. Hand me the red chip like this. You do it. Hand me the red chip like this. That's hand me the red chip. Hand me the black chip. Good work. Touch the white card. You don't have to. BRIAN: Now, Jennifer, on that trial, engaged in aggressive behavior, which is the target, and the consequence is to terminate the trial. And as you can see, the therapist removed the task and said she didn't have to work. Now, following a brief escape period from the task, the therapist will resume the instructional trials to see what happens. TRAVIS: Put the pencil in the bin-- you don't have to. BRIAN: Now, in that trial, the therapist couldn't even deliver the instruction before the aggression occurred. Nevertheless, again, he terminates the trial. TRAVIS: Put the pencil in the bin. Great work. Put another pencil in the bin. You don't have to. BRIAN: And this sequence will repeat itself throughout the session. Namely, the therapist will continue to deliver instructions and prompts as needed. If the student or client complies, the therapist delivers praise. If the student engages in the target behavior, then the therapist terminates the task. TRAVIS: You don't have to. Hand me the black chip. Good job. Touch the white card. Good job. Touch the yellow card. You don't have to. Hand me the pink card. You don't have to. [MUSIC PLAYING] CLOSED CAPTIONING PROVIDED BY TESSA M. ZIEBARTH, CLOUDSPEAK LANGUAGES LLC