Transcript for:
Overview of Structural Equation Modeling Concepts

Structural equation modeling is an analytical approach that allows us to test our theories by combining confirmatory factor analysis and multiple regression in a single framework. So how do we use it? Well, we're going to cover that in this video. First, I want to welcome you to the Brilliant AF community.

I'm Ashley. I make videos about God, goals, and grad school, and this is a special time. I am in my second year of my PhD program.

And what I've found is that I learn best when I teach. So I'm going to be doing some high-level videos on structural equation modeling.

And I'm going to be linking the articles where you can go in depth into this really powerful analytic strategy, so that you can get mastery. This set of videos is really meant to give you a high-level conceptual understanding so that you know what's going on when you dive deep into the hundreds and hundreds of pages on statistical analyses and processes. This is something that I wish I had when I was going through my learning journey, and I am more than happy to offer it to you. So I want to offer the disclaimer that this is me at a point in time: a second-year PhD student. I am 100% confident that over the years I will grow and get even better at my understanding and my explanations of these things.

So I would love your grace and your participation. If you hear something that's not quite on the mark, let me know in the comments. Let's have this be a mutually reinforcing conversation so that we can help the entire Brilliant AF community thrive. So that's my disclaimer.

Let's get into it. I'm going to base the foundation of my explanation of the two-step structural equation modeling approach on Anderson and Gerbing's 1988 article. And I'm going to pepper in some best practices based on some more contemporary articles, and I'll make sure to link those in the description below. It's actually kind of funny.

I was just sitting in my mother's womb while Anderson and Gerbing were thinking about the next frontier of theoretical analyses. That's... good job, guys.

So how do we do it? There are two steps. The first step is to analyze your measurement model. The second step is to analyze your path model. Cool.

Video over, right? Okay, no, but I'm going to dive a little bit deeper into that. To set this up, we're going to use my trusty little iPad so that we can follow along. I don't know about you, but I'm a visual learner and I know a lot of people who watch my channel are as well. So let's get started.

So first, we're just going to set up a basic model. Typically, what you're going to have is a latent variable, indicated by that circle. You're going to have some items. Let's say that this is a two-factor structure here. So I'm going to make another set of items, and then we are going to have the factor loadings, as indicated by these lines and corresponding arrows.

So now that I have this nice little model, we're going to make another one really quickly, just so that we can have a relationship built in. So we'll make this one a nice little lime green color. We're going to call this, let's make sure I get this right, the exogenous variable, the one on the outside, X.

And we're going to make the endogenous variable, the one on the inside, Y. And let's say, for the purposes of this, we are going to hypothesize X leads to Y. Simple enough. So now that we have our model, we want to make sure that we are using the two-step approach appropriately. So the first step in Anderson and Gerbing's two-step approach is to analyze the measurement model.

And by measurement models, I actually mean the X and Y sections independently. So when we're analyzing the measurement model, we are basically using confirmatory factor analyses to make sure we're measuring what we actually set out to measure. These are called measurement invariance tests.

So there are two types of measurement invariance tests that we would conduct. The first is configural invariance. And that basically means that we're testing the factor structure to make sure it's what we expect it to be.

We want to make sure that if we think it's going to be a two-factor structure, and we run the analyses and find out that there are three, then that's an issue and we want to revisit it, because first things first, we want to make sure that we're measuring what we actually set out to measure. The next thing is metric invariance. We want to make sure that the factor loadings are what we would expect them to be, at a high enough threshold.

If you end up doing the metric invariance tests and find out that some items have only a .10 or .20 loading on the factor, that's typically not very strong, unless you have some mega theoretical reasoning for keeping those. From what I've seen, we typically like loadings above 0.5, and the higher the better when it comes to factor loadings. So once we make sure that we have the right factor structure, and then we make sure that we have reasonable factor loadings, then we will move on to the next model.
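As a toy illustration of that loading check, here's a small helper with a hypothetical name and made-up loading values, applying the 0.5 rule of thumb mentioned above:

```python
def flag_weak_loadings(loadings: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return the items whose standardized loading falls below the threshold."""
    return [item for item, value in loadings.items() if abs(value) < threshold]

# Hypothetical standardized loadings for the X measurement model
x_loadings = {"x1": 0.82, "x2": 0.74, "x3": 0.18}
weak = flag_weak_loadings(x_loadings)
print(weak)  # ['x3']
```

An item like `x3` here would be a candidate for dropping, unless, as noted, there's strong theoretical reasoning to keep it.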

So in the figure that I drew, we had two measurement models that we needed to test: the X measurement model and the Y measurement model. I am actually really happy that we have this time together to break that down, because when I was first taking this kind of course, all I remember seeing is us just running a whole bunch of random models, and I didn't really understand what was going on. I was learning the mechanics of how to program the analyses, but I didn't conceptually understand what we were doing.

So that's why we're spending this time here today. We're going to try to make sure that we are measuring each model independently so that we know that we can for sure say that we have measured what we set out to measure. So the next thing that we're going to do with our measurement model is use our theory to re-specify the measurement model. This might mean letting your residuals covary. Whatever you do, you want to make sure that your theory backs up what you're trying to do analytically.

And that really just speaks to the importance of nailing down the theory first before we get into the analyses, because you can theorize these kinds of relationships, how things fit together, and you can also theorize some alternative models that might be applicable as well. If you spend that time doing this beforehand, or a priori, as we say in science, super fancy, then it's good science. It's good practice. If you find yourself doing the analyses and hypothesizing after the fact, there's a term for that: HARKing, hypothesizing after the results are known. If you try to pass that off as hypothesizing a priori, then that's technically cheating in the science game.

And we don't want to cheat here. We want to make sure that we are hunkering down, creating some really strong theory, and using that theory to guide our analyses. If you invest that time on the front end, then you have a map for your analyses moving forward.

So the last thing that you want to make sure you do, and this comes from Cortina and colleagues' 2017 article, is to report your degrees of freedom. I'm going to do an entire video on degrees of freedom, but the basic premise is that you want to report your degrees of freedom for all the models that you test so that you can contribute to science in a way that allows others to replicate. Replication is really important when it comes to knowledge creation because you want knowledge that sticks, right?
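The basic degrees-of-freedom bookkeeping can be sketched like this, assuming the standard formula of p(p+1)/2 unique variances and covariances minus the number of freely estimated parameters. The counts below are for a hypothetical two-factor model with three items per factor:

```python
def sem_degrees_of_freedom(n_observed: int, n_free_params: int) -> int:
    """Model df = unique variances/covariances in the data minus free parameters."""
    n_moments = n_observed * (n_observed + 1) // 2  # p(p+1)/2
    return n_moments - n_free_params

# Hypothetical two-factor CFA, 3 items per factor (6 observed variables):
# free parameters = 4 loadings (one loading per factor fixed to 1 for scaling)
#   + 6 residual variances + 2 factor variances + 1 factor covariance = 13
df = sem_degrees_of_freedom(n_observed=6, n_free_params=13)
print(df)  # 21 - 13 = 8
```

Reporting that number for every model you test lets a reader verify exactly which parameters were estimated, which is what makes the analysis reproducible.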

And if you create knowledge that no one can reproduce, then it's not really a durable good. We want to create durable goods here. Now that we've done these things, we've thoroughly analyzed the measurement model. Now it's time for the path model.

What I mean by the path model is basically everything, I'm going to circle this, everything in this graphical depiction that I made. Specifically this arrow, because the path model is basically outlining the relationships between the variables. You might be hypothesizing that increased X leads to decreased Y.

And that means you're going to run the path model and look at the relationship between X and Y to determine if you get a negative coefficient, basically. If you get a negative coefficient and it's statistically significant, then you can say that as X increases, Y decreases. Alternatively, if you're hypothesizing that there's a negative relationship and then you see a positive, statistically significant relationship, then the evidence doesn't support your theory. And you might want to make sure that you've done everything right to confirm that that is the case.
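That sign-and-significance logic can be sketched with a made-up helper, using the conventional .05 alpha level; the coefficients and p-values below are illustrative, not from a real analysis:

```python
def interpret_path(coef: float, p_value: float, alpha: float = 0.05) -> str:
    """Translate an estimated path coefficient into a plain-language conclusion."""
    if p_value >= alpha:
        return "no reliable relationship detected"
    if coef < 0:
        return "as X increases, Y decreases"
    return "as X increases, Y increases"

print(interpret_path(-0.31, 0.004))  # as X increases, Y decreases
print(interpret_path(0.12, 0.40))    # no reliable relationship detected
```

If you hypothesized a negative path but the first call came back positive and significant, that's the "evidence doesn't support your theory" case described above.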

I know I found myself in that situation once or twice, and that's not necessarily a bad thing. We're not here to try to game the system. We're here to try to build on knowledge.

So if we see something that is contrary to what we've hypothesized and we're really confident about our theory, you know, we need to start understanding why that happened. Was it the data? Was it a specific boundary condition that we didn't consider in our theory? Should we consider that moving forward?

None of it is a waste. We can use it to build on the knowledge that we're trying to build, whether that's practical skills or knowledge within this specific domain. Now, everything I said about the measurement model still applies to the path model. We want to make sure that we're measuring what we say we're going to measure.

We want to make sure that all the lines that we put in our model are tested. Sometimes you can see things like some covariances and even some lines going across here and all that stuff. Everything in pink, all these lines I'm drawing, have to be tested in our path model.

That's the whole point of this second part. And just like the measurement model, every single line, every single thing that you add here, needs to be backed up by theory. You need to justify why you think these arrows exist. So that's my conceptual explanation of the two-step process and some best practices from more contemporary authors.

I'm really excited to hear what you want to know next about structural equation modeling. I'm pretty excited about having this tool in my toolbox, and I can't wait to grow in using it. If you want to go deeper into this, I highly recommend going to CARMA.

I'm going to link that in the description below. It's a fantastic resource that allows you to go deep into methods. The people who run it are very committed to helping us become great scholars producing great science.

And I have personally benefited from the instructors and the content on CARMA. So I am here as a vessel at the conceptual level, for you to get a high-level view of what we're trying to accomplish so that when we go deep, you don't get as lost. So if you found value in this, please give me a thumbs up.

I love hearing your feedback and I can't wait to see you next time. Stay brilliant.