Transcript for:
Introduction to Control Theory Concepts

An important question that has to be answered when you're designing an autonomous system is: how do you get that system to do what you want? I mean, how do you get a car to drive on its own? How do you manage the temperature of a building? Or how do you separate liquids into their component parts efficiently with a distillation column? To answer those questions, we need control theory. Control theory is a mathematical framework that gives us the tools to develop autonomous systems, and in this video I want to walk through everything you need to know about control theory, so I hope you stick around for it. I'm Brian, and welcome to a MATLAB Tech Talk.

We can understand all of control theory using a simple diagram, and to begin, let's just start with a single dynamical system. This system is the thing that we want to automatically control, like a building or a distillation column or a car. It can really be anything, but the important thing is that the system can be affected by external inputs. In general, we can think of the inputs as coming from two different sources. There are the control inputs U that we intentionally use to affect the system; for a car, these are things like moving the steering wheel, hitting the brake, and pressing on the accelerator pedal. And then there are unintentional inputs. These are the disturbances D, and they are forces that we don't want affecting the system, but they do anyway. These are things like wind and bumps in the road.

Now, the inputs enter the system, interact with the internal dynamics, and then the system state X changes over time. So for a car, we move the steering wheel and we press the pedals, which turn the wheels and rev the engine, producing forces and torques on the vehicle, and then, combined with the forces and torques from the disturbances, the car changes its speed, position, and direction. Now, if we want to automate this process, that is, we want the car to drive without a person determining the inputs, where do we go from here?
The first question is: can an algorithm determine the necessary control inputs without constantly having to know the current state of the system? Or maybe a better way of putting it is: do you need to measure where the car is and how fast it's going in order to successfully drive the car with good control inputs? And the answer is actually no. We can control a system with an open-loop controller, also known as a feedforward controller. A feedforward controller takes in what you want the system to do, called the reference R, and it generates the control signal without ever needing to measure the actual state. In this way, the signal from the reference is fed forward through the controller and then forward through the system, never looping back, hence the name feedforward.

For example, let's say that we want the car to autonomously drive in a straight line and at some arbitrary constant speed. If the car is controllable, which means that we have the ability to actually affect the speed and direction of the car, then we could design a feedforward controller that accomplishes this. The reference "drive straight" means that the steering wheel should be held at a fixed zero degrees, and "drive at a constant speed" means that we depress the accelerator pedal some non-zero amount. The car would then accelerate to a constant speed and drive straight, exactly as we want.

However, let's say that we want the car to reach a specific speed, like 30 miles an hour. We can actually still do it with a feedforward controller, but now the controller needs to know how much to depress the accelerator pedal in order to reach that specific speed, and this requires knowledge about the dynamics of the system. This knowledge can be captured in the form of a mathematical model. Now, developing a model can be done using physics and first principles, where the mathematical equations are written out based on your understanding of the system dynamics, or it can be done by using data and fitting a model to that data.
Fitting a model to data is a process called system identification. Both of these modeling techniques are important concepts to understand because, as we'll get into, models are required for almost all aspects of control theory.

Now, as an example of system identification, we could test the real car and record the speed it reaches given different pedal positions, and then we could just fit a mathematical model to that data: basically, speed is some function of the pedal position. For the feedforward controller itself, we could then use the inverse of that model to get pedal position as a function of speed. So, given a reference speed, the feedforward controller would be able to calculate the necessary control input.

So feedforward controllers are a pretty straightforward way to control a system. However, as we can see, they require a really good understanding of the system dynamics, since you have to invert them in the controller, and any error in that inversion process will result in error in the system state. Also, even if you know your system really well, the environment the system is operating in should have predictable behavior as well, so that there's not a lot of unknown disturbances entering the system that you're not accounting for in the controller. Of course, it doesn't take much imagination to see that feedforward control breaks down for systems that aren't robust to disturbances and uncertainty. I mean, imagine wanting to autonomously drive a car across the city with feedforward control. Theoretically, you could map the city well enough and know your car well enough that you could essentially pre-program all of the steering wheel and pedal commands and just let it go, and if you had perfect knowledge ahead of time, then the car would execute those commands and make its way across the city unharmed. Obviously, though, this is unrealistic. Not only are other cars and pedestrians impossible to predict perfectly, but even the smallest errors in the position and speed of your car will build over time.
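To make the identify-then-invert idea concrete, here is a small sketch in Python (the video itself uses MATLAB). The pedal-versus-speed test data below is entirely made up for illustration; the point is just the two steps: fit a model to data, then invert it to build a feedforward controller.

```python
import numpy as np

# Hypothetical test data: steady-state speed (mph) recorded at each pedal position (0 to 1).
pedal_positions = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
steady_speeds   = np.array([0.0, 14.0, 27.0, 39.0, 50.0, 60.0])

# System identification: fit a model, speed = f(pedal), to the recorded data.
coeffs = np.polyfit(pedal_positions, steady_speeds, deg=2)
speed_model = np.poly1d(coeffs)

def feedforward_pedal(reference_speed):
    """Feedforward controller: invert the fitted model to find the pedal
    position that should produce the reference speed (no measurement used)."""
    # Invert numerically by evaluating the model on a fine grid and interpolating.
    grid = np.linspace(0.0, 1.0, 1001)
    predicted = speed_model(grid)  # monotonically increasing for this data
    return float(np.interp(reference_speed, predicted, grid))

pedal = feedforward_pedal(30.0)  # pedal command for a 30 mph reference
```

Note that the controller is only as good as the model: if the fitted curve is wrong, the commanded pedal position produces the wrong speed and nothing corrects it.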
Eventually, those errors carry the car much too far from the intended path.

So this is where feedback control, or closed-loop control, comes to the rescue. In feedback control, the controller uses both the reference and the current state of the system to determine the appropriate control inputs. That is, the output is fed back, making a closed loop, hence the name. In this way, if the system state starts to deviate from the reference, either because of disturbances or because of errors in our understanding of the system, then the controller can recognize those deviations, those errors, and adjust the control inputs accordingly. So feedback control is a self-correcting mechanism, and I like to think of feedback as a hack that we have to employ due to our inability to perfectly understand the system and its environment. We don't want to use feedback control, but we have to.

All right, so feedback control is powerful, but it's also a lot more dangerous than feedforward control. The reason for this is that feedforward changes the way we operate a system, but feedback changes the dynamics of the system; it changes its underlying behavior. This is because, with feedback, the controller changes the system state as a function of the current state, and that relationship produces new dynamics. And changing dynamics means that we have the ability to change the stability of the system. On the plus side, we can take an unstable or marginally stable system and make it more stable with feedback control, but on the negative side, we can also make a system less stable, and even unstable. This is why a lot of control theory is focused on designing and, importantly, analyzing feedback controllers, because if you do it wrong, you can cause more harm than good.

And since feedback control exists in many different types of systems, the control community over the years has developed many different types of feedback controllers. There are linear controllers, like PID and full state feedback, that assume the general behavior of the system being controlled is linear in nature.
If that's not the case, there are nonlinear controllers, like on-off controllers, sliding mode controllers, and gain scheduling. Now, often thinking in terms of linear versus nonlinear isn't the best way to choose a controller, so we define them in other ways as well. For example, there are robust controllers, like mu-synthesis and active disturbance rejection control, which focus on meeting requirements even in the face of uncertainty in the plant and in the environment, so we can guarantee that they are robust to a certain amount of uncertainty. There are adaptive controllers, like extremum seeking and model reference adaptive control, that adapt to changes in the system over time. There are optimal controllers, like LQR, where a cost function is created and then the controller tries to balance performance and effort by minimizing the total cost. There are predictive controllers, like model predictive control, that use a model of the system inside the controller to simulate what the future state will be, and therefore what the optimal control input should be in order to have that future state match the reference. There are intelligent controllers, like fuzzy controllers or reinforcement learning, that rely on data to learn the best controller. And there are many others. The point here isn't to list every control method; I just wanted to highlight the fact that feedback control isn't a single algorithm, it's a family of algorithms, and choosing which controller to use and how to set it up depends largely on what system you are controlling and what you want it to do.

So, what do you want your system to do? What state do you want the system to be in? What is the reference that you want it to follow? This might seem like a simple question if we're balancing an inverted pendulum or designing a simple cruise controller for a car: the reference for the pendulum is vertical, and for the car it's the speed that the driver sets.
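As one concrete example from that family, a cruise controller of the kind just mentioned is often a PID loop. The sketch below, in Python rather than MATLAB, simulates a toy longitudinal car model under discrete PID feedback; the plant parameters, gains, and disturbance are invented for illustration, not taken from the video. Notice the self-correction: a headwind disturbance appears partway through, and the loop pulls the speed back to the reference.

```python
def simulate_cruise(reference=30.0, steps=600, dt=0.1):
    """Discrete PID speed control of a toy car model (illustrative numbers)."""
    mass, drag = 1200.0, 40.0          # made-up longitudinal dynamics: m*dv/dt = F - drag*v
    kp, ki, kd = 800.0, 120.0, 50.0    # hand-tuned PID gains for this toy plant
    v, integral, prev_error = 0.0, 0.0, reference
    for k in range(steps):
        error = reference - v                       # feedback: compare measured state to reference
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        force = kp * error + ki * integral + kd * derivative   # control input u
        disturbance = -200.0 if k > 300 else 0.0    # a headwind appears mid-run
        accel = (force + disturbance - drag * v) / mass
        v += accel * dt                             # integrate the plant dynamics
    return v

final_speed = simulate_cruise()   # settles near the 30 mph reference despite the disturbance
```

The integral term is what removes the steady-state error from both the drag and the headwind; a purely proportional controller would settle slightly below the reference.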
However, for many systems, understanding what the system should do takes some effort, and this is where planning comes in. The control system can't follow a reference if one doesn't exist, and so planning is a very important aspect of designing a control system. With a self-driving car, for example, planning has to figure out a path to the destination while avoiding obstacles, and it has to follow the rules of the road. Plus, it has to come up with a plan that the car is physically able to follow, you know, one that doesn't accelerate too fast or turn too quickly. And if there are passengers, then planning has to account for their comfort and safety as well. Only after the plan has been created can the controller generate the commands to follow it. Two examples of graph-based planning methods are rapidly-exploring random trees (RRT) and A*. Once again, there are too many different algorithms to name, but the important thing is that you understand that you have to develop a plan that your controller will then try to follow.

All right, so once you know what you want the system to do, and you have a feedback controller to do it, now you need to actually execute this plan. And as we know, for feedback controllers this requires knowledge of the state of the system; that is, after all, what we are feeding back. The problem is that we don't actually know the state unless we measure it, and measuring it with a sensor introduces noise. So for our car example, we're not feeding back the true speed of the car, we're feeding back a noisy measurement of the speed, and our controller is going to react to that noise. In this way, noise in a feedback system actually affects the true state of the system, and so this is one additional problem that we're going to have to tackle with feedback control. A second problem is that of observability. In order to feed back the state of the system, we have to be able to observe the state of the system, and this requires sensors in enough places that every state that is fed back can be observed.
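Before moving on, here is a minimal sketch of the second graph-based planner mentioned above, A*, searching a small occupancy grid. The map, start, and goal are invented for illustration; a real self-driving planner would of course work over a far richer representation.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* planner on a 4-connected occupancy grid (1 = obstacle).
    Returns a list of cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):  # Manhattan distance: an admissible heuristic on this grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_set = [(h(start), start)]            # priority queue ordered by f = g + h
    came_from, g = {}, {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:                   # reconstruct the path by walking back
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[current] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = tentative
                    came_from[(nr, nc)] = current
                    heapq.heappush(open_set, (tentative + h((nr, nc)), (nr, nc)))
    return None

# Plan around an obstacle wall in a tiny hypothetical map.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
path = a_star(grid, (0, 0), (2, 2))
```

The output of a planner like this is exactly the kind of reference trajectory that the feedback controller is then asked to follow.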
Now, it's important to note that we don't have to measure every state directly; we just need to be able to observe every state. For example, if our car only has a speedometer, we can still observe acceleration by taking the derivative of the speed. So there are two things here: we need to reduce measurement noise, and we need to manipulate the measurements in a way that allows us to accurately estimate the state of the system. State estimation is therefore another important area of control theory, and for this we can use algorithms like the Kalman filter, the particle filter, or even just a simple running average. Choosing an algorithm depends on which states you are directly measuring, and on how much noise, and what type of noise, is present in those measurements.

Now, the last major part of control theory is responsible for ensuring that the system we just designed works, that it meets the requirements we set for it, and this comes down to analysis, simulation, and test. For this we can plot data in different formats, like with a Bode diagram, a Nichols chart, or a Nyquist diagram. We could check for stability and performance margins. We could simulate the system using MATLAB and Simulink. All of these tools can be used to ensure that the system will function as intended.

And so this full diagram here, I think, represents everything you need to know about control theory. You have to know about different control methods, both feedforward and feedback, depending on the system you're controlling. You have to know about state estimation, so that you can take all of those noisy measurements and be able to feed back an estimate of the system state. You have to know about planning, so that you can create the reference that you want your controller to follow. You have to know how to analyze your system to ensure that it's meeting requirements. And finally, and possibly most importantly, you have to know about building mathematical models of your system.
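As a tiny illustration of the state-estimation piece above, here is a one-dimensional Kalman filter smoothing a noisy speedometer signal in Python. The true speed, noise levels, and variances are all assumed values chosen for the example; a real car filter would track several states at once.

```python
import random

def kalman_1d(measurements, process_var=0.01, meas_var=4.0):
    """Scalar Kalman filter: estimate a nearly constant speed from noisy
    readings. process_var and meas_var are assumed noise variances."""
    x, p = measurements[0], 1.0      # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var             # predict step: uncertainty grows over time
        k = p / (p + meas_var)       # Kalman gain: how much to trust the measurement
        x += k * (z - x)             # update estimate with the innovation z - x
        p *= (1 - k)                 # updated (smaller) estimate variance
        estimates.append(x)
    return estimates

random.seed(0)                       # fixed seed so the example is repeatable
true_speed = 30.0
noisy = [true_speed + random.gauss(0, 2.0) for _ in range(200)]
est = kalman_1d(noisy)               # much smoother than the raw measurements
```

Feeding back `est[-1]` instead of the latest raw reading is what keeps the controller from reacting to every blip of sensor noise.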
This is because models are used for every part we just covered: they are used for controller design, they're used for state estimation, they're used for planning, and they are used for analysis.

All right, I always leave links below to other resources and references, and this video is no exception; there are a bunch for this video, since I mentioned so many different topics. Something I think is nice is that we already have MATLAB Tech Talks for almost every topic I mentioned. We have feedforward and PID and gain scheduling and fuzzy logic and Kalman filters and particle filters and planning algorithms and system identification and more. So if there's an area of control theory that you want to learn more about, I hope you check out the links below. And to make it easier to browse through all of them, I put together a journey at resourcium.org that organizes all of the references in this video; again, the link to that is below as well.

So this is where I'm going to leave this video. If you don't want to miss any future Tech Talk videos, don't forget to subscribe to this channel. And if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching, and I'll see you next time.