Author Topic: Basic PID controllers: Missiles, Robots, and Cars  (Read 21809 times)
Theotherguy
« on: December 27, 2009, 08:32:22 PM »

Hello all. This tutorial is about getting things that have brains to move in a physical environment in realistic and compelling ways, without requiring a lot of code. What I will be talking about comes from reading research on Boid flocking behavior from the mid-90s, and from some current robotics research that I am involved in. You see, controlling robots is a lot like controlling characters in a video game -- especially modern video games that are heavily physics-based.

So, let's get started.


Part I: Missiles


Let's say you've coded a shooter in which the player can switch between various weapons, and you want the player to be able to pick up a guided missile weapon that fires bullets which automatically track enemy ships. We'll assume for now that your game is coded with an underlying physics base -- that is, that objects in your game are physically simulated in some way, and are capable of responding to forces.

How would you go about doing this?

Well, the first thing you need to do is get your missile to select a target, represent it in some way, and start applying forces to move towards it. So, you have your missile represent its target as a 2-dimensional vector (X,Y) which is constantly updated as the target moves. Your missile can apply a single steering force (which we will call sForce) towards its target, with a maximum force of "maxForce". The missile will constantly apply sForce to itself and will be updated in your physics engine at each step.

At this point, your model looks like this:



So the question is, where do we point "sForce", and how powerful should it be?

The Wrong Way

The first, most obvious way to answer this question, which I will call "The Wrong Way," is to simply have the missile point towards its target, yelling "FULL SPEED AHEAD" and thrusting at full power towards it. This, at first, seems like a pretty reasonable way of doing things, and I've seen many amateur games do this for their "homing" missiles. In fact, I'm pretty sure that this is how Cortex Command does it with its homing missiles. (Shame on you, Cortex Command.)

However, when you do this, the paths of the missile and the target will end up looking something like this, if the target is moving in a straight line at a constant velocity:



It makes huge, wide arcs, way overshooting the target and spending tons of time trying to fly back!

What happened here? Well, we did something akin to putting a uniform gravitational field on the target, and applying it to the missile. The missile constantly accelerates towards the target, without any regard for what it is doing at the time.  When this happens, you end up with huge oscillations, or even orbits, in which the missile will do nothing to correct its course errors, or will attempt to correct them with relative sluggishness. It doesn't care about whether or not it has overshot its target, or whether it will ever catch the target.
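To see this behavior concretely, here is a minimal 1-D sketch of "The Wrong Way." Everything in it is an assumption for illustration (unit mass, Euler integration, a stationary target, and made-up numbers); it is not from any particular engine.

```python
# Hypothetical sketch: always thrust at full force toward the target,
# ignoring current velocity ("The Wrong Way").

def naive_pursuit(steps=2000, dt=0.01, max_force=5.0):
    pos, vel = 0.0, 0.0
    target = 10.0
    history = []
    for _ in range(steps):
        # Point at the target and thrust at full power, no matter what.
        direction = 1.0 if target > pos else -1.0
        vel += direction * max_force * dt   # unit mass: acceleration = force
        pos += vel * dt
        history.append(abs(target - pos))
    return history

distances = naive_pursuit()
# The distance shrinks, then grows again: the missile blows past the
# target at full speed and oscillates around it forever.
print(min(distances), distances[-1])
```

With nothing damping its velocity, the simulated missile never settles; it just swings back and forth past the target, which is exactly the orbit/oscillation problem described above.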

Proportional Controller

How do we go about this in a smarter fashion? We need to stop thinking about our missile as an object which gets moved around stupidly by laws of nature and instead think of it as a robot, an entity with some semblance of thought process and intelligence.

An entity with intelligence has goals: a missile's goal is not simply to move towards the target, but to reach its target and explode on it. Therefore, if the missile is to act intelligently, we must give it a desired state in which it wishes to be, compare our current state with that desired state, and then apply forces to minimize the difference, or error, between our current state and our desired state.

What is our desired state? It is to be traveling towards the target at maximum speed, of course! Well, actually, this might not be the ideal desired state, but it's what we will go with for now. Now, we must keep track of the velocity of the missile at any given moment, in addition to its position. We will also introduce a constant, called "kP" for Proportional Constant, which we can use to tweak the performance of our missile. What we have set up here is called a "Proportional Controller," because our control, or the force we apply, is directly proportional to the error between our current and desired states.

Our model will look like this:



So, to reiterate:

Code:
/*Our desired velocity, a vector, is to be traveling towards our target at maximum speed.*/
desired_velocity = (Normalize(target_position - current_position)) * maxSpeed;

/*The error we are trying to minimize is the difference between our desired velocity
and our current velocity (note that this is reversed from what you would expect)*/
error = (desired_velocity - current_velocity);

/*The force we apply to minimize our error is our error times some constant called kP*/
sForce = (error)*kP;


What does the path of the missile look like when this new rule is applied? Well, something like this:




Great, it actually blows up the target now! However, it still oscillates a bit, overshooting the target and making these wacky curves until finally colliding with it. This might actually be desirable in your game, as it can make a cool looking effect, especially when the missile has a particle trail, but it is not efficient if your goal is to get the missile to kill the target every time.

There are a few reasons why the missile still oscillates. The first reason is because the missile is targeting where the target currently is, rather than where it will be by the time the missile gets there. The result is that the missile lags behind the target. In order to compensate for this, you may have to lead the target a bit, telling the missile to go to a position slightly ahead of the target, which is further along the target's path the further the missile is from the target. The second reason is that our missile only desires to be moving towards the target at maximum speed, and it has no way of deciding when it will overshoot, and has no way of compensating for this until it has actually overshot.

The first problem is not the point of this tutorial, and the second problem can only be solved by tweaking kP relentlessly until you find a solution that doesn't oscillate so much. However, getting even better performance out of the missile will require ideas that I will put forward in Part II.
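The proportional rule in the pseudocode above can be sanity-checked with a small 2-D simulation. This is a hedged sketch under assumed conditions (stationary target, unit mass, Euler integration, made-up kP and maxSpeed); only the names kP, maxSpeed, and sForce come from the pseudocode.

```python
import math

def p_controller_missile(steps=3000, dt=0.01, kP=2.0, maxSpeed=5.0):
    """Return the closest distance the missile gets to a fixed target."""
    px, py, vx, vy = 0.0, 0.0, 0.0, 0.0
    tx, ty = 10.0, 5.0
    closest = float("inf")
    for _ in range(steps):
        dx, dy = tx - px, ty - py
        dist = math.hypot(dx, dy)
        closest = min(closest, dist)
        if dist < 1e-9:
            break
        # Desired velocity: toward the target at maximum speed.
        dvx = dx / dist * maxSpeed
        dvy = dy / dist * maxSpeed
        # sForce is proportional to the velocity error.
        fx = kP * (dvx - vx)
        fy = kP * (dvy - vy)
        vx += fx * dt   # unit mass
        vy += fy * dt
        px += vx * dt
        py += vy * dt
    return closest

print(p_controller_missile())   # gets very close to the target
```

Unlike the naive version, the velocity error term pulls the heading back on course, so the missile homes in instead of orbiting.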

(by the way, if you wanted something to avoid a target, rather than fly towards it, multiply the value of kP by -1!)
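The avoidance trick can be checked with a tiny 1-D version of the same controller. This is an illustrative sketch only (unit mass, Euler integration, invented numbers), not code from the post.

```python
def p_controller_1d(kP, steps=500, dt=0.01, maxSpeed=5.0):
    """Return final distance from the target after running a P controller."""
    pos, vel, target = 0.0, 0.0, 10.0
    for _ in range(steps):
        direction = 1.0 if target > pos else -1.0
        desired = direction * maxSpeed          # desired velocity
        vel += kP * (desired - vel) * dt        # force = kP * error, unit mass
        pos += vel * dt
    return abs(target - pos)

print(p_controller_1d(kP=2.0))    # closes in on the target
print(p_controller_1d(kP=-2.0))   # flees: distance grows rapidly
```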


Part II: Robots


Now let's move away from missiles and on to other things. Let's say you have a game in which an NPC robot with a single omni-directional wheel is moving around a warehouse, trying to attack the player.

One of the things the robot can do is ride elevators up to higher parts of the warehouse, but in order to do this, it must get to the elevator, and stop when it reaches it, then sit on the elevator, perfectly still, until the elevator goes up to the next floor.




Well, this is basically just like the missile problem, except even simpler, since our target isn't even moving! So, we build a proportional controller, and give the robot a desire to be moving towards the elevator at maximum speed.

However, we have a problem with this. The robot will move towards the elevator at maximum speed, reach it, blow past it going as fast as it possibly can, and then stop, turn around, go back towards the elevator at maximum speed, blow past it again, stop, turn around, etc. and oscillate over the elevator forever!

You see, just like we had a problem with the missile oscillating before, we now have a problem with the robot buzzing about the elevator like a fly, unable to stop on it and ride it up.

You could fix this with hacks, of course, and make the robot's velocity instantly zero as it rolls on top of the elevator, but this would look very cheap and not very realistic at all. You could create a threshold in which the robot simply stops applying forces and hopefully your physics engine slows it down enough with friction that it stops, but this also looks clunky and would result in some nasty oscillations if you weren't careful.

Instead, we are going to have to re-think our model and how our robot thinks about the world and its goals.

Unlike the missile, the robot not only wants to be moving towards the elevator at maximum speed, but it wants to stop on the elevator once it gets there. So its desired state at any given moment is different than it was before. In other words, it has to keep track of its state even more thoroughly, not only taking into account its current and desired states, but how its current state is changing.

First let's make a proportional controller. Let's make the robot's desired state "stopped on top of the elevator," i.e., its position should be right on top of the elevator with a velocity of zero. So, its error will be the difference between its current position and the elevator's position. From this we can make a proportional controller in which the robot moves towards the target faster when it is further away, and slower when it is closer, like this:



With this basic proportional controller, the robot will be moving much slower when it reaches the elevator, but it will still overshoot and have to turn around, oscillating closer and closer until its motion is overcome by friction and it stops.

Now, to prevent this oscillation, we are going to keep track of the change in the error of the robot's position from the last time we measured it. We will call this factor dE, and we will introduce a constant called kD to compensate for it. What this will do is compensate for change in the robot's error so that if it is moving too quickly towards its target, it will slow down, applying the brakes so that it stops when it reaches the target.

Now we will have to consider two time frames: the last time we measured the robot's position, and the current time through the game loop. This will require us to store the last state of the robot, which we will simply call "previous_error." What we have now is called a "Proportional/Derivative Controller."

Here is what our model will now look like:



Again, to re-iterate:

Code:
/*Our current error is the difference between our position and our desired (target) position*/
current_error = target - position;

/*The change in error is equal to our current error minus our last error*/
dE = current_error - previous_error;

/*The force we apply is the one which minimizes the current error plus our change in error.
This means we will want to be on top of the target with our error a constant zero*/
sForce = kP*(current_error) + kD*(dE);

/*Store our current error for the next time through*/
previous_error = current_error;



What happens when we do this? The robot moves towards the elevator, going faster when it is further away and slower when it is closer, and begins decelerating well before it reaches its target, coming smoothly to a stop. It may actually oscillate slightly over the target in very tiny, jerky motions. If this happens, you can add a threshold in which it gives up trying to move to the elevator and just stops. Friction should take care of the rest.
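The PD pseudocode above can be sketched as a runnable 1-D simulation of the robot-and-elevator problem. The structure follows the pseudocode; the unit mass, the time step, and the particular kP/kD values are assumptions chosen so the approach is roughly critically damped (note that because dE is a per-step difference, kD has to be large relative to the time step).

```python
def pd_robot(steps=1500, dt=0.02, kP=1.0, kD=100.0):
    """Return (final position, peak position) for a PD-controlled robot."""
    position, velocity = 0.0, 0.0
    target = 10.0                             # the elevator
    previous_error = target - position
    peak = 0.0
    for _ in range(steps):
        current_error = target - position
        dE = current_error - previous_error   # change in error since last step
        sForce = kP * current_error + kD * dE
        previous_error = current_error
        velocity += sForce * dt               # unit mass
        position += velocity * dt
        peak = max(peak, position)
    return position, peak

pos, peak = pd_robot()
print(pos, peak)   # settles on the elevator with little or no overshoot
```

The derivative term acts as a brake: as the error shrinks quickly, dE goes negative and cancels part of the proportional push, so the robot decelerates before it arrives instead of blowing past.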

We can apply this same concept to our missile, and it will dramatically improve its performance, but it may make its behavior seem a bit unrealistic (how many missiles do you know that slow to a stop before exploding?)



Part III: Cars


Now on to another game. Suppose you're developing AI for a top-down 2D rally-racing game, and in this rally racing game, you have mud pits which dramatically increase the friction on the cars, slowing them down to a crawl. Your AI car is moving towards the goal, which is on the other end of a mud pit.




Seeing the method of controlling cars as being similar to the way you controlled the robot before, you implement a PD controller for your car, and have it drive along some pre-defined waypoints in the road towards the goal. You spend hours tuning the kP and kD terms so that the car can easily drive along the road at a steady pace.

However, when the car enters the mud pit, something weird happens. The car slows down, and eventually stops in the mud pit, unable to continue on to the goal. The forces it is applying are too small to overcome friction.

What is happening here? Well, when the car enters the mud pit, a larger force is pushing on it than normal, and this force changes the optimal kP/kD terms that you so painstakingly calibrated. The force may be so large that your previously well-tuned terms are now too small to even cause the car to move through the mud.



How do we fix this? Well, you could hack it again. You could tune the kP/kD terms for both mud-pit and non-mud-pit surfaces, so that the car can move efficiently in both. But this is not very robust. What if you wanted several different surfaces, or a gradient of surfaces, in your rally game? You wouldn't want to make special cases for all of them. In other applications, particularly in robotics, it's often not possible to know exactly what conditions your controller will face, so it is often difficult or impossible to calibrate the controllers to meet every special case.

So, what we're going to do is introduce the (slightly dangerous) Integral term to our controller. We are going to keep track of the accumulated sum of the error over time, and try to minimize that as well. This way, if our controller gets stuck in the mud, it will know that it has not moved in some time, and will increase its force to overcome friction. We will also introduce a constant called kI which will allow us to tune this term.




Again, to reiterate:

Code:

/*Accumulate the last error into the integral term.
This should be reset occasionally, or else you will get gradual "windup" */
intE += previous_error;

/*Our current error is the difference between our position and our desired (target) position*/
current_error = target - position;

/*The change in error is equal to our current error minus our last error*/
dE = current_error - previous_error;

/*The force we apply is the one which minimizes the current error plus our change in error, plus the integral of error.
This means we will want to be on top of the target with our error a constant zero, and
we also want to be doing something which at least gets us moving.*/
sForce = kP*(current_error) + kD*(dE) + kI*(intE);

/*Store our current error for the next time through*/
previous_error = current_error;


Hopefully, if kP, kD, and kI are all balanced correctly, the car will behave as normal, driving along the track no matter what the friction is, and when it gets stuck, revving its engine until it has wrested itself free of the mud.

Remember when I said the integral term could be dangerous? The reason is that it accumulates: if your controller isn't doing something which makes its error smaller, the integral term may cause a positive feedback loop, getting bigger and bigger until it explodes. Because of this, you must remember to reset the integral term every now and then, or it will get out of hand. This is especially sound advice in real-world applications like robotics, because an explosion of force from a controller could even kill someone. I have seen many cases where a robot's motors were shut down during debugging while its controllers kept churning away, the integral term building larger and larger, until the motors were turned back on and the robot went absolutely crazy, injuring someone in the process.
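The mud-pit scenario can be sketched in 1-D by adding a constant disturbance force (a crude stand-in for the mud's friction) to the PD simulation. Everything here is an assumption for illustration: unit mass, Euler integration, and made-up gains; the controller structure itself follows the pseudocode above.

```python
def drive(kI, steps=10000, dt=0.02, kP=1.0, kD=100.0, disturbance=-3.0):
    """Return the remaining error after driving against a constant disturbance."""
    position, velocity = 0.0, 0.0
    target = 10.0
    previous_error = target - position
    intE = 0.0
    for _ in range(steps):
        intE += previous_error                     # accumulated error (windup risk!)
        current_error = target - position
        dE = current_error - previous_error
        sForce = kP * current_error + kD * dE + kI * intE
        previous_error = current_error
        velocity += (sForce + disturbance) * dt    # unit mass
        position += velocity * dt
    return target - position

print(drive(kI=0.0))     # PD only: stalls roughly disturbance/kP short of the goal
print(drive(kI=0.002))   # small kI: the integral slowly works the error away
```

With kI = 0, the proportional force exactly balances the disturbance before the error reaches zero, so the car stalls short. The integral term keeps growing as long as any error remains, so it eventually supplies the extra force needed to push through.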

But Theotherguy, How do I tune my Controllers?

This is a very good question. If you can find a good answer to this, there are a lot of universities out there that will give you a free PhD. Finding the right values of kP, kD, and kI is a delicate art, and balancing them perfectly takes a lot of practice, a lot of debugging, and a lot of long nights. For certain applications, it is possible to compute the most efficient values using vector calculus, but for many others, it is not possible to solve exactly and requires approximation.

Some success has been had in using genetic algorithms to produce finely-tuned controllers, spawning populations of different actors with different kP, kD and kI terms, and breeding the most successful ones together.

But for the amateur game developer, knowing the "right" values to get the "perfect" controller is not such a big issue. Just play with the constants until you get something relatively reasonable, and it will most likely be good enough for your game.

A good heuristic is that if you want your entity to move faster, increase kP. If you want its motion to be smoother, increase kD, and if you want it to be more relentless and robust, increase kI.

Thanks for reading!
« Last Edit: December 28, 2009, 10:43:53 PM by Theotherguy »

salade
« Reply #1 on: December 29, 2009, 09:30:22 PM »

A quick word on finding the right constant values: it is wise to make a graph of each one you try (distance away vs. time), and choose the constants that produce the best graph.


Just my two cents...
mewse
« Reply #2 on: December 29, 2009, 10:15:53 PM »

One more awesome spot for using PID controllers is replay cameras;  they can give a very believable camera behaviour, as though a human operator was doing his best to point a mounted camera at (for example) a car driving past. 

For this approach, you would have a 3D PID which was trying to match the movement of the object being watched, and just always point the camera toward the position of the PID.  Most folks doing this will also have a 1D PID controlling the camera's field of view (this requires some trigonometry to figure out what the desired field of view is).
Theotherguy
« Reply #3 on: December 30, 2009, 08:24:22 AM »

Quote from: mewse on December 29, 2009, 10:15:53 PM
One more awesome spot for using PID controllers is replay cameras; they can give a very believable camera behaviour, as though a human operator was doing his best to point a mounted camera at (for example) a car driving past.

For this approach, you would have a 3D PID which was trying to match the movement of the object being watched, and just always point the camera toward the position of the PID. Most folks doing this will also have a 1D PID controlling the camera's field of view (this requires some trigonometry to figure out what the desired field of view is).

PID controllers are really great for cameras, 2D or 3D. To make a camera for, say, a 2D platformer, I have a camera that tries to maintain the same position and rotation as the player, and zooms out slightly when he goes faster. This makes it look as though a camera man is trying to keep the player in shot.

Theotherguy
« Reply #4 on: December 30, 2009, 08:31:26 AM »

Quote from: salade on December 29, 2009, 09:30:22 PM
A quick word on finding the right constant values: it is wise to make a graph of each one you try (distance away vs. time), and choose the constants that produce the best graph.

Just my two cents...

Yes, graphs are your friend! I sometimes dump my PID data to a file and then make a graph of it. Generally, I graph the error over time and see where the oscillations are with various constants, then I set my threshold at the level where the error begins to oscillate.

