In my first blog post, I talked about A.I. in a very broad way, since I was still learning the basics myself. After weeks of extra research, I have decided to go into more detail today.
There are tons of techniques for adding A.I. to a game. In this post, I will talk about two common ones: steering and finite state machines.
Steering behaviours aim to help autonomous characters move in a realistic manner, by using simple forces that are combined to produce life-like, improvisational navigation around the characters' environment.
In my first post, I talked about pathfinding A.I. Steering is closely related to that kind of A.I., relying on linear algebra and physics. Strictly speaking, a pathfinding algorithm such as the famous A* plans a route, while steering handles the moment-to-moment movement along it; the two are usually combined.
Here are some examples of steering behaviours:
As you can see, the basics of steering are mathematical vectors (and physics).
Each behaviour is represented as a force vector. Scaling one vector up relative to the others gives that behaviour more influence, making the entity move more strongly in the direction of this force-behaviour.
For example, you can set an entity as a seeker so it follows a position. Now add a sheep entity that must evade wolf entities: give the sheep an evade force and the wolves a pursue force, and perhaps add a wander force so the sheep moves around naturally when no wolf is near.
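As a rough sketch of how such force-behaviours can be combined, here is a minimal Python example. The function names, positions, and weights are all illustrative assumptions, not a standard API:

```python
# A minimal sketch of weighted steering forces (all names/values illustrative).

def seek(pos, target, max_speed):
    """Steer toward a target: a velocity of max_speed pointing at it."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1.0   # avoid division by zero
    return (dx / dist * max_speed, dy / dist * max_speed)

def flee(pos, threat, max_speed):
    """Steer away from a threat: simply the opposite of seek."""
    fx, fy = seek(pos, threat, max_speed)
    return (-fx, -fy)

def combine(forces_with_weights):
    """Scale each force by its weight and sum them into one steering vector."""
    x = sum(f[0] * w for f, w in forces_with_weights)
    y = sum(f[1] * w for f, w in forces_with_weights)
    return (x, y)

# A sheep at (0, 0) seeks grass at (10, 0) but flees a wolf at (0, 5).
sheep, grass, wolf = (0.0, 0.0), (10.0, 0.0), (0.0, 5.0)
steering = combine([(seek(sheep, grass, 1.0), 1.0),
                    (flee(sheep, wolf, 1.0), 2.0)])
print(steering)  # the flee force has the bigger weight, so it dominates
```

Because the flee force is weighted twice as heavily, the resulting vector pushes the sheep away from the wolf more strongly than toward the grass.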
Each steering behaviour of course has a more complex side, with different implementations across programming languages. I will not cover that complexity here, but in future posts.
Now, as I said, steering goes hand in hand with pathfinding. Let's get a bit more technical by talking about Dijkstra's algorithm.
First, what is an algorithm? It is a step-by-step procedure designed to perform an operation, one that leads to the sought result if followed correctly.
In our daily life, preparing for school in the morning is an example of using an algorithm.
We first wake up (step 1), then eat breakfast (step 2), then put clothes on (step 3), and go to school (step 4).
Edsger Dijkstra was a Dutch computer scientist who studied and taught at my university. He is famous for his algorithm, which can find the shortest path from a point A to a certain point B.
Another famous algorithm that has the same purpose is A*.
Finding the shortest route from one object to another is a very common problem when developing game A.I., and many solutions exist. In 2D grid / tile based games, perhaps the most common one is A*, with Dijkstra's algorithm also being quite good. Depending on the complexity of the game, Dijkstra's algorithm can be nearly as fast as A*. A* generally performs better but is slightly more complex to implement, so I will be discussing the fundamentals of Dijkstra's algorithm in this post.
From now on, I will be talking only about graphs, which have nodes and edges:
Here, the blue circles labeled 1, 2, 3, 4 are nodes. The edges labeled 1, 2, 5, 10 are paths between the nodes. The numbers on the edges are weights that show how long each path is.
For instance, it takes less time to go from node 1 to node 2 than from node 1 to node 3, because 1 < 2.
Graphs always have labeled nodes, but edges are not necessarily weighted; the weights are needed for this kind of algorithm.
Now, we want to find the shortest path from node 1 to node 4. Mentally this is easy: we can all infer that going from node 1 to node 3, then from node 3 to node 4, is the solution, because (2 + 5) < (1 + 10). However, computers do not know that by themselves, so we need to implement an algorithm. Moreover, what if the graph had, say, 1000 nodes? Would it still be mentally easy? No. In that case, and that is the usual case in game development, the computer must do it for us.
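For reference, the small graph above could be encoded in Python as an adjacency dictionary, mapping each node to its neighbours and edge weights (this representation is my own sketch, not part of the original figure):

```python
# The example graph: node -> {neighbour: edge weight}.
graph = {
    1: {2: 1, 3: 2},
    2: {1: 1, 4: 10},
    3: {1: 2, 4: 5},
    4: {2: 10, 3: 5},
}

# Route 1 -> 3 -> 4 costs 2 + 5 = 7, cheaper than 1 -> 2 -> 4 at 1 + 10 = 11.
print(graph[1][3] + graph[3][4])  # 7
print(graph[1][2] + graph[2][4])  # 11
```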
Temporarily assign C(A) = 0, and give all other nodes a temporary value of infinity.
C(x) means the current cost of getting to node x, so C(A) is the cost of node A.
The following graph has changed a little from the one shown at the beginning. The nodes no longer have labels, apart from our starting point Node A and our goal Node B.
For each temporary node y adjacent to x, make the following comparison:
if C(x) + W(x, y) < C(y), then
change C(y) to C(x) + W(x, y), and
assign y the parent x.
Here, W(x, y) is the weight of the edge between x and y.
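This comparison step can be sketched in Python as a small "relax" function (the node names and dictionary layout here are my own illustrative choices):

```python
import math

def relax(cost, parent, weights, x):
    """One pass of the comparison: update every neighbour y of the current node x."""
    for y, w in weights[x].items():
        if cost[x] + w < cost.get(y, math.inf):   # C(x) + W(x, y) < C(y)?
            cost[y] = cost[x] + w                 # cheaper route found through x
            parent[y] = x                         # remember how we got there

# Starting at A with C(A) = 0; every other cost is implicitly infinity.
cost, parent = {"A": 0}, {}
relax(cost, parent, {"A": {"top": 1, "bottom": 2}}, "A")
print(cost)    # {'A': 0, 'top': 1, 'bottom': 2}
print(parent)  # {'top': 'A', 'bottom': 'A'}
```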
There are two temporary nodes adjacent to our current node, so calculate their cost values based on the current node's value plus the weight of the connecting edge. Assign that value to the temporary node only if it is less than the value that is already there. So, to clarify:
The top node is adjacent to the current node and has a cost of infinity. 0 (the current node's value) + 1 (the weight of the connecting edge) = 1, which is less than infinity, so we change its value from infinity to 1 (for now).
Now, do the same calculation for the next adjacent node, which is the bottom node. The value is 0 + 2 = 2, which is also less than infinity. Here is the graph after step 2:
Now, we go back to step 1. From this point forward, I'll be using the term iteration to describe our progression through the graph.
We're back at the first step. We have two nodes to look at: the top node with cost 1 and the bottom node with cost 2.
The top node has a cost of 1, which is less than 2, so we set it as permanent and set it as our current node. It is important to keep in mind that the bottom node still has a temporary cost assigned to it. This temporary cost is what allows the algorithm to find the actual cheapest route.
Find the cheapest temporary node; its cost becomes permanent and it becomes our current node.
The yellow highlight indicates the node we are currently on, and the green text means the node cost is permanent (no changing). The nodes with white text for their costs are temporary nodes.
Assign cost values. There is only one temporary node adjacent to our current node. Its current value is infinity; 1 + 10 = 11 is less than infinity, so we assign 11 as its temporary cost value.
This is not yet guaranteed to be the shortest path from Node A to Node B. The algorithm traverses all nodes in the graph, so in the end you get the shortest path from the start node to every other node, which can save time when you need several routes.
We then return to step 1.
Ok, so now we look again at the temporary nodes to see which has the lowest value. Even though we calculated the temporary value of B to be 11, we are not done because that value might change (in this case, it will definitely change).
Pick the cheapest temporary node, make it permanent, set it as our current node, and assign its parent. We have two remaining temporary nodes, with costs of 2 and 11. 2 is lower, so we pick the bottom node, make its cost permanent, and set it as our current node. Its parent is Node A, as demonstrated by the arrow in the graph.
Assign cost values to temporary nodes adjacent to the current node. Again, like in the previous iteration, there is only one node to do a cost calculation on, as there is only one temporary node adjacent to the current node. This adjacent node is NodeB. So, we check to see if 2 + 5 < Node B’s temporary cost of 11. It is, so we change Node B from 11 to 7.
Return to Step 1.
Choose the cheapest temporary node. There is only one temporary node remaining, so we pick it, set it as permanent, set it as our current node, and set its parent.
Assign costs. There are no temporary nodes adjacent to Node B (there are permanent nodes, but we don't check those).
Return to step 1
Choose the cheapest temporary node. If none exists, or C(x) = infinity, then stop. There are no more temporary nodes and no nodes have a value of infinity, so we're done. The algorithm is over, and we have our shortest path from A to B, and in fact from A to every other node in the graph.
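The whole walkthrough above can be sketched as a Python implementation of Dijkstra's algorithm using a priority queue. The node names "top" and "bottom" are my own labels for the unlabeled nodes in the figures:

```python
import heapq

def dijkstra(weights, start):
    """Dijkstra's algorithm; returns the cost and parent of every reachable node."""
    cost = {start: 0}                 # C(start) = 0, everything else infinity
    parent = {}
    done = set()                      # nodes whose cost is permanent
    heap = [(0, start)]
    while heap:
        c, x = heapq.heappop(heap)    # cheapest temporary node
        if x in done:
            continue                  # stale entry, already made permanent
        done.add(x)                   # its cost is now permanent
        for y, w in weights[x].items():
            if c + w < cost.get(y, float("inf")):
                cost[y] = c + w       # cheaper route found through x
                parent[y] = x
                heapq.heappush(heap, (c + w, y))
    return cost, parent

# The walkthrough graph: A -> top (1), A -> bottom (2), top -> B (10), bottom -> B (5).
weights = {
    "A":      {"top": 1, "bottom": 2},
    "top":    {"A": 1, "B": 10},
    "bottom": {"A": 2, "B": 5},
    "B":      {"top": 10, "bottom": 5},
}
cost, parent = dijkstra(weights, "A")
print(cost["B"])     # 7
print(parent["B"])   # bottom, i.e. the route is A -> bottom -> B
```

Following the parent links backwards from B gives bottom, then A, which is exactly the path the walkthrough found.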
Finite State Machines
A finite-state machine, or FSM for short, is a model of computation based on a hypothetical machine made of one or more states. Only a single state can be active at the same time, so the machine must transition from one state to another in order to perform different actions.
They are useful for implementing A.I. logic in games. They can easily be represented as a graph where the nodes are the states and the edges are the transitions, which lets a developer see the big picture and tweak and optimise the final result.
Here is an example given through an image:
As you can see, an FSM can be represented by a graph, where the nodes are the states and the edges are the transitions. Each edge has a label informing when the transition should happen, like the "player is near" label in the figure above, which indicates that the machine will transition from wander to attack if the player is near.
This is, in a way, the brain of the character: it specifies what to do and when.
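The wander/attack machine from the figure can be sketched in just a few lines of Python (the class and method names are my own illustrative choices):

```python
# A minimal FSM for the wander/attack example above (names are illustrative).

class EnemyFSM:
    def __init__(self):
        self.state = "wander"         # only one state is active at a time

    def update(self, player_near):
        # Transitions: the edge labels from the graph above.
        if self.state == "wander" and player_near:
            self.state = "attack"
        elif self.state == "attack" and not player_near:
            self.state = "wander"
        return self.state

enemy = EnemyFSM()
print(enemy.update(player_near=False))  # wander
print(enemy.update(player_near=True))   # attack
print(enemy.update(player_near=False))  # wander
```

Adding a new behaviour, say "flee when hurt", only means adding a state and its transitions, which is why this scales better than a pile of if/else statements.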
Why use Finite State Machines?
They are easy to implement and manage: Finite State Machines are the first step towards elegant A.I. development. This programming model is one of the simplest concepts to implement and to manage: it is close to using basic if/else statements, yet much more flexible. It makes your A.I. easy to modify and extend, even when it gets big.
They cover many A.I needs: Although they have their limits, FSMs will cover most of your AI needs.
They can easily model a variety of enemies, characters, and even objects (for instance, an enemy with 3 states: “Fire”, “Hide” and “Run”).
They are accessible and intuitive: FSMs aren’t abstract at all: it’s quite natural to think about an artificial intelligence as an entity with a set of behaviours or states.
What about A.I applied to Computer Vision?
Now that I have talked a lot about A.I. in gaming, I would like to talk about something else related to A.I. that is also really interesting: Artificial Intelligence applied to Computer Vision, or as we might call it, A.I. Vision.
Today, Computer Vision gives computers the ability to understand what they are seeing, and to act smartly on that knowledge.
This leads to A.I. Vision, which covers different topics. For example, A.I. can use computer vision to communicate with humans. GRACE is a robot that could communicate with humans in a limited way, recognising her surroundings in order to achieve a specific goal. For example, GRACE made her way to a conference through a lobby and up an elevator by communicating with humans: she understood that she had to wait in line, and asked others to press the elevator button for her.
Another example is handwriting or drawing recognition, like in this photo:
The last one is, in my opinion, the most interesting: passive observation and analysis.
It means using computer vision to observe and analyse certain objects over time.
During my interview for the High Tech Systems Honors Track, I talked about the idea of a home-security drone. Imagine a drone stationed on the roof of a house. A robber enters: what should the drone do? Of course, it has to go after him, but it also has to keep a certain distance while following him (assuming the robber is running or moving). If it does not keep its distance, the robber could break the drone. Now, one other question that goes deeper:
Which distance? It actually depends on the weapon: if the drone wants to drop an electrical net on the robber's head, then the distance must be kept along the Y-axis. However, if the drone wants to use a (non-lethal) gun, it must keep a distance along the X-axis or Z-axis in order to shoot like a human being would.
I think A.I Vision is really interesting, and it is certainly not the only type of applied A.I.
Anyone interested should check the following website: www.aitopics.org
Ahmed Ahres, 21.