In my first blog post, I talked about A.I. in a very broad way, since I was still learning the basics myself. After weeks of extra research, I have decided to go into more detail today.
When adding A.I. to a game, there are tons of techniques to choose from. In this post, I will talk about two common ones: steering and finite state machines.
Steering behaviours aim to help autonomous characters move in a realistic manner, by using simple forces that are combined to produce life-like, improvisational navigation around the characters' environment.
In my first post, I talked about pathfinding A.I. Steering is closely related to it, using linear algebra and physics, but the two solve different problems: a pathfinding algorithm such as the famous A* computes a route through the world, while steering produces the moment-to-moment movement that follows it.
Here are some examples of steering behaviours:
As you can see, the basics of steering are built on mathematical vectors (and physics).
Each behaviour is represented as a force vector. Giving more weight to one force or another, by scaling its vector up, makes the entity move more in the direction of that behaviour.
For example, set an entity as a seeker so it follows a position. Then add a sheep entity that should evade wolf entities: give the sheep an evade force and the wolves a pursue force, and perhaps add a wander force so the sheep drifts toward a location when no wolf is near.
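To make this concrete, here is a minimal sketch in Python of how such forces could be combined. The `seek`, `evade` and `combine` helpers, the tuple-based vectors and the weight values are all my own illustrative choices, not a fixed API:

```python
import math

def seek(pos, target, max_speed):
    """Steering force pointing from the entity toward the target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy) or 1.0   # avoid dividing by zero
    return (dx / dist * max_speed, dy / dist * max_speed)

def evade(pos, threat, max_speed):
    """Steering force pointing directly away from a threat."""
    fx, fy = seek(pos, threat, max_speed)
    return (-fx, -fy)

def combine(forces_with_weights):
    """Weighted sum of force vectors: a bigger weight means that
    behaviour dominates the entity's movement."""
    x = sum(w * f[0] for f, w in forces_with_weights)
    y = sum(w * f[1] for f, w in forces_with_weights)
    return (x, y)

# A sheep that strongly evades a nearby wolf but is gently drawn to grass:
sheep, wolf, grass = (0.0, 0.0), (3.0, 4.0), (10.0, 0.0)
force = combine([
    (evade(sheep, wolf, 1.0), 2.0),   # evading the wolf matters most
    (seek(sheep, grass, 1.0), 0.5),   # mild pull toward the grass
])
```

Feeding the resulting force into the entity's velocity each frame is what produces the smooth, life-like movement.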
Each steering behaviour of course has a more complex side, such as different implementations in different programming languages. I will not cover that complexity here, but in future posts.
Now, as I said, steering usually works hand in hand with pathfinding. Let's get a bit more technical by talking about Dijkstra's algorithm.
First, what is an algorithm? It is a step-by-step procedure designed to perform an operation, one that leads to the sought result if followed correctly.
In our daily life, preparing for school in the morning is an example of using an algorithm.
We first wake up (step 1), then eat breakfast (step 2), then put clothes on (step 3), and go to school (step 4).
Dijkstra was a Dutch computer scientist who studied and taught at my university. He is famous for his algorithm, which can find the shortest path from a point A to a certain point B.
Another famous algorithm that has the same purpose is A*.
Finding the shortest route from one object to another is a very common problem when developing game A.I., and many solutions exist. In 2D grid/tile-based games, perhaps the most common one is A*, with Dijkstra's also being quite good. Depending on the complexity of the game, Dijkstra's algorithm can be nearly as fast as A*. A* generally performs better but is slightly more complex, so I will be discussing the fundamentals of Dijkstra's algorithm in this post.
From now on, I will be talking only about graphs, which have nodes and edges:
Here, the blue circles labeled 1, 2, 3, 4 are nodes. The edges labeled 1, 2, 5, 10 are paths between the nodes. The numbers on the edges are weights that show how long each path is.
For instance, it takes less time to go from node 1 to node 2 than from node 1 to node 3, because 1 < 2.
Graphs always have labeled nodes, but the edges are not necessarily weighted; the weights are there because this kind of algorithm needs them.
Now, we want to find the shortest path from node 1 to node 4. This is mentally easy: we can all infer that going from node 1 to node 3, then from node 3 to node 4, is the solution, because (2 + 5) < (1 + 10). However, computers do not know that by themselves, so we need to implement an algorithm. Moreover, what if the graph had, say, 1000 nodes? Would it still be mentally easy? No. In that case, which is the usual case in game development, the computer does it for us.
Temporarily assign C(A) = 0, and give every other node a temporary value of infinity. Here, C(x) means the current cost of getting to node x, so C(A) is the cost of A.
The following graph has changed a little from the one shown at the beginning. The nodes no longer have labels, apart from our starting point Node A and our goal Node B.
For each temporary node y adjacent to the current node x, make the following comparison:
if C(x) + W(xy) < C(y), then
C(y) is changed to C(x) + W(xy)
Assign y to have parent x.
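As a sketch, the comparison above could look like this in Python. The `cost`, `parent` and `relax` names are just my illustrative choices (relaxation is the standard term for this update):

```python
def relax(x, y, weight, cost, parent):
    """Update y's temporary cost if going through x is cheaper.
    `cost` maps node -> current cost, `parent` maps node -> predecessor."""
    if cost[x] + weight < cost[y]:
        cost[y] = cost[x] + weight
        parent[y] = x

cost = {"A": 0, "top": float("inf")}
parent = {}
relax("A", "top", 1, cost, parent)   # 0 + 1 < infinity, so top becomes 1
```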
There are two temporary nodes adjacent to our current node, so calculate their cost values based on the current node's value + the cost of the adjacent node. Assign that value to the temporary node only if it's less than the value that's already there. So, to clarify:
The top node is adjacent to the current node and has a cost of infinity. 0 (the current node's value) + 1 (the weight of the connecting edge) = 1, which is less than infinity, so we change its value from infinity to 1 (for now).
Now, do the same calculation for the next adjacent node, which is the bottom node. The value is 0 + 2 = 2, which is also less than infinity. Here it is after step 2:
Now, we go back to step 1. From this point forward, I'll be using the term iteration to describe our progression through the graph.
We're back at the first step. We have two nodes to look at: the top node with cost 1 and the bottom node with cost 2.
The top node has a cost of 1, which is less than 2, so we set it as permanent and set it as our current node. It is important to keep in mind that the bottom node still has a temporary cost assigned to it. This temporary cost is what allows the algorithm to find the actual cheapest route.
Find the cheapest node, make its cost permanent, and set it as our current node.
The yellow highlight indicates the node we are currently on, and the green text means the node cost is permanent (no changing). The nodes with white text for their costs are temporary nodes.
Assign cost values. There is only one temporary node adjacent to our current node. Its current value is infinity; 1 + 10 = 11 is less than infinity, so we assign 11 as its temporary cost.
This is not necessarily the shortest path from Node A to Node B yet. The algorithm traverses all nodes in the graph, so in the end you get the shortest path from the start node to every other node, not just to B.
We then return to step 1.
Ok, so now we look again at the temporary nodes to see which has the lowest value. Even though we calculated the temporary value of B to be 11, we are not done because that value might change (in this case, it will definitely change).
Pick the cheapest temporary node, make it permanent, set it as our current node, and assign its parent. We have two remaining temporary nodes, with costs of 2 and 11. 2 is lower, so we pick that node, make its cost permanent, and set it as the current node. Its parent is Node A, demonstrated by the arrow.
Assign cost values to temporary nodes adjacent to the current node. Again, like in the previous iteration, there is only one node to do a cost calculation on, as there is only one temporary node adjacent to the current node. This adjacent node is NodeB. So, we check to see if 2 + 5 < Node B’s temporary cost of 11. It is, so we change Node B from 11 to 7.
Return to Step 1.
Choose the cheapest temporary node value. There is only one temporary node remaining, so we pick it, set it as permanent, set it as our current node, and set its parent.
Assign costs. There are no temporary nodes adjacent to Node B (there are permanent nodes, but we don't check them).
Return to step 1
Choose the cheapest temporary node. If none exists, or C(x) = infinity, then stop. There are no more temporary nodes and no nodes have values of infinity, so we're done. The algorithm is over, and we have our shortest path from A to B, and also from A to every other node in the graph.
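Putting all the steps together, here is one possible Python implementation, using a priority queue to find the cheapest temporary node quickly. The function and variable names are my own; this is a sketch of the idea rather than production code:

```python
import heapq

def dijkstra(graph, start):
    """Return (cost, parent) maps for the cheapest paths from `start`.
    `graph` maps each node to a list of (neighbour, edge_weight) pairs."""
    cost = {node: float("inf") for node in graph}
    cost[start] = 0
    parent = {start: None}
    heap = [(0, start)]           # the cheapest temporary node sits on top
    permanent = set()
    while heap:
        c, x = heapq.heappop(heap)
        if x in permanent:
            continue              # stale entry: x was already made permanent
        permanent.add(x)
        for y, w in graph[x]:
            if c + w < cost[y]:   # the comparison from the steps above
                cost[y] = c + w
                parent[y] = x
                heapq.heappush(heap, (cost[y], y))
    return cost, parent

# The four-node graph from the beginning of the post:
graph = {
    1: [(2, 1), (3, 2)],
    2: [(1, 1), (4, 10)],
    3: [(1, 2), (4, 5)],
    4: [(2, 10), (3, 5)],
}
cost, parent = dijkstra(graph, 1)
# cost[4] == 7, reached via node 3, matching the walkthrough above
```

Following the `parent` map backwards from node 4 (4 → 3 → 1) reconstructs the same shortest path the walkthrough found.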
Finite State Machines
A finite-state machine, or FSM for short, is a model of computation based on a hypothetical machine made of one or more states. Only a single state can be active at a time, so the machine must transition from one state to another in order to perform different actions.
They are useful for implementing A.I. logic in games. They can easily be represented as a graph where the nodes are states and the edges are transitions, which allows a developer to see the big picture and to tweak and optimise the final result.
Here is an example given through an image:
As you can see, an FSM can be represented by a graph, where the nodes are the states and the edges are the transitions. Each edge has a label saying when the transition should happen, like the "player is near" label in the figure above, which indicates that the machine will transition from wander to attack if the player is near.
This is kind of the brain of the character: it specifies what to do and when.
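A minimal sketch of such a brain in Python could look as follows. The wander/attack states and the "player is near" event come from the figure's example; the `StateMachine` class, the "flee" state and the event strings are my own illustrative design:

```python
class StateMachine:
    """Minimal FSM: states are names, transitions map (state, event) -> state."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        # Stay in the current state if no transition matches the event.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# The wander/attack example from the figure, plus a hypothetical flee state:
enemy = StateMachine("wander", {
    ("wander", "player is near"): "attack",
    ("attack", "player is far"):  "wander",
    ("attack", "low health"):     "flee",
})
enemy.handle("player is near")   # wander -> attack
```

Adding a new behaviour is then just a matter of adding states and transition entries, which is part of why FSMs stay manageable as the A.I. grows.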
Why use Finite State Machines?
They are easy to implement and manage: Finite State Machines are the first step towards elegant A.I. development. This programming model is one of the simplest concepts to implement and to manage: it is close to using basic if/else statements, yet much more flexible. It makes your A.I. easy to modify and extend, even when it gets big.
They cover many A.I needs: Although they have their limits, FSMs will cover most of your AI needs.
They can easily model a variety of enemies, characters, and even objects (for instance, an enemy with 3 states: “Fire”, “Hide” and “Run”).
They are accessible and intuitive: FSMs aren’t abstract at all: it’s quite natural to think about an artificial intelligence as an entity with a set of behaviours or states.
What about A.I applied to Computer Vision?
Now that I have talked about A.I in Gaming a lot, I would like to talk about something else related to A.I that is also really interesting: Artificial Intelligence applied to Computer Vision, or should we call it A.I Vision.
Today, Computer Vision gives computers the ability to understand what they are seeing, and to act smartly on that knowledge.
This leads to A.I. Vision, which covers different topics. For example, A.I. can use computer vision to communicate with humans. GRACE is a robot that could communicate with humans well enough to recognise her surroundings and achieve a specific goal: she once attended a conference by making her way through a lobby and up an elevator while communicating with humans, which included understanding that she had to wait in line and asking others to press the elevator button for her.
Another example is handwriting or drawings recognition, like in this photo:
The last one is, in my opinion, the most interesting: passive observation and analysis.
It means using computer vision to observe and analyse certain objects over time.
During my interview for the High Tech Systems Honors Track, I talked about the idea of a home-security drone. Imagine a drone stationed on the roof of a house, and a robber enters: what should the drone do? Of course it has to go after him, but it also has to keep a certain distance while following him (assuming the robber is running or moving). If it does not keep its distance, the robber could break the drone. Now, one other question that goes deeper:
Which distance? It actually depends on the weapon: if the drone wants to drop an electrical net onto the robber's head, then the distance must be kept along the Y-axis. However, if the drone wants to use a (non-lethal) gun, it must keep a distance along the X-axis or Z-axis in order to shoot the way a human being would.
I think A.I Vision is really interesting, and it is certainly not the only type of applied A.I.
Any person interested should check the following website: www.aitopics.org
Being a huge gaming and psychology enthusiast, I have decided to talk about the motivational psychology behind video games.
Why do we play them? Is it because they are fun? No, it goes deeper than that.
In this post, I will try to explain why we play video games, and a lot of people will realise, through this study, that it applies to them more than they think.
Intrinsic motivation is one of the major concepts in understanding the science behind using gamification and game-based learning as engagement tools.
The self-determination theory (SDT) suggests that competence, autonomy, and relatedness are the three needs that stimulate the psychological health and well-being of a person.
Let me now talk more about this:
Autonomy: It represents the decision-making ability and personal agency.
This human need is about giving the player the ability to make decisions that could affect the outcome of the game and even affect the storyline. Players should be able to shape the game’s narrative through decisions.
It is an important part of game design since it gives the player a certain freedom to act and the feeling that he is in direct control of the character. It enhances the immersion and the player’s experience in the game.
One great example is Firewatch. During the entire game, the player controls a walkie-talkie and speaks to his supervisor, Delilah. It is (almost) always possible to choose what to say to Delilah, picking from three different options. The decisions made by the player directly affect the relationship developed with her during the game. For instance, Delilah could fall in love with your character if the player chose to be playful with her in previous conversations. Giving the player this ability to choose keeps him immersed and intrinsically motivated.
Control: This is also called Competence. It represents the sense of efficacy.
It is about giving the player the feeling that he does something successfully and efficiently.
The intrinsic motivation comes from the satisfaction of mastering the game as well as the pursuit of mastery.
It goes without saying that the more time is spent playing, the more efficient the player becomes.
Indeed, with time, experience and mastery are built. In order to keep satisfying the need for Competence, the game's difficulty has to increase proportionally as the player gets better and better.
“Flow” is an important concept in game design, showing how difficulty must increase with skill to keep the player intrinsically motivated. On one hand, if the challenges underwhelm players, they result in boredom, because the game is simply too easy. On the other hand, if they overwhelm players, they lead to anxiety, since the game is too hard. The balance that maintains this proportion is called flow.
2D platform games are an example here of satisfying the competence need.
For instance, if we consider Super Mario 2D Land, the game starts very easy, giving time to the player to master the settings, the commands, and the game itself.
The player then feels efficient: he is able to accomplish a task successfully, and intrinsic motivation develops. With time, the player gains more and more experience and mastery, which is why the enemies multiply and jumps get harder as the distances increase.
Relatedness: It is about being (socially) connected and associated.
It represents the desire to connect and interact with others. It could also be described as the social connections made through the game. The intrinsic motivation comes from creating social connections.
This is an important part for game design because human beings are naturally social beings. Developing connections and getting to know new people is something that (almost) always makes people happy. Giving the ability to the player to create these connections through a video game will foster his well-being and enhance his experience as well as his intrinsic motivation.
A lot of MMORPG games, such as World of Warcraft, illustrate Relatedness. One other original example is Keep Talking And Nobody Explodes. It is a puzzle game that requires two players: one player has a bomb on the screen with a timer, while the other has a manual explaining how to defuse it. The game is based on communication: the person holding the manual has to explain meticulously to the other how to defuse the bomb within the time limit. This game is all about cooperation and communication; it enhances social connectedness and therefore relatedness.
Our game, called Castlenova, is a stealth/infiltration game. It is about getting to the destination without being noticed by the guards.
It focuses on two of the fundamental human needs: Autonomy and Control. Indeed, since the game is based on stealth/infiltration, these are the main needs to satisfy in this type of game. Relatedness could be added, but it is not the priority.
Autonomy is given a lot of importance in our game. Indeed, the city is freely explorable and open. It is possible to reach the final destination by choosing between multiple paths. This does not really influence the narrative, but it still gives the player a sense of choice. Moreover, it is possible to choose whether to kill the enemies silently or to rely on infiltration, using objects smartly in order to avoid the guards. This creates a separation between achiever and killer player types. Of course, depending on the path taken, the enemies will have different pathways. Castlenova really gives a sense of choice through the style of play and an open map.
Control is about mastering the game. We try to foster it by increasing the difficulty as the player progresses: mastering the way you play early helps you later in the game. We will need to get the right flow by routing the guards so that you encounter more and more of them closer to the end. The game environment also becomes less forgiving and less open; there can be a guard literally around every single corner searching for your character.
As said previously, one of the SDT dimensions excluded in Castlenova is relatedness. It is mainly a solo game that does not require social interactions and does not create any social connections.
The question of whether adding it would make the game better is subjective. Stealth games are usually played solo (such as Metal Gear Solid); people prefer being alone, making smart moves against a whole army, which enhances competence. On the other hand, a mode where two players play one level together could be a great idea: infiltrating as a team requires communication and strategy, which would foster social connectedness and therefore relatedness.
The main benefit of applying SDT to game analysis is having a theoretical framework for identifying how much of the motivation for a particular game is intrinsic, as opposed to extrinsic. Not only can the type of motivation and need satisfaction be quantified and studied with SDT, it is also possible to make further decisions about including or excluding certain elements of the game under study, based on how much they contribute to the type of motivation we are aiming for. Put differently, by applying SDT and knowing the literature behind it, we have the option of leaving out certain elements of a game, such as extreme violence, if they are not needed for the story or some other element, rather than assuming they must be present to make the game more motivating or more immersive.
On the other hand, the limitations of applying SDT to analyse a game are those of any other framework or theoretical model. That is to say, SDT does not capture the whole essence of games, and we may become overly focused on satisfying and amplifying competence, autonomy and relatedness while losing sight of the bigger picture of the game in question. This is not a unique disadvantage of SDT, but a more fundamental issue with attempting to dissect, analyse and reproduce human experiences and emotions in different types of media. As such, simply amplifying the dimensions of SDT may not lead to an artistically congruent game, even if it is statistically more motivating than a control game. As in all creative endeavours, art and ingenuity play a big role in game development.
I strongly believe that analysing is an extremely important skill to develop whether it is about life choices, art, people, games or movies. This is why I have decided to publish my part of the analysis of Firewatch done with friends at the university.
This game is not like the others; it does not involve killing or levels, it is about the story and exploring. A true masterpiece.
Firewatch is a first-person adventure video game developed by Campo Santo and officially released on February 9, 2016. It was created using the Unity3D game engine for PlayStation 4, Microsoft Windows, Linux and Mac OS.
The game takes place in 1989 in North America. The player takes control of Henry, a fire lookout in the Wyoming wilderness who is assigned a special tower. While exploring the area, Henry discovers indications of mysterious occurrences in the surroundings that seem to be related to the destruction of his tower. In addition, he notices a shadowy figure that occasionally appears to watch him.
The only possible communication method is a walkie-talkie connecting Henry to his sarcastic supervisor Delilah, which will be discussed later in this analysis.
One of the main reasons Firewatch stands out from other games is its very beginning. The developers chose an original way to immerse the player in the story. Indeed, as soon as the game starts, the player either chooses what happens next or is the one answering Julia, Henry's wife, who suffers from Alzheimer's disease.
The red sentence is what the player is supposed to interact with. Some of the conversation interactions are a choice between multiple answers.
The game designer is actually interacting directly with the player by using the pronoun "you".
This choice changes the rest of the conversation, as seen in the next figure.
The choices in the conversations of the previous screenshots were mostly about getting the player acquainted with the story. However, it goes deeper than this. Throughout the entire game, the player is supposed to speak to Delilah. Through these conversations, secrets about Julia are revealed and the story gets clearer and clearer with time. Most of the time, the player can choose what to answer Delilah. The answers allow the character to mirror the player's personality, and they define the relationship between Henry and Delilah.
This only makes the game more realistic and more immersive, as it feels like the player's own life inside the game.
Looking at the bigger picture, Delilah's character could be seen as the personification of the unattainable escape into virtual reality that many players seek in video games. Her character feels so real that the player falls in love and leaves his ring on the table. However, no matter how close he gets to her, or how much he does to please her, Delilah will always be far away, and there is no way for Henry to touch her or see her face.
Something else that makes Firewatch fun is that the player is able to interact with nearly any object in the game.
For instance, it is possible to interact with books, notes, ropes, alcoholic drinks and much more. Some of these objects could be used later in the game. For example, a rope to climb, a book to read or even alcohol to save for later.
The possible interaction solutions are in the bottom right part of the screen.
During the entire game, Delilah gives Henry instructions to follow.
When a clue appears, you need to report it to her in order to get new instructions.
This implies that the player has control over when to get to the next goal, leaving him the option to walk and discover the open world.
Firewatch is artistically tremendous. Undoubtedly, one thing that plays an important role is the audio. As the game starts, relaxing piano music plays, and it seems to draw every player in.
That music does not simply loop: other instruments, such as guitar, appear to create more relaxing melodies. As the player goes alone into the mountains, the audio keeps him in the game.
For instance, when the shadowy figure appears, or when a dramatic event happens, such as the death of a character (at the end of the game), the music instantly changes into a dramatic piece, enhancing the feeling.
Moreover, the artwork gives the feeling that everything is rich, detailed and uniquely designed as if every moment could be a computer screen saver.
The previous figure is an in-game screenshot that shows a sunset.
Analysing the picture shows how realistically detailed the artwork is. For example, as the sunset starts, stars show up progressively and shine more and more as the sun goes down. The reflection of the sun on the water is interesting as well: the part of the water facing the sunlight shines more than the other parts, as in real life. Last but not least, the colors are well chosen. This mix of red, yellow and orange creates the perfect look for a sunset.
This artwork is extremely important for the game itself since it psychologically influences the player as he seems to find a connection with the real world through beautiful environments.
Moreover, at the end of the game, the forest is burned as seen in this figure.
In this in-game screenshot, one cannot help but notice how the trees are burned and how the environment changes and reacts to the burning area. This aesthetic part of the game adds more realism than ever.
What type of player is Firewatch for? It is mainly a game for explorer and socializer player types, because of the extreme importance of interaction. Indeed, the player is connected to the world and interacts with it at nearly every moment, and Delilah makes sociability an important aspect.
If assigned to a unified play style, there is no shadow of a doubt that it would go with Rational/Explorer/Simulationist, as puzzles and theories are main parts of the game, as well as with Idealist/Socializer/Narrativist, since it also involves storytelling and cooperation.
We are currently working on an infiltration/stealth game with my interdisciplinary team using Unity3D. It will be available soon in the "MY PROJECTS" page.
Since I am planning to apply to the High-Tech Systems Honors track at my university, which focuses on applying knowledge from several domains to unmanned aerial vehicles (UAVs), I have decided to talk about them in this new post.
From a young age, I always asked my parents to buy me a helicopter I could control remotely. The only products available then were just small machines.
However, recently, I have noticed how UAVs are booming in tech stores. For instance, electronics stores in Schiphol Airport in Amsterdam and MediaMarkt in Eindhoven have a lot of these huge drones for sale.
One could argue that, like Virtual Reality, they represent an important part of the future generations.
In my opinion, these machines can greatly influence our daily life, since they can be used for many purposes: for instance, filmmaking, package delivery (Amazon), aerial surveying...
These are only a few examples out of tens of others.
Let's talk about how these machines are taking over some of our daily-life tasks.
In Philadelphia, a dry cleaner armed with a DJI Phantom (see the image at the beginning of this post) delivers free dry cleaning to one customer a month. A book rental company in Australia plans to deliver textbooks to college students via drone. A UK restaurant named Yo! Sushi uses drones remote-controlled by waitstaff to deliver burgers precariously to customers’ tables. Domino's is testing pizza delivery by drone.
There is no shadow of a doubt that package delivery is one thing UAVs will take over from humans.
In addition, I think quadcopters could also be an effective way to save lives. One German nonprofit has released a concept called the Defikopter: a copter modded to carry a defibrillator and parachute it down to heart attack victims. The drone, which can travel up to 69.2 kilometers an hour, could provide quick assistance to heart attack victims wherever they happen to be.
One last idea that I have personally been thinking about concerns a home-security UAV.
Indeed, a lot of people buy dogs to protect their house from robbers and then fail to take care of the dog, treating it as a simple object. This behaviour saddens anyone who respects animals.
Hence, one thing a quadcopter could be useful for is the security of our daily life.
For instance, a UAV could be placed on the roof and, after being switched by its owner into a "Security" or "Night" mode, use sensors to detect any robber.
The UAV could, for example, stop the intruder by using an electrical net.
As these machines gain more and more popularity, laws have been introduced:
In this Honors track, the organisers ask the chosen students to pick an application to focus on starting from January. A lot of them, such as Computer Vision and Robotics, are extremely interesting.
Of course, if I have the great opportunity to be chosen, I would pick Artificial Intelligence without any hesitation.
Here is one of the inspiring videos given during the introductory activity of the Honors track.
As I mentioned in my last post, this one will be about Machine Learning.
First of all, let's start by defining it, which I did not do for A.I. since that was straightforward:
According to the online course taught by Stanford University on Coursera, Machine learning is the science of getting computers to learn, without being explicitly programmed for it.
It is the study of computer algorithms that improve automatically through experience and has been central to AI research since the field's inception.
This computer science subfield grew out of work in A.I. and represents new capabilities for computers: for instance, Netflix product recommendations, handwriting recognition, etc.
Before starting to talk about it in gaming, I should explain what it is based on.
Machine Learning makes heavy use of Artificial Neural Networks (ANNs): an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information-processing system. ANNs are mainly used because they offer adaptive learning and self-organisation.
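To give an idea of the building block involved, here is a sketch of a single artificial neuron in Python: a weighted sum of inputs squashed into a value between 0 and 1. The inputs, weights and bias are made-up illustrative numbers, not from any real network:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum squashed by a sigmoid.
    Learning means nudging the weights so the output gets closer to the
    desired one; that adjustment over many examples is the adaptation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# e.g. two inputs (enemy nearby? gap ahead?) feeding a "jump" neuron
output = neuron([1.0, 0.0], [2.5, -1.0], -1.0)   # a value between 0 and 1
```

A network is many of these neurons wired together in layers, which is where the "connections" mentioned below come from.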
What about ML in Game Development ?
Below this text, you will find a video showing how a machine learned to play Mario using neural networks. It explains how ANNs actually simulate the human brain, and how the machine keeps learning by playing until it finishes a level without dying.
You can see how the connections in the neural network make the machine learn what to do and when, depending on the environment (an enemy, the need to jump, etc.).
Let's talk about larger games. Here are some examples of implemented ML:
However, these are only a few major examples, and there is not much out there in games such as FIFA, Rocket League or Call of Duty. Why so?
In my opinion, it is simply a design choice. It is not really a question of whether it is difficult, but rather of whether it should be done at all, and how it would impact the player's experience.
Would you enjoy it if the computer quickly learned how to counter your particular strategies, and beat you? Of course not.
It is better to simply make the enemy "appear" intelligent because that is all that matters to a player.
Hence, the result usually gets much better with simple algorithms. Especially when you think about these big titles, the time spent implementing Machine Learning could be put to better use by delivering better graphics, gameplay etc..
At this point, applying ML to games is still rare. However, with Virtual Reality booming and new features coming every year, I guess that sooner or later it will be used in games much more often.
Let's talk about AI in Game Development.
How many times do we hear people complaining about games because they are not realistic in graphics and A.I? Quite often.
I was playing Pro Clubs in FIFA 16 last night with some friends and could only notice how badly the bots played. Indeed, they were passing randomly and even giving the ball to the opponents. Hence, I decided to start my blog with this topic.
First of all, AI technology provides solutions to an increasing demand to add realistic, intelligent behaviour to the virtual creatures that populate a game world. As game environments become more complex and realistic, they offer a range of excellent testbeds for fundamental AI research.
Contrary to what we all think, video game A.I. is not about intelligence; it is about creating a realistic and fun experience.
Moreover, A.I has matured into one of the pillars of modern game development. Of course, the quality of the A.I can make or break a game.
One of the games that revolutionised A.I was Metal Gear Solid 2 (2001): Enemies could hear footsteps, have a 45-degree field of vision, move their heads left and right, and behave in a more sophisticated manner when searching for intruders. This logic is still followed in the recent MGS games.
When developing a simple game, implementing A.I. is not necessarily hard. For instance, in my WAVE game, there is a green enemy that appears to be smart.
All it does is calculate the distance between itself and your player using their (x, y) coordinates, and continuously decrease it, no matter where your player goes.
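That chasing behaviour could be sketched like this in Python. The positions, the speed value and the `chase_step` name are illustrative, not the actual WAVE code:

```python
import math

def chase_step(enemy, player, speed):
    """Move `enemy` a fixed step straight toward `player`,
    continuously shrinking the distance between them."""
    dx, dy = player[0] - enemy[0], player[1] - enemy[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return player                     # close enough: land on the player
    return (enemy[0] + dx / dist * speed,
            enemy[1] + dy / dist * speed)

enemy = (0.0, 0.0)
for _ in range(3):                        # the gap shrinks every frame
    enemy = chase_step(enemy, (10.0, 0.0), 2.0)
# enemy is now at (6.0, 0.0): 4 units away instead of 10
```

Calling this once per frame is enough to make the enemy look relentless, even though no real "intelligence" is involved.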
That is an example of Scripting: without hesitation the most used kind of game A.I. today. Just think of it as an "If... then..." statement. For instance: if the player hides behind a wall, wait two seconds and throw a grenade.
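That kind of rule translates almost directly into code. Here is a tiny Python sketch of the grenade example; the function name, the two-second threshold and the action strings are all illustrative:

```python
def scripted_reaction(player_hidden, seconds_hidden):
    """Plain scripting: one hard-coded if/then rule."""
    if player_hidden and seconds_hidden >= 2.0:
        return "throw grenade"
    return "keep shooting"
```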
Other types of A.I:
Random Scripting: It looks a lot like Scripting with more variety added.
Think of it as an "If... then... or..." statement. For instance: if the player hides behind a wall, wait two seconds and throw a grenade, or rush in for a melee attack, or take cover.
Behavioural or Character-Based Scripting: it is random scripting combined with character types. For example, if a soldier is offensive, its action probabilities could be tweaked to 25% grenade attacks, 70% rushing and 5% taking cover. If the soldier is instead defensive, it could be 35% grenade attacks, 5% rushing and 60% taking cover.
Character-based scripting is often adjusted during game balancing.
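One way to sketch character-based scripting in Python is a probability table per character type. The numbers are the ones from the soldier example above; the `PROFILES` table and `pick_action` helper are my own illustrative names:

```python
import random

# Action probability tables per character type (from the soldier example)
PROFILES = {
    "offensive": {"grenade": 0.25, "rush": 0.70, "cover": 0.05},
    "defensive": {"grenade": 0.35, "rush": 0.05, "cover": 0.60},
}

def pick_action(profile, rng=random):
    """Random scripting with character types: draw one action
    according to the soldier's probability table."""
    actions = list(PROFILES[profile])
    weights = [PROFILES[profile][a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

action = pick_action("offensive")   # most often "rush"
```

Balancing the game then becomes a matter of tuning the numbers in the table, without touching the decision logic itself.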
Pathfinding - Getting a character from point A to point B.
3D terrain and cover are huge dilemmas in modern A.I. programming. As a programmer, you need to keep in mind where the player is, whether he is firing, and whether the character should fire while moving or move as fast as possible. It is more advanced scripting.
Emergent - The game actually learns from your actions.
The strategy game "Black & White" is a good example. This could also be assigned to Machine Learning, which I will be discussing in my next blog post.
Ahmed Ahres, 23.