Category Archives: Assignments

Behaviour tree

There are many different behaviour trees out there: some shown to us by teachers, some posted online by people working on major titles. By the looks of it, though, they all do basically the same thing. My AI will need a behaviour tree, so the first thing I'll be doing is making a list of all the things the AI will need to do. I'll start with the bare minimum for combat (without capture the flag). The basic flow of the AI according to the brief is as follows:

  1. The AI stands still for a random count ("Idle mode")
  2. After the count is up, the AI picks a random nav point and creates a path with A*
  3. The AI then walks the path and goes back to Idle mode, and this is repeated
  4. If the AI sees an enemy during any of these steps, it switches to Attack mode
  5. In Attack mode the AI walks directly towards the enemy, shooting his gun
  6. If the AI kills the enemy, he switches back to Idle mode
  7. If the AI dies, he switches to Die mode
  8. In Die mode, the die animation plays, then the AI respawns and the process repeats

Broken down into smaller actions, the AI's tree would look something like the following:

  • Guard (selector)
    • ChooseRandWait
    • StandGuard -> complete
  • Hunt
    • CreateRandomPath
    • WalkPath (check for enemies -> Attacking) -> complete
  • Attacking
    • FacePoint
    • WalkPath
  • Die
    • Die
    • Dead

I may refine these even more but this is the basic tree I’ll be creating.

For these to work, they will need to be arranged so the tree can choose between them in order of importance. The order would be:

  • Is Dead? Do die sequence (first priority)
  • Can see enemy? (do attack sequence)
  • Has Guard time? (do Idle)
  • Otherwise do roaming (pathfind mode)

To make this tree, I will first make a main “BehaviourNode” class. This will be what all nodes derive from.

From this, I’ll make a “CompositeBehaviourNode” class. This will be used as either sequences or selectors. Each will be assigned on the creation of the object in their constructor.

Finally, I’ll make an “ActionBehaviourNode” class that all my main actions will be derived from.

The reason I've decided on this approach, especially keeping the selector and sequence in one class, is for simplicity's sake. There will be less code and fewer files lying around, and I'm also less likely to forget which is which.

The nodes will all have a method called "Execute". This function takes in deltaTime for its measurement of time. It will also need to access the world somehow. I plan to do this by referencing the Player in the constructor of each BehaviourNode. Normally the best practice for this sort of thing would be a "Puppet" type of object, with the player derived from that, but in the case of this simple game, where there's only one type of actor, I'm going to do it this way.
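
To make the plan concrete, here's a rough C++ sketch of how those classes could fit together. The class names follow the plan above, but the bool return convention, the exact signatures, and the little FixedNode test leaf are my own assumptions for illustration, not the final code:

```cpp
#include <vector>

class Player; // the single actor type in this game

// Base class every node derives from. Execute returns true when the
// node has succeeded / completed this tick (an assumed convention).
class BehaviourNode {
public:
    explicit BehaviourNode(Player* player) : m_player(player) {}
    virtual ~BehaviourNode() {}
    virtual bool Execute(float deltaTime) = 0;
protected:
    Player* m_player;
};

// One composite class doubling as selector or sequence,
// chosen on creation in the constructor as planned above.
class CompositeBehaviourNode : public BehaviourNode {
public:
    enum Type { SELECTOR, SEQUENCE };
    CompositeBehaviourNode(Player* player, Type type)
        : BehaviourNode(player), m_type(type) {}
    void AddChild(BehaviourNode* child) { m_children.push_back(child); }
    bool Execute(float deltaTime) override {
        for (BehaviourNode* child : m_children) {
            bool result = child->Execute(deltaTime);
            // selector: stop at the first child that succeeds
            if (m_type == SELECTOR && result) return true;
            // sequence: stop at the first child that fails
            if (m_type == SEQUENCE && !result) return false;
        }
        return m_type == SEQUENCE; // sequence succeeded / selector exhausted
    }
private:
    Type m_type;
    std::vector<BehaviourNode*> m_children;
};

// hypothetical leaf that always returns a fixed result, just to test the flow
class FixedNode : public BehaviourNode {
public:
    FixedNode(Player* p, bool result) : BehaviourNode(p), m_result(result) {}
    bool Execute(float) override { return m_result; }
private:
    bool m_result;
};
```

A root selector built from these, with children added in the priority order listed earlier (Die, Attack, Guard, Roam), would fall through them in order each tick.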

I strongly believe the best way to learn how to code is to jump in and do it, and then make mistakes so you can write it better again. That’s what school is for. I’m not going to be able to make the perfect behaviour tree no matter how hard I try.

Below is an attempt at a diagram of the structure of the tree if we implemented the capture the flag mode:

20130807030440

And here’s the class diagrams:

20130807034230

I'll also need to create some sort of Blackboard class to communicate with each leaf node, and also so the player knows when to die. I'll probably use it for decorator-style checks too, as I don't plan on writing any decorator classes until I see a fantastic advantage to doing them. For now I just want a working tree so I understand the basic concept.

So on with the code!

Changed AI Assignment

What? Yeah. I changed it a bit. After completing the network project I found new things. People change, Sarah, and so did my code. I’m leaving you.

Unfortunately, the code is mostly in int main(), and the entire project probably should have been bundled into an Application object that had a Renderer to control things like drawing Marv. Because I didn’t do this, the main.cpp file ends up having many globals.

The good news is, the globals aren’t very numerous.

The other good news is, I know what I’m doing now.

The other bad news is, main.cpp is long.

Some people like it long.

So far since this change:

We have Marv on the screen, much like the Network assignment. Except now Marv is smart (sort of). Well, not yet. I still need to do pathfinding and a behaviour tree. Do I know how to make one? No. Will I figure it out? Yes. Because they look awesome.

I also fixed a lot of rendering bugs. The AI assignment had weird things happening to the normals on the walls of the castle. Speaking of the castle walls, I changed the FBX image loading code to take an image name instead of a pointer to the scene, so I can stick all the images belonging to an FBX model into its own folder inside the images folder. This looks nicer. Back to the images: I found some wall and grass textures on the internet, so now the FBX of the game world doesn't fail when trying to load those 2 images.

On next week’s episode: we will create a behaviour tree and an A* path finder thing.

Networking assignment

Plan

For the networking assignment I plan on doing a really simple League of Legends (LoL) style game with multiple players and a set of about 3 different spells.

I'll be using "Arena.fbx" (converted to AIE format to help loading times) and Marv as the character.

Arena level
Arena level preview. The red glitches are the nav mesh (I’ll hide it in game)

The characters will be sort of “Space adventure” rather than the genre of LoL because it suits Marv better.

I’ll be texturing the entire level with a space looking metal texture (maybe tinted blue or something) and the players will just be tinted their colour.

I plan on making it similar to LoL.

20130626004050

So because I hadn't actually played much of it before, I decided to play it, and got dragged into a game with friends (when I say dragged, I just mean easily peer pressured, because I'm a chump and I'm just procrastinating).

The results were a bit sad:

20130626005557

So after enduring that escapade, I figured the game will appear as follows:

  • The camera's Z position and angle will be locked at 45 degrees, looking down at the player
  • The player will move around independently of the camera's X and Z position (the player can move the camera around)
  • The player will use A* to move around a nav mesh when a position is clicked
  • The player will have various abilities assigned to certain keys

The players can attack and do spells. Each spell costs mana and has a cooldown. Each character will have the same spells and attacks as I’m going to focus on the networking rather than the depth of the game.

So this is what the players can do:

  • Level up – players start at level 1 and each level takes twice as many points to reach. You gain more and more XP by killing players, getting hits on players, and not dying for ages (multiplier)
  • Attack – more damage per level (click on enemy)
  • Medkit – heals the player over time, has a cooldown (A) (requires level 2)
  • Throw Mine – explodes when a player touches it and does lots of damage – you get 1 every time you die and respawn (S) (requires level 3)
  • Shoot Destroyer Blaster – does lots of damage depending on your level, has a cooldown (D) (requires level 5)
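
As a quick sanity check on that levelling curve, here's a hypothetical sketch of the doubling cost. The 100 XP base cost is an assumption I've made up for illustration; the design only says each level costs twice as much as the last:

```cpp
// Sketch of the doubling level-up curve: reaching the next level costs
// twice as many points as reaching the current one did.
// baseCost (100 XP) is a made-up placeholder value.
int xpToReachNextLevel(int currentLevel, int baseCost = 100) {
    int cost = baseCost;
    for (int i = 1; i < currentLevel; ++i)
        cost *= 2; // doubles with every level gained
    return cost;
}
```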

Technical:

Movement

  • Right clicking places a target for the player to walk to
  • When a target is placed, the player must perform an A* pathfinding search to get to it
  • When the player has found this path, he will walk towards it node by node
  • Once the player is in the last node's triangle, he will seek the point directly
  • The player will animate and face the direction he's walking
  • I won't worry about player collisions, but if I do, I can check whether there's an enemy in the current tri, then check the distance

Camera

  • The camera will be locked to a good distance and angle
  • When the player moves the mouse pointer within a certain distance of the screen's edge, the camera will move in that direction
  • The camera should have a limit to how far it can be moved in certain directions so the player doesn't get lost (there's no minimap)

Attacking

  • Clicking on an enemy will do a spherical collision test and attacks will be dealt to that enemy
  • The player will send attacks constantly, one after the other, like a gun
  • Each bullet will fly towards the enemy; if it hits (spherical collision), the enemy takes damage
  • The big bullets will move at the same speed but will look bigger
  • The mines will basically be the same as big bullets except they won't move. They will also do the same damage
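
The spherical collision mentioned above boils down to a single comparison. A minimal sketch, assuming both the bullet and the enemy are represented as a centre plus a radius (the Vec3 struct here is a stand-in for whatever maths types the project actually uses):

```cpp
struct Vec3 { float x, y, z; };

// A bullet hits an enemy when the distance between their centres is
// no more than the sum of their radii. Comparing squared distances
// avoids a square root per test.
bool spheresOverlap(const Vec3& a, float radiusA,
                    const Vec3& b, float radiusB) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float distSq = dx*dx + dy*dy + dz*dz; // squared centre-to-centre distance
    float r = radiusA + radiusB;
    return distSq <= r * r;
}
```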

Other

  • Healing will just be done with a timer, restoring a certain amount of the player's health over time
  • When a player has no health, he dies and is re-spawned in 5 seconds
  • The game ends after a 5 minute timer runs out (or 10 minutes, whatever works). The winner is the one with the most kills
  • Or first to 20
  • A player is spawned as far away from the opponent as possible on a random nav point (loop through each nav point and get the distance)
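
That spawn rule in the last bullet really is just a loop over the nav points keeping the farthest one. A sketch, where NavPoint is a hypothetical stand-in for whatever the real nav mesh stores:

```cpp
#include <vector>
#include <cstddef>

struct NavPoint { float x, y, z; };

// Loop through each nav point and keep the one farthest from the
// opponent. Squared distance is enough for comparing, so no sqrt.
std::size_t farthestNavPoint(const std::vector<NavPoint>& points,
                             const NavPoint& opponent) {
    std::size_t best = 0;
    float bestDistSq = -1.0f;
    for (std::size_t i = 0; i < points.size(); ++i) {
        float dx = points[i].x - opponent.x;
        float dy = points[i].y - opponent.y;
        float dz = points[i].z - opponent.z;
        float distSq = dx*dx + dy*dy + dz*dz;
        if (distSq > bestDistSq) { bestDistSq = distSq; best = i; }
    }
    return best; // index of the farthest nav point
}
```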

Networking

Yeah I know, all that other stuff is fluff, but here's the juice to the orange of this assignment:

  • The first time you run the program, it asks whether to be a server or a client
  • If you are a client, it will ask for a server to connect to. Once connected, the client will send all its player info to the server
  • On the server side, when the game is started, the player is put in the game and assigned as player 1. Their colour is random
  • The server will listen for connections, and when one arrives, it will add the new player and its data to the game, then send the client the server's player info
  • When the client is connected to the server, it will wait for the info of all players on the server and add them to the client-side list with unique IDs
  • The client will update its positions and send the server every action performed, along with its position. Each client or server will then animate each character and move it to the new position. Animations and effects will be done client side
  • If a player's client decides that a player has been hit by a bullet, that info is sent to the server, and the server tells every client so they can update their info
  • This info is sent about 12 times per second (every 0.083 seconds)
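
That 12-updates-per-second rate could be enforced with a small accumulator checked once per frame. A sketch, assuming deltaTime is available each frame (the class name and API here are my own invention):

```cpp
// Tells the network layer when to flush the next state snapshot so
// updates go out roughly 12 times per second (every ~0.083 s).
class SendTimer {
public:
    explicit SendTimer(float interval = 1.0f / 12.0f)
        : m_interval(interval), m_accumulated(0.0f) {}

    // call once per frame; returns true when a packet should be sent
    bool shouldSend(float deltaTime) {
        m_accumulated += deltaTime;
        if (m_accumulated >= m_interval) {
            m_accumulated -= m_interval; // keep the remainder for accuracy
            return true;
        }
        return false;
    }
private:
    float m_interval;
    float m_accumulated;
};
```

Keeping the remainder (rather than resetting to zero) stops the send rate from drifting low when frame times don't divide evenly into the interval.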

Apart from these points, the only other thing would be end game state which will be controlled by the server. The timer will also be synced with the server.

That’s it for now. Happy everything.

20130626014922

Let’s make Marv do things

Scaling

Today I updated the scaling part of my vertex shader because it seemed wrong. And it was wrong. Before:

mat4 newmodel = Model;
newmodel[3][3] = Scale;
vWorldPos = newmodel * PosWithBone / Scale;
// output position
gl_Position = Projection * View * newmodel * PosWithBone;

After

// for scaling
PosWithBone.x *= Scale;
PosWithBone.y *= Scale;
PosWithBone.z *= Scale;
vWorldPos = Model * PosWithBone / Scale;
// output position
gl_Position = Projection * View * Model * PosWithBone;

vWorldPos is set so the pixel shader easily knows the world position, so I don't have to send it as a uniform.

Flags

I plan on adding flags for capture the flag. I spent a bit of time on TurboSquid finding free flags, but only one was free in FBX format and it didn't import properly, so I decided to go with the dragon FBX instead, despite its extremely high poly count. I believe this won't be a problem, but if frame rates slow down I'll just use another random object like the soulspear.

Lights and scenes

The lights were originally just structs, so I moved them out into their own header and class files for better code standards.

So far here’s what we have:

20130528000756

Playing with fog. Using the z buffer to darken the fragments depending on distance from the camera.
20130528000839

Got a nice specular effect using a normal map created in CrazyBump.
20130528001303 20130527232639

Marv ready to be animated.
20130527232702

Testing fog colours. Here we have red fog.
20130528001501

Using different textures on the level. The lava texture looks a bit too bright.
20130528030046

Here we see how Marv animated before I removed the 4th bone transform from the indices. I'm going to assume that last one is for the transform of the gun Marv is supposed to be holding.
20130528030328

Using emissive textures to light up areas that are not lit. I couldn't figure out the formula though, so I removed it.
20130528030358

Go home Marv, you are drunk.
20130528030522 20130528032217

Trying with only 1 bone index. Marv hated that.

 

Marv: The movie

That’s right boys and girls, today I got Marv to animate. (Well, Monday).

Horaaah.

I am very proud of this because it was really hard to figure out, and some parts I still don’t fully understand, but I’m going to explain how I did it here and maybe someone one day will read it and think I’m amazing. Which they are right now. Yes I’m talking about you. The reader.

The first thing I had to get working, building off yesterday's progress, was getting the bone indices and weights into the shader. This was done by adding a few more vertex attribute arrays and sending them in where the shader is loaded:

InitFBXSceneResources()

// bind arrays needed for animation / normals / texturing etc
glEnableVertexAttribArray(0); // pos
glEnableVertexAttribArray(1); // normal
glEnableVertexAttribArray(2); // tangent
glEnableVertexAttribArray(3); // binormal
glEnableVertexAttribArray(4); // indices for bones
glEnableVertexAttribArray(5); // weights for bones
glEnableVertexAttribArray(6); // uv
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::PositionOffset);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::NormalOffset);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::TangentOffset);
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::BiNormalOffset);
glVertexAttribPointer(4, 2, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::IndicesOffset);
glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::WeightsOffset);
glVertexAttribPointer(6, 2, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::UVOffset);

Init()

// shader inputs and outputs
const char* aszInputs[] = {
    "Position",
    "Normal",
    "Tangent",
    "BiNormal",
    "Indices",
    "Weights",
    "UV",
};
const char* aszOutputs[] = {
    "outColour",
};
// load shader
g_ShaderID = LoadShader(
    7, aszInputs,
    1, aszOutputs,
    "./shaders/animation_vertex.glsl",
    "./shaders/normalmap_pixel.glsl"
);

Below is an image showing the structure of the FBX model and the animation part of it.
20130529015854

 

 

Programming for AI assignment

Today I focused on getting the environment for my AI and animation demo done, with lights and models imported. Because I had most of what I needed from assignment 1 and the tutorials, this was easy. The Marv model we get to use for character animation doesn't have an actual material, so I applied a plain metal texture to him and tinted him red.

The texture for the level, however, looks really crappy. I used a uniform to send a "tileAmount" variable to the pixel shader to tell it how much to tile the texture on the level, but it didn't help.

The shader also has a fog setting, and a normal map texture sent to it.

20130526223547

I don't know what the deal is with the pixelation that looks like noise. I guess I need to set up mipmapping or some sort of LOD so the textures blend a bit better…

20130527024241

 

That is a closeup of the wall behind Marv. It looks like white noise.

So, after browsing the net, I found that OpenGL can have mipmaps generated automatically (instead of creating them all yourself): http://www.swiftless.com/tutorials/opengl/mipmap_generation.html This could be a feature in this AI assignment alongside the frustum culling.

The next step was to figure out how FBX models animate. This stage was difficult, as I had no idea how it worked, and it required a lot of digging around.

After digging around I managed to figure out:

  • FBX models have a “skeleton” made of “bones”. “Bones” are basically just 4×4 matrices. This skeleton can be accessed with GetSkeletonByIndex( id )
  • FBX models also have an “animation” which contains “tracks”. These “tracks” have “keyframes”. Each keyframe has a “rotation”, “translation” and “scale”. Animations can be accessed with GetAnimationByIndex( id )
  • Animations contain “TotalTime” that can be used to loop.
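
Putting those pieces together, a looping animation could wrap time using TotalTime and blend between neighbouring keyframes. A minimal sketch with translation only: the real keyframes also carry rotation and scale, and rotations would normally be slerped as quaternions rather than lerped like this:

```cpp
#include <cmath>

// Simplified keyframe: time plus a translation. (Real FBX keyframes
// also hold rotation and scale.)
struct Keyframe { float time; float tx, ty, tz; };

// wrap elapsed time so the animation loops using the track's TotalTime
float loopTime(float elapsed, float totalTime) {
    return std::fmod(elapsed, totalTime);
}

// linear blend between two keyframes at time t (a.time <= t <= b.time)
Keyframe lerpKeys(const Keyframe& a, const Keyframe& b, float t) {
    float span = b.time - a.time;
    float f = span > 0.0f ? (t - a.time) / span : 0.0f;
    return { t,
             a.tx + (b.tx - a.tx) * f,
             a.ty + (b.ty - a.ty) * f,
             a.tz + (b.tz - a.tz) * f };
}
```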

What I couldn't figure out is which vertices are affected by each bone. I mean, the bones are in there, but which vertices do they affect? This would need to be held in some sort of array, as a bone could affect multiple vertices. Then after I figure THAT out, I need to somehow get that data into the shader. An array maybe? I'll do this tomorrow.

AI Assignment – Plan

This assignment needs to have animation and AI. What does the A stand for? Artificial. What does the I stand for? Intelligence. What does animation stand for? Movement over time. Let’s do it.

Intro

This assignment is about bringing our static boring meshes alive and making them walk around like people things. I’ll be creating a sort of AI demonstration with animations. This will be a small combat situation where teams of 4 red and blue Marvs will fight to the death and perhaps capture a flag.

  • The level, including a light. The level needs textures on it.
  • At least 4 AI controlled characters with animations played at certain times for:
    • Running
    • Idle
    • Death
    • Attack
  • Animations should be blended between each other
  • A* Path finding
  • Behavior trees
  • Collision avoidance
  • Frustum culling
  • Line of sight visibility check

Combat

For the combat system, I plan on having the men run around with a fairly short line of sight. I’ll be using the physics level provided to us which is fairly open, so if the line of sight is infinite, then everyone will be shot all over the place.

The weapons will be plasma guns that shoot a shiny light (no actual "lights" though; a low poly sphere will be used for demonstration purposes). Basically the characters will run around, and if they see an enemy they will shoot it. The plasma ball will need to be aimed a little in front of the opponent so it actually hits, so the AI will need to do a quick measurement and guess where the enemy will be by the time the bullet gets there. This measurement will be something like taking the speed of the bullet and the direction of the player, and determining where the two will cross, given the time it takes the bullet to reach a point projected along the player's direction of movement.
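
That guess can actually be made exact by solving for the intercept time. A sketch of the maths, assuming the enemy keeps a constant velocity and the plasma ball flies in a straight line at a fixed speed; it solves |toTarget + vel·t| = speed·t as a quadratic in t (Vec3 is again a stand-in for the project's vector type):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Returns the time until a bullet fired at bulletSpeed can meet a
// target at offset toTarget moving with constant velocity targetVel.
// Returns a negative number when no intercept exists.
float interceptTime(const Vec3& toTarget, const Vec3& targetVel, float bulletSpeed) {
    // quadratic coefficients of |toTarget + targetVel*t|^2 = (bulletSpeed*t)^2
    float a = targetVel.x*targetVel.x + targetVel.y*targetVel.y + targetVel.z*targetVel.z
            - bulletSpeed*bulletSpeed;
    float b = 2.0f * (toTarget.x*targetVel.x + toTarget.y*targetVel.y + toTarget.z*targetVel.z);
    float c = toTarget.x*toTarget.x + toTarget.y*toTarget.y + toTarget.z*toTarget.z;
    if (std::fabs(a) < 1e-6f)          // bullet and target at the same speed
        return b < 0.0f ? -c / b : -1.0f;
    float disc = b*b - 4.0f*a*c;
    if (disc < 0.0f) return -1.0f;     // can never catch the target
    float root = std::sqrt(disc);
    float t1 = (-b - root) / (2.0f * a);
    float t2 = (-b + root) / (2.0f * a);
    if (t1 > 0.0f && t2 > 0.0f) return t1 < t2 ? t1 : t2; // earliest hit
    return t1 > 0.0f ? t1 : t2;
}
```

The aim point is then the enemy's position plus its velocity times the returned time.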

time

I'll also add in a "capture the flag" mode. This will simply be that either nobody, or 1 of the 4 players on each team, is set as "flag master" and will be the one who goes to get the flag. There won't be any "protect the flag" modes or anything, because I think the characters will be in battle a fair bit anyway.

When a player dies, he will respawn at the flag at their base. To get a point, a player must kill an enemy player, or bring the enemy flag back to their base, which awards 10 points. The game will run on a timer of 2 minutes (or whatever time works best), and the winner is announced on the screen after that. The game will then restart and continue forever.

Graphics and culling

The graphics will be as simple as possible, just to show off the AI and animation. The Marvs and the level will both be textured with a single material which will include a normal map. This will be a very plain texture so it can be tiled easily and not look rubbish due to detail being stretched or uneven on the models.

I'll be implementing a frustum culling system that makes sure only things on the screen are rendered. It will also need to count and display all these objects. The way I'll do this is to simply check whether the bounding boxes of the players are inside the camera. This requires 2 complex maths things: figuring out the volume of the camera, and the bounding box size. The bounding box could be set manually to save time. These 2 volumes then need to be checked for intersection. Which should be nice and difficult.
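
For the intersection test itself, one common approach is to treat the camera's volume as six inward-facing planes and, for each plane, test the box corner that sits farthest along the plane's normal. A sketch under that assumption (extracting the six planes from the projection and view matrices is a separate step not shown here):

```cpp
// plane as nx*x + ny*y + nz*z + d >= 0 inside the frustum
struct Plane { float nx, ny, nz, d; };

// axis-aligned bounding box, set manually per player as planned
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

// A box is culled when it lies entirely behind any one of the six
// frustum planes; otherwise it is (at least partly) visible.
bool boxInFrustum(const AABB& box, const Plane planes[6]) {
    for (int i = 0; i < 6; ++i) {
        const Plane& p = planes[i];
        // pick the box corner farthest along this plane's normal
        float x = p.nx >= 0.0f ? box.maxX : box.minX;
        float y = p.ny >= 0.0f ? box.maxY : box.minY;
        float z = p.nz >= 0.0f ? box.maxZ : box.minZ;
        if (p.nx*x + p.ny*y + p.nz*z + p.d < 0.0f)
            return false; // fully behind this plane, so cull it
    }
    return true; // intersects or is inside the frustum
}
```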

I’ll be using the physics level provided to us which is fairly open, so there won’t be need for portal culling.

Animation

Animation will need to be done by first figuring out how the FBX loader reads animation sequences and tracks in the FBX files for Marv. This shouldn’t be too difficult.

Once the vertex shader has been set up properly, the animations will just need to be played, and the model swapped depending on which one needs to be played. I could keep all the animations on the video card only once and reuse them per player.

AI

Probably the most difficult part of this assignment. The main AI will be fairly straightforward with a state manager for each player. Then the AI will need a behaviour tree that controls each state of the AI based on steps within steps that need to be executed in order to complete a node in the tree so the next task can be performed. Sounds complex.

Then there's A*. In theory this isn't hard, but I need the characters to figure out where the floor is, and I'll probably have to use a navigation mesh so they know where to walk. This will be pretty hard to program.

One thing I could add would be a “Commander” class that controls all AI entities and tells them what to do in a team environment. I don’t think I will need this really, because the AI can surely play pretty well by themselves the way I’ve planned it.

Assignment complete (so far)

20130410165712

 

So I managed to complete a scene graph and create a geometry shader and a pixel shader with animations.

Most of the assignment was taken up with fixing errors and things not working.

I managed to get the tree loading in but not into the real project.

There isn’t any terrain generation yet or lights. These are still to come.

For the lava I used 2 textures: one is the black rock on top, and under that there are 2 copies of the same texture that move in opposite directions on the Y axis while moving together along the X axis. A time variable moves the UVs along the X axis for the top lava.

At the same time I used 3 sine waves over time to morph the lava geometry so it appears like waves.
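
In CPU-side form (in the real project this maths lives in the shaders), the two effects might look like this. All the speeds, frequencies, and amplitudes here are made-up placeholder values:

```cpp
#include <cmath>

struct UV { float u, v; };

// Scroll one lava layer over time. Calling this with +ySpeed for one
// layer and -ySpeed for the other gives the opposite Y directions,
// while a shared xSpeed drifts both along X together.
UV scrollLayer(UV base, float time, float ySpeed, float xSpeed) {
    return { base.u + xSpeed * time, base.v + ySpeed * time };
}

// Three sine waves of differing frequency and amplitude summed over
// time to displace the lava surface height at position x.
float lavaHeight(float x, float time) {
    return 0.3f * std::sin(x * 1.0f + time)
         + 0.2f * std::sin(x * 2.3f + time * 1.5f)
         + 0.1f * std::sin(x * 4.1f + time * 0.7f);
}
```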

For the GLOW I made a screen texture and researched an easy-to-use glow filter. The geometry shader was a simple flat quad, and the pixel shader used a formula to determine the brightness of the surrounding pixels and made them glow. For performance, the glow only applies to every 4th pixel:

20130410165712

 

 

I got a tree to import

That’s right, children. Witness my tree with normal maps and specular!

2013-04-05 15_50_17-AIE OpenGL Window

 

2013-04-05 16_15_10-AIE OpenGL Window

 

Next step: generated ground.

GENERATED GROUND

I changed my mind, so now I'll be doing the "midpoint displacement" method (or maybe the "diamond-square algorithm", because Wikipedia says it's better).

The idea I have is to combine the Perlin noise function with the diamond-square method to make hills, and then add the volcano in the middle. Maybe add water over the middle part.

Here’s a diagram:

diagram volcano

 

So imagine the land bumps up over the water surface, and the lava sits just above the water. Hopefully the volcano height and lava height will be right enough that the lava doesn't stick out of the edges of the volcano.

This changes my original idea of the volcano going in under the height of the lava. Now the lava will just need to be a bit above the water and only the radius of the volcano.