Marv: The movie

That’s right boys and girls, today I got Marv to animate. (Well, Monday).

Horaaah.

I am very proud of this because it was really hard to figure out, and some parts I still don’t fully understand, but I’m going to explain how I did it here and maybe someone one day will read it and think I’m amazing. Which they are doing right now. Yes, I’m talking about you. The reader.

The first thing I had to get working, building off yesterday’s progress, was getting the bone indices and weights into the shader. This was done by adding a few more vertex attribute arrays and then declaring them as inputs where the shader is loaded:

InitFBXSceneResources()

// bind arrays needed for animation / normals / texturing etc
 glEnableVertexAttribArray(0); // pos
 glEnableVertexAttribArray(1); // normal
 glEnableVertexAttribArray(2); // tangent
 glEnableVertexAttribArray(3); // binormal
 glEnableVertexAttribArray(4); // indices for bones
 glEnableVertexAttribArray(5); // weights for bones
 glEnableVertexAttribArray(6); // uv
 glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::PositionOffset);
 glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::NormalOffset);
 glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::TangentOffset);
 glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::BiNormalOffset);
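 // note: the bone indices and weights are usually vec4s (up to four bones per
 // vertex), so if the skinning ever looks wrong these two probably want a
 // size of 4 rather than 2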
 glVertexAttribPointer(4, 2, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::IndicesOffset);
 glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::WeightsOffset);
 glVertexAttribPointer(6, 2, GL_FLOAT, GL_FALSE, sizeof(FBXVertex), (char*)FBXVertex::UVOffset);

Init()

// load shader
 const char* aszInputs[] = {
 "Position",
 "Normal",
 "Tangent",
 "BiNormal",
 "Indices",
 "Weights",
 "UV",
 };
const char* aszOutputs[] = {
 "outColour"
 };
// create the shader program with the extra animation inputs
 g_ShaderID = LoadShader(
 7, aszInputs, 
 1, aszOutputs,
 "./shaders/animation_vertex.glsl",
 "./shaders/normalmap_pixel.glsl"
 );
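The other half of it is the per-frame update: pose the skeleton at the current animation time, rebuild the bone matrices, and throw them into the shader as a uniform array. The Update() sketch below is only a rough idea of that; the FBXSkeleton/FBXAnimation types, the Evaluate/UpdateBones/GetBones calls, the “bones” uniform name and the g_pModel/g_AnimTime globals are my placeholders for whatever the loader and shader actually expose, so don’t copy it blindly.

Update()

// rough sketch: pose the skeleton for this frame and send the bone matrices to the shader
 FBXSkeleton* skeleton = g_pModel->GetSkeletonByIndex(0);
 FBXAnimation* animation = g_pModel->GetAnimationByIndex(0);

 // loop the animation using its total time
 g_AnimTime = fmodf(g_AnimTime + a_deltaTime, animation->TotalTime);

 skeleton->Evaluate(animation, g_AnimTime); // work out each bone's matrix at this time
 skeleton->UpdateBones();                   // combine them ready for skinning

 // upload the bone matrices as a mat4 array uniform (shader must be bound first)
 glUseProgram(g_ShaderID);
 GLint uBones = glGetUniformLocation(g_ShaderID, "bones");
 glUniformMatrix4fv(uBones, skeleton->GetBoneCount(), GL_FALSE, (float*)skeleton->GetBones());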

Below is an image showing the structure of the FBX model and the animation part of it.
(Image: the FBX model’s structure and its animation data.)

Programming for AI assignment

Today I focused on getting the environment for my AI and Animation demo done, with lights and models imported. Because I had most of what I needed to start with from assignment 1 and the tutorials, this was easy. The Marv model we get to use for character animation doesn’t have an actual material, so I applied a plain metal texture to him and then tinted him red.

The texture for the level, however, looks really crappy. I used a uniform and sent a “tileAmount” variable to the pixel shader to tell it how much to tile the texture on the level but it didn’t help.
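For what it’s worth, sending the tile amount is just a one-float uniform (assuming the pixel shader declares a uniform float tileAmount and multiplies the UVs by it); something like:

 // set how many times the level texture repeats (shader must be bound first)
 glUseProgram(g_ShaderID);
 glUniform1f(glGetUniformLocation(g_ShaderID, "tileAmount"), 8.0f);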

The shader also has a fog setting, and a normal map texture sent to it.

(Image: the level environment with the red-tinted Marv.)

I don’t know what the deal is with the pixelation that looks like noise. I guess I need to set up some sort of LOD or mipmapping so the textures blend a bit better…

(Image: close-up of the wall texture.)

That is a closeup of the wall behind Marv. It looks like white noise.

So, after browsing the net, I found that OpenGL can generate mipmaps for you automatically (instead of you creating them all yourself): http://www.swiftless.com/tutorials/opengl/mipmap_generation.html This could also be handy in the AI assignment, alongside things like frustum culling.
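On newer OpenGL that basically boils down to asking the driver to build the mipmap chain after the texture is uploaded and switching to a mipmapped filter. Something along these lines (the texture handle name is just for the example):

 // after glTexImage2D has uploaded the level texture...
 glBindTexture(GL_TEXTURE_2D, levelTextureID);
 glGenerateMipmap(GL_TEXTURE_2D); // build the whole mipmap chain automatically
 // trilinear filtering so distant texels blend between mip levels instead of shimmering
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);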

The next step was to figure out how FBX models actually animate. This stage was difficult: I had no idea how any of it worked, and it required a lot of digging around.

After all that digging, I managed to figure out the following:

  • FBX models have a “skeleton” made of “bones”. “Bones” are basically just 4×4 matrices. This skeleton can be accessed with GetSkeletonByIndex( id )
  • FBX models also have an “animation” which contains “tracks”. These “tracks” have “keyframes”. Each keyframe has a “rotation”, “translation” and “scale”. Animations can be accessed with GetAnimationByIndex( id )
  • Animations contain “TotalTime” that can be used to loop.

What I couldn’t figure out is how the actual vertices are affected by each bone. I mean, there are bones in there, but which vertices do they affect? This would need to be held in some sort of array, since one bone can affect many vertices. Then, after I figure THAT out, I need to somehow throw the bones into the shader. An array maybe? I’ll do this tomorrow.
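As the shader setup at the top of this page shows, the answer turns out to be per-vertex data: each vertex stores which bones affect it and how strongly, and the bones themselves go to the shader once as an array of matrices. Conceptually it’s something like this (the struct below is just an illustration, not the loader’s real layout):

 // illustration only -- the idea behind the "Indices" and "Weights" attributes above
 struct SkinnedVertex
 {
     float position[4];
     float boneIndices[4]; // which bones influence this vertex
     float boneWeights[4]; // how strongly each of those bones pulls on it
 };
 // the bone matrices themselves are sent to the shader once per model as a
 // uniform array of 4x4 matrices; the vertex shader blends them using the weights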

Some random team assignment?

Today we were told, out of the blue, to get into pre-chosen teams to draft out and work together on a game. The game has to be based on “Snakes and Ladders”.

The first day was just a planning day but we came up with some pretty cool ideas. The game we will be creating over the next 2 weeks is going to be like a reverse Snakes and Ladders where you have to get from the top of a cliff to the bottom. In the background there is a Cthulhu who has given you dark magic… okay I’ll stop now. This is the story:

Cthulhu is destroying the world and has made it to Mexico. Unfortunately for Cthulhu, he drank too much tequila and got drunk, dropping some of his dark magic into the ocean, which washed up on the side of a cliff face.

(Cthulhu with a Sombrero) (Image source unknown. Maybe here?)

In come two Mexican men, enemies, both trying to escape Cthulhu’s destruction. On their race to the boat at the bottom of a deadly cliff, they find the dark magic and discover they can use it to aid their descent, or to disrupt their opponent by casting spells.

So the idea of the game is to descend the cliff. As you go, you pick up between 1 and 3 dark magic points at a time. Each spell costs a certain amount of dark magic points. The spells will be “Spawn Vine” to help you progress down, “Spawn Tentacle” to harm your enemy, “Immunity” to make you immune to tentacles, and “Cthulhu Attack”, which is really expensive (you have to save up by not casting any other spells) but sends the other player back many spaces (20 or so), which is almost back to the start.

The levels will be semi-random, with randomised spots for “portals”, which are where the vines and tentacles you cast come from.

The art style will be hand painted textures on 3D models, and we will be using Unity to create the game in C#.

Game Jamming

This weekend we will be embarking on a journey into the jam… of games.

Stuart, Alex and I are on a team against other teams of fellow AIE members on the AIE floor. The theme was announced at 7pm and ended up being…

Deception

(Image: a still from Inception.)

(I know, that’s from “Inception” but screw you).

After much discussion and broken bones (I wasn’t there for that part)… we decided on a game idea. The idea is basically checkers but with ninjas. There will be 2 players, each with a single ninja. The ninja can move to any square in a 3×3 area around it, but the catch is that when the player does this he can choose to create a ghost ninja that the other player cannot detect (they just see a normal ninja).

We thought hard about how we would do this without networking, and decided the best approach would be to have each player choose between 4 differently coloured ninjas and select one by dragging it onto the board square they want the ninja to move to. But I hear you say, “But the other player will see this, won’t they, huh? huh?” Yeahsss… but the player can either left-click or right-click before dragging the ninja onto the board, and depending on which of these you do, the ninja is either cloned or moved. The reason we have colours is mainly so someone can actually remember which of THEIR ninjas is the real one. (I know I’d forget.)

We don’t have a name yet, but in the code we called it “Shadow Clone”.

So that’s the planning stage. Look around for more posts to continue the plot of the ninja.

Game Jam progress

Friday and Saturday Morning.

These felt like the same day. Alex pumped out some pretty sweet-looking grass, rock and bamboo textures for the level. Then, after a few hours, we had a state machine going. The state machine was my handiwork, and I managed to get it to run everything from the game-playing state; calling the ChangeState function with the same state again resets the game. In the meantime, Stuart was working on getting the player to move around, and we were both deciding on some key concepts like how Z ordering would work.
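The reset trick is just that ChangeState always exits the old state and enters the new one, even when they’re the same object, so Enter() doubles as the reset. A rough sketch of the shape of it (in C++ here, even though the jam code is C#):

 // minimal sketch: ChangeState() always calls Exit() then Enter(), so changing
 // to the state you're already in restarts it
 struct State
 {
     virtual ~State() {}
     virtual void Enter() = 0;          // set up / reset everything the state owns
     virtual void Exit() = 0;
     virtual void Update(float dt) = 0;
 };

 struct StateMachine
 {
     State* current = nullptr;

     void ChangeState(State* next)
     {
         if (current) current->Exit();
         current = next;
         if (current) current->Enter(); // re-entering the same state resets the game
     }

     void Update(float dt) { if (current) current->Update(dt); }
 };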

Not really much to show at this stage of the jam; just core mechanics needed to build up from.

Day 2.

We have not had very much sleep. I got about 5 hours (accidentally. I was only going to have 3, but I felt awesome after it…) and when I got back to the room we were in, Stuart had pumped out the code to get the players moving in the squares which was wicked.

From there, Alex had done a nice-looking ninja. We were planning on animating the ninjas, but only towards the end of the jam if we had time. Alex had also re-made the graphics in a new style and made a start on the HUD graphics.

I implemented a camera controller with smooth movements, which looked really good. It’s a Camera class with a current position and a target position: the camera eases into the X and Y of the target, getting slower the closer it gets. This is my favourite formula in programming, ever. We also got the graphics in and had the ninjas moving around, which looked nice.
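The formula itself is just “move a fraction of the remaining distance each frame”, so the step naturally shrinks as the camera closes in. Roughly (again in C++ here, rather than the jam’s C#):

 // ease the camera towards its target: step a fixed fraction of the remaining
 // distance every frame, so it slows down the closer it gets
 void UpdateCamera(float& posX, float& posY,
                   float targetX, float targetY,
                   float deltaTime, float speed = 5.0f)
 {
     posX += (targetX - posX) * speed * deltaTime;
     posY += (targetY - posY) * speed * deltaTime;
 }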

(Screenshot: the ninjas moving around the board.)

(This is still the old graphics)

At this point, the ninjas simply teleport to their position without any visual cue as to what’s happening. So we planned on adding an explosion, which I made a start on in Flash. It turned out really well in the end after a few attempts. I tried to make the smoke pull inwards towards the centre of the cloud as it moved upwards. After that I added a purple tint so it looked like a ninja puff of smoke. I then exported it as a PNG sequence for importing into XNA.

(Image: frames from the smoke-puff animation.)

At this point Stuart had left for a concert, so I began implementing smoother ninja movements and getting the camera to position itself on the players properly.

More updates to come!

AI Assignment – Plan

This assignment needs to have animation and AI. What does the A stand for? Artificial. What does the I stand for? Intelligence. What does animation stand for? Movement over time. Let’s do it.

Intro

This assignment is about bringing our boring static meshes alive and making them walk around like people… things. I’ll be creating a sort of AI demonstration with animations: a small combat situation where a red team and a blue team of 4 Marvs each fight to the death and perhaps capture a flag. The demo needs:

  • The level, including a light. The level needs textures on it.
  • At least 4 AI controlled characters with animations played at certain times for:
    • Running
    • Idle
    • Death
    • Attack
  • Animations should be blended between each other
  • A* Path finding
  • Behavior trees
  • Collision avoidance
  • Frustum culling
  • Line of sight visibility check

Combat

For the combat system, I plan on having the men run around with a fairly short line of sight. I’ll be using the physics level provided to us, which is fairly open, so if the line of sight were infinite then everyone would be getting shot from all over the place.

The weapons will be plasma guns that shoot a shiny ball of light (no actual “lights” though; a low-poly sphere will be used for demonstration purposes). Basically, the characters will run around, and if they see an enemy they will shoot at it. The plasma ball needs to be aimed a little bit in front of the opponent so it actually hits, so the AI will need to do a quick calculation and guess where the enemy will be by the time the bullet gets there. Roughly: take the target’s direction of travel, work out how long the bullet takes to cover the distance, and project the target’s position forward by that time.

(Diagram: projecting the target forward by the bullet’s travel time.)
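In code, the simplest version of that guess is: work out how long the plasma ball takes to reach where the target currently is, project the target forward by that time, and aim there (iterating once or twice tightens the estimate). A sketch, assuming GLM for the vector maths:

 #include <glm/glm.hpp>

 // predict where to aim so a projectile fired from shooterPos at projectileSpeed
 // meets a target moving at targetVel; a couple of refinement passes is plenty
 glm::vec3 PredictAimPoint(const glm::vec3& shooterPos,
                           const glm::vec3& targetPos,
                           const glm::vec3& targetVel,
                           float projectileSpeed)
 {
     glm::vec3 aim = targetPos;
     for (int i = 0; i < 2; ++i)
     {
         float travelTime = glm::length(aim - shooterPos) / projectileSpeed;
         aim = targetPos + targetVel * travelTime; // where the target will be by then
     }
     return aim;
 }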

I’ll also add a “capture the flag” mode. This will simply mean that either nobody, or one of the 4 players on each team, is set as the “flag master” and is the one who goes to get the flag. There won’t be any “protect the flag” behaviour or anything, because I think the characters will be in battle a fair bit anyway.

When a player dies, he will respawn at the flag at his own base. To get a point, a player must kill an enemy player, or bring the enemy flag back to their base, which awards 10 points. The game will run on a timer of 2 minutes (or whatever time works best), and the winner is announced on screen after that. The game will then restart and continue forever.

Graphics and culling

The graphics will be as simple as possible, just to show off the AI and animation. The Marvs and the level will both be textured with a single material, which will include a normal map. This will be a very plain texture so it can be tiled easily and not look rubbish due to detail being stretched or uneven on the models.

I’ll be implementing a frustum culling system that makes sure only things on the screen are rendered; it will also need to count and display how many objects are drawn. The way I’ll do this is to check whether the players’ bounding boxes are inside the camera’s frustum. This requires two complex maths things: working out the volume of the camera’s frustum, and the size of each bounding box (the bounding boxes could be set manually to save time). These two volumes then need to be tested for intersection, which should be nice and difficult.
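A sketch of the intersection test itself, assuming the six frustum planes have already been extracted from the camera’s view-projection matrix and stored as glm::vec4s (normal plus distance, normals pointing into the frustum):

 #include <glm/glm.hpp>

 // returns true if an axis-aligned bounding box is at least partly inside the
 // frustum; each plane is (nx, ny, nz, d) with the normal pointing into the frustum
 bool BoxInFrustum(const glm::vec4 planes[6],
                   const glm::vec3& boxMin, const glm::vec3& boxMax)
 {
     for (int i = 0; i < 6; ++i)
     {
         // pick the corner of the box furthest along the plane normal ("positive vertex")
         glm::vec3 p(planes[i].x >= 0 ? boxMax.x : boxMin.x,
                     planes[i].y >= 0 ? boxMax.y : boxMin.y,
                     planes[i].z >= 0 ? boxMax.z : boxMin.z);

         // if even that corner is behind the plane, the whole box is outside
         if (glm::dot(glm::vec3(planes[i]), p) + planes[i].w < 0.0f)
             return false;
     }
     return true;
 }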

I’ll be using the physics level provided to us, which is fairly open, so there won’t be any need for portal culling.

Animation

Animation will need to be done by first figuring out how the FBX loader reads animation sequences and tracks in the FBX files for Marv. This shouldn’t be too difficult.

Once the vertex shader has been set up properly, the animations will just need to be played, swapping which one is active depending on what the character is doing. I could keep all the animation data on the video card only once and just reuse it for each player.

AI

Probably the most difficult part of this assignment. The main AI will be fairly straightforward, with a state manager for each player. On top of that, the AI will need a behaviour tree that drives those states, where each node is made up of steps-within-steps that must be completed before the next task in the tree can be performed. Sounds complex.
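It’s less complex than it sounds once it’s written down: every node just reports success, failure or “still running”, and a sequence node ticks its children in order until one of them fails. A bare-bones sketch:

 #include <vector>

 enum class Status { Success, Failure, Running };

 // base behaviour: every node in the tree can be ticked and reports how it went
 struct Behaviour
 {
     virtual ~Behaviour() {}
     virtual Status Tick(float dt) = 0;
 };

 // sequence: run children in order; fail as soon as one fails,
 // succeed only when every child has succeeded
 struct Sequence : Behaviour
 {
     std::vector<Behaviour*> children;

     Status Tick(float dt) override
     {
         for (Behaviour* child : children)
         {
             Status s = child->Tick(dt);
             if (s != Status::Success)
                 return s;       // Failure or Running bubbles up immediately
         }
         return Status::Success; // all steps completed, the next task can run
     }
 };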

Then there’s A*. In theory this isn’t hard, but then I need the characters to figure out where the floor is, and I’ll probably have to use a navigation mesh so they know where they can walk. That will be pretty hard to program.
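The algorithm itself really is the easy part; here’s a compact grid-based version just to show the shape of it (the navigation mesh is where the pain will be). This assumes a simple 2D array of walkable cells, which obviously isn’t what the final level will use:

 #include <cstdlib>
 #include <functional>
 #include <queue>
 #include <utility>
 #include <vector>

 // A* over a grid of walkable cells: keep expanding the node with the lowest
 // (cost so far + heuristic to the goal) until the goal is popped
 struct Cell { int x, y; };

 static float Heuristic(const Cell& a, const Cell& b)
 {
     return float(std::abs(a.x - b.x) + std::abs(a.y - b.y)); // Manhattan distance
 }

 std::vector<Cell> FindPath(const std::vector<std::vector<bool>>& walkable,
                            Cell start, Cell goal)
 {
     const int w = (int)walkable.size();
     const int h = (int)walkable[0].size();
     auto index = [w](int x, int y) { return y * w + x; };

     std::vector<float> costSoFar(w * h, 1e30f);
     std::vector<int>   cameFrom(w * h, -1);

     // open set ordered so the cheapest (f = g + h) entry is popped first
     using Entry = std::pair<float, int>;
     std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

     costSoFar[index(start.x, start.y)] = 0.0f;
     open.push({ Heuristic(start, goal), index(start.x, start.y) });

     const int dx[4] = { 1, -1, 0, 0 };
     const int dy[4] = { 0, 0, 1, -1 };

     while (!open.empty())
     {
         const int current = open.top().second;
         open.pop();
         const int cx = current % w, cy = current / w;
         if (cx == goal.x && cy == goal.y)
             break; // reached the goal

         for (int i = 0; i < 4; ++i)
         {
             const int nx = cx + dx[i], ny = cy + dy[i];
             if (nx < 0 || ny < 0 || nx >= w || ny >= h || !walkable[nx][ny])
                 continue;

             const float g = costSoFar[current] + 1.0f;
             if (g < costSoFar[index(nx, ny)])
             {
                 costSoFar[index(nx, ny)] = g;
                 cameFrom[index(nx, ny)]  = current;
                 open.push({ g + Heuristic({ nx, ny }, goal), index(nx, ny) });
             }
         }
     }

     // walk back from the goal to build the path; empty means no route was found
     std::vector<Cell> path;
     for (int i = index(goal.x, goal.y); i != -1; i = cameFrom[i])
         path.insert(path.begin(), Cell{ i % w, i / w });
     if (path.empty() || path.front().x != start.x || path.front().y != start.y)
         path.clear();
     return path;
 }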

One thing I could add would be a “Commander” class that controls all AI entities and tells them what to do in a team environment. I don’t think I will need this really, because the AI can surely play pretty well by themselves the way I’ve planned it.