Category Archives: 2014 Class Work

Plasmation – Version 1 put online

Hey folks.

Just a short post saying that I’ve finished version 1 and could do with feedback.

Here are some screenshots:

[Screenshots of the version 1 editor]

You can mess with it here: plasmation.jamesheazlewood.com/scenes/editor/1

Here’s a video showing how to use the interface:

The exporter

While I’m here, I should get into how I managed to get the exporter working.

My first attempt was to change the renderer so it would render to a bunch of textures stored in an array. This worked well and allowed me to use a jQuery slider to scrub through the exported frames. However, because all these images were stored in WebGL buffers, it was hard to get the data out of them. Another issue was that it used an enormous amount of GPU memory: the WebGL context would crash multiple times when exporting more than 100 or so images (I guess storing over 100 512×512 textures is a bad idea…)

My next attempt was to simply use the canvas to render. The way I did this was to render the animation to the canvas the way I had been before, except with things like frame skips implemented, and with the user able to control whether it has opacity or not.

This was fairly easy, apart from multiple bugs with jQuery (I ended up just using the context straight from plain JavaScript).

The frame constructor worked by rendering a frame to the main canvas, then using a separate 2D canvas and drawing that frame on top of what was already on the 2D canvas, at the proper location. This was easy, as I could choose the scaling of the image as well (although the scaling kind of sucks, but I want to wait and see how it looks once I add the lens blur effect to prevent that noisy velocity thing).

Below is the code for creating the sprite-sheet client-side:

// reset the simulation and render the first frame
plas.Reset();
plas.RefreshShaders();
plas.options.viewMode = 1;
plas.Render();
plas.ticks = 0;

var image;
var canvas = document.getElementById('image_compile'); // hidden 2D canvas used to compile the sheet
var context = canvas.getContext('2d');
var scale = plas.options.scale;
var size = (512 * scale);                                 // size of one frame in the sheet
var frameCount = plas.options.frames;
var h = Math.round(frameCount / plas.options.horFrames);  // number of rows in the sheet
canvas.width = size * plas.options.horFrames;
canvas.height = size * h;
context.globalAlpha = 1.0;

var rFrame = 0;
var x, y;
while(rFrame < frameCount)
{
    // only composite when Render() reports a finished frame (frame skips return false)
    if(plas.Render())
    {
        // work out where this frame sits in the sheet
        x = rFrame % plas.options.horFrames;
        y = Math.floor((rFrame - x) / plas.options.horFrames);

        // grab the WebGL canvas as a data URL and stamp it onto the 2D canvas
        image = new Image();
        image.src = plas.BufferData();
        context.drawImage(image, x * size, y * size, size, size);
        rFrame++;
    }
}
$("#image_output").attr("src", canvas.toDataURL("image/png"));

Once the frames are rendered to the new canvas, I grab all the data from this canvas, convert it to a data URL (the same base64-encoded format I already had) and put that in an image ready for exporting. The user can then just save the image to their hard drive.

I noticed that you can save a canvas as an image directly in Firefox and Chrome, which meant I didn’t even need to create an image from the canvas data. However, although Internet Explorer 11 ran the simulation really well, it couldn’t save canvas data as an image, so I had to convert it to an image after all.

The next step was going through all the settable options in the scene and giving them controls. I used sliders because I find it takes fewer clicks and less typing to just move a slider back and forth. I added titles to these sliders and then got the interface to also draw their values underneath.
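For reference, here’s roughly what one of those slider controls could look like. This is only a sketch assuming the jQuery UI slider widget; the addSlider helper, the element ids and the plas.options.frames hookup are illustrative, not the actual Plasmation code:

// a sketch assuming the jQuery UI slider widget; ids and names are illustrative
function addSlider(id, title, min, max, step, initial, onChange)
{
    var label = $('<div class="slider-title">' + title + '</div>');
    var value = $('<div class="slider-value">' + initial + '</div>');
    var slider = $('<div></div>').slider({
        min: min,
        max: max,
        step: step,
        value: initial,
        slide: function(event, ui) {
            value.text(ui.value); // draw the value underneath
            onChange(ui.value);   // push the new value into the scene options
        }
    });
    $('#' + id).append(label, slider, value);
}

// example: wire a slider straight into a scene option
addSlider('controls', 'Frames', 1, 200, 1, plas.options.frames, function(v) {
    plas.options.frames = v;
});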

Done.

 

Plasmation Update 6 – Got it working

So, I got it working. Finally.

I managed to get so stuck with the fluid simulation that I searched online for GPGPU examples, and found this. It was a really well-written example on how to use the RGB channels of a “velocity” texture for the XYZ co-ordinates of objects on the screen, and have it actually work in three.js. 

So I had to painstakingly go through each line of code and try and figure out a way to implement it into my own fluid simulation program, and without having to delete too much of my existing code. 

The major difference I found between this method and my original one was that the mesh used to render the texture for each shader pass was the same mesh: instead of adding a mesh to a scene per render pass, it created one single scene for rendering the fluid on this one mesh and then just SWAPPED the texture used in the mesh with the pre-defined textures (sketched below).
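To make that concrete, here’s a rough sketch of the swapping idea (not the GPGPU example’s code or my final code, just the shape of it), using the old renderer.render(scene, camera, target) call and the same sort of shader and texture names that show up later in these posts:

// one scene, one full-screen quad, shared between every pass
var fluidScene = new THREE.Scene();
var fluidCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
var quad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), advectVelocityShader);
fluidScene.add(quad);

// hypothetical helper: run one shader pass into a render target
function pass(material, inputTexture, outputTarget)
{
    material.uniforms.velocitySample.value = inputTexture; // swap the input texture
    quad.material = material;                              // reuse the same mesh
    renderer.render(fluidScene, fluidCamera, outputTarget, false);
}

pass(advectVelocityShader, velocityTexture1, velocityTexture2);
pass(divergenceShader, velocityTexture2, divergenceTexture);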

After a few bug fixes and frustration the result was:

[Screenshot: the first working fluid render]

Looks better than a single frame! Although there’s no accumulation of textures again, as it is simply replacing the velocity from the previous iteration with the initial velocity, and then not using the divergence shader as it should be.

From further investigation (and another entire day of banging my head against the lack of debugging ability in such a complex system), I managed to figure out that each texture was being over-written after it entered the next shader. This meant that once the velocity texture from the first iteration entered the “add velocity” shader, it was over-written with the new velocity, and the wrong values were then sent to the divergence shader. To fix this, I tried adding a new texture to the system to use for adding velocity onto the velocity texture from the first pass…

Success!

[Screenshot: the velocity and pressure render]

Here you can see the velocity X (red) the velocity Y (green) and the pressure (blue).

I then created a texture that I could use to sample the colour based on the velocity of the fluid:

[The heat gradient texture used to colour the fluid]

I then rendered these colours based on the pressure. After some trial and error, and fixing a weird bug where the colours would go completely white past the usual 1.0 value of the texture (caused by the texture looping back to the start of the image), the results were interesting:

[Screenshots: early coloured renders]
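As a side note, if that white-out really was just the heat texture wrapping past 1.0, clamping the texture’s wrap mode in three.js is one way to deal with it (heatTexture here stands in for however the heat map gets loaded; this is an assumption, not what I actually did):

// assumed fix: stop the heat texture wrapping back to the start of the image
heatTexture.wrapS = THREE.ClampToEdgeWrapping;
heatTexture.wrapT = THREE.ClampToEdgeWrapping;
heatTexture.needsUpdate = true;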

 

I sent it to someone and they said that it reminded them of a pool of lava. (A pity I didn’t record a video of it because it looked really smooth and mesmerizing). But I wanted it to look like an explosion. 

After more tweaking, I got the fluid to fade out to 0 when there was little-to-no pressure:

[Screenshot: the fluid fading out where there is little pressure]

I then did some polishing and included more colours in the “heat” texture to get more fun colours of the flames:

[Screenshots: flames with the extra heat colours]

There were still a few problems (not counting an issue with my original velocity texture, where I was trying to figure out why subtracting 0.5 from it wouldn’t shift the velocity by half, but I won’t get into that):

  • The render can reach the edge of the screen and get cut off, which is an undesired result for a sprite.
  • The edge of the screen was prone to looping back and causing the fluid to exponentially fling out of the edge.
  • The effect lasted too long for a sprite-sheet (about 100 frames!).
  • Velocity that is too highly concentrated in one area creates a noisy patch when it should look completely white (caused by the shaders only being able to look at the pixels either side of one another, ignoring the rest of the effect).
Noisy velocity. An issue.

The first problem I solved by introducing a texture I could multiply with the blending level of the heat texture:

[The radial opacity texture]

 

With this being sampled per pixel, I was able to adjust the flames so that if they trail too far from the centre, the colour being sampled from the heat texture is gradually shoved down the right-hand side of the gradient, making the effect seem to naturally burn out the further it gets from the centre:

[Screenshot: flames burning out towards the edges]

 

To prevent chaotic bleeding of velocity, I added another sample texture, this time to the “add velocity” shader. It basically has a region around the outside of the image that can be multiplied with the velocity to zero it out around the edges:

[The edge-damping texture]

 

The “add velocity” shader now looks like this (notice the edges are darker):

[Screenshot: the “add velocity” shader output, with darker edges]

This method was sort of like a cheap hack, but because I’m not too worried about the frames-per-second, and the fact that I have the circle texture limiting the drawing of the fluid anyway, it works well for the situation.

For the effect lasting too long, I introduced a “time” variable that starts at 1.0 and decreases by a small amount each frame. Once it reaches 0.0, the effect has faded out completely. I multiply this number by the effect, much like the circle from the first issue above.
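A minimal sketch of that counter, assuming the life uniform sits on the final render shader (the variable names here are only illustrative):

// fade the effect out over the length of the sprite-sheet
var life = 1.0;
var lifeStep = 1.0 / plas.options.frames;

function StepLife()
{
    life = Math.max(0.0, life - lifeStep);     // tick down towards 0
    particleShader.uniforms.life.value = life; // the shader multiplies the effect by this
}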

I’ll have to figure out the other issues at another time as the project has already lasted too long and the effect is starting to look really good. Here’s a video:

One idea for fixing the remaining issue, the noisy look, is to apply a lens blur effect:

[Screenshot: a lens blur added with Photoshop]
[Screenshot: before the lens blur is added]

This removes some of the fine details of the more orange flames, though, so I’d have to try and apply it only to the brighter areas somehow.

While trying to figure out the “life” variable (the time the effect simulates before it fades out), I also managed to accidentally create a “brightness” effect on the final render. I left it in and added a uniform for it, because I had planned to include that effect in the original design anyway.

Increased brightness

For the time-being, the final shader code for actually rendering the effect looks like this:

uniform vec2 pixelUnit;

uniform float deltaTime;
uniform float life;

uniform sampler2D heatMap;
uniform sampler2D pressureSample;
uniform sampler2D velocitySample;
uniform sampler2D roundMap;
uniform float heatMapRow;
uniform float brightness;

varying vec2 vuv;

// number of colour gradient rows in the heat map texture
// (heatMapRow selects which gradient our effect will use)
const float numberOfHeatRows = 8.0;

// main program
void main()
{
    // get velocity as a unit per pixel
    float speed = sqrt(
        texture2D(velocitySample, vuv).x * texture2D(velocitySample, vuv).x +
        texture2D(velocitySample, vuv).y * texture2D(velocitySample, vuv).y
    );

    // calculate the spot on the UV map of the heat texture we want to use
    float heatV = 1.0 - (heatMapRow / numberOfHeatRows);

    // circular falloff sampled around the centre of the image
    float round = texture2D(roundMap, vuv).x;

    // combine speed, the circular falloff and the remaining life into an "amount of heat"
    float heatAmount = (2.0 - speed - round) * clamp(life, 1.0, 10.0);

    // position on the heat map ("amount of heat", which gradient row)
    vec2 heatPos = vec2(heatAmount, heatV);

    // get colour from heat map
    vec4 color = texture2D(heatMap, heatPos) * brightness;

    // final colour (premultiplied by alpha)
    gl_FragColor = vec4(
        color.r * color.a,
        color.g * color.a,
        color.b * color.a,
        color.a
    );
}

That’s all, folks.

Plasmation Update 5 – export and fluid troubles

This week was good and bad.

The good was that after a very quick research session, I was able to easily figure out how to export a PNG from the data in a canvas. Using three.js this was pretty straightforward (after scratching my head a little about how something explained online maps onto three.js).

First, you need to actually set a flag (preserveDrawingBuffer) on the renderer being added to the canvas, which tells it to keep the back-buffer’s data. For some reason it normally gets cleared, and reading the data from the canvas then returns an array of zeros.

renderer = new THREE.WebGLRenderer({preserveDrawingBuffer: true, alpha: true, autoClear: false});

The other two flags firstly tell the canvas that we will be able to see underneath it (the checkerboard effect), and secondly that it will not be cleared automatically (we need to preserve what’s been rendered because we need this data in the shaders).

The next thing was simply using a JavaScript function called “toDataURL”, which grabs pixel data from an image or canvas DOM element (probably others too; I haven’t tried things like SVGs or custom fonts). I then turned this into a function that can be called on the “Plasmation” class (the class that holds all the system’s functionality like updating, initializing and rendering the canvas).

/**
 * @return string
 */
this.BufferData = function()
{
   return renderer.domElement.toDataURL("image/png");
};

I can then get the data returned from this, and simply insert it into the src attribute of an image element. So, instead of an image path on the server, it will contain a “compressed” data version of the image.

<img id="image_output0" src="data:image/png;base64,iVBORw0KGgoAAAANSUhED....." width="100" height="100">

(I’ve put “…” in there because that data would otherwise be about 112,000 characters long…).

The reason for using the base64 data method is that rendering on the canvas is a client-side job, so I literally cannot get a URL for the image on the server unless I converted the data, sent it to the server, converted it back to image data using PHP, saved it to the server, and then had the JavaScript ping the server again for the URL of this new image to insert into the src attribute. This seemed a bit ridiculous, so I simply made use of the data URL that sites like Google use for their search engine (I’m sure you’ve copied the source of an image from a Google image search thumbnail and got a ridiculous URL like the one in the code block above). However, one issue is that my original project spec wants thumbnails of the simulation, which means I may have to do the server round-trip anyway. But for now this approach is faster and has less chance of error.
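If I do end up needing server-side thumbnails, the round-trip could look something like this. The /scenes/thumbnail endpoint, the sceneId variable and the response fields are hypothetical, purely to show the shape of the idea:

// grab the canvas as a base64 PNG and hand it to the server
var thumbData = renderer.domElement.toDataURL("image/png");

$.post("/scenes/thumbnail", { scene: sceneId, image: thumbData })
    .done(function(response) {
        // the server would decode the base64 data with PHP, save the file,
        // and return the URL of the stored thumbnail
        $("#thumbnail").attr("src", response.url);
    });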

Shader loader

Looking at three.js tutorials, there seems to be a trend of people either putting shaders into an HTML element…

 <!-- a noob way of creating vertex shader code -->
 <script id="vertexShader" type="x-shader/x-vertex">

 void main() {

 gl_Position = vec4( position, 1.0 );

 }

 </script>

… or using an array of strings for each line of the shader…

fragmentShader: [

"uniform sampler2D tDiffuse;",
"uniform float amount;",
"varying vec2 vUv;",

"void main() {",
"vec4 color = texture2D(tDiffuse, vUv);",
"gl_FragColor = color*amount;",
"}"

].join("\n")

I looked at this and shook my head, as both cases rule out syntax highlighting and are mostly hard to use. I knew there must be a more usable way, for example how you can just create “shader.glsl” in C++ and load the file.

It turns out loading a file is kind of hard when you don’t have a web server, which is why people opt for these methods (they’re safer for beginners)… but I have a web server… So I created a shader loader function that uses a jQuery AJAX call:

/**
 * custom function that loads a shader
 * uses ajax to read a shader from the web server
 * @description returns a full string of the shader code
 * @return string
 */
this.LoadShader = function(url)
{
   return $.ajax({
      url: url,
      async: false
   }).responseText;
};

Then, to use the shader with a three.js material, I just need to create it like this:

var kernel = this.LoadShader("/shaders/kernel-vert.glsl");
var advect = this.LoadShader("/shaders/advect-frag.glsl");

// create advect calculations shader
// normPixelWidth, normPixelHeight, pixelUnit, pixelRatio;
advectVelocityShader = new THREE.ShaderMaterial({
    uniforms: {
        pixelUnit: { type: "v2", value: pixelUnit },
        pixelRatio: { type: "v2", value: pixelRatio },
        scale: { type: "f", value: 1.0 },
        sourceSample: { type: "t", value: zeroTexture },   // texture 1 in, texture 1 out
        velocitySample: { type: "t", value: zeroTexture }, // texture 1 is used later and loops back to here
        deltaTime: { type: "f", value: options.fluid.step }
    },
    vertexShader: kernel,
    fragmentShader: advect
});

So basically, this allows me to use a single file for each shader, much like Jonas Wagner does with his fluid simulation. (I’ll mention him below).  

You’ll notice that the shaders are stored in variables first. I could simply stick the function call straight into the “vertexShader” property, but I needed to reuse some of the shaders in other three.js materials, so it was neater to load them all beforehand.

The bad news

So my other task this week was to get the fluid simulation working. This, of course, was extremely difficult.

After researching some examples online, I managed to find a WebGL fluid simulation by Jonas Wagner that uses shaders for the velocity, force, advection, jacobi etc. calculations and runs in real time (http://29a.ch/2012/12/16/webgl-fluid-simulation). I contacted Jonas and he gave me permission to use his shaders. I’ll get into what actually had to be changed in a later post, because I had to change a lot, and the project has to be 70% my own work.

After about 2 full days of programming the shaders and little sleep, I managed to get three.js to first have a proper camera, render to a scene for each shader iteration of the fluid simulation, and figure out how to render-to-texture. 

The code for this implementation is too large to show here, so I’ll link the javascript file for the entire thing and you can check that out: http://plasmation.jamesheazlewood.com/js/editor.js

It basically boils down to the following set of steps:

  1. Create renderer context on canvas (described above)
  2. Create an orthographic camera
  3. Load textures to be used, eg the “heat” texture and initial velocity
  4. Load shader files with my ajax shader loader (mentioned above)
  5. Create all needed render target textures “velocityTexture1”, “velocityTexture2”, “divergenceTexture” etc.
  6. Create a scene for each shader pass (6 total)
  7. Create a mesh for each shader pass (6 total)
  8. Apply shaders to the meshes and add them to the scenes for rendering
  9. Position meshes so that they are in the exact centre of the camera and fill up the camera frustum.
  10. Call the render function
  11. In the render function, render each scene 1 by 1, making sure to export to the correct texture.
  12. Where needed, send that texture to the next shader for morphing. 
  13. Repeat the render function 60 times a second. 
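As a sketch of step 5, the render target textures could be created with something like the following, assuming the three.js of the time (WebGLRenderTarget with FloatType so the simulation values aren’t clamped to 8 bits); the sizes and names here are only illustrative:

// one float render target per simulation texture
function createDataTexture(size)
{
    return new THREE.WebGLRenderTarget(size, size, {
        minFilter: THREE.NearestFilter, // no filtering: each texel is raw simulation data
        magFilter: THREE.NearestFilter,
        format: THREE.RGBAFormat,
        type: THREE.FloatType           // so values aren't clamped to 8 bits
    });
}

var velocityTexture1  = createDataTexture(512);
var velocityTexture2  = createDataTexture(512);
var divergenceTexture = createDataTexture(512);
var pressureTexture1  = createDataTexture(512);
var pressureTexture2  = createDataTexture(512);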

The texture I used for the initial velocity of the fluid was something like this:

[The initial velocity texture]

I used the red and green channels as the velocity of the particles. This is just a test for now, because in reality a texture like this can’t represent negative velocity: each channel only gives me a number between 0 and 1, where 1 is the brightest value (255) of the red or green channel. For the final version, I’ll have to make the neutral colour dark yellow (127 red, 127 green), with anything above or below that representing positive or negative velocity.
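A small sketch of that 127-centred encoding (an assumed convention, not the final code):

// map -maxSpeed..+maxSpeed onto 0..255, with ~127 meaning "no movement"
function encodeVelocity(v, maxSpeed)
{
    return Math.round((v / maxSpeed) * 127.5 + 127.5);
}

// the shader-side equivalent would be (texel - 0.5) * 2.0 * maxSpeed
function decodeVelocity(channel, maxSpeed)
{
    return ((channel / 255) - 0.5) * 2.0 * maxSpeed;
}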

Once all this was set up, the next step was to make sure it all worked. And it didn’t, partly because I didn’t fully know what three.js does behind the scenes and which WebGL flags it sets (three.js makes it easy to build normal 3D scenes, but that’s not even close to what I’m trying to do).

The render code iterates over each frame, and looks like this:

// 1
advectVelocityShader.uniforms.sourceSample.value = velocityTexture1;
advectVelocityShader.uniforms.velocitySample.value = velocityTexture1;
advectVelocityShader.uniforms.deltaTime.value = 0.01666;
renderer.render(velocityScene1, camera, velocityTexture2, false);

// 2
addForceShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(velocityScene2, camera, velocityTexture2, false);

// 3
divergenceShader.uniforms.velocitySample.value = velocityTexture2;
divergenceShader.uniforms.deltaTime.value = 0.01666;
renderer.render(divergenceScene, camera, divergenceTexture, false);

// 4
jacobiShader.uniforms.divergenceSample.value = divergenceTexture;

// set up vars for jacobi shader loop
var originalPressure0 = pressureTexture1,
    originalPressure1 = pressureTexture2,
    pressureSwap = originalPressure0;

// this iterates over the jacobi shader (stores texture data and then swaps to another one
// so that the new data can be stored in the second, and then swapped back)
for(var i = 0; i < options.fluid.iterations; i++)
{
    // update shader this iteration
    jacobiShader.uniforms.pressureSample.value = originalPressure0;
    renderer.render(pressureScene1, camera, originalPressure1, false);

    // swap textures around
    pressureSwap = originalPressure0;
    originalPressure0 = originalPressure1;
    originalPressure1 = pressureSwap;
}

// 5
pressureShader.uniforms.pressureSample.value = pressureTexture1;
pressureShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(pressureScene2, camera, velocityTexture1, false);

// 0 render visualiser
particleShader.uniforms.deltaTime.value = 0.01666;
particleShader.uniforms.pressureSample.value = pressureTexture1;
particleShader.uniforms.velocitySample.value = velocityTexture1;

The “false” flag in each render was a “clear screen” flag, which I didn’t want because I wanted to remember the previous values on that texture for the next iteration.

This took a lot of effort to go over the shader tutorials and try and convert them into three.js functionality. But then, it was finally complete. And the result was:

[Screenshot: the broken first result]

It took me about 2 more days to figure this one out (it was something to do with the uniforms in the shader already existing and being bound to the “fog” uniform, even though I wasn’t binding them to anything to do with fog; I assume it was a uniform location issue with WebGL rather than JavaScript).

But once I fixed it, the shader was not even updating over time; it just showed the first frame.

[Screenshot: the simulation stuck on its first frame]

Eventually I added a debugger that displayed certain textures, and I noticed that the second iteration of the velocity was actually working, but in the next render, it was being reset.

The second velocity iteration. Something happened. But this is as far as it goes.

I suspected that the textures being rendered to the scene could not contain negative values, and therefore pressure would not have been working correctly. I also noticed that the jacobi loop was not doing anything to the textures. So it was either something to do with the textures being reset every frame, or the scenes losing their information once another scene is rendered. This may be because the scenes are rendered to a texture and this texture may have been a single one in memory. It could have also been because the textures could not be modified in memory after being initially set. However I had no control over either of these, so I had to figure out a different way of approaching the render cycle.

The amount of frustration involved with this was incredible, as I didn’t know where to even start to find a solution. There were no fluid simulations similar to this online that used three.js, which is what I needed to use for this.  I also didn’t know many people that could help.

 

I’ll get into how I overcame this in the next post.  

Myo game update – Tutorial sequences

This week I focused on editing of my final report and adding tutorial sequences to the game.

As I mentioned in the previous post, one issue people playing the game had was finding it hard to know what to do without me giving hints, despite there being a written tutorial.

I then thought that I could follow traditional methods and incorporate the tutorials into the game.  So, lacking voice actors, I decided to record the tutorials myself and add them into the game using a state manager (described below) that is updated after the player performs a certain action, or they can skip it by making a fist. 

This video shows the tutorial sequence up until completing the game.

It has been suggested that the game needs a few more variations in bosses and a bit of variety of attacks. This is true but I think the current game gives a good demonstration of how to play it. 

After a couple of tests with players, they seemed to understand the game better, although there is also a need to remove some actions from the game to prevent players accidentally performing an action in-game that has not been introduced yet. Perhaps there needs to be a game mechanic where you “learn” actions. This will need to be researched further. 

Tutorial state manager

The state manager was a very simple function that changed the state string, reset a boolean and reset a timer. The timer was used to restrict progression of the tutorials until they had been fully played back to the player.

// changes game state and resets timer
private void ChangeState(string state)
{
    _stateTimer = 0;
    _stateEntered = false;
    _gameState = state;
}

The “_stateEntered” field is basically a boolean used as simple “enter” functionality without having to create an entire state machine for a simple prototype. If it is false during the update loop, the state performs its “enter” functionality (such as playing a sound once and resetting any flags) and then sets “_stateEntered” to true, so that work isn’t performed every frame.

The update functionality is similar to the following:

// state machine to control the game
// mainly used for tutorial states
switch(_gameState)
{
    case "Setup":
        if(!_stateEntered)
        {
            PlaySound("Bad");
            _stateEntered = true;
        }
        if(thalmicMyo.armRecognized)
        {
            PlaySound("Good");
            ChangeState("Tutorial 1");
        }
        break;

    case "Tutorial 1":
        if(!_stateEntered)
        {
            PlaySound("Tutorial 1");
            _stateEntered = true;
        }

        if(jointObject.GetComponent<JointOrientation>().spreadDone && _stateTimer > 16.0f)
        {
            PlaySound("Good");
            ChangeState("Tutorial 2");
        }
        break;

    //... etc.

Basically, all the state manager does is play an audio file, wait for the length of that file (the tutorial speech), and then wait for the player to do something that triggers the next state. Super simple.

Future work

Some future work would include a “learning” system where the player cannot activate any action in the game until he or she has triggered it in a tutorial, thus activating it. This will prevent confusion, seeing as the game has no visuals to help with understanding the environment.

The distant future will include the addition of a story, more enemies, quests and more actions for the player to do. Each enemy would have different combinations of actions to beat it, and doing so will grant new abilities.

After this, I will be incorporating a GPS system to track quests as each quest will require the player to either be tracked moving 1 – 5 KM or a certain number of steps to reach the objective. This will hopefully be an interesting exercise routine app for mobile.

 

Plasmation update 4

TLDR:

  • Got scenes list to display
  • Got user accounts to login and redirect to scenes
  • Made a Facebook page
  • Made another template 
  • Got jquery to work
  • Got three.js into the scene
  • Got the canvas to render a basic scene with a cube

What up

So, I enjoy writing these journals because it lets me explain things to a website, which helps figure things out somehow. 

Anyway.

More website work

The majority of the work this week was sorting out the website templates and getting the users to be able to create an account, login and see their scenes. This was quite easy due to my framework (the “Corn” framework).  I also added it all to BitBucket for version control with git. 

Some issues I had were purely me being an idiot, but they were still issues and they still took up time. The first was trying to use the DBO methods to save the scene once a user created it. I was trying to save about 12 fields at once and it was causing an exception. I realized it was just because I had spelled the last field differently to the one in the SQL preparation query.

The second issue was with trying to get jQuery to initialize the three.js canvas by using $(“#canvas”).appendChild, which should have worked, but apparently jQuery has its own way of doing this, which is just .append(). Nice work.
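For anyone hitting the same thing, the working version boils down to something like this (a minimal sketch, not the exact editor code):

var renderer = new THREE.WebGLRenderer();
renderer.setSize(512, 512);
$("#canvas").append(renderer.domElement); // jQuery's .append(), not appendChild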

I managed to get the canvas working in the end and got three.js to initialize a cube, which is a good start. The next step is getting the shaders running. Looking around the net, I found a WebGL fluid simulation example by Jonas Wagner that uses IBOs etc. and texture maps to store things like pressure and velocity. I can go through these for ideas on ways to implement my own type of fluid simulation, including a pixel shader for the “heat” of each cell based on a gradient of colours over heat level. This will let users pick whatever colours they want via these colour gradients.

Here’s some screenshots (the interface still needs styling up, but that should be done after I get the fluid simulation and exporting done.)

[Screenshots of the current interface]

More redesign

I’ve had some thoughts and I still can’t decide on the format of the data in the database. On one hand, I want to be able to post the JavaScript object data straight to the database as a JSON string, which would be faster and easier. On the other hand, this is both insecure and makes the data harder to read once saved to the database, which may prove hard to debug in the future. It also poses the risk of the data becoming corrupt if I screw up the saving functions at any stage.

After contemplating a lot, the way I’m leaning at the moment is having the JavaScript in the editor post JSON queries to the server that do specific things like “update resolution”, or something more compact like “save scene properties”. This would be good because I can compact and send only the variables that need updating, when they need updating. It also lets me remove the “save” function and only keep “save as”, because the scene will technically be saved automatically.
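A hedged sketch of what one of those auto-save calls could look like; the endpoint and field names are made up purely to show the idea:

// post one compact "save scene properties" query whenever something changes
function SaveSceneProperties(scene)
{
    $.post("/scenes/update/" + scene.id, {
        action: "save scene properties",
        resolution: scene.resolution,
        frames: scene.frames,
        horFrames: scene.horFrames
    }).fail(function() {
        console.warn("Auto-save failed, will retry on the next change");
    });
}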

The downside is that I would need to write many functions that get and set this data, and these will all need to be written, secured and tested in the test phases, which need to be completed this week. There should be a fair bit of copying and pasting for most of it, though, so I could turn a lot of the repetitive functionality into shared functions.

An issue that has cropped up during this redesign is that a simulation will not initially have much randomness in the particle generation. It would therefore be a good idea to let the user “paint” the areas where pressure and velocity start in the simulation, otherwise the result would look very round and boring. I’ve added another tool to the toolbar that lets the user paint onto the layer / emitter the direction the fluid pixels will have from the first frame. This also gives the user more freedom.

Facebook page

Pretty straightforward: created a page, added a picture, and linked it in the footer of the app. I might need to make a Google+ account and something else, like Tumblr, YouTube (for tutorials), or Pinterest.

Trello

I’ve also started adding things to Trello, making sure I have tickets for everything and that they go into the correct column when done and tested.

 What’s next?

The first thing I want to get working is the exporting ability. This will require somehow grabbing the pixels out of the canvas element and saving them as a 32-bit PNG.

In the next 2 weeks or so, I will be getting the fluid system into the application. This will require researching some fluid code done with webgl and implementing it with three.js. Looking at some demos online, this should be fairly straightforward. 

Myo game update 4

Last week I completed the first version of the Myo game prototype, now titled “Echo of Heroes”.

The game was a lot simpler than I imagined, because I wanted it to be as simple as possible for players to get the hang of in a testing environment.

I even made an A4 poster for it:

[The A4 poster]

 

The game was made for a research report about how a fun game could be made using the Myo and without visual feedback. The main reason for this is the ability to play a game without having to engage your visual senses, therefore making it a better platform for exercise routines on mobile. The Myo uses Bluetooth and can be taken outside, so the game is eventually going to become a larger game like Zombies, Run!, and incorporate missions and jogging.

The way I put it together was with 2 main pieces of technology plus some clever audio design.

[Screenshot: the game’s colliders in Unity]

The screenshot shows how the game’s “colliders” are put together. These are basically what Unity3D calls “things that, if other things overlap with them, things happen”. They are built into Unity and can be easily utilized. The game uses them like this: when the player swings the arm wearing the Myo armband and the pretend sword collider overlaps with the capsule in front of them, it registers a hit. The same applies to the collider above the sword (in the screenshot), but that one is used for “blocking”: if the user holds his or her hand up in the air high enough, it will collide with that flat box and can trigger the player blocking.

The way I conveyed the game state was with a big loud voice echoing through your headphones saying “haha, this is the end of the line, you tiny little speck” to signify that there’s a big bad monster in front of you. The big boss attacks, and this is identified by a climbing “reverse sword clang” sound effect stretched out over about 3 seconds; at the end, if the player is blocking, it makes a “clang” sound, otherwise a “bash” sound and a type of buzzer sound to signal that the player has been hit and this is bad. The player attacks the boss by waiting for him to swing, and as soon as he hits the player’s block, the player has 1 second to react by swinging his or her sword at the boss. If it hits, you hear an “ouch” sound from the boss; if not, you hear a bad buzzer and a clang noise, as if you are hitting the boss’s shield.

The Myo’s SDK was easy to import into Unity because they have bundled with it a package that lets you easily set up the environment. 

After studying binaural sound a little bit, I ran into a Unity plugin (free for students) called 3Dception by Two Big Ears.  This also came with an easy Unity package.

One of the clever things I implemented was the audio handling. Because Unity allows one sound emitter per transform, I had to create a bunch of these transforms and attach a sound to each. I added them all to a parent called “Sounds” and referenced this from the game’s logic-handling object. I then added a sound manager function to the logic object so that I only need to call, for example, PlaySound(“Hurt”) to get it to play a sound. The function basically searches for the child with the name passed in and then plays the sound on that object. This works well and makes future code more usable.

I also used the 3Dception audio for the sword position by having it constantly play a fire loop. The swinging sound was also played when the player swings the sword (how I did this is mentioned in a previous article). The binaural nature of the sound made it feel like the sword was where it was in-game. To test this with the users, I added a question about it to the survey. About half of the people that did the survey said that they could tell, but it is not a very useful result, as I did not explain to them that it was an advanced feature and they could have interpreted the question as a difference from mono sound, for example. To properly test this, the users would have needed to know a bit about the issues with sound positioning in real life and how the ear is clever enough to know where sounds are coming from. They would also need to have a “good ear”, as in, know the difference between the normal stereo sound positioning in most games and the binaural feature.

However, there was a discussion I was involved in online about the binaural technique and was made aware of even more technology. It would be a good idea to study this further and compare different solutions.


One problem I ran into was that the player first had to tell the Myo which direction the boss was in. This is because the Myo’s absolute position is sent to the game, and it could be at any angle. This was half incorporated into the Unity package that came with the Myo, but it needed to be made usable in the game and then disabled so that a user doesn’t accidentally activate it again. The way I approached this was simply to ask them to spread their fingers at the beginning of the game while holding their hand up, to show them where to point it. This worked well, except at some points the Myo did not register the finger-spread gesture, and I was unable to do it myself without having to take the Myo off their arm and re-set it on my own.

Another issue was that when users tested the game, they had to sync the Myo with their body. Traditionally, this was a painfully long process of performing gestures, but they improved it to simply doing a “setup gesture”, which was waving your hand away and then rotating your arm in the same direction. This was still difficult to explain to people, even with a picture showing them what to do.

Overall, the results showed that despite the technical issues the game was fun, but it needed some sort of tutorial sequence to get the player used to it.

Plasmation update 3

This week I focused on getting the design documentation and technical documentation complete for the project.

Database design

Something I ran into is that because the data for the online tool will be saved in a database, I need to decide what format the data is saved in. The two choices were to store things in the database as their own fields, for example “camera_x”, “tint_red”, “tint_blue” etc., but some things would need to be relational and in second normal form. This occurs with the curves that change over time: things like the colour of the effect transitioning over time need about 4 numbers per node in the curve editor, which would mean an indeterminate amount of data for each scene. Therefore, after some research, I’ve decided to just save numbers per frame without curves. This means the database will have a table called “keyframes”, and each keyframe will belong to a certain scene. They will contain a number for which frame it is, plus certain colour and other data, and in the app they will simply transition with a normal “lerp” (linear interpolation), as sketched below. Future versions could have curve types like “easing”, but I don’t have time to make it more complex.
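Here’s a minimal sketch of that per-frame lerp between two stored keyframes (the keyframe fields are examples, not the final schema):

// linear interpolation between two values
function Lerp(a, b, t)
{
    return a + (b - a) * t;
}

// work out the colour at an arbitrary frame between two keyframes
function ValueAtFrame(prevKey, nextKey, frame)
{
    var t = (frame - prevKey.frame) / (nextKey.frame - prevKey.frame);
    return {
        red:   Lerp(prevKey.red,   nextKey.red,   t),
        green: Lerp(prevKey.green, nextKey.green, t),
        blue:  Lerp(prevKey.blue,  nextKey.blue,  t)
    };
}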

New interface

Because of this new keyframe method, I had to re-make the design documentation and change the example screenshots.

[Mock-up of the redesigned interface]

As you can see, the interface no longer has tabs for “workspace” and “export” views; you simply see what the finished product will look like. Each layer will be rendered to a graphics buffer and cached for easy scrubbing through the timeline. The third button on the bottom-left of the timeline is an “add keyframe” button, which adds a keyframe to all layers.

The reason I didn’t want to combine the layers into a single buffer is because the fluid simulation would otherwise need to be updated for all layers rather than just 1. This also allows transformation of layers individually without having to re-render the images underneath. 

Later this week, I will be putting the fluid simulation into the tool and setting up the JavaScript framework for it to render to buffers.

Myo game update 3 – Echo of Heroes

This week I focused on setting up the Myo and getting it to run in Unity.

The package that comes with the Myo development kit makes this easy, as it comes with an example scene element (a ball on a stick, which is literally just that) and a bunch of code that allows the Myo to rotate this ball on a stick. It also lets you change its texture and trigger vibration with some gestures.

My goal this week was to get the scene working with sound and get an android build working on my phone.

Android compatibility

The first test was simply using the scene and compiling it for Android. After doing this and launching it on my phone, the first issue emerged: of course, my phone did not detect the Myo, even with Bluetooth switched on. I tried another method of using a USB extension cable (micro USB into the charger port and a normal USB out) and connecting the Bluetooth dongle to that. I did this because some research revealed that the Myo needs its particular dongle hardware to function at all.

In desperation, I contacted my trusty Myo contact via email, but got a vacation notice as a reply. I then decided to go on the Myo developer forums and ask how I would go about getting it to work on my Android phone. A Myo developer replied saying that the Myo is supported on Android, but not through Unity; however, it would be supported in the future.

This was unfortunate because I had not allowed time in my schedule to get the game working on Android using Java or C++, so I would have to simply make a prototype on Windows using Unity and present it with the intent of going mobile once Unity is supported with the Myo on Android. Not a complete loss, but it’s still a bit of a damper on my hopes and dreams of a fun audio game. It also means I have to re-do some of my test plans, as they relied on a questionnaire specifically about a mobile game.

Implementing audio

The next goal was making the game play audio. This was simple, as Unity has an inbuilt audio system, so it was just a case of importing a WAV file to play. I used a fireball dart sound as the sword swing. To get it to play, I wanted to somehow detect that I am swinging the sword and then play the swing sound. To do this, I first tried measuring the difference between the quaternions from the previous frame and the current frame. This turned out to be more complicated, as the Myo seemed to not always send data to Unity at a constant rate, so the resulting difference seemed to be either a random number or 0. This, I assume, is because the Myo does not send data at the same rate as the game update ticks, so the comparison would sometimes compare the same position within one Myo update and the difference would be 0.

A way to fix this would be to have a function that runs “onMyoUpdate()” and do the calculations there. However this may make the game go out of sync and make calculations with the game world harder. Also when working with audio, it is a lot easier to hear tiny differences in the audio volume or pitch than to visually see something, and so having some updates being skipped may cause the audio to be “choppy”.

In the end, I simply coded around it by first getting the difference between the position of the end of the sword (the sword doesn’t actually move, it only changes angle, so the sword tip accurately represents the velocity of it spinning) on the last frame and the current frame, and then adding that onto a “power” variable. This variable also decreases slightly each frame, but never goes below 0. This means that even if the difference is 0, the power value can still be higher than 0.

Using this power variable and a cool-down of about half a second,  I got the sound to play when the power was higher than a certain level, but also change the volume of the sound by the amount of power. This effectively made the long swing sound fade out quickly if the user stops swinging suddenly. The cool-down was added so the sound wouldn’t start to play continuously as the user swings. A better option than a cool-down would be a variable that calculates if the sword is travelling in a certain direction each frame, and if it changes, would cause it to reset some flag, telling the script that the sound can be played again. However, at this point in time we can go with the cool-down option and see how the testers go with the game.

Below is a video of the result, also showing the mentioned 0s in the console coming from the Myo.

What’s next?

The next thing I will be implementing will be a sound for when the sword collides with the center of the scene which would represent an enemy.

After this, I will implement a scoring system, where the enemy will have 5 hit points and be blocking for some of the time. Blocking will be signified by a “shield being hit” sound with echo. The enemy will also make a sound when he is about to swing; when this happens, you will be expected to block. To block, the player must hold his or her arm vertically and make a fist gesture. When this is done correctly, you will hear the same sort of sound as the enemy (but it needs to be different, to avoid the player thinking the enemy blocked instead). This will be the core of the game.

Ideas for future

As the game cannot be ported to mobile using Unity, we will not be seeing a mobile app in this version. However, once the Myo is supported in Unity on mobile, the game could expand into a jogging adventure app where an adventurer goes on quests to slay enemies and get loot. The catch is that they have to jog at least 2 km (using GPS or a treadmill) and fight the enemy before returning home.

Research update

As described in a previous post, I’ll need to research other motion capture and gesture capture controls like the Razer Hydra, Perception Neuron and the Nintendo Wii controller.

I will then look into other mobile games and how they handle audio. Examples would be “Papa Sangre” and “Zombies, Run!”. One notable feature of Papa Sangre is that it muffles audio from things that are behind the player. This is not a built-in feature in Unity, although it can be implemented. One issue is that the Myo game I am proposing assumes the player’s head is facing towards the enemy, as there is no head tracking. This will need to be experimented with.

Test cases

  • Does audio panning and muffling help, or does it cause a weird experience?
  • Cool-down on sword. Is it annoying if you swing at a weird time? Should it play based on velocity or based on change of direction?
  • How fun is the game in general?

Plasmation update 2

Progress

Straight into it. So far, I’ve managed to get a web server up and running and set up an environment to view the particle simulation.

[Screenshots of the web server and environment setup]

So far, so good.

Something to note here is that this is heavily based on the Cornstar framework. However, I’ve made some modifications to the framework to make it more modular, like moving classes into different files and separating Plasmation-specific functions into their own set of files. The folder structure can be seen here in PHPStorm:

[PHPStorm screenshot of the folder structure]

The benefit of this is that I can use the framework in other projects simply by keeping it in its own repository. If I need to update any of the framework, I can commit it to that repository and pull the changes straight from there into other projects.

To see the difference, see the Cornstar blog entries and compare.

Milestones

Week | Date | Assignment item | Milestone
1 | 6 August | |
2 | 13 August | |
3 | 20 August | |
4 | 27 August | Submit implementation plan |
5 | 3 September | | 1. Website Foundation completed
6 | 10 September | |
7 | 17 September | | 2. Demo of workspace completed
8 | 24 September | |
9 | 1 October | Design documentation due | 3. Exporter completed
10 | 8 October | Report due | 4. Particle system and graphics finalized
11 | 15 October | | 5. First version launched
12 | 22 October | Test plan / QA documentation due | 6. First round of user testing complete
13 | 29 October | | 7. Second version launched
14 | 5 November | | 8. Testing and testing documentation complete
15 | 12 November | Assignment submission date | 9. All documentation complete
16 | 19 November | |
17 | 25 November | |

 

Changes to specs

After all this, I found that my time management was a bit off: the reality was that I had planned to create a full-on 3D Navier–Stokes particle simulation. This was a bit silly, as the final result was going to be 2D anyway, and the initial plan had changed to no longer include a camera in a 3D world. This means I may as well just do a 2D Navier–Stokes simulation using code from multiple places on the net.

Finding code for the simulation

After deciding that it’d be easier to do 2D fluid simulations, I went back and looked at some of the simulations that use webGL:

http://haxiomic.github.io/GPU-Fluid-Experiments/html5/

 http://29a.ch/sandbox/2012/fluidwebgl/

http://www.ibiblio.org/e-notes/webgl/gpu/droplet.htm

http://www.cake23.de/traveling-wavefronts-lit-up.html

http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm

I’ll be looking at these in detail and determining which is best to use for my situation. Most of them were mentioned in my research journal, but only as examples.

I am a professional at Development Methodologies

Today I plan on planning to do a plan. Well, I’m actually doing the plan.

This will be the milestones for my development methodology report, where I must talk about multiple project management development methodologies before creating an argument about which one to use in a small game company or personal project.

For the project, I plan on doing the development methodology for my Software Development 4 project (Echo of Heroes), which will be done by myself, so this will be taken into consideration. The idea will be to choose a good methodology for this project, based on things like avoiding unnecessary work, such as time spent writing documentation that only other people in a large team would read.

Here’s the milestones:

WEEK | DATE | ASSIGNMENT ITEM | MILESTONE
4 | 28 August | | Find research.
5 | 4 September | | Intro, template and fill text.
6 | 11 September | | Waterfall, template, abstract (intro sections).
7 | 18 September | | Spiral, Scrum, comparison sections.
8 | 25 September | | TDD, RAD, comparison sections.
9 | 2 October | | Incremental funding method, Feature driven development, comparison sections.
10 | 9 October | Initial report due | Conclusion, comparison sections, and clean up. Hand in.
11 | 16 October | | Intro and plan for chosen development methodology.
12 | 23 October | | 1/3rd of case study sections.
13 | 30 October | | 2/3rd of case study sections.
14 | 6 November | | 3/3rd of case study sections.
15 | 13 November | | Conclusion and abstract. Finalise.
16 | 20 November | Final report due | Proof read and hand in. Buffer.

 

As these are based on what looks like a draft of the development methodology assignment outcomes, I can’t say they are the actual assignment due dates. But there’s no harm in having them earlier than usual.