Plasmation – Version 1 put online

Hey folks.

Just a short post saying that I’ve finished version 1 and could do with feedback.

Here are some screenshots:

[Screenshots]

 

You can mess with it here: plasmation.jamesheazlewood.com/scenes/editor/1

Here’s a video showing how to use the interface:

The exporter

While I’m here, I should get into how I managed to get the exporter working.

My first attempt was to change the renderer so it would render to a bunch of textures stored in an array. This worked well and allowed me to use a jQuery slider to scrub through the exported frames. However, because all these images were stored in WebGL buffers, it was hard to get the data back out of them. Another issue was that it used an enormous amount of GPU memory, as the WebGL context would crash multiple times when exporting more than 100 or so images (I guess storing over 100 512×512 textures is a bad idea…)

My next attempt was to simply use the canvas to render. I rendered the animation to the canvas the same way I had been before, except implementing things like frame skips and letting the user control whether the output has opacity or not.

This was fairly easy, apart from multiple bugs with jQuery (I ended up just using the context straight from plain JavaScript).

The frame constructor works by rendering a frame to the main canvas, then drawing that frame onto a separate 2D canvas, on top of what was already there, at the proper location. This was easy, and it let me choose the scaling of the image as well (the scaling kind of sucks at the moment, but I want to wait and see how it looks once I add the lens blur effect to deal with that noisy velocity thing).

Below is the code for creating the sprite-sheet client-side:

// reset the simulation and render the first frame
plas.Reset();
plas.RefreshShaders();
plas.options.viewMode = 1;
plas.Render();
plas.ticks = 0;

var image;
var canvas = document.getElementById('image_compile');
var context = canvas.getContext('2d');
var scale = plas.options.scale;
var size = (512 * scale);
var frameCount = plas.options.frames;

// work out how many rows the sheet needs, then size the 2D canvas to fit the grid
var h = Math.round(frameCount / plas.options.horFrames);
canvas.width = size * plas.options.horFrames;
canvas.height = size * h;
context.globalAlpha = 1.0;

var rFrame = 0;
var x, y;
while(rFrame < frameCount)
{
    if(plas.Render())
    {
        // grid cell this frame belongs in
        x = rFrame % plas.options.horFrames;
        y = Math.floor((rFrame - x) / plas.options.horFrames);

        // copy the rendered frame from the WebGL canvas into that cell
        image = new Image();
        image.src = plas.BufferData();
        context.drawImage(image, x * size, y * size, size, size);

        rFrame++;
    }
}
$("#image_output").attr("src", canvas.toDataURL("image/png"));

Once the frames are rendered to the new canvas, it takes all the data from that canvas, converts it to a data URL (a base64-encoded string, the same format I was already using) and puts that in an image ready for exporting. The user can then just save the image to their hard drive.

I noticed that you can save a canvas directly as an image in Firefox and Chrome, which meant I didn't even need to create an image from the canvas data. However, although Internet Explorer 11 ran the simulation really well, it couldn't save canvas data as an image, so I had to convert it to an image anyway.

The next step was to go through all the settable options in the scene and give them controls. I used sliders because I find it takes fewer clicks and less typing to just move a slider back and forth. I added titles to these sliders and got them to also draw their values underneath.
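Each control follows roughly the same pattern; here is a sketch using the jQuery UI slider widget (the element IDs are only illustrative, and the exact plugin is an assumption):

// sketch: a slider controlling the frame count, with its value drawn underneath
$("#frames_slider").slider({
    min: 1,
    max: 64,
    value: plas.options.frames,
    slide: function(event, ui) {
        plas.options.frames = ui.value;    // update the scene option
        $("#frames_value").text(ui.value); // show the current value under the slider
    }
});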

Done.

 

Plasmation Update 6 – Got it working

So, I got it working. Finally.

I managed to get so stuck with the fluid simulation that I searched online for GPGPU examples, and found this: a really well-written example of how to use the RGB channels of a “velocity” texture for the XYZ co-ordinates of objects on the screen, and have it actually work in three.js.

So I painstakingly went through each line of its code and worked out how to fold it into my own fluid simulation program without having to delete too much of my existing code.

The major difference between this method and my original one was the mesh used to render the texture for each shader pass. Instead of adding a mesh to a separate scene per render pass, the example creates one single scene for rendering the fluid on one mesh, and then just swaps the texture used by that mesh between the pre-defined textures.
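In rough terms (this is a sketch, not the example's exact code), the pattern looks like this: one scene, one full-screen quad, and the input texture swapped on the quad's material before each pass:

// one scene and one quad reused for every shader pass
var passScene = new THREE.Scene();
var passQuad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), advectVelocityShader);
passScene.add(passQuad);

// run a pass: swap in the material and its input texture, then render into a target
function runPass(material, inputTexture, outputTarget)
{
    passQuad.material = material;
    material.uniforms.velocitySample.value = inputTexture;
    renderer.render(passScene, camera, outputTarget, false);
}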

After a few bug fixes and frustration the result was:

[Screenshot]

Looks better than a single frame! There was still no accumulation between frames, though, as it was simply replacing the velocity from the previous iteration with the initial velocity, and it wasn't using the divergence shader as it should.

From further investigation (and another entire day of banging my head from the lack of debugging ability with such a complex system), I figured out that each texture was being over-written once it entered the next shader. This meant that when the velocity texture from the first iteration entered the “Add velocity” shader, it was over-written with the new velocity, and the wrong values were then sent to the divergence shader. To fix this, I tried adding a new texture to the system to use for adding velocity onto the velocity texture from the first pass…

Success!

[Screenshot]

Here you can see the velocity X (red), the velocity Y (green) and the pressure (blue).
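In terms of the render passes, the fix amounts to something like this (“velocityTexture3” is just the name I'm using here for the extra texture):

// the advect pass writes into velocityTexture2 as before
advectVelocityShader.uniforms.velocitySample.value = velocityTexture1;
renderer.render(velocityScene1, camera, velocityTexture2, false);

// the "add velocity" pass now writes into its own target instead of back over its input...
addForceShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(velocityScene2, camera, velocityTexture3, false);

// ...so the divergence shader receives velocity values that haven't been clobbered
divergenceShader.uniforms.velocitySample.value = velocityTexture3;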

I then created a texture that I could use to sample the colour based on the velocity of the fluid:

[Screenshot]

I then rendered these colours based on the pressure. After some trial and error, and after fixing a weird bug where the colours went completely white past the texture's usual 1.0 value (caused by the texture looping back to the start of the image), the results were interesting:

[Screenshots]
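As a side note on that wrap-around bug: if the heat gradient is a normal three.js texture, one fix is to clamp its wrap mode so lookups past 1.0 hold the last pixel instead of looping back to the start (“heatTexture” here stands for whatever the heat map is loaded into); clamping the coordinate inside the shader works too:

// stop the heat map from repeating when the lookup coordinate goes past 1.0
heatTexture.wrapS = THREE.ClampToEdgeWrapping;
heatTexture.wrapT = THREE.ClampToEdgeWrapping;
heatTexture.needsUpdate = true;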

 

I sent it to someone and they said that it reminded them of a pool of lava. (A pity I didn’t record a video of it because it looked really smooth and mesmerizing). But I wanted it to look like an explosion. 

After more tweaking, I got the fluid to fade out to 0 when there was little-to-no pressure:

[Screenshot]

I then did some polishing and included more colours in the “heat” texture to get more fun colours in the flames:

[Screenshots]

There were still a few problems (besides an issue with my original velocity texture, where subtracting 0.5 from it wouldn't shift the velocity by half like I expected, but I won't get into that):

  • The render can reach the edge of the screen and get cut off, which is an undesired result for a sprite.
  • The edge of the screen was prone to looping back and causing the fluid to exponentially fling out of the edge.
  • The effect lasted too long for a sprite-sheet (about 100 frames!).
  • Velocity that is too highly concentrated in one area creates a noisy patch, when it should look completely white (caused by the shaders only being able to calculate pixels either side of one another, without regard for the rest of the effect).
[Screenshot: noisy velocity. An issue.]

I solved the first problem by introducing a texture that I could multiply against the blending level of the heat texture:

[Image: opacity texture]

 

With this sampled per pixel, I was able to adjust the flames so that the further they trail from the centre, the further the sampled colour gets pushed towards the right side of the heat texture, which makes the effect seem to burn out naturally the further it gets from the centre:

[Screenshot]

 

To prevent chaotic bleeding of velocity, I added another sample texture, this time to the “add velocity” shader. It basically has a region around the outside of the image that can be multiplied with the velocity to zero it out around the edges:

[Image: edges texture]

 

The “add velocity” shader now looks like this (notice the edges are darker):

[Screenshot]

This method is a bit of a cheap hack, but because I'm not too worried about the frames-per-second, and I have the circle texture limiting the drawing of the fluid anyway, it works well for the situation.
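For reference, a mask like this doesn't have to come from an image file; here's a rough sketch of generating one at run-time with a 2D canvas radial gradient (the helper name is made up):

// build a square mask that is white in the middle and fades to black at the edges
function makeEdgeMask(size)
{
    var maskCanvas = document.createElement('canvas');
    maskCanvas.width = maskCanvas.height = size;
    var ctx = maskCanvas.getContext('2d');

    var gradient = ctx.createRadialGradient(size / 2, size / 2, size * 0.3,
                                            size / 2, size / 2, size * 0.5);
    gradient.addColorStop(0, '#ffffff'); // full velocity kept in the centre
    gradient.addColorStop(1, '#000000'); // velocity multiplied down to zero at the edges

    ctx.fillStyle = gradient;
    ctx.fillRect(0, 0, size, size);

    var maskTexture = new THREE.Texture(maskCanvas);
    maskTexture.needsUpdate = true;
    return maskTexture;
}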

For the effect lasting too long, I introduced a “time” variable that starts at 1.0 and decreases by a small amount each frame. Once it reaches 0.0, the effect has faded out completely. I multiply this number by the effect, much like the circle mask from the first issue above.
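A minimal sketch of that fade (“effectShader” here stands in for the ShaderMaterial that renders the effect with the shader shown further down, and the step size is only an example):

// fade control: starts at 1.0 and steps down a little every simulated frame
var life = 1.0;

function UpdateLife()
{
    // spread the fade over the sheet's frame count so it reaches 0.0 on the last frame
    life = Math.max(0.0, life - 1.0 / plas.options.frames);
    effectShader.uniforms.life.value = life;
}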

I’ll have to figure out the other issues at another time as the project has already lasted too long and the effect is starting to look really good. Here’s a video:

One idea for the remaining issue, the noisy look, is to apply a lens blur effect:

[Screenshot: a lens blur added with Photoshop.]
[Screenshot: before a lens blur is added.]

This removes some of the fine detail of the more orange flames, though, so I'd have to try and apply it only to the brighter areas somehow.

I also managed to accidentally create a “brightness” effect on the final render while trying to figure out the “life” variable (the time the effect simulates before it fades out). I left it in and added a uniform for it, since I had planned to include that effect in the original design anyway.

[Screenshot: increased brightness.]

For the time-being, the final shader code for actually rendering the effect looks like this:

uniform vec2 pixelUnit;

uniform float deltaTime;
uniform float life;

uniform sampler2D heatMap;
uniform sampler2D pressureSample;
uniform sampler2D velocitySample;
uniform sampler2D roundMap;
uniform float heatMapRow;
uniform float brightness;

varying vec2 vuv;

// represents which colour gradient our effect will use
const float numberOfHeatRows = 8.0;

// main program
void main()
{
    // get velocity as a unit per pixel
    float speed = sqrt(
        texture2D(velocitySample, vuv).x * texture2D(velocitySample, vuv).x +
        texture2D(velocitySample, vuv).y * texture2D(velocitySample, vuv).y
    );

    // calculate the spot on the UV map of the heat texture we want to use
    float heatV = 1.0 - (heatMapRow / numberOfHeatRows);

    // calculate heat based on circular center around center of image
    float round = texture2D(roundMap, vuv).x;

    // combine speed, the circular falloff and the remaining life into a heat amount
    float heatAmount = (2.0 - speed - round) * clamp(life, 1.0, 10.0);

    // position on the heat map "amount of heat"
    vec2 heatPos = vec2(heatAmount, heatV);

    // get colour from heat map
    vec4 color = texture2D(heatMap, heatPos) * brightness;

    // final colour (alpha multiplied into the colour channels)
    gl_FragColor = vec4(
        color.r * color.a,
        color.g * color.a,
        color.b * color.a,
        color.a
    );
}

That’s all, folks.

Plasmation Update 5 – export and fluid troubles

This week was good and bad.

The good news was that, after a very quick research session, I was able to easily figure out how to export a PNG from the data in a canvas. Using three.js this was pretty straightforward (after scratching my head a little bit over how to do something explained online within three.js).

First, you need to set a flag (preserveDrawingBuffer) on the renderer being added to the canvas that tells it to keep the back-buffer's data. For some reason it normally gets cleared, and reading the data from the canvas then returns an array of zeros.

renderer = new THREE.WebGLRenderer({preserveDrawingBuffer: true, alpha: true, autoClear: false});

The other two flags tell the canvas that we will be able to see underneath it (the checkerboard effect) and that it will not be cleared automatically (we need to preserve what's been rendered because this data is used in the shaders).

The next thing was simply using a JavaScript function called “toDataURL”, which grabs the pixel data from a canvas DOM element as an encoded string. I then turned this into a function that can be called on the “Plasmation” class (the class that holds all of the system's functionality, like updating, initialising and rendering the canvas).

/**
 * @return string
 */
this.BufferData = function()
{
   return renderer.domElement.toDataURL("image/png");
};

I can then get the data returned from this, and simply insert it into the src attribute of an image element. So, instead of an image path on the server, it will contain a “compressed” data version of the image.

<img id="image_output0" src="data:image/png;base64,iVBORw0KGgoAAAANSUhED....." width="100" height="100">

(I’ve put “…” in there because that data would otherwise be about 112,000 characters long…).

The reason for using the base64 data method is that rendering on the canvas is a client-side job, so I literally cannot get a URL for the image on the server unless I converted it, sent the data to the server, converted it back to image data using PHP, saved it to the server, and then had the JavaScript ping the server again for the URL of this new image to insert into the src attribute. That seemed a bit ridiculous, so I simply made use of the data URL approach that sites like Google use for their search thumbnails (I'm sure you've copied the source of an image from a Google image search thumbnail and had it return a ridiculous URL like the one in the code block above). One issue is that my original project spec wants thumbnails of the simulation, which means I may have to do the server round-trip anyway; but for now this way is faster and has less chance of error.
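If thumbnails do end up having to live on the server, the client side of that round-trip would only be a few lines (the URL here is made up):

// hypothetical upload: send the base64 data URL to the server so PHP can decode and save it
$.post("/scenes/thumbnail/1", { image: plas.BufferData() }, function(response) {
    // the server could respond with the saved image's URL to use as the thumbnail src
    console.log("thumbnail saved:", response);
});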

Shader loader

Looking at three.js tutorials, there seems to be a trend of people either putting shaders into an HTML element…

 <!-- a noob way of creating vertex shader code -->
 <script id="vertexShader" type="x-shader/x-vertex">

 void main() {

 gl_Position = vec4( position, 1.0 );

 }

 </script>

… or using an array of strings for each line of the shader…

fragmentShader: [

"uniform sampler2D tDiffuse;",
"uniform float amount;",
"varying vec2 vUv;",

"void main() {",
"vec4 color = texture2D(tDiffuse, vUv);",
"gl_FragColor = color*amount;",
"}"

].join("\n")

I looked at this and shook my head: neither approach gets syntax highlighting and both are awkward to work with. I knew there must be a more usable way, like how you can just create “shader.glsl” in C++ and load the file.

It turns out loading a file is kind of hard when you don't have a web server, which is why people opt for these methods (they're safer for beginners)… but I have a web server… So I created a shader loader function that uses a jQuery AJAX call:

/**
 * custom function that loads a shader
 * uses ajax to read a shader from the web server
 * @description returns a full string of the shader code
 * @return string
 */
this.LoadShader = function(url)
{
   return $.ajax({
      url: url,
      async: false
   }).responseText;
};

Then, to use the shader with a three.js material, I just need to create it like this:

var kernel = this.LoadShader("/shaders/kernel-vert.glsl");
var advect = this.LoadShader("/shaders/advect-frag.glsl");

// create advect calculations shader
// normPixelWidth, normPixelHeight, pixelUnit, pixelRatio;
advectVelocityShader = new THREE.ShaderMaterial({
    uniforms: {
        pixelUnit: { type: "v2", value: pixelUnit },
        pixelRatio: { type: "v2", value: pixelRatio },
        scale: { type: "f", value: 1.0 },
        sourceSample: { type: "t", value: zeroTexture },   // texture 1 in, texture 1 out
        velocitySample: { type: "t", value: zeroTexture }, // texture 1 is used later and loops back to here
        deltaTime: { type: "f", value: options.fluid.step }
    },
    vertexShader: kernel,
    fragmentShader: advect
});

So basically, this allows me to use a single file for each shader, much like Jonas Wagner does with his fluid simulation. (I’ll mention him below).  

You’ll notice that the shaders are stored in variables first. I could simply stick the function call in the “vertexShader” part of the call, but I needed to use some of the shader calls in other three.js shaders, so it was neater to include them all before-hand.

The bad news

So my other task this week was needing to get the fluid simulation working. This, of course, was extremely difficult. 

After researching some examples online, I managed to find a WebGL fluid simulation by Jonas Wagner that uses shaders for the velocity, force, advection, jacobi and other calculations, and runs in real time (http://29a.ch/2012/12/16/webgl-fluid-simulation). I contacted Jonas and he gave me permission to use his shaders. I'll get into what actually changed in a later post, because I had to change a lot, and the project has to be 70% my own work.

After about 2 full days of programming the shaders on little sleep, I got three.js to set up a proper camera and render a scene for each shader iteration of the fluid simulation, and figured out how to render to texture.

The code for this implementation is too large to show here, so I’ll link the javascript file for the entire thing and you can check that out: http://plasmation.jamesheazlewood.com/js/editor.js

It basically boils down to the following set of steps:

  1. Create the renderer context on the canvas (described above).
  2. Create an orthogonal camera.
  3. Load the textures to be used, e.g. the “heat” texture and initial velocity.
  4. Load the shader files with my AJAX shader loader (mentioned above).
  5. Create all the needed render target textures: “velocityTexture1”, “velocityTexture2”, “divergenceTexture” etc. (see the sketch after this list).
  6. Create a scene for each shader pass (6 total).
  7. Create a mesh for each shader pass (6 total).
  8. Apply the shaders to the meshes and add them to the scenes for rendering.
  9. Position the meshes so that they are in the exact centre of the camera and fill up the camera frustum.
  10. Call the render function.
  11. In the render function, render each scene one by one, making sure to output to the correct texture.
  12. Where needed, send that texture to the next shader for morphing.
  13. Repeat the render function 60 times a second.
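For step 5, the render target textures look something like this (the size, filter and type settings here are assumptions, not necessarily what the editor uses):

// render targets that the shader passes write into; a float type keeps values from being
// clipped to the 0..1 range of a normal 8-bit texture
var targetOptions = {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType
};

var velocityTexture1  = new THREE.WebGLRenderTarget(512, 512, targetOptions);
var velocityTexture2  = new THREE.WebGLRenderTarget(512, 512, targetOptions);
var divergenceTexture = new THREE.WebGLRenderTarget(512, 512, targetOptions);
var pressureTexture1  = new THREE.WebGLRenderTarget(512, 512, targetOptions);
var pressureTexture2  = new THREE.WebGLRenderTarget(512, 512, targetOptions);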

The texture I used for the initial velocity of the fluid was something like this:

[Image: initial velocity texture]

I used the red and green channels as the velocity of the particles. This was just a test for now: in reality a texture like this can't represent negative velocity, since each channel only stores a number between 0 and 1, where 1 is the brightest value (255) of the red or green channel. For the final version I'll have to make the neutral colour dark yellow (127 red, 127 green), with values above and below that representing positive and negative velocity.
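As a sketch of that idea, a neutral (zero-velocity) starting texture could be generated in code rather than painted (“initialVelocity” is just an illustrative name):

// 127 in red and green means zero velocity; values above or below encode positive/negative velocity
var texSize = 512;
var velocityData = new Uint8Array(texSize * texSize * 4);
for (var i = 0; i < texSize * texSize; i++)
{
    velocityData[i * 4 + 0] = 127; // x velocity = 0
    velocityData[i * 4 + 1] = 127; // y velocity = 0
    velocityData[i * 4 + 2] = 0;
    velocityData[i * 4 + 3] = 255;
}
var initialVelocity = new THREE.DataTexture(velocityData, texSize, texSize, THREE.RGBAFormat);
initialVelocity.needsUpdate = true;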

Once all this was set up, the next step was to make sure it all worked. It didn't, and debugging was hard because I didn't fully know how three.js did things behind the scenes or what flags it set in WebGL (three.js makes it easy to build normal 3D scenes, but that's not even close to what I'm trying to do).

The render code iterates over each frame, and looks like this:

// 1
advectVelocityShader.uniforms.sourceSample.value = velocityTexture1;
advectVelocityShader.uniforms.velocitySample.value = velocityTexture1;
advectVelocityShader.uniforms.deltaTime.value = 0.01666;
renderer.render(velocityScene1, camera, velocityTexture2, false);

// 2
addForceShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(velocityScene2, camera, velocityTexture2, false);

// 3
divergenceShader.uniforms.velocitySample.value = velocityTexture2;
divergenceShader.uniforms.deltaTime.value = 0.01666;
renderer.render(divergenceScene, camera, divergenceTexture, false);

// 4
jacobiShader.uniforms.divergenceSample.value = divergenceTexture;

// set up vars for jacobi shader loop
// set up vars for jacobi shader loop
var originalPressure0 = pressureTexture1,
    originalPressure1 = pressureTexture2,
    pressureSwap = originalPressure0;

// this iterates over the jacobi shader (stores texture data and then swaps to another one
// so that the new data can be stored in the second, and then swapped back)
for(var i = 0; i < options.fluid.iterations; i++)
{
    // update shader this iteration
    jacobiShader.uniforms.pressureSample.value = originalPressure0;
    renderer.render(pressureScene1, camera, originalPressure1, false);

    // swap textures around
    pressureSwap = originalPressure0;
    originalPressure0 = originalPressure1;
    originalPressure1 = pressureSwap;
}

// 5
pressureShader.uniforms.pressureSample.value = pressureTexture1;
pressureShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(pressureScene2, camera, velocityTexture1, false);

// 0 render visualiser
particleShader.uniforms.deltaTime.value = 0.01666;
particleShader.uniforms.pressureSample.value = pressureTexture1;
particleShader.uniforms.velocitySample.value = velocityTexture1;

The “false” flag in each render was a “clear screen” flag, which I didn’t want because I wanted to remember the previous values on that texture for the next iteration.

This took a lot of effort to go over the shader tutorials and try and convert them into three.js functionality. But then, it was finally complete. And the result was:

[Screenshot]

It took about 2 days to finally figure this out (it was something to do with the uniforms in the shader already existing and being bound to the “fog” uniform, even though I wasn't binding them to anything to do with fog; I assume it was a uniform location issue in WebGL rather than JavaScript).

But once I fixed it, the shader wasn't even updating over time; it just appeared as the first frame.

[Screenshot]

Eventually I added a debugger that displayed certain textures, and I noticed that the second iteration of the velocity was actually working, but in the next render, it was being reset.

[Screenshot: the second velocity iteration. Something happened, but this is as far as it goes.]
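The debugger itself was nothing fancy; drawing any of the render targets to the screen can be as simple as something like this (assuming the three.js of the time, where a render target can be used directly as a material's map):

// throwaway debug view: draw a chosen render target onto a full-screen quad
var debugMaterial = new THREE.MeshBasicMaterial({ map: velocityTexture2 });
var debugQuad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), debugMaterial);
var debugScene = new THREE.Scene();
debugScene.add(debugQuad);

// render to the canvas itself (no render target) to inspect the texture
renderer.render(debugScene, camera);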

I suspected that the textures being rendered to the scene could not contain negative values, and therefore pressure would not have been working correctly. I also noticed that the jacobi loop was not doing anything to the textures. So it was either something to do with the textures being reset every frame, or the scenes losing their information once another scene is rendered. This may be because the scenes are rendered to a texture and this texture may have been a single one in memory. It could have also been because the textures could not be modified in memory after being initially set. However I had no control over either of these, so I had to figure out a different way of approaching the render cycle.

The amount of frustration involved with this was incredible, as I didn’t know where to even start to find a solution. There were no fluid simulations similar to this online that used three.js, which is what I needed to use for this.  I also didn’t know many people that could help.

 

I’ll get into how I overcame this in the next post.  

Myo game update – Tutorial sequences

This week I focused on editing of my final report and adding tutorial sequences to the game.

In the previous post I mentioned that one issue people playing the game had was finding it hard to know what to do without me giving hints, despite there being a written tutorial.

I then thought that I could follow traditional methods and incorporate the tutorials into the game.  So, lacking voice actors, I decided to record the tutorials myself and add them into the game using a state manager (described below) that is updated after the player performs a certain action, or they can skip it by making a fist. 

This video shows the tutorial sequence up until completing the game.

It has been suggested that the game needs a few more variations in bosses and a bit of variety of attacks. This is true but I think the current game gives a good demonstration of how to play it. 

After a couple of tests with players, they seemed to understand the game better, although there is also a need to remove some actions from the game to prevent players accidentally performing an action in-game that has not been introduced yet. Perhaps there needs to be a game mechanic where you “learn” actions. This will need to be researched further. 

Tutorial state manager

The state manager was a very simple function that changed a boolean and reset a timer. The timer was used to restrict progression of the tutorials until they had been fully played back to the player.

// changes game state and resets timer
private void ChangeState(string state)
{
    _stateTimer = 0;
    _stateEntered = false;
    _gameState = state;
}

The “_stateEntered” variable is basically a boolean used to provide simple “enter” functionality without having to create an entire state machine for a simple prototype. If it is false during the update loop, the state performs its “enter” behaviour (such as playing a sound once and resetting any flags) and then sets “_stateEntered” to true, so those things don't happen every frame.

The update functionality is similar to the following:

// state machine to control the game
// mainly used for tutorial states
switch(_gameState)
{
    case "Setup":
        if(!_stateEntered)
        {
            PlaySound("Bad");
            _stateEntered = true;
        }
        if(thalmicMyo.armRecognized)
        {
            PlaySound("Good");
            ChangeState("Tutorial 1");
        }
        break;

    case "Tutorial 1":
        if(!_stateEntered)
        {
            PlaySound("Tutorial 1");
            _stateEntered = true;
        }
        if(jointObject.GetComponent<JointOrientation>().spreadDone && _stateTimer > 16.0f)
        {
            PlaySound("Good");
            ChangeState("Tutorial 2");
        }
        break;

    //... etc.

Basically, all the state manager is doing is playing an audio file, waiting for the length of this file (the tutorial speech), and then waiting for the player to do something that triggers the next one. Super simple.

Future work

Some future work would include a “learning” system where the player cannot activate any action in the game until he or she has triggered it in a tutorial, thus activating it. This will prevent confusion, seeing as the game has no visuals to help with understanding the environment.

The distant future will include the addition of a story, more enemies, quests and more actions for the player to do. Each enemy would have different combinations of actions to beat it, and doing so will grant new abilities.

After this, I will be incorporating a GPS system to track quests, as each quest will require the player to be tracked moving 1–5 km or taking a certain number of steps to reach the objective. This will hopefully make for an interesting exercise routine app for mobile.