Plasmation update 4

TLDR:

  • Got scenes list to display
  • Got user accounts to log in and redirect to scenes
  • Made a Facebook page
  • Made another template
  • Got jQuery to work
  • Got three.js into the scene
  • Got the canvas to render a basic scene with a cube

What up

So, I enjoy writing these journals because they let me explain things to a website, which somehow helps me figure them out.

Anyway.

More website work

The majority of the work this week was sorting out the website templates and getting users to be able to create an account, log in, and see their scenes. This was quite easy thanks to my framework (the “Corn” framework). I also added it all to Bitbucket for version control with Git.

Some issues I had were purely me being an idiot, but they were still issues and they still took up time. The first one was trying to use the DBO methods to save the scene once a user created it. I was saving about 12 fields at once and it kept throwing an exception. I eventually realized it was just because I had spelled the last field differently from the one in the SQL prepared query.

The second issue was trying to get jQuery to attach the three.js canvas by calling $(“#canvas”).appendChild, which I assumed would work, but a jQuery object has its own way of doing this: .append(). Nice work.
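For reference, the difference looks roughly like this (a small sketch, assuming the container has the id “canvas” and the three.js renderer already exists):

    // Plain DOM: appendChild is a method on the element itself.
    document.getElementById('canvas').appendChild(renderer.domElement);

    // jQuery: the wrapped object uses its own append() method instead.
    $('#canvas').append(renderer.domElement);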

I managed to get the canvas working in the end and three.js to initialize a cube, which is a good start. The next step is getting the shaders running. Looking around the net, I found a WebGL fluid simulation example by Jonas Wagner where he uses IBOs and texture maps to store quantities like pressure and velocity. I could go through these for ideas on how to implement my own type of fluid simulation, including a pixel shader that maps the “heat” of each cell onto a gradient of colours. This will let users pick whatever colours they want for these gradients.
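For the record, the cube test is nothing fancy; it's something along these lines (a minimal sketch rather than the exact editor code, with made-up sizes):

    // Minimal three.js scene: camera, renderer, one cube, and a render loop.
    var width = 640, height = 360;

    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);
    camera.position.z = 3;

    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(width, height);
    $('#canvas').append(renderer.domElement); // attach to the page as above

    var cube = new THREE.Mesh(
        new THREE.BoxGeometry(1, 1, 1),
        new THREE.MeshBasicMaterial({ color: 0x44aa88 })
    );
    scene.add(cube);

    (function animate() {
        requestAnimationFrame(animate);
        cube.rotation.y += 0.01; // spin so it's obvious the loop is running
        renderer.render(scene, camera);
    })();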

Here are some screenshots (the interface still needs styling up, but that can wait until the fluid simulation and exporting are done):

[Screenshots of the current interface]

More redesign

I’ve had some thoughts and I still can’t decide on the format of the data in the database. On one hand, I want to be able to post the JavaScript object data straight to the database as a JSON string, which would be faster and easier. On the other hand, this is insecure, makes the data harder to read once it’s saved out to the database, and may prove hard to debug in the future. It also poses the risk of the data becoming corrupt if I screw up the saving functions at any stage.

After contemplating a lot, the way I am leaning at the moment is having the JavaScript in the editor post small JSON requests to the server that do specific things like “update resolution”, or something more compact like “save scene properties”. This would be good because I can send only the variables that need updating, and only when they need to be updated. It also lets me remove the “save” function and leave only “save as”, because the scene will technically be saved automatically.
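As a rough sketch of the idea (the endpoint and field names here are made up, not the real Plasmation API):

    // Post only the scene properties that changed, as a small JSON request.
    function saveSceneProperties(sceneId, props) {
        $.ajax({
            url: '/scenes/' + sceneId + '/properties', // assumed endpoint
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify(props),
            error: function () { console.warn('Auto-save failed'); }
        });
    }

    // e.g. after the user edits the resolution:
    saveSceneProperties(42, { width: 512, height: 512 });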

The downside of this is that I would need to write many functions that get and set this data, and these will all need to be written, secured, and tested in the test phases, which need to be completed this week. A fair bit of it will be repetitive, though, so I should be able to pull the common functionality out into shared functions.

An issue that has cropped up during this redesign is that a simulation will not have much randomness in its particle generation initially. It would therefore be a good idea to let the user “paint” the areas where pressure and velocity start in the simulation; otherwise everything would look very round and boring. I’ve added another tool to the toolbar that lets the user paint onto the layer/emitter the direction the fluid pixels will have from the first frame. This will also give the user more freedom.

Facebook page

Pretty straightforward: created a page, added a picture, and linked it in the footer of the app. I might also need to make a Google+ account and something else, like Tumblr, YouTube (for tutorials), or Pinterest.

Trello

I’ve also started adding things to Trello, making sure I have tickets for everything and that they go into the correct column when done and tested.

 What’s next?

The first thing I want to get working is the exporting ability. This will require somehow grabbing the pixels out of the canvas element and saving them as a 32-bit PNG.
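One possible route (a sketch, not something I've tested in the app yet) is the canvas’s own toDataURL, which only works reliably for a WebGL canvas if the context keeps its drawing buffer around:

    // The renderer must be created with preserveDrawingBuffer, otherwise
    // toDataURL can come back blank after the frame has been presented.
    var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });

    function exportFramePNG() {
        // PNG from a canvas is 32-bit RGBA, so the alpha channel is kept.
        var dataUrl = renderer.domElement.toDataURL('image/png');

        // Trigger a download through a temporary link.
        var link = document.createElement('a');
        link.href = dataUrl;
        link.download = 'frame.png';
        link.click();
    }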

In the next 2 weeks or so, I will be getting the fluid system into the application. This will require researching some fluid code done with WebGL and implementing it with three.js. Looking at some demos online, this should be fairly straightforward.

Myo game update 4

Last week I completed the first version of the Myo game prototype, now titled “Echo of Heroes”.

The game ended up a lot simpler than I originally imagined, because I wanted it to be as easy as possible for players to get the hang of in a testing environment.

I even made an A4 poster for it:

[A4 poster image: “Sword”]

The game was made for a research report on whether a fun game could be made using the Myo and no visual feedback. The appeal is being able to play a game without engaging your visual senses, which makes it a better fit for exercise routines on mobile. The Myo uses Bluetooth and can be taken outside, so the game is eventually meant to grow into something larger like Zombies, Run!, incorporating missions and jogging.

The way I put it together was with 2 main pieces of technology plus some clever audio design.

[Screenshot of the collider setup in Unity]

The screenshot shows how the game’s “colliders” are put together. These are basically what Unity3D calls “things that, when other things overlap with them, make things happen”. They are built into Unity and easy to use. When the player swings the arm wearing the Myo armband and the pretend sword’s collider overlaps with the capsule in front of them, it registers a hit. The same goes for the collider above the sword (in the screenshot), which is used for “blocking”: if the user holds his or her hand high enough in the air, it collides with that flat box and can trigger the player blocking.
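Under the hood it's just an overlap test that Unity performs for you; the idea boils down to something like this little sketch (treating both shapes as spheres for simplicity, with made-up positions):

    // Register a hit when the sword tip's volume overlaps the boss's volume.
    function spheresOverlap(a, b) {
        var dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        var dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return dist < a.radius + b.radius;
    }

    var swordTip = { x: 0.1, y: 1.2, z: 0.9, radius: 0.1 };
    var bossBody = { x: 0.0, y: 1.0, z: 1.0, radius: 0.5 };

    if (spheresOverlap(swordTip, bossBody)) {
        console.log('Hit registered');
    }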

The way I conveyed the game state was a big loud voice echoing through your headphones saying “haha, this is the end of the line, you tiny little speck” to signify that there’s a big bad monster in front of you. The boss’s attack is signalled by a climbing “reverse sword clang” sound effect stretched over about 3 seconds; at the end of it, if the player is blocking, it makes a “clang” sound, otherwise a “bash” sound plus a buzzer to signal that the player has been hit and that this is bad. The player attacks the boss by waiting for him to swing, and as soon as the boss hits the player’s block, the player has 1 second to react by swinging his or her sword at the boss. If it hits, you hear an “ouch” sound from the boss; if not, you hear a bad buzzer and a clang noise, as if you are hitting the boss’s shield.

The Myo’s SDK was easy to import into Unity because it comes bundled with a package that lets you set up the environment quickly.

After studying binaural sound a little bit, I ran into a Unity plugin (free for students) called 3Dception by Two Big Ears.  This also came with an easy Unity package.

One of the clever things I implemented was the audio handling. Because Unity allows one sound emitter per transform, I had to create a bunch of these transforms and attach a sound to each. I added them all under a parent called “Sounds” and gave the game’s logic-handling object a reference to it. I then added a sound manager function to the logic object so I only need to call PlaySound(“Hurt”), for example, to play a sound. The function basically searches the children for the name passed in and plays the sound on that object. This works well and makes future code more reusable.
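The look-up-by-name pattern is easiest to show as a sketch; this is plain JavaScript rather than the actual Unity script (the names and file paths are placeholders), but the shape is the same:

    // A named registry of sounds; the Unity version walks the children of the
    // "Sounds" object instead of using a plain map like this.
    var sounds = {
        Hurt:   new Audio('sounds/hurt.wav'),
        Clang:  new Audio('sounds/clang.wav'),
        Buzzer: new Audio('sounds/buzzer.wav')
    };

    function playSound(name) {
        var sound = sounds[name];
        if (!sound) {
            console.warn('No sound registered under the name ' + name);
            return;
        }
        sound.currentTime = 0; // restart it if it is already playing
        sound.play();
    }

    playSound('Hurt');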

I also used the 3Dception audio for the sword position by having it constantly play a fire loop. The swing sound was also played when the player swings the sword (how I did this was covered in a previous article). The binaural nature of the sound made it feel like the sword was where it actually was in the game. To test this with the users, I added a question to the survey that asked about it. About half of the people who did the survey said they could tell, but it is not a very useful result, as I did not explain that it was an advanced feature and they could have interpreted the question as, say, a difference from mono sound. To properly test this, the users would have needed to know a bit about the problems with sound positioning in real life and how the ear is clever enough to work out where sounds are coming from. They would also need a “good ear”, as in being able to tell the difference between the normal stereo positioning used in most games and the binaural feature.

However, I was involved in an online discussion about the binaural technique and was made aware of even more technology in this space. It would be a good idea to study this further and compare different solutions.


One problem I ran into with the game was that the player first had to tell the Myo which direction the boss was in. This is because the Myo sends its absolute orientation to the game, and it could be at any angle. This was half incorporated into the Unity package that came with the Myo, but it needed to be made usable in the game and then disabled so a user doesn’t accidentally activate it again. My approach was simply to ask players to spread their fingers at the beginning of the game while I held their hand to show them where to point. This worked well, except that at some points the Myo did not register the finger-spread gesture, and I couldn’t do it for them without taking the Myo off their arm and redoing the setup on my own arm.

Another issue was that when users tested the game, they had to sync the Myo with their body. Traditionally this was a painfully long process of performing gestures, but it has been improved to a single “setup gesture”: waving your hand away and then rotating your arm in the same direction. This was still difficult to explain to people, even with a picture showing them what to do.

Overall, the results showed that despite the technical issues the game was fun, but it needed some sort of tutorial sequence to get the player used to it.

Plasmation update 3

This week I focused on getting the design documentation and technical documentation complete for the project.

Database design

One thing I ran into is that because the data for the online tool will be saved in a database, I needed to decide what format to save it in. There were two choices: store values as their own fields, for example “camera_x”, “tint_red”, “tint_blue”, etc., or make the data relational and in second normal form, which is what the curves that change over time would require. Something like the colour of the effect transitioning over time needs about 4 numbers per node in the curve editor, which would mean an indeterminate amount of data for each scene. Therefore, after some research, I’ve decided to just save plain numbers per keyframe without curves. The database will have a “keyframes” table, and each keyframe will belong to a scene. A keyframe will contain the frame number plus colour and other data, and in the app the values will simply transition with a normal “lerp” (linear interpolation). Future versions could have curve types like “easing”, but I don’t have time to make it more complex.
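In the app, the between-keyframe blending would be a plain lerp along these lines (a sketch; the keyframe fields are just examples based on the description above):

    // Linear interpolation between the two keyframes either side of a frame.
    function lerp(a, b, t) {
        return a + (b - a) * t;
    }

    // keyframes: sorted by frame, e.g. [{ frame: 0, tintRed: 1.0 }, { frame: 30, tintRed: 0.2 }]
    function sampleTintRed(keyframes, frame) {
        for (var i = 0; i < keyframes.length - 1; i++) {
            var a = keyframes[i], b = keyframes[i + 1];
            if (frame >= a.frame && frame <= b.frame) {
                var t = (frame - a.frame) / (b.frame - a.frame);
                return lerp(a.tintRed, b.tintRed, t);
            }
        }
        // Before the first or after the last keyframe: clamp to the nearest one.
        return frame < keyframes[0].frame
            ? keyframes[0].tintRed
            : keyframes[keyframes.length - 1].tintRed;
    }

    sampleTintRed([{ frame: 0, tintRed: 1.0 }, { frame: 30, tintRed: 0.2 }], 15); // 0.6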

New interface

Because of this new keyframe method, I had to re-make the design documentation and change the example screenshots.

[Screenshot: “Screen export” mock-up of the updated interface]

As you can see, the interface no longer has tabs for “workspace” and “export” views; instead you simply see what the finished product will look like. Each layer will be rendered to a graphics buffer and cached for easy scrubbing through the timeline. The third button on the bottom-left of the timeline is an “add keyframe” button, which adds a keyframe to all layers.

The reason I didn’t want to combine the layers into a single buffer is that the fluid simulation would otherwise need to be updated for all layers rather than just one. Separate buffers also allow layers to be transformed individually without having to re-render the images underneath.

Later this week, I will be putting the fluid simulation into the tool and setting up the JavaScript framework for it to render to buffers.

Myo game update 3 – Echo of Heroes

This week I focused on setting up the Myo and getting it to run in Unity.

The package that comes with the Myo development kit makes this easy, as it includes an example scene element (a ball on a stick – which is literally just that) and a bunch of code that lets the Myo rotate the ball on the stick. It also lets you change its texture and trigger vibration with some gestures.

My goal this week was to get the scene working with sound and get an Android build working on my phone.

Android compatibility

The first test was simply taking the example scene and compiling it for Android. After doing this and launching it on my phone, the first issue emerged: of course, my phone did not detect the Myo, even with Bluetooth switched on. I tried another method of using a USB extension cable (micro USB into the charger port and a normal USB socket on the other end) and connecting the Bluetooth dongle to that. I did this because some research revealed that the Myo needs its particular dongle hardware to function at all.

In desperation, I contacted my trusty Myo contact via email, but got a vacation notice as a reply. I then went on the Myo developer forums and asked how I would go about getting it to work on my Android phone. A Myo developer replied saying that the Myo is supported on Android, but not through Unity; however, it would be supported in the future.

This was unfortunate because I had not allowed time in my schedule to get the game working on Android using Java or C++, so I will simply make a prototype on Windows using Unity and present it with the intent of going mobile once the Myo is supported by Unity on Android. Not a complete loss, but it’s still a bit of a damper on my hopes and dreams of a fun audio game. It also means I have to redo some of my test plans, as they relied on a questionnaire that was specifically about a mobile game.

Implementing audio

The next goal was making the game play audio. Unity has an inbuilt audio system, so it was as simple as importing a WAV file to play; I used a fireball dart sound for the sword swing. To trigger it, I wanted to somehow detect that I am swinging the sword and then play the swing sound. I first tried taking the difference between the quaternions from the previous frame and the current frame. This turned out to be more complicated, as the Myo does not seem to send data to Unity at a constant rate, so the resulting difference was either a seemingly random number or 0. This, I assume, is because the Myo does not send data at the same rate as the game update ticks, so the comparison sometimes compares two samples from within a single Myo update and the difference comes out as 0.

A way to fix this would be to have a function that runs “onMyoUpdate()” and do the calculations there. However, this may make the game go out of sync and make calculations against the game world harder. Also, when working with audio it is a lot easier to hear tiny differences in volume or pitch than it is to see small visual glitches, so having some updates skipped may make the audio sound “choppy”.

In the end, I simply coded around it by taking the difference between the position of the sword’s tip on the last frame and the current frame (the sword doesn’t actually move, only changes angle, so the tip’s movement is a good proxy for how fast it is swinging) and adding that onto a “power” variable. The power also decreases slightly each frame but never goes below 0. This means that even if the difference is 0 on a given frame, the power value can still be above 0.

Using this power variable and a cool-down of about half a second, I got the sound to play when the power rose above a certain level, and also scaled the volume of the sound by the amount of power. This effectively makes the long swing sound fade out quickly if the user stops swinging suddenly. The cool-down was added so the sound doesn’t play continuously while the user swings. A better option than a cool-down would be tracking which direction the sword is travelling each frame and resetting a flag when it changes, telling the script that the sound can be played again. For now, though, we can go with the cool-down option and see how the testers get on with the game.
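Put together, the logic is roughly this (a plain JavaScript sketch of the idea rather than the actual Unity script, with made-up tuning values):

    // Accumulate how far the sword tip moved each frame into a decaying
    // "power" value; when it passes a threshold (and the cool-down allows it),
    // play the swing sound at a volume scaled by the power.
    var SWING_THRESHOLD = 0.5;
    var DECAY = 0.9;
    var COOLDOWN_TIME = 0.5; // seconds

    var power = 0;
    var cooldown = 0;
    var lastTip = null;

    function updateSwing(tip, dt) {
        if (lastTip !== null) {
            var dx = tip.x - lastTip.x,
                dy = tip.y - lastTip.y,
                dz = tip.z - lastTip.z;
            power += Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
        lastTip = tip;

        power = Math.max(0, power * DECAY);     // fade but never drop below 0
        cooldown = Math.max(0, cooldown - dt);

        if (power > SWING_THRESHOLD && cooldown === 0) {
            playSwingSound(Math.min(1, power)); // volume scaled by power
            cooldown = COOLDOWN_TIME;
        }
    }

    function playSwingSound(volume) {
        console.log('swing at volume', volume.toFixed(2)); // stand-in for the real sound call
    }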

Below is a video of the result, also showing the mentioned 0s in the console coming from the Myo.

What’s next?

The next thing I will implement is a sound for when the sword collides with the center of the scene, which represents an enemy.

After this, I will implement a scoring system where the enemy has 5 hit points and blocks some of the time. The enemy’s block will be signified by a “shield being hit” sound with echo. The enemy will also make a sound when he is about to swing, and when this happens you are expected to block. To block, the player must hold his or her arm vertically and make a fist gesture. When this is done correctly, you will hear the same sort of sound as the enemy’s (though it needs to be different enough that the player doesn’t think the enemy blocked instead). This would be the core of the game.

Ideas for future

As the game cannot be ported to mobile using Unity, we will not be seeing a mobile app in this version. However, once the Myo is supported by Unity on mobile, the game could expand into a jogging adventure app where an adventurer goes on quests to slay enemies and collect loot. The catch is that they have to jog at least 2 km (tracked by GPS or a treadmill) and fight the enemy before returning home.

Research update

As described in a previous post, I’ll need to research other motion-capture and gesture-capture controllers like the Razer Hydra, Perception Neuron and the Nintendo Wii controller.

I will then look into other mobile games and how they handle audio; candidates are “Papa Sangre” and “Zombies, Run!”. One notable feature of Papa Sangre is that it muffles audio from things that are behind the player. This is not a built-in feature in Unity, although it can be implemented. One issue is that the Myo game I am proposing has to assume the player’s head is facing towards the enemy, as there is no head tracking. This will need to be experimented with.

Test cases

  • Does audio panning and muffling help, or does it cause a weird experience?
  • Cool-down on sword. Is it annoying if you swing at a weird time? Should it play based on velocity or based on change of direction?
  • How fun is the game in general?