Particle Tool – Week 7

This week I'm focusing on research and on creating a schedule for the creation of the tool. I figure that without research, I might not be able to produce an accurate, workable schedule. We'll start with the documentation creation process and how long it should take:

  • Start date: 12th February 2014
  • Due date: 28th May 2014

That is 15 weeks in total for documentation, with a presentation on the due date.

  • Week 1 to 5 – Discuss and research initial ideas
  • Week 6 – Finalise preliminary idea
  • Week 7 to 12 – Research project and document drafting
  • Week 13 to 14 – Final drafts
  • Week 15 – Final documentation and publishing
  • Week 16 – Submission

Since I already have the initial idea, I'm going to start on the actual project and document drafting. Below is a plan of what I'll need to complete by the end of the planning stage:

  • (W-7) 26th March – Create a plan, then research more effects and specifics about realistic particle simulation.
  • (W-8) 2nd April – Make a start on the documentation, creating the templates for each document and making lists of the parts I'll be doing.
  • (W-9) 9th April – More research.
  • (W-10) 16th April – More research.
  • (W-11) 23rd April – Will be travelling at this time, but can do little things when needed.
  • (W-12) 30th April – End of document drafting – most of the documents should be complete by now.
  • (W-13) 7th May – Final drafts start.
  • (W-14) 14th May – Back from travelling – knuckle down on the final drafts. Start the presentation.
  • (W-15) 21st May – Final documentation and publishing. Finish the presentation, do a mock recording, etc.
  • (W-16) 28th May – Due date! Everything should be handed in on this day. Everything should really be done before this, but I'm leaving a week for any catch-up.

On each of these days I'll be writing a blog entry like this one, as it's easy to track things online. On the date of final documentation and publishing, I'll copy all the blog entries into a document for handing in. The documents I'll need to produce are:

  • Research Journal
  • Learning Plan
  • Peer Reviews
  • Design Documentation
  • Technical Design Documentation
  • System Analysis Plan
  • Project Management Documentation
  • Financial Outline

These will need to be submitted both as PDFs and as a bound hard copy. I'll need to make sure I have enough printer ink and buy a binder.

Research Journal

For this task, I will use WordPress to log what I've been researching each week, along with peer reviews and any information I can find about the system. At the end of each week I will copy the entries into a Word document and spell-check them; this will be printed and bound. I will be watching a lot of videos of real special effects, like plane crashes and gun shots, to see how fire and smoke act in real life, and then study images of fire to reproduce it.

Learning Plans

Some ideas of learning objectives could be:

  • Work out whether saving files to the server or directly to the user’s hard drive is better. Should I use a combination of both?
  • Learn how to optimise particle systems on the video card using researched methods
  • Marching cubes / metaballs for water effects – maybe use normal maps and specular maps to add better lighting
  • Volumetrics for cloud effects
  • Screen space ambient occlusion for smoke
  • Particle light emission into smoke
  • Volumetric lighting inside smoke clouds caused by particles
  • Real-life particle movement with wind, and the best way to do this
  • Creating normal maps
  • Learn what industry people want
  • Undo and redo feature using the “Memento pattern”.

I might need to elaborate on these more and explore / find new ideas as the documentation develops. Each of these will need to be saved as a separate PDF and printed, along with the other material, to be bound.
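Since the tool isn't designed yet, here's a minimal sketch of how the Memento pattern could drive the undo / redo feature mentioned above. All the names (Editor, EditorState, addLayer) are made up for illustration, not from any actual design:

```javascript
// Memento pattern sketch: snapshots of editor state are pushed onto an
// undo stack before each edit, and undo/redo swap snapshots between stacks.
class EditorState {
  constructor(layers) {
    // Store an immutable deep copy of the editor's layers.
    this.layers = JSON.parse(JSON.stringify(layers));
  }
}

class Editor {
  constructor() {
    this.layers = [];     // current working state
    this.undoStack = [];  // mementos for undo
    this.redoStack = [];  // mementos for redo
  }

  // Take a snapshot *before* a destructive edit.
  saveMemento() {
    this.undoStack.push(new EditorState(this.layers));
    this.redoStack.length = 0; // a new edit invalidates redo history
  }

  addLayer(name) {
    this.saveMemento();
    this.layers.push({ name });
  }

  undo() {
    if (this.undoStack.length === 0) return;
    this.redoStack.push(new EditorState(this.layers));
    this.layers = this.undoStack.pop().layers;
  }

  redo() {
    if (this.redoStack.length === 0) return;
    this.undoStack.push(new EditorState(this.layers));
    this.layers = this.redoStack.pop().layers;
  }
}
```

The nice thing about implementing this from the start is that every tool only has to call saveMemento() before changing anything.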

Peer Reviews

I would like to post on as many forums as I can, and also ask at least 3 industry special effects designers what they like and dislike about Fume FX, collecting as many opinions as I can. These can be people who post special effects demos on YouTube and on forums. Some things I could ask are:

  • What are the best features of Fume FX? What are the things that you cannot live without?
  • Do you find the interface overwhelming?
  • Are all options used or are there settings that are mostly used and some that should be default anyway?
  • Would you pay for software like this particle tool idea?
  • Do you think having it run in a browser would be beneficial? Why / why not?
  • What are the hardest special effects to create?
  • What features do you wish were there / were easier?
  • Would a more responsive GUI make the workflow a lot easier?
  • What sort of interface would work better? Is one like Photoshop a good choice? Would an interface that goes away from the trends of Photoshop be a burden?

The general idea is really to prove my niche. I need to know these things so I can design the interface around them. The answers will all be recorded and presented as part of the peer review documentation, and I will end it with a conclusion.

Design Documentation

This will be where the tool is explained in as much detail as possible. Some general points would be:

  • Tool will run in primarily Google Chrome (and Firefox)
  • Complete functionality. It will:
    • Be able to save and load on the server
    • Be able to save and load files locally
    • be able to create, delete and edit “layers” (smoke on one and fire on another for example)
    • Export a spritesheet that can be used in Unity’s particle tool (must do more research about compatibility with normal maps and different effects, alpha sorting issues, and using multiply and additive together)
    • Can export different channels, like a normal map, additive and multiply
    • use the GPU to simulate the system
    • can create frames
    • Frames can be scrubbed through
    • A frame can be chosen with a small box where the section will be exported per frame
    • The “camera” can be moved around to center the frame
    • The “frame” will be a square, but a “fade out” can be applied in a spherical manner inside the frame; as this is a pre-particle exporter, the effects need to technically be sections of a larger particle system.
  • Tools in the toolbar will be
    • Brush to draw particles to the 3d world
    • A pushing tool to drag particles around like the liquify tool in Photoshop
    • A gravity tool for pulling particles towards the cursor
  • The interface will be designed around the default styles that three.js comes with.
  • Different interface view modes:
    • Game engine preview – will show the rendered version with all effects added, using a 3-point light system that can be customised. This lighting will not be baked into the render, as lighting in games can change. For example, a water particle cannot have a bright light reflection rendered at the top-left, because the game’s light might be under it.
    • Real-time preview (slower to render, but I will try to make it at most 1 frame per second; each frame can be cached into a buffer on the canvas and displayed until the user moves the camera). This will be the main view that the tool uses.
    • Normal mode will show 10% of the particles
    • Motion vector preview – a normal map looking (x y z velocity represented as red green blue channels) as to visualize movement in a single frame.
    • Depth buffer view (shows the depth texture; mainly for my own debugging, but it may be useful for others)
  • Particles could light up when hovering over them
  • The background can be customized to show the transparency in the screen
  • Screen effects can be applied, like depth of field, screen space ambient occlusion on “smoke” particles (using the depth buffer), and glow.
  • Particles can be rendered using a vertex-coloured additive effect on each particle to make more realistic fire (I will go into detail later, after more research), smoke as volumetrics, particle lighting, and ambient occlusion (more research needed)
  • Can add a game light to simulate how it would look in a game
  • Server will need a database to store user data
  • To be usable, the system needs to have an undo / redo feature. This should be done using the “Memento pattern” and should be implemented from the start.
  • A status bar showing the particle count, some extra info and a status of the program (like loading or saving or exporting etc.)
  • Toolbars, one on the left, the right, the bottom and top. I will try and make them simple without many options as one of the aims of this project is to have a program that’s less daunting and easy to learn.
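To make the spherical “fade out” idea above concrete, here is a rough sketch of the per-pixel alpha falloff it implies: alpha is 1 at the frame centre and falls to 0 at the edge of the inscribed circle, so an exported section blends into a larger particle system. frameSize and fadeStart are illustrative parameters, not final design:

```javascript
// Radial fade inside a square export frame.
// x, y: pixel position; frameSize: side length of the square frame;
// fadeStart: normalised distance (0..1) at which the fade begins.
function fadeAlpha(x, y, frameSize, fadeStart = 0.5) {
  const half = frameSize / 2;
  // Distance from the frame centre, normalised so 1.0 = edge of the
  // inscribed circle.
  const dx = (x - half) / half;
  const dy = (y - half) / half;
  const dist = Math.sqrt(dx * dx + dy * dy);
  if (dist <= fadeStart) return 1; // fully opaque core
  if (dist >= 1) return 0;         // outside the circle, fully faded
  // Linear falloff between fadeStart and the edge.
  return 1 - (dist - fadeStart) / (1 - fadeStart);
}
```

In the real tool this would run in a shader, but the maths is the same per fragment.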

For the name of the program, I'm leaning towards something like “Plasmation”. The word isn't really a word, apart from appearing in a few places online that say it means “forming or moulding”, which fits, since this tool is like moulding a special effect particle. It also has “plasma” in it, which, to me, sounds special-effect-y.

Technical Design Documentation

This will basically consist of a lot of UML diagrams. Once I've sorted out the design documentation and done enough research on the formulas and ways to produce the effects, I'll be able to do this properly. I think this section will be important to show how the web server and client-side parts will work together, as the barrier here is that, because the program runs on a web server, data needs to be transferred up and down to it and loaded in by users from their hard drives. A list of these would be:

  • Component listing
  • Pseudo code for all systems and components
  • Program design diagrams in UML
  • Data flow diagrams in UML

The thing to start off with will be the way the server and client could interact, but as I haven't done the design documentation yet, this is just an example (made with Lucidchart, a great example of an online tool). As you can see, this is a basic flow of data when logging in / creating an account / using the workspace.

System Analysis Plan

In this I will basically talk about why I'll be using an agile development methodology. I will most likely use Scrum because it's what I'm used to. One issue with this is the morning meetings, which I can replace with the journals and a discussion of my goals and blockers.

A requirements analysis will also need to be documented, which is basically a document that describes the requirements of the program. This will be closely tied to the technical documentation. I will include it as separate documentation with an introduction briefly explaining the purpose of the application, the scope, objectives and success criteria of the project, and the usual glossary of terms.

The second part of the documentation will include the requirements and the analysis model of the application, and it will finish with references.

This section will also include a test plan and test cases for requirements validation.

Project Management Documentation

This will consist of a Work Breakdown Structure (WBS), complete with resource allocations, and a Gantt chart with the critical path. I will be using Gantt chart software for this and will discuss in my journals which software is best for it.

This part of the documentation will also include a risk assessment and contingency plan.

Financial Outline

The application could either be free, or subscription-based with advanced features locked away. For this project, however, the program will be very basic, and adding users and subscriptions just adds another layer of difficulty. For the actual outline, I will be creating a mock one aimed at industry professionals, in the form of a pitch to get funding. The actual money involved will cover things like web servers, domain names, my time, food, a workspace at home, software, internet and power costs.

Getting up to scratch and a new uni project

Hey everyone. I have not posted in ages, mainly because I keep forgetting to. So I'll be doing at least one post a week for a new project I'm working on, plus any number of others that I have in the works. The things I've been working on are, briefly:

Game Faming

Game Faming is a gamer social network website that allows users to post videos and blogs and vote on them by attacking or defending them, which allows the good videos to be kept and the bad ones to be deleted. The site is like a Facebook-type website, but for gamers only, or game-related content. The core of the site will be that users level up the same way you do in an RPG-type game and can access extra content, and even enter competitions to win prizes based on their level. The website is being developed by myself and a team of others.

Filtrate updates

Filtrate is a 2D platformer game made with a friend in my first year at AIE. The updates include:

  • Extra HUD with dimension indicator and death counter (it’s great coming into a room where someone’s been playing a level for ages and seeing the death counter.)
  • More player animations and smoother walk cycle. Animated idle stance.
  • Added more music depending on the level
  • Added more level variations that match the music
  • Added backgrounds
  • Tweaked graphics. Particularly the spikes image as people thought it looked like grass and would skip and hop through a meadow of death.

I also did a few code tweaks and added a mode where the orbs kill you instead. This made the game really hard, and I had to tweak the levels slightly.

Little Annoying Rocket game

This game is a phone / tablet game where you control the rocket by tapping and holding, then dragging the touch to rotate. While you're touching, the rocket jets fire and you move forwards. It is a difficult game because the camera moves with the rocket and only moves in the direction it is facing, so it's hard to control, but that's where the challenge is. The goal of the game is to get as much score as possible by going as high as possible.

FPS Punching game

This is currently in very early construction, but basically the idea is that it's an arena-type game like Quake III Arena, except you only get the ability to punch people and kill them in one hit. The idea came about at a LAN one day when we decided to create our own game mode in Crysis Wars where we would not use weapons. The game gave you the ability to go invisible, be super fast, jump really high, or have armour. Using these abilities drained energy and you'd need to turn them off to recharge it. The game mode ended up being really fun because of 3 main aspects. Sound effects: satisfying hearing the bash and crunch when you hit someone. Rag-doll: seeing your friends' bodies fly across the arena after you punch them and then flop to the ground was funny. One-hit punches: with the strength mode, punching someone was an instant kill, and this was extremely satisfying. This video shows this in action.

The way this worked well was also because it was a first person shooter game with guns, but if you take away the guns and make your arms super powerful, it's a whole new game, and just as fun. This game will be fast paced and include power-ups that allow the player to move faster or jump higher, go invisible, and occasionally a ball they can throw once to kill someone. We will be developing it in Unity3D because it's easy to use and I'm familiar with it. I also have a student licence for the pro version.

Helping a friend with some art

Even though I'm a programmer, I still enjoy doing (2D) artwork from time to time. A friend has been developing a game for fun, and I asked if I could do some art for it. The concept I was aiming for was a spaceship that looks like a sports car (sort of like the game being about driving sports cars in space and not being allowed to scratch them).

Particle tool for CIT

There will be a separate post about this later. But basically, because I really enjoy particle effects, for my “Final Project” for CIT I'll be creating an in-browser tool that lets you create particle effect animations to be used as animated sprites in your game's particle engine. The need is that there aren't many cheap / free usable particle creation tools around, apart from creating the effects in Adobe Flash or Photoshop. There are tools that let you create super realistic particle effects, like Fume FX for Maya and 3DS Max, but these aren't cheap and they require a lot of setting up. They also don't allow you to view the effects in real time, as they need to be rendered out, which takes a considerable amount of time. So the main aim for this tool is a particle effect generator that utilises WebGL's GPU access to display the particle systems, while the browser takes care of the toolset.

Particle Tool Proposition



As I have a passion for particle simulation, I would like to create a 3D particle creation tool using WebGL in Google Chrome. The tool will allow the user to create a particle system that can render up to 10 million particles and export a single frame or an animated sequence for use in games as sprites in their game's particle system. This will cover things like single smoke puffs, magic spark effects, fire, explosions, fighting effects, plasma bullets, etc. As a way of getting more realistic particle effects without having the game simulate millions of particles, this can act as a mid-point between millions of particles being simulated in the game engine and having to create particle animations in Photoshop or other tools.

The tool will focus on real-time interface speed and will be built with WebGL. I will research particle simulation techniques; one in particular is a trick that utilises the graphics card and uses image maps for the positions, velocities and other elements, using the RGB values of each pixel in the map as the X, Y, Z values of those elements. I will then use render targets to do post-processing like glow, colour manipulation, sharpening and lens flares.
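To make the texture trick concrete, here is a CPU-side sketch of what one shader pass would do: particle state lives in flat RGB arrays standing in for float textures (float textures in WebGL would need the OES_texture_float extension), and the “pass” integrates position from velocity per texel. The names and sizes here are illustrative only:

```javascript
// Particle state as "textures": one RGB texel per particle.
const COUNT = 4;                         // tiny demo; the real target is millions
const pos = new Float32Array(COUNT * 3); // R = x, G = y, B = z
const vel = new Float32Array(COUNT * 3); // same layout for velocity

// Seed the velocity texture: every particle drifts up on Y.
for (let i = 0; i < COUNT; i++) {
  vel[i * 3 + 1] = 1.0;
}

// One "fragment shader pass": read the velocity texel, write the
// position texel (Euler integration). On the GPU this runs per pixel.
function simulate(dt) {
  for (let i = 0; i < COUNT * 3; i++) {
    pos[i] += vel[i] * dt;
  }
}

simulate(0.5);
```

On the GPU the same idea is done by ping-ponging between two render targets, since a shader can't read and write the same texture in one pass.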

The user will be able to render the particle system (an explosion, magic effect or smoke cloud, say) and see each frame individually, as it will all be stored on the video card. Once the user is happy with the animation, they can export it and download it as a sequence of PNGs (some may have multiple channels, like alpha, multiply and additive). The rendering will be done client-side and appear in the browser, ready to be exported for use in their game engine.

As for game engines and target platforms, the tool will mainly focus on exporting for the Unity game engine, but because it exports PNGs, these could be used in any engine on any platform. And because the tool runs in the browser, it won't be limited to PC, but will work on any platform that runs Google Chrome and WebGL.

The main goal of this project is to create a free, usable tool for artists that will allow them to create realistic special effect sprites like explosions, fireworks, magic effects, smoke and whatever else they can manage. The niche is that there aren't any free, easy-to-use tools that allow an artist to create a realistic explosion or smoke effect, other than tools like 3DS Max, and even those require a lot of setting up and don't allow you to view the effects in real time. They are also not built for this sort of thing; rather, they are expensive, complicated plug-ins for expensive, complicated programs.

An additional thing I want to do is package the particle system as a plug-in, so that people could use it in games for less intense particle systems. This is because I want to focus on getting as many particles as possible rendered in real time using GPU power.

Some ideas for extra work would be packing the images as a zip file, and combining them into sprite-sheets.
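The sprite-sheet combining idea mostly boils down to grid maths. Here's a rough sketch of the placement step (frameRect is a made-up helper; a real exporter would blit each frame's PNG pixels into its cell):

```javascript
// Compute where frame `index` lands on a sprite sheet laid out as a grid
// of square cells, `columns` frames per row, left-to-right, top-to-bottom.
function frameRect(index, frameSize, columns) {
  const col = index % columns;
  const row = Math.floor(index / columns);
  return {
    x: col * frameSize, // left edge of this frame's cell
    y: row * frameSize, // top edge of this frame's cell
    w: frameSize,
    h: frameSize,
  };
}
```

For example, with 64-pixel frames and 4 columns, frame 5 lands one row down and one column across. Keeping the sheet dimensions powers of two would also play nicely with GPU texture requirements.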

Some challenges:

  • “Domain” security with websites and web browsers accessing local data. Textures and save files will need to be downloaded/uploaded and the state of the application will need to be stored on the server the site is run from.

  • Lighting with smoke particles will be a challenge to get right. I will need to research how this can be done to many particles without losing much GPU resources.

  • Particle flow will need to be chaotic but still realistic so that it looks good; this may need to be simulated, perhaps with particles representing pressure

  • Real-time storage of long animations will be difficult, as they are stored client-side; these particles will use a LOT of memory, and some systems will run out of either GPU memory or system memory, so this will need to be closely monitored.
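On the particle-flow challenge, one simple starting idea (a placeholder of my own, not the researched method) is to ease each particle's velocity toward a wind vector every step and add a small random jitter for turbulence:

```javascript
// One integration step for a single particle under wind.
// p: { pos: {x,y,z}, vel: {x,y,z} }, wind: {x,y,z} target velocity,
// dt: timestep, drag: how quickly velocity eases toward the wind,
// jitter: turbulence strength (0 = deterministic).
function stepParticle(p, wind, dt, drag = 0.5, jitter = 0.0) {
  for (const axis of ["x", "y", "z"]) {
    // Ease velocity toward the wind speed (a crude drag model).
    p.vel[axis] += (wind[axis] - p.vel[axis]) * drag * dt;
    // Random turbulence on top of the wind.
    p.vel[axis] += (Math.random() * 2 - 1) * jitter;
    // Euler-integrate position.
    p.pos[axis] += p.vel[axis] * dt;
  }
  return p;
}
```

Pure jitter won't look like real smoke on its own, which is why the pressure-based idea above would need proper research, but this kind of update is cheap enough to run per particle on the GPU.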

Some research that will need to be done is on usability. I will ask people from the industry and artists about the tools they use, the issues they have with them, and things they like and are familiar with. This will help with the layout and to keep usability solid.

How to get a joystick working in XNA 4.0

The problem most people have with gamepads in XNA is that you can't use any joystick-type input apart from the Xbox 360 controller. This tutorial shows you the simplest way to get joystick input in your XNA game.

The reason I'm writing this is that it took me so long to get it working myself, and I thought there should be a more useful tutorial on the net about it.

For this tutorial, I’m going to assume you already have an XNA game with keyboard input and just want to be able to use joysticks.

Step 1. Don’t use Soopah.

Although it was a really good little plugin, it’s no longer supported (last updated in 2007) and it only works with XNA 1.0.

Step 2. Grab SlimDX

SlimDX is a DirectX interface for use with C# projects.

You will need to download the correct version for your project, which will most likely be the x86, .NET 4.0 version. It can be found here:

Once downloaded, install it.


Step 3. Reference it in your project


Right-click on your project in the Solution Explorer and select “Add Reference…”. Click “Browse” and browse to C:\Windows\Microsoft.NET\assembly\GAC_32\SlimDX\v4.0_4.0.13.43__############# and select SlimDX.dll.

Alternatively, you could copy that DLL file into your project folder (it depends where you want it) and reference it from there.

Step 4. Programming

Put this at the top of the files that you need to use methods from the DLL:

using SlimDX.DirectInput;

In your class, you want to declare some variables that DirectInput is going to use:

private DirectInput directInput;
private Joystick joystick;

For the setting up phase (initialize) parts, you want to add:

directInput = new DirectInput();

// Search for an attached game controller
foreach (DeviceInstance device in directInput.GetDevices(DeviceClass.GameController, DeviceEnumerationFlags.AttachedOnly))
{
    try
    {
        // Create the device
        joystick = new Joystick(directInput, device.InstanceGuid);
        break;
    }
    catch (DirectInputException)
    {
        // Try the next device
    }
}

if (joystick == null)
{
    // do error for no joystick here
}
else
{
    // Get the joystick's device objects and set each axis range to -100..100
    foreach (DeviceObjectInstance deviceObject in joystick.GetObjects())
    {
        if ((deviceObject.ObjectType & ObjectDeviceType.Axis) != 0)
            joystick.GetObjectPropertiesById((int)deviceObject.ObjectType).SetRange(-100, 100);
    }

    // Acquire the device
    joystick.Acquire();
}

Then, when in your update loop, you need to get a key or direction from the joystick, just use code like the following:

// button press (button 0 here; GetButtons() returns the state of each button)
if (joystick.GetCurrentState().GetButtons()[0])
{
    // do some action like jump
}

// direction press
if (joystick.GetCurrentState().X < 0)
{
    // do some action for pressing "left" on the stick
}

If you need to disable the joystick, use the following:

if (joystick != null)
{
    joystick.Unacquire();
    joystick.Dispose();
    joystick = null;
}

Of course, you may want to incorporate better error handling, and probably implement this in your game's input handler that controls the key presses. But as I said in the intro, this is a very simple implementation and will be enough as a starting point.

If you have any questions or if you found this tutorial useful, please leave a comment.

Nice looking Cinematic Cameras in Unity

A challenge you may face in your little programmer's life of programming and Unity stuff is that you might be making a 3rd person game… or even a 2D platformer… you know what, any game (mainly 3D), and you want a cinematic camera to do things.

Implementation 1

Touch: Something dark happening. A cinematic is needed.

Basically, the camera just needed to sit there and move slowly in one direction. Originally, I made it start out not moving; once activated, it rotates in one direction, and after a few seconds it cuts back to the player.

This was all well and good, but it meant that I couldn't easily control WHERE the camera would end up afterwards, and it didn't give much freedom in positioning the camera.

Implementation 2

The second idea was to have the cameras lerp (linearly interpolate) between one transform and another. This is what it ended up being (with a little extra magic), but instead of just using plain transforms, I used actual cameras. The reason is that you can SEE what the cameras see at the start and end, which made it a lot easier to position them artistically (or cinematically, whatever).

All but a mess of camera frustum wires, but oh so useful it is.

But the cameras were a bit jerky (they went in a straight line and jerked to a halt at the end of their journey), so let's make a curved lerp!

This was done by easing the “t” in the “Lerp(t)” function from 0 to 1 (so it's not linear any more). The following code is what was used to do this:

// How far through the journey we are, clamped to 0..1
float normalizedTime = m_timer / m_lerpTime;

if (normalizedTime > 1) normalizedTime = 1;

// Smoothstep easing (3t^2 - 2t^3): starts and ends slowly
m_lerpProgress = normalizedTime * normalizedTime * (3.0f - 2.0f * normalizedTime);

m_cameraObject.transform.rotation = Quaternion.Lerp(m_lerpStart.rotation, m_lerpDestination.rotation, m_lerpProgress);

m_cameraObject.transform.position = Vector3.Lerp(m_lerpStart.position, m_lerpDestination.position, m_lerpProgress);

So as you can see, we lerp the rotation and position of the camera based on the rotation and position of a starting camera and a destination camera.

  • “m_lerpProgress” is what is used as “t”
  • “m_lerpTime” is how long the camera needs to take to complete its journey
  • “m_lerpProgress” needs to go from 0 to 1, so we calculate it from our known time variables. The formula is the well-known “smoothstep” easing curve (3t² – 2t³), a popular way to lerp along a curve, and it works perfectly.

The last touch of magic is a lerp back to the player. To control this, the camera uses a finite state machine (because why not?) and basically uses the EXACT same formula to lerp back to the player, rather than to another camera in the cinematic camera list.

Ladies, form a line.

One thing of difficulty and noteworthiness: each camera you use as a destination or start camera needs to be DEACTIVATED, and must not have any audio listeners, or anything really. They are simply references, so strip them down to their bones.

The other thing (despite me not doing this in the end game) is that there should really only be one camera in your scene. If you're good, and you plan your game well before creating the camera system (like I didn't), the camera your player uses should be the same one your cinematic cameras use. This is because a camera needs to have its settings correct (like fog colour and frustum distance). So design your camera system around a single camera.

Unity Sparks system

Everyone knows I love particles. And if you don't, then there's something new to learn about someone every day.

But seeing as I like particles so much, I decided to try and see if I can get some nice realistic spark effects in Unity and succeeded after a couple of hours.


And because I’m nice, I’ll let you have the package:


The way this was done was mostly trial and error, tweaking values in the particle system.

The way I did it was to first create 2 GameObjects: one for the spark effect in the middle, and the other for the flying sparks.

Next, I added a particle emitter to the main spark object and put an additive particle material on it. I then set the particle emit rate to 60, the lifetime to something really low (the values are all in the Unity package anyway) and a random rotation. To make it look better, I also added size-over-lifetime so the sparks get slightly bigger over 2 frames.

The second part was a cone-shaped emitter with a large emit rate, a bounce setting, and the render type set to “Stretched Billboard” so the sparks all face the direction they travel.

That's the basics of it, but there's a lot more to it, like getting the gravity and the lifetime just right. These are all in the package, but they will depend on your scene in the end, so they will still need to be tweaked.

If you like, you could add smoke particles to a 3rd emitter to give it more of a burning effect. I actually recommend this, but smoke is another tutorial that I’m sure you can find somewhere else.

Anyway, I hope this helps someone out.

PS: you could use a “one shot” for both emitters and use it as bullets hitting a wall 😀


Improving usability

In the last few days I've decided to do a little bit of extra work on our first year project, “Filtrate”. This was due to me wanting to show something off at Open Day.

For those who don't know, this is Filtrate:

2013-08-22 10_12_46

Rather, that is a screenshot from the current version.

Main game

Some of the things I updated on it were as follows:

  • Adjusted player movement. The player jerked left or right when pressing an arrow key. In theory this was a good move to begin with, because the player moves in that direction instantly, but as we added better physics to the game it stopped working well. The reason is that hitting an arrow key set the player's speed in that direction instead of increasing it. This meant you couldn't do precision landings, and the player moved unnaturally when moving in one direction and then changing direction.
  • Added a circle around the player that reveals what's behind the player in other dimensions. This was extremely difficult, and the end result was really good. On the first attempt, I tried to use a white oval behind the player as alpha on the background. This didn't work, mainly because I couldn't figure out how to set up the blending very well, so I moved on to another method. The second method was a single large image that was completely white but had a hole in the center. This was placed over the player, and the blending functions for the filters were applied using this rather than the white square used before. This worked well and is how it's done now. I then made the square change size randomly to make it seem like it was flickering, creating the effect that it's stuff from another dimension.
  • Added a smokey effect in the dimension circle around the player. This also turned out really well. The way I did it was also extremely hard; I've explained it all below.
  • Added more levels. This was a result of improving the level editor and just making levels as I tested it. Some weren't saved, which is sad. But there are some easier levels at the start that introduce the player to the basics more slowly.

Smokey dimension effect

This is the effect in the game that allows you to see smokey effects on objects in other dimensions directly behind the player.

First, I calculate the red, green and blue filter colours. These are eased based on which filter has been selected, with the formula c += (0 – c) / delay for colours not selected, and c += (255 – c) / delay for the current filter colour. This way, the colours that are off blend down to 0 over time, and the colour that's active blends up to 255 over time. These are not actually applied to the filter yet, but are needed in the next step.

I then negate these colour values as the blending needs the opposite colour to “show” the actual filtered objects behind it (so to see red the blend needs to be 0, 255, 255, which is all colours BUT red).
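The easing and negation steps above can be sketched as follows (plain Python with made-up names; not the actual game code):

```python
def ease(c, target, delay):
    """Move a colour channel toward its target by a fraction each frame:
    c += (target - c) / delay."""
    return c + (target - c) / delay

def filter_blend(active, rgb, delay):
    """Ease each channel toward 255 if it's the active filter, else toward 0,
    then negate the result for the blend (to see red, blend with 0, 255, 255)."""
    eased = {ch: ease(rgb[ch], 255 if ch == active else 0, delay)
             for ch in ('r', 'g', 'b')}
    blend = {ch: 255 - v for ch, v in eased.items()}
    return eased, blend
```

With delay = 1 the ease jumps straight to its target, which makes the behaviour easy to check; in the game a larger delay gives the gradual fade.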

I then set up the first SpriteBatch (XNA for the win), which uses an additive blend state. This is used to render the “below” level (everything that is filtered, e.g. not the player, the start/end doors, the HUD, etc.). This draws in 2 modes: mode 1 draws ONLY the FILTERED dimensions that are NOT active. So if I have the red filter activated, this will draw only the green and blue dimensions, but not the white dimension. The reason for this is explained later.

[Screenshot: 2013-08-22 10_59_20]

The next step is to draw the smoke. The smoke effect is a tiled square of smoke (actually an inverted black-and-white water texture).

[Screenshot: Swirl]

This is applied using a “multiply” effect (darker areas are drawn more strongly; lighter areas are more transparent). This creates the effect of dark smokey patches over the top of the underlying dimensions. I then made the UVs scroll upwards and positioned the texture relative to the camera.
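Per channel, the multiply blend works out to something like this (a Python sketch of the arithmetic only, not the XNA blend-state setup):

```python
def multiply_blend(dst, src):
    """Multiply blending on 0-255 channels: a white smoke texel (255) leaves
    the underlying pixel unchanged, darker texels darken it proportionally."""
    return (dst * src) // 255
```

So a mid-grey smoke texel roughly halves the brightness of whatever dimension pixel sits under it, which is what produces the dark patches.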

[Screenshot: 2013-08-22 11_01_16]

The NEXT step was to render the rest of the level. The easiest part is drawing the neutral dimension. The other dimension (the blue one in this case) needs to be drawn at a percentage of how far along the transition is. Otherwise, if we only drew dimensions when the others are off, they would switch instantly and we would lose the fade effect on elements of the previous dimension.
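The transition percentage amounts to a simple cross-fade. Assuming a value t that runs from 0 to 1 during the switch (my notation, not the game’s), it looks like:

```python
def transition_alphas(t):
    """During a dimension switch, draw the previous dimension at (1 - t) and
    the newly active one at t, instead of toggling them instantly."""
    return {'previous': 1.0 - t, 'current': t}
```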

[Screenshot: 2013-08-22 11_02_15]

As you can see from the screenshot, it is starting to make sense why I’ve done it this way. The neutral dimension doesn’t need clouds applied because it’s ALWAYS on; technically it still counts as being part of every coloured dimension.

Next, we apply the “filter”, the main aspect of the game. This was originally a stretched, plain white (255, 255, 255) texture, but as mentioned above, it’s been changed to a white square with a tiny hole in the center that reveals the other dimensions with smoke on them (and, as you can see, the neutral dimension shines through).

[Screenshot: 2013-08-22 11_03_07]

Notice the green orb is smokey because it’s not part of the blue dimension but the player is standing over it.

Then, I apply a dark border just to add a gloomy atmosphere.
[Screenshot: 2013-08-22 11_04_39]

Then we draw the white objects (this has since been changed so they’re drawn under the gloomy border):

[Screenshot: 2013-08-22 11_05_49]

And finally the static effects, and the HUD:

[Screenshot: 2013-08-22 11_07_03]

Notice that the noise adds a negative effect to the level, which is nice.

Level editor

  • Added dynamic title bar – so you can see which level you have open (or “Untitled” if none), plus a * if you have unsaved changes.
  • Made save feature on close – added an interrupt on any close event that causes a “Save changes?” dialogue box to appear.
  • Multiple select items with shift – this was very difficult, as editing the properties panel would set every selected object’s properties to the last selected item’s values. I had to add a check on every variable: if it’s the same as it was, leave it alone; if not, edit JUST that property. Worked well in the end.
  • Can create platforms by dragging up and left now – previously, platforms could only be created by dragging from a point down and to the right. To fix this, I check whether the final mouse position’s x or y is lower than the starting mouse position’s x or y and, if so, reverse the formula.
  • X, Y, Width, Height in properties are now NumericUpDowns – before, these were textboxes that not only needed converting, but had to be edited to see changes. Now you can just click the up or down arrows on each box.
  • Made texture in properties a dropdown with default “” – I plan on implementing this properly for the sake of variation in levels. Levels will have customisable textures for help signs etc.
  • Removed corkscrew button / removed goose button – These weren’t implemented
  • Added zoom function – Can zoom in and out by multiples of 8
  • Added shortcuts for menu items – Ctrl+O opens, etc.
  • Got Exit to work / Got New to work – they obviously didn’t work before. Exit also prompts for save.
  • Added caution box for unsaved data – as mentioned above, this was implemented via the close interrupt. It uses a simple variable that is set to “true” when something is deleted from or added to the scene, and set to false when the level is actually saved. This state is also used to add the * in the title bar.
  • Added check for locked files – if the level editor tried to save a file that was locked, it would crash. To stop this, I added a function that uses try / catch on the file stream.
  • Clicking nothing de-selects everything – I kept doing this to de-select items so I just implemented it by making use of the function “DeselectEverything” that had already been implemented.
  • Made 2 levels of grid lines – most of the levels I had made used 2-by-2 sections, as they just looked nicer. I added a darker grid line for every second one to help with this sort of level style.
  • Made dimension and effect button selections more obvious – Before it was hard to see in the properties panel which button was pressed as they had little background or shading. Now they use bright colours.
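The drag-direction fix from the list above boils down to normalising the rectangle before creating the platform; a sketch of the check (plain Python, names are mine):

```python
def normalise_rect(x0, y0, x1, y1):
    """Turn any drag (start point to end point) into a rect with its origin
    at the top-left and positive width/height, so dragging up or left also
    produces a valid platform."""
    x, w = (x0, x1 - x0) if x1 >= x0 else (x1, x0 - x1)
    y, h = (y0, y1 - y0) if y1 >= y0 else (y1, y0 - y1)
    return x, y, w, h
```

Dragging from (10, 10) up-left to (4, 2) then yields the same rect as dragging from (4, 2) down-right to (10, 10).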

[Screenshot: 2013-08-22 11_41_45]

Camera bugs and research

Camera bug

I found a bug in my camera programming. When the ray flying towards the camera HITS a wall that can turn transparent, it doesn’t go any further. This means that if the camera is behind a solid wall, the ray never hits that wall, so the camera can go through walls whenever it’s behind a transparent one.

To fix this, I put the transparent wall objects on the transparent-effects layer and used that layer for things that are “not solid”. These also include the mother and children.

I then changed the programming to fire 2 rays instead: one to check against transparent things, and another to check against solid walls. This worked; however, I realised that it wouldn’t if there were, say, 2 transparent effects in a row. This means I will need to loop rays, each time ignoring the previously hit object, until the ray reaches the camera. This will be quite difficult, and I may have to use some method other than a ray, or tweak the ray so it can ignore certain objects. I did some research but couldn’t find much help.
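The loop-of-rays idea could look roughly like this (Python pseudocode; cast_ray is a stand-in for the engine’s raycast call, and the toy scene is mine):

```python
from collections import namedtuple

Hit = namedtuple('Hit', ['obj', 'solid'])

def find_blocking_hit(cast_ray, max_ignores=8):
    """Fire repeated rays toward the camera, each time ignoring everything
    hit so far, until we reach a solid object or run out of hits."""
    ignored = set()
    for _ in range(max_ignores):
        hit = cast_ray(ignored)
        if hit is None or hit.solid:
            return hit
        ignored.add(hit.obj)
    return None

# toy scene: two transparent effects in front of a solid wall
scene = [Hit('glass1', False), Hit('glass2', False), Hit('wall', True)]

def cast_ray(ignored):
    """Return the first non-ignored object along the ray."""
    for h in scene:
        if h.obj not in ignored:
            return h
    return None
```

The camera would then only be pushed forward when the ray actually reaches a solid wall, no matter how many transparent effects sit in between.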


Other than trying to fix the camera, I also did more research on cameras. At the beginning of the project the camera seemed pretty straightforward, but as the project comes along it’s starting to seem more complex.

I had a look at some videos of games that use notable cameras, such as Mario 64, Banjo-Kazooie, Journey, and various other Mario games. One thing I noticed, especially with Mario 64, was that the camera had an AI. At certain times in the game, the camera would position itself and aim in certain directions depending on what was happening. For example, when Mario has to fight Bowser, the camera zooms into Bowser’s face from behind Mario, and when game-play starts, the camera LERPs back to its normal position (orbiting Mario from a set distance). The camera controls and type of view are signified by an icon at the bottom-right of the screen. There’s also wall avoidance included.

As well as this, I did some reading on the camera in Unity’s third-person tutorial. The only camera-specific thing I found was that it has a “near” and a “far” camera, which are combined to make the finished image. This seemed like a good idea, but for the time being we will just render everything to the current camera until any tests reveal slow-downs in the game.

After all this, I can see that the camera is going to be quite a large task. It is also a main element of the game. I think a good camera will need a lot of sections in every level that tell it to navigate to a certain angle when the player is standing in a certain area, mixed with any key things in the level that need to be in view. I can combine these to try and get a nicer camera.
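One way those camera sections could work is a simple zone lookup; a hypothetical sketch (the zone format and all names here are mine, not anything implemented yet):

```python
def camera_settings_for(player_pos, zones, default):
    """Return the camera settings for whichever level-designed zone contains
    the player, falling back to the default orbit camera."""
    px, py = player_pos
    for (x, y, w, h), settings in zones:
        if x <= px < x + w and y <= py < y + h:
            return settings
    return default

# example: an overview zone and a boss close-up zone
zones = [((0, 0, 10, 10), 'overview'), ((10, 0, 5, 10), 'boss_closeup')]
```

Key objects that must stay in view could then be blended on top of whatever settings the zone returns.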

Twitter API 1.1 in CakePHP 1.3

Although at the time of writing this, vendors have been deprecated in CakePHP, I’m going to write this article because I want to start doing more useful tutorials instead of just school work.

This tutorial will only work for CakePHP version 1.3.

Today I did some work. This work was basically “Help my twitter feed has been broken for months”. Never fear, Jimmy is here. To save the day.

This problem happened because Twitter decided to remove old functionality from their feeds, like RSS and Atom. This is good because “bring on the future!”, but also bad because a lot of JavaScript libraries used these old methods and are now broken. Like the one I was using on this website.

To fix the issue I had to comb the interwebsites to finds me some tutorialses. One in particular was pretty useful and showed how to grab the twitter feed, which is all I needed to do. The process is as follows:

  1. Log in to the Twitter developer site
  2. Create a new app
  3. Create an access token
  4. Leave this page open so you can copy about 4 damn codes along to your program
  5. Search for the twitteroauth code on GitHub and download it.
  6. In the zip, grab the twitteroauth folder with twitteroauth.php and oauth.php
  7. Paste this folder into CakePHP’s app/vendors folder

MEANWHILE, on your web server (or localhost), you’re editing a CakePHP CONTROLLER file. In this case, because the Twitter feed was on every page, I was editing app_controller.php.

In your controller, you need the following:

// Load latest Twitter data from the session cache
$tweets = $this->Session->read('Twitter.latest');

// only hit the (slow) Twitter API if we have nothing cached
if (empty($tweets)) {
	// Path to the twitteroauth library
	App::import('vendor', 'twitteroauth/twitteroauth');

	// vars for use later
	$twitteruser = 'username';
	$notweets = 3;
	$consumerkey = 'xvsdfsdfsd';
	$consumersecret = 'sdfsadfsadf';
	$accesstoken = 'sdfsadf';
	$accesstokensecret = 'sdfsdaf';

	// function creates a new connection
	function getConnectionWithAccessToken($cons_key, $cons_secret, $oauth_token, $oauth_token_secret) {
		$connection = new TwitterOAuth($cons_key, $cons_secret, $oauth_token, $oauth_token_secret);
		return $connection;
	}

	// get a connection using the above function
	$connection = getConnectionWithAccessToken($consumerkey, $consumersecret, $accesstoken, $accesstokensecret);

	// request the tweets as JSON from the user_timeline endpoint
	$tweets = $connection->get('https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=' . $twitteruser . '&count=' . $notweets);

	// add the result to the session cache
	$this->Session->write('Twitter.latest', $tweets);
}
$this->set('tweets', $tweets);

And there you go. Look through the code; the comments tell you what it’s doing. Basically, make sure you paste your own codes and usernames into the fields: the data there is example data.

The reason why I’ve added that session stuff is because the Twitter call from the website is slow as balls, and if I had to wait 5 to 10 seconds on every page load I’d probably shoot myself. This stores the tweet data in the session because it’s quicker that way. The user will now only experience one initial slow page load, with no slow-downs after that.

Then, in your view, you need something like:

$twitter_output = '';
$username = 'twitter_username';
$twitter_output .= $html->tag('h4', 'Latest Tweets');
for ($i = 0; $i < 3; $i++) {
	$twitter_output .= $html->tag('p', $html->link($username, 'https://twitter.com/' . $username, array('target' => '_blank')) . ': ' . $tweets[$i]->text);
}
echo $twitter_output;

The above may or may not need some filtering on that text: for example, links in tweets won’t appear as clickable links, just plain text. Twitter converts these on its own, but here you would have to do it yourself.

In newer versions of CakePHP, it is recommended that you create a plugin or a type of package for this sort of thing rather than using vendors. Vendors are lazy and… I don’t know. But if you need a quick fix for a client, then this is your best option.

There you have it: Twitter API 1.1 on CakePHP 1.3.

Leave comments if you have any better ways of doing things.