Spectragate

Making indie games for the Toronto gamedev scene



Why open-source Git strategies are harming your closed-source project

Category : Tutorials · No Comments · Jan 12th, 2020

Note: This article was originally written in 2017. Since then continuous integration branches have become very popular, following similar ideas to the post below. If you like the ideas below, be sure to do some research and give them a try.

Source control solutions like Git offer many techniques for branch management, but once those techniques are battle-tested in a closed-source project, a number of problems start to arise. Unfortunately, these problems often only become apparent hours before you’re trying to deploy a build, and worse still, developers often consider them “just the way source control is”.

In this article I’m going to look at just one problem with source control which can have dire consequences for an entire project, then explain how I went about solving it.

The Single Problem

You’re a web developer working at a studio with 5-6 other developers. You’ve got your brand new Git repo setup and you start following a branching convention similar to most visual guides on the internet. Or perhaps you want to be fancy and you setup GitFlow so you can automatically follow best standards. Things are going great, you’re checking things in, you’re writing good commit messages, you’re creating and closing branches. You and your coworkers create new feature branches, close those features and after a week your build release day approaches. You open a new Release branch, the code looks good and you prepare to push the build out. Your tree now looks like this:

 

Your boss then enters and says an issue has come up and it’s going to take weeks to fix. Let’s say the client changed their mind on the requirements for some new CSS, so Feature-C (which we will now call Feature-X) needs to be removed from the deployment, but the other 3 features can still go live.

You realize that you can just make a new release branch from the master branch, merge in the 3 features still going live and push that build out. You start merging in branches and it looks like this:

Git Tree - 2

 

It’s a bit messier, but on merging the Feature-D branch you run into a problem. It’s a single problem that has two unfortunate consequences:

Disaster Number 1

Looking at the 3 features still going live (Feature-A, Feature-B and Feature-D), we realize that Feature-D was actually developed after the now-dead Feature-X. Because the developer of Feature-X thought the feature was complete, he merged it into the development branch, where it became the CSS that Feature-D was built on top of. The developer of Feature-D had no idea the CSS was a brand new feature; he just used it because it was the most recent. If we remove Feature-X, Feature-D is going to look like broken CSS garbage.

We’ve painted ourselves into a corner. We’ve built new features on code that is not yet in production, which if pulled from the release impacts every other feature that was built on top of it. Having a single development branch that all developers are building from creates a single point of failure for future changes. At best your project looks like this:

 

Branch Impact 2

At worst it looks like this:

Branch Impact 2

 

This is a situation that is common in closed-source studios, where newly developed features tend to get bundled up over a week or two into a single release. This is different from how open-source projects on GitHub tend to work (which is where Git and GitFlow really shine), where contributors often work in isolation from each other, one feature having little to no impact on the next. After the owner of the project decides 4 contributed features are to be included in the current version, the branches are updated and reflected immediately on GitHub. Since users are building and/or running the projects locally, there’s no delay in deployment where changes need to be pushed out to the world in bundled packages.

The fact that GitHub (and, by extension, GitFlow) doesn’t have to deal with the deployment side of projects, just the source control branches, often gets overlooked when researching branch strategies.

TLDR: Every feature that gets merged back into a development branch becomes the base for any future development, cascading changes outward when one feature down the line is removed.

 

Disaster Number 2

You’re getting close to fixing this mess. Feature-D has also been removed from the build and the new release branch is done. Even though the build was meant to go live hours ago it’s finally deploying to the server. Now that it’s time to update the master and development branches by merging release into them. But as your finger hovers over the merge button, you notice something is wrong.

The release branch isn’t actually what you want people developing from since you don’t want to completely remove Feature-X and Feature-D from the codebase. The release branch was just something you hacked together so you could deploy with the missing features.  You can’t merge Release back in without wrecking everyone’s work, so instead of merging it back in and damaging the development branch, you just kind of…leave it open. A little dangling branch to remind you off these failures, maybe you add a little “dead-end 🙁 “tag to it. You tell everyone to just continue working on the development branch and you’ll make a new release branch next deployment.

Git Tree - 3

But now there’s an even bigger issue – your codebase (development and master) don’t actually match what is in production. If someone goes to work on that problematic CSS which caused all this mess, is what they are seeing on the development branch actually what is in production? If someone in QA notices a bug on the live site, do you create a Hotfix branch from your current master/development branch or the old dead-end release branch?

TLDR: The gap between the code on the live site and what your developers are working with gets wider as more last-minute changes can’t be rolled back into the master and development branches.

 

How to fix this

There is a way around this nightmare of a situation, but it means changing how the development branch works, and it requires all developers to follow the system. So here’s the single concept we are going to use to avoid it.

You will never merge a feature back into the development branch. Every developer should have full trust that if they start a new feature, they are working off a snapshot of the current live website. 

Simple!

So what happens when we start and finish a feature? The new pattern is this:

  • Rule #1: New features are created from the latest development branch.
  • Rule #2: When features are completed, they are only merged into a QA branch (based off the latest development branch).
  • Rule #3: QA is eventually merged into a new release branch.
  • Rule #4: Once deployment is completed, the release is merged back into master and development, starting a new cycle.
  • Rule #5: Undeployed features are updated to the latest build by having the development branch merged back into them (more on this in the questions below).

Sidenote: The master and development branches are always identical. The reason for having a separate development branch is to make it easier to hook into CI servers, which usually expect certain naming conventions. Plus, it has the side benefit of being less confusing for new developers who are used to working off a development branch.

Using this approach, our branches look more like this:

Git Tree - 4

Taking the original problem into account, the story above now looks more like this:

  1. We have a stable development branch and master branch, both of which are identical to the live production build/site.
  2. A developer starts work on Feature-A. They check out the development branch and create a “feature/Feature-A” branch.
  3. The developer finishes their work, but they do not merge back into the development branch. Instead of polluting that branch, we spin up a new QA branch from development and merge our changes into that. If another developer has already created a new QA branch based off the most recent development branch, we can use that instead.
  4. An email is sent to QA to run some tests on the QA branch. If the team is small enough, email everyone so they know to check the QA build when they have time.
  5. The other developers create Feature-B, Feature-C and Feature-D and merge them into QA.
  6. Release day! We get news Feature-C is being removed. We spin up a new QA branch and merge Features A, B and D into it. Once a new round of QA is complete, we merge into a release branch.
  7. Once deployed, we merge Release back into master and development. The old feature branches are then deleted. You can then either delete your QA branch until a new QA round starts, or you can create one off your master/development branch and just keep it open between deployments. I prefer to just delete it once a deployment is complete.

 

Questions

What happens to Feature-X in this model? Isn’t it now one release behind?

As per rule #5 above, after a release is deployed, a check should be done for any features that are still open. If a feature skips a release or two, every time the development branch is updated the feature branch can have the latest development merged back into it. Any conflicts or updates that need to be performed are now handled by the person writing the feature, not by the deployment team at the 11th hour who know nothing of the code. This gives the feature developer ample time to incorporate any conflicting changes into his design. Once his feature is complete and it’s merged back into QA (and subsequently, development and master), we can be sure he won’t accidentally overwrite anything with outdated code. To update Feature-X, our branch looks like this:

Git Tree - 5

Developers can theoretically keep updating like this forever, always sure they won’t lose work and are working off the latest snapshot of the live build/site.

Is GitFlow really that bad? It seems that in GitFlow if you always work off the development branch for new features, you are always moving forward in the codebase. If a feature is removed, shouldn’t that just be considered a new feature and incorporated into the build/branches?

In theory, yes! However, this is why I say GitFlow is only perfect in a perfect world. Treating the removal of a feature as just another feature to be developed (undeveloped?) sounds fine, but it’s rarely possible given when features are most likely to be removed – hours before a deployment. If you can guarantee that any time a feature is removed you have ample time to restructure the app/website around the removal without impacting timelines, then by all means go for it.

There is a world where this is possible, and it’s for people using Git for what it was intended for – open-source projects with public commits from a huge number of different sources. These projects do tend to only move forward with their branches – if a feature is included and later needs to be removed, that in itself is considered a development task and will probably show up in the patch notes. But what works for the open-source GitHub world doesn’t, in my opinion, work for day-to-day development in studios which have a lot less control over features and have to handle deployments.

What happens if a feature is being developed that is dependent on another feature branch? Since the development branch won’t be updated until after release and we can’t build off QA, how do I get those changes?

Ideally you should do a release before starting the new feature based on another. Since you should only be building off the latest snapshot of production, you should only be building off another developer’s feature once it has been committed to the latest build.

Now, this isn’t always possible. If you are working on a feature that is being built on top of unreleased changes from another developers feature work:

  1. Create a feature branch off their existing feature branch. You know that your work is dependent on that feature, so if theirs doesn’t go live you already know your feature isn’t going live either. You should be periodically merging the parent feature into yours to make sure you’re working off their latest changes.
  2. Less ideally, build your branches all the way up to the release branch, merge it back into development and treat it as a virtual deployment. The drawback of this method is that if the features you just merged into development need to be removed, you will have to spin up a new feature branch to undevelop those code changes (similar to the GitFlow approach in the previous question). This was the problem we were trying to avoid, so it’s advised not to do this often.

 


That’s it, hopefully this makes your deployments and branching strategies easier. I’ve used this pattern at my old office for around 7 months and it solved many of the headaches we were having with deployments. Since then I’ve seen similar techniques to this popup with integration branches. If you have any feedback on this approach or can think of some edge cases that I didn’t cover, please feel free to comment.

Fixing common Advanced C# Messenger issues

Category : Tutorials, Unity · No Comments · Oct 12th, 2015

In the latest project I’m working on I’m making heavy use of the Advanced C# Messenger. It’s a brilliant script that acts more like a Mediator Pattern than a plain Send/Receive messenger, which is fine with me.

However, as great as this plugin is, there are two small issues I ran into pretty quickly.

No Listeners Available

To make ACM (is that an acronym?) work like a true mediator system, it needs to give exactly zero craps about what’s actually listening to the broadcasts. Unfortunately, if you try to send off a broadcast and no listeners are set up, you’ll quickly run into this error:

“BroadcastException: Broadcasting message “SaySomething” but no listener found. Try marking message with Messenger.MarkAsPermanent”.

Marking the message as permanent won’t solve anything. Luckily ACM comes with a way to fix this: comment out this line at the top of Messenger.cs

//#define REQUIRE_LISTENER

This simply disables the check for listeners before a message is broadcast. Easy! The documentation on the wiki page doesn’t mention it, so unless you connect the dots yourself you’ll go down the wrong path thinking there is an issue with permanent messages, like I did.

Unexpected results when restarting scene in Unity 5.2

This one took me a bit longer to hunt down. You load your scene by pressing the play button in the Unity Editor (so no Application.LoadLevel calls are made) and everything loads and works fine. You then call “Application.LoadLevel” to restart your scene and, no matter how simple your scene is, some messages make it through and others don’t. What the horse?

This bug is caused by a breaking change in Unity 5.2: before 5.2, OnLevelWasLoaded was called before Awake. Now that this has been fixed, any listener events you set up in your Awake calls are going to get deleted when OnLevelWasLoaded is called, because Messenger.cs automatically calls Clear() (which removes all the current listeners) when this event triggers.

The simple fix is to comment out the cleanup call that happens there (line 304 in Messenger.cs) and call it manually yourself when you are changing scenes:

//Clean up eventTable every time a new level loads.
public void OnLevelWasLoaded(int unused) {
	//Messenger.Cleanup();
}

Unfortunately, because OnLevelWasLoaded no longer fires before Awake, there doesn’t seem to be a replacement method yet that can take its place. Perhaps a new OnLevelWasUnloaded method would be handy?

Solving gaps between scrolling background sprites

Category : Tutorials, Unity · No Comments · Oct 4th, 2015

Note: The background sprites in this blog post were downloaded from opengameart: Forest Background by ansimuz

In the game that I’m currently working on I need an infinite scrolling parallax background behind the player. Because the camera needs to be orthagraphic but support parallax scrolling, I decided to make the world move while the player stands still. Everything was going great until I noticed this:

gap1

I had a damn gap appearing between my tiles – and it wasn’t all the tiles; in fact my foreground seemed to work OK. It seemed to happen more often with very slow-moving sprites. After much research I realized that the problem was occurring because of float precision errors in Unity. Basically, this coordinate:

gap2

would move a few frames and then explode into something like this:

gap3

The tile at the front of the moving queue ends up at .4599998, but it still has to render to a single pixel on the screen. The camera has to guess how to render this and either rounds the position up or down, placing the sprite one pixel too far or too short and creating a gap between the tiles. In many 2D-only game frameworks you don’t notice this issue because the world-unit-to-pixel ratio is already 1:1, so it’s easier to move around consistently.
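
This drift is easy to reproduce outside Unity. A minimal Python sketch (Unity’s floats are 32-bit, but 64-bit doubles show the same class of error):

```python
# Adding a step that has no exact binary representation drifts off the
# decimal value you expect -- the same class of error that left the tile
# at .4599998 above.
pos = 0.0
for _ in range(3):
    pos += 0.1

print(pos)         # 0.30000000000000004 -- not exactly 0.3
print(pos == 0.3)  # False
```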

My first solution was to round all coordinates down to 3 decimal places – it seemed like an easy fix. However, the gap started to appear again and I realized that I was now just doing in code what the camera was doing when it rendered anyway – taking a precision error and trying to guess whether it should be rounded up or down.

After 3 days of faffing about with different techniques I came up with this solution:

My orthographic camera has a ratio of 1 Unity world unit to 100 pixels (note: you can find more about getting your orthographic scale here). What I realized was that for a sprite to move a single pixel on the screen, the absolute minimum distance it can travel in Unity is 0.01 world units. As long as I constrained my background movement to multiples of 0.01, I would avoid any rounding or float precision errors. After all, if something lands on 10.672, how does the camera reliably render that at .672 of a pixel?

I changed my movement code to simply use 0.01 as the minimum distance a sprite can travel, which seemed to work great except for one tiny issue:

gottagofast

It’s way too fast! He’s supposed to be running but not that quick! The problem now is that 0.01 as a minimum distance isn’t small enough. If the backgrounds move that 1 pixel every frame, that’s still 60 pixels being covered every second. The solution was to take the movement code and place it into a Coroutine that runs at different intervals for different layers. For the background, they only shift one pixel every 0.05 seconds, while the foreground moves every 0.03 seconds.

public float MovementSpeed;   // distance per step, in world units (a multiple of 0.01)
public float MovementDelay;   // seconds to wait between steps

void Start()
{
    StartCoroutine(WaitAndMove(MovementDelay));
}

IEnumerator WaitAndMove(float waitTime)
{
    // Wait, step exactly one pixel's worth of world units, then schedule the next step.
    yield return new WaitForSeconds(waitTime);
    transform.position = new Vector3(transform.position.x + MovementSpeed, transform.position.y, transform.position.z);
    StartCoroutine(WaitAndMove(waitTime));
}

Very simple but very effective. One drawback with this technique (which is less about the technique and more about pixel-based movement in general) is that you can’t move the sprites with anything that uses arbitrary float values – this includes transform.Translate, moving using a velocity, or multiplying movement by delta time.

If you need pixel based movement in Unity, make sure you are only moving in whole pixel amounts based on your orthographic camera size.

Some final notes

If you find that the pixel-based movement for very slow backgrounds is too jerky, you can get sub-pixel movement by setting the move distance to half your minimum pixel distance (e.g. 0.005 instead of 0.01) and then enabling bilinear filtering on your sprites. You will lose the pixel-perfect outline of your sprites, but if you’re using this on faraway objects (e.g. clouds) the player probably won’t even notice.

Also, pixel-based movement will always be jerkier than sub-pixel world-based movement (which can lerp between pixels without jumping from point A to point B in a single frame). It’s not a silver bullet, and it sometimes takes some experimentation to see if the effect is worth it. If you are using pixel-based movement, however, turning on vsync under the quality settings will help prevent too much sprite-jumping, owing to the fact that you can’t use Time.deltaTime to smooth out your frame-to-frame movement.

Tutorial: Installing HaxeFlixel

Category : HaxeFlixel, Tutorials · No Comments · Jul 1st, 2015

When I first started using HaxeFlixel, the biggest turn off was definitely the giant stack of programs I needed to install. Worse yet, I had no idea what half of them even did so I just had to blindly install them all and hope that nothing broke along the way.

Now that I’ve had a bit more time to use HaxeFlixel I realize the install process isn’t as daunting as it first seems. In fact, it’s pretty straightforward once you understand what everything does.

The following guide is a lot longer than what you’ll find on the HaxeFlixel website, but the idea is to explain the individual components that make up the install process. This is so when something breaks you’ll hopefully understand how to fix it. When I first started using HaxeFlixel the website I was downloading the libraries from went wonky, fell over, got a bucket stuck on its head and then one of the components went completely bonkers. I didn’t know what the hell the component even did so I just deleted everything and started again.

Note: This guide assumes you are installing on Windows.

Step 1. Installing Haxe.

What the hell is Haxe?

haxe

Haxe is the foundation of HaxeFlixel. It’s a framework with its own language that compiles to many other languages – it can target JavaScript, PHP, C++, C# and more! Writing Haxe feels like a combination of C#, Python and JavaScript, with a few Haxe-specific features like macros thrown in. Haxe is what you’ll be writing in for HaxeFlixel, which is both a blessing and a curse. On one hand you have the entire Haxe community for when you run into issues, but on the downside most of your questions will be HaxeFlixel-specific, making the official Haxe forums a bit useless. In my experience, though, just about every question about Haxe in general has already been asked, so as long as you do some searching you’ll find an answer.

Ok so how do I install it?

Visit http://haxe.org and download the Windows installer. Once it’s installed, go to “Start > Control Panel > Search for Environment Variables > Edit system variables” and click the Environment Variables button. In the screen that appears you have two types of variables: user variables at the top and system variables at the bottom. In the system variables, check that you have a variable called “HAXEPATH” set to the same directory you installed Haxe to.

step1_installingHaxe_variables

 

Note: If you install Haxe through Chocolatey, sometimes the environment variable will be created as “HAXE_PATH”. It should be changed to “HAXEPATH”. You might need to restart your computer after changing this variable. I’ve also noticed that sometimes the installer won’t set up these variables at all, in which case you’ll have to add them manually. Also make sure you have “NEKO_INSTPATH”.

 

Step 2: Running HaxeLib

When I first started with HaxeFlixel, the tutorials made it seem like I had to install OpenFL manually. This is probably where everything starts to go sideways for most new users – it’s not the best idea to be installing all the packages by hand. Instead, Haxe comes with a very handy tool called HaxeLib. HaxeLib is a package manager similar to NuGet, npm, Bower and other such tools. Now that we have Haxe installed we don’t actually need to install anything else manually; we can just cram stuff into the command line and let it install what it needs. Huzzah!

To start, open your command line by going “Start > Run”, typing “cmd” and hitting enter. This should open your command console. Next, enter these commands one at a time:

 

haxelib install lime
haxelib run lime setup

Timeout errors: If you start getting timeout errors when using HaxeLib, just wait 10-15 minutes and try again. Sometimes it gets hit by heavy traffic, and sometimes (rarely) it goes down completely. If you are getting timeout errors for an extended period of time, you can download and install the packages locally by visiting lib.haxe.org directly. Once you’ve downloaded the correct package, make sure your command line is in the directory containing the zip file, then replace the keyword ‘install’ above with the keyword ‘local’ – this makes haxelib install from your local .zip file instead. Just make sure that you also replace ‘lime’ with the full zip file name of the download (which will look something like “lime-2,5,0.zip”).

What the hell is Lime and why am I installing it?

Lime is the most basic layer of our rendering engine. It’s a very low-level system that supports many platforms, such as Flash and HTML5, as well as native platforms like Windows and Android. It’s part of the glue between Haxe and the devices you compile to, plus it has one of the best icons around. Look at it! It’s a lime and a cube!

Lime alone can be quite difficult to write against, which is where OpenFL enters the scene – it’s a layer that sits on top of Lime and provides a much nicer API to program against.

OpenFL? How many of these things are there?

openFL

OpenFL is a framework built to run on top of Lime. It’s a rewrite of the ActionScript 3 API with some extra additions thrown in for device support. It’s very cool and gives us the ability to write in a framework that is a lot closer to ActionScript. Without OpenFL we would have to write against OpenGL directly, which isn’t fun for most indie devs. We do lose out on some of the advanced functionality you get with Lime by having direct access to OpenGL, but from what I’ve seen that’s rarely an issue.

So why does this matter for us? Well, OpenFL is what allows the “Flixel” part of “HaxeFlixel”. Flixel is a Flash game engine written by Adam Saltsman, but since Flash isn’t that flash these days (HA!) some smart people rewrote it in Haxe on top of the OpenFL engine (and OpenFL lets us export to more platforms than just Flash). That’s why some people refer to OpenFL as ActionScript 4 and HaxeFlixel as Flixel 2.

Now that we’ve already installed Lime, we can install OpenFL to give us a nice API on top. In your command line run the following one at a time:

lime install openfl

This will install OpenFL on your computer. Note that in this case we didn’t install it through haxelib, because Lime takes care of setting up OpenFL for us.

Once that is done, we need to choose which platforms we want Lime to build for. (You’ll notice that we don’t need to set up HTML5 support – it’s enabled by default.)

lime setup windows

If you also want to build to Android, you can run the following:

lime setup android

Warning: If you do not plan on building for Android and you have never installed the SDK before, do not run this command. It will download an absolutely craptacular amount of files and require some manual steps in between to set up the Java development kits. I would actually recommend setting all this up in advance manually by following this guide on the Android site, then coming back to finish this install.

Step 3: Installing HaxeFlixel

After all these libraries and frameworks, we still need to install HaxeFlixel itself! We have our base setup now, which looks like this:

techstack

So our last step is to run the following:

haxelib install flixel
haxelib install flixel-tools
haxelib run flixel-tools setup

This will install Flixel from the haxelib library, plus it will download a set of tools that make development much easier.

When you set up flixel-tools you will be asked if you want to create a flixel alias in your console. It is highly advised you set this up – it allows you to write shortcut commands like ‘flixel tpl’ to spin up a boilerplate project, ready to go.

You will also be asked which IDE you are using. Don’t do what I did, which is press a random number and hope to fix it later. Whatever option you select determines what format your projects get created in (via the command alias above): if you choose Flash Develop you will get .hxproj files, but if you choose IntelliJ you will get .idea files. If you don’t know which one to choose, just select Flash Develop.

Note: It’s possible that after you’ve installed Haxe you can skip straight to the commands above – Lime and OpenFL are marked as dependencies that haxelib will automatically download when you install flixel. However, I recommend setting them up separately, just so you can do any extra steps you need, like setting up Lime for Windows/Android.

Wait I thought we were installing HaxeFlixel? This is just Flixel.

flixel

Confusingly, Flixel actually is HaxeFlixel – it just doesn’t seem like it. Because haxelib is a collection of libraries written in Haxe, the version it contains is actually the Haxe version of Flixel. This can be strange for new users, because you’re downloading a framework called Flixel while everyone keeps calling it HaxeFlixel.

HaxeFlixel is more a collection of tools and frameworks under the HaxeFlixel umbrella project, all supported by a core group of open-source developers, rather than a single program like Unity or Stencyl. So when you get HaxeFlixel, you’re actually pulling in several different tools and libraries (OpenFL, Lime, Flixel etc.) which all combine to make a game engine. It’s part of what makes the project so cool, but also what makes it confusing to get started with.

Step 4: Installing Flash Develop

Visit www.flashdevelop.org and download the latest version.

Once you’ve installed it, open it up and go to: “Tools > Install Software” and tick “Standalone debug flash player”. This will download the Flash debugger we need for our project. At this point we can create a new blank project either by using the command line, or we can download a template that allows us to create a new flixel project from inside Flash Develop.

I also recommend downloading and installing the templates from this page: http://haxeflixel.com/download. If you are still using Flash Develop, download ‘FlashDevelop .fdz template’ and then run the file it gives you. You will probably have to restart Flash Develop. There is also a package just called “Flash Develop Template” which is just a raw dump of the boilerplate project – this can be handy if you want to customize the project template and use that as your starting point (I have a slightly modified version I use for game jams).

Step 5: Have a victory beer

Everything should be set up and ready to go now. There are some tricks you can do, like ‘haxelib upgrade’ to update all the libraries and plugins you’ve downloaded, and there are some great plugins out there you can grab (don’t forget that as long as it doesn’t need to render anything, pretty much any Haxe library can be used in your game). I may do a follow-up post on getting started with Flash Develop (including the extremely finicky debugger) and cover some good boilerplate templates that are out there to get you started.