…for 2D Mobile
Unity is one of those game engines you keep hearing about. It has grown enormously over the past few years, and it is safe to say it has become the dominant game engine on the market, especially for mobile.
It started as a 3D game engine mainly used for building desktop games. In version 4.3, Unity began introducing 2D features, with the first being the Sprite object. Following this, version 4.5 brought 2D physics support and version 4.6 a whole new set of UI tools. Along with other cool features, like the ability to build for both iOS and Android while writing code in C#, it all looked very promising.
Having some previous experience with desktop Unity, we decided to test it out on mobile by building a simple iOS game. Without going into the details of creating a mobile game, we’ll try to explain the difficulties we encountered along the road that ultimately led us to part ways with Unity.
Building a Game
It all started out nice and shiny. We made our screens (view controllers) in the editor, positioned buttons and views using the new UI toolkit, and adjusted anchor points to support multiple resolutions. Everything was going well—it was easy to use Unity.
Then, we needed to write some navigation from one screen to another (similar to how UINavigationController handles navigation stack on iOS). There’s no built-in way to do it. You have to manage your navigation stack manually and implement your own animations for pushing and popping views. Since we’re building a game, pretty custom transitions between screens are almost mandatory, and we were going to implement them anyway. And so we entered pitfall #1.
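Since there is no UINavigationController equivalent, managing the stack yourself ends up looking roughly like this (a minimal sketch; the class and method names are our own, and the transition animations are left as stubs):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical minimal navigation stack: each screen is a GameObject
// that is activated/deactivated as it is pushed or popped.
public class NavigationStack : MonoBehaviour
{
    private readonly Stack<GameObject> screens = new Stack<GameObject>();

    public void Push(GameObject screen)
    {
        if (screens.Count > 0)
            screens.Peek().SetActive(false); // hide the current screen
        screens.Push(screen);
        screen.SetActive(true);
        // A push transition animation would be triggered here.
    }

    public void Pop()
    {
        if (screens.Count <= 1) return; // keep the root screen
        GameObject top = screens.Pop();
        top.SetActive(false);
        screens.Peek().SetActive(true);
        // A pop transition animation would be triggered here.
    }
}
```

Everything beyond the bare bookkeeping, including the transitions themselves, is yours to implement.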
Pitfall #1: Animation System
There are two ways you can go about writing animations in Unity. The first way is by using the animation window to build animations and then the animator state machines to create smooth transitions between them. The animation window allows you to change any property of your GameObject and record it as a key frame. By building key frames and interpolating between them, you’ve got yourself an animation. You can define your own interpolation curves in the editor, allowing really powerful control over how the animation will be performed, but this editor also has a couple of downsides. For example, if you rename an object or its child that was used in the animation, you often have to recreate key frames for the renamed object, which can be quite frustrating.
When working with the animation editor, the values used in animations are always absolute; you cannot translate an object relative to its current position. If the initial position specified in the animation differs from the object’s current one, the object instantly teleports to that initial position.
That brings us to the second way of building animations in Unity: through code. The problem is that there’s no nice, built-in way to do it easily. You can use coroutines or the Update method to change the position over time, but you have to do all the calculations manually. There’s nothing as simple as CCActionMoveBy in Cocos2D.
The code written below represents the same animation as the one created in the animation editor above. Using Cocos2D, the same animation can be written in just two lines of code and can be easily extended if more complexity is needed.
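A coroutine-based relative move, with all the interpolation done by hand, might look roughly like this (a minimal sketch; the names are ours):

```csharp
using System.Collections;
using UnityEngine;

public class MoveBy : MonoBehaviour
{
    // Moves the object by `delta` over `duration` seconds,
    // interpolating the position manually every frame.
    public IEnumerator MoveByCoroutine(Vector3 delta, float duration)
    {
        Vector3 start = transform.position;
        Vector3 end = start + delta;
        float elapsed = 0f;
        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            float t = Mathf.Clamp01(elapsed / duration);
            // Linear interpolation; any custom easing curve
            // would have to be applied to `t` by hand.
            transform.position = Vector3.Lerp(start, end, t);
            yield return null; // wait one frame
        }
    }
}
```

It would be started with something like `StartCoroutine(MoveByCoroutine(new Vector3(2f, 0f, 0f), 0.5f));`, and every extra feature (easing, chaining, callbacks) adds more manual work.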
And so we were stuck with our first tough decision: use the animation editor, along with animator state machines, to define pushing and popping, or write those animations in code. The first option would give us all the power in the world to create complex and juicy animations, but with great power comes great tweaking of all the possible parameters. The second option was code, where the juiciness of custom interpolation curves would be harder to achieve. We went with the former and wound up with a large state machine controlling the flow of the screens, which took quite some time to set up and work on. Code would probably have been the better option, but we would have had to build a more convenient animation system of our own, such as CCAction in Cocos2D, and implement our own interpolation curves to achieve the same level of juiciness. Either way, it was too much trouble.
So, we were back on track. We had implemented our views, and they had nice transitions between them. Everything was working. That is to say, everything was working inside the Unity editor. However, when we tried to build and run it on a device, our nice transitions wouldn’t work. That’s where pitfall #2 comes in.
Pitfall #2: Debugging
When stuff doesn’t work, you just debug it. Yes, but here we have a special case: something that works in the Unity editor but not on the device. What’s so different about this? It means that every time you want to debug it, you have to build from Unity, which generates an Xcode project, then build the Xcode project and run the application on the device. That process alone can easily take more than five minutes. And once you’ve gone through it, you’ll find there’s no simple, convenient way to debug your code using breakpoints. Your best bet is probably to set breakpoints in the compiled C++ version of your code, which is barely readable, so you’ll spend extra time just deciphering what’s written. Although this is much better than printing values, I still consider it a major inconvenience.
Unity introduced state machine behaviors in version 5.0. We used that feature to implement some behavior that was dependent on the animator state. It turned out that this feature was not supported on iOS. We ended up implementing a workaround and filing a bug report to Unity.
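For context, a state machine behavior is a script attached to an animator state that receives callbacks on state changes. A sketch of the kind of hook we relied on (the class name and log messages are illustrative):

```csharp
using UnityEngine;

// Illustrative state machine behavior attached to an animator state.
// This is the kind of callback that silently failed on iOS for us.
public class ScreenStateBehaviour : StateMachineBehaviour
{
    public override void OnStateEnter(Animator animator,
                                      AnimatorStateInfo stateInfo,
                                      int layerIndex)
    {
        Debug.Log("Entered state: " + stateInfo.shortNameHash);
    }

    public override void OnStateExit(Animator animator,
                                     AnimatorStateInfo stateInfo,
                                     int layerIndex)
    {
        Debug.Log("Exited state: " + stateInfo.shortNameHash);
    }
}
```

In the editor these callbacks fire as expected, which is exactly what makes the on-device failure so hard to catch.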
And so we continued down our road. We had our view controllers ready. Implementing the game part of the app was no biggie, since we went with a “let’s build something as simple as possible just to try it out” philosophy. With most of the game ready, we built it for the device again and got 5 FPS on the iPhone 4 (an irrelevant device nowadays, but we wanted to support it back then). How did we end up at 5 FPS with such an extremely simple game!? Back to pitfall #2 all over again. It took us a day to realize that, by default, Unity uses a very complex shader for rendering the Image component of the relatively new UI toolkit. Why not use something cheaper? After all, it’s only a UI component. At least it was an easy fix: we just had to replace the shader with a cheaper one on all Image components. But it was hellish to track down, since the problem only manifested on the device (we’ve already described what that means). This kind of problem strikes fear into one’s heart. Although easy to fix, it makes you wonder what else is lurking in the shadows.
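The fix itself can be sketched in a few lines (an illustration, not our exact code; Unlit/Transparent is one of Unity’s built-in shaders, and using it drops features of the default UI shader such as color tinting and masking):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative fix-up: swap the default UI material for a cheaper,
// unlit one on every Image component in the scene.
public class CheapUIShader : MonoBehaviour
{
    void Awake()
    {
        Material cheap = new Material(Shader.Find("Unlit/Transparent"));
        foreach (Image image in FindObjectsOfType<Image>())
        {
            image.material = cheap;
        }
    }
}
```

Which cheaper shader is appropriate depends on what your UI actually uses; the point is that a one-line material swap is all it takes once you know where to look.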
To finish it up, we decided to add a Facebook share, Game Center leaderboard, and Chartboost SDK for ads. All of these have to be tested on the device, so there’s always some flirting going on with pitfall #2.
Our game was finally ready to be released.
Pitfall #3: Build Size
The size of our build was 40 MB. It was just too much. The game is extremely simple, so 20 MB would’ve been more reasonable.
When Unity imports a PNG texture, it increases its size. For example, an 84 kB PNG image takes up 384 kB of space when imported with mipmaps turned off, a factor of 4.5 in this case. Why is that so? “Only POT textures can be compressed to PVRTC format,” POT meaning power of two. So, does that mean you should add some extra transparent space to your texture just to round its dimensions up to the nearest power of two? That alone won’t work: “Cannot compress a non-square texture with Sprites to PVRTC format.” So the texture also needs to be square?
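To make the constraint concrete, the padded size can be computed like this (a sketch using Unity’s Mathf helpers; the class and method names are ours):

```csharp
using UnityEngine;

public static class TextureSizing
{
    // Side of the smallest square power-of-two texture that fits
    // a (width x height) image, as PVRTC on iOS requires.
    public static int SquarePotSide(int width, int height)
    {
        int side = Mathf.Max(width, height);
        return Mathf.NextPowerOfTwo(side);
    }

    // PVRTC at 4 bits per pixel is half a byte per pixel.
    public static int Pvrtc4SizeBytes(int side)
    {
        return side * side / 2;
    }
}
```

A 600×400 sprite, for example, has to be padded to a 1024×1024 square, which at 4 bpp comes out to 1024 × 1024 / 2 = 512 KiB, regardless of how much of that square is empty.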
We ended up packing all our textures into a square POT texture atlas, which reduced our build by 8.5 MB, down to 31.5 MB in total; there was nothing more we could do there. Unity itself accounts for about 20 MB of that, meaning our app alone was about 11.5 MB. Although the 20 MB Unity adds doesn’t seem like a big deal on small projects, it is somewhat embarrassing to see similar apps with much smaller builds. Also, if those 20 MB push you over the 100 MB limit, users won’t be able to install your app without Wi-Fi, so they can make a big difference in such cases. Later on, we found out that Unity has a feature called Sprite Packer, which can pack sprites into atlases, but its main purpose is to optimize performance on the graphics card, not to reduce build size. Neither the Sprite Packer docs nor the build size optimization docs mention it as a method for reducing build size, so using it for that purpose is more of a convenient hack than a solution.
And so our adventure came to an end. Still a bit frustrated with all the pitfalls we fell into, I believe debugging on the device was the main reason we decided to part ways with Unity. But before feeding Unity to the dogs, we needed to be fair and list all the positives and negatives.
The Good Stuff
- Renderer. Using scene graph and bounding boxes, Unity doesn’t render the parts of the scene which fall outside of the view frustum.
- Raycast touch handling. Using similar principles, Unity doesn’t have to check the array of all individual objects and see which one was tapped (like Cocos2D) but can eliminate most of them by performing group checks.
- Supports 3D.
- Powerful animations and animator system. High level of control can be achieved by using custom interpolation curves in the editor, and by defining states, transitions, and blend trees, you can blend from one animation to another while transitioning between animator states.
- Inspector. You can tweak certain parameters in the editor while the game is running, which can be quite helpful.
- Physics engine. Collision detection and response, material types, bounciness, friction, various types of joints, etc.
The Bad Stuff
- You can’t debug on the device. If you still don’t feel the pain of it, I suggest reading pitfall #2 again.
- It’s closed third-party code. If something doesn’t work as promised (such as state machine behaviors), you can only pray and hope that it’ll be fixed in the next version.
- Deeper parts have horrible documentation or none at all (setting up animator from code, using geometry shader, etc.), and since the source code is closed, you have nowhere to look for explanations.
- Inflated texture size when the texture is not a POT square.
- Project size increases by about 20 MB.
- UI toolkit is actually quite poor and has only the basic UI elements.
- No access to native capabilities and components. Alert view, picker view, table view, collection view, navigation stack, touch ID, etc.
- Deployable to iOS, Android, and many other platforms. It’s a plus, even though it never works out of the box; you’ll always have to write some iOS- and Android-specific code. Also, the simpler the game, the smaller the benefit of this feature.
- Editor-oriented. Editors can be a hassle, and it’s often easier to change and maintain things written in code. On the other hand, the editor allows designers to create animations right inside Unity. I consider this a minus since there are no easy in-code alternatives.
The truth is, no tool is perfect, and you’ll unavoidably get bitten every now and then. Today it’s the Image shader and POT textures; tomorrow it will be something else. The problem with Unity is that its documentation fails to mention these things. Cocos2D is far from perfect, but it is open source, which means that when you do get bitten, you still have the power to find out what happened and fix it.
Don’t get me wrong, Unity is a great tool when used for the right purpose. I just think people have gotten into the habit of using it for anything, without thinking it through or even realizing that there are better options for specific cases.
When everything is taken into account, whether to go with Unity or not mostly depends on the type of game you’re building. If you’re building a 3D game, Unity is the way to go. If you’re building a complicated game with the intention to deploy on both iOS and Android, you can benefit from Unity, but that alone is not a reason to go with it, at least in my opinion. Writing in Unity and deploying to iOS and Android is not twice as fast as building two apps. With Unity, you’ll spend a lot of time writing capabilities that already exist natively, falling into pitfalls you can’t fix, finding workarounds for some core functionalities, and ending up writing platform-specific code anyway. Maybe it will be faster than writing two separate apps, but by how much? 20 percent to 50 percent? I guess it depends on your case and if it’s worth the trouble.
For anything simpler, I really see no reason why Unity should be a primary option for mobile. Most of the positive stuff is not that important in the 2D world, while the negative aspects can be extremely frustrating. I see no reason to abandon all the native components and capabilities, depend on closed third-party code, and go through some of the worst periods of your debugging life, with an increased app size as a bonus. Cocos2D is a good alternative, and it’s even possible to use it in combination with UIKit as a powerful menu builder.
And Then We Tried the Native Approach
We did not program in Swift back then. We went through Unity before even trying to build natively. We didn’t know any better and thought that Unity was the best it could get. We were so wrong. Oh, the beauty of the navigation controller handling your navigation stack, pushing and popping your view controllers with custom transitions that are easily implemented. The pure joy of building and debugging the application on a device without any intermediate steps. And breakpoints! Sweet breakpoints, how I missed them. The Cocos2D animation system is so beautiful and easy to work with. It is far easier for me to create a simple animation in Cocos2D from code than to do it either way in Unity. After experiencing all this simplicity, I don’t think I’ll be going back to Unity any time soon.
What do you think? What are your experiences with Unity?
UPDATE: This article was updated after feedback we gathered from the community.