The aim of the Unity project was to reproduce tombola’s existing mobile app for iOS, Android and Windows Phone 8 devices using the Unity engine. The hope was that it would bring with it some additional benefits, such as speed of development, improved memory management and performance gains. Over the course of about a year we created a framework that enabled quick and flexible production of existing and future games, along with 5 of our most popular bingo games and all existing features of the live version of the AIR app at the time. In the interest of brevity I’ll give a quick overview of the app architecture, focusing on the Unity features rather than low-level technical details.
Anatomy of the App
One of the first problems we came up against was how to handle the move from vector assets used in all of our desktop and mobile app games to flat image files, without losing the high quality of vector rendering and also keeping memory usage acceptable. Our solution was to create image atlases for each game, at a range of sizes for predefined resolutions. This way, we limit the size of the assets to be close to the minimum required for the screen-space they will use. We also defined slicing values for the assets to further reduce their size, where possible, and keep rounded corners uniform across devices.
To do this we leveraged our team’s experience with Flash technologies to build a tool in AIR that imports the assets from a SWF file and allows us to set the slicing and physical on-screen size by either width or height. The tool then resizes and exports each asset at the correct size for each of the 6 preset dpi values. It also adds the correct slicing value for each asset, in pixels, to the slicing JSON file for each dpi.
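To give a feel for what a per-dpi slicing file might look like, here is a minimal sketch; the asset names, field names and exact structure are illustrative assumptions, not the real file format:

```json
{
  "dpi": 320,
  "assets": {
    "card_background": { "left": 12, "right": 12, "top": 12, "bottom": 12 },
    "button_primary":  { "left": 8,  "right": 8,  "top": 8,  "bottom": 8 }
  }
}
```

Storing the border values per dpi keeps rounded corners a consistent physical size regardless of the resolution the atlas was exported at.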
AssetBundles are a fantastic feature of the Unity engine. In short, they are compressed bundles of assets that can contain any asset type recognized by Unity. They are cached and versioned, with new versions being downloaded from a URL only when they are not available in the cache.
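A minimal sketch of the download-or-cache flow, using the Unity 4.x-era `WWW.LoadFromCacheOrDownload` API that was current at the time (the URL, version number and asset name here are hypothetical):

```csharp
using System.Collections;
using UnityEngine;

public class AtlasLoader : MonoBehaviour
{
    // Hypothetical bundle URL and version number, for illustration only.
    public string bundleUrl = "https://example.com/bundles/cinco_atlas_320.unity3d";
    public int version = 3;

    public IEnumerator LoadAtlasBundle()
    {
        // Serves the cached copy if this version is already on disk,
        // otherwise downloads and caches it.
        using (WWW www = WWW.LoadFromCacheOrDownload(bundleUrl, version))
        {
            yield return www;
            if (!string.IsNullOrEmpty(www.error))
            {
                Debug.LogError(www.error);
                yield break;
            }

            AssetBundle bundle = www.assetBundle;
            // Load a named asset, then free the bundle's bookkeeping memory
            // without destroying the assets already loaded from it.
            Texture2D atlas = bundle.Load("game_atlas", typeof(Texture2D)) as Texture2D;
            bundle.Unload(false);
        }
    }
}
```

Bumping the version number is what invalidates the cached copy and forces a fresh download.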
We created a separate Unity project to use as our Asset Bundler. The benefit of a separate project is that none of the assets are part of the app project, meaning loading and switching platforms in the app project is very quick, as there are no assets to re-import. Atlases and game sounds are fairly static and change rarely. We wrote a script to create the atlases, producing an atlas for each game at each dpi for both phone and tablet screens, and including the slicing data in the AssetBundle.
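The bundler-side build step can be sketched with the Unity 4.x-era `BuildPipeline.BuildAssetBundle` API; the menu item, asset paths and output path below are illustrative assumptions, not our actual script:

```csharp
using UnityEditor;
using UnityEngine;

public static class AtlasBundleBuilder
{
    // Hypothetical editor build step: pack one game's atlas texture and its
    // slicing JSON (imported as a TextAsset) into a single bundle per dpi.
    [MenuItem("Tools/Build Atlas Bundles")]
    public static void Build()
    {
        Object atlas = AssetDatabase.LoadMainAssetAtPath("Assets/Atlases/cinco_320.png");
        Object slicing = AssetDatabase.LoadMainAssetAtPath("Assets/Atlases/cinco_320_slicing.json");

        BuildPipeline.BuildAssetBundle(
            atlas,                          // main asset
            new[] { atlas, slicing },       // everything packed into the bundle
            "Bundles/cinco_atlas_320.unity3d",
            BuildAssetBundleOptions.CollectDependencies,
            BuildTarget.Android);
    }
}
```

In practice a script like this would loop over every game, dpi and build target rather than building a single bundle.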
The benefits of this approach are:
- Assets are sized for the screen, which means they are as sharp as they can be without the memory cost of downsizing large assets.
- We could control the atlases externally, allowing us to re-skin games at will (e.g. for a promotion, seasonal skin or even to appear as a different game).
- We could localize assets as well as text externally.
The Unity app used a ‘Model-View-Controller’ style architecture, separating data and display logic. The state manager was the part of this architecture that controlled the state of the app, including all data values as well as the current viewstate(s), which tell us which views should and should not be shown. It also controlled the flow of data between the game servers/web APIs and the views, acting as a mediator between the two. The idea behind the state manager was to create a single, separate codebase that wasn’t dependent on the display for testing. It was a completely separate C# project that we compiled to a DLL for inclusion in the Unity project, and it is worthy of its own blog post.
The data models used a data binding system that made it easy to link Unity’s GameObjects to property changes in those models using a StateManagerBinding component that we created. The views could then take that data and perform any display logic required. A simple example of this in action is the jackpot amount label. The GameObject has a StateManagerBinding component in which we specify the view model, the property to bind to in that view model, an optional ViewConverter and a list of commands. When the StateManager changes the jackpot amount, every GameObject with a StateManagerBinding bound to that property calls its commands. In this case the ViewConverter converts the value to a formatted currency string, and the command is the OnTextChanged function, in a script attached to the same GameObject, which sets the text value of the label to the currency string.
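A heavily simplified sketch of the idea, collapsing the ViewConverter and command into one handler; the real StateManagerBinding was a generic, editor-configurable component, and the class and event names here are hypothetical:

```csharp
using UnityEngine;

// Hypothetical view model: raises an event whenever the jackpot changes.
public class JackpotViewModel
{
    public event System.Action<decimal> JackpotChanged;
    private decimal jackpot;

    public decimal Jackpot
    {
        get { return jackpot; }
        set
        {
            jackpot = value;
            if (JackpotChanged != null) JackpotChanged(value);
        }
    }
}

// Stand-in for the binding: subscribes the label to the model property.
public class JackpotLabel : MonoBehaviour
{
    public JackpotViewModel viewModel;
    private TextMesh label;

    void Awake()
    {
        label = GetComponent<TextMesh>();
        viewModel.JackpotChanged += OnTextChanged;
    }

    // Plays the role of both the ViewConverter (decimal -> currency string)
    // and the bound command that writes the text.
    void OnTextChanged(decimal amount)
    {
        label.text = string.Format("£{0:N2}", amount);
    }
}
```

The point of the real component was that this wiring was done in the inspector rather than in code, so designers could bind any GameObject to any model property.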
One of the great features of the StateManager is that it maintains game socket connections and updates ViewModels for all games, even when they are not visible to the user. So although we would only ever keep assets for two games in memory at a time, the app was capable of rebuilding the state of each game precisely when it was subsequently re-launched, even down to button selections. This meant all games could run concurrently, and all we had to worry about was loading and unloading assets to keep memory usage stable.
Within the app we required some native interaction, most notably with web views. We used web views to display sections relating to players’ accounts, including registration, as well as to present web-based games in exactly the same way as those within the app. Other extensions included reporting the on-screen keyboard height (TouchScreenKeyboard.area in the Unity API works only for iOS), loading spinners, pop-up dialogue boxes and overriding the back button (Android). We tried some third-party plug-ins, but nothing covered all of our needs, so all native extensions were created by our development team.
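To illustrate the shape of one such extension, here is a sketch of querying an Android plug-in for the keyboard height via Unity’s `AndroidJavaClass` bridge; the Java class and method names are hypothetical, not our actual plug-in:

```csharp
using UnityEngine;

public static class KeyboardHeight
{
    public static int GetPixels()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // Hypothetical plug-in class shipped in the Android build; the Java
        // side would measure the visible display frame to infer the height.
        using (var plugin = new AndroidJavaClass("com.example.unityplugins.Keyboard"))
        {
            return plugin.CallStatic<int>("getKeyboardHeight");
        }
#else
        // On iOS the built-in API already reports the keyboard area.
        return (int)TouchScreenKeyboard.area.height;
#endif
    }
}
```

Views could then shrink or scroll by this amount whenever an input field had focus.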
Targeting all screen sizes
The new Unity UI system was [finally] released in version 4.6 of the engine. Until then, we’d been building our app in exactly the same way, but using the then-ubiquitous NGUI code library. When 4.6 was released, we began to port our framework over to Unity’s UI solution. We defined the layout of every game in the same way: using layout prefabs. The game layouts are simply rectangular regions on the screen that the views anchor to. Each game requires 3 layout prefabs: phone (portrait only), portrait tablet and landscape tablet. Each layout is built graphically in the editor from GameObjects with a RectTransform component. We used Unity’s new UI components, such as LayoutGroups, along with a few helper components of our own, such as one that sets a physical size for a layout’s width and/or height. Whenever the view state of a game changes (from “purchasing” to “inplay”, for example), our ViewManager component receives a trigger from the StateManagerBinding component. It then destroys any views that do not belong to the current view state and creates any missing views, setting their RectTransforms to match the appropriate RectTransforms in the layout prefab. This is illustrated in the image below. Creating the layouts graphically in the editor is very quick and makes matching the designs very straightforward.
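The layout-matching step amounts to copying one RectTransform’s placement onto another. A minimal sketch (the helper name is ours for illustration, not a real component from the project):

```csharp
using UnityEngine;

public static class LayoutUtil
{
    // Position a freshly created view over the region that a layout prefab
    // defines for it, by copying the region's RectTransform placement.
    public static void MatchLayout(RectTransform view, RectTransform layoutRegion)
    {
        view.SetParent(layoutRegion.parent, false);
        view.anchorMin        = layoutRegion.anchorMin;
        view.anchorMax        = layoutRegion.anchorMax;
        view.pivot            = layoutRegion.pivot;
        view.anchoredPosition = layoutRegion.anchoredPosition;
        view.sizeDelta        = layoutRegion.sizeDelta;
    }
}
```

Because the anchors are copied too, the view keeps responding to screen-size changes exactly as the layout prefab specified.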
Each game in the app was its own scene. Rather than going from scene to scene, the “Compendium” scene was in charge of loading game scenes into the current scene (and disposing of them when we no longer wanted them). The Unity API has a rather nice method, Application.LoadLevelAdditiveAsync(), which takes a scene name or index and loads that scene into the current one asynchronously, so that the game/app does not freeze while loading happens. In the interest of memory management, we limited the app to a maximum of two games loaded at one time, in addition to the Compendium scene, so that launching a third game would trigger unloading and disposal of the oldest active game. Each game also had its own camera(s), meaning camera settings were per game, depending on requirements, and layering could be controlled simply using each camera’s depth value, which controls render order.
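The additive load is a natural fit for a coroutine; a minimal sketch using the Unity 4.x-era API described above (SceneManager has since replaced it, and the unload step here is only gestured at):

```csharp
using System.Collections;
using UnityEngine;

public class GameLoader : MonoBehaviour
{
    // Loads a game scene into the running Compendium scene without
    // blocking the frame.
    public IEnumerator LoadGame(string sceneName)
    {
        AsyncOperation op = Application.LoadLevelAdditiveAsync(sceneName);
        yield return op; // resumes once the scene's objects exist in the hierarchy

        // At this point we would dispose of the oldest loaded game if two
        // games were already resident, keeping memory usage bounded.
    }
}
```

Yielding on the AsyncOperation is what lets the Compendium keep animating while the new game streams in.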
2D, or not to 2D - that is the question
Although traditionally thought of as a 3D engine, Unity has always been perfectly capable of producing great 2D content with a little work. Relatively recent additions to the engine and editor have greatly improved 2D content creation, and new features are still being added fairly regularly.
tombola’s games are exclusively 2D games in a ‘flash’ style. With the Unity project we wanted to see if we could remain true to the designs but also take advantage of the 3D engine at our disposal. In Cinco we added 3D cards with animated flip reveals and physically simulated chip animations for marking the cards. This wasn’t too far from the current app version, where a sprite sheet fakes a 3D card flip. For Bingo Roulette we used 3D chips again and included a 3D roulette wheel with a range of physically simulated ball animations. We used a (now legacy) Reflective/Specular shader with a reflective cubemap for the gold parts of the wheel to give a metallic, reflective appearance. In conjunction with this, we also developed our own shadow system. Realtime shadows are computationally expensive and don’t work on all Android devices. For our requirements, we could achieve the same effect at much lower cost, and on all devices, using a combination of a raycast from the light through the 3D object’s position onto the surface below and an animated mesh, skinned with a shadow texture and exported as part of the 3D object’s animation. The results looked great and performed very well.
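The raycast half of that shadow technique can be sketched as follows; the field names and the quad-based receiver are illustrative assumptions (our actual system drove an animated, pre-skinned mesh):

```csharp
using UnityEngine;

public class FakeShadow : MonoBehaviour
{
    public Transform lightSource;
    public Transform shadowQuad; // mesh skinned with a soft shadow texture

    void LateUpdate()
    {
        // Cast from the light through the object to find the surface below.
        Vector3 direction = (transform.position - lightSource.position).normalized;
        RaycastHit hit;
        if (Physics.Raycast(transform.position, direction, out hit))
        {
            // Small offset along the surface normal avoids z-fighting.
            shadowQuad.position = hit.point + hit.normal * 0.01f;
            shadowQuad.rotation = Quaternion.LookRotation(-hit.normal);
        }
    }
}
```

One raycast per shadow-casting object per frame is trivially cheap next to real-time shadow mapping, which is why it ran on every device we targeted.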
Performance is something that we’ve come to expect from the Unity engine. That’s not to say you don’t need to put the effort in to get the returns… Probably the most disappointing aspect of the new Unity UI system for us was the performance of the CanvasRenderer component. We found that frequent changes to graphic elements hit the framerate hard, and that this hit was proportional to the size of the graphic element on screen. There are a lot of ways to optimise this but no combination of Canvases and anti-aliasing settings produced acceptable performance for frequently animating 2D content. Our solution was to ditch all graphic elements. To do this we had to put a lot of work into creating a TextMesh-based text solution and creating our own SlicedMesh component – a Mesh with 16 vertices and 18 triangles, representing a sliced sprite (Unity Sprites can now be sliced so we would now use the Sprite and SpriteRenderer components).
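To make the 16-vertex/18-triangle layout concrete, here is a minimal sketch of generating such a sliced mesh; the parameters and UV handling are simplified assumptions rather than our actual SlicedMesh component:

```csharp
using UnityEngine;

public static class SlicedMeshBuilder
{
    // A 9-slice mesh: a 4x4 vertex grid (16 vertices) forming 9 quads
    // (18 triangles). The border rows/columns keep a fixed size while the
    // centre stretches to fill the requested width and height.
    public static Mesh Build(float width, float height, float border)
    {
        float[] xs = { 0f, border, width - border, width };
        float[] ys = { 0f, border, height - border, height };

        var vertices = new Vector3[16];
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                vertices[y * 4 + x] = new Vector3(xs[x], ys[y], 0f);

        var triangles = new int[9 * 6]; // 9 quads, 2 triangles each
        int t = 0;
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++)
            {
                int i = y * 4 + x;
                triangles[t++] = i;     triangles[t++] = i + 4; triangles[t++] = i + 1;
                triangles[t++] = i + 1; triangles[t++] = i + 4; triangles[t++] = i + 5;
            }

        var mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        return mesh; // UVs would be laid out the same way from the sprite's border
    }
}
```

Rendering this with a MeshRenderer sidesteps the CanvasRenderer entirely, which was the whole point of the exercise.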
Performance of the Unity app was tangibly better than that of the AIR or HTML5 offerings, and even allowed us to display two live games without limitation on screen at the same time, enabling a picture-in-picture feature (triggered with a pinch gesture) – even on older Android 2.3 devices, which we no longer support with our current HTML5 games.
The Unity editor is feature-rich and extensible, and one of its best features is the profiler. The profiler window was incredibly useful for profiling memory usage and identifying performance bottlenecks, even down to the individual methods taking the most time/resources in each frame.
We extended the editor in several places to ease and accelerate development of our games. This included custom sprite-selection windows for setting sprites from atlases, convenient UI for common components and additional context menu items. We also created a custom window for our Asset Bundler project, mentioned earlier.
Speed of development
More stuff we love
- RenderTargets and RenderTextures: Our general ticket rendering solution relied on RenderTextures. We also used RenderTextures to implement the game swiping and picture-in-picture games before Unity 4.6 was released.
- Coroutines: These are so versatile. We used them extensively, from controlling animations in code, to ensuring synchronization and waiting for long-running or background operations to finish executing before allowing logic flow to continue. For example, we would never want to show a game lobby until the visual assets for that game have been downloaded/loaded into memory. Another good example of when we used coroutines was to spread the rendering of a strip of ticket RenderTextures over multiple frames (one ticket per frame).
- Draw call batching. Each draw call to the graphics API has a significant performance overhead on the CPU. Batching meshes that share a material into a single mesh and single draw call is a widely practiced tactic for improving performance, especially on mobile devices. With Unity, batching is built in and happens after the visibility determination step, minimizing overdraw.
- The community: with 4.5 million registered developers, no matter what problem you’re having, you can almost guarantee that someone has already solved it and shared the solution(s).
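The frame-spreading coroutine trick mentioned above can be sketched like this; RenderTicket is a hypothetical helper standing in for our actual ticket-rendering code:

```csharp
using System.Collections;
using UnityEngine;

public class TicketStripRenderer : MonoBehaviour
{
    // Render one ticket's RenderTexture per frame instead of the whole
    // strip in a single frame, trading a few frames of latency for a
    // steady framerate.
    public IEnumerator RenderStrip(RenderTexture[] tickets)
    {
        for (int i = 0; i < tickets.Length; i++)
        {
            RenderTicket(tickets[i]);
            yield return null; // hand control back until the next frame
        }
    }

    // Hypothetical: point an off-screen camera at the ticket and render it.
    void RenderTicket(RenderTexture target)
    {
        // camera.targetTexture = target; camera.Render();
    }
}
```

Started with StartCoroutine, this keeps each frame’s rendering cost bounded while the strip fills in over a handful of frames.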
One of the most exciting recent developments at Unity Technologies is the addition of a range of services tailored for use with the engine. In particular, we were looking forward to getting our hands on Unity Performance Reporting, Unity Analytics and Unity Cloud Build.
We did have a little play with Unity’s WebGL export option in the Unity 5 beta and I have to say we were very impressed with the results. Although prohibitively large for our needs right now, the look and performance were fantastic. Our roulette wheel lives on to this day in the form of a WebGL project “Coffee Roulette” that we use to decide who makes the round in the office.
Although the business took the decision not to use Unity for our products, the entire development team (a great mix of old hands and brand-new graduates) enjoyed working with Unity, and I think we proved that the engine is a perfectly viable platform, one that could even provide several improvements and otherwise unattainable features compared with the alternatives.