
Sachin Kotwani
Senior Product Manager
Elvis Sun
Software Engineer

Mobile app and game developers can use on-device machine learning in their apps to increase user engagement and grow revenue. We worked with game developer Halfbrick to train and implement a custom model that personalizes the in-game experience based on the player's skill level and session details, increasing interactions with a rewarded video ad unit by 36%. In this blog post we'll walk through how Halfbrick implemented this functionality and share a codelab for you to try it yourself.

Background

Every end user or gamer is different, and personalizing an app purely with hard-coded logic can quickly become complex and hard to scale. However, by taking all relevant signals into account, game developers can train a machine learning model to create a personalized experience for each user. Once trained, they can "ask" the model for the best recommendations for a particular user given a set of inputs (e.g. "Which item from the catalog should I offer to a user who has reached skill level 'pro', is on level 5, and has 1 life left?").

Custom ML models can be architected, tuned, and trained based on the inputs that you think are relevant to your decision, making the implementation specific to your use case and customers. If you are looking for a simpler personalization solution that doesn't require you to train your own model, and where the answer is unlikely to change multiple times per session (e.g. "What is the right frequency to offer a rewarded video to each user?"), we recommend using Firebase Remote Config's personalization feature.
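
For illustration, here's a minimal Kotlin sketch of reading a personalized value with Remote Config on Android; the parameter name rewarded_video_frequency is a hypothetical one you would define in the Firebase console:

```kotlin
import com.google.firebase.ktx.Firebase
import com.google.firebase.remoteconfig.ktx.remoteConfig

// Fetch and activate the latest Remote Config values, then read a
// personalized parameter. "rewarded_video_frequency" is a hypothetical
// parameter name defined in the Firebase console for this sketch.
fun fetchRewardedVideoFrequency(onResult: (Long) -> Unit) {
    val remoteConfig = Firebase.remoteConfig
    remoteConfig.fetchAndActivate().addOnCompleteListener {
        onResult(remoteConfig.getLong("rewarded_video_frequency"))
    }
}
```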

Image showing game from Halfbrick

We recently worked with Halfbrick on optimizing player rewards in Jetpack Joyride, one of their most popular games, with over 500M downloads. Between levels, players are presented with an option to obtain a digital good by watching a rewarded video ad. Our objective was to increase the conversion rate for this particular interaction. Before the study, the digital good offered in return for watching the ad was always selected at random. Using a custom ML model, we personalized which reward to offer based on inputs such as the gamer's skill level and information about the current session (e.g. why they died in the last round and which powerups they selected), and increased conversions by 36% in just the first iteration of the experiment. Once trained, the model runs fully on-device, so it doesn't require network requests to a cloud service and there are no per-inference costs.

Solution

ML workflows are generally anchored on the following components: a problem to solve, data to train the model, training of the model, model deployment, and client-side implementation. We have described the problem above, and will now cover the remaining steps. You can also follow along with more detailed implementation steps in the codelab.

Image showing how to collect the data necessary to train the model

Collect the data necessary to train the model

Halfbrick collected 129 different signals to train the model, including event timestamps, user skill levels, information about recent sessions, and other details about the user's gameplay style. This is where intuition about which factors could influence the outcome is very helpful (e.g. a user's skill level, point balance, or choice of avatar may be useful when predicting which powerups they'd prefer).

We can collect training data easily by sending events to Google Analytics for Firebase and then exporting them to BigQuery. BigQuery is Google's fully managed, serverless data warehouse that enables scalable analysis over petabytes of data. In our scenario, it makes it easy to summarize and transform the data to get it ready for model training, as well as combine it with other data sources our app might be using.
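
As a concrete example, here's a minimal Kotlin sketch of logging a gameplay event with Google Analytics for Firebase on Android; the event and parameter names are hypothetical stand-ins for the signals your game actually tracks:

```kotlin
import com.google.firebase.analytics.ktx.analytics
import com.google.firebase.analytics.ktx.logEvent
import com.google.firebase.ktx.Firebase

// Log a custom event each time a round ends. "round_end" and its
// parameters are hypothetical; log whichever signals you believe
// are predictive for your game.
fun logRoundEnd(skillLevel: String, level: Int, livesLeft: Int, causeOfDeath: String) {
    Firebase.analytics.logEvent("round_end") {
        param("skill_level", skillLevel)
        param("level", level.toLong())
        param("lives_left", livesLeft.toLong())
        param("cause_of_death", causeOfDeath)
    }
}
```

Once these events are flowing, enabling the BigQuery export in the Firebase console gives you the raw event tables to summarize into training examples.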

Train the model, evaluate its performance, and iterate

Training requires a model architecture (e.g. number of layers, types of layers, operations), data, and the actual training process. We picked a reference architecture implemented in TensorFlow, and iterated both on the data we used to train it and the architecture itself. In a production workflow we would retrain the model regularly with fresh data to ensure that it continues to behave optimally.

In our codelab we have abstracted away the process of refining the model architecture, so you can still implement this workflow without having much of a background in ML.

Deploy the latest version of the model

Traditional ML models that run in the cloud can be updated easily, just as you would update a web application or service. With on-device ML, we need a strategy to get the latest model onto the user's device. The first version can be bundled with the app, so that a model is ready to go as soon as the user installs it. Subsequent updates are made by deploying the models to Firebase ML. With a combination of Remote Config and Firebase ML, you can point users to the right model name (e.g. "active_model = my_model_v1.0.3"), and the app will then automatically download the latest available model.
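
Here's a minimal Kotlin sketch of that pattern, assuming an Android app using the firebase-ml-modeldownloader library and a Remote Config parameter named active_model (as in the example above):

```kotlin
import com.google.firebase.ktx.Firebase
import com.google.firebase.ml.modeldownloader.CustomModel
import com.google.firebase.ml.modeldownloader.CustomModelDownloadConditions
import com.google.firebase.ml.modeldownloader.DownloadType
import com.google.firebase.ml.modeldownloader.FirebaseModelDownloader
import com.google.firebase.remoteconfig.ktx.remoteConfig

// Read the active model name from Remote Config (e.g. "my_model_v1.0.3"),
// then ask Firebase ML for that model. LOCAL_MODEL_UPDATE_IN_BACKGROUND
// returns a cached copy immediately and fetches any update in the background.
fun downloadActiveModel(onReady: (CustomModel) -> Unit) {
    val modelName = Firebase.remoteConfig.getString("active_model")
    val conditions = CustomModelDownloadConditions.Builder().build()
    FirebaseModelDownloader.getInstance()
        .getModel(modelName, DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND, conditions)
        .addOnSuccessListener { model -> onReady(model) }
}
```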

Run the model on the device

Once the model is deployed on the device, we use the TensorFlow Lite runtime to "make inferences", or ask questions of the model based on a set of inputs. You can think of this as asking, "What digital good should I show a user based on the following characteristics?" We configured our implementation so that a certain percentage of sessions are shown a random option, which helps us gather new data and keep the model fresh.
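
As an illustrative sketch, the Kotlin snippet below runs a downloaded TensorFlow Lite model and picks the highest-scoring reward, reserving a small share of sessions for a random choice to keep gathering fresh data. The six-feature input and four-reward output shapes are assumptions for this sketch, not Halfbrick's actual model:

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File
import kotlin.random.Random

// Score four candidate rewards from six input features and return the
// index of the reward to show. Shapes are assumptions for this sketch.
fun pickReward(modelFile: File, features: FloatArray, explorationRate: Float = 0.1f): Int {
    // Occasionally show a random reward so we keep collecting training
    // data instead of only exploiting the current model.
    if (Random.nextFloat() < explorationRate) return Random.nextInt(4)

    // A production app would create the interpreter once and reuse it.
    val interpreter = Interpreter(modelFile)
    val output = Array(1) { FloatArray(4) }
    interpreter.run(arrayOf(features), output)
    interpreter.close()

    // Return the index of the highest-scoring reward.
    return output[0].indices.maxByOrNull { output[0][it] } ?: 0
}
```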

Try it yourself!

We have created a codelab for you to easily follow and replicate these steps in your own app or game. In this implementation you will encounter a scenario similar to Halfbrick's, but we've reduced the number of input features to six to make it easier for you to replicate and learn from. You should start with these and iterate as you see fit.

We hope you can use this ML workflow to build smarter and more engaging apps for your users and grow your business!

Stephen McDonald
Developer Relations Engineer, Google Pay

Back in 2019 we launched Firebase Extensions: pre-packaged solutions that save you time by providing extended functionality to your Firebase apps, without the need to research, write, or debug code on your own. Since then, a ton of extensions have been added to the platform, covering a wide range of features, from email triggers and text messaging to image resizing, translation, and much more.

Google Pay Firebase Extension

We're now excited to have launched a new Google Pay Firebase Extension at Firebase Summit 2021, which brings the ease of the Google Pay API to your Firebase apps.

Image that says Make payments with Google Pay

With the Google Pay Firebase Extension, your app can accept payments from Google Pay users, using one or more of the many supported Payment Service Providers (or PSPs), without the need to invoke their individual APIs.

With the extension installed, your app can pass a payment token from the Google Pay API to your Cloud Firestore database. The extension listens for a request written to the path defined during installation and sends the request to the PSP's API. It then writes the response back to the same Firestore node, which your app can listen to and respond to in real time.
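
As a rough Kotlin sketch of that flow, assuming the extension was installed with a Firestore collection named payments (the actual path and field names depend on your installation; check the extension's documentation for the exact schema):

```kotlin
import com.google.firebase.firestore.ktx.firestore
import com.google.firebase.ktx.Firebase

// Write a payment request and listen for the PSP's response.
// "payments" and the field names are assumptions for this sketch;
// use the path and schema configured when you installed the extension.
fun chargeWithGooglePay(paymentToken: String, amountCents: Long) {
    val request = mapOf(
        "paymentToken" to paymentToken,  // token returned by the Google Pay API
        "amount" to amountCents
    )
    Firebase.firestore.collection("payments")
        .add(request)
        .addOnSuccessListener { ref ->
            // The extension writes the PSP's response back to this document.
            ref.addSnapshotListener { snapshot, error ->
                if (error != null || snapshot == null) return@addSnapshotListener
                if (snapshot.contains("response")) {
                    // Inspect the PSP response and update the UI accordingly.
                }
            }
        }
}
```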

Open Source

Like all Firebase Extensions, the Google Pay Firebase Extension is entirely open source, so you can modify the code yourself to change the functionality as you see fit, or even contribute your changes back via pull requests - the sky's the limit.

Summing it up

Whether you're new to Google Pay or Firebase, or an existing user of either, the new Google Pay extension is designed to save you even more time and effort when integrating Google Pay and any number of Payment Service Providers with your application.

Get started with the Google Pay extension today.