A. Introduction

Last updated: 30-05-2025

Learn how to use AppNava to allow everyone in your organization to get the business data they need without bothering technical teams.


1. What is AppNava?

AppNava was founded in 2019 to make apps and games richer and happier. We are a team of four tech and mobile data experts who aspire to bring a revolution to the mobile and digital world. Data-driven solutions and machine learning just run in our blood! This team of four excels at modelling complex user behaviours for mobile apps and predicting the future of your customers. With our experience in data science and engineering, we are doing our best to make you thrive. After a year of model development, we are happy to be there for mobile apps and games with data-driven insights.


2. What makes AppNava different?

Know Your Players

Understand and analyze the what, when and why of your players' behaviour. Understand the past; what happened and why! At this stage, machine learning algorithms start being trained.

Predict Next Behaviour

Start predicting how your players will behave in the future, even before they know it themselves. Our machine learning algorithms predict player behavior. AppNava allows you to understand your players much more deeply.

Take Right Actions

Everyone is different in many ways: tastes, lifestyles, expectations… Showing the same offer to everybody does not make sense. Personalize your campaigns and your game based on this knowledge.


3. How AppNava Works

Connect to the largest mobile ecosystem with integrated partners globally. Just select your analytics partners and measurement tools (AWS, Google Analytics, Game Analytics, etc.). Then, on the receiving side, choose the API that works for you to get access to your data at all times. Once the API connection is complete, the mapping system selects the required features/variables from your raw data. An 80% match is enough to continue to the next steps.

For the next steps, you can go to the How it Works page to learn more!


4. Important Metrics and KPIs

In this section you can learn about our resources and how to use them.

ROI and ROAS Calculator

Learn what ROI and ROAS are and how to use the calculator!

ROI definition

ROI stands for “Return on Investment.” Essentially, it’s a measurement of the return on a particular investment, relative to the cost of the investment. In other words, it’s a ratio between your net profit and investment. There’s a simple formula that you can use to work out the ROI: ROI = (Net Profit / Net Spend) x 100

ROAS definition

ROAS stands for “Return on Ad Spend.” ROAS can help you determine the efficiency of your online advertising campaigns by calculating the amount of money your business earns for each pound it spends on advertising. You can use the following formula to calculate ROAS: ROAS = (Revenue Generated from Ads / Advertising Spend) x 100.
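As a quick sanity check on the two formulas above, here is a minimal sketch of both calculations; all figures are invented for illustration:

# Minimal sketch of the ROI and ROAS formulas above; all figures are invented examples.
def roi(net_profit: float, net_spend: float) -> float:
    """ROI = (Net Profit / Net Spend) x 100"""
    return net_profit / net_spend * 100

def roas(ad_revenue: float, ad_spend: float) -> float:
    """ROAS = (Revenue Generated from Ads / Advertising Spend) x 100"""
    return ad_revenue / ad_spend * 100

print(roi(net_profit=2_000, net_spend=5_000))   # 40.0  -> 40% return on investment
print(roas(ad_revenue=8_000, ad_spend=2_000))   # 400.0 -> 4 pounds earned per pound of ad spend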

The difference between ROI and ROAS

When it comes to ROI vs. ROAS, there are a couple of major differences. Firstly, ROAS looks at revenue, rather than profit. Secondly, ROAS only considers direct spend, rather than other costs associated with your online campaign. In a nutshell, ROAS is the best metric to look at for determining whether your ads are effective at generating clicks, impressions, and revenue. However, unlike ROI, it won’t tell you whether your paid advertising effort is actually profitable for the company.

Should I use ROI or ROAS?

When you consider ROI vs. ROAS, it’s important to remember that it isn’t an either/or situation. Whereas ROI can help you understand long-term profitability, ROAS may be more suited to optimising short-term strategy. To craft an effective digital marketing campaign, you’ll need to utilise both the ROI and ROAS formulas. ROI provides you with insight into the overall profitability of your advertising campaign, while ROAS can be used to identify specific strategies that can help you improve your online marketing efforts and generate clicks and revenue.

Tip

You can go to this page and try our ROI and ROAS Calculator!

Do you want to know which fields you need to enter into the calculator?

Read below for more info!

LTV

LTV is lifetime value. This is the estimated value that you expect to extract from the player. It makes more sense to couple this lifetime value with a number of days during which the user interacts with the product (the game in this case). This enables us to study whether we are on the right track, and to reason about the product. So, LTV365 is the expected (read average) value or revenue we get from a player after 365 days or 1 year after coming into contact with the game for the first time.

Retention

Retention is a measure that tells us whether players keep interacting with the game. Day 1 retention (D1) is the percentage of players who returned to the game the day after launching it for the first time (D0). The higher the retention, the better, because it means that players keep coming back, so there is something about the product/game/app that motivated them to return.

ARPDAU

ARPDAU is average revenue per daily active user. By itself, this metric doesn’t say much. An ARPDAU of 2€ says very little on its own. If you have a restaurant and DAU (daily active users) is the number of customers that walk in, a 2€ ARPDAU might leave you bankrupt. For a mobile game, if you create a compelling title where on average you get 2€ per daily active user, then you might have struck gold.

CPI

CPI is the cost per install. Lately, User Acquisition (UA) has become an integral part of the business model of scaling F2P mobile games. And with more and more publishers paying to acquire users, the market is getting more and more competitive, and the cost of acquiring a new player is going through the roof.

UA

So User Acquisition works just like old-fashioned advertisements. You pay upfront to get customers/players/users walking in, and hopefully those that convert (end up buying something) will make up for the price of the advertisement and yield some extra revenue. So it’s an upfront investment, and in order to minimize the risks associated with this investment, we have to study and predict how we will make the money back with a profit.
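Tying these metrics together, the back-of-the-envelope check described above can be sketched as follows; every number is an invented example, not AppNava output:

# Back-of-the-envelope UA check using the metrics defined above; every number is an
# invented example, not AppNava output.
d0_players = 10_000          # players who launched the game for the first time
d1_players = 3_500           # players who came back the next day
d1_retention = d1_players / d0_players * 100          # Day 1 retention, in percent

daily_revenue = 4_000.0      # revenue for one day, in EUR
dau = 2_000                  # daily active users that day
arpdau = daily_revenue / dau                          # average revenue per daily active user

ltv_365 = 3.80               # expected revenue per player after 365 days
cpi = 2.50                   # cost to acquire one install

print(f"D1 retention: {d1_retention:.1f}%")           # 35.0%
print(f"ARPDAU: {arpdau:.2f} EUR")                    # 2.00 EUR
# UA is only worth the upfront spend if the expected lifetime value exceeds the cost per install.
print("UA profitable:", ltv_365 > cpi)                # True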

B. Getting Started

Welcome to AppNava! This guide will help you get started quickly so you can begin exploring business data, setting up dashboards, and sharing insights across your team.


1. Create an Account

  • Go to appnava.com and open the Sign Up page
  • Fill in the necessary information and click Sign Up
  • An activation email will be sent to you. Click Activate Account and you are done.

    2. Company Setup

    To use AppNava’s prediction platform, you need to be associated with a company. You can either:

    • Set up your own company (if you're the first from your team)
    • Join an existing company (if you’ve been invited)
    To set up your own company:

    1. Go to the Company Profile tab on the Profile page
    2. Click the Start Your Company! button
    3. Fill in the necessary information and click Save

    To join an existing company:

    1. Copy your token from the email sent by your company admin
    2. Go to the Company Profile tab
    3. Click Join a Company!
    4. Paste your token and click Join


    3. Add Your Game/App

    BigQuery

    1. Go to the Products page under the Company tab.
    2. Click Add Product and choose BigQuery as your data location.
    3. To add your BigQuery Project ID, open the Google BigQuery Console.
    4. Find your Project ID as shown in the screenshot.
    5. Use the format project-name-123456 (lowercase, hyphenated); a quick format check is sketched below.
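A minimal sketch for that check, assuming the general GCP project-ID rules (6-30 characters, lowercase letters, digits and hyphens, starting with a letter and not ending with a hyphen):

# Quick format check for a BigQuery Project ID before pasting it into the Add Product form.
import re

PROJECT_ID_PATTERN = re.compile(r"^[a-z][a-z0-9-]{4,28}[a-z0-9]$")

def looks_like_project_id(value: str) -> bool:
    return bool(PROJECT_ID_PATTERN.match(value))

print(looks_like_project_id("project-name-123456"))  # True
print(looks_like_project_id("Project_Name"))         # False (uppercase and underscore)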

    Backend Integration (Optional but Recommended)

    If you plan to use prediction with Firebase or Satori, it's strongly advised to provide relevant credentials:

    • Firebase: Fill out Android/iOS Firebase App ID and Secret ID under their respective tabs in the form.
    • Satori: Provide the Server Key and Server URL under their respective tabs in the form.
    AWS

    1. Go to the Products page under the Company tab.
    2. Click Add Product and choose AWS as your data location.
    3. In the form, you will be asked to enter the S3 Bucket Name.
    4. You can find this in your AWS S3 Console.

    Backend Integration (Optional but Recommended)

    If you plan to use prediction with Firebase or Satori, it's strongly advised to provide relevant credentials:

    • Firebase: Fill out Android/iOS Firebase App ID and Secret ID under their respective tabs in the form.
    • Satori: Provide the Server Key and Server URL under their respective tabs in the form.
    Snowflake

    1. Go to the Products page under the Company tab.
    2. Select Snowflake as your data location and provide the full table name (e.g., DB.SCHEMA.TABLE).
    3. Log into your Snowflake UI to locate the correct table.
    Redshift

    1. Go to the Products page under the Company tab.
    2. Choose Redshift and enter the Redshift Table Name you wish to use.
    3. You can retrieve this from the Redshift Console.


    4. Invite Your Team Members

    1. Go to the Users page on the dashboard and click Invite User
    2. Enter the email address of the person you want to invite and click Send Invitation

    C. Setup Machine Learning Models

    1. Go to the Data Operations page and click the Push Data & Choose Models button
    2. Check the models you want to train and assign a tag to the dataset
    3. Click Done; each selected model will be added as a new entry under the Data Preparation section
      • We will be notified and start working on your models
      • When your models are ready, you will be notified by e-mail

      Note

      Usually a dataset of 1,000,000 events and 30,000 distinct users is needed for the models to work. We will find the time point after which these numbers are met and update the associated columns.
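If your data is already in a GA4 BigQuery export, a rough way to check whether it meets this guideline is a quick count query. The dataset and table names below follow the standard GA4 export convention (analytics_<property_id>.events_*) and are assumptions; adjust them to your own project:

# Rough check against the ~1,000,000 events / 30,000 distinct users guideline.
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")  # placeholder project ID

query = """
    SELECT
      COUNT(*) AS total_events,
      COUNT(DISTINCT user_pseudo_id) AS distinct_users
    FROM `your-project-id.analytics_123456789.events_*`
"""

row = list(client.query(query).result())[0]
print(f"events: {row.total_events}, distinct users: {row.distinct_users}")
print("meets guideline:", row.total_events >= 1_000_000 and row.distinct_users >= 30_000)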

    D. Setup Real-Time Predictions

    There are three ways to carry out predictions:

    • API Call
    • Scheduled Prediction
    • Instant Prediction
  • After the model has been trained, you will be able to see the ready-to-use models on the Predictions page
  • Choosing API Call presents three options:

    • Direct Response to Game/App: The API can be invoked directly inside the game client and the results can be collected without any additional integration.
    • Through Firebase: Requires Firebase measurement API integration.
    • Through Satori: Requires Satori measurement API integration.
    1. The user can create a prediction by clicking the Predictions tab, navigating to the Predictions page from the sidebar, and choosing the API Call option.
    2. The user can then select the “Direct Response to Game/App” option.
    3. Choosing Direct Response to Game/App prompts you to fill the following fields:
      • Prediction Name: A unique name for your prediction.
      • Model Name: A selectable list of models that you have trained.
      • Prediction Frequency: The time constraint for the prediction that you will perform, e.g., First Session.
      • Conversion Value: A checkbox that allows receiving numeric values for non-numeric results, for example in the case of Churn models.
      • Bulk Prediction: A checkbox to specify if the prediction should be done for all the daily users.
    4. After clicking Next, you will see the following modal that will contain a sample curl request that can be adjusted accordingly.
    5. Finally, clicking the Create Prediction button will redirect the user back to the Predictions page and create an entry in the table with the Prediction Type as API Call: Direct Access.
    6. For API Calls, the action tab will have a button to display the generated embed code as follows:
    7. This curl request can be used to hit our endpoint and retrieve the results directly inside the game client or from other sources.
    8. Below are examples of both successful and error responses that your application may receive from the API after making a prediction request. Understanding these formats will help you handle responses programmatically and troubleshoot issues effectively.

      API Response (Successful Responses)

      Classification Models

      
      {
        "success": true,
        "message": "Prediction succeeded",
        "data": {
          "prediction_result": "churn",
          "prediction_probability_for_positive_class": 0.85,
          "prediction_probability_for_negative_class": 0.15,
          "probability_group": "high"
        }
      }
      
      

      Regression Models

      
      {
        "success": true,
        "message": "Prediction succeeded",
        "data": {
          "prediction_result": 42.5,
          "prediction_probability": 42.5
        }
      }
      
      

      Notes:

      • All predictions are automatically saved to BigQuery for historical tracking.
      • The API supports both classification and regression models.
      • The positive class probability denotes the probability of the outcome the model predicts for; in the case of a churn model, the positive class is Churn and the negative class is the opposite.
      • prediction_probability_for_tertiary_class will additionally be sent, but only for the LTV C model, since it is a multi-class classification model.
      • prediction_result will change depending on the model, e.g., a churn model will give a result of either churn or not-churn, and a subscriber model will give a result of subscriber or non-subscriber.
      • Probability groups for classification models are categorized as follows:
        • Low: ≤ 0.39
        • Mid: 0.40 - 0.60
        • High: > 0.60
      • As mentioned above, probability_group will be either “high”, “mid”, or “low” and depends on the positive class probability. It is assigned automatically for general evaluation according to preset conditions and is an estimate. In regression models, prediction_result is evaluated against the average LTV and serves a similar purpose.
      • Individual users are limited to 20 predictions.
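The probability groups above are simple to re-derive client-side from the raw probability if you need the grouping outside the API response. A minimal sketch (the function name is illustrative, not part of the AppNava API):

def probability_group(p_positive: float) -> str:
    """Map a positive-class probability to the documented groups:
    low <= 0.39, mid 0.40-0.60, high > 0.60."""
    if p_positive <= 0.39:
        return "low"
    if p_positive <= 0.60:
        return "mid"
    return "high"

# Example, using the classification response shown above:
print(probability_group(0.85))  # -> "high"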

      API Response (Error Responses)

      The API returns different error response formats depending on the error type.

      No Data Available
      
      {
        "success": false,
        "error_code": "NO_DATA",
        "message": "No data available for user: user123",
        "user_id": "user123"
      }
      
      

      Too Many Predictions

      
      {
        "success": false,
        "error_code": "SEEN_TOO_MANY",
        "message": "User has exceeded maximum prediction limit",
        "user_id": "user123"
      }
      
      

      No Users Found

      
      {
        "success": false,
        "error_code": "NO_USERS_FOUND",
        "message": "No users found matching the criteria",
        "user_id": "N.A (Bulk Pred)"
      }
      
      

      Common Mistakes:

      • Often the curl request is not constructed properly inside the game client, and hence the request fails.
      • The --data-raw option specifies the data to be sent to the endpoint as a JSON object; make sure the JSON is properly formatted and valid.
      • Booleans should be lowercase in JSON, e.g., true is valid but True is not.
      • The JSON should be sent in the body of an HTTP request and should not be nested inside another body key.
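If you call the endpoint from code rather than curl, the same rules apply. Below is a rough sketch of a well-formed request and basic response handling; the endpoint URL and payload field names are placeholders, so copy the exact values from the sample curl request generated on the Predictions page:

# Rough sketch of making the prediction request from code instead of curl.
# The endpoint URL and payload fields are placeholders, not the real AppNava API.
import requests

ENDPOINT = "https://example.appnava.com/predict"  # placeholder; use your generated endpoint
payload = {
    "prediction_name": "my-churn-prediction",  # illustrative field names
    "user_pseudo_id": "user123",
    "bulk_prediction": False,  # booleans stay lowercase (true/false) once serialized to JSON
}

# json= sends the payload as the request body (not nested under another key)
# and sets the Content-Type header to application/json.
response = requests.post(ENDPOINT, json=payload, timeout=30)
body = response.json()

if body.get("success"):
    print("result:", body["data"]["prediction_result"])
else:
    # error_code values documented above: NO_DATA, SEEN_TOO_MANY, NO_USERS_FOUND
    print("prediction failed:", body.get("error_code"), "-", body.get("message"))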

      Tip

      In addition to using the Direct Response to Game/App method, you can also integrate predictions via third-party platforms. Currently, two integration options are supported: Satori and Firebase. These options require specific API credentials and setup steps to enable communication between the game and the prediction engine.

    1. The user can create a prediction by clicking the Predictions tab, navigating to the Predictions page from the sidebar, and choosing the API Call option.
    2. The user can then select the “Through Firebase” option.
    3. If the user has not set both the Android/iOS App ID and Secret ID, they will not be allowed to choose the “Through Firebase” option and will be shown the following modal. They can choose to update the product and add the respective fields to enable the option:
    4. Assuming the option is enabled, they will be directed to the following modal. They can then fill out the standard fields and choose the respective model and prediction frequency.
    5. The user will then be prompted to finalize the prediction and shown a sample curl request that can be used to call the prediction API.
    6. After a successful creation, the user will be able to see their prediction API call as a row on the Predictions page.
    7. And that’s it! Your API is now primed to generate predictions and seamlessly send them to Firebase, unlocking powerful real-time processing and insights to drive your application to the next level!
    1. The user can create a prediction by clicking the Predictions tab, navigating to the Predictions page from the sidebar, and choosing the API Call option.
    2. The user can then select the “Through Satori” option.
    3. If the user has not set the Satori Server Key or the Satori Server URL, they will not be allowed to choose the “Through Satori” option and will be shown the following modal (Fig. 4). They can choose to update the product and add the respective fields to enable the option:
    4. Assuming the option is enabled, they will be directed to the following modal (Fig. 5). They can then fill out the standard fields and choose the respective model and prediction frequency.
    5. The user will then be prompted to finalize the prediction and shown a sample curl request that can be used to call the prediction API, as shown in Figure 6. It should be noted that the user_pseudo_id field here pertains to the nakama_id field of a user.
    6. After a successful creation, the user will be able to see their prediction API call as a row on the Predictions page, as shown in Figure 7.
    7. And that’s it! Your API is now primed to generate predictions and seamlessly send them to Satori, unlocking powerful real-time processing and insights to drive your application to the next level!

    Visualization of Results

    AppNava offers an elegant solution for visualizing your data. API Call and Scheduled Prediction requests can be visualized using the Prediction Results tab.

    The table and chart show an accumulation of daily predictions. Depending on the model (e.g., Churn, LTV), the columns will vary. The real results are also calculated per model and can be compared to the predicted results to evaluate the accuracy of the model.

    1. The user can create a prediction by clicking the Predictions tab, navigating to the Predictions page from the sidebar, and choosing the Scheduled Prediction option.
    2. This option allows users to schedule predictions for a specific duration that will run daily automatically. Upon clicking the option, a modal can be seen with the following fields:
      • Prediction Name: A unique name for the scheduled prediction.
      • Model Name: A selectable list of models that you have trained.
      • Prediction Frequency: The time constraint for the prediction that you will perform, e.g., First Session.
      • Start Date: When the scheduled prediction should start.
      • Schedule Date: The duration of the scheduled prediction in days, i.e., how long it should last.
    3. Lastly, after clicking Done, the user will be redirected to the Predictions page and a prediction will be created in the table with the status Scheduled Prediction.
    4. Visualization of Results: as with API Call predictions, scheduled prediction results can be visualized from the Prediction Results tab (see Visualization of Results above).


    1. Predictions can be done directly through the dashboard by clicking on the Predict Now button. Upon successful completion, the results will be visible in the table on the Trained Models tab as shown below.
    2. After clicking the Predict Now button, you will see a modal (a slight delay might occur while fetching daily users) to select the following options:
      • Trained Models: A list of selectable models that you have trained.
      • Prediction Selection: The duration of the prediction that you will perform: a choice between First Session and First Day.
      • User Id: A multi-selectable list of users on which the predictions will be performed.
    3. After selecting these fields, the Predict button can be clicked to start the prediction and upon successful completion the results should be available in the table below.

    Note

    The Bulk Prediction button is an experimental feature and is subject to change. Clicking this button performs predictions for all users of the day and stores the results in our database (results are not visible on the dashboard). The fields Trained Models and Prediction Selection are mandatory, while the User Id field is optional for this feature.

    E. Integration

    In this section you can learn how to configure your database connection. The steps below describe how AppNava works for Firebase / Google Cloud & Unity customers.

    Collect Data

    Allow us to pull your data

    • Go to Google Cloud IAM
    • The following e-mail addresses are entered from the menu that opens on the side:
    • We need at least three roles to be able to read from BigQuery; search for the roles using the search box:
      • BigQuery Data Viewer
      • BigQuery Resource Viewer
      • BigQuery User
    • Click the Save button

    Follow these steps to set up and link Google Analytics 4 with BigQuery for Firebase/Google Cloud & Unity customers.

    Step 1: Create a Google-APIs-Console project and enable BigQuery

    Note: You must be an Editor or above to create a Google-APIs-Console project and enable BigQuery.

    1. Log in to the Google Cloud Console.
    2. Create a new Google Cloud Console project or select an existing project.
    3. Navigate to the APIs table: Open the Navigation menu in the top-left corner, click APIs & Services, then click Library.
    4. Activate BigQuery: Under Google Cloud APIs, click BigQuery API. On the following page, click Enable.
    5. If prompted, review and agree to the Terms of Service.

    Step 2: Prepare your project for BigQuery Export

    You can export Google Analytics data to the BigQuery sandbox free of charge (sandbox limits apply).

    Learn more about upgrading from the sandbox and BigQuery pricing.

    Step 3: Link a Google Analytics 4 property to BigQuery

    After you complete the first two steps, you can enable BigQuery Export from Analytics Admin.

    BigQuery Export is subject to the same collection and configuration limits as Google Analytics. If you need higher limits, you can upgrade your property to 360.

    1. In Admin, under Product Links, click BigQuery Links. Note: The previous link opens to the last Analytics property you accessed. You can change the property using the property selector.
    2. You must be an Editor or above at the property level to link an Analytics property to BigQuery.
    3. You must also use an email address that has OWNER access to the BigQuery project (view Permissions below for detailed access requirements).
    4. Click Link.
    5. Click Choose a BigQuery project to display a list of projects for which you have access.
      • If you have linked Analytics and Firebase (or plan to), consider exporting to the same Cloud project, which will facilitate easier joins with other Firebase data.
    6. Select a project from the list, then click Confirm.
    7. Select a location for the data. (If your project already has a dataset for the Analytics property, you can't configure this option.)
    8. Click Next.
    9. Select Configure data streams and events to select which data streams to include with the export and specific events to exclude from the export. You can exclude events by either clicking Add to select from a list of existing events or by clicking Specify event by name to choose existing events by name or to specify event names that have yet to be collected on the property.
    10. Click Done.
    11. Select Include advertising identifiers for mobile app streams if you want to include advertising identifiers. Note: To ensure smooth functionality and avoid issues, make sure to enable Include Advertising Identifiers in Export in Firebase settings under Project > Settings > Integrations > BigQuery.
    12. Select either or both a Daily (once a day) or Streaming (continuous) export of data. For Analytics 360 properties, you may also select Fresh Daily.
    13. Click Next.
    14. Review your settings, then click Submit.

    Permissions

    Project getIamPolicy/setIamPolicy rights, Services get/enable rights. OWNER is a superset of these permissions.

    To create a BigQuery link the minimal permissions you need are:

    • resourcemanager.projects.get: To get the project
    • resourcemanager.projects.getIamPolicy: To get a list of permissions
    • resourcemanager.projects.setIamPolicy: To check if user has permission to create the link on this project
    • serviceusage.services.enable: To enable the BigQuery API
    • serviceusage.services.get: To check if the BigQuery API is enabled

    Verify the service account

    When you link Analytics and BigQuery, that process creates the following service account:

    firebase-measurement@system.gserviceaccount.com

    • Verify that the account has been added as a member of the project, and given the role of BigQuery User (roles/bigquery.user).
    • If you previously set up BigQuery Export to give your service account the Editor role for the Google Cloud project, you can reduce that role to BigQuery User. To change the role for the service account, you need to unlink and then relink Analytics to your BigQuery project. The first step is to unlink Analytics and BigQuery and remove the service account with the Editor role. Then, relink Analytics and BigQuery per the instructions above to create the new service account with the correct permission for the project.
    • After relinking, ensure that the Service Account has the Owner (bigquery.dataOwner) role on the existing export dataset. You can do this by viewing access policy of the dataset.
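If you prefer to check the dataset's access entries programmatically rather than through the console, a small sketch using the google-cloud-bigquery client (project and dataset names are placeholders):

# List which principals have access to the export dataset and with which role.
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")                   # placeholder
dataset = client.get_dataset("your-project-id.analytics_123456789")  # placeholder dataset

for entry in dataset.access_entries:
    # entity_id is e.g. a user or service-account e-mail; role is e.g. OWNER / WRITER / READER
    print(entry.role, entry.entity_type, entry.entity_id)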

    Change regions

    If you choose the wrong region and need to change it after you've created the link:

    1. Delete the link to BigQuery (see below).
    2. Backup the data to another dataset in BigQuery (move or copy).
    3. Delete the original dataset. Take note of the name: you'll need it in the next step.
    4. Create a new dataset with the same name as the dataset you just deleted, and select the location for the data.
    5. Share the new dataset with firebase-measurement@system.gserviceaccount.com and give the service account the BigQuery Data Owner role.
    6. Copy the backup data into the new dataset.
    7. Repeat the procedure above to create a new link to BigQuery.

    After changing the location, you'll have a gap in your data: streaming and daily exports of data will not process between deletion of the existing link and creation of the new link.

    Delete a link to BigQuery

    1. In Admin, under Product Links, click BigQuery Links. Note: The previous link opens to the last Analytics property you accessed. You can change the property using the property selector. You must be an Editor or above at the property level to delete a link to BigQuery.
    2. Click the row for the link.
    3. In the top right, click More > Delete.

    BigQuery Export limits

    • Standard GA4 properties have a BigQuery Export limit of 1 million events for Daily (batch) exports. There is no limit on the number of events for Streaming export. If your property consistently exceeds the export limit, the daily BigQuery export will be paused and previous days’ exports will not be reprocessed.
    • For Analytics 360 properties, the Fresh Daily export contains all data fields and columns understood to be in the daily export, including observed user attribution and Ad Impression data. Learn more about the Fresh Daily export.
    • Property editors and administrators will receive an email notification each time a property they manage exceeds the daily limit. That notification will indicate when their export will be paused if action is not taken. Additionally, if a standard property significantly exceeds the one-million-event daily limit, Analytics may pause daily exports immediately. If you receive a notification, please leverage the data-filtering options (data-stream export and event-exclusion) to decrease the volume of events exported each day and ensure the daily export continues to operate.
    • Learn more about the higher limits available with 360 properties.

    Data filtering

    You can exclude specific data streams and events from your export, either to limit the size of your export or to make sure you're exporting just the events you want in BigQuery.

    • Exclude data streams and events during linking process: During the linking process, when you select the data streams you want to export, you also have the option to select events to exclude from export. See Step 9 in the linking process.
    • Add or remove data streams or events after you've configured linking: You can add or remove data streams and add events to or remove events from the exclusion list after you've configured the BigQuery link.
    1. In Admin, under Product Links, click BigQuery Links. Note: The previous link opens to the last Analytics property you accessed. You can change the property using the property selector.
    2. You must be an Editor or above at the property level to add or remove data streams or events.
    3. You must also use an email address that has OWNER access to the BigQuery project.
    4. Click the row for the project whose link you want to modify.
    5. Under Data streams and events, click View data streams and events.
    6. Under Data streams to export, you can select additional data streams to export or remove existing data streams from the list.
    7. On the Events to exclude list, click Add to select from a list of existing events or click Specify event by name to choose existing events by name or to specify event names that have yet to be collected on the property.
    8. To remove an event from the list, click the minus sign at the end of that row.

    Pricing and billing

    • BigQuery charges for usage with two pricing components: storage and query processing. You can review the pricing table and learn about the differences between interactive and batch queries.
    • You need to have a valid form of payment on file in Cloud in order for the export to proceed. If the export is interrupted due to an invalid payment method, we are not able to re-export data for that time.
    • You can also export Analytics data to the BigQuery sandbox free of charge but keep in mind that sandbox limits apply.

    When you start seeing data

    Once the linkage is complete, data should start flowing to your BigQuery project within 24 hours. If you enable daily export, then 1 file will be exported each day that contains the previous day’s data (generally, during early afternoon in the time zone you set for reporting).

    The steps below describe how AppNava works for AWS customers.

    Collect Data

  • Use an AWS service of your choice to collect data into a bucket, for example:
    • Game Analytics Pipeline
    • Pinpoint
  • Allow us to pull your data:
    • Go to AWS S3 Buckets
    • Add the following Bucket Policy:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "AppNavaDataPullPolicy",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::324464245602:root"
                  },
                  "Action": [
                      "s3:ListBucket",
                      "s3:GetObject"
                  ],
                  "Resource": [
                      "arn:aws:s3:::<your-bucket-name-here>/*",
                      "arn:aws:s3:::<your-bucket-name-here>"
                  ]
              }
          ]
      }
    • Click Save

    Option 1: Unity Analytics Integration

    If you have Unity Analytics integration available:

    Requirements

    • Snowflake Account
    • Unity Analytics Configuration
      • Unity Analytics must be enabled
      • Event tracking configured
      • Data Collection active and verified
    • Data Sharing Setup
    • Follow Unity's data access setup guide: Unity Analytics Data Access Setup

      Share the UNITY_UNITYLIVEOPS_UNITY_ANALYTICS_PDA view with our Snowflake account.

    Option 2: Custom Analytics Integration

    It is recommended that a materialized view/table with the following structure be created. Any deviation may result in delays during the integration process due to required changes:

    Data Structure

    • user_id (String): The unique identifier for the user.
    • user_data (JSON): Nested information about the user, such as country, gender, age, etc.
    • device_id (String): The unique identifier for the user’s device.
    • device_data (JSON): Nested information about the device, including app version, geo-location data, OS version, platform (e.g., "android" or "ios"), device brand, etc.
    • session_id (Int): A sequential identifier for the session. For example, "1" represents the user's first session.
    • session_data (JSON): Nested information about the session, such as start time, end time, etc.
    • event_name (String): The name of the event triggered by the user.
    • event_data (JSON): Nested information about the game event, including event timestamp, value (string or int), event type, etc.

    Note

    For time-related fields, it’s recommended to provide both millisecond and timezone timestamps, such as 1710941700000 or 2024-03-20 15:00:00. If both formats are not available, priority should be given to timezone timestamps.
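To make the expected structure concrete, here is an illustrative record; every value and every nested key is invented for the example, since only the top-level fields are prescribed above:

# Illustrative record matching the recommended structure. The nested keys inside
# user_data, device_data, session_data and event_data are examples only, and every
# value here is invented for demonstration.
example_row = {
    "user_id": "u_000123",
    "user_data": {"country": "DE", "gender": "f", "age": 29},
    "device_id": "d_998877",
    "device_data": {"app_version": "1.4.2", "os_version": "14", "platform": "android", "brand": "samsung"},
    "session_id": 1,  # the user's first session
    "session_data": {"start_time": "2024-03-20 15:00:00", "end_time": "2024-03-20 15:12:30"},
    "event_name": "level_complete",
    "event_data": {
        "event_timestamp_ms": 1710941700000,        # epoch milliseconds
        "event_timestamp": "2024-03-20 15:00:00",   # timezone timestamp (preferred if only one is available)
        "value": "level_3",
        "event_type": "progression",
    },
}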

    Access Requirements

    AppNava requires access to the materialized view/table. The most efficient way to grant this access is via Snowflake Secure Data Sharing.

    The materialized view/table must be shared with AppNava’s account (ID: kp95306) to enable querying.
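As a rough illustration of the sharing step, the sketch below creates a share and adds AppNava's account using the Snowflake Python connector. The database, schema, and object names are placeholders; Secure Data Sharing generally accepts tables and secure (materialized) views, and you should confirm the exact grant statements and account identifier format with AppNava before running anything like this:

# Rough sketch of granting AppNava access via Snowflake Secure Data Sharing.
# Database, schema and object names are placeholders; the account ID (kp95306)
# comes from this documentation -- confirm the identifier format with AppNava.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",   # placeholder
    user="your_user",         # placeholder
    password="***",
)

statements = [
    "CREATE SHARE IF NOT EXISTS appnava_share",
    "GRANT USAGE ON DATABASE analytics_db TO SHARE appnava_share",
    "GRANT USAGE ON SCHEMA analytics_db.public TO SHARE appnava_share",
    "GRANT SELECT ON TABLE analytics_db.public.events_mv TO SHARE appnava_share",
    "ALTER SHARE appnava_share ADD ACCOUNTS = kp95306",
]

cur = conn.cursor()
for sql in statements:
    cur.execute(sql)
cur.close()
conn.close()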

    Integration Timeline

    The integration process may take approximately 2-3 weeks, depending on the quality and complexity of your data. During this period, we will analyze your data and make the necessary adjustments to our system to accommodate your specific data structure.

    As part of this process, there may be challenges such as errors in the data, discrepancies that require clarification, or potential anomalies in user data that need further investigation. Additionally, we might need to engage in back-and-forth communication to clarify edge cases or make adjustments based on the nature of your data. This iterative process is essential to ensure that the integration runs smoothly and that both parties are aligned on expectations.

    Initial Setup

    A materialized view aggregating data is recommended for efficient real-time querying by AppNava. Alternatively, any table or view that supports real-time user data queries can be used.

    Materialized View/Table Structure

    It is recommended that the following structure be adhered to. Any deviation may result in delays during the integration process due to required changes:

    • user_id (String): The unique identifier for the user.
    • user_data (JSON): Nested information about the user, such as country, gender, age, etc.
    • device_id (String): The unique identifier for the user’s device.
    • device_data (JSON): Nested information about the device, including app version, geo-location data, OS version, platform (e.g., "android" or "ios"), device brand, etc.
    • session_id (Int): A sequential identifier for the session. For example, "1" represents the user's first session.
    • session_data (JSON): Nested information about the session, such as start time, end time, etc.
    • event_name (String): The name of the event triggered by the user.
    • event_data (JSON): Nested information about the game event, including event timestamp, value (string or int), event type, etc.

    Note

    For time-related fields, it’s recommended to provide both millisecond and timezone timestamps, such as 1710941700000 or 2024-03-20 15:00:00. If both formats are not available, priority should be given to timezone timestamps.

    Access Requirements

    AppNava requires access to the materialized view/table. The most efficient way to grant this access is via Amazon Redshift Data Sharing.

    The materialized view/table must be shared with AppNava’s account (ID: 324464245602) to enable querying.

    Granting Data Share Access

    To grant access to the materialized view/table, the following example query can be adjusted and run:

    
    -- Create a data share
    CREATE DATASHARE test_datashare;
    
    -- Add the relevant schema and materialized view/table to the data share
    ALTER DATASHARE test_datashare ADD SCHEMA tickit;
    
    ALTER DATASHARE test_datashare ADD TABLE tickit.sales_mv;
    
    -- Grant access to AppNava's account
    GRANT USAGE ON DATASHARE test_datashare
    TO ACCOUNT '324464245602';
    

    Ensuring Public Access

    To avoid VPC access issues, the datashare must be made publicly accessible:

    
    -- The name of the actual datashare must be provided here
    ALTER DATASHARE temp_datashare SET PUBLICACCESSIBLE = TRUE;
    

    Integration Timeline

    The integration process may take approximately 2-3 weeks, depending on the quality and complexity of your data. During this period, we will analyze your data and make the necessary adjustments to our system to accommodate your specific data structure.

    As part of this process, there may be challenges such as errors in the data, discrepancies that require clarification, or potential anomalies in user data that need further investigation. Additionally, we might need to engage in back-and-forth communication to clarify edge cases or make adjustments based on the nature of your data. This iterative process is essential to ensure that the integration runs smoothly and that both parties are aligned on expectations.

    F. FAQs

    Find quick answers to the most commonly asked questions about AppNava, including how it works, who it's for, and how to get started. Whether you're a first-time user or looking to troubleshoot setup, this FAQ section is designed to help you get the information you need—fast.


    1. Introduction

    What kind of businesses benefit most from AppNava?

    AppNava is ideal for mobile app and game developers looking to understand user behavior, increase retention, and personalize experiences through data-driven decisions and machine learning.

    Do I need technical expertise to use AppNava?

    No, AppNava is designed to be accessible for non-technical users while still offering powerful tools and integrations for technical teams.

    Is AppNava suitable for early-stage startups?

    Absolutely! AppNava supports both startups and established companies by simplifying access to predictive insights and actionable analytics.

    Can I use AppNava with my existing analytics tools?

    Yes, AppNava supports integrations with popular analytics and measurement platforms like AWS, Google Analytics, and GameAnalytics.

    How does AppNava personalize user experiences?

    AppNava uses machine learning to segment users based on behavior and predicts their future actions, allowing you to customize offers and campaigns accordingly.

    What makes AppNava’s predictions reliable?

    AppNava’s machine learning models are trained on real player behavior patterns and continuously improved to ensure accurate forecasts and recommendations.


    2. Getting Started

    What do I need to configure my product correctly?

    Ensure you provide the Product Name, choose the appropriate Data Location (e.g., BigQuery, AWS, Snowflake, Redshift, or Other), and enter the Dataset Name.

    What do I need before enabling the Firebase prediction option?

    Make sure you have both the Android/iOS App ID and Secret ID set in your product. Without these, Firebase integration won't be available.

    Is backend integration required?

    No, it's optional. However, if you're using Firebase or Satori for backend data syncing or analysis, it's strongly recommended to provide the necessary credentials (App ID, Secret ID, Server Key, etc.).

    Where do I find the Snowflake or Redshift table names?

    You can find them in your Snowflake or Redshift dashboards respectively. Provide the full table name in the format required (e.g., DB.SCHEMA.TABLE for Snowflake).

    What if my product uses a different or custom data source?

    Choose "Other" from the Data Location dropdown and contact us for support with your integration setup.

    Can I invite my team to help manage products?

    Yes, go to the Users page in the dashboard and click Invite User. Enter their email and send the invitation.


    3. Setup ML Models

    Where do I start setting up machine learning models?

    Begin by going to the Data Operations page on the dashboard and clicking the Push Data & Choose Models button.

    How do I select models for training?

    After clicking the button, you'll see a list of available models. Check the models you want to use and assign a tag to the dataset for tracking purposes.

    What happens after I click "Done"?

    Each selected model will be added as a new entry under the Data Preparation section. Our team will be notified and begin processing your models.

    How will I know when my models are ready?

    You’ll receive an email notification once your models are successfully trained and ready to use.


    4. Setup Predictions

    Can I modify prediction details after creating it?

    Yes, you can update certain details and manage your predictions from the Predictions page.

    How often can I schedule predictions to run?

    You can set prediction frequency based on your model’s capabilities, such as daily or per session.

    What if my model is not showing in the prediction modal?

    Ensure your model has been successfully trained and is available in your project; only trained models will appear in selection lists.

    Can I integrate the prediction API with other platforms besides Firebase?

    Yes, the API is accessible via curl commands, so you can connect it with any platform that supports HTTP requests.

    How do I interpret the prediction results and charts?

    The results are tailored to each model type and include predicted vs. actual values to help you evaluate accuracy over time.

    What happens if a scheduled prediction's duration ends?

    The scheduled prediction will stop running automatically; you can create a new schedule if needed or extend the duration of your existing prediction.