How to enable Proactive User Experience

Introduction

In this article, we will see how we can create a new kind of user experience in a Flutter application. There are many frameworks and best practices for designing user interfaces based on touch, clicks, and keyboards, but today we will focus on proactive user interfaces that do not rely on traditional user inputs. This is an innovative way to create touchless interfaces, as seen in the self-service safety kiosk demonstration below.

This kind of proactive user experience is powered by Mojo Facial Expression Recognition, a service with many features for building proactive behavior by detecting and understanding how people behave. One of these features is engagement evaluation.

So without wasting any more time, let’s get started.

Project Setup

First and foremost, let's set up our Flutter project to work with the Mojo Facial Expression API. For that, follow these steps:

  1. Create a new Flutter project
  2. Go to pubspec.yaml. Under dependencies, add this dependency:
dependencies:
    flutter:
        sdk: flutter

    # add this line:
    mojo_perception: ^1.0.0
  3. Run flutter pub get
  4. Follow the quick installation instructions you will find here

That's it. Your project is ready to be integrated with the Mojo Facial Expression API. Now, before writing the logic, we need a Mojo Facial Expression Recognition account so that we can connect our new app and test it. For that, follow these steps:

  1. Go to the Mojo Facial Expression Recognition free offer website
  2. Register with your professional email to get your API key in your inbox
  3. Check your mailbox and copy the key.

That's it. Our project setup is done and we have the API Key. Now, let's code.

Coding the logic

As you saw in the video above, the proactive feature is based on engagement triggers. When the user disengages while the information is critical, the system notifies the user and waits until they are engaged again before continuing to deliver the important information.

The best part about the Mojo Facial Expression Recognition API is that we don't need to worry about the camera and recognition machinery; the API handles it. All we need to do is create callbacks that handle changes in user engagement and navigate through the different UIs of our app.

Now let's code the logic. Follow these steps.

  1. Create a new MojoPerceptionAPI instance in your main function. Add this code to your main function:

    MojoPerceptionAPI mojoPerceptionApi = MojoPerceptionAPI(
          '<your_auth_token_here>',
          'stream.mojo.ai',
          '443',
          '<user_namespace_here>');
    
    Make sure to replace with your own credentials 😎

  2. Create the logic callbacks

    void handleEngagementStateCallback(String engagement) {
    
        // The user was engaged and is now disengaging
        if (currentEngagement == Engagement.engaged &&
                engagement != Engagement.engaged) {
            // Navigate to waiting UI
        }
    
        // The user was disengaged and is now engaged again
        if (currentEngagement != Engagement.engaged &&
                engagement == Engagement.engaged) {
            // Navigate to the next UI
        }
    
        // Remember the latest state for the next invocation
        currentEngagement = engagement;
    }
    
    

In the callback, we compare the user's current engagement status with the new one given by the API. If we are in a situation where the user needs to be engaged, we navigate to a page dedicated to indicating this to the user.
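To make the state-tracking idea concrete, here is a minimal, self-contained sketch of the same logic, modeling the engagement states as plain strings. The state value, the `goToWaitingUi`/`goToNextUi` stubs, and the threshold of what counts as "engaged" are hypothetical names for illustration; adapt them to the actual values emitted by your version of the API.

    // A minimal sketch of the engagement state machine, assuming the
    // API reports engagement as a string. All names here are
    // illustrative, not confirmed API surface.
    const String engaged = 'engaged';

    // Tracks the previous state between callback invocations.
    String currentEngagement = engaged;

    void handleEngagementStateCallback(String engagement) {
      if (currentEngagement == engaged && engagement != engaged) {
        // The user just disengaged during a critical step:
        // show the waiting UI until they come back.
        goToWaitingUi();
      }
      if (currentEngagement != engaged && engagement == engaged) {
        // The user is engaged again: resume the flow.
        goToNextUi();
      }
      // Remember the latest state for the next invocation.
      currentEngagement = engagement;
    }

    void goToWaitingUi() { /* e.g. Navigator.push(...) */ }
    void goToNextUi() { /* e.g. Navigator.pop(...) */ }

The key design choice is that the callback only reacts to *transitions* (engaged to disengaged and back), not to every frame, which avoids re-triggering navigation while the state is unchanged.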

Now we need to plug this callback to the instance of the MojoPerceptionAPI.

  1. Add the callback
    mojoPerceptionApi.engagementCallback = handleEngagementStateCallback;
    

This is it.
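Putting the pieces together, a minimal main function might look like the sketch below. The `MyKioskApp` widget is a hypothetical placeholder for your own root widget, and how you start the camera stream depends on the package's quick-start instructions from the setup section, so verify the exact startup calls against the installation docs for your version.

    // A sketch of wiring everything together in main(). MyKioskApp is
    // a hypothetical root widget; the startup sequence should follow
    // the package's own installation instructions.
    void main() {
      MojoPerceptionAPI mojoPerceptionApi = MojoPerceptionAPI(
          '<your_auth_token_here>',
          'stream.mojo.ai',
          '443',
          '<user_namespace_here>');

      // Plug in our engagement logic before the stream starts.
      mojoPerceptionApi.engagementCallback = handleEngagementStateCallback;

      runApp(MyKioskApp(api: mojoPerceptionApi));
    }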

To go further, you can easily mix in other social cues and emotions provided by the API, such as surprise, confusion, or amusement, to create even richer proactive user experiences, like a quiz or additional content in some cases.
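For example, assuming the API exposes per-emotion callbacks alongside engagementCallback (the callback names and score scale below are illustrative assumptions, not confirmed API surface), mixing cues could look like this:

    // Illustrative only: these callback names and the 0..1 score
    // scale are assumptions about the API surface.
    mojoPerceptionApi.confusionCallback = (double confusion) {
      if (confusion > 0.8) {
        // Offer additional help content when the user looks confused.
      }
    };
    mojoPerceptionApi.amusementCallback = (double amusement) {
      if (amusement > 0.8) {
        // Launch a quiz or playful content when the user is amused.
      }
    };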

Conclusion

It's never been easier to implement proactive user experiences, and we've covered the 0 to 1 in this article.

To explore more about the Mojo Facial Expression Recognition API, check out this documentation.