Create Human Centric 
Entertainment Experiences

Digital Avatars

Interactive Avatars

Avatars are computer-generated or real-life characters that represent a real person or personality. Although avatars do not necessarily replicate the person they represent, there is a strong drive to make their appearance and movement as realistically human as possible. NVISO can capture facial expressions and body movements in real time, which is essential for designing realistic avatars. Furthermore, by tailoring the avatar's response to the human interacting with it, a true connection can be made with the help of NVISO's human behaviour SDK. This allows avatars to be more lifelike than ever before!


Emotional Gaming

The gaming industry (computer, console, or mobile) does not yet make extensive use of camera input to deliver entertainment value. This is about to change, as understanding human behaviour is becoming more accessible and is enabling innovative entertainment features. Several recent games have shown that tailoring the gaming experience to the user's cognitive or emotional state can make gameplay more rewarding, for example by adapting the intensity of play, or even the game's emotional responses.
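As a minimal sketch of this idea, the snippet below nudges a difficulty level up or down from per-frame emotion scores. The emotion labels, score ranges, and step sizes are hypothetical stand-ins, not the SDK's actual output format.

```python
# Sketch: adapting game difficulty to a detected emotional state.
# The emotion labels and confidence scores are hypothetical placeholders
# for whatever an emotion-recognition SDK would report per frame.

def adjust_difficulty(current_level: float, emotions: dict) -> float:
    """Nudge difficulty up when the player looks bored, down when frustrated.

    `emotions` maps label -> confidence in [0, 1]; `current_level` is in [0, 1].
    """
    boredom = emotions.get("boredom", 0.0)
    frustration = emotions.get("frustration", 0.0)
    # Small proportional nudges keep the adaptation gradual and unobtrusive.
    level = current_level + 0.1 * boredom - 0.1 * frustration
    return min(1.0, max(0.0, level))

level = 0.5
level = adjust_difficulty(level, {"boredom": 0.8, "frustration": 0.1})
print(round(level, 2))  # 0.57
```

In practice such a controller would be smoothed over many frames so that momentary expressions do not cause abrupt gameplay swings.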

Immersive Entertainment

The entertainment industry is currently experiencing increasing levels of immersion through virtual- and augmented-reality devices and 4D cinemas. Among other things, recent advances in computer vision AI can deliver screen experiences that leverage the parallax effect: adapting the view to the user's 3D head position, orientation, and attention, without trackers or any additional wearables. Imagine peeking around a corner to check on your opponents in a Counter-Strike game simply by moving your head in a life-like motion.
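A simplified sketch of the parallax effect follows: the virtual camera is shifted opposite to the tracked head position, scaled by scene depth. The coordinate convention, units, and scaling constants are assumptions for illustration; a real head tracker would supply the position per frame.

```python
# Sketch: a screen-space parallax effect driven by tracked head position.
# The head position is a hypothetical (x, y, z) in metres relative to the
# screen centre, with z as distance from the screen.

def parallax_offset(head_xyz, scene_depth=2.0, strength=1.0):
    """Shift the virtual camera opposite to head motion, scaled by depth.

    Returns an (x, y) camera offset: the closer the head, the larger the shift,
    which is what creates the "looking around a corner" illusion.
    """
    hx, hy, hz = head_xyz
    # Guard against a tracker reporting the head at (or behind) the screen plane.
    hz = max(hz, 0.1)
    scale = strength * scene_depth / hz
    return (-hx * scale, -hy * scale)

# Head 0.1 m to the right of centre, 0.5 m from the screen: camera shifts left.
dx, dy = parallax_offset((0.1, 0.0, 0.5))
print(round(dx, 2), round(dy, 2))  # -0.4 0.0
```

A full implementation would feed this offset into an off-axis projection matrix rather than a simple camera translation, but the proportional relationship is the same.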

Panasonic Nicobo
Case Study

AI Apps for
Digital Avatars and Gaming

Presence and Identity

Head and Eye Tracking

Through a well-established computer vision AI pipeline, we support the interaction of companion robots with their owners in their daily lives. We can detect presence and identity through facial recognition, anticipate and react to the owner's attention through head and eye tracking, and appropriately adjust to mood by observing the owner's emotional state. These features can be deployed on any device, from a companion robot to a PC-based digital avatar.
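The presence-identity-attention-mood chain above can be sketched as a small decision function. Every key name and behaviour label below is a hypothetical placeholder; a real integration would map them onto the SDK's actual per-frame results.

```python
# Sketch: the detect-presence / track-attention / adjust-to-mood loop
# described above, as a priority-ordered decision function. All field
# names and behaviour labels are hypothetical, not the SDK's API.

def robot_step(frame: dict) -> str:
    """Pick a companion robot's behaviour from one frame of analysis results.

    `frame` uses hypothetical keys: 'face_present' (bool), 'identity'
    (str or None), 'gaze_on_robot' (bool), and 'mood' (label string).
    """
    if not frame.get("face_present"):
        return "idle"                 # nobody there: conserve power
    if frame.get("identity") is None:
        return "greet_unknown"        # a face, but not a recognised owner
    if not frame.get("gaze_on_robot"):
        return "wait_for_attention"   # owner present but looking away
    mood = frame.get("mood", "neutral")
    return {"happy": "play", "sad": "comfort"}.get(mood, "chat")

print(robot_step({"face_present": True, "identity": "owner",
                  "gaze_on_robot": True, "mood": "sad"}))  # comfort
```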


NVISO is among the pioneers of emotion recognition by computer vision. With more than 15 years of experience in detecting human emotional responses, we can provide our knowledge through our SDK to any application in gaming and entertainment. Any human-machine interaction, whether with a device, game, or digital avatar, may benefit from advanced knowledge of the human emotional state. Emotion-based responses, or simply knowledge of the user's emotional state, enable the design of new, innovative, yet intuitive and natural interaction modes.

Facial Expressions


The most prized mode of interaction between humans is face-to-face, without any devices in between. Since the dawn of computing, however, we have interacted with machines through artificial interfaces such as the keyboard, mouse, and touch-screen. Recent advances in artificial intelligence may allow just that: a fully natural interface. NVISO's SDK not only detects the location of your body parts, it can also decode a particular sequence of your body movements as a gesture, which can trigger a command to execute, or simply an acknowledgement of understanding.
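To make "decoding a sequence of movements as a gesture" concrete, here is a toy detector that treats repeated left-right reversals of a wrist keypoint as a wave. The keypoint stream, thresholds, and gesture definition are all illustrative assumptions, not the SDK's actual gesture logic.

```python
# Sketch: decoding a gesture from a sequence of body keypoints. A "wave"
# is approximated as repeated left-right direction changes of the wrist's
# x-coordinate over time; the keypoint series is a hypothetical tracker output.

def is_wave(wrist_x_series, min_reversals=3, min_amplitude=0.05):
    """Return True if the wrist oscillates horizontally enough to call it a wave.

    `wrist_x_series` is the wrist x-coordinate (normalised image units)
    sampled over consecutive frames.
    """
    # Too little horizontal travel: treat as noise, not a gesture.
    if max(wrist_x_series) - min(wrist_x_series) < min_amplitude:
        return False
    # Frame-to-frame movement directions, ignoring stationary frames.
    deltas = [b - a for a, b in zip(wrist_x_series, wrist_x_series[1:]) if b != a]
    # Count sign flips: each flip is one change of waving direction.
    reversals = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)
    return reversals >= min_reversals

print(is_wave([0.3, 0.4, 0.3, 0.4, 0.3, 0.4]))  # True
print(is_wave([0.3, 0.31, 0.32, 0.33]))         # False
```

Production gesture recognisers typically use learned sequence models rather than hand-written rules, but the input is the same: a time series of detected keypoints.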

Simplified Software Development
Integration Ready



Lower latency, less power, and greater data security result in better user experiences from all software running on-device. Run directly on platforms such as Intel, NVIDIA, and close-to-sensor computing (Arm A5x, A7x + NPU accelerators).


Our SDK includes documentation, processes, libraries, code samples, and guides that help developers integrate it with their own apps. Simple and effective developer APIs are provided for quick integration.



Seamless integration with 3D game engines using JSON and Python/C#/C++ interfaces to enable full life-cycle development cost-effectively.
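As an illustration of the JSON interface, the snippet below extracts the strongest emotion from a per-frame result inside an engine-side Python script. The JSON layout shown is a hypothetical example for the sketch, not the SDK's actual schema.

```python
# Sketch: consuming a per-frame JSON analysis result in an engine-side script.
# The JSON structure below is a hypothetical example layout, not the SDK's
# documented schema.
import json

frame_json = """
{
  "faces": [
    {"head_pose": {"yaw": 12.5, "pitch": -3.0, "roll": 0.5},
     "emotions": {"happiness": 0.82, "surprise": 0.10}}
  ]
}
"""

def dominant_emotion(result: dict):
    """Return (label, score) of the strongest emotion on the first face, if any."""
    faces = result.get("faces", [])
    if not faces:
        return None
    emotions = faces[0].get("emotions", {})
    return max(emotions.items(), key=lambda kv: kv[1]) if emotions else None

result = json.loads(frame_json)
print(dominant_emotion(result))  # ('happiness', 0.82)
```

Because the payload is plain JSON, the same parsing pattern carries over to C# or C++ scripts inside the game engine with that language's JSON library.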

for OEM Devices

Device manufacturers can add robust real-time human behaviour features to off-the-shelf camera devices. Featuring NVISO Neuro Models™, which are interoperable and optimised for neuromorphic computing, the NVISO Neuro SDK is designed for high-volume consumer devices where cost and power are critical to market success. Flexible sensor integration options and placements are available, delivering faster development cycles and time-to-value for software developers and integrators. It enables solutions that can sense, comprehend, and act upon human behaviour; designed for real-world environments using edge computing, it uniquely targets deep learning for embedded systems.

Accurate and Robust

CNNs scale to learn from billions of examples, resulting in an extraordinary capacity to learn highly complex behaviours and thousands of categories. NVISO can train powerful, accurate, and robust models for use in the toughest environments thanks to its proprietary datasets captured in real-world conditions.

Easy to Integrate

Where AI at the edge is fragmented and difficult to navigate, NVISO AI Apps are simple to use, develop, and deploy, with easy software portability across a variety of hardware and architectures. They reduce the high barriers to entry into the edge AI space through cost-effective, standardised AI Apps that are future-proof and work optimally at the extreme edge.

Ethical and Trustworthy

AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable, and reproducible. Additionally, unfair bias must be avoided, as it could have multiple negative implications. NVISO adopts Trustworthy AI frameworks and state-of-the-art policies and practices to ensure its AI Apps are "fit-for-purpose".

Run on Any Device
Enterprise Grade Performance

Microcontroller Unit (MCU)

AI functionality is implemented in low-cost MCUs via inference engines specifically targeting MCU embedded-design requirements, configured for low-power operation and continuous monitoring to discover trigger events in sound, images, vibration, and more. In addition, the availability of AI-dedicated co-processors is allowing MCU suppliers to accelerate the deployment of machine learning functions.

Central Processing Unit (CPU)

Once a trigger event is detected, a high-performance subsystem such as an Arm Cortex-A class CPU is engaged to examine and classify the event and determine the correct action. With its broad adoption, the Arm Cortex-A class processor powers some of the largest edge device categories in the world.
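The MCU-then-CPU flow above is a classic two-stage trigger/classify pattern: a cheap always-on check gates a more expensive classifier. The sketch below illustrates the pattern in plain Python; the threshold, the stand-in sensor samples, and the labels are all hypothetical.

```python
# Sketch: the two-stage trigger/classify pattern described above, where a
# low-cost always-on check (the MCU's role) gates a heavier classifier
# (the CPU's role). Thresholds and "sensor" values are hypothetical.

def cheap_trigger(sample: float, threshold: float = 0.5) -> bool:
    """Low-cost check an MCU could run continuously: enough activity to wake up?"""
    return abs(sample) > threshold

def expensive_classify(sample: float) -> str:
    """Heavier analysis a CPU-class core runs only after a trigger fires."""
    return "event_high" if sample > 0 else "event_low"

def process(stream):
    """Run the cheap check on every sample; classify only the triggering ones."""
    return [expensive_classify(s) for s in stream if cheap_trigger(s)]

print(process([0.1, 0.9, -0.2, -0.8]))  # ['event_high', 'event_low']
```

The power saving comes from how rarely the expensive stage runs: in the example, only two of four samples ever reach the classifier.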

Graphic Processing Unit (GPU)

In systems where high AI workloads must run in real time and MCUs and CPUs do not have enough processing power, embedded low-power GPUs can be used. GPUs contain hundreds or thousands of highly parallel cores for high-speed graphics rendering. They deliver high-performance processing, but typically have a larger footprint and higher power consumption than CPUs.

Want to learn more about our SDK?

SDK for Mobile Phones