The latest version of the sensAI stack (v4.1) is now available and supports Lattice’s roadmap of AI-based applications. Enhancements and new features of sensAI v4.1 include:
o User presence detection to automatically power client devices on and off as a user approaches or departs.
o Attention tracking to lower a device’s screen brightness to conserve battery life when the user isn’t looking at the screen.
o Face framing to improve the video experience in video conferencing applications.
o Onlooker detection to recognize when someone is standing behind a device and blur the screen to maintain data privacy.
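Features like user presence detection typically sit on top of a per-frame detection model, with debouncing logic so the display doesn’t flicker as confidence scores fluctuate. The sketch below is a hypothetical illustration of that control layer, not Lattice’s implementation; the thresholds and frame counts are assumed values.

```python
class PresenceController:
    """Debounces per-frame person-detection scores into a stable display state."""

    def __init__(self, on_threshold=0.7, off_threshold=0.3, hold_frames=5):
        self.on_threshold = on_threshold    # score above this counts as "present"
        self.off_threshold = off_threshold  # score below this counts as "absent"
        self.hold_frames = hold_frames      # consecutive frames needed to switch
        self.display_on = False
        self._streak = 0

    def update(self, score):
        """Feed one frame's detection score; returns the display power state."""
        if not self.display_on and score >= self.on_threshold:
            self._streak += 1                # evidence the user has arrived
        elif self.display_on and score <= self.off_threshold:
            self._streak += 1                # evidence the user has left
        else:
            self._streak = 0                 # ambiguous frame resets the count
        if self._streak >= self.hold_frames:
            self.display_on = not self.display_on
            self._streak = 0
        return self.display_on
```

With the defaults above, five consecutive high-confidence frames power the display on, and five consecutive low-confidence frames power it off; a single noisy frame in between resets the count.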
· Expanded application support – the performance and accuracy gains made possible with v4.1 expand the sensAI solution stack’s target applications to include the highly accurate object and defect detection applications used in automated industrial systems. The stack has a new hardware platform for voice and vision-based ML application development featuring an onboard image sensor, two I2S microphones, and expansion connectors for additional sensors.
· Tools – the sensAI solution stack has an updated neural network compiler and supports Lattice sensAI Studio, a GUI-based tool with a library of AI models that can be configured and trained for popular use cases. sensAI Studio now supports AutoML features to enable creation of ML models based on application and dataset targets. Several of the models based on the MobileNet ML inferencing training platform are optimized for the latest Nexus FPGA family, Lattice CertusPro™-NX.
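A core step that neural network compilers for FPGAs typically perform is post-training quantization, mapping a model’s floating-point weights to narrow fixed-point values the hardware can process efficiently. The sketch below illustrates the standard affine (scale/zero-point) scheme in pure Python; it is a simplified example of the general technique, not a description of Lattice’s compiler.

```python
def quantize(weights, num_bits=8):
    """Map float weights to unsigned integers using an affine scale/zero-point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0     # avoid zero scale for constant inputs
    zero_point = round(qmin - lo / scale)        # integer that represents float 0.0
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized values."""
    return [(v - zero_point) * scale for v in q]
```

At 8 bits, round-tripping a weight through quantize/dequantize introduces an error bounded by roughly half the scale, which is why 8-bit inference usually preserves accuracy while shrinking model size fourfold versus float32.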
The stack is compatible with other widely used ML platforms, including the latest versions of Caffe, Keras, TensorFlow, and TensorFlow Lite.