In total, the suite consists of 95 algorithms, 67 code samples, and 11 ready-to-use applications. Plug-and-play algorithms include high-speed counting, vibration monitoring, spatter monitoring, object tracking, optical flow, ultra-slow-motion, machine learning, and others. The suite provides users with both C++ and Python APIs, extensive documentation, and a wide range of samples organized by implementation level to introduce event-based machine vision concepts incrementally.
“We have seen a significant increase in interest and use of Event-Based Vision and we now have an active and fast-growing community of more than 4,500 inventors using Metavision Intelligence since its launch. As we are opening the event-based vision market across many segments, we decided to boost the adoption of MIS throughout the ecosystem targeting 40,000 users in the next two years. By offering these development aids, we can accelerate the evolution of event-based vision to a broader range of applications and use cases and allow for each player in the chain to add its own value,” said Luca Verre, co-founder and CEO of Prophesee.
New features allow faster ramp-up to custom solutions
The latest release includes enhancements that help speed up time to production, allowing developers to stream their first events in minutes, or even build their own event camera from scratch using the provided open-source camera plugins as a base.
Developers now also have the tools to port their work to Windows or Ubuntu operating systems. Metavision Intelligence 3.0 also unlocks the full potential of advanced sensor features (e.g., anti-flickering, bias adjustment) by providing source-code access to key sensor plugins.
The Metavision Studio tool also enhances the user experience with improvements to onboarding guidance, the UI, and the region-of-interest (ROI) and bias setup process.
New Core Machine Learning modules to bridge frame and event-based vision systems
The core ML modules include an open-source event-to-video converter, as well as a video-to-event simulator. The event-to-video converter uses a pretrained neural network to reconstruct grayscale images from events. This lets users apply their existing frame-based development resources to event-based data and build algorithms on top of it.
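To illustrate the underlying event-to-frame idea, here is a minimal NumPy sketch that accumulates polarity events into a grayscale image. This is only a didactic approximation, not the suite's converter (which uses a pretrained neural network); the event tuple layout and the brightness scaling are assumptions.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate polarity events into a simple grayscale frame.

    `events` is assumed to be a sequence of (x, y, polarity) tuples
    with polarity in {-1, +1}. A learned converter would reconstruct
    texture; plain accumulation only shows where brightness changed.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, p in events:
        frame[int(y), int(x)] += p
    # Map accumulated counts to 8-bit grayscale centered at mid-gray (128).
    frame = np.clip(128 + 32 * frame, 0, 255)
    return frame.astype(np.uint8)

# Example: three events on a 4x4 sensor.
events = [(1, 2, +1), (1, 2, +1), (3, 0, -1)]
frame = events_to_frame(events, 4, 4)
print(frame[2, 1], frame[0, 3])  # -> 192 96 (brighter / darker than baseline)
```

Pixels with positive events end up brighter than the 128 baseline and pixels with negative events darker, which is the intuition a learned reconstruction network refines into full grayscale imagery.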
The video-to-event pipeline breaks down the barrier of data scarcity in the event-based domain by enabling the conversion of conventional frame-based datasets to event-based datasets.
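The conversion in the other direction can be sketched with the standard contrast-threshold model of an event sensor: a pixel fires an event whenever its log-intensity changes by more than a threshold since its last event. The sketch below is an assumption-laden simplification of that principle, not the suite's simulator, which is more sophisticated (e.g., in how it times events between frames).

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Convert grayscale frames into simulated (x, y, t, polarity) events.

    Contrast-threshold model: each pixel keeps a log-intensity reference
    and emits an event when the current log intensity deviates from it
    by at least `threshold`, resetting the reference at fired pixels.
    Field layout and threshold value are illustrative assumptions.
    """
    log_ref = np.log(frames[0].astype(np.float32) + 1.0)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log(frame.astype(np.float32) + 1.0)
        diff = log_cur - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((int(x), int(y), t, 1 if diff[y, x] > 0 else -1))
            log_ref[y, x] = log_cur[y, x]  # reset reference where events fired
    return events

# Example: one pixel brightens sharply between two frames.
f0 = np.full((2, 2), 50, dtype=np.uint8)
f1 = f0.copy()
f1[0, 1] = 200
evts = frames_to_events([f0, f1], [0, 1000])
print(evts)  # a single positive event at pixel (1, 0)
```

Running any frame-based dataset through such a pipeline yields event streams with ground-truth labels carried over from the source video, which is how the data-scarcity barrier is lowered.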
Developers can easily download the Metavision Intelligence Suite and begin building products leveraging Prophesee sensing technologies for free.