Made in Japan
AI Behavior Analysis System VP-Motion
- Video analysis performed in real-time
- Quickly detect workplace mistakes and accidents
- The supplied machine-learning tool enables unlimited pattern recognition
Real-time Behavior Analysis through Machine Learning
The AI Solution to Improve Efficiency and Safety in the Workplace
VP-Motion is Japan’s leading AI-powered human behavior analysis software that learns and automatically detects human movements. Through high-speed, video-based machine learning, the integrated AI learns to recognize a wide range of movement and behavioral patterns, continually enhancing its automatic detection capabilities.
This product enables real-time detection of a diverse range of abnormal behaviors, from workplace mistakes and accidents, such as falls or people collapsing, to suspicious activities and more. With VP-Motion, you can establish an autonomous surveillance system that reduces the need for continuous human monitoring, helping to eliminate work errors and improve workplace safety.
VP-Motion can be used in a wide variety of settings, and the user can freely customize the software to target specific behaviors. Here are just some examples of the multitude of possible fields and scenarios in which VP-Motion could be an asset.
Teach VP-Motion to detect routine tasks Workflow Improvement in Manufacturing
Teach VP-Motion to detect someone collapsing Sudden Onset Illness Detection
Teach VP-Motion to detect suspicious movements Crime Prevention and Security
Teach VP-Motion to detect people falling Patient and Elderly Care Support
VP-Motion consists of three applications that work together to let you generate trained AI models through machine learning, simply by feeding the program your own video data. The trained model is then used to monitor video, including, but not limited to, prerecorded footage and live feeds.
Freely define behaviors
—– Easily create your data through the training data creation tool ‘Annotator’
A few videos and several minutes are all you need
—– High-speed machine-learning through state-of-the-art AI technology.
No more worrying about camera angles: VP-Motion detects everything
—– Thanks to our proprietary 3D-coordinate-detection technology, VP-Motion recognizes behaviors, even if
the camera angle is different from the one used to train the model.
Annotate smarter and faster
—– Annotator can be freely copied to other PCs, allowing multiple people to annotate different videos
at the same time, significantly cutting costs and boosting productivity.
Don’t be tied to one spot
—– ‘Standalone Monitor’ allows you to keep an eye on everything from whatever computer you choose
outside of the installed environment, by simply loading the training data into the software.
No special equipment needed
—– VP-Motion supports all cameras, whether it’s a webcam or a surveillance camera;
all cameras allow fast detection of the skeleton.
Implement the system and start saving right away
As an example of what can be achieved by implementing VP-Motion, we compared how many man-hours and how much money it would take to classify the contents of video footage of 100 employees doing their work, either manually or with VP-Motion. Below is an estimate of how much you would save.
Reduction of man-hours
Save roughly 90% of man-hours
Reduction of costs
Save roughly 86% of costs
How the System Works
Step 1 (Annotator): Label Behaviors to Create Training Data
The first step is to create training data. You do this by importing the video and labeling the desired behavior. Annotator makes this process child’s play: all you have to do is draw a rectangle around the action and give it a name.
Step 2 (Trainer): Train the AI With Training Data
The second step involves teaching our AI engine. You do this by importing the training data from step 1; the process takes only a few minutes and requires only a small amount of data.
Once the import is complete and the engine has successfully learned the target behavior, you will have a trained AI model.
Step 3 (Monitor): See Everything With Up to Eight Cameras
Finally, the Monitor software uses the trained model you created in step 2. You can use either real-time footage or prerecorded video. Monitor analyzes the data through two different methods, ‘skeletal-based’ and ‘image-based’, and both allow you to detect and analyze behavior.
Merits of skeletal-based analysis
Skeletal-based analysis takes the information in a person’s skeleton as its starting point. It is most effective when movements and actions have many distinguishing characteristics, and it can detect these even when only a person’s upper body is visible or when multiple people overlap. Furthermore, the software can accurately learn someone’s movements in a short amount of time, making it easy to check routine procedures and human movements. VP-Motion helps eliminate slip-ups and accidents during work, because previously unnoticed details become immediately apparent. In this way, the application contributes greatly to increased work effectiveness.
Merits of image-based analysis
Image-based analysis takes far more factors into consideration, such as background, objects, and colors. It is most effective when pronounced movement is lacking, such as the fine motions of a hand holding a tool or the correct way to handle different tools, because this mode can also recognize factors outside the human body, such as tools held in the hand or devices being worked on.
- CASE 1
- Classification changes depending on what is being held, such as a tool or machine component
- CASE 2
- Classification changes depending on the details of the work being performed, such as specific equipment being used.
- CASE 3
- Behavior analysis of images where only the arms or the upper half of the body are shown, such as work performed on a workbench.
VP-Motion can also be used as surveillance camera recording software with the capability to log detected behavior. Users can also freely develop unique capabilities, such as sending out an alert when specific behavior is detected.
Listed below are some of the features already available:
・Monitor and record up to eight cameras
・Export logged behavior analysis
・TCP/UDP communication through specified addresses and ports
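The TCP/UDP output can be consumed by your own tooling, for example to trigger alerts when a specific behavior is detected. As a minimal sketch, the snippet below listens for UDP datagrams on a configured address and port and parses each one as JSON. The message schema (`camera`, `behavior`, and `timestamp` fields) is an assumption made for illustration, not VP-Motion’s documented format; adapt the parser to the actual payload you configure.

```python
import json
import socket


def parse_event(payload: bytes) -> dict:
    """Parse one detection event.

    NOTE: this JSON schema is a hypothetical example,
    not VP-Motion's documented message format.
    """
    event = json.loads(payload.decode("utf-8"))
    return {
        "camera": event.get("camera"),
        "behavior": event.get("behavior"),
        "timestamp": event.get("timestamp"),
    }


def listen(host: str = "0.0.0.0", port: int = 5000) -> None:
    """Receive UDP datagrams on the configured address/port
    and print each parsed detection event."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        while True:
            payload, _addr = sock.recvfrom(4096)
            event = parse_event(payload)
            print(f"[{event['timestamp']}] {event['camera']}: {event['behavior']}")
```

A TCP variant would use a stream socket and a framing convention (for example, one JSON object per line) instead of discrete datagrams.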
VP-Motion for Windows
- VP-Motion (Annotator/Trainer/Monitor) x 1
Integrated environment for Annotator/Trainer/Monitor.
- Standalone Monitor x 1
Monitor app for a PC separate from the integrated environment.
- Product Key for VP-Motion x 1
- Product Key for Standalone Monitor x 1
|Supported OS|Windows 10 (64-bit) / Windows 11|
|PC Specifications|CPU: Intel® Core i7-6700 or equivalent, or higher / Memory: 32 GB or more / Minimum GPU: NVIDIA® GPU with at least 6 GB of VRAM / Recommended GPU: NVIDIA® GeForce RTX 3060 or equivalent*, or higher|
|Number of Monitoring Cameras|Max. 8 cameras per PC (depending on PC performance); multiple monitoring PCs can be used via Standalone Monitor|
*VP-Motion specifically requires a GPU with an NVIDIA® chipset; NEXT-SYSTEM cannot guarantee the product’s functionality when an AMD GPU is used.