Automaker Study Finds RVT’s eVisionFactory the Most Accurate 3D Vision System for Vision Guided Robotics on the Market

January 25, 2018

RVT Bests Five Other Competitors

We are very excited to share that eVisionFactory (eVF™) software is the most accurate vision guidance solution for 3D vision guided robotics, according to a recent study by one of the world’s largest and most technologically advanced automakers. The manufacturer studied the latest versions of the six top 3D vision guided robotics systems on the market to support powertrain manufacturing for its best-selling truck brand, and found eVF™ to be the most accurate across all tests.

The goal of each system under test was to accurately calculate the position of a truck transmission case in all six degrees of freedom: the X, Y, and Z center positions and the rX, rY, and rZ rotations. With these six numbers, any object’s position and orientation can be fully described. Determining all six with a high degree of accuracy is one of the hardest problems facing vision guided robotics today.
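
To make that concrete, here is a minimal sketch of a six-degree-of-freedom pose in Python; the type and field names are our own illustration, not part of eVF™:

    from dataclasses import dataclass

    # Hypothetical sketch of a six-degree-of-freedom pose; the field
    # names and units here are illustrative, not part of the eVF API.
    @dataclass
    class Pose6DOF:
        x: float   # center position along X, in mm
        y: float   # center position along Y, in mm
        z: float   # center position along Z, in mm
        rx: float  # rotation about the X axis, in degrees
        ry: float  # rotation about the Y axis, in degrees
        rz: float  # rotation about the Z axis, in degrees

    # Example: a transmission case shifted 25 mm in X and tilted 1.5 degrees about Z.
    case_pose = Pose6DOF(x=25.0, y=0.0, z=0.0, rx=0.0, ry=0.0, rz=1.5)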

In all tests conducted, eVF™ was judged the most accurate vision system for each of the six degrees of freedom. In most cases, eVF™ performed 10 to 100 times better than the competitors.
This difference in accuracy is unusual in the industry. For long-time users of the platform, though, it’s no surprise. The automaker’s lead engineer said, “We have been running eVF since 2003, and some of our vision programs haven’t changed in 10 years! Testing the latest version of the software shows it to be the most accurate solution available.”

Transmission Case Used for Study

To determine the winner of the tests, the automaker used an industry-standard method called a “gage repeatability and reproducibility” (gage R&R) study. Gage R&R objectively compares the accuracy of competing vision systems: each system performed the same series of tests and was judged by the same standard.

The first type of test measured the repeatability of each system. In theory, if a part or camera does not move, the vision system should report the exact same part position every time. This ability is called “repeatability.”
In the second type of test, the camera was moved to multiple known positions by a robot. When a vision system reports the location of the part after movement, it needs to be able to reflect this movement in the new position it calculates. For example, if the camera was moved 25mm in the X direction, the part’s X position reported by the vision system should now be 25mm different. This is “reproducibility,” or the measurement of how accurately the system can “reproduce” the part’s offset.
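
For readers who want to see the arithmetic, here is a minimal sketch of how these two quantities might be scored, assuming each test yields a list of reported positions; the function names are our own and this is not the automaker’s exact gage R&R procedure:

    import statistics

    def repeatability(readings):
        """Spread of positions reported for a stationary part.

        With the part and camera fixed, an ideal system reports the
        same value every time; the standard deviation measures how
        far it strays from that ideal.
        """
        return statistics.stdev(readings)

    def reproducibility_error(pos_before, pos_after, known_offset):
        """Residual between the reported shift and the robot's known move.

        If the camera moved known_offset mm along an axis, the reported
        position should change by exactly that amount.
        """
        return abs((pos_after - pos_before) - known_offset)

    # Example: camera moved 25 mm in X; system reports 100.00 -> 124.97 mm,
    # leaving a 0.03 mm reproducibility error on that move.
    print(reproducibility_error(100.00, 124.97, 25.0))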
As the chart below shows, eVF™ delivered higher accuracy than every other vision system studied in every category of both tests:

RVT’s results. Smaller numbers mean higher accuracy. Other competitors redacted by the automaker for confidentiality.

In some cases, the difference in accuracy went notably beyond the capability of the camera resolution or the robot itself. This level of accuracy was possible thanks to the eVisionFactory system’s patented method of using multiple features to “vote” on the position of the part. Errors due to the optics or the robot are effectively minimized by using multiple points of interest to determine the part’s position.
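
The patented algorithm itself is not public, but the principle it relies on is easy to illustrate: if several features each give an independent, noisy estimate of the part’s position, combining their “votes” averages out per-feature error. A toy sketch with invented numbers:

    import statistics

    # Toy illustration of feature "voting" (not RVT's patented method):
    # each feature yields an independent, noisy estimate of the part's
    # X position; averaging the votes damps per-feature error from the
    # optics or the robot.
    feature_votes_x = [25.04, 24.97, 25.02, 24.99, 25.01]  # mm, invented data

    voted_x = statistics.mean(feature_votes_x)
    spread = statistics.stdev(feature_votes_x)
    print(f"voted X = {voted_x:.3f} mm (per-feature spread {spread:.3f} mm)")

Averaging N independent estimates shrinks random error by roughly a factor of the square root of N, which is why a combined result can beat the nominal resolution of any single measurement.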
We performed these tests with a single monochrome 2-dimensional 5-megapixel camera from Sentech (STC-500POE). The robot was an ABB IRB6600, as specified by the automaker. Two simple bar lights provided optimal lighting.
Since the study, the automaker has purchased several new systems, which we and our network of integrators are now installing on the company’s most critical factory floors.

Earlier this year, we released a new vision system for the Universal Robots and KUKA LBR collaborative robots. Our new offering, collaborative Vision Factory, brings the same industrial-grade 3D accuracy from a single 2D camera to the world of collaborative robots.

Paul Weidinger, VP Engineering
Silver Spring, MD