From Self-Driving Cars to Self-Driving Manufacturing

September 24, 2019 | 5 min read

Barry Clark, Senior Applied Scientist, Bright Machines

Computer vision technology has made monumental leaps in the last 10-20 years – look no further than self-driving cars as an example of the huge technical accomplishments in the space. Vision systems on self-driving cars can run at high frame rates, recognize their environment in real time with high accuracy, and turn the information gathered into instructions for the car’s control system. If we compare this to typical computer vision in the manufacturing industry, the two have very little in common. Cameras in industrial settings rarely take more than one image at a time; they look for one or two specific things with no regard for their environment, and they offload any higher-level computation to the robot or other control systems.

The differences between these two vision systems lead to a natural conclusion: computer vision systems in manufacturing are not flexible enough. To see rapid growth and improvement in computer vision for manufacturing, engineers can learn a great deal from more mainstream disciplines like transportation and apply those lessons.

When It Comes to Computer Vision, Less Is Not More

One such concept is the use of multiple cameras in all robotic cells. Frequently in manufacturing, the use of multiple cameras is avoided due to cost and complexity. Cameras are placed only where they are needed and usually have a very narrow field of view. This means that if there is a process change, even a small one, the camera must be moved, and the lens may need to be changed. In the worst cases, the vision kit for that process may need to be completely rethought. In the long run, this leads to more time and more cost to keep a system up and running.

Multi-camera robotic cells provide the benefit of always being able to view the entire workspace. While this may seem redundant for certain simple applications, the benefits become immediately clear when new vision tasks are required to increase reliability or when the region of interest for a vision algorithm moves due to a process change. Because all areas of the cell are under observation, there is no need to modify the hardware – the system is ready to adapt immediately, eliminating delays from system redesign and new hardware procurement.
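To make the idea concrete, here is a minimal sketch (in Python, using OpenCV) of how a multi-camera cell can treat each vision task as a config entry naming a camera and a region of interest. The task names, ROI values, and helper functions are illustrative assumptions, not a real Bright Machines API; the point is that a process change becomes a data edit rather than a hardware redesign.

```python
import cv2

# One entry per vision task: which fixed camera sees it, and where to look.
# When a process change moves the region of interest, only this data changes;
# the cameras themselves stay put.
TASKS = {
    "verify_screw": {"camera": 0, "roi": (420, 310, 64, 64)},
    "read_barcode": {"camera": 2, "roi": (100, 80, 240, 120)},
}

def capture_frame(camera_id):
    """Grab a single frame from one of the cell's fixed cameras."""
    cap = cv2.VideoCapture(camera_id)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"camera {camera_id} returned no frame")
    return frame

def run_task(name):
    task = TASKS[name]
    frame = capture_frame(task["camera"])
    x, y, w, h = task["roi"]
    return frame[y:y + h, x:x + w]  # crop handed off to the task's algorithm
```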

Without Understanding, Vision Is Just Sight

Another feature of self-driving cars that is lacking in manufacturing is the vision system's ability to understand its environment. Today, the workflows powering computer vision systems in manufacturing are focused on a single task, whether that is reading a barcode or measuring the distance between two holes. The system will always do that task during its allotted time, regardless of what is happening in the cell. If the task fails, the system may stop and alert an operator, or it may have a series of steps to retry the task. At the end of the day, the system lacks the ability to make decisions when the cell’s conditions are out of the ordinary.

In order to be more flexible, the cells must be able to adapt to these transient conditions, much like self-driving cars adjust to obstacles in the road, stop lights, and other environmental changes. By having an inherent understanding of unexpected scenarios – like when parts are missing, PCBs are in the wrong orientation, or the wrong pallet has been loaded – the vision system can more easily adapt, choose the right algorithm, or alert the operator with specific, helpful information. The computer vision engineer will no longer need to try to account for tens, if not hundreds, of edge-case failures, but can focus their time and energy on the workflow, confident that the overarching system will understand what is going on in the cell at a macro level. This flexibility in the vision system’s intelligence will allow system engineers to worry about more important problems and trust that their vision workflow is robust from the very beginning.
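One way to picture this is a lightweight scene check that runs before every task and dispatches on what the cell actually looks like. The sketch below is purely illustrative: the scene states, `check_scene`, and `alert_operator` are hypothetical placeholders for whatever classifier and alerting path a real cell would use.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Scene(Enum):
    NOMINAL = auto()
    PART_MISSING = auto()
    WRONG_ORIENTATION = auto()
    WRONG_PALLET = auto()

@dataclass
class SceneReport:
    state: Scene
    detail: str = ""

def check_scene(frame):
    """Whole-workspace check. In practice a classifier or detector over the
    full cell view; stubbed here to always report a nominal cell."""
    return SceneReport(Scene.NOMINAL)

def alert_operator(message):
    print(f"OPERATOR ALERT: {message}")  # stand-in for the real alerting path

def run_with_understanding(frame, task):
    report = check_scene(frame)
    if report.state is Scene.NOMINAL:
        return task(frame)
    # A known recoverable state could route to a different algorithm here;
    # anything else stops with a specific, actionable message rather than
    # a generic "vision task failed" retry loop.
    alert_operator(f"{report.state.name}: {report.detail}")
    return None
```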

Streamlining Wins Over Specificity

Finally, and perhaps most importantly, computer vision systems in manufacturing need to move away from specialized algorithms and push towards streamlined workflows that do not require copious amounts of configuration. This concept has been core to our current work at Bright Machines. In the past, finding a part in a manufacturing environment required large amounts of time and experimentation. This often ended in a custom workflow, combining lots of one-off techniques along with attempts to modify or control the environment, which made it very difficult to repurpose vision systems in a reasonable amount of time. By focusing on techniques to accurately detect and measure all objects in the field of view, we can greatly increase the flexibility of the vision system, both reducing the time required for a product changeover and improving precision and robustness. These algorithms will allow for more straightforward parameterization as well as reduce the dependence on environmental conditions, making it easier for integrators in the field.
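A rough sketch of this "detect everything, then parameterize" idea follows. The detector is stubbed out, and the part description (nominal dimensions plus a tolerance) stands in for what would otherwise be a custom, one-off workflow; the names here are illustrative assumptions, not our production API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    x_mm: float       # position and size in millimeters, via camera calibration
    y_mm: float
    width_mm: float
    height_mm: float

def detect_all(frame):
    """Generic detector over the whole field of view. Stubbed here; in
    practice a learned model plus calibration from pixels to millimeters."""
    return []

def find_part(frame, width_mm, height_mm, tol_mm=0.5):
    """Pick the target part out of everything detected by its dimensions.
    The per-product configuration shrinks to this small part description."""
    for det in detect_all(frame):
        if (abs(det.width_mm - width_mm) <= tol_mm
                and abs(det.height_mm - height_mm) <= tol_mm):
            return det
    return None
```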

As we begin to incorporate concepts from other cutting-edge fields into computer vision for manufacturing, we will begin to see a drastic change in how robotic devices function and operate. It will no longer take months to install a robotic work line, but rather weeks or even days. System integrators will no longer need an immense breadth of knowledge across every field, but will be able to focus on the tasks that are imperative to their customers. They’ll no longer have to worry about whether their products will work in a timely fashion or whether a certain level of precision is achievable. This new breed of flexible computer vision will play a key role in making Software-Defined Manufacturing a success, and ultimately, transforming manufacturing from a stagnant industry to one that is constantly innovating.

About the Author

Barry Clark is a Sr. Applied Scientist at Bright Machines, where he’s focused on computer vision algorithms and software that will allow our robots to navigate and adjust precisely in real time. Previously, he worked on computer vision and controls algorithms for adaptive robotics in the textile industry, with a focus on sewing.

