Machine Vision and Manufacturing: A Reality 25 Years in the Making
By Ronald Poelman, Senior Software Architect, Bright Machines
April 9, 2019
Twenty-five years ago, scientist and futurist Hans Moravec predicted the tremendous impact of robotics and artificial intelligence on human society, remarking: “Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them” (Moravec, 1993). For centuries, the world’s civilizations have been built through the arduous labor of humans, but since the inception of robotics, scientists like Moravec have dreamed of a world in which all future civilizations are built on the backs of robots, not humans. While multiple artificial intelligence and robotics “winters” since those early days have delayed this possibility, a convergence of recent technology breakthroughs makes this vision an ever-more-likely reality.
And though most people think of humanoid robots as the pinnacle of robotics, a smart production line, an autonomous car or an intelligent distribution center is just as much a technological marvel, and will deliver far more real value to our civilization.
My career has always circled the periphery of robotics. As an academic, I worked on remote sensing, computer and machine vision, augmented reality and artificial intelligence. When the opportunity presented itself to work on the next generation of robotics with an extraordinarily talented team, I grabbed it with both hands. At Bright Machines, we focus on delivering intelligent software-defined manufacturing – through this technology, we’re making smart production lines part of this imminent reality.
Since I started my work on this team, I’ve had the opportunity to watch a multitude of production lines, and I’m always amazed to see armies of system integrators and automation engineers assemble new lines. Yes, anyone can order robotic arms and equipment from the 800-pound gorillas in the industry, but after equipment installation, the level of customization required is astonishing. While every product is different, it seems that every new project starts nearly from scratch and that the lessons learned are locked in the minds of the integration engineers. This is neither sustainable nor scalable; the gap between people and today’s hardware and software needs to be closed. Robotic lines cannot scale when we need highly skilled and highly experienced people to create, customize and operate those lines.
Today, robots are primarily operated without machine vision. Historically, this makes sense for highly repetitive, long-running tasks. However, with the need for greater flexibility, the desire to repurpose lines for different products and the high pressure of time-to-market, traditional approaches are no longer sustainable. The way forward is to move some of the intelligence – currently in the minds of automation engineers – into the robotic lines themselves, thereby significantly reducing setup time while increasing the flexibility of the equipment. Autonomous systems need sensing to operate – how can they optimize or course-correct otherwise? That’s why we’re equipping our Bright Robotic Cells with rich software and a multitude of sensors; machine vision will allow them to learn and improve, and will provide all the data required for our artificial intelligence platform.
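To make the course-correction idea concrete, here is a minimal, purely illustrative sketch of vision-guided correction: a camera observes how far a part sits from its expected position in the image, and that pixel offset is converted into a work-space adjustment of the robot's pick target. All names, the fixed pixel-to-millimeter scale, and the coordinates are hypothetical, not Bright Machines APIs.

```python
# Hypothetical sketch of vision-guided course correction for a pick task.
# Assumes a calibrated overhead camera with a known, uniform scale
# (mm_per_px); real cells would use a full camera calibration instead.

def pixel_to_mm(offset_px, mm_per_px=0.05):
    """Convert an image-space offset (px) into a work-space offset (mm)."""
    return (offset_px[0] * mm_per_px, offset_px[1] * mm_per_px)

def corrected_target(nominal_xy_mm, detected_offset_px):
    """Shift the nominal pick target by the offset the camera observed."""
    dx, dy = pixel_to_mm(detected_offset_px)
    return (nominal_xy_mm[0] + dx, nominal_xy_mm[1] + dy)

# A part detected 40 px to the right and 20 px below where the program
# expected it: the pick target moves accordingly instead of missing.
target = corrected_target((120.0, 85.0), (40, -20))  # -> (122.0, 84.0)
```

Without the camera, the robot would blindly repeat the nominal motion; with it, each cycle adapts to the part actually presented, which is what makes repurposing a line for a new product so much cheaper.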
State-of-the-art research shows that deep learning and simulation can take over tasks that are currently painstakingly conducted by automation engineers. With sensory capabilities, we can power a digital twin that reflects the physical equipment very accurately. Much of the intelligence is already inside a computer-aided design (CAD) model, which makes it possible to simulate the tasks even before the product is physically available; the technology needed to accomplish this has only recently matured. We can already see in our lab what improvements can be made by deeply integrating computer vision into the DNA of our hardware and software; by giving Bright Robotic Cells sight and a brain, we improve their flexibility, setup time and reconfigurability dramatically.
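The point about CAD-driven simulation can be sketched in a few lines: even a very simple "digital twin" can use CAD-derived dimensions and tolerances to check, before any hardware runs, whether a planned assembly step is feasible in the worst case. The part, the clearance values and the placement-error figure below are invented for illustration, not taken from any real product.

```python
# Hypothetical sketch: a worst-case feasibility check for a peg-in-hole
# insertion, using dimensions and tolerances as a CAD model would supply.

from dataclasses import dataclass

@dataclass
class CadPart:
    width_mm: float       # nominal width from the CAD model
    tolerance_mm: float   # manufacturing tolerance on that width

def insertion_feasible(peg: CadPart, hole_width_mm: float,
                       placement_error_mm: float) -> bool:
    """The widest possible peg, offset by the robot's placement error on
    either side, must still fit inside the hole."""
    worst_case = peg.width_mm + peg.tolerance_mm + 2 * placement_error_mm
    return worst_case <= hole_width_mm

peg = CadPart(width_mm=5.0, tolerance_mm=0.05)
insertion_feasible(peg, hole_width_mm=5.2, placement_error_mm=0.05)  # True
insertion_feasible(peg, hole_width_mm=5.2, placement_error_mm=0.10)  # False
```

A real digital twin is of course far richer (full kinematics, contact physics, sensor models), but the principle is the same: the simulation catches an infeasible step while it is still cheap to fix, before the product or the line physically exists.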
Decades before Bright Machines was founded, scientists like Moravec, Minsky and others wrote about this innovation and predicted the importance of robotics and artificial intelligence to human civilization. All that was missing were the sensors and the brain needed to make machines smarter. Today, we have the ability to turn this quarter-century-old vision into reality. There’s never been a more exciting time to be in the manufacturing industry, as this new paradigm sets the tone for the development of future civilizations.
About the author
Ronald Poelman is a Ph.D. computer vision veteran with 20 years of experience architecting cutting-edge machine vision and machine learning pipelines. At Bright Machines, he architects the autonomy of our Bright Robotic Cells with computer vision and machine learning. Previously, he founded two startups and spent a large chunk of his career in Silicon Valley, relentlessly pushing the technology barrier. He is largely interested in making dumb machines a little smarter, and he considers the current state of the market unacceptable.