Back to School: Lessons from the Classroom to Make Factories Smarter
By Amy Satin Spinelli, Bright Machinist
September 10, 2019
“Machine learning” gets thrown around a lot. These days it seems like any description of a technological innovation would not be complete without the term “ML” attached to it. Reminds me of “.com” circa 1999. But what does it mean for a machine to truly learn?
Research shows that active engagement – like group-based collaboration or self-directed research, as opposed to lectures – is a more effective way for humans to learn and retain knowledge. The same can be said for the way machines learn. True intelligence is acquired through active engagement, and machine learning is built at the nexus of hardware and software, where systems can draw on dynamic data.
The teams I work with are creating a seamless integration between software and hardware to give eyes and brains to factory robots. Our systems, which we call Bright Machines Microfactories, are engineered to learn from the data we feed them, getting more intelligent over time. Just as students should take an active role in their learning, our intelligent software gives our factory robotic cells a dynamic brain so they can actively learn. This stands in stark contrast to the typical industrial robot found on most factory floors.
From teach to learn
You only need to visit a factory to see firsthand how most of the current generation of robots perform tasks today. Most of these robots have a teach pendant affixed to them. When a robot needs to perform a task, say a pick-and-place function in which the robot selects parts and places them on a circuit board, a technician must “teach” the robot exactly where to find the items and where to place them. The technician uses the teach pendant to jog the robot arm and record points along a path, storing this information on the robot itself. That path, and those coordinates, are essentially locked in the “black box” of the robot’s PLC. This is a painfully slow, error-prone, and highly manual process, and it must be repeated for each robot. Calibrating and configuring every machine on the factory floor this way costs real dollars in people’s time and in lost market opportunity as unbuilt products wait for their lines to be configured. Like I said, painful—not just for the technician but for the manufacturer, the company producing the product, and even for consumers.
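To make the contrast concrete, here is a minimal sketch of what the teach-pendant model amounts to. The class and field names are invented for illustration, not any vendor's actual robot API: jogged coordinates get recorded into one robot's own storage, and an identical robot next to it knows nothing about them.

```python
# Illustrative sketch only: names are hypothetical, not a real robot controller API.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float, float]  # x, y, z in the robot's own frame

@dataclass
class TaughtRobot:
    """A robot programmed the traditional way: points live on this robot only."""
    name: str
    waypoints: List[Point] = field(default_factory=list)

    def teach_point(self, point: Point) -> None:
        # A technician jogs the arm here and presses "record" on the pendant.
        self.waypoints.append(point)

# Every robot on the line must be taught by hand, one point at a time.
cell_a = TaughtRobot("cell_a")
cell_a.teach_point((120.0, 45.0, 10.0))   # pick location
cell_a.teach_point((300.0, 80.0, 12.5))   # place location

# A second, identical cell knows nothing about cell_a's points:
cell_b = TaughtRobot("cell_b")
assert cell_b.waypoints == []  # the "black box" problem: nothing is shared
```

Scaling this model means repeating the jog-and-record loop on every single cell, which is exactly the cost described above.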
Companies like mine are changing this paradigm and enabling truly intelligent industrial robots. Rather than manually configuring individual robots, we focus on platforms that enable robotic cells to learn in an automated, repeatable fashion. Here’s how we develop robots that are primed for learning:
- Modular Robots with Holistic Knowledge: Our microfactories consist of flexible robots that can be configured with cameras, end-of-arm tools, conveyors, and more, all tying into our software, which we call Brightware. This gives us a more thorough understanding of what the robot sees and does, laying the groundwork to generate and share useful data.
- Recipe Creation: Rather than programming a task on a specific robot, a technician creates a recipe—a high-level, object-oriented description of the tasks to be done that exists separately from any given robot. Because recipes are machine-independent, they can be deployed to multiple machines, enabling automation to scale rapidly.
- Digital Twin Simulation: In the near future, recipes will also be simulated on a digital twin, a virtual model of the robot and its tasks. With a digital twin, the technician can work from a CAD model to create a set of paths and motion goals that avoid collisions, iterating rapidly in a virtual environment before deploying to the factory floor. The simulation extracts a path, that path becomes part of the recipe, and when the recipe is deployed the robot learns what to do.
- Data for Training and Testing: Our microfactories are designed to collect and use data through a hybrid cloud, so as the robots run recipes the entire network gets brighter, learning from each production run. In the near term, this is machine learning at work, enabling rapid decision-making and more efficient microfactories. In the longer term, it lays the foundation for closed-loop learning.
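The recipe idea above can be sketched in code. This is a hypothetical model under my own naming (`Recipe`, `Step`, `deploy` are illustrative, not Brightware's real API): the task description lives apart from any robot, so one recipe can configure an entire line of cells without per-robot reprogramming.

```python
# Hypothetical sketch of a machine-independent "recipe"; not Brightware's actual API.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Step:
    """One high-level action, expressed in terms of parts, not joint angles."""
    action: str      # e.g. "pick" or "place"
    target: str      # e.g. a part tray or board location identifier

@dataclass(frozen=True)
class Recipe:
    """An object-oriented task description that exists apart from any robot."""
    name: str
    steps: List[Step]

class RoboticCell:
    def __init__(self, cell_id: str):
        self.cell_id = cell_id
        self.program: List[Step] = []

    def deploy(self, recipe: Recipe) -> None:
        # Each cell translates the same shared recipe into its own motions.
        self.program = list(recipe.steps)

# One recipe, written once...
pick_and_place = Recipe("pcb_pick_and_place", [
    Step("pick", "resistor_tray"),
    Step("place", "board_slot_3"),
])

# ...deployed to many cells at once, with no per-robot teaching.
line = [RoboticCell(f"cell_{i}") for i in range(3)]
for cell in line:
    cell.deploy(pick_and_place)

assert all(cell.program == pick_and_place.steps for cell in line)
```

The design choice that matters here is the separation: because the recipe is data rather than a program stored on one controller, simulating it on a digital twin or deploying it to a new line is just handing the same object to a different consumer.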
New model, new mindset
In the old model, each piece of equipment on the factory floor had to be built and programmed to perform a particular task. Each time a new production line needed to be set up, a technician had to program each robot with a teach pendant, storing the data on that individual robot. Next-generation production lines, however, are built to scale using shared instructions. Each robot has a holistic knowledge of its tasks – from the robot arm, to the computer vision system, to the end-of-arm tool – all working together to perform a series of tasks. This knowledge can be shared across any number of robots, which lets our customers replicate factory functions on multiple production lines almost instantaneously, driving radical efficiency. Once the robots are all working from these shared tasks, they can begin learning from each stage of the production cycle, improving the efficiency of future production runs.
Changing an established mindset and learning new ways of doing things always feels strange in the beginning. But just like it felt odd the first time you called up a rideshare from your smartphone, entered your credit card number onto a website to purchase something you had not seen or touched in person, or let algorithms suggest movies or books you might enjoy, at some point we will marvel at how we did things in factories “the old way.”
Of course, machines don’t experience these emotions. But the same principles of education that apply to students in a classroom apply to robots in a factory. When we combine intelligent software with flexible robotics, configuration can be scaled cheaply and easily. Shifting the paradigm from passively teaching each individual robot to active, simulation-driven configuration in which robots share data and knowledge will unlock real benefits: shorter set-up times, lower error rates, lower cost, and improved safety. This is the bright way.