The time: 1961. General Motors' deployment of robotic automation at its Ternstedt plant near Trenton, N.J., started U.S. manufacturing on the path to realizing a future depicted in Karel Capek's 1921 play "R.U.R." (Rossum's Universal Robots). Capek's vision was of millions of mechanical workers -- robots, from the Czech robota, meaning forced labor or drudgery.
Although U.S. robot numbers are not yet measured in millions, the industrial automatons are nonetheless playing strategic roles in U.S. manufacturing competitiveness, says Jeffrey A. Burnstein, executive vice president, Robotic Industries Association. RIA estimates that more than 171,000 robots are now at work in U.S. factories, placing the U.S. second only to Japan in overall robot use. Worldwide, there are more than a million industrial robots in operation, Burnstein notes.
To properly appreciate the value contribution of that installed base, consider that those mechanical workers are for the most part succeeding despite being blind, deaf and without a sense of touch.
But that's changing. For example, robotic vision and other intelligence features were strong trends at the recent RIA 2007 International Robots & Vision Show. One example is a new intelligent welding robot from Fanuc Robotics America Inc.
Motoman Inc. demonstrated a new vision-guided robot solution at the show that picks randomly oriented automotive components out of a bin and places them on a table; the robot then places the parts individually into another bin. The robot accesses part-position data from the vision system over either a serial or DeviceNet interface. The vision software supports true 3D guidance (X, Y, Z, yaw, pitch, roll) with one, two or three cameras, without range sensors or lasers. The single-camera option requires a robot-mounted camera and multiple inspections; with the multiple-camera option, the cameras can be mounted on the robot or at a fixed location, allowing greater flexibility and accommodating irregular part shapes. The technology makes complex bin-picking applications possible, even against confusing backgrounds.
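A pose like the one this system reports (X, Y, Z, yaw, pitch, roll) is typically handled as a rigid transform and chained through a camera-to-robot calibration before the robot can pick the part. A minimal sketch of that bookkeeping, using only NumPy -- the calibration and part values are entirely hypothetical, and this is not Motoman's API:

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw angles (radians), Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def pose_to_transform(x, y, z, roll, pitch, yaw):
    """4x4 homogeneous transform from a 6-DOF pose (meters, radians)."""
    T = np.eye(4)
    T[:3, :3] = rpy_to_matrix(roll, pitch, yaw)
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical calibration: where the camera sits in the robot base frame
# (here, 1.2 m up, looking straight down -- pitch of pi).
T_base_cam = pose_to_transform(0.5, 0.0, 1.2, 0.0, np.pi, 0.0)
# Part pose as the vision system might report it, in the camera frame.
T_cam_part = pose_to_transform(0.1, -0.05, 0.8, 0.0, 0.0, 0.3)
# Chain the transforms to get the pick target in robot coordinates.
T_base_part = T_base_cam @ T_cam_part
print(np.round(T_base_part[:3, 3], 3))  # part position in the base frame
```

The same chaining extends naturally to the multi-camera case: each camera just contributes its own calibrated `T_base_cam`.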
Motoman says the customizable human machine interface (including menus) allows users to change system parameters, calibrate the system and see real-time inspection results. Integrated 2D solutions are also available.
Adds Greg Garmann, software and controls technology leader for Motoman: "It can be a trade-off. By adding more intelligence to the robot and camera systems, the hardware and tooling investment can be simplified and diminished."
In addition to providing manufacturing flexibility to readily accommodate product changes, Boatner says today's vision systems are far less expensive. For example, in the mid-1980s a flexible manufacturing system went online sporting an elaborate $900,000 3D robot-guidance system. By 1998, the average cost of an implementation was down to $44,000, according to the Automated Imaging Association. And the downward trend didn't stop there.
"Vision has never been more affordable [than now]," says Fanuc Robotics' Dick Johnson, general manager, material handling. For example, Fanuc Robotics offers 2D visual robot guidance for $7,995 and visual error proofing for $4,995 with its robots.
"In robotics, the important factor contributing to price reductions was the emergence of the automakers as the large volume early adopters of the technology," says Kevin Kozuszek, director of marketing, Kuka Robotics Corp.
Offering Sight to Blind Robots
Vision hardware is becoming much more reliable, notes Johnson. "At one point vision algorithms ran on expensive, complex dedicated hardware. This was then replaced with systems that made use of personal computer hardware." He reports that the new trend is to offer vision built right into the robot, as with the Fanuc Robotics iRVision, or to supply small cameras from companies like Cognex. "The elimination of hard disks and operating systems designed for personal use greatly increases the system reliability."
Expect vision systems with simpler lighting requirements, says Johnson. "Vision systems are becoming more immune to lighting variations. In the case of Fanuc Robotics vision, the programmer can take advantage of multi-exposure control to snap the same image with different exposures. That allows the vision algorithm a wider range of operation and the ability to compensate automatically for lighting variations through the day." Optional ring lights also simplify the engineer's job of properly lighting the part. The ring lights attach directly to the camera and provide a ring of LEDs around the camera lens. This ensures that the light is directed right where the camera needs it, explains Johnson.
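The multi-exposure idea Johnson describes is easy to illustrate: snap the same scene at several exposures and keep the one the algorithm can actually work with. A toy sketch -- not Fanuc's implementation; the thresholds and simulated images are invented -- that scores each grayscale exposure by its fraction of clipped (near-black or saturated) pixels:

```python
import numpy as np

def best_exposure(images, low=10, high=245):
    """Given the same scene snapped at several exposures (grayscale
    uint8 arrays), return the one with the fewest clipped pixels --
    a crude stand-in for compensating for lighting variation."""
    def clipped_fraction(img):
        return np.mean((img <= low) | (img >= high))
    return min(images, key=clipped_fraction)

# Simulated exposures of one scene: too dark, usable, and blown out.
rng = np.random.default_rng(0)
scene = rng.integers(60, 200, size=(32, 32))
dark = (scene * 0.1).astype(np.uint8)                    # near black
good = scene.astype(np.uint8)                            # in range
bright = np.clip(scene * 2.0, 0, 255).astype(np.uint8)   # saturated
chosen = best_exposure([dark, good, bright])
```

A production system would more likely fuse the exposures or pick per-region, but the selection criterion -- avoid clipped pixels -- is the same.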
The need for visual error proofing is also driving the robot/vision convergence. Early robots were sightless, leaving manufacturers to contend with the loss of operator feedback on process abnormalities. Johnson's example: "An operator might note that a part is missing a feature or that a label is being placed upside-down. Blind robots won't be aware." While low-cost error-proofing systems need to be programmed, they will perform even better than the human operator, Johnson says.
"As manufacturers strive for increased quality, the vision solution can offer an advantage," Johnson continues. "Six Sigma quality systems, for example, allow only 3.4 defects per million parts. While an operator may tirelessly check 10,000 or even 100,000 parts for a given defect, a properly programmed robot will find all three defects in a run of 1 million parts. The trend toward use of visual error-proofing allows robotic systems to improve quality by checking and taking action to reject defective parts, thus saving on the costs associated with scrap, rework, repair and warranty."
Vision capability and accompanying accuracy improvements are helping to spur more diverse robotic applications. Two examples were honored at the robotics show as winners in the RIA's user recognition program. The High Throughput Screening Core facility at the Memorial Sloan Kettering Cancer Center in New York, for instance, uses robots and machine vision to screen large chemical libraries against various cancer targets. "For economic reasons, many pharmaceutical companies aren't willing to research certain types of rare cancers," explains Dr. Hakim Djaballah. The system was designed internally at the center and integrated by the laboratory automation and integration group at Thermo Fisher Scientific.
Molding International & Engineering uses a vision-guided robot system with end-of-arm tooling to manufacture insert-molded electrical connectors. Claimed benefits include increased plant capacity, quality improvements, reduced variation in molding processes and elimination of work-in-progress inventory. Automation suppliers include Denso Robotics and Tensor Automation.