ARTICLE

Machine vision in focus

03 April 2019

As the use of automation has become more widespread, so has demand for one of its key enabling technologies - machine vision. In particular, rising demand for quality inspection and an increasing migration towards 3D-based machine vision systems are fuelling growth. Charlotte Stonestreet casts an eye on some of the latest developments.

According to Neil Sandhu, SICK UK’s National Product Manager for Imaging, Measurement & Ranging, the beauty of machine vision lies in its ability to perform where humans fail - conducting relentless and repetitive tasks such as quality inspection, measurement or counting of product features over extended periods.

But in the past, an ambition to use automated systems to ‘see’ objects in three dimensions remained the ‘holy grail’ for many manufacturers. Specialist programming expertise was always needed to take the raw data output and configure it for different factory control networks. In turn, huge amounts of processing power and bulky equipment were required – not to mention a significant amount of hard cash.

3D vision offers important advantages over 2D for some applications because it can measure height, depth and volume. Plotting x, y and z axes enables, for example, surface indentations to be examined and graded, or the depth of product fill in containers to be assessed. By comparison, two-dimensional inspection systems are limited to detecting a flat profile; 2D cannot measure the depth of a defect or be used to calculate the volume of a product.

3D gets smart

The good news is that 3D Vision has now become much more widely accessible. As processing power has exploded, packed into ever-smaller devices, the ability of smart vision sensors to capture, extract, process and communicate 3D data has leapt forward. Smart 3D vision sensors have evolved to become all-in-one solutions that are quick and easy to configure and commission. 

Usually mounted above a conveyor, 3D vision sensors like the SICK TriSpector 1000 comprise a vertically-mounted eye-safe laser with a camera mounted at an acute angle. This angle enables the camera to detect the detailed profile of a product passing through the laser beam, building up a large number of vertical ‘slices’ to make a fully three-dimensional profile. This technique is known as ‘laser triangulation’.
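The geometry behind laser triangulation can be sketched in a few lines: with the camera viewing the laser plane at a known angle, a raised surface shifts the laser line sideways in the image, and that shift maps back to height. This is a minimal illustration only; the function name, pixel scale and angle are assumptions for the example, not details of the TriSpector's actual firmware.

```python
import math

def height_from_shift(pixel_shift, mm_per_pixel, camera_angle_deg):
    """Laser triangulation: a surface raised by height h shifts the
    laser line sideways in the camera image by h * tan(theta), where
    theta is the camera's angle from the laser plane. Invert to
    recover h from the observed pixel shift."""
    shift_mm = pixel_shift * mm_per_pixel
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# Example: a 12-pixel shift at 0.1 mm/pixel with the camera at 30 degrees
# corresponds to a surface roughly 2.08 mm above the reference plane.
h = height_from_shift(12, 0.1, 30)
```

Sweeping this calculation across every column of the image yields one vertical 'slice'; stacking slices as the product moves through the beam builds the full 3D profile.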

Typical TriSpector 1000 applications include checking the contents, count and fill of a container, such as a tray of biscuits, against a taught-in ‘master’ profile.

This “democratisation” of 3D applications will be supported further by advances like SICK’s AppSpace software development platform. AppSpace allows free and flexible customisation of applications on SICK programmable sensors and devices with ‘click and drop’ ease. With AppSpace, rather than being restricted to the available pre-developed proprietary software, machine builders and integrators can tailor-make their own solutions, and even share them with other users in the cloud. SICK’s ambition is to make installing uploaded sensor ‘apps’ on programmable SICK devices from the SICK AppPool as simple as installing an app on a mobile phone.

Robots & cobots

Using 3D vision widens the opportunities to automate pick-and-place robotics solutions for applications like gripping of complex shapes and profiles, picking products with variable heights and picking products individually from a random arrangement in a bin or in-feed.

Now, SICK is working with customers to develop 3D vision-guided systems for smaller-scale robots and cobots. New applications are opening up in part localisation, for example picking specific small parts such as bolts from a mixed-parts bin. The SICK TriSpector P Beltpick enables picking of products from a conveyor through integrated 3D vision robot guidance, with ‘plug-and-play’ support for ABB PickMaster and Universal Robots.

Automation challenge

Soft, squidgy mozzarella cheese balls slithering around a brine-filled, sealed, glossy tubular bag: could there be a tougher challenge than automating the robot pick and place of these delicate individual portions into secondary packaging?

German packaging machinery manufacturer A&F, Automation & Fördertechnik, worked with specialists at SICK to develop a high-speed 3D vision-guided system based on SICK’s IVC-3D smart camera and A&F‘s FlexoPac solution using 4-axis Delta 3 pick-and-place robots. The camera sends data sets including the object centre, orientation and height to the robot controller, which calculates where to pick up the mozzarella bags on a moving conveyor.

Since the camera reports the height, the gripping position can be adapted to take account of the mozzarella ball not being in the centre of the bag, guiding the gripper to pick it gently every time without risking a collision.
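The controller-side arithmetic described above can be sketched as follows: combine the camera's reported centre, orientation and height with simple conveyor tracking to produce a pick pose. All names, the ball-offset model and the fixed-latency belt tracking are assumptions for illustration, not A&F's or SICK's actual interface.

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    x_mm: float       # object centre along the belt, at image-capture time
    y_mm: float       # object centre across the belt
    angle_deg: float  # bag orientation in the belt plane
    height_mm: float  # measured height of the ball's highest point

def pick_pose(det, ball_offset_mm, belt_speed_mm_s, delay_s):
    """Shift the nominal bag centre by the ball's offset along the bag
    axis, then advance along the belt for the time elapsed between
    image capture and the robot's pick."""
    a = math.radians(det.angle_deg)
    x = det.x_mm + ball_offset_mm * math.cos(a) + belt_speed_mm_s * delay_s
    y = det.y_mm + ball_offset_mm * math.sin(a)
    z = det.height_mm  # grip relative to the measured top of the ball
    return (x, y, z)
```

Using the measured height as the gripping z-coordinate is what lets the gripper descend only as far as each individual ball requires, avoiding collisions with off-centre portions.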

The system handles 150 mozzarella bags per minute on two synchronised conveyors. Identified based on their 3D measurements, the bags are placed into three different box types according to five pre-determined packaging plans.

Plug & play robot vision

The latest top-of-the-range addition to RARUK Automation’s Pick-it 3D Robot Vision family is the Pick-it M-HD high-definition 3D camera, which detects almost any small or medium-sized object, made from any material, with even higher accuracy.

Pick-it allows any camera-supported automation application to be built without expert help. There’s no need for complicated programming: Pick-it guides the robot to see, pick and place products from bins, boxes, pallets and tables onto a CNC machine, assembly line, conveyor belt, welding station or work bench.

Simply show an example part to the plug-and-play camera, save this into the teach detection engine, tell Pick-it where to look with a click and drag tool and Pick-it will guide the robot to the nearest pickable part. A typical detection cycle takes less than a second and Pick-it can find multiple parts in one cycle. 

The system can also be connected to the internet for remote monitoring, extending Pick-it’s potential for lights-out operation and integration into the smart factory environment. And as Pick-it can find parts in any location and layout, there is no need for a bulky feeding line with inflexible and expensive elements.

The combination of the latest Pick-it 2.0 3D picking software and the new M-HD camera provides a set-up that is ideal for picking small parts with a very high degree of accuracy. Indeed, it allows RARUK Automation to provide a system that is 30 times more accurate, can detect objects that are 10 times smaller, and processes this information 1.25 times faster.

As with the other Pick-it cameras in the range, the new M-HD model uses structured light to calculate the 3D images. The big advantage of this over the traditional 2D camera is that it does not require special lighting and is immune to reflections.  

Autonomous Machine Vision

With machine vision technologies ever advancing, visual QA is becoming an increasingly popular way to provide consistent and accurate inspection of products for flaws. No longer subject to the long wait times, reliance upon vision integrators, high costs and downtime of previous machine vision solutions, today’s affordability and immediacy of Autonomous Machine Vision systems allow manufacturers to release manpower from manual visual inspection, identify faulty components early on and directly reduce scrap.

According to Nir Zamir, VP of Marketing at Inspekto, adding an Autonomous Machine Vision system to such a production line means that the manufacturer no longer requires employees to work on manual visual inspection. In the UK, the average value added by each employee in automotive engineering is £100,000 per year; by assigning employees to tedious inspection tasks that add no value, the manufacturer forgoes £100,000 per year for each of them. Considering that most facilities run multiple shifts and have several employees assigned to visual QA, over time a simple investment of €9,720 in an Inspekto S70 can save a plant hundreds of thousands. In one recent installation, a single Inspekto S70 is saving the plant €1,420,000 over its depreciation period. This budget can now go towards enhancing production, competitive edge and productivity.
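Using the figures quoted above, the payback arithmetic can be sketched in a few lines. The number of inspectors freed is an assumption for the example, and the article mixes GBP and EUR figures, which are treated here as roughly comparable for a back-of-the-envelope estimate.

```python
# Back-of-the-envelope payback estimate from the figures quoted above.
value_added_per_employee = 100_000  # GBP/year, UK automotive engineering
system_price = 9_720                # EUR, Inspekto S70 list price as quoted
inspectors_freed = 2                # assumption: two shifts, one inspector each

# Value recovered per year by redeploying inspectors to value-adding work.
annual_saving = inspectors_freed * value_added_per_employee

# Years until the system pays for itself (currencies treated as comparable).
payback_years = system_price / annual_saving  # roughly 0.05 years
```

Even with a single inspector freed, the payback period on these numbers is well under a year, which is consistent with the "hundreds of thousands" saving the article describes for multi-shift plants.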

The system’s Plug & Inspect technology makes it suitable for any handling method and any product range, and its deep learning capabilities mean that in minutes the manufacturer can configure the system to inspect multiple products on the same line - something that has historically been impossible.

Plug & Inspect technology also means that systems can be stored until needed and then rapidly set up on the line, with no downtime or labour expense. Plants using Inspekto S70 units are even known to install them during lunch breaks. Alongside this, the manufacturer will benefit from operational savings in lead time, mean time to repair, yield and labour costs.

Autonomous Machine Vision systems can be installed and set up in 30 to 60 minutes, without the intervention of a vision systems integrator. This, combined with an affordable sub-€10K price tag, means that manufacturers can install an Autonomous Machine Vision system at every required point on the production line. This concept, known as Total QA, enables manufacturers to identify defects before they are buried in a product, meaning that fewer products will fail the end-of-line test.

Total QA also prevents the manufacturer from wasting energy on a product that will inevitably be scrapped. On top of this, many useful applications can be stacked on an INSPEKTO Autonomous Machine Vision platform – including Inspekto TRACE that archives all products’ images and data, enabling efficient void-claim rejection and root-cause analysis, further improving productivity and yield.

Integrated system

Another advocate of machine vision in quality assurance is Stephen Hayes, managing director of Beckhoff UK, who points out that even the tiniest measurements of parts, down to the micrometre range, can be carried out easily and thoroughly using the technology. This is a task that would be impossible for the human eye to conduct accurately on a consistent basis.

Until recently, image processing in automation applications has typically been handled separately from other control systems, either situated in a black box on a high-performance computer or implemented directly in specially configured smart cameras.

A disadvantage of hosting the image processing solution on a separate computer in this way is that even the smallest changes require input from a specialist or external system integrator, rather than a programmable logic controller (PLC) programmer. This is an avoidable drain on both time and finances.

In addition to this, traditional image processing methods and software cannot guarantee exact timing in image processing. This is because communication between the image processing and control systems has to be regulated so that results reach the controller within the required time span, without external factors such as the operating system affecting the transmission time.

So, what if manufacturers could not only eliminate the challenges of communication between image processing and control, but also have a system that allows the imaging processing and control components to directly communicate with one another? With Beckhoff’s TwinCAT Vision software, manufacturers can combine both worlds into one integrated system and do exactly this.

TwinCAT Vision adds image processing to a universal control platform that incorporates PLCs, motion control, robotics, high-end measurement technology, Internet of Things (IoT) networks and human machine interfaces (HMIs). An advantage of combining all control functions into one tool is that it means everything is operating in one runtime environment.

Beckhoff has also designed the software around the Gigabit Ethernet-based GigE Vision communication standard, which offers reliable and fast transmission of image data from cameras. This makes it possible for TwinCAT Vision to feed image data directly into the controller memory in real time, updating the user with partial results as they become available.

By incorporating image processing into the main control system, manufacturers can improve machine efficiency by leveraging machine vision capabilities, like those offered by TwinCAT Vision, to enhance operations. Not only can this help companies retain a competitive advantage, but it can assist manufacturers in overcoming the challenges that come with vision tasks and achieve substantial cost savings at the same time.

 