
Basic elements of a machine vision system

Release time: 2023-03-27

A machine vision system consists of many components, including a camera, an image acquisition card, a lighting unit, optical elements and lenses, a processor, software, and a display. Simple machine vision systems can read 1D or 2D barcodes, while more sophisticated systems can verify that the components being inspected meet specific tolerance requirements, are assembled correctly, and are free of defects.

Many machine vision systems are equipped with cameras that use different types of image sensors. To determine the resolution a camera can actually deliver, it is important to consider the number of line pairs per millimeter (lp/mm) that the sensor can resolve, rather than just the effective pixel count.

For example, a typical 2588 x 1958 pixel, 5-megapixel imager with a 1.4 µm pixel pitch can resolve 357 lp/mm, while a 640 x 480 VGA imager with a 5.7 µm pixel pitch can resolve about 88 lp/mm. For imagers of the same size, the smaller the pixel, the more line pairs per millimeter can be resolved.
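
As a sanity check, these lp/mm figures follow from the Nyquist criterion that one line pair spans at least two pixels. The short Python sketch below is a simplified illustration that ignores lens and sampling effects; the function name is ours, not from the article.

    # Limiting sensor resolution from pixel pitch, assuming the Nyquist
    # criterion: one line pair needs at least two pixels.
    def line_pairs_per_mm(pixel_pitch_um: float) -> float:
        return 1000.0 / (2.0 * pixel_pitch_um)

    print(round(line_pairs_per_mm(1.4)))  # ~357 lp/mm for a 1.4 um pixel
    print(round(line_pairs_per_mm(5.7)))  # ~88 lp/mm for a 5.7 um pixel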

Selecting a camera

Cameras used in machine vision systems typically transmit the captured images to a PC over interfaces such as USB 3.0, Ethernet, FireWire, Camera Link, and CoaXPress.

Smart cameras, which integrate machine vision lighting, image capture, and processing, now provide economical solutions for automated vision tasks such as reading barcodes or detecting component defects. While a smart camera's processor may be powerful enough for these tasks, more complex or faster tasks require full-featured vision inspection software to complement the vision hardware.

Lighting elements

With the right machine vision lighting, image features can be captured repeatably at high contrast. The success, reliability, repeatability, and ease of use of a machine vision system are all at risk if the lighting is wrong. To ensure the system is equipped with the correct lighting components, engineers should evaluate different lighting options, ideally in an imaging and lighting lab.

LED lighting is beginning to replace the fluorescent lamps, fiber-optic halogen lamps, and xenon flash lamps commonly used in machine vision systems because of its greater consistency, longer service life, and greater stability. LED lighting is available in a wide variety of colors and can be strobed, a feature that is useful in high-speed machine vision applications.

In addition to the type of lighting, another important factor in image quality is the angle at which the light strikes the object under test. The two most common ways of lighting objects are dark field and bright field lighting.

Dark field lighting illuminates the object from a low angle, so that on a very smooth, mirror-like surface the light reflects away from the camera's field of view. The surface then appears dark, and any bright areas the camera captures correspond to defects or scratches on the surface.

Bright field lighting is the opposite of dark field lighting: the object is illuminated from above so that the reflected light falls within the camera's field of view. In a bright field configuration, light reflected from any discontinuity on the object's surface is not picked up by the camera and appears dark. This technique is therefore used to illuminate diffuse, non-reflective objects.

Color effects

If an application requires a color camera, white light is needed to illuminate the component under inspection. If the colors of the component must be distinguished, the white light should have a uniform spectrum across the entire visible wavelength range so that the colors in the image can be analyzed.

Color in an image can also be distinguished by a monochrome (black-and-white) camera, provided appropriate lighting is selected to illuminate the scene: color differences that the human eye perceives appear to the monochrome camera as differences in brightness that depend on the chosen illumination.

Image processing algorithms

When choosing algorithms to process images, consider the skills of the developers and end users as well as the requirements of the specific vision task.

Ifit provides a graphical visual inspection platform (EMVP) in which, after simple training, users can drag and drop predefined functions to build customized vision algorithms to meet their own needs.

Algorithm categories

Image processing algorithms can be divided into different categories to meet different application requirements.

Preprocessing the image data makes it possible to extract image features. Thresholding is one of the simplest image segmentation methods: it converts a grayscale image into a binary image so that objects can be separated from the background.
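
As a minimal sketch of thresholding, assuming OpenCV (cv2) is available and "part.png" is a hypothetical grayscale image of the part:

    import cv2

    # Read the image as grayscale and let Otsu's method pick the threshold
    # automatically; the result is a binary image separating object from background.
    gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("part_binary.png", binary)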

Other operators, such as image filters, sharpen images and reduce noise, while histogram equalization enhances image contrast. Preprocessing also involves image segmentation, which locates objects or object boundaries that share similar properties in the image, such as color, brightness, or material.
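
Under the same assumptions (OpenCV, the hypothetical "part.png"), a sketch of noise reduction followed by contrast enhancement; the 5 x 5 kernel size is an illustrative choice:

    import cv2

    gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
    # Gaussian filter to suppress noise before further processing
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)
    # Histogram equalization to spread the gray levels and boost contrast
    enhanced = cv2.equalizeHist(denoised)
    cv2.imwrite("part_enhanced.png", enhanced)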

Applying the algorithms

In many vision systems it is important to determine whether a component, or a particular feature of a component, is present. Attributes such as size, shape, or color can be used to identify components, and comparative analysis, blob analysis, model (template) matching, or geometric search tools can identify them in an image.
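
A sketch of model (template) matching used as a presence check, again assuming OpenCV; "scene.png", "template.png", and the 0.8 acceptance score are illustrative values, not from the article:

    import cv2

    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation: scores near 1.0 indicate a close match
    scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    present = best_score > 0.8  # illustrative acceptance threshold
    print(present, best_score, best_loc)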

To distinguish one component from others, relatively simple features found with edge detection operators may suffice. To determine exactly where a component is located, a geometric search or blob analysis is needed.
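
A sketch of simple blob analysis for locating parts, assuming the binary image produced by the thresholding step above; the minimum area of 500 pixels is an illustrative noise filter:

    import cv2

    binary = cv2.imread("part_binary.png", cv2.IMREAD_GRAYSCALE)
    # Label connected regions (blobs) and collect their area and centroid
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    for i in range(1, num):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area > 500:  # ignore small noise blobs
            cx, cy = centroids[i]
            print(f"blob {i}: area={area}, centre=({cx:.1f}, {cy:.1f})")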

To detect defects on components or on a moving web at high speed, comparative analysis or model image matching operators are needed. If defects must be classified as well as detected, blob analysis or edge analysis can measure defect parameters and compare them with known good values.
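
A sketch of comparative analysis against a known-good ("golden") image, assuming the two images are already aligned; the file names and the difference threshold of 40 gray levels are assumptions for illustration:

    import cv2

    good = cv2.imread("golden.png", cv2.IMREAD_GRAYSCALE)
    test = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

    # Pixel-wise difference; large differences mark candidate defects
    diff = cv2.absdiff(good, test)
    _, defects = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    defect_pixels = cv2.countNonZero(defects)
    print("defect area (pixels):", defect_pixels)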

In some inspection applications, sub-pixel resolution can be achieved by measuring the position of lines, points, or edges in the image with an accuracy finer than the pixel grid. This is done by comparing the gray levels of the pixels on either side of an edge and interpolating between them.
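
One simple way to obtain a sub-pixel edge position is to interpolate the gray-level profile across the edge. The numpy sketch below is our illustration of that idea, not the article's algorithm; it finds where a 1-D profile crosses a chosen threshold:

    import numpy as np

    def subpixel_edge(profile, threshold):
        # Index of the first pixel at or above the threshold
        i = int(np.argmax(profile >= threshold))
        if i == 0:
            return 0.0
        lo, hi = profile[i - 1], profile[i]
        # Linear interpolation between the two pixels straddling the threshold
        return (i - 1) + (threshold - lo) / (hi - lo)

    profile = np.array([10, 12, 15, 80, 200, 210], dtype=float)
    print(subpixel_edge(profile, 105.0))  # ~3.21, i.e. between pixels 3 and 4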

