Integrating multiple hardware and software components so they operate cooperatively is one of our key capabilities, and it is often critical to high-level system operation. RPL develops and commercialises comprehensive systems tailored to industry requirements through extensive consultation with clients and the relevant industries. From technology validation, 3D computer design, prototyping, development, integration and commercialisation right through to commercial production, RPL has the capability to cover most technological requirements.
RPL uses a range of industrial sensors to achieve system awareness. These range from higher-level sensors, such as cameras for machine vision, to devices that directly signal that an event has occurred. Where necessary, several sensors can be calibrated to work together (sensor fusion) to improve awareness and make detection more robust.
Developing image processing algorithms for machine vision is one of our specialties. Machine vision gives systems a high level of awareness and accuracy, especially when dealing with a diverse range of objects, environments, or organic items such as fruit. Our capabilities go beyond processing two-dimensional images: they include depth analysis of surfaces and shapes using stereo vision, Time-of-Flight cameras and fusion with other sensors such as LiDAR. Sensor fusion combines the data of several sensors, allowing the strengths of each sensor to be exploited and providing more information for better automated decision making. RPL uses machine vision for tasks like:
Figure 1: Analysis output for machine vision: detection of kiwifruit for automated harvesting
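As a minimal sketch of the kind of colour-based segmentation that underpins fruit detection (the thresholds, function names and synthetic image here are illustrative assumptions, not RPL's production pipeline):

```python
import numpy as np

def detect_fruit_pixels(rgb, g_min=100, rg_margin=30):
    """Return a boolean mask of pixels whose green channel dominates,
    a simple proxy for 'fruit-coloured' regions in an RGB image."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    return (g >= g_min) & (g - r >= rg_margin)

def centroid(mask):
    """Mean (x, y) position of the detected pixels, or None if empty."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

# Synthetic test image: dark background with one bright green patch.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:60, 30:50, 1] = 200  # green patch standing in for a fruit
mask = detect_fruit_pixels(img)
print(centroid(mask))  # -> (39.5, 49.5), the centre of the patch
```

A production system would replace the single threshold with trained classifiers and add depth information, but the output of each stage is the same kind of pixel mask and target coordinate shown here.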
LiDAR is an accurate, fast and reliable form of object detection, even in dusty and wet outdoor environments. LiDAR configurations range from single-point distance sensors through single-plane and multi-plane scanners to full three-dimensional sensors. RPL currently uses LiDAR for:
Figure 2: SICK two dimensional LiDAR unit used for scanning operations
Sensor mapping, or data fusion, gives multiple sensors or devices a common coordinate frame. The mapping process translates the coordinates of one sensor or system into that common frame. This allows coordination between:
Figure 3: LiDAR data points mapped into a machine vision image. The green and pink circles are the LiDAR data points detected by the scanner, automatically mapped to their corresponding image locations. This combines the reliable detection of LiDAR with the machine vision system's ability to classify the objects. The green line is the path that the navigation algorithms, working from the LiDAR data, have determined the robot should follow.
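The mapping shown in Figure 3 can be sketched as a standard rigid transform followed by a pinhole projection. The extrinsics, intrinsics and function names below are illustrative assumptions for a generic calibrated camera, not RPL's actual calibration code:

```python
import numpy as np

def lidar_to_pixel(points_lidar, R, t, fx, fy, cx, cy):
    """Project 3-D LiDAR points into pixel coordinates.

    R, t:            extrinsic rotation and translation taking the LiDAR
                     frame into the camera frame (the 'mapping' step).
    fx, fy, cx, cy:  pinhole camera intrinsics (focal lengths and
                     principal point, in pixels).
    """
    pts_cam = points_lidar @ R.T + t          # LiDAR frame -> camera frame
    u = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx
    v = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# Identity extrinsics for illustration: the two frames coincide.
R = np.eye(3)
t = np.zeros(3)
# A point 2 m ahead on the optical axis projects to the principal point.
pix = lidar_to_pixel(np.array([[0.0, 0.0, 2.0]]), R, t,
                     fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(pix)  # -> [[320. 240.]]
```

Once each LiDAR return has a pixel location, the vision system can classify the image region around it, which is exactly the division of labour described for Figure 3.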
There is an extensive range of industrial sensors that RPL uses to achieve robust automation. Some of these include: