The Smart Way to Inspect 850k Base Stations
Every year, Huawei builds, replaces, or adds capacity to over 850,000 base stations worldwide. The contractors we work with have varying levels of technical expertise, so guaranteeing high-quality construction demands that we spend heavily on sending people to supervise and perform checks.
Huawei’s Global Technical Service carries out over 24 million inspections every year, generating over 45 million pictures and data sets. Inspections are time-consuming and expensive in terms of human resources, and there’s often a lag between the completion of work and inspections, which leads to time wasted making repeated site visits. It also interrupts other work and increases costs. So, how did we solve these problems?
There are many inspections that need to be carried out before working on a base station. These inspections include not just the installation of site equipment, but also the environmental, health, and safety (EHS) rules to be observed by onsite staff.
Inspections can be automated through the use of IT systems, computer vision, and sensors. Most inspections can be performed by vision AI.
During a vision AI inspection, the AI system first detects the object of the inspection. Applying the installation and construction requirements, it then checks equipment types, quantities, quality, positioning, and connections. On this basis, it can judge if the base station equipment has been installed as required.
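The detect-then-check flow described above can be sketched in a few lines. The detection records, confidence threshold, and checklist rules below are hypothetical illustrations, not Huawei's actual inspection criteria.

```python
# Minimal sketch of a vision AI inspection: detect equipment, then apply
# installation requirements to the detections. All values are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # equipment or component class
    confidence: float  # detector score
    bbox: tuple        # (x, y, w, h) in pixels

def inspect(detections, checklist, min_conf=0.5):
    """Compare detected equipment counts against required counts."""
    counts = {}
    for d in detections:
        if d.confidence >= min_conf:
            counts[d.label] = counts.get(d.label, 0) + 1
    failures = []
    for label, required in checklist.items():
        found = counts.get(label, 0)
        if found < required:
            failures.append(f"{label}: expected {required}, found {found}")
    return ("pass", []) if not failures else ("fail", failures)

# Example: a site photo where one grounding cable is missing.
detections = [
    Detection("antenna", 0.97, (10, 20, 200, 400)),
    Detection("grounding_cable", 0.88, (50, 300, 30, 120)),
]
checklist = {"antenna": 1, "grounding_cable": 2}
verdict, issues = inspect(detections, checklist)
print(verdict, issues)
```

In practice the detector is a trained neural network and the checklist comes from the installation plan; the same pass/fail logic then applies.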
When completing acceptance on equipment in an indoor equipment room, the AI system can check whether the equipment type, the number and connections of its interfaces, and its labels, grounding, and switches meet installation requirements. When the installation plan specifies an exact model number and capacity, these must be identified and checked as well.
During acceptance checks for outdoor wireless sites, the AI system identifies whether the radio equipment and its waterproof taping, grounding, and color-coding labels meet installation requirements. This involves judging whether the installation complies with standards and whether the cabling and connections have been properly installed.
EHS inspections involve checks on workers and their hard hats, high-vis jackets, gloves, and other safety equipment, as well as checks on site signage, barriers, fire safety equipment, first aid kits, and other safety provisions.
The staff responsible for algorithms and for site installation and construction identified 228 inspections that could be automated using vision AI. On a random sample of AI inspections, human graders agreed with images marked as "pass" by the AI system 99.64% of the time, while a large proportion of images marked as "fail" by the AI system indeed contained quality issues. The results were not perfect, but they indicate that vision AI can be rolled out for large-scale use in site installation and construction.
To date, 336,000 base station sites have been inspected using vision AI, with a total of over 2 million separate inspections completed.
Equipment installation and site construction acceptance include many details, logical relationships, and 3D structures, each of which can pose particular challenges for vision AI.
1. Visual Reasoning
Equipment installation involves not only equipment inspection and identification, but also judgments concerning the connection and positioning of equipment. This demands logical reasoning in addition to the identification of equipment.
For simple situations, such as the positioning of rigid equipment and components, rule-based judgments are sufficient. But for more complex situations, such as flexible connections, applying logical rules alone is highly inaccurate. Accurate evaluation also requires reference to other contextual information and images.
A deep learning inference algorithm that combines a convolutional neural network with deductive rules can learn to make judgments about connections, with significantly higher accuracy on outdoor connections than a rules-based model alone.
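The hybrid idea above can be illustrated with a toy example: a learned score decides whether a cable end plausibly mates with a port, and a deductive rule then constrains the answer. The scoring function below is a crude stand-in for a trained CNN, and the compatibility rule is a hypothetical example, not the production logic.

```python
# Toy hybrid of a learned connection score and a deductive rule.
# The distance-based score stands in for a trained CNN; real systems
# score image evidence, not box coordinates alone.

def learned_connection_score(cable_end, port):
    """Stand-in for a CNN that scores whether a cable end mates a port."""
    (x1, y1), (x2, y2) = cable_end["center"], port["center"]
    dist = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return max(0.0, 1.0 - dist / 100.0)  # closer boxes score higher

def deduce_connection(cable_end, port, threshold=0.6):
    """Deductive rule on top of the learned score: a connection is valid
    only if the connector types match AND the learned score is high."""
    compatible = cable_end["type"] == port["type"]
    return compatible and learned_connection_score(cable_end, port) >= threshold

cable = {"type": "RF", "center": (100, 100)}
port_a = {"type": "RF", "center": (110, 105)}     # nearby and compatible
port_b = {"type": "power", "center": (112, 102)}  # nearby but wrong type
print(deduce_connection(cable, port_a))  # True
print(deduce_connection(cable, port_b))  # False
```

The rule prunes cases the learned model alone would confuse, which is the general benefit of pairing deduction with a CNN.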
Deep learning models are now in widespread use for object detection, image classification, and segmentation. Visual reasoning, by contrast, is still at the academic research stage and sees extremely limited use. This project was the first major application of AI visual reasoning in the industrial vision domain.
2. 3D Modeling
During an installation and construction project, many of the objects are standardized items such as equipment and connecting pieces. A 3D model can therefore be used to simulate various scenarios and generate massive numbers of training samples, potentially reducing both the number of photographs taken onsite and the amount of manual labeling.
However, if trained on rendered images alone, the deep learning model could easily learn rendering artifacts as the basis for identifying objects, producing poor results on real photographs. To address this problem, we render simulated objects against real background images, and use transfer learning and blended learning techniques that combine a small number of real site images with AI-generated images, gradually enabling the automatic generation of most training samples.
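One simple form of the blending described above is a training schedule that starts from mostly real photographs and gradually raises the share of rendered samples per batch. The sample names and curriculum fractions below are illustrative assumptions, not the production setup.

```python
# Sketch of blended training data: mix real site photos with 3D-rendered
# synthetic images, shifting the ratio over the curriculum. Illustrative only.
import random

def blended_batch(real, synthetic, synth_fraction, batch_size, rng):
    """Draw a training batch mixing real and rendered samples."""
    n_synth = round(batch_size * synth_fraction)
    batch = rng.sample(synthetic, n_synth) + rng.sample(real, batch_size - n_synth)
    rng.shuffle(batch)
    return batch

rng = random.Random(0)
real = [f"real_{i}" for i in range(20)]          # few photos taken onsite
synthetic = [f"render_{i}" for i in range(200)]  # images from the 3D model

# Curriculum: lean on real photos early, then mostly synthetic renders.
for epoch, frac in enumerate([0.2, 0.5, 0.8]):
    batch = blended_batch(real, synthetic, frac, batch_size=10, rng=rng)
    n_synth = sum(s.startswith("render") for s in batch)
    print(f"epoch {epoch}: {n_synth}/10 synthetic")
```

In a real pipeline the batch would feed a fine-tuning step on a pretrained detector, which is where the transfer learning comes in.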
3. Detecting Small Objects
Interfaces, grounding, color-coding labels, and equipment labels are all very small objects. Some occupy only a few tens of pixels, so their features are indistinct. During acceptance checks, many such small objects must be counted with great precision, which compounds the technical challenge.
We observed that although the small objects are unclear in themselves, what they are can often be inferred from their surroundings. We therefore developed a multi-scale network model that combines the low-level visual information still visible in the image with semantic contextual information to aid identification.
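The contextual "guessing" described above can be reduced to a toy decision rule: a weak score from the tiny patch is combined with evidence from larger objects detected nearby. Real systems fuse multi-scale CNN feature maps; the scores, class names, and context boost below are hypothetical.

```python
# Toy illustration of context-aided small-object classification.
# A blurry few-pixel blob is hard to classify alone, but a nearby
# grounding bar raises the odds that it is a grounding label.

def is_grounding_label(patch_score, context, threshold=0.6):
    """Combine a weak patch score with contextual evidence.
    patch_score: classifier confidence from the tiny crop (hypothetical)
    context: labels of larger objects detected around the patch
    """
    boost = 0.3 if "grounding_bar" in context else 0.0
    return patch_score + boost >= threshold

# Alone, a 20-pixel blob scores too low to call...
print(is_grounding_label(0.35, context=[]))                 # False
# ...but next to a grounding bar it clears the threshold.
print(is_grounding_label(0.35, context=["grounding_bar"]))  # True
```

A multi-scale network achieves the same effect end to end, by feeding both fine-grained and wide-context features into the classifier rather than hand-coding the boost.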
Many technical challenges remain in applying computer vision to automated site acceptance. These include equipment pose and angle estimation, deformed and abnormal components, text recognition at skewed angles, highly similar equipment, severe occlusion, 3D object detection, learning from small samples, text recognition in natural scenes, and fine-grained image classification.
However, these are challenges worth overcoming.