ELITE: Evolution From “Cerebellum” To “Brain” Under the Development Path of Machine Intelligence

“The human brain is divided into two parts: the cerebellum and the cerebrum. The cerebellum focuses on the control of movement, while the cerebrum is responsible for thinking, learning, language, and so on. Our expectation for robots is that they take on not only the work of the cerebellum but also the work of the brain. That is the development direction of future robot control,” said ELITE Chairman Cao Yunan on the afternoon of January 11 at the 2017 Annual Conference of Advanced Robotics, themed “New Era, New Opportunity, New Mission.”

 Imitation | Convergence | Evolution

          Cao Yunan divided the development of machine intelligence from the cerebellum to the brain into three major phases: imitation, convergence, and evolution.

“Imitation” is the high-speed, high-precision motion control and process realization of industrial robots. Cao Yunan believes that successfully passing the imitation phase requires getting ten things right: a hybrid real-time operating system, a high-performance multi-core ARM processor, a high-speed servo bus, speed look-ahead for high-speed smooth motion blending, offline programming and multi-point teaching, linkage and tracking of multiple external axes, real-time torque feedforward, an implemented process (technology) library, DH parameter calibration and tool self-calibration, and singularity avoidance with joint-space optimization.
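To make one item on this list concrete, the sketch below shows how a tool pose can be computed from standard DH parameters, the quantities that DH parameter calibration estimates. It is an illustrative Python example under textbook conventions, not ELITE's controller code, and the 6-axis DH table in it is made up purely for demonstration.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one link from standard DH parameters."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, joint_angles):
    """Chain the per-link transforms to get the tool pose in the base frame."""
    T = np.eye(4)
    for (a, alpha, d, theta_offset), q in zip(dh_table, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta_offset + q)
    return T

# Hypothetical 6-axis DH table (a, alpha, d, theta_offset) -- illustrative values only.
DH_TABLE = [
    (0.0,  np.pi / 2, 0.40, 0.0),
    (0.45, 0.0,       0.00, 0.0),
    (0.04, np.pi / 2, 0.00, 0.0),
    (0.0, -np.pi / 2, 0.45, 0.0),
    (0.0,  np.pi / 2, 0.00, 0.0),
    (0.0,  0.0,       0.10, 0.0),
]

if __name__ == "__main__":
    pose = forward_kinematics(DH_TABLE, np.zeros(6))
    print("Tool position (m):", pose[:3, 3])
```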

What are the main capabilities of industrial robots today? Cao Yunan believes that typical robot application processes include torque-sensing processes in the 3C industry, plasma cutting with torque sensors, and offline programming. Beyond these, relatively simple palletizing, spot welding in the automotive industry, welding with external axes, loading and unloading of machine tools, polishing, and mixing of materials guided by visual identification all offer opportunities for domestic robots.

Continuing along the main line of “control,” machine intelligence develops into the second stage: convergence. At this stage, in addition to basic motion control, the robot incorporates some human attributes. The collaborative robot is a typical product of this stage.

Cao Yunan said, “The seven-degree-of-freedom lightweight collaborative robot is not a supplement to traditional industrial robots, nor is it an exception. It is a next-generation robot that integrates visual perception, force perception, autonomous obstacle avoidance, autonomous path planning, autonomous energy-consumption assessment, AR interaction, APP-based teaching, and greater safety.” He predicted that collaborative robots will be used ever more widely in areas such as 3C assembly, intelligent logistics sorting, medical treatment, education, and new retail.

Now, what are the capabilities of collaborative robots? First, a mature collaborative cycle time (takt time). Second, end-effector hovering, also known as the “collaborative end pose,” whose benefit comes from the robot’s ability to avoid obstacles autonomously through optimal strategies. Third, collision detection, which Cao Yunan calls “passive safety.” Fourth, simplified teaching, which keeps costs under tight control on the one hand and lowers the difficulty of use on the other. Fifth, “Orchid Finger” free-drive teaching based on force control, a capability aimed mainly at the free-drive teaching of traditional robots.
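The “passive safety” collision detection Cao Yunan mentions is commonly implemented by comparing measured joint torques with the torques a dynamic model expects, and stopping when the residual grows too large. The sketch below illustrates only that general idea: the thresholds and the read_joint_torques / model_torques / soft_stop calls are hypothetical stand-ins, not an ELITE API.

```python
import numpy as np

# Illustrative per-joint torque residual thresholds (Nm), not real product values.
COLLISION_THRESHOLD = np.array([8.0, 8.0, 6.0, 4.0, 3.0, 3.0])

def detect_collision(measured_torque, expected_torque, threshold=COLLISION_THRESHOLD):
    """Flag a collision when any joint's torque residual exceeds its threshold."""
    residual = np.abs(np.asarray(measured_torque) - np.asarray(expected_torque))
    return bool(np.any(residual > threshold)), residual

def control_cycle(robot):
    """One servo cycle: safe-stop if the measured torques deviate from the model."""
    measured = robot.read_joint_torques()          # hypothetical driver call
    expected = robot.model_torques(robot.command)  # hypothetical dynamics-model call
    hit, _residual = detect_collision(measured, expected)
    if hit:
        robot.soft_stop()                          # hypothetical safe-stop call
    return hit
```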

The third stage is evolution, realized as “intelligent robot + AI.” At this stage we will witness the birth of the “smart worker.” The robot will develop into a secondary-development and integrated-application platform for intelligent manufacturing, facing the Industrial Internet of Things with intelligent control as the carrier, and creating a brand-new ecosystem built around intelligent robots.

Future Scenario 1: Real-Time Object Sorting and Grasping for Intelligent Logistics

Deep learning (DL) algorithms, depth vision sensors, large-scale 2D/3D physics-based digital simulation, and simulated grasp-pose data sets provide a solid foundation for “smart workers” in industrial environments and for robot applications in service industries. At this point the collaborative robot can identify and grasp arbitrary rigid objects, with a high grasp success rate and stable grasping motions. The robot can also plan its path and control its pose autonomously without teaching, providing a cost-effective overall solution.
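A common pattern behind this kind of real-time sorting is to sample candidate grasps on a depth image and let a learned model score them. The sketch below illustrates that pattern only, under stated assumptions: the model's predict_quality method, the camera, and the robot interfaces are hypothetical placeholders, not any particular vendor's API.

```python
import numpy as np

def sample_grasp_candidates(depth_image, n=64, rng=None):
    """Sample candidate grasps as (row, col, rotation angle) over the image."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = depth_image.shape
    rows = rng.integers(0, h, size=n)
    cols = rng.integers(0, w, size=n)
    angles = rng.uniform(0.0, np.pi, size=n)
    return np.stack([rows, cols, angles], axis=1)

def select_best_grasp(model, depth_image):
    """Score each candidate with the learned model and return the best one."""
    candidates = sample_grasp_candidates(depth_image)
    scores = np.array([model.predict_quality(depth_image, c) for c in candidates])
    return candidates[int(np.argmax(scores))], float(scores.max())

# Usage sketch (hypothetical camera/robot interfaces):
#   depth = camera.capture_depth()
#   grasp, quality = select_best_grasp(grasp_model, depth)
#   if quality > 0.8:
#       robot.execute_pick(grasp)
```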

Future Scenario 2: Home Robots

This scenario is mainly about grasping and manipulating deformable objects. A robot capable of folding and stacking clothes represents the vision of, and the continuing effort toward, robots entering the home and taking over housework. The collaborative robot uses a deep reinforcement learning (DRL) algorithm and a depth vision sensor to locate the clothing’s folding points accurately, and it automatically optimizes the motion trajectory to achieve the folding and stacking result. The system also uses rapid simulation modeling and transfer learning (TL) to speed up learning and reduce the cost of data acquisition, and finally maps the simulation results onto the real robot’s operations.
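The training recipe described here, learning in a fast simulator with DRL and then transferring the result to the real robot, can be sketched as two calls to the same episodic training loop. The code below is only a schematic of that idea: the agent, sim_env, and real_env objects are hypothetical placeholders with a gym-style reset/step interface, and no specific DRL algorithm is implied.

```python
def train(agent, env, episodes):
    """Generic episodic RL loop: act, observe the reward, update the policy."""
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)
            next_obs, reward, done = env.step(action)
            agent.update(obs, action, reward, next_obs, done)
            obs = next_obs
    return agent

def sim_to_real(agent, sim_env, real_env):
    """Hypothetical sim-to-real schedule: long simulated training, short real fine-tuning."""
    # 1. Learn the bulk of the folding behavior cheaply in simulation.
    agent = train(agent, sim_env, episodes=10_000)
    # 2. Transfer: keep the learned policy and fine-tune briefly on the real
    #    robot to close the gap between simulation and reality.
    agent = train(agent, real_env, episodes=50)
    return agent
```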

Technologies and Products Focusing On “Control”

Regarding the interpretation of control, Cao Yunan summarized three main technical lines. The first is “artificial intelligence,” including vision sensor integration, deep learning and reinforcement learning, speech and semantic learning, and behavioral learning. The second is “human-machine collaboration,” including integrated joints, drag teaching and collision detection, multi-sensor fusion and AR perspective, autonomous obstacle avoidance and active safety, and human-machine group collaboration. The third is “motion control,” including trajectory-planning dynamics, machine vision integration, drive-control integrated seven-axis planning, multi-channel and multi-machine collaboration, AI dynamics, and AI trajectory planning. Three major product categories extend from these technical lines. The first is industrial robot control systems, subdivided into systems for industrial robots with six axes or fewer that handle basic industrial applications (with full application coverage of trajectory tracking and sorting), small and medium-sized robot drive systems, and 7-axis industrial robots that complete complex multi-machine control for factory applications.

The second category is collaborative robots, including a 7-axis 5 kg lightweight robot for moving and assembly, a 6-axis lightweight series at 3 kg, 5 kg, and 10 kg, 7-axis dual-arm collaboration, and the smart worker. The third category is mobile robots, including AGVs used in e-commerce logistics; composite robots (AGV + collaborative robot) that appear in e-commerce logistics, supermarkets, and catering; and finally AGV + dual-arm collaborative robots that perform housekeeping services. The above is Cao Yunan’s complete understanding of control technology. “It is obvious that the final convergence point of these three product lines is the smart worker,” Cao Yunan said.
