The present disclosure relates to an intelligent wheelchair system based on big data and artificial intelligence and a control method thereof, and specifically relates to a mobile intelligent robot based on big data and artificial intelligence, and to control methods for image detection and processing, route exploration, and robot movement.
Intelligent devices capable of moving, such as cleaning robots and intelligent balance wheels, have become more common in daily life. An intelligent robot system is able to recognize its surroundings and move automatically based on an existing map to provide services within the area where it is located. With rapidly expanding service demands, an intelligent robot system with the combined functions of updating a map, planning a route, and moving automatically is desired, and an intelligent robot adapted to more complicated regions is even more desirable.
In addition, with the acceleration of population aging in society and the increasing number of lower limb injuries, providing superior travelling tools for the elderly and the disabled has become a focus of attention across society. Lower limb injuries may be caused by various diseases, work injuries, traffic accidents, etc. As a service robot, an intelligent wheelchair has various functions such as autonomous navigation, obstacle avoidance, man-machine dialog, providing special services, etc. The intelligent wheelchair may provide a safe and convenient lifestyle for disabled people with cognitive disorders (such as dementia patients), disabled people with mobility disorders (such as cerebral palsy patients, quadriplegia patients, etc.), the elderly, etc., thereby greatly improving the quality of their daily life and work and helping them regain self-care ability and social integration. As an application platform for robot technology, the intelligent wheelchair combines various technologies from robot research, e.g., robot navigation and positioning, machine vision, pattern recognition, multi-sensor information fusion, human-computer interaction, etc.
One aspect of the present disclosure relates to an intelligent wheelchair system. The intelligent wheelchair system may include a memory storing instructions and a processor in communication with the memory. When executing the instructions, the processor may be configured to establish communication with a movement module and a holder via a communication port. The processor may obtain information from sensors of the movement module and the holder to construct a map. The processor may further plan a route based on the information and determine control parameters based on the information.
Another aspect of the present disclosure relates to a method. The method may include establishing communication with a movement module and a holder via a communication port. The method may include obtaining information from sensors of the movement module and the holder to construct a map. The method may further include planning a route based on the information, and determining control parameters based on the information.
Yet another aspect of the present disclosure relates to a non-transitory computer readable medium embodied as a computer program product. The computer program product may include a communication port for establishing communication between a processor and a movement module, and between the processor and a holder. The communication port may establish the communication by using an application program interface (API).
The methods, systems, and/or program described herein are further described in terms of embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout other views of the drawings, and wherein:
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. It should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well-known methods, procedures, systems, components, and/or circuits in the present disclosure have been described at a relatively high level elsewhere, and are not described in detail in this disclosure to avoid unnecessary repetition.
It should be understood that the terms “system,” “apparatus,” “unit,” and/or “module” used in this disclosure are one way to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be replaced by other expressions if they achieve the same purpose.
It will be understood that when a device, unit, or module is referred to as being “on,” “connected to” or “coupled to” another device, unit, or module, it may be directly on, connected or coupled to, or communicate with the other device, unit, or module, or an intervening device, unit, or module may be present, unless the context clearly indicates otherwise. For example, as used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As shown in the specification and claims of the present disclosure, words such as “a,” “an”, “one”, and/or “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms “comprises,” “comprising,” “includes,” and/or “including” are merely meant to include the features, integers, steps, operations, elements and/or components that are specifically identified, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may be better understood upon consideration of the following description with reference to the accompanying drawing(s), all of which form a part of this specification. It is to be expressly understood, however, that the drawing(s) are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It should be understood that the drawings are not to scale.
Moreover, while the system and method in the present disclosure are described primarily in regard to methods and systems for determining a state of an intelligent robot, it should also be understood that the description in this disclosure is only an exemplary embodiment. The intelligent robot system or method may also be applied to any type of intelligent devices or vehicles other than the intelligent robot. For example, the intelligent robot system or method may be applied to different intelligent device systems, and the intelligent device systems may include a balance wheel, an unmanned ground vehicle (UGV), an intelligent wheelchair, or the like, or any combination thereof. The intelligent robot system may also be applied to any intelligent system for management and/or distribution, for example, a system for sending and/or receiving an express delivery, or for carrying people or goods to certain locations, etc.
The terms “robot,” “intelligent robot,” and “intelligent device” in the present disclosure are used interchangeably to refer to equipment, a device, or a tool that may move and operate automatically. The term “user equipment” in the present disclosure may refer to a tool that may be used to request a service, order a service, or facilitate the providing of the service. The term “mobile terminal” in the present disclosure may refer to a tool or interface that may be used by a user to control the intelligent robot.
The positioning technology used in the present disclosure may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a compass navigation system (COMPASS), a Galileo positioning system (Galileo), a quasi-zenith satellite system (QZSS), a wireless fidelity (WiFi) positioning technology, or the like, or any combination thereof. One or more of the above positioning technologies may be used interchangeably in the present disclosure.
The present disclosure describes an intelligent wheelchair system 100 as an exemplary system, and methods for constructing a map and planning a route for the intelligent wheelchair system 100. The method and system as disclosed herein may aim at constructing the map based on, e.g., information obtained by the intelligent wheelchair system 100. The information may be captured by sensor(s) located in the intelligent wheelchair system 100. The sensor(s) may be of an optical or electromagnetic type. For example, the sensors may include a camera or a Lidar.
The intelligent robot 110 may establish communication with the user device 130. The communication between the intelligent robot 110 and the user device 130 may be wired or wireless. For example, the intelligent robot 110 may establish communication with the user device 130 or the database 140 via the network 120. The user device 130 may wirelessly control the intelligent robot 110 based on operational instructions (e.g., a movement instruction or a rotation instruction). As another example, the intelligent robot 110 may be directly connected to the user device 130 or the database 140 via a cable or fiber. In some embodiments, the intelligent robot 110 may update or download a map stored in the database 140 based on the communication between the intelligent robot 110 and the database 140. For example, the intelligent robot 110 may capture information of routes and analyze the information to construct a map. In some embodiments, an entire map may be stored in the database 140. In some embodiments, the map constructed by the intelligent robot 110 may include information corresponding to a portion of the entire map. In some embodiments, the corresponding portion of the entire map may be updated by the constructed map. When a destination and a location of the intelligent robot 110 are determined, the entire map stored in the database 140 may be accessible to the intelligent robot 110. A portion of the entire map including the destination and the location of the intelligent robot 110 may be selected by the intelligent robot 110 to plan a route. In some embodiments, the intelligent robot 110 may plan the route based on the selected map, the destination, and the location of the intelligent robot 110. In some embodiments, the intelligent robot 110 may use a map of the user device 130. For example, the user device 130 may download the map from the Internet. The user device 130 may instruct a movement of the intelligent robot 110 based on the map downloaded from the Internet. As another example, the user device 130 may download the latest map from the database 140. Once the destination and the location of the intelligent robot 110 are determined, the user device 130 may transmit the map obtained from the database 140 to the intelligent robot 110. In some embodiments, the user device 130 may be a portion of the intelligent robot 110. In some embodiments, if the map constructed by the intelligent robot 110 includes the destination and the location, the intelligent robot 110 may plan the route based on the map.
The network 120 may include a single network or a combination of different networks. For example, the network 120 may include a local area network (LAN), a wide area network (WAN), a public network, a private network, a wireless local area network (WLAN), a virtual network, a metropolitan area network (MAN), a public switched telephone network (PSTN), or any combination thereof. For example, the intelligent robot 110 may communicate with the user device 130 and the database 140 via Bluetooth. The network 120 may also include various network access points. For example, a wired or wireless access point may be included in the network 120. The wired or wireless access point may include a base station or an Internet exchange point. The user may send control operations from the user device 130 to the intelligent robot 110 and receive results via the network 120. The intelligent robot 110 may access information stored in the database 140 directly or via the network 120.
The user device 130 connectable to the network 120 may include a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, a built-in device 130-4, or the like, or any combination thereof. In some embodiments, the mobile device 130-1 may include a wearable device, an intelligent mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the user may control the intelligent robot 110 using the wearable device. The wearable device may include an intelligent bracelet, intelligent footwear, intelligent glasses, an intelligent helmet, an intelligent watch, intelligent clothing, an intelligent backpack, an intelligent accessory, or the like, or any combination thereof. In some embodiments, the intelligent mobile device may include a smart phone, a personal digital assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, augmented reality eyewear, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include a Google Glass™, an Oculus Rift™, a HoloLens™, a Gear VR™, or the like. In some embodiments, the built-in device 130-4 may include an onboard computer, an onboard television, or the like, or any combination thereof. In some embodiments, the user device 130 may include a device with positioning technology for positioning a location of the user and/or the user device 130 associated with the user. For example, a route may be determined by the intelligent robot 110 based on a map, a destination, and a location of the intelligent robot 110. The location of the intelligent robot 110 may be obtained by the user device 130. In some embodiments, the user device 130 may be a device capable of capturing an image. For example, the map stored in the database 140 may be updated based on information captured by an image sensor (e.g., a camera). In some embodiments, the user device 130 may be a portion of the intelligent robot 110. For example, a smart phone with a camera, a gyroscope, and an accelerometer may be supported by a holder of the intelligent robot 110. The user device 130 may be used as a sensor to detect the information. As another example, a processor 210 and a storage 220 may be portions of the smart phone. In some embodiments, the user device 130 may also serve as a communication interface for the user of the intelligent robot 110. For example, the user may touch a screen of the user device 130 to select control operations of the intelligent robot 110.
The database 140 may store the entire map. In some embodiments, a plurality of intelligent robots wirelessly connected to the database 140 may exist. Each intelligent robot connected to the database 140 may construct a map based on information captured by a sensor of the intelligent robot. In some embodiments, the map constructed by the intelligent robot may be a portion of the entire map. During an updating process, the constructed map may replace a corresponding portion in the entire map. Each intelligent robot may download a map from the database 140 when a route from a location of the intelligent robot 110 to a destination needs to be planned. In some embodiments, the map downloaded from the database 140 may be a portion of the entire map at least including the location and the destination of the intelligent robot 110. The database 140 may also store historical information related to the user connected to the intelligent robot 110. For example, the historical information may include historical operations of the user or information related to how the intelligent robot 110 operates. As illustrated in
It should be noted that the intelligent wheelchair system 100 described above is merely provided for illustrating an example of the system, and not intended to limit the scope of the present disclosure.
The storage 220 may store instructions for the processor 210. When executing the instructions, the processor 210 may implement one or more functions or one or more operations described in the disclosure. For example, the storage 220 may store the instructions executed by the processor 210 to process the information obtained by the sensor(s) 230. In some embodiments, the processor 210 may automatically store the information obtained by the sensor(s) 230. The storage 220 may also store the one or more results (e.g., the displacement information and/or the depth information for constructing the map) generated by the processor 210. For example, the one or more results may be generated by the processor 210 and stored in the storage 220. The one or more results may be read by the processor 210 from the storage 220 to construct the map. In some embodiments, the storage 220 may store the map constructed by the processor 210. In some embodiments, the storage 220 may store a map obtained by the processor 210 from the database 140 or the user device 130. For example, the storage 220 may store the map constructed by the processor 210, and then the constructed map may be transmitted to the database 140 to update a corresponding portion of the entire map. As another example, the storage 220 may temporarily store the map downloaded by the processor 210 from the database 140 or the user device 130. In some embodiments, the storage 220 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically-erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), or a digital versatile disk ROM.
The sensor(s) 230 may be any device capable of obtaining the image data, the gyroscope data, the accelerometer data, the location data, the distance data from objects or obstacles, and any other data used by the intelligent robot 110 to implement various functions described in the present disclosure. For example, the sensor(s) 230 may include one or more night vision cameras for obtaining image data in low light environments. In some embodiments, the data and/or information obtained by the sensor(s) 230 may be stored in the storage 220 and processed by the processor 210. In some embodiments, the sensor(s) 230 may be installed in the robot body 260. More specifically, for example, one or more image sensors may be installed in a holder of the robot body 260. One or more navigation sensors, gyroscopes and accelerometers may be installed in both the holder and a movement module. In some embodiments, the sensor(s) 230 may automatically explore the environment and detect a location under the control of the processor 210. For example, the sensor(s) 230 may be used to dynamically sense or detect locations of objects, obstacles, or the like.
The communication port 240 may be a port for communication within the intelligent robot 110. That is, the communication port 240 may exchange information among components of the intelligent robot 110. In some embodiments, the communication port 240 may transmit signals/data of the processor 210 to an internal portion of the intelligent robot 110, and receive signals from the internal portion of the intelligent robot 110. For example, the processor 210 may receive information from the sensor(s) installed in the robot body 260. As another example, the processor 210 may transmit control operations to the robot body 260 via the communication port 240. The transmitting-receiving process may be implemented by the communication port 240. The communication port 240 may receive various wireless signals according to certain wireless communication specifications. In some embodiments, the communication port 240 may be provided as a communication module for known wireless local area communication such as Wi-Fi, Bluetooth, infrared (IR), ultra-wideband (UWB), ZigBee, or the like, or as a mobile communication module such as 3G, 4G, or Long Term Evolution (LTE), or as a known communication technique for wired communication. In some embodiments, the communication port 240 may not be limited to an element for transmitting/receiving signals from an internal device, and may also be implemented as an interface for interactive communication. For example, the communication port 240 may establish communication between the processor 210 and other portions of the intelligent robot 110 by circuits using an application programming interface (API). In some embodiments, the user device 130 may be a portion of the intelligent robot 110. In some embodiments, the communication between the processor 210 and the user device 130 may be implemented by the communication port 240.
The input/output interface 250 may be an interface for communication between the intelligent robot 110 and other external devices such as the database 140. In some embodiments, the input/output interface 250 may control data transmission with the intelligent robot 110. For example, the latest map may be transmitted from the database 140 to the intelligent robot 110. As another example, the map constructed based on the information obtained by the sensor(s) 230 may be transmitted from the intelligent robot 110 to the database 140. The input/output interface 250 may also include various additional elements, such as a wireless communication module (not shown) for wireless communication or a tuner (not shown) for adjusting broadcast signals, depending on the design type of the intelligent robot 110. The various communication elements may be used for receiving signals/data from external inputs. The input/output interface 250 may be provided as a communication module for known wireless local area communication, such as Wi-Fi, Bluetooth, infrared (IR), Ultra-Wide Band (UWB), ZigBee, or the like, or as a mobile communication module such as 3G, 4G, or Long Term Evolution (LTE), or as a known input/output interface for wired communication. In some embodiments, the input/output interface 250 may be provided as a communication module for known wired communication such as fiber optics or Universal Serial Bus (USB). For example, the intelligent robot 110 may exchange data with the database 140 of a computer via a USB interface.
The robot body 260 may be a body for supporting the processor 210, the storage 220, the sensor(s) 230, the communication port 240, and the input/output interface 250. The robot body 260 may execute instructions from the processor 210 to move around, and to rotate the sensor(s) 230 to obtain or detect information of an area. In some embodiments, the robot body 260 may include a movement module and a holder as described in other portions of the disclosure (such as
The analysis module 310 may analyze information obtained from the sensor(s) 230 and generate one or more results. The analysis module 310 may construct a map based on the one or more results. In some embodiments, the constructed map may be transmitted to the database 140. In some embodiments, the analysis module 310 may receive the latest map from the database 140 and transmit the latest map to the navigation module 320. The navigation module 320 may plan a route from a location of the intelligent robot 110 to a destination. In some embodiments, an entire map may be stored in the database 140. The map constructed by the analysis module 310 may correspond to a portion of the entire map. The updating process may include replacing the corresponding portion of the entire map with the constructed map. In some embodiments, the map constructed by the analysis module 310 may be the latest map and include the location and the destination of the intelligent robot 110. In this case, it may be unnecessary for the analysis module 310 to receive the map from the database 140. The map constructed by the analysis module 310 may be transmitted to the navigation module 320 to plan the route. The intelligent robot control module 330 may generate control parameters of the intelligent robot 110 based on the route planned by the navigation module 320. In some embodiments, the control parameters may be temporarily stored in the storage 220. In some embodiments, the control parameters may be transmitted to the robot body 260 to control a movement of the intelligent robot 110. Descriptions of the control parameters may be found elsewhere in the present disclosure. See, e.g.,
The image processing unit 410 may process image data to implement one or more functions of the intelligent robot 110. For example, the image data may include one or more images (e.g., still images, video frames, etc.), an initial depth and an initial displacement of each pixel in each frame, and/or any other data associated with the one or more images. In some embodiments, the displacement may include a displacement of a wheel within a time interval and a displacement of a camera relative to the wheel within the time interval. Two adjacent frames may be obtained within the time interval. The image data may be provided by any device capable of providing the image data, such as the sensor(s) 230 (e.g., one or more image sensors). In some embodiments, the image data may include data associated with a plurality of images. The images may include a sequence of video frames (also referred to as “frames”). Each of the frames may be a full frame, a field, etc.
In some embodiments, the image processing unit 410 may process the image data to generate movement information of the intelligent robot 110. For example, the image processing unit 410 may process two frames (e.g., a first frame and a second frame) to determine a difference between the two frames. Then the image processing unit 410 may generate the movement information of the intelligent robot 110 based on the difference between the two frames. In some embodiments, the first frame and the second frame may be adjacent frames (e.g., a current frame and a previous frame, a current frame and a subsequent frame, etc.). In addition, the first frame and the second frame may be non-adjacent frames. More specifically, for example, the image processing unit 410 may determine one or more corresponding pixels in the first frame and the second frame and one or more regions (also referred to as “overlapping regions”) including the one or more corresponding pixels. In response to a determination that a first pixel and a second pixel correspond to the same object, the image processing unit 410 may determine the first pixel in the first frame as the corresponding pixel of the second pixel in the second frame. The first pixel and the corresponding pixel in the second frame (e.g., the second pixel) may correspond to the same location of the same object. In some embodiments, the image processing unit 410 may identify one or more pixels in the first frame that fail to correspond to one or more pixels in the second frame. The image processing unit 410 may further identify one or more regions (also referred to as “non-overlapping regions”) including the identified pixels. The non-overlapping regions may correspond to a movement of the sensor(s) 230. In some embodiments, pixels in the non-overlapping regions in the first frame that fail to correspond to one or more pixels in the second frame may be omitted for further processing (e.g., processing by the displacement determination unit 420 and/or the depth determination unit 430).
In some embodiments, the image processing unit 410 may identify intensities of pixels in the first frame and the corresponding pixels in the second frame. In some embodiments, the intensities of the pixels in the first frame and the corresponding pixels in the second frame may be obtained as a standard for determining the difference between the first frame and the second frame. For example, the RGB intensity may be selected as the standard for determining the difference between the first frame and the second frame. The pixels, the corresponding pixels, and the RGB intensities may be transmitted to the displacement determination unit 420 and/or the depth determination unit 430 for determining the displacement and the depth of the second frame. In some embodiments, the depth may represent a space depth of an object in the two frames. In some embodiments, the displacement information may include a set of displacements of a set of frames. In some embodiments, the depth information may include a set of depths of a set of frames. The frames, the displacement information, and the depth information may be used to construct the map.
The displacement determination unit 420 may determine the displacement information based on data provided by the image processing unit 410 and/or any other data. The displacement information may include one or more displacements that may represent movement information of the sensor(s) 230 that generates image data (e.g., an image sensor that captures a plurality of frames). For example, the displacement determination unit 420 may obtain data of the corresponding pixels in the two frames (e.g., the first frame and the second frame). The data may include one or more values corresponding to the pixels, such as gray values, intensities of the pixels, or the like. The displacement determination unit 420 may determine the values corresponding to the pixels based on any suitable color model (e.g., a RGB (red, green, and blue) model, an HSV (hue, saturation, and brightness) model, etc.). In some embodiments, the displacement determination unit 420 may determine a difference between pairs of corresponding pixels in two frames. For example, the image processing unit 410 may identify a first pixel in the first frame and a corresponding pixel in the second frame (e.g., a second pixel). The second pixel may be determined based on a transformation of coordinates of the first pixel. The first pixel and the second pixel may correspond to the same object. The displacement determination unit 420 may also determine a difference between a value of the first pixel and a value of the second pixel. In some embodiments, a displacement may be determined by minimizing a sum of the differences between the pairs of corresponding pixels in the first frame and the second frame.
In some embodiments, the displacement determination unit 420 may determine an initial displacement ξji,1 indicating an estimated value of the displacement from an origin. For example, the initial displacement ξji,1 may be determined based on Equation (1) as below:
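A plausible form of Equation (1), reconstructed here from the definitions that follow (the originally typeset equation is not reproduced in this text), is

$$\xi_{ji,1} = \arg\min_{\xi_{ji}} \sum_{x \in \Omega} \left\| I_j\!\left(\omega\!\left(x, D_i(x), \xi_{ji}\right)\right) - I_i(x) \right\|^2 \qquad (1)$$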
wherein x represents coordinates of a pixel in the first frame, ω(x, Di(x), ξji) represents coordinates of the corresponding pixel in the second frame, the pixel x and the pixel ω(x, Di(x), ξji) may correspond to the same location of an object, and ω(x, Di(x), ξji) is the transformed pixel of x after a certain displacement ξji of a camera. Ω is a set of pairs of pixels, and each pair may include a pixel in the first frame and a corresponding pixel in the second frame. Ii(x) is the RGB intensity of pixel x, and Ij(ω(x, Di(x), ξji)) is the RGB intensity of pixel ω(x, Di(x), ξji).
ω(x, Di(x), ξji) is the transformed coordinate of pixel x after the displacement of the camera. In some embodiments, the displacement determination unit 420 may calculate the corresponding pixel ω(x, Di(x), ξji) based on an initial value of ξji and an initial depth Di(x). In some embodiments, the initial depth Di(x) may be a zero matrix. ξji may be a variable. In order to obtain ξji,1, the displacement determination unit 420 may need the initial value of ξji as shown in the iteration function (1). In some embodiments, the initial value of ξji may be determined based on a displacement ξji′ of wheels and a displacement ξji″ of a camera relative to the wheels. Descriptions of the initial value of ξji may be found elsewhere in the present disclosure. See, e.g.,
In some embodiments, the depth determination unit 430 may determine an updated depth Di,1(x). The updated depth Di,1(x) may be calculated by Equation (2):
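By analogy with Equation (1), a plausible reconstruction of Equation (2), with the depth treated as the variable and the displacement ξji,1 held fixed, is

$$D_{i,1}(x) = \arg\min_{D_i(x)} \sum_{x \in \Omega} \left\| I_j\!\left(\omega\!\left(x, D_i(x), \xi_{ji,1}\right)\right) - I_i(x) \right\|^2 \qquad (2)$$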
wherein the depth Di(x) is treated as the variable in Equation (2) that controls the difference between the two frames. When the difference between the two frames is the smallest, the corresponding value may be determined as the updated depth Di,1(x). In some embodiments, the initial depth Di(x) may be a zero matrix.
The displacement determination unit 420 may also generate an updated displacement ξji,1u based on the updated depth Di,1(x). In some embodiments, the updated displacement ξji,1u may be obtained based on Equation (1) by replacing the initial depth Di(x) with the updated depth Di,1(x).
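The alternating refinement described above (an initial displacement from Equation (1), an updated depth from Equation (2), and an updated displacement re-using Equation (1)) may be sketched as follows. The one-dimensional warp, the frame representation, and every function name below are illustrative assumptions rather than the disclosed implementation.

```python
# A minimal sketch of the displacement/depth refinement loop, assuming a toy
# one-dimensional warp omega(x, D, xi) = x + xi / D(x) in place of the full
# projective warp; frames are 1-D intensity arrays. All names are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

def warp(xs, depth, xi):
    """Toy depth-dependent warp of pixel columns."""
    return xs + xi / np.maximum(depth, 1e-3)

def photometric_error(frame_i, frame_j, depth, xi):
    """Sum of squared intensity differences over the overlapping region Omega."""
    xs = np.arange(frame_i.size)
    xs_j = np.round(warp(xs, depth, xi)).astype(int)
    valid = (xs_j >= 0) & (xs_j < frame_j.size)
    diff = frame_j[xs_j[valid]].astype(float) - frame_i[valid].astype(float)
    return np.sum(diff ** 2)

def refine(frame_i, frame_j, xi_init, depth_init):
    # Step 1: initial displacement xi_{ji,1}, Equation (1) with the initial depth.
    xi_1 = minimize_scalar(
        lambda xi: photometric_error(frame_i, frame_j, depth_init, xi),
        bounds=(xi_init - 5.0, xi_init + 5.0), method="bounded").x
    # Step 2: updated depth D_{i,1}, Equation (2) with xi_{ji,1} held fixed
    # (a brute-force per-pixel search keeps the sketch short).
    candidates = np.linspace(0.5, 10.0, 50)
    depth_1 = depth_init.astype(float)
    for k in range(frame_i.size):
        cols = np.round(k + xi_1 / candidates).astype(int)
        ok = (cols >= 0) & (cols < frame_j.size)
        errs = np.where(
            ok,
            (frame_j[np.clip(cols, 0, frame_j.size - 1)].astype(float) - float(frame_i[k])) ** 2,
            np.inf)
        depth_1[k] = candidates[int(np.argmin(errs))]
    # Step 3: updated displacement xi_{ji,1}^u, Equation (1) with the updated depth.
    xi_u = minimize_scalar(
        lambda xi: photometric_error(frame_i, frame_j, depth_1, xi),
        bounds=(xi_1 - 5.0, xi_1 + 5.0), method="bounded").x
    return xi_u, depth_1
```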
The closed loop control unit 440 may implement closed loop detection. The closed loop control unit 440 may detect whether the intelligent robot 110 returns to a previously visited location and update displacement information based on the detection. In some embodiments, in response to a determination that the intelligent robot 110 has returned to a previously visited position in a route, the closed loop control unit 440 may adjust the updated displacements of frames using g2o closed loop detection to reduce errors. The g2o closed loop detection may be a general optimization framework for reducing non-linear errors. The adjusted updated displacements of frames may be set as the displacement information. In some embodiments, if the intelligent robot 110 includes a depth sensor such as a Lidar, the depth may be directly obtained, the displacement may be determined based on Equation (1), and then the displacement may be adjusted by the closed loop control unit 440 to generate an adjusted displacement.
When the depth information is detected directly by a depth sensor, the displacement information may include a set of displacements determined based on Equation (1) and then adjusted by the closed loop control unit 440. When the depth information is a set of updated depths, the displacement information may include a set of displacements determined based on Equation (1) and Equation (2) and then adjusted by the closed loop control unit 440.
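A much-simplified stand-in for this loop-closure adjustment is sketched below; the disclosure names g2o, whereas the sketch merely spreads the detected drift evenly over a chain of one-dimensional displacements, and every name in it is hypothetical.

```python
# Simplified loop-closure correction: when the robot is detected back at a
# previously visited position, distribute the accumulated drift evenly over the
# chain of relative displacements. g2o instead solves the same correction as a
# non-linear least-squares problem over full 6-DoF poses.
import numpy as np

def correct_loop(displacements, loop_error):
    """displacements: per-frame relative displacements along the route.
    loop_error: estimated end position minus the revisited position (the drift)."""
    displacements = np.asarray(displacements, dtype=float)
    return displacements - loop_error / displacements.size

# Example: the estimate ends 0.6 m away from the revisited starting position.
adjusted = correct_loop([1.0, 1.2, 0.9, 1.1], loop_error=0.6)
```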
In some embodiments, the closed loop control unit 440 may generate a map based on frames, the displacement information, and the depth information.
The analysis module 310 may also include an object detection unit 450 that may detect obstacles, objects, and distances from the intelligent robot 110 to the obstacles and the objects. In some embodiments, the obstacles and the objects may be detected based on data obtained by the sensor(s) 230. For example, the object detection unit 450 may detect the objects based on distance data captured by a sonar, an infrared distance sensor, an optical flow sensor, or a Lidar.
The intelligent robot control module 330 may determine control parameters based on the route planned by the route planning unit 520 in the navigation module 320. In some embodiments, the intelligent robot control module 330 may segment the route into a set of segments. The intelligent robot control module 330 may obtain a set of joints of the segments. In some embodiments, a joint between two segments may be the destination of the previous segment and the start location of the subsequent segment. Control parameters for each segment may be determined based on the destination and the start location of the segment.
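For illustration only, the segmentation and the per-segment control parameters might be sketched as below; the waypoint representation and the heading/distance parameterization are assumptions, since the disclosure does not fix a particular set of control parameters.

```python
# A minimal sketch: split a planned route into segments at its joints and derive
# a heading and a travel distance for each segment. All names are hypothetical.
import math

def segment_route(route):
    """route: list of (x, y) waypoints; each joint is the destination of the
    previous segment and the start location of the next one."""
    return list(zip(route[:-1], route[1:]))

def control_parameters(start, destination):
    dx, dy = destination[0] - start[0], destination[1] - start[1]
    return {"heading": math.atan2(dy, dx),   # direction to face for this segment
            "distance": math.hypot(dx, dy)}  # how far to travel in this segment

route = [(0.0, 0.0), (1.5, 0.0), (1.5, 2.0)]
params = [control_parameters(s, d) for s, d in segment_route(route)]
```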
In some embodiments, during the movement of the intelligent robot 110 in a segment, the location reached by the intelligent robot 110 may mismatch the predetermined destination of the segment. The route planning unit 520 may plan a new route based on the mismatched location (a new location of the intelligent robot 110) and the predetermined destination. In some embodiments, the intelligent robot control module 330 may segment the new route and generate one or more new segments. The intelligent robot control module 330 may determine a set of control parameters for each new segment.
The image sensor 810 may capture image data. In some embodiments, based on the image data, the analysis module 310 may construct a map. In some embodiments, the image data may include frames, an initial depth and an initial displacement of each pixel in each frame. In some embodiments, the initial depth and the initial displacement may be used to determine a depth and a displacement. Descriptions of determining the depth and the displacement may be found elsewhere in the present disclosure. See, e.g., Equation (1) in
The accelerometer 820 and the gyroscope 830 may operate together to keep balance of a movement module and a holder. The balance may be necessary for obtaining stable information from the sensor(s) 230. In some embodiments, the accelerometer 820 and the gyroscope 830 may operate together to control a pitch attitude within a threshold. In some embodiments, the accelerometer 820 and the gyroscope 830 may be supported by the movement module and the holder, respectively. Description of keeping balance may be found elsewhere in the present disclosure. See, e.g.,
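As one hedged illustration of how the two sensors might cooperate, a complementary filter can fuse the gyroscope rate with the accelerometer's gravity direction to keep the estimated pitch within a threshold; the filter gain and the threshold below are assumptions, not values from the disclosure.

```python
# A minimal pitch-balance sketch: fuse gyroscope and accelerometer readings with
# a complementary filter and flag when the pitch leaves an assumed threshold.
import math

ALPHA = 0.98          # weight of the integrated gyroscope estimate (assumed)
PITCH_LIMIT = 0.17    # about 10 degrees, assumed threshold in radians

def update_pitch(pitch, gyro_rate, accel_x, accel_z, dt):
    pitch_gyro = pitch + gyro_rate * dt          # integrate the angular rate
    pitch_accel = math.atan2(accel_x, accel_z)   # gravity-based pitch estimate
    return ALPHA * pitch_gyro + (1.0 - ALPHA) * pitch_accel

def needs_correction(pitch):
    return abs(pitch) > PITCH_LIMIT              # trigger a balancing command
```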
The sonar 840, the infrared distance sensor 850, and the optical flow sensor 860 may be used to locate the intelligent robot 110. In some embodiments, the intelligent robot 110 may be located by the sonar 840, the infrared distance sensor 850, or the optical flow sensor 860, or any combination thereof.
The Lidar 870 may detect a depth of an object in a frame. That is, the Lidar 870 may obtain a depth for each frame, and it may be unnecessary to calculate the depth by the analysis module 310 in the processor 210. The depth obtained by the Lidar 870 may be used to calculate the displacement described in Equation (1) in
The sonar 840, the infrared distance sensor 850, and the optical flow sensor 860 may locate the intelligent robot 110 by detecting a distance between the intelligent robot 110 and an object or an obstacle. The navigation sensor 880 may position the intelligent robot 110 within a rough region or a location range. In some embodiments, the navigation sensor 880 may locate the intelligent robot 110 with any type of positioning systems. The positioning system may include a Global Positioning System (GPS), a Beidou navigation or positioning system, and a Galileo positioning system.
As shown in
A traditional 3-axis holder may be used for aerial photography. In order to keep the stability of the holder 930 during movement along a route, the dynamic Z-buffering rod 1120 may be adopted in the holder 930. The dynamic Z-buffering rod 1120 may keep the stability of the holder 930 along the Z-axis. In some embodiments, the dynamic Z-buffering rod 1120 may include a retractable rod that may extend and retract along the Z-axis. The process for operating the dynamic Z-buffering rod 1120 in the holder 930 may be illustrated in
The intelligent robot 110 may include a plurality of modules and units.
In some embodiments, the first type sensor 1220 and the second type sensor 1240 may obtain information. The analysis module 310 may process the obtained information and construct a map. In some embodiments, the constructed map may be transmitted to the database 140. For determining a route to a destination, a map may be needed for navigation. The analysis module 310 may download the latest map from the database 140 and transmit the latest map to the navigation module 320. The navigation module 320 may process the latest map and determine the route from the location to the destination of the intelligent robot. In some embodiments, it may be unnecessary for the analysis module 310 to download the entire map. A portion of the entire map including the location and the destination of the intelligent robot 110 may be enough for planning the route. In some embodiments, the map constructed by the analysis module 310 may include the location and the destination of the intelligent robot 110, and the map may be the latest map in the database. The map constructed by the analysis module 310 may be transmitted to the navigation module 320 to plan the route. The navigation module 320 may include the mapping unit 510 and the route planning unit 520. In some embodiments, based on the latest map or the constructed map from the analysis module 310, the mapping unit 510 may generate a 2D map for route planning. The route planning unit 520 may plan the route and transmit the planned route to the intelligent robot control module 330. The intelligent robot control module 330 may segment the route into one or more segments. The intelligent robot control module 330 may generate control parameters for each segment. For each segment, a start location and a destination may exist. The destination of a segment may be the start location of a subsequent segment. In some embodiments, the location at which the intelligent robot 110 stops for a segment may mismatch the predetermined destination of the segment, which may influence the remaining portion of the route. Thus, it may be necessary to re-plan a route based on the mismatched location (a new location of the intelligent robot 110) and the destination. In some embodiments, after each segment, the re-planning process may be performed by the navigation module 320 if a mismatch is detected.
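The disclosure does not name a specific planner; purely as an illustration, a route on the 2D map produced by the mapping unit 510 could be found with a breadth-first search over an occupancy grid, as sketched below (the grid encoding and all function names are assumptions).

```python
# A minimal sketch of planning a route on a 2-D occupancy grid with
# breadth-first search. 0 = free cell, 1 = obstacle; start/goal are (row, col).
from collections import deque

def plan_route(grid, start, goal):
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    if goal not in came_from:
        return None                      # no route between the two locations
    route, cell = [], goal
    while cell is not None:              # walk back from the goal to the start
        route.append(cell)
        cell = came_from[cell]
    return route[::-1]
```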
In some embodiments, information captured by the first type sensors 1220 in the movement module 920 and the second type sensors 1240 in the holder 930 may be unsuitable for constructing the map if the first type sensors 1220 in the movement module 920 and the second type sensors 1240 in the holder 930 are unstable. The intelligent robot control module 330 may generate control parameters to adjust an attitude of the movement module 920 and the holder 930 to stabilize the first type sensors 1220 and the second type sensors 1240.
Sensors may be installed on the movement module 920 and the holder 930. In some embodiments, the first type sensor 1220 may include at least one of an accelerometer 820, a gyroscope 830, a sonar 840, an infrared distance sensor 850, an optical flow sensor 860, a Lidar 870, and a navigation sensor 880. In some embodiments, the second type of sensor 1240 may include at least one of an image sensor 810, an accelerometer 820, a gyroscope 830, a sonar 840, an infrared distance sensor 850, an optical flow sensor 860, a Lidar 870, and a navigation sensor 880.
As shown in
In 1310, the processor 210 may obtain information from the sensor(s) 230. As described in
In 1320, the processor 210 may determine a destination and a current location of the intelligent robot 110 based on the received information. For example, the analysis module 310 in the processor 210 may receive location data from the sensor(s) 230. The sensor(s) 230 may include, but are not limited to, a sonar, an infrared distance sensor, an optical flow sensor, a Lidar, a navigation sensor, or the like. In some embodiments, a user may determine the destination via the input/output (I/O) interface 250. For example, the user may input the destination of the intelligent robot 110. The information of the destination may be used by the processor 210 to provide a route for the movement of the intelligent robot 110. In some embodiments, the processor 210 may determine the current location of the intelligent robot 110 based on the received information. In some embodiments, the processor 210 may determine the current location of the intelligent robot 110 based on information obtained from the sensor(s) 230. For example, the processor 210 may determine a rough location of the intelligent robot 110 based on information obtained by the navigation sensor 880 in the positioning system (e.g., GPS). As another example, the processor 210 may determine a precise location of the intelligent robot 110 according to information obtained by at least one of the sonar 840, the infrared distance sensor 850, and the optical flow sensor 860.
In 1330, the processor 210 may obtain a map based on the destination and the current location of the intelligent robot 110. The map may be used to plan a route. In some embodiments, an entire map including a plurality of points representing a city may be stored in the database 140. After the destination and the current location of the intelligent robot 110 are determined by the processor 210 in 1310 and 1320, a map including the current location of the intelligent robot 110 and the destination may be needed for planning a route between the current location and the destination. In some embodiments, the map including the current location of the intelligent robot 110 and the destination may be a portion of the entire map. In some embodiments, the analysis module 310 in the processor 210 may obtain a suitable portion of the entire map from the database 140 based on the destination and the current location of the intelligent robot 110. In some embodiments, the analysis module 310 may construct a map based on the information obtained from the sensor(s) 230. The constructed map may be transmitted to the database 140 to update the entire map. In some embodiments, the constructed map may include the destination and the current location of the intelligent robot 110. The constructed map may be used by the navigation module 320 to plan a route.
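A minimal sketch of selecting such a portion of the entire map, assuming the map is stored as a 2D grid, is shown below; the grid representation and the padding value are illustrative assumptions.

```python
# Crop a padded bounding box of the entire map that covers both the current
# location and the destination. All names are hypothetical.
def select_submap(entire_map, current, destination, padding=5):
    """entire_map: 2-D list of cells; current/destination: (row, col)."""
    r0 = max(min(current[0], destination[0]) - padding, 0)
    r1 = min(max(current[0], destination[0]) + padding, len(entire_map) - 1)
    c0 = max(min(current[1], destination[1]) - padding, 0)
    c1 = min(max(current[1], destination[1]) + padding, len(entire_map[0]) - 1)
    return [row[c0:c1 + 1] for row in entire_map[r0:r1 + 1]]
```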
In 1340, the route may be planned from the current location of the intelligent robot 110 to the destination based on the obtained map in 1330. The planning of the route may be implemented by the navigation module 320. In some embodiments, as illustrated in
In 1350, the intelligent robot control module 330 may segment the planned route into one or more segments. The segmentation of the route may be based on a threshold. For example, if the length of the planned route is shorter than a threshold, the segmentation of the route may not be implemented. In some embodiments, the segmentation may be implemented by the intelligent robot control module 330 based on instructions stored in the storage 220.
In 1360, the intelligent robot control module 330 may determine control parameters for controlling the robot based on the one or more segments in 1350. In some embodiments, each segment segmented by the intelligent robot control module 330 in 1350 may have a start location and a destination. In some embodiments, the intelligent robot control module 330 may determine control parameters for the segment based on the start location and the destination. Examples for determining control parameters between two locations may be found in
In some embodiments, when the intelligent robot 110 passes through a segment based on predetermined control parameters, the intelligent robot 110 may stop at a location that mismatches the destination of the segment predetermined by the intelligent robot control module 330. The navigation module 320 may re-plan a new route based on the mismatched location and the destination of the intelligent robot 110. The intelligent robot control module 330 may further segment the newly planned route into one or more segments. The intelligent robot control module 330 may determine new control parameters of the intelligent robot 110 for the one or more new segments. In some embodiments, the mismatch with the destination may be determined by comparing the location at which the intelligent robot 110 stops with the predetermined destination of each segment after the intelligent robot 110 passes the segment.
In 1410, the analysis module 310 may obtain image data from the image sensor 810. In some embodiments, the image data may include a plurality of frames, an initial depth and/or displacements for each pixel in the plurality of frames. The displacements may include a displacement of a wheel and a displacement of a camera relative to the wheel. In some embodiments, the initial depth may be set as a zero matrix. In some embodiments, if a Lidar or a camera with a depth detecting function is included in the sensor(s) 230, depth information (e.g., the initial depth) may be obtained by the sensor(s).
In 1420, one or more reference frames may be determined by the analysis module 310 based on the image data. In some embodiments, the image data may include the plurality of frames, the initial depth and/or the displacements for each pixel in the frames. In some embodiments, the analysis module 310 may select the one or more reference frames from the plurality of frames. Detailed descriptions can be found elsewhere in the present disclosure. See e.g.,
In 1430, the analysis module 310 may determine depth information and displacement information based on the one or more reference frames. That is, the image data may be processed by the analysis module 310 for obtaining displacement information and depth information for each frame. The process for determining the displacement information and the depth information may be found elsewhere in the present disclosure, e.g., see
In 1440, the analysis module 310 may construct the map based on the one or more reference frames, the depth information and the displacement information of the one or more reference frames. In some embodiments, a 3D map may be constructed by connecting the one or more reference frames with displacements corresponding to the one or more reference frames.
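Purely as an illustration of connecting reference frames with their displacements, the sketch below back-projects each frame's depths with assumed pinhole intrinsics and shifts the points by the accumulated displacement; a translation-only pose model is used to keep the example short, whereas a full implementation would use 6-DoF poses.

```python
# A minimal 3-D map sketch: back-project each reference frame's depth map and
# move the points into a common world frame using accumulated displacements.
import numpy as np

FX = FY = 500.0
CX = CY = 320.0   # assumed pinhole intrinsics

def back_project(depth):
    """depth: HxW array; returns an Nx3 point cloud in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - CX) * z / FX
    y = (v.ravel() - CY) * z / FY
    return np.column_stack((x, y, z))

def build_map(depths, displacements):
    """depths: list of HxW arrays; displacements: list of (dx, dy, dz) per frame."""
    cloud, pose = [], np.zeros(3)
    for depth, disp in zip(depths, displacements):
        pose = pose + np.asarray(disp, dtype=float)   # accumulate frame-to-frame motion
        cloud.append(back_project(depth) + pose)
    return np.vstack(cloud)
```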
The map may be determined based on the plurality of frames and the displacement information and the depth information corresponding to the plurality of frames. In some embodiments, operations 1420 and 1430 may be implemented in inverted order, or simultaneously. For example, operation 1420 for determining the one or more reference frames may include determining the displacement information and the depth information as illustrated in 1430. That is, operation 1430 may be a sub-step of the operation 1420 for determining the one or more reference frames. As described in
In 1502, the analysis module 310 may obtain image data including a plurality of frames. The plurality of frames may at least include a first frame and a second frame. In some embodiments, the first frame may be an existing frame, and the second frame may be a subsequent frame of the first frame. That is, the image sensor 810 may capture the first frame at a moment and capture the second frame at a subsequent moment, such that the plurality of frames are adjacent to each other in the time domain.
In some embodiments, based on the image data that has been obtained, the analysis module 310 may preprocess the image. Merely by way of example, the preprocessing may include image segmentation, image enhancement, image fusion, image compression, or the like, or any combination thereof.
In some embodiments, techniques for the image segmentation may include a wavelet transform technique, a Gabor transform technique, a morphological image processing technique, an image frequency domain processing technique, a histogram-based technique (e.g., a color histogram-based technique, an intensity histogram-based technique, an edge histogram-based technique, etc.), a compression-based technique, a region growing technique, a technique based on partial differential equation, a variational technique, an image segmentation technique, a watershed transform technique, a model-based segmentation technique, a multi-scale segmentation technique, a triangulation technique, a co-occurrence matrix technique, an edge detection technique, a threshold technique, or the like, or any combination thereof.
In some embodiments, the image enhancement may include enhancing one or more properties of the image. The one or more properties of the image may include a contrast (local or global), a brightness (local or global), a saturation (local or global), a sharpness (local or global), a grayscale of the image, or the like, or any combination thereof.
In some embodiments, the analysis module 310 may determine one or more historical space features from the images. The space features may relate to an integral pixel intensity, a local pixel intensity (an integral or a local brightness), a location of an object, a length of an object, or a size of an object (e.g., a plane, a protrusion, an obstacle, a channel, etc.) in an image. For example, the space features may include an area of an object, a positioning location of an object, a shape of an object, an integral or local brightness, a location of an object, a boundary of an object, an edge of an object, an angle of an object, a ridge of an object, a spot content, or the like, or any combination thereof. In some embodiments, the analysis module 310 may determine one or more historical time features from two or more images of the plurality of images. The historical time features may include changes of certain physical properties in an image sequence including the plurality of images or the two or more images. For example, the historical time features may include a time mode, a movement, a time gradient, or the like, or any combination thereof. For example, the analysis module 310 may determine a movement based on a time-series analysis of historical features. The time-series analysis of the historical features may include analyzing the images within a particular time period. Image analysis over time may reveal movement patterns in a plurality of still images captured over time. The movement may include a translation of an object, a rotation of an object, or the like. The movement patterns may indicate recurring seasonality or periodicity. In some embodiments, a moving average or a regression analysis may be used. In addition, certain types of filters (e.g., a morphological filter, a Gaussian filter, an unsharp filter, a frequency filter, an averaging filter, a median filter, etc.) may be used in the analysis of image data to reduce errors. The analysis may be implemented in a time domain or in a frequency domain.
In some embodiments, the analysis module 310 may process the images using a particular technique to determine one or more features as one or more orthogonal inputs. In some embodiments, the particular technique may include a principal component analysis (PCA), an independent component analysis, an orthogonal decomposition, a singular value decomposition, a whitening technique or a spheroidizing technique, or the like. The orthogonal inputs may be linearly independent.
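As a small, hedged illustration of one of the listed techniques, principal component analysis can turn per-image feature vectors into linearly independent inputs; the feature matrix layout and the number of components below are assumptions.

```python
# A minimal PCA sketch for deriving linearly independent ("orthogonal") inputs
# from per-image feature vectors. All names are hypothetical.
import numpy as np

def pca_orthogonal_inputs(features, n_components=3):
    """features: (n_samples, n_features) array of per-image feature vectors."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T   # projections onto principal axes
```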
In 1504, the analysis module 310 may determine the first frame as a reference frame and the second frame as a candidate frame. In some embodiments, the analysis module 310 may select the reference frame and the candidate frame using a model. In some embodiments, the model may include a technique, an algorithm, a procedure, an equation, a rule, or the like, or any combination thereof. Merely by way of example, the model may include an image segmentation model, an image enhancement model, a user interface model, a workflow model, or the like, or any combination thereof. In some embodiments, the model may include models of big data and artificial intelligence, e.g., a Feedforward Neural Network (FNN), a Recurrent Neural Network (RNN), a Kohonen self-organizing map, an automatic encoder, a Probabilistic Neural Network (PNN), a Time Delay Neural Network (TDNN), a Radial Basis Function Network (RBF), a Learning Vector Quantization, a Convolutional Neural Network (CNN), an Adaptive Linear Neuron (ADALINE) model, an Associative Neural Network (ASNN), a Generative Adversarial Network (GAN), or the like, or any combination thereof. Exemplary recurrent neural networks (RNNs) may include a Hopfield network, a Boltzmann machine, an echo state network, a long short-term memory network, a bidirectional recurrent neural network, a hierarchical recurrent neural network, a random neural network, or the like, or any combination thereof.
In 1506, the analysis module 310 may determine one or more first pixels in the reference frame corresponding to one or more second pixels in the candidate frame. In some embodiments, the reference frame and the candidate frame may have an overlapping region. At this time, the one or more first pixels and the one or more second pixels may indicate the same positions of an object in the overlapping region of the reference frame and the candidate frame. In some embodiments, the one or more first pixels may be a set of pixels Ω described in
In some embodiments, the analysis module 310 may determine the one or more second pixels in the candidate frame, and/or the one or more first pixels in the reference frame, using a clustering algorithm. The clustering algorithm may include a hierarchical clustering algorithm, a partition-based clustering algorithm, a density-based clustering algorithm, a model-based clustering algorithm, a grid-based clustering algorithm, a soft computing clustering algorithm, or the like. The hierarchical clustering algorithm may include an agglomerative hierarchical clustering, a divisive hierarchical clustering, a single-link clustering, a complete-link clustering, an average-link clustering, or the like. The partition-based clustering algorithm may include a minimum error algorithm (e.g., a K-means algorithm, a K-center technique, a K-prototype algorithm), a graph theory clustering, or the like. The density-based clustering algorithm may include an expectation maximization algorithm, a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, an Ordering Points To Identify the Clustering Structure (OPTICS) algorithm, an automatic clustering algorithm, a Selection of Negatives through Observed Bias (SNOB) algorithm, an MCLUST algorithm, or the like. The model-based clustering algorithm may include a decision tree clustering, a neural network clustering, a self-organizing map clustering, or the like. The soft computing clustering algorithm may include a fuzzy clustering, an evolutionary clustering technique, a simulated annealing clustering algorithm, or the like.
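As one hedged example, the partition-based K-means algorithm mentioned above could be applied to pixel feature vectors roughly as follows; the feature encoding (e.g., coordinates plus intensity), the number of clusters, and the iteration count are illustrative assumptions and not values from the disclosure.

```python
import numpy as np

def kmeans_pixels(pixel_features, k=2, iterations=20, seed=0):
    """Group pixel feature vectors (e.g., x, y, intensity) with a basic
    K-means loop, one of the partition-based clustering algorithms."""
    features = np.asarray(pixel_features, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize cluster centers with k randomly chosen pixels.
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each pixel to its nearest cluster center.
        distances = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```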
In 1508, the analysis module 310 may determine depth information, intensity information and/or displacement information of the reference frame and the candidate frame. In some embodiments, the process for determining the depth information, the intensity information, and/or the displacement information may be found in
In 1510, the analysis module 310 may determine whether the candidate frame is the last frame. Specifically, the analysis module 310 may detect whether a subsequent frame of the candidate frame exists in the time domain. If no subsequent frame of the candidate frame exists (i.e., the candidate frame is the last frame), the process may proceed to operation 1512; otherwise, the process may proceed to operation 1514.
In 1512, the analysis module 310 may output the reference frame and depth information and/or displacement information corresponding to the reference frame if the candidate frame is the last frame.
In 1514, the analysis module 310 may determine a difference between the reference frame and the candidate frame. In some embodiments, the difference between the reference frame and the candidate frame may be determined based on the intensity information of the reference frame and the intensity information of the candidate frame. In some embodiments, the intensity information of the reference frame may be determined based on RGB intensities of the one or more first pixels, and the intensity information of the candidate frame may be determined based on RGB intensities of the one or more second pixels. In some embodiments, the intensity information of the reference frame and the intensity information of the candidate frame may be determined in 1508. In some embodiments, the intensity information of the reference frame and the intensity information of the candidate frame may be determined in 1514 before determining the difference between the reference frame and the candidate frame.
In 1516, the analysis module 310 may determine whether the difference between the reference frame and the candidate frame is greater than a threshold. If the difference between the reference frame and the candidate frame is greater than the threshold, the process may proceed to operation 1518; otherwise, the process may proceed to operation 1520.
In 1518, if the difference between the reference frame and the candidate frame is greater than the threshold, the analysis module 310 may designate the candidate frame as an updated reference frame and designate the frame subsequent to the candidate frame as an updated candidate frame. In some embodiments, the frame subsequent to the candidate frame may be the frame immediately adjacent to the candidate frame in the time domain. The updated reference frame and the updated candidate frame may then be passed back to operation 1506 to repeat the process 1500.
In 1520, if the difference between the reference frame and the candidate frame is smaller than or equal to the threshold, the analysis module 310 may designate the frame subsequent to the candidate frame as an updated candidate frame while keeping the reference frame unchanged. The updated reference frame and the updated candidate frame may then be passed back to operation 1506 to repeat the process 1500.
In some embodiments, operation 1518 or operation 1520 may output the updated reference frame and the updated candidate frame to be further processed by the analysis module 310. In some embodiments, the updated reference frame may be obtained by replacing the reference frame with the candidate frame when the difference between the reference frame and the candidate frame is greater than the threshold. In some embodiments, the updated candidate frame may be obtained by replacing the candidate frame with the subsequent frame. That is, the replacement of the candidate frame may be unconditional, and the replacement of the reference frame may be conditional.
The process 1500 may be terminated when the map is generated in operation 1512. In some embodiments, one or more termination conditions may be designated so that the process 1500 is terminated in a timely manner. For example, a counter may be used in the process 1500 so that the number of cycles of the process 1500 is smaller than or equal to a predetermined threshold value.
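The reference/candidate bookkeeping of operations 1504 through 1520, including the counter-based termination condition, might be sketched in Python as follows. The helper `intensity_difference` is a hypothetical stand-in for the difference computation of operation 1514, and returning the list of selected reference frames is an illustrative choice rather than the disclosure's output format.

```python
def track_reference_frames(frames, threshold, intensity_difference, max_cycles=10000):
    """Illustrative sketch of operations 1504-1520 of process 1500.

    frames: a time-ordered sequence of frames.
    intensity_difference: a hypothetical callable returning a scalar
        difference between two frames (e.g., based on RGB intensities of
        corresponding pixels).
    The candidate frame always advances, while the reference frame is
    replaced only when the difference exceeds the threshold.
    """
    if len(frames) < 2:
        return list(frames)
    reference = frames[0]
    references = [reference]            # reference frames selected over time
    candidate_index = 1
    for _ in range(max_cycles):         # counter-based termination condition
        candidate = frames[candidate_index]
        if candidate_index == len(frames) - 1:
            return references           # candidate is the last frame (operation 1512)
        if intensity_difference(reference, candidate) > threshold:
            reference = candidate       # conditional replacement (operation 1518)
            references.append(reference)
        candidate_index += 1            # unconditional advance (operations 1518/1520)
    return references
```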
In 1610, the analysis module 310 may obtain a first frame and a second frame from the plurality of frames obtained by the image sensor 810. In some embodiments, the analysis module 310 may select the first frame and the second frame from the plurality of frames obtained by one or more image sensors. In some embodiments, the first frame and the second frame may be adjacent to each other in the time domain. The first frame may be a current frame and the second frame may be the subsequent frame.
In 1620, the analysis module 310 may identify one or more first pixels in the first frame corresponding to one or more second pixels in the second frame. Identifying the one or more first pixels in the first frame corresponding to the one or more second pixels may be implemented using the process described in operation 1506 as illustrated in
In 1630, the analysis module 310 may obtain an initial depth based on the one or more first pixels and the one or more second pixels. In some embodiments, the initial depth may be set as a zero matrix. In 1640, the analysis module 310 may determine an initial displacement based on the one or more first pixels, the one or more second pixels and/or the initial depth. For example, operation 1640 may be implemented using Equation (1) as described in
In 1650, the analysis module 310 may determine an updated depth based on the one or more first pixels, the one or more second pixels and the initial displacement. In some embodiments, operation 1650 may be implemented using Equation (2) as described for
In 1660, the analysis module 310 may determine an updated displacement based on the one or more first pixels, the one or more second pixels, and/or the updated depth. In some embodiments, operation 1660 may be implemented using Equation (1) as described for
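Because Equations (1) and (2) are not reproduced here, the following sketch only captures the alternating structure of operations 1630 through 1660; `update_displacement` and `update_depth` are hypothetical placeholders standing in for those equations.

```python
import numpy as np

def alternate_depth_displacement(first_pixels, second_pixels,
                                 update_displacement, update_depth,
                                 iterations=5, initial_depth=None):
    """Sketch of the alternating refinement in operations 1630-1660.

    update_displacement and update_depth are hypothetical callables; each
    takes the two pixel sets and the current estimate of the other
    quantity and returns an updated estimate.
    """
    # Operation 1630: the initial depth may be set as a zero matrix.
    depth = np.zeros(len(first_pixels)) if initial_depth is None else initial_depth
    # Operation 1640: initial displacement from the pixels and the initial depth.
    displacement = update_displacement(first_pixels, second_pixels, depth)
    for _ in range(iterations):
        # Operation 1650: update the depth using the current displacement.
        depth = update_depth(first_pixels, second_pixels, displacement)
        # Operation 1660: update the displacement using the updated depth.
        displacement = update_displacement(first_pixels, second_pixels, depth)
    return depth, displacement
```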
As illustrated in
In 1710, the image data may be obtained by the analysis module 310. In some embodiments, the initial value of displacement may be determined based on the image data. More specifically, the initial value of displacement may be determined based on displacements in the image data. In some embodiments, the displacements in the image data may include a displacement of a movement unit (e.g., two wheels) and a displacement of a camera relative to the movement unit within a time interval when two adjacent frames are obtained.
In 1720, the analysis module 310 may obtain a first displacement associated with the movement unit based on the image data. In some embodiments, the first displacement associated with the movement unit may include a displacement of a central point of the two wheels within the time interval. In some embodiments, the first displacement associated with the movement unit may include a displacement of a point where a navigation sensor is located within the time interval. In some embodiments, the navigation sensor may be installed in the central point of the two wheels. In some embodiments, the time interval may be the time interval within which the image sensor 810 obtains the two adjacent frames.
In 1730, the analysis module 310 may obtain a second displacement associated with the image sensor 810 with respect to the movement unit. In some embodiments, the second displacement may include a displacement of the image sensor 810 relative to the movement unit. In some embodiments, the image sensor 810 may include a camera.
In 1740, the analysis module 310 may determine a third displacement associated with the image sensor 810 based on the first displacement and the second displacement. In some embodiments, the third displacement may be a vector sum of the first displacement and the second displacement. In some embodiments, the third displacement may serve as the initial value of displacement used in determining the initial displacement.
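A minimal illustration of the vector-sum composition in operation 1740, assuming both displacements are expressed in a common coordinate frame, is given below; the function and variable names are illustrative only.

```python
import numpy as np

def camera_displacement(wheel_center_displacement, camera_relative_displacement):
    """Compose the third displacement (operation 1740) as the vector sum of
    the movement-unit displacement and the displacement of the camera
    relative to the movement unit, in a common coordinate frame."""
    return (np.asarray(wheel_center_displacement, dtype=float)
            + np.asarray(camera_relative_displacement, dtype=float))

# Example: camera_displacement([0.30, 0.05], [0.02, -0.01]) -> array([0.32, 0.04])
```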
During the movement of the intelligent robot 110, a holder may need to be controlled to obtain a precise attitude of the intelligent robot 110. In some embodiments, the attitude of the intelligent robot 110 may be controlled by controlling a rotary angle of an axis of the holder 930.
In 1715, the image data may be obtained by the analysis module 310. As described in
In 1725, the analysis module 310 may obtain a first rotary angle of the movement unit with respect to a reference axis based on the image data. In some embodiments, the first rotary angle with respect to the reference axis associated with the movement unit may be obtained based on rotary information from the image data. In some embodiments, the first rotary angle may include an angle through which the movement unit rotates within the time interval. In some embodiments, the time interval may be the time interval within which the image sensor 810 obtains the two adjacent frames.
In 1735, the analysis module 310 may obtain a second rotary angle of the image sensor 810 with respect to the movement unit within the time interval. In some embodiments, the second rotary angle may include a rotary angle of the image sensor 810 relative to the movement unit. In some embodiments, the image sensor 810 may include a camera.
In 1745, the analysis module 310 may determine a third rotary angle of the image sensor 810 with respect to the reference axis. In some embodiments, the third rotary angle may be determined based on the first rotary angle and the second rotary angle. In some embodiments, the third rotary angle may be a vector sum of the first rotary angle and the second rotary angle.
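Similarly, the angle composition of operation 1745 might be sketched as follows; wrapping the result to the range [-pi, pi] is an added assumption for numerical convenience and is not stated in the disclosure.

```python
import math

def camera_rotary_angle(movement_unit_angle, camera_relative_angle):
    """Compose the third rotary angle (operation 1745) as the sum of the
    first and second rotary angles (in radians), wrapped to [-pi, pi]."""
    total = movement_unit_angle + camera_relative_angle
    # atan2 of (sin, cos) wraps the summed angle back into [-pi, pi].
    return math.atan2(math.sin(total), math.cos(total))
```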
During the movement of the intelligent robot 110, the sensor(s) 230 may be installed in the movement module 920 and the holder 930 to obtain information. In some embodiments, the sensor(s) 230 may be installed in the carrier 1010, or in an intelligent phone held by the holder 930. In some embodiments, the movement module 920 and the holder 930 may need all-directional stabilization to obtain precise and reliable information. The process for keeping the balance of the movement module 920 and the holder 930 relative to the horizontal plane may be described in detail in the description of
As shown in
Firstly, the gyroscope data and the accelerometer data of the first frame may be processed at time t1. The integrator 1820 may generate the output angle θ1 associated with the first frame. The accelerometer 820 may generate a first angle θ1′. The adder 1840 may generate a second angle θ1″ based on the output angle θ1 and the first angle θ1′. In some embodiments, the second angle θ1″ may be obtained by vector subtracting the output angle θ1 from the first angle θ1′. A compensatory angular velocity ω1″ may be determined by the component extractor 1830 based on the second angle θ1″. In some embodiments, the component extractor 1830 may include a differentiator.
Then, the gyroscope data and the accelerometer data of the second frame may be processed at time t2. The gyroscope 830 may generate an angular velocity ω2. The adder 1810 may generate a revised angular velocity ω2′ based on the angular velocity ω2 and the compensatory angular velocity ω1″. In some embodiments, the revised angular velocity ω2′ may be obtained as a vector sum of the angular velocity ω2 and the compensatory angular velocity ω1″. Finally, the integrator 1820 may output the angle θ2 associated with the second frame at time t2 based on the revised angular velocity ω2′.
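The adder/integrator/component-extractor structure described above corresponds to a first-order complementary filter. A hedged sketch is shown below; the proportional gain used to derive the compensatory term and the class interface are assumptions, not values from the disclosure.

```python
class ComplementaryFilter:
    """Hedged sketch of the gyroscope/accelerometer fusion described above.

    The previous output angle is compared with the accelerometer angle to
    form a second (error) angle; a compensatory angular velocity is derived
    from it (here by simple proportional scaling, standing in for the
    component extractor); the revised angular velocity is then integrated
    over the sampling interval.
    """

    def __init__(self, gain=0.02, initial_angle=0.0):
        self.angle = initial_angle      # output angle theta (e.g., theta_1 at t1)
        self.gain = gain                # weight of the compensatory term (assumed)

    def update(self, gyro_rate, accel_angle, dt):
        # Second angle: accelerometer angle minus the previous output angle.
        error = accel_angle - self.angle
        # Compensatory angular velocity derived from the second angle.
        compensatory_rate = self.gain * error / dt
        # Revised angular velocity = gyroscope rate + compensatory rate,
        # integrated over the sampling interval dt (the integrator 1820).
        self.angle += (gyro_rate + compensatory_rate) * dt
        return self.angle               # output angle for the current frame
```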
In some embodiments, the process described in
In 1910, the processor 210 may obtain a plurality of frames including a first frame and a second frame. In some embodiments, the first frame and the second frame may be captured by the image sensor 810 within a time interval. For example, the first frame may be captured by the image sensor 810 at time t1. The second frame may be captured by the image sensor 810 at time t2. A time interval between t1 and t2 may be a sampling interval of the image sensor 810.
In 1920, the processor 210 may obtain gyroscope data and accelerometer data associated with the first frame and/or the second frame. In some embodiments, the gyroscope data and the accelerometer data may include parameters such as angular velocities and angles.
In 1930, the processor 210 may determine first angular information based on the accelerometer data associated with the first frame. In some embodiments, the first angular information may include a first angle.
In 1940, the processor 210 may determine compensatory angular information based on the first angular information and angular information associated with the first frame. In some embodiments, the angular information associated with the first frame may be an output angle associated with the first frame. In some embodiments, the first angular information may be processed by subtracting the output angle associated with the first frame from the first angular information. In some embodiments, the compensatory angular information may include a compensatory angular velocity. The compensatory angular velocity may be determined by the component extractor 1830 based on the subtraction of the output angle associated with the first frame from the first angular information.
In 1950, the processor 210 may determine second angular information based on the compensatory angular information and the gyroscope data associated with the second frame. In some embodiments, the second angular information may include an angle between the horizontal plane and the Z axis, determined by the processor 210 for the second frame at time t2 when the second frame is captured.
As illustrated in
The process for keeping horizontal balance of the movement module 920 or the holder 930 may be illustrated in
In 2010, the processor 210 may obtain a first displacement of a motor along a rotation axis. In some embodiments, the rotation axis may include the Z axis, and the first displacement may be a vector along the Z axis.
In 2020, the processor 210 may determine whether the displacement of the motor along the Z axis is greater than a threshold. In some embodiments, the threshold may be a limit value within which the second type sensors 1240 can obtain information stably.
In 2030, the processor 210 may generate a first control signal to cause the motor to move to an initial position when the displacement of the motor is greater than the threshold. In some embodiments, the initial position may be a predetermined position suitable for obtaining the information.
In 2040, the processor 210 may output the first control signal to the motor to move the second type sensors 1240 installed in the intelligent phone back to the initial position so as to obtain stable information.
In 2050, the processor 210 may obtain a first acceleration along the rotation axis when the displacement of the motor is smaller than or equal to the threshold. In some embodiments, the first acceleration may be obtained by the accelerometer 820 installed in the intelligent phone.
In 2060, the processor 210 may generate a second acceleration based on the first acceleration. In some embodiments, the second acceleration may be a filtered acceleration of the first acceleration.
In 2070, the processor 210 may determine a second displacement based on the second acceleration. In some embodiments, the second displacement may be calculated based on an integral of the second acceleration. In some embodiments, the second displacement may be a vector along the Z axis.
In 2080, the processor 210 may generate a second control signal to control a movement of the motor based on the second displacement. In some embodiments, the processor 210 may determine a remaining gap (a remaining range available for the movement) of displacement based on the second displacement and the threshold, and generate the second control signal accordingly. The processor 210 may then control the movement of the sensors in the intelligent phone along the Z axis.
In 2090, the processor 210 may output the second control signal to the motor.
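A hedged sketch of how operations 2010 through 2090 might be combined into a single control step is given below; the low-pass filter constant, the integration scheme, and the returned command dictionaries are illustrative assumptions rather than values from the disclosure.

```python
def z_axis_control(motor_displacement, accel_z, dt, threshold,
                   state=None, filter_alpha=0.2):
    """Hedged sketch combining operations 2010-2090 into one control step.

    motor_displacement: first displacement of the motor along the Z axis.
    accel_z: first acceleration along the Z axis (from the accelerometer).
    state: carries the filtered acceleration and integrated velocity
        between calls.
    """
    if state is None:
        state = {"filtered_accel": 0.0, "velocity": 0.0}

    # Operations 2020-2040: if the motor has drifted beyond the threshold,
    # output a first control signal returning it to the initial position.
    if abs(motor_displacement) > threshold:
        return {"command": "reset_to_initial_position"}, state

    # Operation 2060: second acceleration as a low-pass-filtered first acceleration.
    state["filtered_accel"] = (filter_alpha * accel_z
                               + (1.0 - filter_alpha) * state["filtered_accel"])
    # Operation 2070: second displacement from integrating the filtered acceleration.
    state["velocity"] += state["filtered_accel"] * dt
    second_displacement = state["velocity"] * dt
    # Operations 2080-2090: second control signal limited to the remaining gap.
    remaining_gap = threshold - abs(motor_displacement + second_displacement)
    return {"command": "compensate", "remaining_gap": remaining_gap}, state
```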
While the present disclosure is described and illustrated with reference to a plurality of embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure, as defined by the appended claims and their equivalents.