Vehicles have been proposed that are capable of navigating difficult terrain and environments. These vehicles do not exclusively use wheels to navigate, but rather are equipped with legs that allow the vehicle to step or walk through difficult terrain. For example, such a vehicle is capable of navigating through a forest by moving around trees, climbing over objects such as downed trees or rocks, crossing creeks and streams, and otherwise traversing the terrain.
Furthermore, some of the proposed vehicles are capable of autonomous movement, such that the vehicles can navigate the terrain towards a destination without an active user or driver present. In order to navigate autonomously, these vehicles require knowledge of the space within which they are navigating, including an understanding of which objects and obstacles to travel over or around.
In one aspect, we now provide systems for imaging a vehicle's environment, including long-range, high-resolution, three-dimensional, surround view imaging of the vehicle's environment.
In preferred aspects, the surround view can enable or facilitate autonomous navigation of the vehicle through the environment by identifying obstacles, paths, etc. The present systems can provide locally processed, real-time detection of objects in a high-vibration environment. In particular, embodiments described herein can provide a three-dimensional vision system for an omnidirectional vehicle, which requires a 360-degree surround view for autonomous navigation.
In a preferred aspect, vehicles are provided that comprise a) a plurality of wheel-leg components, wherein the plurality of wheel-leg components can operate to provide locomotion to the vehicle; and b) an imaging system for generating a surround view image of the vehicle's environment. Preferably, the imaging system can generate a surround view image comprising a 360-degree, three-dimensional view of an environment surrounding the vehicle. Preferably, the vehicle is configured to operate autonomously based on data from the imaging system. The imaging system comprises a plurality of cameras, preferably positioned on the vehicle to provide a 360-degree view around the vehicle. The vehicle suitably comprises a chassis in communication with the wheel-leg components.
The preferred lightweight construction, multi-jointed wheel-leg components, and active suspension of the preferred omnidirectional walking vehicle described herein present a unique challenge for traditional stereo vision systems, due to constant motion and camera mounting constraints. In one aspect, the present vehicles are capable of locomotion using either or both of 1) a walking motion, i.e., a step or walk state, and 2) rolling traction, i.e., a roll or driving state.
We provide imaging view systems for a vehicle capable of autonomous control and omnidirectional movement, including wheeled locomotion and walking locomotion. In some embodiments, the vehicle includes four wheel-leg components that are each capable of up to six or seven degrees of freedom, for a total of 24 or 28 degrees of freedom for the vehicle. For instance, the wheel-leg components are capable of actively driven wheel locomotion (one degree of freedom) and five degrees of freedom within joints of the leg. Such degrees of freedom also are described in U.S. Patent Application Publication 2020/0216127. The wheel-leg components are configured to operate cooperatively to provide different walking gaits that are appropriate to a given terrain.
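As a minimal arithmetic sketch of the degree-of-freedom count above (the six-degree-of-freedom case is shown; the constant names are illustrative, not terms of the disclosure):

```python
# Illustrative tally: one actively driven wheel degree of freedom plus
# five leg-joint degrees of freedom per wheel-leg component, across four
# wheel-leg components.
WHEEL_DOF = 1        # actively driven wheel rotation
LEG_JOINT_DOF = 5    # degrees of freedom within joints of the leg
NUM_COMPONENTS = 4   # wheel-leg components on the vehicle

dof_per_component = WHEEL_DOF + LEG_JOINT_DOF   # 6 per component
total_dof = NUM_COMPONENTS * dof_per_component  # 24 for the vehicle
```

The seven-degree-of-freedom variant simply adds one more joint per component, yielding 28 in total.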
In order to autonomously navigate, a detailed and accurate understanding of the vehicle's surroundings is obtained using the surround view imaging. This allows the vehicle to select a navigable path through its environment. Furthermore, this allows the vehicle to select the appropriate walking gait for navigating the selected path. The surround view imaging can be updated as the vehicle travels through its surroundings (e.g., changes its position relative to the surroundings). It should be appreciated that the selected path (e.g., direction of locomotion) may be updated at least as frequently as the surround view imaging is updated. As discussed, in certain aspects, the present vehicles may be autonomous or semi-autonomous. An autonomous vehicle is a vehicle having an autonomous driving function that autonomously controls the vehicle's behavior by identifying and evaluating surrounding conditions. To achieve a high level of autonomous driving function, an autonomous vehicle must safely control its behavior by perceiving its surrounding environment under a variety of conditions and by reliably detecting and evaluating that environment.
In a fully autonomous vehicle, the vehicle may perform all driving tasks under all conditions, and little or no driving assistance is required from a human driver. In a semi-autonomous (or partially autonomous) vehicle, the automated driving system may perform some or all parts of the driving task under some conditions, with a human driver regaining control under other conditions. In other semi-autonomous systems, the vehicle's automated system may oversee steering, accelerating, and braking under some conditions, although the human driver is required to continue paying attention to the driving environment throughout the journey while also performing the remainder of the necessary driving tasks.
Methods are also provided, including methods for operating a vehicle. Preferred methods may include (a) providing a vehicle that comprises i) a plurality of wheel-leg components coupled to a chassis, wherein the plurality of wheel-leg components can provide wheeled locomotion and walking locomotion; and ii) an imaging system for generating a surround view image of the vehicle's environment; and (b) operating the vehicle. In preferred aspects, the imaging system can generate a surround view image comprising a 360-degree, three-dimensional view of an environment surrounding the vehicle. Preferably, the imaging system comprises a plurality of cameras, suitably positioned at varying locations on the vehicle to enable a 360-degree image of the vehicle's environment. In preferred aspects, the vehicle may be operated autonomously, for example operated partially autonomously or operated fully autonomously. Suitably, in such methods the vehicle further comprises a chassis in communication with the wheel-leg components.
Other aspects of the invention are disclosed infra.
The accompanying drawings, which are incorporated in and form a part of the Description of Embodiments, illustrate various embodiments of the subject matter and, together with the Description of Embodiments, serve to explain principles of the subject matter discussed below. Unless specifically noted, the drawings referred to in this Brief Description of Drawings should be understood as not being drawn to scale. Herein, like items are labeled with like item numbers.
The following Description of Embodiments is merely provided by way of example and not of limitation. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding background or in the following Description of Embodiments.
Reference will now be made in detail to various embodiments of the subject matter, examples of which are illustrated in the accompanying drawings. While various embodiments are discussed herein, it will be understood that they are not intended to be limiting. On the contrary, the presented embodiments are intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the various embodiments as defined by the appended claims. Furthermore, in this Description of Embodiments, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present subject matter. However, embodiments may be practiced without these specific details. In other instances, well known methods, procedures, and components have not been described in detail so as not to unnecessarily obscure aspects of the described embodiments.
Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing and other symbolic representations of operations on data within an electrical circuit. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be one or more self-consistent procedures or instructions leading to a desired result. The procedures are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in an electronic device.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the description of embodiments, discussions utilizing terms such as “generating,” “determining,” “simulating,” “transmitting,” “iterating,” “comparing,” “maintaining,” “calculating,” or the like, refer to the actions and processes of an electronic device such as: a processor, a memory, a computing system, a mobile electronic device, or the like, or a combination thereof. The electronic device manipulates and transforms data represented as physical (electronic and/or magnetic) quantities within the electronic device's registers and memories into other data similarly represented as physical quantities within the electronic device's memories or registers or other such information storage, transmission, processing, or display components.
Embodiments described herein may be discussed in the general context of processor-executable instructions residing on some form of non-transitory processor-readable medium, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments.
In the figures, a single block may be described as performing a function or functions; however, in actual practice, the function or functions performed by that block may be performed in a single component or across multiple components, and/or may be performed using hardware, using software, or using a combination of hardware and software. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, logic, circuits, and steps have been described generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Also, the example systems and devices described herein may include components other than those shown, including well-known components.
Various techniques described herein may be implemented in hardware, software, firmware, or any combination thereof, unless specifically described as being implemented in a specific manner. Any features described as modules or components may also be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a non-transitory processor-readable storage medium comprising instructions that, when executed, perform one or more of the methods described herein. The non-transitory processor-readable data storage medium may form part of a computer program product, which may include packaging materials.
The non-transitory processor-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, other known storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a processor-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer or other processor.
Various embodiments described herein may be executed by one or more processors, such as one or more motion processing units (MPUs), sensor processing units (SPUs), host processor(s) or core(s) thereof, digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), application specific instruction set processors (ASIPs), field programmable gate arrays (FPGAs), programmable logic controllers (PLCs), complex programmable logic devices (CPLDs), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein, or other equivalent integrated or discrete logic circuitry. The term “processor,” as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. As employed in the subject specification, the term “processor” can refer to substantially any computing processing unit or device, including, but not limited to, single-core processors; single processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Moreover, processors can exploit nano-scale architectures such as, but not limited to, molecular and quantum-dot based transistors, switches and gates, in order to optimize space usage or enhance performance of user equipment. A processor may also be implemented as a combination of computing processing units.
In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured as described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of an SPU/MPU and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with an SPU core, MPU core, or any other such configuration.
Discussion begins with a description of a vehicle capable of autonomous navigation using both wheeled locomotion and walking locomotion, in accordance with various embodiments. An example system for generating a surround view image for use in such a vehicle is then described.
Embodiments described herein provide a walking vehicle including a chassis and a plurality of wheel-leg components. The plurality of wheel-leg components are collectively operable to provide wheeled locomotion and walking locomotion. In some embodiments, the wheel-leg components have multiple degrees of freedom. In some embodiments, the wheel-leg components provide the wheeled locomotion in a retracted position and provide the walking locomotion in an extended position. In one embodiment, the plurality of wheel-leg components utilize a mammalian walking gait during the walking locomotion. In one embodiment, the plurality of wheel-leg components utilize a reptilian walking gait during the walking locomotion.
In preferred aspects, vehicles and wheel-leg components as disclosed in U.S. Patent Publication 2020/0216127 may be utilized.
Embodiments described herein provide long-range, high-resolution, three-dimensional, surround view imaging of a vehicle's environment. The surround view enables autonomous navigation of the vehicle through the environment by identifying obstacles, paths, etc. Embodiments described herein provide a three-dimensional vision system for an omnidirectional vehicle, which requires a 360-degree surround view for autonomous navigation. In order to autonomously navigate, a detailed and accurate understanding of the vehicle's surroundings is obtained using the surround view imaging. This allows the vehicle to select a navigable path through its environment. Furthermore, this allows the vehicle to select the appropriate walking gait for navigating the selected path. The surround view imaging can be updated as the vehicle travels through its surroundings (e.g., changes its position relative to the surroundings). It should be appreciated that the selected path (e.g., direction of locomotion) may be updated at least as frequently as the surround view imaging is updated.
Multiple (such as four per vehicle) wheel-leg components are preferably used with a vehicle.
In one embodiment, wheel-leg components 110 include six degrees of freedom. It should be appreciated that while wheel-leg components 110 are controlled collectively to provide rolling and walking locomotion, each wheel-leg component 110 is capable of different movement or positioning during operation. For example, while using wheeled locomotion on an upward slope, in order to maintain the body of vehicle 100 level with flat ground, the front wheel-leg components 110 may be retracted and the rear wheel-leg components 110 be extended. In another example, while using walking locomotion to traverse rough terrain, each wheel-leg component 110, or opposite pairs of wheel-leg components 110 (e.g., front left and rear right), can move differently than the other wheel-leg components 110.
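The slope-leveling behavior described above can be sketched with a simplified geometric model (the `leg_heights_for_slope` helper, the wheelbase, and the nominal leg height are illustrative assumptions, not parameters of the disclosure):

```python
import math

# Simplified sketch (all values assumed): to keep the chassis level on an
# upward slope, the front wheel-leg components retract and the rear
# wheel-leg components extend by equal amounts about a nominal height.
def leg_heights_for_slope(slope_deg, wheelbase_m=2.0, nominal_m=0.5):
    """Return (front, rear) leg extension heights that level the chassis."""
    delta = (wheelbase_m / 2.0) * math.tan(math.radians(slope_deg))
    return nominal_m - delta, nominal_m + delta

# On a 10-degree upward slope, the front legs retract below nominal
# height while the rear legs extend above it.
front, rear = leg_heights_for_slope(10.0)
```

A real controller would of course close the loop on measured chassis attitude rather than on an assumed slope angle.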
In some embodiments, vehicle 100 includes four wheel-leg components 110 that are each capable of up to six degrees of freedom, for a total of twenty-four degrees of freedom for the vehicle. For instance, the wheel-leg components are capable of actively driven wheel locomotion (one degree of freedom) and five degrees of freedom within joints of the leg. The wheel-leg components 110 are configured to operate cooperatively to provide different walking gaits that are appropriate to a given terrain.
Embodiments of the described vehicle are serviceable in different use cases, such as use in extreme environments. As illustrated, vehicle 100 is shown in a mountainous region with uneven and rocky terrain, requiring the usage of walking locomotion. The described vehicle may be of a size to hold and transport passengers, or may be a smaller unmanned vehicle meant for exploration or cargo transport. Depending on the use case, the vehicle's mobility capabilities cover most types of terrain traversal while in walking locomotion mode. The mobility capabilities include, without limitation, 1) step-up, 2) ramp or incline climb, 3) obstacle step-over, and 4) gap crossing.
In some embodiments, vehicle 100 can operate in different walking locomotion modes, such as a mammalian walking gait or a reptilian walking gait. As with the mammalian and reptilian walking gaits found naturally in mammals and reptiles, different walking gaits are amenable to different terrains and environments. For instance, a reptilian gait has a wide stance, increasing balance, while a mammalian gait generally improves traversal in the forward direction by providing increased speed. Other walking gaits, or combinations of features from different walking gaits found in nature, can be combined to provide desired mobility and locomotion. For example, vehicle 100 may require the ability to fold wheel-leg components 110 so that they are compact when retracted.
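The gait-selection trade-off described above can be sketched as follows (the roughness score, its threshold, and the `select_gait` helper are hypothetical illustrations, not part of the disclosure):

```python
# Illustrative gait selection: rough terrain favors the wide-stance,
# balance-oriented reptilian gait; even terrain favors the faster,
# forward-oriented mammalian gait. Threshold and score are assumptions.
def select_gait(roughness):
    """Pick a walking gait from a terrain-roughness score in [0, 1]."""
    if not 0.0 <= roughness <= 1.0:
        raise ValueError("roughness score must be in [0, 1]")
    if roughness > 0.5:
        return "reptilian"   # wide stance, increased balance
    return "mammalian"       # increased forward speed

gait_on_rocks = select_gait(0.8)
gait_on_flat = select_gait(0.2)
```

In practice the choice would likely blend gait features and consider more terrain attributes than a single scalar score.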
Vehicle 100 includes a system for generating a surround view image of vehicle 100's environment. The surround view enables autonomous navigation of vehicle 100 through the environment by identifying obstacles, paths, etc. The surround view image generation system provides locally processed, real-time detection of objects in a high-vibration environment. Embodiments described herein provide a three-dimensional vision system for vehicle 100, which requires a 360-degree surround view for autonomous and omnidirectional navigation.
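As one illustrative building block of such a stereo vision system, depth recovery from a camera pair follows the standard pinhole relation depth = focal length × baseline / disparity (the numeric values below are assumptions for illustration, not parameters of the disclosed system):

```python
# Standard pinhole stereo relation (all numeric values illustrative):
# a point's depth in meters from the pixel disparity between the two
# cameras of a stereo pair.
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth in meters of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm baseline, 8 px disparity.
depth_m = stereo_depth(focal_px=800.0, baseline_m=0.1, disparity_px=8.0)
```

The inverse relationship also shows why a wider baseline or longer focal length extends usable range: distant points produce small disparities that must remain resolvable.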
In accordance with various embodiments, the system for generating a surround view image utilizes multiple stereo cameras for image capture. It should be appreciated that any number of stereo cameras may be utilized in generating the surround view image. In one embodiment, for example as illustrated in
Embodiments described herein utilize a system of stereo cameras to generate a surround view image using location and mapping techniques, as well as the pose of the vehicle itself. It should be appreciated that the pose of the vehicle can be determined either directly (e.g., using motor encoders of the wheel-leg components to determine an absolute pose of the vehicle) or implicitly (e.g., by knowing the position of the vehicle relative to the environment from the surround stereo image). For example, when the vehicle is located in uneven terrain (e.g., where the wheel-leg components are subject to slipping or sinking in soft terrain) it may be difficult to determine the pose of the vehicle. The system described herein is capable of generating a surround stereo image using cameras on all sides of the vehicle. This is useful, for example, where a horizon line moves relative to the vehicle or where rocks move upon contact with the vehicle.
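The direct-versus-implicit pose determination described above can be sketched as a simple fallback (the `estimate_pose` helper and its inputs are hypothetical; a real system would more likely fuse the two sources):

```python
# Hypothetical sketch: pose can be determined directly from motor
# encoders of the wheel-leg components, or implicitly from the surround
# stereo image when the legs may be slipping or sinking in soft terrain.
def estimate_pose(encoder_pose, visual_pose, slipping):
    """Prefer the encoder-derived pose unless slip makes it unreliable."""
    return visual_pose if slipping else encoder_pose

# On firm ground the encoder pose is trusted; on slipping terrain the
# pose implied by the surround stereo image is used instead.
firm = estimate_pose((1.0, 0.0), (1.1, 0.1), slipping=False)
soft = estimate_pose((1.0, 0.0), (1.1, 0.1), slipping=True)
```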
When using walking locomotion to navigate terrain, wheel-leg components of the vehicle are operable to walk or step through the environment, where the wheel-leg components are lifted and placed in different locations to move the vehicle. The surround view image allows for the accurate and appropriate placement of the wheel-leg components. Moreover, using the surround view image, a best route through space to a destination or objective can be determined. This best route can be updated during the movement of the vehicle continuously, allowing for adjustments to the route as new information (e.g., obstacles) is obtained from the surround view image. For example, a large rock may block the view of a downed tree. As the vehicle moves around the large rock, the surround view image identifies the downed tree. The vehicle navigation can update to determine whether the downed tree can be traversed, or whether the vehicle should determine another route of navigation.
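The continuous route updating described above, such as re-planning once a downed tree is revealed behind a rock, can be sketched with a small occupancy-grid search (the grid representation and the `shortest_route` helper are illustrative assumptions; any path planner could be substituted):

```python
from collections import deque

# Illustrative breadth-first search over a 2D occupancy grid
# (0 = free, 1 = blocked). Returns a list of cells or None.
def shortest_route(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no navigable path

grid = [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
route = shortest_route(grid, (0, 0), (2, 2))

# New information from the surround view image: the center cell is
# blocked (e.g., a newly revealed downed tree). Re-plan around it.
grid[1][1] = 1
reroute = shortest_route(grid, (0, 0), (2, 2))
```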
The images generated from stereo cameras 602a through 602n are received at surround view image generator 610. Surround view image generator 610 generates a surround view image of the vehicle for use in navigation. The surround view image is a 360-degree three-dimensional image of the environment surrounding the vehicle. In some embodiments, the range and resolution of the surround view image are such that the vehicle can determine a navigable path through its environment. For example, the range of the surround view image can be related to the speed of the vehicle. For instance, the range of the surround view image can be shorter for slower speeds. This can reserve additional digital processing for improving or increasing the resolution of the surround view image.
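The speed-dependent range described above can be sketched as follows (the look-ahead-time model, the minimum range, and the `imaging_range` helper are assumptions for illustration):

```python
# Illustrative model: required sensing range scales with vehicle speed
# (a fixed look-ahead time), never dropping below a minimum range.
# Shorter range at low speed leaves processing budget for resolution.
def imaging_range(speed_mps, min_range_m=5.0, lookahead_s=3.0):
    """Surround-view range in meters for a given speed in m/s."""
    return max(min_range_m, speed_mps * lookahead_s)

slow_range = imaging_range(1.0)    # clamped to the minimum range
fast_range = imaging_range(10.0)   # longer look-ahead at speed
```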
The surround view image is received at autonomous navigation module 620, which uses the surround view image for autonomous navigation to a destination or objective 622. The destination or objective 622 can be submitted by a user or another computer system, and is used for directing the navigation of the vehicle. Using the surround view image, the vehicle can navigate through its environment to the destination or objective 622, aware of the terrain and any obstacles that must be circumnavigated. Autonomous navigation module 620 transmits control instructions to locomotion system 630 for moving the vehicle through the environment.
Locomotion system 630 receives control instructions from autonomous navigation module 620 and uses the control instructions to control the locomotion of the vehicle. In some embodiments, the vehicle includes walking legs or wheel-leg components, and can operate in different walking locomotion modes, such as a mammalian walking gait or a reptilian walking gait. Using the control instructions, locomotion system 630 controls the operation of the walking legs or wheel-leg components to utilize the selected locomotion (e.g., walking gait, wheeled locomotion, pose, etc.) to propel the vehicle through the environment.
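The dispatch of control instructions to the locomotion system can be sketched as follows (the instruction format, mode names, and `apply_instruction` helper are hypothetical; the disclosure does not specify an instruction schema):

```python
# Hypothetical dispatch: a control instruction names a locomotion mode,
# and the locomotion system engages the corresponding behavior.
def apply_instruction(instruction):
    modes = {
        "wheeled": "driving wheels engaged",
        "mammalian_gait": "walking with mammalian gait",
        "reptilian_gait": "walking with reptilian gait",
    }
    mode = instruction.get("mode")
    if mode not in modes:
        raise ValueError("unknown locomotion mode: %r" % mode)
    return modes[mode]

result = apply_instruction({"mode": "reptilian_gait"})
```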
Turning now to the figures,
It is appreciated that computer system 700 of
Computer system 700 of
Referring still to
Computer system 700 also includes an I/O device 720 for coupling computer system 700 with external entities. For example, in one embodiment, I/O device 720 is a modem for enabling wired or wireless communications between computer system 700 and an external network such as, but not limited to, the Internet. In one embodiment, I/O device 720 includes a transmitter. Computer system 700 may communicate with a network by transmitting data via I/O device 720.
Referring still to
The examples set forth herein were presented in order to best explain, to describe particular applications, and to thereby enable those skilled in the art to make and use embodiments of the described examples. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. Many aspects of the different example embodiments that are described above can be combined into new embodiments. The description as set forth is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Reference throughout this document to “one embodiment,” “certain embodiments,” “an embodiment,” “various embodiments,” “some embodiments,” or similar term means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any embodiment may be combined in any suitable manner with one or more other features, structures, or characteristics of one or more other embodiments without limitation.
In particular and in regard to the various functions performed by the above described components, devices, systems and the like, the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter.
The aforementioned systems and components have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components. Any components described herein may also interact with one or more other components not specifically described herein.
In addition, while a particular feature of the subject innovation may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
Thus, the embodiments and examples set forth herein were presented in order to best explain various selected embodiments of the present invention and its particular application and to thereby enable those skilled in the art to make and use embodiments of the invention. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the embodiments of the invention to the precise form disclosed.