The present disclosure relates generally to power closure member systems for motor vehicles and, more particularly, to a user-activated, non-contact power closure member system for moving a closure member relative to a vehicle body between a closed position and an open position or from the open position to the closed position.
This section provides background information related to the present disclosure which is not necessarily prior art.
Motor vehicles, such as sports utility vehicles, can be designed to include a user-activated, non-contact power closure member system (e.g., power liftgate system) for automatically opening a closure member of the vehicle. The power closure member system includes a sensor to detect motion of the user desiring to open the closure member, for example a kicking motion of the user's foot beneath a rear bumper in the event that the closure member is a rear liftgate. The system includes technology to confirm the user, who is in possession of a key fob associated with the vehicle, is the source of the motion, so that the closure member is not incorrectly activated, for example by another human, animal, weather conditions, or objects which could enter the space beneath the bumper. The system allows for convenient, user-friendly opening of the closure member when the user's hands are occupied, for example when the user is holding items to be loaded in the vehicle. However, the user-activated, non-contact power closure member systems which are currently available could be improved.
This section provides a general summary of the present disclosure and is not a comprehensive disclosure of its full scope or all of its features, aspects and objectives.
Accordingly, it is an aspect of the present disclosure to provide a user-activated non-contact power closure member system for detecting a gesture and operating a closure member of a vehicle. The system includes at least one non-contact sensor attached to a vehicle body for detecting at least one of an object and motion corresponding to the gesture made by a user and outputting data in response to detecting the at least one of the object and the motion. The at least one non-contact sensor includes a radar based gesture recognition subassembly for providing an intermediate radar field within a predetermined distance from the radar based gesture recognition subassembly in which the user can interact. An indicator is attached to the vehicle body for informing the user of an appropriate location to make the gesture. An electronic control unit is coupled to the indicator and the at least one non-contact sensor and is configured to receive and analyze the data output by the at least one non-contact sensor. The electronic control unit is also configured to determine whether the data corresponds with an activation gesture to transition to a triggering event mode defined by the gesture made by the user corresponding to the activation gesture and a non-triggering event mode defined by the gesture not corresponding to the activation gesture and initiate movement of the closure member in response to transitioning to the triggering event mode. The electronic control unit is additionally configured to notify the user using the indicator.
It is another aspect of the present disclosure to provide a method for operating a closure member of a vehicle using a non-contact power closure member system including an indicator and a non-contact sensor including a radar based gesture recognition subassembly and an electronic control unit. The method includes the step of detecting a key fob associated with the vehicle within a predetermined distance of the vehicle. Next, notifying a user to present a gesture using the indicator. The method proceeds by generating an intermediate radar field adjacent to the vehicle using the radar based gesture recognition subassembly and detecting the gesture in the intermediate radar field made by the user. The method also includes the steps of determining a time frame for the gesture made by the user and comparing the gesture with an activation gesture and the time frame with a period of time required to initiate a triggering-event mode for operating the closure member of the vehicle.
The user-activated, non-contact power closure member system according to the present disclosure provides numerous benefits, which are especially attractive to a user of the vehicle. Due to the indicator, also referred to as an icon, the user is now aware of whether the system is activated, in motion, and/or waiting for a gesture signal, such as a kicking motion, as they approach the vehicle. The user is also informed that they are making the activation gesture in the correct location, and that the activation gesture has been received by the system.
These and other aspects and areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purpose of illustration only and are not intended to limit the scope of the present disclosure.
In accordance with another aspect of the present disclosure, the system includes at least one sensor for sensing at least one of an object and motion adjacent the closure member and outputting data corresponding to the at least one of the object and motion. At least one indicator is disposed on the vehicle. An electronic control unit is coupled to the at least one sensor and the at least one indicator, and is configured to receive and process data corresponding to the at least one of the object and motion from the at least one sensor. The electronic control unit is also configured to determine whether the data associated with the at least one of the object and motion is a correct activation gesture required to move the closure member. Additionally, the electronic control unit is configured to initiate movement of the closure member in response to the at least one of the object and motion being the correct activation gesture, with the correct activation gesture including the at least one of the object and the motion being adjacent to the at least one sensor and the at least one of the object and the motion being nonadjacent to the at least one sensor after a predetermined period of time. The electronic control unit is also configured to notify the user using the at least one indicator. The correct activation gesture may also include the object having no motion during the predetermined period of time.
According to another aspect of the present disclosure, a method of operating a closure member of a vehicle using a non-contact power closure member system is provided. The method begins by detecting at least one of an object and a motion located adjacent the closure member using at least one sensor. The method continues with the step of determining whether data associated with the at least one of the object and the motion is an activation gesture which is required to initiate opening of the closure member, with the activation gesture including the at least one of the object and the motion being adjacent to the at least one sensor and the at least one of the object and the motion being nonadjacent to the at least one sensor after a predetermined period of time. The method continues by initiating movement of the closure member in response to determining that the data associated with the at least one of the object and the motion is a correct activation gesture. The method also includes the step of notifying the user.
Other advantages of the present disclosure will be readily appreciated, as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings wherein:
In general, several example and non-limiting embodiments of a user-activated, non-contact power closure member system constructed in accordance with the teachings of the present disclosure will now be disclosed. A method of operating a closure member of a vehicle using the non-contact power closure member system constructed in accordance with the teachings of the present disclosure will also be disclosed. The example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth, such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms, and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail. Also, the system could alternatively be used to open and/or close another closure element, such as, but not limited to, a sliding door or a power swing door of the vehicle.
Referring initially to
The non-contact power closure member system 10 includes at least one sensor 20 which senses an object or motion when a key fob 22 associated with the specific vehicle 12 is located within a predetermined distance of the vehicle 12, for example when the key fob 22 is in possession of a user 24 approaching the vehicle 12. Although the key fob 22 is used in the example embodiment, another component associated with the specific vehicle 12 and which can be detected by the vehicle 12 could be used or it may be possible to otherwise initialize the system 10 without using the key fob 22. An example of the object detected by the at least one sensor 20 is a foot of the user 24, and an example of the motion detected by the at least one sensor 20 in a detection zone 62 is a kicking or waving motion or step of the user 24 or a combination thereof. Another example may be a motion detection followed by a non-waving, stationary motion detection, for example representing a step into the detection zone 62, and vice versa. It should be appreciated that other objects and/or motions, and combinations thereof may be alternatively utilized.
The at least one sensor 20 can comprise various different types of non-contact sensors in the non-contact power closure member system 10 constructed in accordance with the present disclosure. For example, the at least one sensor 20 could be an ultrasonic, capacitive, radar sensor, or another type of proximity sensor capable of detecting an object or gesture in the detection zone 62 without requiring physical contact. When the at least one sensor 20 is an ultrasonic sensor, the rear bumper 18 can include a clearance slot 26, as best shown in
As best shown in
An exploded view of the user-activated, non-contact power closure member system 10 with one ultrasonic sensor 20 according to the example embodiment is shown in
An audible warning tone, honk, or beep can also be used, with or without the graphic 30, to alert the user 24. The indicator 28 can also include other features or components to notify the user 24, for example another type of light or lighted area along or near the rear bumper 18, tail lights, reverse lights, signal lights, an object or projection on a glass of the vehicle 12, for example a projected image or light. According to one example embodiment, the indicator 28 has a different color in the ON and OFF state and provides the user 24 with an idea of where to place his or her foot. Additionally, the indicator 28 used to notify the user 24 may be any other area on the vehicle 12 that could be visible to the user 24. In summary, various options are possible for the feature or features used as an indicator 28 to notify the user 24. The key point is that feedback is provided to the user 24 for foot detection.
According to the example embodiment, as the user 24 approaches the vehicle 12, the vehicle 12 senses the key fob 22 and powers on the non-contact power closure member system 10. Once the system 10 wakes up, the at least one sensor 20 and indicator 28 are activated. In the example embodiment, the indicator 28 is a lighted picture on the rear bumper 18, in the example shown an image of an open liftgate representing the vehicle system that will be operated, to notify the user 24 that the system 10 is activated and waiting for the activation gesture from the user 24 to open the rear liftgate 14. The indicator 28 also notifies the user 24 of the correct position to perform the activation gesture, which in this case is the presence of a foot. It should be understood that the activation gesture could also include the foot of the user 24 being placed adjacent to the at least one sensor 20 (i.e., a step-in of the detection zone 62) and the foot of the user 24 being moved nonadjacent to the at least one sensor 20 (i.e., a step out of the detection zone 62) after a predetermined period of time, during which the foot of the user 24 may optionally be held stationary. The user 24 then places his or her foot under the lighted indicator 28. Once the foot is detected, the indicator 28 flashes and optionally an audible tone can be made by the system 10 or another component of the vehicle 12 to indicate the presence of the foot. The user 24 then leaves his or her foot stationary for a required period of time needed to initiate opening of the rear liftgate 14. On the other hand, if the user 24 leaves his or her foot stationary but does not meet the required period of time, i.e.
less than the period of time needed to initiate the opening of the rear liftgate 14, the indicator 28 flashes and optionally an audible tone can be made by the system 10 or another component of the vehicle 12 to indicate that the gesture made by the user does not meet the requirement for opening the rear liftgate 14.
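The timed foot-hold check described above can be pictured as a simple test over timestamped detections. The following Python fragment is an illustrative sketch only; the sampling representation, function name, and default hold period are assumptions, not details from the disclosure.

```python
# Illustrative sketch of the timed foot-hold check: the foot must
# remain continuously detected for the required period before the
# liftgate opening is initiated. Names and values are assumptions.

def hold_long_enough(detections, required_hold_s=4.0):
    """detections: list of (timestamp_s, foot_present) samples in
    time order. Returns True if the foot stayed continuously present
    for the required period (triggering case); False means the hold
    was too short, so the indicator would flash instead."""
    hold_start = None
    for t, present in detections:
        if present:
            if hold_start is None:
                hold_start = t  # foot just entered the zone
            if t - hold_start >= required_hold_s:
                return True
        else:
            hold_start = None  # foot left the zone: reset the timer
    return False
```

A gap in detection resets the timer, mirroring the requirement that the foot remain stationary beneath the sensor for the full period.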
The system 10 also includes an electronic control unit 32 executing software and connected to the at least one sensor 20. According to an aspect, the electronic control unit 32 is separate from and in communication with a power closure member electronic control unit (not shown) and the electronic control unit 32 can initiate the opening of the closure member (e.g., rear liftgate 14) by communicating with the power closure member electronic control unit; however, it should be appreciated that the electronic control unit 32 itself could instead control the rear liftgate 14 or the functions of the electronic control unit 32 could alternatively be carried out by the power closure member electronic control unit. When an object or motion and characteristics (e.g., speed, angle, size, etc.) of the object in the detection zone 62 are detected by the at least one sensor 20, such as the foot, the at least one sensor 20 sends data related to the object or motion (and characteristics) to the electronic control unit 32 (i.e., software). The electronic control unit 32 processes the data from the at least one sensor 20 to determine if the object or motion is the activation gesture required to open the rear liftgate 14, rather than a false signal (e.g., passing debris or a cat or other animal walking past the sensor) or incorrect gesture. If the data indicates the presence of the correct activation gesture, the electronic control unit 32 initiates opening of the rear liftgate 14. In the example embodiment, when the rear liftgate 14 is about to open or opening, the indicator 28, for example the lighted graphic 30 and audible tone, are activated to notify the user 24.
According to the example embodiment, the software first establishes a baseline measurement, which can be a distance between the at least one sensor 20 and the ground 83 beneath the rear liftgate 14 without any obstacles. The system 10 then continues to monitor the sensor data and looks for a change in the baseline measurement that exceeds a given threshold distance. Once the threshold distance has been exceeded, the electronic control unit 32 perceives this as the correct activation gesture, rather than a false signal, and communicates to the power liftgate electronic control unit that an opening or closing request has been given. If the detected data does not meet the threshold set, then the electronic control unit 32 determines a false signal occurred, for example which could occur by an object (e.g., a foot of the user 24) unintentionally moving beneath the rear bumper 18. After the correct activation signal is communicated to the electronic control unit 32, the electronic control unit 32 can then initiate the opening of the rear liftgate 14. According to the example embodiment, the system 10 again flashes the indicator 28 and makes the audible tone to indicate opening of the rear liftgate 14, and the rear liftgate 14 opens.
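The baseline-and-threshold logic described above can be sketched in a few lines of Python. This is an illustrative sketch only; the millimeter units, threshold value, and function names are assumptions rather than values from the disclosure.

```python
# Illustrative sketch of the baseline measurement and threshold
# comparison described above. Units and the threshold value are
# assumptions, not values from the disclosure.

def establish_baseline(samples_mm):
    """Baseline = average sensor-to-ground distance (mm) measured
    with no obstacles beneath the rear liftgate."""
    return sum(samples_mm) / len(samples_mm)

def is_activation(baseline_mm, reading_mm, threshold_mm=150.0):
    """A change from the baseline exceeding the threshold distance is
    treated as a candidate activation gesture; smaller changes are
    dismissed as false signals (e.g., passing debris)."""
    return abs(baseline_mm - reading_mm) > threshold_mm
```

In practice further checks (such as the hold-time requirement) would follow before the opening request is communicated to the power liftgate electronic control unit.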
As best shown in
According to an aspect, an example of the radar based gesture recognition subassembly 25 includes a waveform generator 29 for generating a waveform (e.g., continuous wave waveform) with a frequency as best shown in
The radar based gesture recognition subassembly 25 also includes at least one receive antenna element 35 for receiving the reflections of, or sensing the interactions within, the intermediate radar field (i.e., the emitted radar waves from the at least one transmit antenna element 31). A first receive amplifier 47 is coupled to the at least one receive antenna element 35 for amplifying the reflections of the emitted radar waves and outputting an amplified reflected wave signal. A mixer 49 is coupled to another of the plurality of splitter outputs 45 of the splitter 41 and to the first receive amplifier 47 for mixing the amplified heterodyne signal and the amplified reflected wave signal to generate a mixed receive signal. The radar based gesture recognition subassembly 25 additionally includes a second receive amplifier 50 coupled to the mixer 49 for amplifying the mixed receive signal and outputting an amplified mixed receive signal. A signal processor 27 is coupled to the second receive amplifier 50 for receiving and processing the amplified mixed receive signal (i.e., the received reflected CW radar signal) to determine frequency shifts of the emitted radar waves (e.g., continuous wave 54e) indicative of a speed V of the object or user 24. The signal processor 27 can also be coupled to the electronic control unit 32 or alternatively be integrated in the electronic control unit 32.
The signal processor 27 is disposed in communication with the at least one receive antenna element 35 for processing the received reflections or the reflections of the emitted radar waves (i.e., the signal processor 27 can execute instructions to perform calculations on the received reflection and transmitted radiation signals or mixed signals to implement the various detection techniques including, but not limited to CW Radar, frequency modulated continuous wave Radar, time of flight) within the intermediate radar field to provide motion and/or gesture data for determining the gesture made by the user 24.
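The Doppler relationship underlying the speed determination above can be illustrated with a short Python sketch. This is illustrative only; the 24 GHz carrier in the example is an assumption, not a value from the disclosure.

```python
# Illustrative sketch: converting a measured CW-radar Doppler shift
# to the radial speed V of the target. The carrier frequency used in
# the example below is an assumption, not a value from the disclosure.

C_M_PER_S = 299_792_458.0  # speed of light

def speed_from_doppler(doppler_shift_hz, carrier_hz):
    """Radial speed from the Doppler relation f_d = 2 * v * f_c / c,
    rearranged as v = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C_M_PER_S / (2.0 * carrier_hz)

# Example: a 160 Hz shift at an assumed 24 GHz carrier corresponds
# to roughly 1 m/s of radial motion, on the order of a foot gesture.
v = speed_from_doppler(160.0, 24e9)
```

The sign of the shift would further distinguish motion toward the sensor from motion away from it.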
So, the radar based gesture recognition subassembly 25 shown in
As illustratively shown in
As illustratively shown in
The intermediate radar field or detection zone 62 provided by the at least one transmit antenna 31 can be a three-dimensional volume, e.g. hemispherical shape, cube, cone, or cylinder. Again, the at least one receive antenna element 35 is used to receive reflections from interactions in the intermediate radar field and the signal processor 27 is used to process and analyze the received reflections to provide gesture data usable to determine gestures for opening the rear liftgate 14 or other closure member. To sense gestures through obstructions, the radar based gesture recognition subassembly 25, 25′, 25″ can be configured to emit radar waves capable of substantially penetrating fabric, wood, plastic, and glass, as well as other non-metallic material. The at least one receive antenna element 35 can be configured to receive the reflections from the human tissue through the fabric of a user's clothing, as well as through plastic, ice, rain, snow, dirt, wood, and glass.
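A membership test for such a three-dimensional detection volume can be sketched as follows. The hemispherical shape is one of the shapes named above; the one-meter radius, the sensor-at-origin coordinate convention, and the function name are assumptions for illustration.

```python
import math

# Illustrative sketch of a detection-zone membership test for a
# hemispherical intermediate radar field. Radius and coordinate
# convention (z measured downward from the sensor) are assumptions.

def in_hemispherical_zone(x_m, y_m, z_m, radius_m=1.0):
    """True if a reflection point, in sensor-centered coordinates,
    lies inside a hemispherical detection zone of the given radius
    extending from the sensor toward the ground."""
    if z_m < 0.0:
        return False  # above the sensor: outside the half-space
    return math.sqrt(x_m * x_m + y_m * y_m + z_m * z_m) <= radius_m
```

An analogous test with a different inequality would cover the cube, cone, or cylinder variants.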
So, according to the example embodiment, as the user 24 approaches the vehicle 12, the vehicle 12 senses the key fob 22 and activates the radar based gesture recognition system and the indicator 28. The radar based gesture recognition system has a triggering event mode and a non-triggering event mode. The indicator 28 in accordance with the example embodiment is a light disposed on the rear bumper 18 to notify the user 24 that the system 10 is activated and waiting for the activation gesture from the user 24 to open the closure member (e.g., rear liftgate 14). The indicator 28 also notifies the user 24 of the correct position to perform the activation gesture (e.g., the presence of a foot of the user 24). At the same time, the radar based gesture recognition subassembly 25, 25′, 25″ produces the intermediate radar field adjacent to the indicator and the vehicle 12.
For the example embodiment, the indicator notifies the user 24 by illuminating a red light. To initiate the triggering event mode, the user 24 places his or her foot under the lighted indicator 28. When the user 24 places his or her foot under the lighted indicator 28 (e.g., such a motion may be a natural and intuitive “step-in” involving moving his or her foot into the detection zone 62 in a motion equivalent to a step, which includes an initial entry into the detection zone 62 at a position above the ground 83, followed by a motion towards the ground 83 and towards the vehicle 12, and finally the motion terminating with the foot contacting the ground 83 in the detection zone 62), the at least one receive antenna element 35 of the radar based gesture recognition subassembly 25, 25′, 25″ receives reflections from interactions in the intermediate radar field. Then, the signal processor 27 processes and analyzes the received reflections to provide gesture data usable to determine the gesture. For example, the signal processor 27 can process the received reflection to determine a Doppler shift for calculating the speed/velocity V of the object or user 24, or a frequency shift for calculating the distance and speed of the object or user 24, as well as angle and directional changes which may indicate a vertical change, for example indicating the object or user 24 is moving towards the ground 83. Intensities of the reflected radar signal may also be processed to determine the size of the object or user 24. For the signal processor 27 to process the received reflections to conclude the activation gesture has been made, the user 24 may have to leave his or her foot stationary for a required period of time (e.g., four seconds). Once the user 24 leaves his or her foot stationary for the required period of time and the proper gesture is provided, the indicator 28 notifies the user by flashing an illuminated yellow light.
In this example, the gesture consists of a sequential combination of a motion into the detection zone 62, and a non-movement of the foot in the detection zone 62. Next, the system 10 initiates movement of the closure member (e.g., the opening of the rear liftgate 14). On the other hand, if the user 24 leaves his or her foot stationary but does not meet the required period of time (e.g., less than four seconds) needed to initiate the opening of the rear liftgate 14, the non-triggering event mode is initiated. During the non-triggering event, the indicator 28 quickly flashes the illuminated yellow light to indicate to the user 24 that the gesture made by the user 24 does not meet the requirement for opening the rear liftgate 14.
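The transition between the triggering and non-triggering event modes can be pictured as a small classifier over the step-in-and-hold gesture. The following Python sketch is illustrative only: the four-second value mirrors the example above, while the mode and function names are assumptions.

```python
from enum import Enum, auto

# Illustrative sketch of the triggering / non-triggering event modes
# described above. Mode names and the function are assumptions; the
# four-second default mirrors the example in the text.

class Mode(Enum):
    WAITING = auto()         # indicator lit, awaiting a gesture
    TRIGGERING = auto()      # valid gesture: initiate liftgate opening
    NON_TRIGGERING = auto()  # hold too short: flash the indicator

def classify_gesture(stepped_in, stationary_seconds, required_seconds=4.0):
    """Classify a step-in-and-hold gesture against the required hold
    time to select the resulting event mode."""
    if not stepped_in:
        return Mode.WAITING
    if stationary_seconds >= required_seconds:
        return Mode.TRIGGERING
    return Mode.NON_TRIGGERING
```

In the system described, the selected mode would then drive both the indicator behavior and whether the closure member movement is initiated.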
Thus, as best shown in
The step of 104 generating the intermediate radar field adjacent to the vehicle 12 using the radar based gesture recognition subassembly 25, 25′, 25″ can include the step of 112 changing the frequency of the waveform and outputting a heterodyned signal using an oscillator 33 coupled to the waveform generator 29. The step of 104 generating the intermediate radar field adjacent to the vehicle 12 using the radar based gesture recognition subassembly 25, 25′, 25″ can also include the steps of 114 amplifying the heterodyned signal and outputting an amplified heterodyne signal using a transmit amplifier 37 coupled to the oscillator 33 and 116 splitting the amplified heterodyne signal using a splitter 41 having a splitter input 43 coupled to the transmit amplifier 37 and having a plurality of splitter outputs 45.
The step of 104 generating the intermediate radar field adjacent to the vehicle 12 using the radar based gesture recognition subassembly 25, 25′, 25″ can additionally include the step of 118 emitting emitted radar waves corresponding to the amplified heterodyne signal to provide an intermediate radar field within a predetermined distance D from the radar based gesture recognition subassembly 25, 25′, 25″ using at least one transmit antenna element 31 coupled to one of the plurality of splitter outputs 45. In more detail, the step of 118 emitting emitted radar waves corresponding to the amplified heterodyne signal to provide an intermediate radar field within a predetermined distance D from the radar based gesture recognition subassembly 25, 25′, 25″ using at least one transmit antenna element 31 coupled to one of the plurality of splitter outputs 45 can include the step of 120 emitting emitted radar waves corresponding to the amplified heterodyne signal to provide an intermediate radar field within a predetermined distance D from the radar based gesture recognition subassembly 25, 25′, 25″ using the plurality of transmit antenna elements 31 to 31n coupled to the one of the plurality of splitter outputs 45.
The method can continue by 122 detecting the gesture in the intermediate radar field made by the user 24. Thus, the method can also include the step of 124 receiving reflections of the emitted radar waves in the intermediate radar field using at least one receive antenna element 35. As discussed above, the at least one receive antenna element 35 can include a plurality of receive antenna elements 351, 352, to 35n; thus, the step of 124 receiving reflections of the emitted radar waves in the intermediate radar field using the at least one receive antenna element 35 can include 126 receiving reflections of the emitted radar waves in the intermediate radar field using a plurality of receive antenna elements 351, 352, to 35n. The next step of the method is 128 amplifying the reflections of the emitted radar wave and outputting an amplified reflected wave signal using a first receive amplifier 47 coupled to the at least one receive antenna element 35. The method can proceed with the steps of 130 mixing the amplified heterodyne signal and the amplified reflected wave signal to generate a mixed receive signal using a mixer 49 coupled to another of the plurality of splitter outputs 45 of the splitter 41 and to the first receive amplifier 47 and 132 amplifying the mixed receive signal and outputting an amplified mixed receive signal using a second receive amplifier 50 coupled to the mixer 49. The method can continue with the step of 134 receiving and processing the amplified mixed receive signal to determine frequency shifts of the emitted radar wave indicative of a speed V of the object 24 using a signal processor 27 of the radar based gesture recognition subassembly 25, 25′, 25″ coupled to the electronic control unit 32 and to the second receive amplifier 50. Other motion and gesture information such as distance/range, direction/angle, and size of the object 24 may also be determined through the processing of the amplified mixed receive signal at step 134.
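The frequency-shift determination at step 134 can be pictured as a peak search over the spectrum of the mixed receive signal. The following NumPy sketch, with an assumed sampling rate and a synthetic 160 Hz beat tone, is illustrative only and does not reflect the actual processing implemented by the signal processor 27.

```python
import numpy as np

# Illustrative sketch of extracting the dominant Doppler (beat)
# frequency from a mixed receive signal via an FFT peak search.
# Sampling rate and the synthetic test signal are assumptions.

def dominant_doppler_hz(mixed_signal, sample_rate_hz):
    """Return the frequency (Hz) of the strongest spectral component
    of the mixed receive signal, ignoring the DC bin."""
    spectrum = np.abs(np.fft.rfft(mixed_signal))
    spectrum[0] = 0.0  # suppress the DC component
    freqs = np.fft.rfftfreq(len(mixed_signal), d=1.0 / sample_rate_hz)
    return float(freqs[int(np.argmax(spectrum))])

# Synthetic mixed signal: a 160 Hz beat tone sampled at 1 kHz for 1 s.
fs_hz = 1000.0
t_s = np.arange(0.0, 1.0, 1.0 / fs_hz)
beat = np.cos(2.0 * np.pi * 160.0 * t_s)
```

The recovered beat frequency would then feed a Doppler-to-speed conversion such as the one sketched earlier.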
The method also includes the steps of 136 determining a time frame for the gesture made by the user and 138 comparing the gesture to an activation gesture and the time frame with a period of time required to initiate a triggering-event mode for operating the closure member 14 of the vehicle 12 (typically conducted by software incorporated into the system 10 and executed by the electronic control unit 32).
It should be appreciated that various techniques may be used for detecting the interactions in the intermediate radar field. For the example embodiment, as illustrated in
Alternatively, in accordance with another example embodiment, as illustrated in
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
When an element or layer is referred to as being “on,” “engaged to,” “connected to,” or “coupled to” another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly engaged to,” “directly connected to,” or “directly coupled to” another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” “top,” “bottom,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptions used herein interpreted accordingly.
This utility application is a continuation-in-part of U.S. Ser. No. 15/696,657, filed Sep. 6, 2017 which claims priority to U.S. Provisional Application No. 62/384,930, filed Sep. 8, 2016 and this utility application claims the benefit of U.S. Provisional Application No. 62/460,247 filed Feb. 17, 2017 and U.S. Provisional Application No. 62/610,655 filed Dec. 27, 2017. The entire disclosures of the above applications are incorporated herein by reference.
Provisional applications:

| Number | Date | Country |
| --- | --- | --- |
| 62/384,930 | Sep. 2016 | US |
| 62/460,247 | Feb. 2017 | US |
| 62/610,655 | Dec. 2017 | US |

Related U.S. application data:

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 15/696,657 | Sep. 2017 | US |
| Child | 15/896,426 | | US |