The present invention relates to a vehicle configuration system and an associated method, and more particularly to a vehicle configuration system and an associated method that modifies settings of facilities in a vehicle based on facial motions.
Technological developments have led to the implementation of what is known as a sharing economy in many societies. A vehicle-sharing service or a vehicle rental service allows users to share a vehicle without purchasing it.
Different users, however, may have different preferences regarding vehicle configurations. The problem with sharing a vehicle is that whenever a user is ready to use a shared vehicle, he/she has to modify the settings to fit his/her own preferences and/or body. This may take time, especially when the model of the shared vehicle is new to the user and he/she is not familiar with the configurations.
Exemplary embodiments of systems, methods, and apparatus for configuring facilities in a vehicle based on a facial motion are described herein.
The described techniques can be implemented separately, or in various combinations with each other. As will be described more fully below, the described techniques can be implemented on a variety of hardware devices having or being connected to an image capture module.
In one exemplary embodiment disclosed herein, settings of facilities in a vehicle are configured based on facial motions. The configuration system includes a storage module, an image capture module, an image processing module, and a control unit. The storage module stores a registered user's facial information and a configuration profile. The configuration profile is associated with a default facial motion made by the registered user and the facility's setting. The image capture module captures a facial motion made by a user. The image processing module determines the user's facial information based on the facial motion and further compares the user's facial information with the registered user's facial information. The control unit modifies the facility's setting in conformity with the configuration profile if the user's facial information approximately matches the registered user's facial information.
In another exemplary embodiment, a method for configuring settings of facilities based on facial motions is disclosed herein. The method includes the following steps: storing facial information of a registered user and a configuration profile, wherein the configuration profile is associated with a default facial motion made by the registered user and the facility's setting; capturing a facial motion made by a user; retrieving the user's facial information based on the facial motion; comparing the user's facial information with the registered user's facial information; and modifying the settings of the facilities in conformity with the configuration profile if the user's facial information approximately matches the registered user's facial information.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Additional features and advantages of the disclosed technology will be made apparent from the following detailed description of embodiments that proceeds with reference to the accompanying drawings.
To provide a better understanding of the present invention, preferred embodiments will be detailed as follows. The preferred embodiments of the present invention are illustrated in the accompanying drawings with numbered elements to elaborate on the contents and effects to be achieved. It should be noted that the drawings are simplified schematics, and therefore show only the components and combinations associated with the present invention, in order to provide a clearer description of the basic architecture or method of implementation. The actual components may be more complex. In addition, for ease of explanation, the components shown in the drawings may not represent their actual number, shape, and dimensions; details can be adjusted according to design requirements.
In view of the foregoing, the technology described hereinafter discloses a configuration system and a method that configure the settings of devices installed in a vehicle based on facial motions. Specifically, the system and the method of the present invention associate a facial motion with a configuration profile containing one or more settings of the device(s) installed in a vehicle.
The operation of the invention requires users to register and establish configuration profiles. Specifically, a user pre-enrolls one or more facial motions and associates the facial motion(s) with a configuration of a device, i.e., the configuration profile. The configuration profile contains one or more settings regarding facilities and is stored in a database in a storage of the configuration system.
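As a concrete illustration (the record layout, field names, and sample values below are hypothetical, not part of the claimed system), a configuration profile can be modeled as a record that ties a registered user and a facial motion to one or more facility settings:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationProfile:
    """One or more facility settings associated with a facial motion."""
    user_id: str
    facial_motion: str                               # e.g. "smile", "wink"
    settings: dict = field(default_factory=dict)     # facility name -> setting

# A hypothetical database of profiles, as might be kept in the storage module.
profiles = {
    ("alice", "smile"): ConfigurationProfile(
        "alice", "smile",
        {"seat_position": "memory_1", "temperature_c": 27,
         "audio": "favorite_channel"},
    ),
}

def lookup_profile(user_id, facial_motion):
    """Return the profile registered for this user and motion, if any."""
    return profiles.get((user_id, facial_motion))
```

The same lookup structure also accommodates several registered users sharing one vehicle, since the key combines the user with the motion.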
An example of configuration profiles is depicted in
In the present invention, more than one user may register with the configuration system.
Moreover, on another occasion, a universal facial motion may be linked to various configuration profiles of different users. For instance, when a vehicle ignites, the driver of the vehicle may be asked to smile (i.e., the universal facial motion in this example) to configure the vehicle. The configuration(s) may include an adjustment of the driver's seat and the angles of the mirrors to fit the user's body, a change of theme of the dashboard, playback of the user's favorite audio channel, and an adjustment of the vehicle temperature. In brief, the settings of the devices in the vehicle are associated with the same facial motion, and different users have different profiles regarding the settings of the devices. For example, assuming the universal facial motion is a smile, as shown in
With reference to
Back to
Moreover, the image processing module 130 is provided to perform facial recognition based on the facial images captured by the image capture module 120. The image processing module 130 is electrically connected to or wirelessly connected to the image capture module 120 and the storage module 110. The image processing module 130 may be a processing unit capable of processing images. The image processing module 130 executes machine-readable instructions. In a multi-processor system, multiple processing units execute machine-readable instructions to increase processing power.
As shown in
As mentioned above, a user needs to register his/her identity and configuration profiles with the configuration system 100 beforehand. He/she may enroll one or more facial motions and associate each of the facial motions with a configuration profile for one or more facilities VF, as exemplified in
The facial information may include static facial information and dynamic facial information. The static facial information contains data of the registered user's facial feature(s) in a static state, while the dynamic facial information contains data of the registered user's facial feature(s) in a dynamic state. More specifically, the dynamic facial information may be obtained by recording a source video clip in which the registered user makes a facial motion over a time frame. The source video clip contains a plurality of source image frames of the registered user. The image processing module 130 extracts at least one facial feature of the registered user from the source image frames and calculates a variation of the registered user's facial feature(s) over the time frame. The data of the facial feature(s) and the variation may be collectively stored as the registered user's facial information.
When a user 20 attempts to configure the vehicle 10, the image capture module 120 captures a video clip in which the user 20 makes a facial motion over a time frame. The video clip also contains a plurality of image frames. The image processing module 130 extracts at least one facial feature from the image frames and calculates a variation of the user's facial feature(s) over the time frame. The variation of the facial feature(s) is compared with the facial information stored in the storage module 110. If the deviation between them is within a threshold, the control unit 140 goes on to adjust the configuration of the vehicle facility VF in accordance with the associated configuration profile, also stored in the storage module 110.
There are many ways known to skilled persons to calculate the deviation, for example, without limitation, the dynamic time warping (DTW) technique. As aforementioned, the configuration profile may be to display the registered user's favorite album, to reduce the room temperature of the vehicle 10 through the air conditioner to fit the user's preference, etc.
In one embodiment, the configuration system 100 may perform a two-step facial recognition. Specifically, the configuration system 100 may first verify whether the user 20 in front of the image capture module 120 is a registered user. If so, the configuration system 100 then proceeds to determine what configuration(s) of the vehicle should be made based on the facial motion made by the user 20. It should be noted that, under the scope of the present invention, the first step of facial recognition is based on a still facial image of the user 20, while the second step is based on a dynamic facial motion. In the two-step facial recognition, it is only when the user 20 is verified and makes approximately the same facial motion that the associated configurations will be applied to the associated facilities.
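A minimal sketch of this two-step flow follows; `verify_identity` and `match_motion` are placeholders standing in for the static and dynamic recognition stages, which the disclosure leaves to known techniques:

```python
def two_step_recognition(still_image, video_clip, registered_users,
                         verify_identity, match_motion):
    """Step 1: static recognition on a still image identifies a registered user.
    Step 2: dynamic recognition on the video clip selects a configuration
    profile; both steps must succeed for any configuration to be applied."""
    user = verify_identity(still_image, registered_users)  # static step
    if user is None:
        return None                                        # not a registered user
    return match_motion(video_clip, user)                  # dynamic step
```

With this structure, a look-alike who is not registered fails at step 1, and a registered user who makes the wrong motion triggers no configuration at step 2.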
Apart from directly obtaining the facial feature(s) from the image frames of the captured video clip, the facial feature(s) used to verify the identity of the user 20 may be obtained from other sources. For instance, they may be extracted from a still image separately and independently captured by the image capture module 120. Alternatively, they may be obtained through a binding device, such as the user's own cellphone, where his/her biometric data, including facial feature(s), are stored. When the configuration system 100 operates, the binding device sends the biometric data over any sort of communication protocol to the configuration system 100 to verify the user 20. Further, the facial feature(s) may not be a single feature but multiple facial landmarks (or key points) or a combination thereof. The above is a mere example and should not to any extent become a limitation to the present invention.
When the user 20 attempts to modify the settings of the facilities installed in the vehicle 10 by making a facial motion containing winking, snout-up, and mouth-open over a time frame, the image capture module 120 captures the facial motion as a video clip. The video clip includes a plurality of image frames collectively called a dynamic facial image FDI, as shown in
The configuration system 100 provides a more secure and reliable way to modify the settings of facilities installed in a vehicle. Under the present invention, it will not be easy for anyone who looks like any of the registered users to change the configurations of the vehicle 10.
As discussed, since the user's facial feature(s) can be extracted from the image frames of the video clip, the user's identity can be verified simultaneously without additional authentication processes. Thus, although in the present invention the dynamic facial recognition serves to determine whether the vehicle configurations should be modified, the result can also be used to determine whether the user 20 is any of the registered users. In other words, if the user's facial information matches the registered user's facial information, the user's identity is verified. So long as the variation of the user's facial feature(s) matches the source variation of the registered user's facial feature(s), it is sufficient to verify the user and modify the vehicle configurations accordingly. That is, in one embodiment of the present invention, the static facial recognition may be skipped without negating the operation of the configuration system 100. Yet in another embodiment, the two-step facial recognition may still be conducted, and the overall recognition rate is decided by combining the results of the static and the dynamic facial recognition with different weightings given to them.
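The weighted combination might look as follows; the particular weights are assumptions for illustration, since the disclosure does not fix them:

```python
def combined_recognition_score(static_score, dynamic_score,
                               static_weight=0.4, dynamic_weight=0.6):
    """Combine the static and dynamic recognition results (each normalized
    to [0, 1]) into an overall recognition rate using different weightings."""
    total = static_weight + dynamic_weight
    return (static_weight * static_score + dynamic_weight * dynamic_score) / total
```

Setting `static_weight` to zero recovers the embodiment in which static recognition is skipped and the dynamic result alone decides the outcome.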
The recognition method adopted in the configuration system 100 may be any image processing technique and/or algorithm known to persons skilled in the field, so long as it is able to fulfill the purpose of the present invention. Additionally, more than one technique may be applied to the configuration system 100. The techniques may include, for instance and without limitation, machine learning, computer vision, image processing, and/or video processing. In one embodiment, one or more facial features may be extracted over a time frame. The variation of the facial feature(s) is observed over the time frame and is compared against the facial information stored in the storage module 110 to see whether the deviation is within a threshold. The facial information may include data of one or more of the registered user's facial features that are able to distinguish the registered user from others. The facial feature may be a feature descriptor, facial landmarks (such as nose, eyes, lips, eyebrows, chin, cheeks, etc.), or any combination of the above. The feature descriptor may include, without limitation, edges, histogram of oriented gradients (HOG), local binary patterns (LBP), or key points.
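The variation of landmarks over a time frame can be sketched as the frame-to-frame displacement of each tracked point; the landmark name and coordinates below are illustrative:

```python
def landmark_variation(frames):
    """Given per-frame landmark positions, e.g. [{"nose_tip": (x, y), ...}, ...],
    return, for each landmark, its displacement vectors between consecutive
    frames -- the variation to compare against stored facial information."""
    variation = {}
    for name in frames[0]:
        track = [frame[name] for frame in frames]
        variation[name] = [
            (x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(track, track[1:])
        ]
    return variation
```

In practice the landmark positions themselves would come from a facial landmark detector; only the differencing step is shown here.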
As mentioned above, in one embodiment, the facial landmarks P may be obtained from one of the image frames of FDI. Alternatively, they may be obtained from a still image separately and independently captured by the image capture module 120. Once the facial landmarks P are identified, the configuration system 100 calculates the vector variation of the user's facial landmarks P over the time frame. The variation is compared with the facial information of the registered users to determine whether the settings of the chosen facilities should be modified. Although only two facial landmarks P, nose tip and lip, are selected in the
There are various ways to determine whether the variation of the user's facial features is identical to that of a registered user's. A technique called dynamic time warping (DTW) may be applied to determine the deviation. Taking a smile for example, one may tag 20 key points surrounding a registered user's mouth and record the positional changes of the key points over the registered user's smile. The positional changes altogether compose a smile track function, SFtn. When the user 20 attempts to modify the configurations of the vehicle 10 by smiling, the configuration system 100 spots approximately the same key points on the user 20 and records the positional changes over the user's smile. The user's smile track function is represented as Ftn. The configuration system 100 applies the DTW technique to calculate a deviation between SFtn and Ftn to obtain a DTW score. If the DTW score is within a predefined threshold, it means the user 20 passes the facial recognition, and the corresponding configuration profile is applied to change the settings of the facilities.
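A textbook DTW implementation over one-dimensional track functions is sketched below; the disclosure names the technique but not an implementation, so the distance measure and threshold here are illustrative:

```python
def dtw_score(sfn, ftn, dist=lambda a, b: abs(a - b)):
    """Classic dynamic time warping between a registered track function SFtn
    and a candidate track function Ftn; lower scores mean closer motions."""
    n, m = len(sfn), len(ftn)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(sfn[i - 1], ftn[j - 1])
            # Each cell extends the cheapest of the three admissible warps.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def passes_recognition(sfn, ftn, threshold):
    """True if the candidate motion is within the predefined DTW threshold."""
    return dtw_score(sfn, ftn) <= threshold
```

Because DTW warps the time axis, a smile performed slightly faster or slower than the registered one can still score below the threshold.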
The above example considers positional changes as the basis to obtain the track functions. Nevertheless, persons skilled in the art should understand that other alternatives such as angle changes, distances, etc. can also be used to achieve the same purpose. Additionally, different weightings may be given to different key points. If the confidence level of a particular key point rises, the weighting given to the particular key point is increased as well.
In another embodiment, a machine learning algorithm may embed one or more facial features of the user into a vector in a multi-dimensional space. The vector is compared with the registered users' facial information in the same way as previously disclosed. The machine learning algorithm may include, without limitation, a neural network, principal component analysis (PCA), an autoencoder, etc.
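With embeddings, the comparison can reduce to a vector distance; the cosine distance and the 0.3 threshold below are assumptions for the sketch, not values from the disclosure:

```python
import math

def embedding_distance(u, v):
    """Cosine distance between two facial-feature embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def embeddings_match(u, v, threshold=0.3):
    """Treat two embeddings as the same user if their distance is small."""
    return embedding_distance(u, v) <= threshold
```

The embedding itself would be produced by whichever machine learning model is chosen; only the comparison step is shown.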
Back to
The configuration system 100 of the present invention may further include a number of sensors disposed on the vehicle 10 (not shown in the diagram) to sense environmental information, such as ambient light, temperature (outside and/or inside the vehicle 10), humidity (outside and/or inside the vehicle 10), barometric pressure, etc. The sensors are electrically or wirelessly connected to the control unit 140 so that the control unit 140 may modify the settings of the vehicle facility VF not only in accordance with the associated configuration profile but also with the relevant environmental information. For instance, if the configuration profile includes an adjustment to the room temperature, then the outside temperature may also be considered in the adjustment. Moreover, the configuration system 100 of the present invention may further receive information, such as time, date, seasonal or festival data, calendars, schedules, etc., from a remote cloud or the user's mobile device. This information may also be considered by the configuration system 100 when any modification is made to the facilities in accordance with a configuration profile. For instance, if a modification will be made to the theme of a dashboard and it is understood, from the information received, that Halloween is coming, not only the user's set configuration profile but also elements of Halloween (e.g., pumpkins) will be considered when the control unit 140 modifies the theme of the dashboard.
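As a toy illustration of folding sensed environmental information into a configuration (the blending rule below is invented for the sketch; the disclosure only says the outside temperature may also be considered):

```python
def adjusted_target_temperature(profile_temp_c, outside_temp_c, blend=0.1):
    """Nudge the profile's preferred cabin temperature slightly toward the
    sensed outside temperature, e.g. to soften the contrast when entering.
    The blend factor is a hypothetical tuning parameter."""
    return profile_temp_c + blend * (outside_temp_c - profile_temp_c)
```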
Step A510: requesting a user to register.
Step A520: recording a default facial motion made by the registered user; the default facial motion is associated with a configuration profile.
Step A530: retrieving at least one facial feature(s) of the registered user.
Step A540: determining a variation of the facial feature(s) over the default facial motion.
Step A550: storing the registered user's facial information containing the facial feature(s) and the variation, and storing the associated configuration profile.
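Steps A530 through A550 might be sketched as follows, with `extract_features` standing in for whichever feature-extraction technique is used; the per-frame feature vectors here are simple number lists for illustration:

```python
def register(user_id, source_video_frames, configuration_profile,
             extract_features, database):
    """Extract features from the default facial motion, compute their
    variation over the time frame, and store both along with the
    associated configuration profile."""
    features = [extract_features(f) for f in source_video_frames]   # Step A530
    variation = [
        [b - a for a, b in zip(f1, f2)]                             # Step A540
        for f1, f2 in zip(features, features[1:])
    ]
    database[user_id] = {                                           # Step A550
        "features": features,
        "variation": variation,
        "profile": configuration_profile,
    }
    return database[user_id]
```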
As disclosed previously, the user's facial feature(s) may be obtained from one of the facial image frames contained in the source video clip; alternatively, they may be obtained from a separately captured facial image. Moreover, the facial motion may be a combination of a series of facial expressions. Additionally, the configuration profile relates to one or more settings for one or more chosen facilities installed in a vehicle.
When a user attempts to modify the configurations of the vehicle 10 through the operation of the present invention, he/she may appear in front of the image capture module 120 and make a facial motion. A configuration method in accordance with the present invention is depicted in
Step B510: capturing the facial motion made by the user.
Step B520: retrieving the user's facial information based on the facial motion.
Step B530: comparing the user's facial information with the registered user's facial information stored in the database.
Step B540: modifying the settings of the facilities conforming with the configuration profile if the user's facial information approximately matches the registered user's facial information.
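Steps B510 through B540 can be sketched in the same style as the registration steps; `deviation` stands in for the DTW (or other) comparison, and the stub threshold in the example is illustrative:

```python
def configure(video_frames, database, extract_features, deviation, threshold,
              apply_settings):
    """Derive the user's facial information from the captured motion, compare
    it with each registered user's, and apply the matching configuration
    profile if the deviation is within the threshold."""
    features = [extract_features(f) for f in video_frames]          # Step B520
    variation = [
        [b - a for a, b in zip(f1, f2)]
        for f1, f2 in zip(features, features[1:])
    ]
    for user_id, record in database.items():                        # Step B530
        if deviation(variation, record["variation"]) <= threshold:
            apply_settings(record["profile"])                       # Step B540
            return user_id
    return None
```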
As has been discussed, the method in
Embodiments of the invention may include various operations as set forth above, or fewer operations, or more operations, or operations in an order that is different from the order described. The operations may be embodied in machine-executable instructions that cause a general-purpose or special-purpose processor to perform certain operations. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions. Such a computer program may be stored or transmitted in a machine-readable medium. A machine-readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). For example, a machine-readable medium includes recordable/non-recordable storage media (e.g., any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, flash memory, magnetic or optical cards, or any type of media suitable for storing electronic instructions), or a machine-readable transmission medium such as, but not limited to, any type of electrical, optical, acoustical, or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.).
It may occur that different vehicles have different ways to configure the same facility. For instance, an air conditioner in a first vehicle may be adjustable to a precise temperature, while in a second vehicle the temperature is set to three rough levels: low, mid, and high. To make a configuration profile applicable to either of the two vehicles, there should be some sort of translation among the parameters. Suppose, for example, that a user A prefers the room temperature to remain at 27° C. in the summer season, and the configuration is associated with a facial motion of smiling. When user A steps into the first vehicle and smiles, because the air conditioner in the first vehicle is capable of adjusting the temperature precisely, through the operation of the disclosed techniques the temperature can be set to 27° C. accurately. On the other hand, since there are only three temperature levels provided by the air conditioner in the second vehicle, a translation is made to the configuration parameter. In this case, the present invention may interpret 27° C. as falling within the mid-temperature level. Thus, when user A sits in the second vehicle and smiles, the room temperature is set to mid-temperature.
The above is a mere example demonstrating how the translation module 210 works. It should be noted that the more facilities are involved, the more complicated a translation can be. Thus, preferably, the translation module 210 may maintain a translation table to map configuration parameters; alternatively, there may be a set of rules for interpreting the parameters among various vendors.
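A minimal sketch of such a translation rule for the temperature example follows; the capability labels and cut-off values are assumptions chosen so that 27° C. falls in the mid level:

```python
def translate_temperature(target_c, capabilities):
    """Map an exact temperature preference onto whatever granularity the
    target vehicle's air conditioner supports."""
    if capabilities.get("precision") == "exact":
        return target_c                  # first vehicle: set 27 directly
    # Second vehicle offers only low / mid / high; cut-offs are illustrative.
    if target_c < 22:
        return "low"
    if target_c <= 28:
        return "mid"
    return "high"
```

A fuller translation module would keep one such mapping per facility and vendor, whether as a table or as a rule set.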
Alternatively, as shown in
The disclosed methods, apparatus, and systems should not be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed embodiments, alone and in various combinations and with one another. The disclosed methods, apparatus, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved. The techniques and solutions described in this application can be used in various combinations to provide an improved user experience with vehicles.
In view of the many possible embodiments to which the principles of the disclosed technology may be applied, it should be recognized that the illustrated embodiments are only preferred examples and should not be taken as limiting the scope of the disclosure. Rather, the scope of the disclosure is defined by the following claims and their equivalents. We therefore claim all that comes within the scope of these claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
201711099567.9 | Nov 2017 | CN | national |
201721487383.5 | Nov 2017 | CN | national |
201711240893.7 | Nov 2017 | CN | national |
201721643439.1 | Nov 2017 | CN | national |
201810237040.6 | Mar 2018 | CN | national |
201820387687.2 | Mar 2018 | CN | national |
This patent application is a continuation-in-part (CIP) of U.S. patent application Ser. No. 15/935,053 filed on 25 Mar. 2018. The present application is based on and claims priority to U.S. patent application Ser. No. 15/935,053 filed on 25 Mar. 2018, which is incorporated by reference herein.
Number | Date | Country | |
---|---|---|---|
Parent | 15935053 | Mar 2018 | US |
Child | 16183744 | US |