This application is a U.S. National Stage Application of International Application No. PCT/EP2016/066292 filed Jul. 8, 2016, which designates the United States of America, and claims priority to DE Application No. 10 2015 214 685.5 filed Jul. 31, 2015, the contents of which applications are hereby incorporated by reference in their entirety.
The invention relates to a method and a system for displaying driving modes of a vehicle, wherein the vehicle can be operated in at least two different driving modes.
Due to the development of automatic driving in a vehicle, the driver is confronted with different levels of automation in the vehicle. The levels of automation in this case comprise manual driving, in which the driver undertakes the entire vehicle guidance himself; partially automatic driving, in which the vehicle undertakes the vehicle guidance whilst the driver is not released from monitoring it; and highly automatic driving, in which the driver passes all driving responsibility to the vehicle. The highly automatic driving mode in this case also comprises a fully automatic driving mode. The level of automation which is available may depend on the route to be traveled. If a plurality of levels of automation are possible, the driver himself is able to select which level of automation he uses for guiding the vehicle.
DE 10 2013 208 206 A1 discloses a device for indicating upcoming automatically performed steering interventions in a vehicle. In this case, a plurality of lighting elements which are positioned adjacent to one another in the steering wheel are activated according to information about upcoming steering interventions.
Moreover, DE 10 2012 002 306 A1 discloses a driver assistance apparatus which is designed to guide a motor vehicle automatically during a journey. To this end, it is possible to switch between a plurality of assistance modes. The assistance modes in this case comprise the spectrum from manual driving via partial automation and high automation to full autonomy of the vehicle. In this case a different control element is used for each assistance mode.
A drawback here, however, is that the driver is not able to identify clearly in which mode he has to undertake which tasks himself.
It is, therefore, an object of the present invention to provide a method and a system in which the driver is able to identify intuitively and in a simple manner the tasks which he has to undertake.
According to the invention, this object is achieved by a method having the features of the independent method claim and by a system having the features of the independent system claim. Embodiments and developments are disclosed in the dependent claims and in the following specification.
The invention will now be explained in detail on the basis of exemplary embodiments with reference to the drawings.
In a method according to a first aspect of the invention, it is ascertained whether the vehicle is being operated in a first or a second driving mode, wherein in the first driving mode a first actuator carries out a first action and in the second driving mode a second actuator carries out the first action. In this aspect, a first graphical element, which represents the first actuator, and a second graphical element, which represents the second actuator, are generated on a display surface. Depending on whether the vehicle is in the first or the second driving mode, a first graphical action element, which represents the first action, is generated with the first and/or second graphical element.
According to this aspect, depending on whether an action element is actually displayed with one of the two graphical elements, it is indicated in a simple manner which actuator is currently carrying out the action.
Within the context of the present explanation, “with the first and/or second graphical element” is understood to mean that the action element may be clearly assigned visually to one of the two graphical elements. If the first and the second graphical elements, for example, are displayed below one another, the action element is displayed adjacent to the graphical element which represents the actuator which carries out the action. If the first and the second graphical elements, for example, are displayed adjacent to one another, the action element is displayed below the graphical element which represents the actuator which is carrying out the action.
According to the present aspect, a high degree of flexibility is provided for displaying the different driving modes. As different actuators are displayed, an action which has been carried out may be assigned in a flexible manner to an actuator and displayed therewith. As a result, the driver is able to detect intuitively which tasks he is responsible for in the current driving mode and which tasks, for example, the vehicle undertakes itself.
In one embodiment of the method according to the present aspect, at least one second action is carried out by the first or the second actuator, wherein, depending on the ascertainment as to whether the vehicle is in the first or the second driving mode, a second graphical action element which represents the second action is generated with the first and/or second graphical element. This embodiment permits the display to remain intuitive and simple even in the case of more than one possible action which may be carried out by each actuator. A visualization of all potential configurations between the actuators and the actions is possible. Therefore, it is no longer necessary to use a separate symbol for each individual configuration, the meaning of which might not be clear to the driver. Instead, a plurality of possible configurations is covered by a limited number of graphical elements, each of which the driver is able to identify in a simple manner.
Additionally, the first actuator in a corresponding embodiment may be a system for the automatic control of the vehicle and/or the second actuator may be the driver of the vehicle. In the first driving mode, the driver thus drives the vehicle manually himself whilst in a highly automatic driving mode, the vehicle is guided and monitored by a system of the vehicle. The system for automatic control in this case may comprise, e.g., all driver assistance systems which are related to automated driving. These driver assistance systems, for example, may comprise a lane keeping system, an automatic distance regulator, a speed regulator, etc. Hereinafter, when the vehicle is denoted as an actuator it is understood that the system for automatic control carries out the actions.
In one embodiment, the first action comprises a monitoring of the vehicle and/or the second action comprises a vehicle guidance of the vehicle. These potential actions form general subgroups of actions which are to be carried out when guiding a vehicle. The vehicle guidance may be subdivided into further sub-actions, such as for example steering, braking, and accelerating. The monitoring in turn may be subdivided into further sub-actions, such as for example traffic monitoring and monitoring of vehicle guidance, in particular monitoring of the traveled speed.
In corresponding embodiments, in the first driving mode, the second actuator carries out the first and the second action. In the second driving mode, the first actuator carries out the first and the second action. In a third driving mode, the first actuator carries out the second action and the second actuator carries out the first action. The driving modes in this embodiment are thus characterized by the different configurations of the actuators with the actions.
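The characterization of the three driving modes by the assignment of actions to actuators could be sketched, purely as an illustrative model (the enum names and mode keys are assumptions of this sketch, not terms used by the application itself):

```python
from enum import Enum

class Actuator(Enum):
    VEHICLE = "vehicle"  # first actuator: the system for automatic control
    DRIVER = "driver"    # second actuator: the driver of the vehicle

class Action(Enum):
    MONITORING = "monitoring"      # first action
    GUIDANCE = "vehicle guidance"  # second action

# Each driving mode is characterized by which actuator carries out which action.
RESPONSIBILITIES = {
    "manual": {                      # first driving mode
        Action.MONITORING: Actuator.DRIVER,
        Action.GUIDANCE: Actuator.DRIVER,
    },
    "highly_automatic": {            # second driving mode
        Action.MONITORING: Actuator.VEHICLE,
        Action.GUIDANCE: Actuator.VEHICLE,
    },
    "partially_automatic": {         # third driving mode
        Action.MONITORING: Actuator.DRIVER,
        Action.GUIDANCE: Actuator.VEHICLE,
    },
}
```

Such a table makes the point of the embodiment concrete: a mode is nothing more than one configuration of this mapping, so no mode needs its own dedicated symbol.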
In corresponding embodiments, in the first driving mode, the vehicle is guided manually, in the second driving mode, the vehicle is guided highly automatically, and/or in the third driving mode, the vehicle is guided partially automatically. The different driving modes, therefore, denote the different levels of automation, which are possible in a vehicle. By the display of the different action elements with the graphical elements, these action elements may be displayed in a simple manner.
In a further embodiment, it is ascertained in which driving mode the vehicle is operated. If it is ascertained that the vehicle is in the first driving mode, the first and the second graphical action elements are generated with the second graphical element. If it is ascertained that the vehicle is in the second driving mode, the first and the second graphical action elements are generated with the first graphical element. If it is ascertained that the vehicle is in the third driving mode, the first action element is generated with the second graphical element and the second action element is generated with the first graphical element. As a result, the concept for the display of the levels of automation may be embodied in a highly flexible manner.
In a further embodiment, it is ascertained which driving mode the vehicle is in. Irrespective of which driving mode the vehicle is in, the first and the second graphical action elements are generated with the first and with the second graphical element. If it is ascertained that the vehicle is in the first driving mode, the first and the second action elements are displayed highlighted with the second graphical element. If it is ascertained that the vehicle is in the second driving mode, the first and the second action elements are displayed highlighted with the first graphical element. If it is ascertained that the vehicle is in the third driving mode, the first action element is displayed highlighted with the second graphical element and the second action element is displayed highlighted with the first graphical element. In this embodiment, it is displayed to the driver that both actuators are configured to carry out both actions. Here, the action elements may be displayed highlighted by being displayed enlarged in comparison with the non-highlighted action elements. Alternatively, the non-highlighted action elements may be displayed grayed-out so that the other action elements appear highlighted. Additionally, a color coding of the action elements is also conceivable in corresponding embodiments, in which the highlighted action elements are displayed in a specific color. Moreover, the action elements may also be colored according to which actuator is assigned to said action elements.
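The highlighting logic of this embodiment could be sketched as follows, as an illustrative assumption (the string names for modes, elements and actions are inventions of this sketch):

```python
def highlighted_elements(mode):
    """For each graphical element ("vehicle" = first, "driver" = second),
    return the set of action elements displayed highlighted with it.
    All action elements are always generated; only the highlighting varies."""
    table = {
        # first driving mode: driver carries out both actions
        "manual": {"vehicle": set(), "driver": {"monitoring", "guidance"}},
        # second driving mode: vehicle carries out both actions
        "highly_automatic": {"vehicle": {"monitoring", "guidance"}, "driver": set()},
        # third driving mode: vehicle guides, driver monitors
        "partially_automatic": {"vehicle": {"guidance"}, "driver": {"monitoring"}},
    }
    return table[mode]
```

The non-highlighted action elements would then be rendered grayed-out, smaller, or in a neutral color, as described above.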
The action elements assigned to the first actuator may in corresponding embodiments be colored, for example, yellow, orange or red, and the action elements assigned to the second actuator may be colored, for example, green, blue or turquoise.
In another embodiment, if it is ascertained that the vehicle is in the first driving mode, the second graphical element is displayed highlighted. If, however, it is ascertained that the vehicle is in the second driving mode, the first graphical element is displayed highlighted. If it is ascertained that the vehicle is in the third driving mode, the first and the second graphical elements are displayed highlighted. As a result, it may be displayed to the driver in a simple manner which actuator is currently active. In this case, the highlighting of the first or, respectively, the second graphical element may be carried out in the same manner as the highlighting of the action elements.
In a further embodiment of the method according to the current aspect, a time is ascertained after which the vehicle changes from the ascertained driving mode to a different driving mode. In this embodiment, this time interval may also be transmitted from an apparatus of the vehicle. During the changeover, the first and/or the second action may be transferred from the first or second actuator to the respective other actuator. A direction-indicating graphical element is generated which points from the first and/or second action element with the first and/or second graphical element to the respective other first and/or second action element with the second or, respectively, the first graphical element. The ascertained time is then displayed with the direction-indicating element. This embodiment describes a highly automatic driving mode in which the driver generally does not have to monitor the system permanently. The driver has sufficient time in reserve in order potentially to take over the vehicle guidance. In contrast thereto, the driver in the second driving mode does not have to undertake either of the two actions at a defined time. For example, it may be necessary to take over the vehicle guidance on specific routes or sections of route. Thus, for example, automatic lane keeping is then no longer possible for the system when road markings are absent. The driver then has to undertake the vehicle guidance himself.
Moreover, in the third driving mode, an indicator may be displayed which shows which action has to be carried out by the second actuator. As a result, it is made clear to the driver immediately which of the two actions he has to carry out himself.
Additionally and in a corresponding embodiment, a warning may be emitted before an imminent change from the second or third driving mode into the first driving mode, which indicates which action has to be undertaken by the second actuator. The warning is displayed, e.g., for a specific time interval before the changeover. As a result, the driver may readily be prepared to undertake the vehicle guidance and/or the monitoring again himself. This may make it possible to prevent the driver from being surprised by such a changeover.
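A minimal sketch of such a warning trigger, assuming an illustrative fixed lead time (the 30-second value and the function name are assumptions of this sketch, not specified by the application):

```python
def warning_due(remaining_s, lead_time_s=30.0):
    """True if the takeover warning should currently be emitted, i.e. the
    changeover lies within the configured lead time before it occurs."""
    return 0.0 <= remaining_s <= lead_time_s
```

In practice the lead time would be chosen so that the driver can comfortably resume the vehicle guidance and/or monitoring.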
A further aspect relates to a system for displaying driving modes of a vehicle, wherein the vehicle is able to be operated in at least two different driving modes. The system comprises an ascertainment unit by means of which it can be ascertained whether the vehicle is operated in the first or the second driving mode, wherein in the first driving mode a first action is able to be carried out by means of a first actuator and in the second driving mode the first action is able to be carried out by means of a second actuator. Additionally and in a corresponding embodiment, the system may comprise a display surface on which a first graphical element, which represents the first actuator, may be generated and a second graphical element, which represents the second actuator, may be generated. Moreover, the system comprises a control unit, which is configured to generate a first graphical action element, which represents the first action, with the first and/or second graphical element on the display surface, depending on the ascertainment as to whether the vehicle is in the first or the second driving mode. The system is designed, in particular, to carry out the method according to the first aspect. The system, therefore, has all of the advantages of the method according to the first aspect.
The display surface in one embodiment may be arranged, e.g., in or on the windshield of the vehicle. The display surface may for example be a so-called head-up display. The driver then does not have to divert his view from the road in order to look at the display on the display surface. Alternatively, however, the display surface may also be arranged in the central console of the vehicle or in the combination instrument of the vehicle.
The invention further relates to a vehicle having such a system.
The invention is now explained by means of further exemplary embodiments with reference to the drawings.
An exemplary embodiment of a system 1 and an arrangement of the system 1 in a vehicle 6 is explained hereinafter with reference to
The system 1 according to the present embodiment comprises a display device 2 with a display surface 3. The display device 2 is coupled to a control unit 4 which in turn is coupled to an ascertainment unit 5.
By means of the ascertainment unit 5, it is possible to determine in which driving mode the vehicle 6 is operated.
The driving modes in which the vehicle 6 may be operated relate in this case to a level of automation which the vehicle 6 may adopt. The level of automation is characterized by which actions are carried out by which actuator. The actuators in this example may be, in particular, the driver of the vehicle 6 or a system of the vehicle 6 for automated driving. Such a system comprises, e.g., all driver assistance systems which are used in partially automated driving or highly automated driving. Such driver assistance systems are, for example, a lane keeping system, an automatic distance regulator, a speed regulator, etc.
The actions which are carried out in automated driving are vehicle guidance or monitoring. The monitoring in turn comprises traffic monitoring and the monitoring of the vehicle guidance itself.
In the present example, the vehicle 6 may be operated in three different driving modes. The first driving mode describes manual driving in which a driver guides and monitors the vehicle 6 himself. The second driving mode describes highly automatic driving in which the vehicle 6 itself undertakes the guidance and monitoring. An intervention of the driver in the highly automatic driving mode is not necessary at any time. The third driving mode describes a partially automatic driving mode in which the vehicle 6 undertakes the vehicle guidance and the task of monitoring is the responsibility of the driver. In the partially automatic driving mode, an intervention of the driver in the respective actions of vehicle guidance and monitoring is possible or even necessary. If the ascertainment unit 5 has ascertained in which driving mode the vehicle 6 is operated, it may therefore be established which actuator carries out which action.
This information is forwarded to the control unit 4. The control unit 4 controls the display surface 3 such that a graphical element is generated for each actuator. Additionally, depending on which actuator carries out an action, a graphical action element is generated with the corresponding actuator.
The display surface 3 is arranged in the windshield of the vehicle 6. The display surface 3 is, therefore, part of a so-called head-up display. Alternatively, the display surface 3 may also be arranged in the central console of the vehicle 6 or in the combination instrument of the vehicle 6.
With reference to
With reference to
With reference to
The starting point here is that the driver starts a journey.
After the driver has started the engine, it is ascertained in which driving mode the vehicle 6 is operated. This may be preset or manually set by the driver at the start of the journey.
Additionally, the first graphical element 7 which displays the vehicle 6 as the actuator and the second graphical element 8 which displays the driver as the actuator are shown below one another on the display surface 3. The first graphical element 7 in this case is arranged above the second graphical element 8. Alternatively, the second graphical element 8 may also be arranged above the first graphical element 7.
If it is ascertained that the vehicle 6 is operated in the first driving mode, i.e., the manual driving mode, a display is generated on the display surface 3, as shown in
If, however, it is ascertained that the vehicle 6 is operated in the second driving mode, i.e., the highly automatic driving mode, a display is generated on the display surface 3 as shown in
The first action element 9, which represents the “monitoring” action, and the second action element 10, which represents the “vehicle guidance” action, are, therefore, displayed with the first graphical element 7. In the present case, “with” is understood once again as adjacent to the first graphical element 7. As a result, the driver is able to detect immediately by looking at the display surface 3 that he does not have to carry out either of the two actions himself and is able to devote himself to other activities.
With reference to
The initial situation in this case is once again that the driver starts a journey.
First, the first graphical element 7 and the second graphical element 8 are generated below one another on the display surface 3. In each case, the graphical action elements 9 and 10 are displayed adjacent to the first 7 and the second graphical element 8.
In this case the graphical elements 7 and 8 and the action elements 9.1 and 10.1 and 9.2 and 10.2 are displayed as color-coded. The graphical element 7 and the action elements 9.1 and 10.1 assigned to the graphical element 7, are, for example, displayed in orange and the graphical element 8 and the action elements 9.2 and 10.2 assigned to the graphical element 8 are displayed in turquoise.
It is ascertained in which driving mode the vehicle 6 is operated. This may be preset once again or set manually by the driver at the start of a journey.
If it is ascertained that the vehicle 6 is operated in the first driving mode, i.e., the manual driving mode, a display as shown in
Here, the second graphical element 8 is displayed highlighted in comparison to the first graphical element 7, namely by being displayed larger. Additionally, the first graphical element 7 is displayed grayed-out, i.e., no longer orange but gray. This indicates to the driver that the actuator which is assigned to the second graphical element 8, i.e., the driver in the present driving mode, has to undertake an action and thus is the active actuator. The other actuator, however, is inactive.
Moreover, both graphical action elements 9.2 and 10.2, which are displayed adjacent to the second graphical element 8, are displayed highlighted in comparison to the graphical action elements 9.1 and 10.1, which are displayed adjacent to the first graphical element 7. The two graphical action elements 9.2 and 10.2 are displayed highlighted by the graphical action elements 9.1 and 10.1 being displayed grayed-out, i.e., no longer orange but gray. As a result, it is communicated to the driver which actions are the responsibility of the active actuator, i.e., in the present case the driver himself.
Additionally or alternatively, the graphical action elements 9.2 and 10.2 may also be displayed larger than the action elements 9.1 and 10.1.
Moreover, an indicator 11 is displayed adjacent to the action elements 9.2 and 10.2 which indicates to the driver that the vehicle is driven manually.
Alternatively, the first graphical element 7 and the action elements 9.1 and 10.1 assigned thereto may also be masked out.
If it is ascertained that the vehicle 6 is operated in the second driving mode, i.e., the highly automatic driving mode, a display is displayed as shown in
In this case, the first graphical element 7 is displayed highlighted in comparison to the second graphical element 8. The first graphical element 7 is displayed highlighted by being displayed larger in comparison to the second graphical element 8. Additionally, the second graphical element 8 is displayed grayed-out relative to the first graphical element 7, i.e., no longer turquoise but gray. This indicates to the driver that the actuator which is assigned to the first graphical element 7, i.e., the vehicle 6, in the present driving mode has to carry out an action and thus is the active actuator. The other actuator, however, is inactive.
Moreover, the two graphical action elements 9.1 and 10.1, which are displayed adjacent to the first graphical element 7, are displayed highlighted in comparison to the graphical action elements 9.2 and 10.2, which are displayed adjacent to the second graphical element 8. The two graphical action elements 9.1 and 10.1 are displayed highlighted by the graphical action elements 9.2 and 10.2 being shown grayed-out, i.e., no longer turquoise but gray. As a result, it is shown to the driver which actions are the responsibility of the active actuator, i.e., in the present case the vehicle 6.
Since the driver in the case of the second driving mode does not have to carry out any action himself, no indicator is emitted.
Alternatively, the second graphical element 8 and the action elements 9.2 and 10.2 assigned thereto may be also masked out.
If it is ascertained that the vehicle 6 is operated in the third driving mode, i.e., one of the partially automatic driving modes, a display as shown in
Since in this case each of the two actuators is responsible for one of the two actions, neither of the two graphical elements 7 or 8 is displayed highlighted. As a result, it is indicated to the driver that both actuators carry out an action.
In the third driving mode, the driver undertakes the “monitoring” task and the vehicle 6 undertakes the “vehicle guidance” task.
This is displayed to the driver by the first action element 9.2 adjacent to the second graphical element 8 being displayed highlighted in comparison with the first action element 9.1 adjacent to the first graphical element 7. At the same time, the second action element 10.1 adjacent to the first graphical element 7 is displayed highlighted in comparison with the second action element 10.2 adjacent to the second graphical element 8.
The action elements 9.2 and 10.1 are in turn displayed highlighted by the action elements 9.1 and 10.2 being displayed grayed-out, i.e., no longer orange or, respectively, turquoise but gray. Alternatively or at the same time, they may also be displayed larger.
Since the driver now has to carry out the monitoring himself, an indicator 11 “please monitor” is displayed adjacent to the action elements 9.2 and 10.2. As a result, it is immediately clear to the driver which task he is responsible for.
Alternatively, the grayed-out action elements 9.1 and 10.2, i.e., the action elements which are displayed as inactive, may also be masked out.
The remaining display elements displayed in
In addition, the current speed of the vehicle 6 is displayed. Moreover, it is displayed that a change in the permitted maximum speed is imminent on the route.
With reference to
The initial situation in the third exemplary embodiment is that it has been ascertained that the vehicle 6 is operated in one of the driving modes and correspondingly a display is displayed on the display surface 3.
It is then ascertained that a change in driving mode is imminent.
In the case of
Then a time is ascertained after which the vehicle 6 changes from the first driving mode into the third driving mode. In this case the “vehicle guidance” action is transferred to the vehicle 6, whilst the “monitoring” action remains with the driver. In the present example, it is ascertained that this changeover will be carried out in one minute.
A direction-indicating graphical element 12 is generated between the action elements 10.1 and 10.2. The direction-indicating graphical element 12 in this case is designed as an arrow. The arrow 12 points from the action element 10.2 to the action element 10.1. As a result, it is indicated to the driver that the “vehicle guidance” action is transferred from the “driver” actuator, to whom the action element 10.2 is assigned, to the “vehicle” actuator 6, to which the action element 10.1 is assigned.
Moreover, in addition to the direction-indicating graphical element 12, the ascertained time is displayed numerically, namely 1 minute until the transfer of the action.
The arrow 12 comprises a line 12.2 and a tip 12.1. A part 14.2 of the line 12.2 is displayed highlighted in comparison with the other part 14.1 of the line 12.2. In this case the length of the part 14.2 of the line 12.2, which is displayed highlighted is dependent on the ascertained time.
The highlighted part 14.2 of the line 12.2 becomes shorter as the remaining time decreases. In addition to the numerical display, the remaining time is therefore also displayed to the driver via the shortening of the highlighted part 14.2 of the line 12.2.
The highlighted part 14.2 is highlighted by the line thickness in the highlighted part 14.2 being selected to be larger than in the non-highlighted part 14.1.
Alternatively, the thicker part 14.2 of the arrow line 12.2 may also become longer as the ascertained time decreases. In the present example, the thicker part 14.2 of the arrow line 12.2 then grows toward the action element 10.1.
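The dependence of the highlighted arrow portion on the remaining time could be sketched as follows; the pixel unit, the parameter names, and the `shrink` switch between the two variants described above are illustrative assumptions, not part of the application:

```python
def highlighted_length(total_px, remaining_s, total_s, shrink=True):
    """Length (in pixels) of the highlighted, thicker part 14.2 of the arrow
    line. With shrink=True the highlighted part becomes shorter as the
    remaining time runs out; with shrink=False it grows toward the target
    action element instead (the alternative variant)."""
    fraction = max(0.0, min(1.0, remaining_s / total_s))
    return round(total_px * (fraction if shrink else 1.0 - fraction))
```

The display would re-evaluate this length periodically so that the arrow acts as a countdown bar alongside the numerical time display.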
In the example of
Therefore, in turn, a direction-indicating graphical element 12, once again an arrow, is displayed, which this time however points from the action element 10.1 to the action element 10.2. The time which remains until the driving mode is changed is displayed numerically adjacent to the arrow 12.
Moreover, a part 14.2 of the arrow line 12.2 is once again displayed highlighted by being displayed thicker than the non-highlighted part 14.1 of the arrow line 12.2. The length of the highlighted part 14.2 of the arrow line 12.2 is in turn dependent on the ascertained remaining time until the change of driving mode. The length of the highlighted part 14.2 of the arrow 12 is dynamically adapted to the remaining time.
Additionally, a further graphical element 13 which indicates to the driver that he currently has to carry out the “monitoring” action is shown in
A warning, which warns the driver of the change of driving mode may be additionally emitted on the display surface 3. As a result, the driver is not surprised that he himself now has to carry out one of the actions which he has not previously carried out.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor, module or other unit may fulfill the functions of several items recited in the claims.
The mere fact that certain measures are recited in mutually different dependent claims or embodiments does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
Number | Date | Country | Kind |
---|---|---|---|
10 2015 214 685 | Jul 2015 | DE | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2016/066292 | 7/8/2016 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2017/021099 | 2/9/2017 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
6067492 | Tabata | May 2000 | A |
6226570 | Hahn | May 2001 | B1 |
7091839 | Situ | Aug 2006 | B2 |
7327241 | Toda | Feb 2008 | B2 |
7747370 | Dix | Jun 2010 | B2 |
8269726 | Prados | Sep 2012 | B2 |
8330713 | Stelandre et al. | Dec 2012 | B2 |
9007199 | Yamada | Apr 2015 | B2 |
9174642 | Wimmer et al. | Nov 2015 | B2 |
9446771 | Yamada | Sep 2016 | B2 |
9567008 | Eichhorn | Feb 2017 | B2 |
9683875 | Hackenberg | Jun 2017 | B2 |
9694808 | Weiss | Jul 2017 | B2 |
20030023351 | Fukui | Jan 2003 | A1 |
20030168271 | Massen | Sep 2003 | A1 |
20060220810 | Toda | Oct 2006 | A1 |
20060287826 | Shimizu | Dec 2006 | A1 |
20080249692 | Dix | Oct 2008 | A1 |
20100052888 | Crowe et al. | Mar 2010 | A1 |
20100057281 | Lawyer | Mar 2010 | A1 |
20100262347 | Murota | Oct 2010 | A1 |
20110037582 | Wu | Feb 2011 | A1 |
20110248946 | Michaelis et al. | Oct 2011 | A1 |
20110260887 | Toledo | Oct 2011 | A1 |
20120185135 | Sheriff | Jul 2012 | A1 |
20140088814 | You et al. | Mar 2014 | A1 |
20140149909 | Montes | May 2014 | A1 |
20140210608 | Yamada | Jul 2014 | A1 |
20150239454 | Sujan | Aug 2015 | A1 |
20150291032 | Kim | Oct 2015 | A1 |
20160144856 | Kim | May 2016 | A1 |
20160185351 | Jerger | Jun 2016 | A1 |
20160257303 | Lavoie | Sep 2016 | A1 |
20160264137 | Lavoie | Sep 2016 | A1 |
20160314752 | Nakano | Oct 2016 | A1 |
20170136878 | Frank et al. | May 2017 | A1 |
20180222491 | Miyahara | Aug 2018 | A1 |
Number | Date | Country |
---|---|---|
19743024 | Apr 1999 | DE |
20180024 | Nov 2001 | DE |
10324580 | Dec 2004 | DE |
102006012147 | Mar 2007 | DE |
102006047893 | Jun 2007 | DE |
102008040755 | Feb 2010 | DE |
102009060391 | Jun 2011 | DE |
102010012247 | Sep 2011 | DE |
102011016391 | Dec 2011 | DE |
102010047778 | Apr 2012 | DE |
102012002306 | Aug 2013 | DE |
102013208206 | Nov 2014 | DE |
102013018966 | May 2015 | DE |
102014009985 | Jan 2016 | DE |
2159098 | Mar 2010 | EP |
2026174 | Oct 2010 | EP |
2953304 | Jun 2011 | FR |
2005216110 | Aug 2005 | JP |
2016000814 | Jan 2016 | WO |
2017021099 | Feb 2017 | WO |
Entry |
---|
International Search Report and Written Opinion, Application No. PCT/EP2016/066292, 9 pages, dated Nov. 14, 2016. |
Number | Date | Country | |
---|---|---|---|
20180222318 A1 | Aug 2018 | US |