The present disclosure relates to reducing power usage in a virtual visor.
Automotive vehicles are typically equipped with a visor that can fold down and block the sun from shining directly into the driver's eyes. However, the visor is not transparent and thus blocks part of the driver's field of view, which can be dangerous. Virtual visors have been developed that use a transparent display that is electronically controlled to darken only the areas that are directly between the driver's eyes and the incoming direct sunlight. This blocks the direct sunlight from the driver's eyes while leaving other regions of the virtual visor transparent to maintain visibility through the virtual visor.
According to an embodiment, a system for reducing power consumption of a virtual visor within a vehicle comprises a camera configured to capture images of a face of a driver; a visor screen having a plurality of liquid crystal display (LCD) pixels, each LCD pixel configured to (i) in an opaque state, block light from passing through a corresponding area of the visor screen, and (ii) in a transparent state, allow light to pass through the corresponding area of the visor screen; one or more monitors configured to monitor an environment inside or outside of the vehicle and output a trigger signal indicating a reduced need for the LCD pixels to be in the opaque state based on the monitored environment; and a processor configured to: process the captured images and select a group of the LCD pixels to transition between the transparent state and the opaque state based on the processed captured images, and in response to the trigger signal being output by the one or more monitors, cease the processing of the captured images and maintain the LCD pixels in the transparent state.
According to an embodiment, a method of controlling a virtual visor of a vehicle comprises capturing images of a face of a driver from a camera; performing facial recognition on the captured images to determine a location of eyes of the driver; transitioning a group of LCD pixels of the virtual visor from a transparent state to an opaque state based on the determined location of the eyes to block at least some sunlight from traveling into the eyes; monitoring an environment inside or outside of the vehicle via one or more sensors; receiving a trigger signal based on the monitored environment that indicates a reduced need for the LCD pixels to be in the opaque state; and disabling the step of performing facial recognition in response to receiving the trigger signal.
According to an embodiment, a non-transitory computer-readable medium configured to store instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising: operate a camera to capture images of a face of a driver of a vehicle; process the captured images to determine a location of eyes of the driver; based on the determined location of the eyes, command a group of LCD pixels of a virtual visor screen to switch between (i) a transparent state to allow light to transmit through a corresponding area of the virtual visor screen, and (ii) an opaque state to block light from transmitting through the corresponding area of the virtual visor screen; operate one or more sensors to monitor an environment inside or outside of the vehicle; and cease the processing of the captured images in response to the monitored environment indicating a reduced need for the LCD pixels to be in the opaque state.
Embodiments of the present disclosure are described herein. It is to be understood, however, that the disclosed embodiments are merely examples and other embodiments can take various and alternative forms. The figures are not necessarily to scale; some features could be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the embodiments. As those of ordinary skill in the art will understand, various features illustrated and described with reference to any one of the figures can be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combinations of features illustrated provide representative embodiments for typical applications. Various combinations and modifications of the features consistent with the teachings of this disclosure, however, could be desired for particular applications or implementations.
In at least some embodiments, the screen 14 is mounted or otherwise attached to a surface within the cabin 12 in the field of view of the driver 18 or other passenger. In particular, in some embodiments, the screen 14 is mounted to the vehicle so as to be in the line of sight of the driver 18 sitting in the driver's seat and looking through the windshield 16. For example, in the case of a left-hand drive vehicle, the screen 14 may be mounted to the interior roof or headliner adjacent to the windshield 16 so as to cover and/or obstruct at least a portion of an upper-left (as viewed within the cabin 12) region of the windshield 16. Conversely, in the case of a right-hand drive vehicle, the screen 14 may be mounted to the interior roof or headliner adjacent to the windshield 16 so as to cover and/or obstruct at least a portion of an upper-right (as viewed within the cabin 12) region of the windshield 16. The screen 14 may also be mounted to any pillar of the vehicle to cover any window of the vehicle. In some embodiments, the screen is integrated within the glass of the windshield 16 or other window of the vehicle.
The screen 14 can also be automatically controlled such that a first portion of the visor is transparent and a second portion of the visor is opaque or non-transparent. In embodiments, the screen 14 may be a liquid crystal display (LCD) screen. Individual pixels or regions within the screen 14 that are aligned between the sunlight and the driver's eyes can be commanded to be opaque to block the sunlight from the driver's eyes, while other regions of the screen 14 not aligned between the sunlight and the driver's eyes may be transparent such that the field of view is maintained. In short, only a portion of the screen 14 may be commanded to be opaque, while the remainder of the screen 14 is commanded to be transparent.
To do so, the virtual visor system 10 includes an illumination sensor, such as a camera 22, as shown in
In an embodiment, the illumination sensor 24 is an incident light sensor. The incident light sensor may be mounted or otherwise attached inside or outside the cabin at a position where it can detect the ambient or incident light. The incident light sensor can detect and measure both the intensity and the direction of ambient light. In embodiments, the incident light sensor utilizes collimators or polarizers to determine the direction of the incident light source relative to the light sensor. In embodiments, the incident light sensor has external directional components used for calibration based on the relative position of a reference light source. Collimators, in connection with position-sensitive light detectors, are used to collect information on the amount of electric charge induced in one or more electrodes by a collimated light beam. The information from the electrodes is used to derive a direction of incidence of the light. In embodiments, the light sensor 24 implements light detectors distributed over a spherical (e.g., hemispherical) surface to determine the direction of incident light based on which light detectors are activated by the incident light. In embodiments, the incident light sensor uses polarization filters to uniquely polarize light from different directions to detect the direction of incident light based on the type of polarization detected. Alternatively, the incident light sensor includes a dielectric layer (or a stack of dielectric layers), a plurality of photo detectors coupled relative to the dielectric layer, and a plurality of stacks of opaque slats embedded within the dielectric layer, wherein the dielectric layer is substantially transparent to the incident light, the photo detectors detect the incident light through the dielectric layer, and the stacks of opaque slats are approximately parallel to an interface between the dielectric layer and the photo detectors. The stacks of opaque slats define light apertures between adjacent stacks of opaque slats, and at least some of the stacks of opaque slats are arranged at a non-zero angle relative to other stacks of the opaque slats. In short, the light sensor is configured to detect and measure both the intensity and the direction of incident light, and the light sensor may take one of a variety of structural forms to do so.
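By way of a non-limiting illustration, the following Python sketch shows one way the direction of incident light could be estimated from light detectors distributed over a hemispherical surface, as described above; the detector layout, the readings, and the intensity-weighted-average estimator are illustrative assumptions rather than the specific sensor hardware of this disclosure.

```python
import numpy as np

def estimate_light_direction(detector_normals: np.ndarray,
                             readings: np.ndarray) -> np.ndarray:
    """detector_normals: (N, 3) unit vectors, one per detector on the
    hemisphere; readings: (N,) measured intensities. Returns a unit vector
    pointing toward the estimated light source."""
    weighted = (readings[:, None] * detector_normals).sum(axis=0)
    norm = np.linalg.norm(weighted)
    if norm == 0.0:
        raise ValueError("no detector activated; direction undefined")
    return weighted / norm

# Example: three detectors; the upward-facing one reads brightest, so the
# estimated direction tilts toward straight overhead.
s = np.sqrt(2.0) / 2.0
normals = np.array([[0.0, 0.0, 1.0], [s, 0.0, s], [-s, 0.0, s]])
print(estimate_light_direction(normals, np.array([0.9, 0.2, 0.1])))
```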
The virtual visor system 10 further includes a processor 30. The processor is communicatively coupled to the camera 22 or other light sensor 24, as well as the memory 26 and the screen 14. The processor 30 can include more than one processor. The processor 30 is programmed to execute instructions stored on the storage 26 for altering the translucence or opaqueness of the screen 14. In particular, the screen 14 may be a liquid crystal display (LCD) screen having a plurality of independently operable LCD pixels and/or LCD shutters arranged in a grid formation. Each pixel is configured to be selectively operated by the processor 30 in one of at least two optical states: (1) an opaque state, in which the respective pixel blocks light from passing through a respective area of the screen 14, and (2) a transparent state, in which the respective pixel allows light to pass through the respective area of the screen 14. It will be appreciated, however, that any number of intermediate optical states may also be possible. As the processor 30 acts in this fashion to control the screen 14, the processor 30 may also be referred to as a "controller," or may be connected to a separate controller that physically performs the action of controlling the pixels of the screen 14. Furthermore, the opaque state and the transparent state do not necessarily indicate a 100% opaque characteristic and a 100% transparent characteristic, respectively. Instead, the opaque state is simply one which blocks more light from passing through the respective area than does the transparent state. It will be appreciated that the screen 14 may instead utilize technology other than LCD pixels, and a shutter screen may utilize any type of panel having shutter pixels that are electrically, magnetically, and/or mechanically controllable to adjust an optical transparency thereof. For example, the screen 14 may include a grid of light-emitting diodes (LEDs) that can be controlled to be off (e.g., transparent) and on (e.g., a darkened color such as black).
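By way of a non-limiting illustration, the following Python sketch models the two optical states and the selection of a group of pixels described above; the grid dimensions and interface are assumptions, and a physical screen would be driven through row/column driver circuits rather than an in-memory array.

```python
from enum import Enum

class PixelState(Enum):
    TRANSPARENT = 0  # light passes through the corresponding area of the screen
    OPAQUE = 1       # light is blocked from passing through that area

class VisorScreen:
    """In-memory stand-in for the grid of independently operable pixels."""
    def __init__(self, rows: int, cols: int):
        self.pixels = [[PixelState.TRANSPARENT] * cols for _ in range(rows)]

    def set_region(self, row_range: range, col_range: range,
                   state: PixelState) -> None:
        for r in row_range:
            for c in col_range:
                self.pixels[r][c] = state

screen = VisorScreen(rows=4, cols=8)  # e.g., 32 pixels, akin to S1-S32
screen.set_region(range(1, 3), range(2, 5), PixelState.OPAQUE)  # shade a block
```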
If the screen 14 is an LCD screen operable by the processor 30, the screen 14 may operate as follows, according to various embodiments. The screen itself may comprise a thin layer of glass with liquid crystals, with a white illumination system placed right behind the glass. Each pixel may be composed of multiple (e.g., three) "subpixels," each able to produce a different color, such as red, blue, and green. When activated by an electric current, the subpixels work as "shutters". Depending on the intensity of the current, pixels will become more or less "closed". This blocking, or partial blocking, takes place perpendicular to the passage of light. The mix of those three subpixels creates the actual final color visible on the screen 14. Likewise, if all three subpixels are "open" (or "not colored"), the backlight will travel through the subpixels with no alteration. The result is a transparent dot in the region of the pixel. So, in order for a region to be transparent, the LCD pixels in that area are energized.
The screen 14 may also be an organic light-emitting diode (OLED) screen operable by the processor 30. In such an embodiment, the screen may include two layers of glass on both sides of a set of addressable LEDs with an emissive layer and a conductive layer. Electrical impulses travel through the conductive layer and produce light at the emissive layer. So, in order for a region to be transparent, the OLED screen is simply not energized. However, OLEDs have difficulty producing the dark (e.g., black) colors that may be needed to effectively block out direct sunlight.
It should be understood that the examples provided above regarding LCD and OLED screens are merely examples of transparent displays that can be used as the screen 14. Other available technologies can also be utilized as the screen 14. As controlled by the processor 30 and instructions stored in memory, and utilizing any of the exemplary screen technologies described herein, the screen 14 is configured to transition between (1) an opaque state, in which regions of the screen are opaque to block out at least a portion of the sunlight, and (2) a transparent state, in which the regions allow light to pass through the respective area of the screen 14.
The processor 30 may include one or more integrated circuits that implement the functionality of a central processing unit (CPU), display controller, and/or graphics processing unit (GPU). In some examples, the processor 30 is a system on a chip (SoC) that integrates the functionality of the CPU and GPU. The SoC may optionally integrate other components, such as the storage 26, into a single integrated device. In other examples, the CPU and GPU are connected to each other via a peripheral connection device such as PCI express or another suitable peripheral data connection. In one example, the CPU is a commercially available central processing device that implements an instruction set such as one of the x86, ARM, Power, or MIPS instruction set families. The processor may include one or more devices selected from microprocessors, micro-controllers, digital signal processors, microcomputers, central processing units, field programmable gate arrays, programmable logic devices, state machines, logic circuits, analog circuits, digital circuits, or any other devices that manipulate signals (analog or digital) based on computer-executable instructions residing in memory.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a field programmable gate array (“FPGA”) or an application specific integrated circuit (“ASIC”). Such a special purpose circuit may be referred to as a computer processor even if it is not a general-purpose processor.
Regardless of the specifics, during operation, the processor 30 executes stored program instructions that are retrieved from the storage 26. The storage 26, when accessed by the processor 30, may be configured to enable execution of instructions to alter the translucence and/or opaqueness of one or more pixels 32 or regions of pixels 32 of the screen 14. The storage 26 may include a single memory device or a plurality of memory devices including, but not limited to, random access memory ("RAM"), volatile memory, non-volatile memory, static random-access memory ("SRAM"), dynamic random-access memory ("DRAM"), flash memory, cache memory, or any other device capable of storing information. The non-volatile memory includes solid-state memories, such as NAND flash memory, magnetic and optical storage media, or any other suitable data storage device that retains data when the virtual visor system 10 is deactivated or loses electrical power. Programs residing in the non-volatile storage may include or be part of an operating system or an application, and may be compiled or interpreted from computer programs created using a variety of programming languages and/or technologies, including, without limitation, and either alone or in combination, Java, C, C++, C#, Objective C, Fortran, Pascal, JavaScript, Python, Perl, and PL/SQL. The volatile memory includes static and dynamic random-access memory (RAM) that stores program instructions and data during operation of the virtual visor system 10.
As shown in
In at least one embodiment, the screen 14 includes a border or bezel 40 configured to surround and/or contain the array of pixels 32 (S1-S32) and to secure and/or hold the array of pixels 32 (S1-S32) together. While the screen 14 may be an LCD screen having these pixels, in other embodiments the pixels and/or type of light source may vary. The pixels 32 may be connected to driver circuits that control individual pixels and/or rows or columns of pixels 32.
The screen 14 may also include an electronic connector 42 configured to connect the controller or processor 30 to the individual pixels 32 and/or to the driver circuits, connect the driver circuits to the individual pixels, and/or connect the screen 14 to a power source. The controller or processor 30 as well as the driver circuits can be configured to provide appropriate voltages, currents, data, and/or other signals to the screen 14 via the connector 42 to operate the pixels 32 and control the optical states thereof (i.e., control whether each pixel is in the opaque state or the transparent state). In some embodiments, certain data (e.g., an identification of which pixels are opaque and which are transparent) or other signals are transmitted back to the processor or controller 30 from the pixels 32 via the connector 42.
The controller or processor 30 selects the regions of the screen 14 to transition from the transparent mode to the opaque mode to cast a shadow on the driver's face that blocks the sunlight to the driver's eyes while maintaining a remainder of the screen 14 as transparent. The processor 30 can select which areas of the screen 14 are to be opaque based on several inputs. In one embodiment, the image data 28 from the images captured by the camera 22 is analyzed for facial recognition, which can include face detection, face alignment, 3D reconstruction, and the like. For example, the storage 26 may include a facial recognition model 44, or other similar model. The facial recognition model can be, for example, OpenFace or a similar available machine learning model. The model may be a pre-trained model from, for example, DLIB or OpenCV. The image can first be analyzed for detecting a face, shown in
The facial recognition model can allow the associated processor to determine the presence and location of the driver's eyes in the real-time captured image, shown within boundary box 52. The processor can be calibrated or pre-trained to associate a certain eye location within the image data with a corresponding pixel (or group of pixels) in the screen that, when turned opaque, will block out the direct sunlight from the driver's eyes. This calibration may also take into account the location of the sun. For example, the calibration of the virtual visor system 10 may receive, as inputs, the location of the light source (e.g., as detected from an externally-facing camera or other environmental sensor 34 such as an incident light sensor) and the location of the detected eyes of the driver, and may command a corresponding region of the screen 14 to transition to the opaque state such that the sunlight is blocked from traveling directly to the driver's eyes.
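By way of a non-limiting illustration, the geometry behind such a calibration could resemble the following Python sketch, which casts a ray from the detected eye position toward the light source and darkens the pixel where the ray crosses the visor plane; the cabin coordinate frame, the planar visor model, and the orthonormal in-plane axes are illustrative assumptions.

```python
import numpy as np

def pixel_to_darken(eye_pos, sun_dir, plane_point, plane_normal,
                    plane_u, plane_v, pixel_size):
    """All arguments except pixel_size are 3-D numpy vectors in the cabin
    frame; sun_dir points from the eye toward the sun, and plane_u/plane_v
    are orthonormal axes lying within the visor plane. Returns the
    (row, col) of the visor pixel to darken, or None if no pixel applies."""
    denom = np.dot(sun_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None                        # ray runs parallel to the visor
    t = np.dot(plane_point - eye_pos, plane_normal) / denom
    if t < 0:
        return None                        # visor not between eye and sun
    hit = eye_pos + t * sun_dir            # intersection with the visor plane
    local = hit - plane_point              # express the hit in visor axes
    col = int(np.dot(local, plane_u) // pixel_size)
    row = int(np.dot(local, plane_v) // pixel_size)
    return row, col
```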
Once the selected pixels are turned opaque to cast a shadow on the driver's face, the camera or other sensor can detect the presence of the shadow on the driver's face to ensure the shadow aligns with the detected location of the driver's eyes.
The above description of a facial recognition model to detect the presence and location of driver's eyes is but one example. Other facial recognition models exist and can be implemented to perform similar functions of detecting the location of the driver's eyes, the location and intensity of the sunlight, and darkening a corresponding region of the screen 14 such that the sunlight can pass through the screen 14 at locations other than those that would travel directly to the driver's eyes.
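By way of a non-limiting illustration, one readily available implementation of the face- and eye-detection step is sketched below in Python using OpenCV's pre-trained Haar cascades, which ship with the library; this is only one stand-in for the facial recognition model 44 (OpenFace or a DLIB landmark model could serve the same role).

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def locate_eyes(frame):
    """Return a list of (x, y, w, h) eye boxes in full-image coordinates."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[fy:fy + fh, fx:fx + fw]   # search for eyes in the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            eyes.append((fx + ex, fy + ey, ew, eh))
    return eyes
```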
Once certain pixels 32 are controlled to be opaque, the camera 22 and associated image data 28 can be utilized to check the accuracy of the system. For example, the storage 26 can be provided with shadow-detection instructions that, when executed by the processor 30, cause the processor to analyze the image data to look for shadows. For example, as shown in
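By way of a non-limiting illustration, such a shadow check could be as simple as the following Python sketch, which compares the mean brightness of the expected shadow region against the face as a whole; the ratio threshold and region definitions are illustrative assumptions.

```python
import numpy as np

def shadow_covers_eyes(gray_frame, eye_box, face_box, ratio_threshold=0.6):
    """eye_box and face_box are (x, y, w, h) in image coordinates. Returns
    True if the eye region is noticeably darker than the face overall,
    i.e., the cast shadow aligns with the detected eyes."""
    ex, ey, ew, eh = eye_box
    fx, fy, fw, fh = face_box
    eye_mean = np.mean(gray_frame[ey:ey + eh, ex:ex + ew])
    face_mean = np.mean(gray_frame[fy:fy + fh, fx:fx + fw])
    return eye_mean < ratio_threshold * face_mean
```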
Instead of, or in addition to, the camera 22, a thermal camera may be utilized. In such an embodiment, the thermal camera can detect heat coming from the driver's head, and a corresponding control algorithm or machine learning system can detect the presence and location of the driver's head, along with the presence and location of the driver's eyes, based on the heat signatures from the thermal camera.
Continuous operation of the virtual visor system 10 can consume power at unnecessary times. The facial recognition model 44 may be one of several controls utilized in order to cast a shadow on the eye region of the driver's face. In addition to, or as part of, the facial recognition model, face alignment, 3D face reconstruction, estimation of sunlight direction, facial shadow estimation, mapping from facial shadow to visor location, and other controls may be utilized. Detection of the driver's face and eyes via the aforementioned camera or sensors, along with the corresponding control algorithms and processing of the images obtained by the camera, can run at a high frequency (e.g., 10-30 Hz), thereby demanding a relatively large amount of power to operate. Determining the conditions in which the virtual visor system 10 is to be deployed can be extremely beneficial for reducing the power consumption of the system. For example, if conditions warrant the virtual visor system 10 being dormant or otherwise not actively working, power can be saved.
According to various embodiments of this disclosure, portions of the virtual visor system 10 are configured to be disabled when not needed, as indicated by certain triggers detected by one or more monitors (e.g., sensors). One or more intermittent on-board monitors, independent of any external connection, can provide the triggers to control the operation of the screen 14 and perform calibration checks. The monitors may be operated at a lower frequency (i.e., lower than the high frequency explained above), thereby consuming less power than the overall control algorithms described earlier for controlling the pixels 32 of the screen 14. The monitors can be configured by the user, while the parameters of the visor control algorithms may not be exposed to the user. For instance, if a user still wants to use the virtual visor while wearing sunglasses, the user can simply turn off the sunglasses monitor. The triggers can be provided to indicate certain scenarios in which use of opaque areas on the virtual visor would be unnecessary. For example, if it is determined that it is dark outside, or raining, or that the driver is wearing sunglasses, the facial recognition models and other steps of processing the image data from the camera may cease to save energy. Likewise, transitions of the pixels between the opaque and transparent states can cease. The pixels 32 can be placed and maintained in the transparent mode until the trigger from the monitors is removed or absent.
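By way of a non-limiting illustration, the gating described above could be structured as in the following Python sketch, in which the low-frequency monitors run continuously while the high-frequency image pipeline is skipped whenever any trigger is active; the rates and the monitor interface are illustrative assumptions.

```python
import time

HIGH_RATE_HZ = 20     # image-processing rate, within the 10-30 Hz band above
MONITOR_RATE_HZ = 1   # monitors poll far less often, consuming less power

def run_visor(monitors, process_frame, set_all_transparent):
    next_check, triggered = 0.0, False
    while True:
        now = time.monotonic()
        if now >= next_check:              # low-frequency monitor poll
            triggered = any(m.trigger_active() for m in monitors)
            next_check = now + 1.0 / MONITOR_RATE_HZ
        if triggered:
            set_all_transparent()          # hold pixels transparent; pipeline idle
            time.sleep(1.0 / MONITOR_RATE_HZ)
        else:
            process_frame()                # full facial-recognition pipeline
            time.sleep(1.0 / HIGH_RATE_HZ)
```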
The weather monitor 60 can include one or more environmental sensors, such as environmental sensor 34. In an embodiment, multiple environmental sensors are utilized. As an example, the weather monitor 60 may be performed by the processor 30 (or another processor) and may receive, as input, data from one or more of a variety of sensors such as an ambient light sensor, a rain sensor, an ambient temperature sensor, and the like. A sensor fusion algorithm can be used to combine the measurements from the different sensors to estimate the current weather conditions. For example, in certain situations, one sensor alone may not be reliable. It may therefore be desirable to combine different sensors to obtain a robust measurement. One possible approach is decision-level fusion, where each sensor outputs a respective decision, and the algorithm then combines the decisions based on a majority voting strategy. Another is score-level fusion, where each sensor outputs a score (e.g., volume, voltage, probability) and these scores are fused using learning-based methods (e.g., regression). The weather monitor 60 can also receive locational information from the vehicle's global positioning system (GPS), and current weather parameters for the present location from an off-board sensor wirelessly communicating with the vehicle.
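By way of a non-limiting illustration, the decision-level (majority voting) variant could look like the following Python sketch; the per-sensor decisions are hypothetical inputs, and a score-level variant would instead feed per-sensor scores into a trained regressor.

```python
def weather_is_inclement(sensor_decisions) -> bool:
    """sensor_decisions: iterable of booleans, one per sensor (e.g., ambient
    light low, rain detected, temperature consistent with precipitation)."""
    votes = list(sensor_decisions)
    return sum(votes) > len(votes) / 2   # simple majority voting strategy

# Example: the rain and light sensors vote inclement; temperature does not.
print(weather_is_inclement([True, True, False]))  # -> True
```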
In any of these embodiments, the detected or estimated weather is used to determine what control state the virtual visor system 10 should operate in. For example, if the weather monitor indicates that there is currently inclement weather (e.g., cloud cover, rain, etc.), then the virtual visor system 10 can be deactivated or placed in a low-power mode. In some embodiments, the screen 14 can be placed in a transparent mode and held in such a mode until the weather monitor 60 indicates the weather has improved, removing the trigger condition that interrupted the control of the virtual visor system 10.
In an embodiment, a sunglasses monitor 62 may be provided. When the driver is wearing sunglasses, the use of the virtual visor system 10 may not provide any useful benefit to the driver. To determine whether the driver is wearing sunglasses, the sunglasses monitor may access the facial recognition model described above. In particular, once a face is detected, similar methods (e.g., machine learning, such as a pre-trained deep neural network) may be employed to determine whether or not sunglasses are present on the detected face. If it is determined that the driver is wearing sunglasses, the screen 14 can be placed in a transparent mode and held in such a mode until the sunglasses monitor 62 clears the trigger condition (e.g., the driver is no longer detected to be wearing sunglasses) for interrupting the control of the virtual visor. During calibration of the sunglasses monitor, the driver can be requested to remove the sunglasses before initializing the system.
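By way of a non-limiting illustration, a lightweight heuristic stand-in for the sunglasses check is sketched below in Python: if a face is detected but the eye cascade finds no eyes within it, dark lenses are one likely cause. This is only an illustrative proxy for the pre-trained deep model described above, not the disclosed method itself.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def driver_wearing_sunglasses(gray_frame) -> bool:
    """Heuristic: a visible face with no detectable eyes suggests sunglasses."""
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray_frame, 1.3, 5):
        roi = gray_frame[fy:fy + fh, fx:fx + fw]
        if len(eye_cascade.detectMultiScale(roi)) == 0:
            return True   # face visible, eyes occluded; treat as sunglasses
    return False
```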
In an embodiment, a day/night monitor 64 may be provided. At night, operation of the sun visor is likely not useful, and therefore the power flow to the virtual visor system can be interrupted (or the power used by the system can be reduced) to save power as described herein. The day/night monitor 64 may include an environmental sensor 34 in the form of an ambient light sensor configured to detect the amount of light outside the vehicle. The day/night monitor 64 may also access the current date and time, and optionally the current location, and consult a look-up table that matches the time of day and the location to the current position of the sun, or to the time of sunset or darkness at which a visor would not provide benefits. These points can be calibrated into the system.
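By way of a non-limiting illustration, the day/night determination could combine the two inputs as in the following Python sketch; the lux threshold and the sunset look-up table are illustrative calibration assumptions.

```python
from datetime import datetime, time as dtime

AMBIENT_LUX_NIGHT = 50.0  # hypothetical calibration point for "dark outside"

# Hypothetical look-up table: (rounded latitude, longitude) -> sunset time.
SUNSET_TABLE = {(48, 11): dtime(hour=21, minute=5)}

def is_night(ambient_lux: float, lat: float, lon: float,
             now: datetime) -> bool:
    after_sunset = False
    sunset = SUNSET_TABLE.get((round(lat), round(lon)))
    if sunset is not None:
        after_sunset = now.time() >= sunset   # location-aware sunset check
    return ambient_lux < AMBIENT_LUX_NIGHT or after_sunset
```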
As previously explained, an incident light sensor may be utilized for control of the screen 14. In an embodiment, the incident light sensor detects a location and magnitude of a light source (e.g., the sun, another vehicle's headlights, etc.). The processor can then control the opaqueness of regions of the screen 14 based on the incident light. For example, the overall ambient light might be below a threshold, which might indicate that it is night outside and that the visor may be deactivated to save power. However, if the incident light sensor determines that an incident light is above a threshold even though the overall ambient light is below a corresponding threshold, the active control of the screen 14 may still be provided. For example, it may be near dawn or dusk outside, with the overall ambient light relatively low, yet the incident light sensor may detect direct sunlight exceeding a threshold at which it may interfere with the driver's vision. Therefore, the screen 14 may be controlled to turn corresponding pixels opaque to block out the direct sunlight even though the ambient light is below a threshold. In another embodiment, it may be dark or night time, in which case the ambient light is far below a corresponding threshold that would otherwise cease the active control of the screen to save power according to the teachings herein. However, bright headlights (e.g., high beams) from an oncoming vehicle may interfere with the driver's vision. Therefore, the screen 14 may be actively controlled if the incident light sensor determines that the high beams are in a location that may interfere with the driver's vision, turning corresponding pixels of the screen 14 opaque.
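By way of a non-limiting illustration, the override logic just described reduces to a few lines of Python; the threshold values are illustrative assumptions.

```python
AMBIENT_THRESHOLD = 100.0    # lux; below this, it is treated as dark outside
INCIDENT_THRESHOLD = 5000.0  # lux; above this, a direct source may cause glare

def visor_should_run(ambient_lux: float, incident_lux: float) -> bool:
    if ambient_lux >= AMBIENT_THRESHOLD:
        return True                     # ordinary daylight operation
    # Dark ambient conditions: stay active only for a strong directional
    # source, e.g., a low sun at dawn/dusk or oncoming high beams.
    return incident_lux >= INCIDENT_THRESHOLD
```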
The monitors 60-64 are merely exemplary ways of determining whether the use of the virtual visor system 10 would not provide benefit to the driver. It should be understood that other monitors 66 can also be provided to work along with monitors 60-64 or as standalone monitors. In one embodiment, the other monitors 66 include a switch that determines whether the screen 14 or surrounding visor is folded down or up. If the screen 14 is folded up against the inner roof of the vehicle in a storage position, the virtual visor control can be disabled. If the screen 14 is folded down away from the inner roof of the vehicle in a use position (e.g., between the driver's eyes and a portion of the windshield), then the virtual visor control can be enabled (e.g., the controller can control the pixels 32). The switch to determine the position of the screen 14 can be a physical proximity switch or the like.
If any of the above monitors 60-66 outputs a signal indicating a trigger of said monitor, the control of the virtual visor can be disabled at 68. In other words, if the weather monitor 60 indicates inclement weather, or the sunglasses monitor 62 indicates the driver is wearing sunglasses, or the day/night monitor 64 indicates it is night time, or the other monitors 66 output a similar signal, the virtual visor system 10 can be placed "off" at 68. If placed in the "off" mode, the system can be in a low-power mode, wherein certain structures such as the camera 22 and pixels 32 are not activated. This can reduce the overall power consumption of the system 10. Alternatively, this may simply implement a kill switch (either hardware-based or software-based) to shut down the system 10 until the monitors 60-66 indicate the absence of the trigger (e.g., the weather has improved, the driver has taken off his/her sunglasses, etc.). The ability to control the screen 14, as represented by the virtual visor control 70, may be carried out by the processor or controller 30 described herein. For example, the processor or controller 30 can disable the operation of the controls and analysis described herein (e.g., face detection, face alignment, face reconstruction, shadow estimation, light direction estimation, etc.), and place all of the pixels in a constant state such as the transparent mode. As these controls and analyses consume relatively large amounts of power, power is saved by not running them.
If the sensors do not indicate that it is night time (e.g., it is daylight outside), then the processor determines whether the driver is wearing sunglasses at 88. In other words, the output of the sunglasses monitor 62 is accessed. If the sensors determine that the driver is wearing sunglasses, the virtual visor control is disabled at 86.
If the driver is not wearing sunglasses, the method advances to 90 in which the processor accesses the output of the weather monitor 60 to determine whether there are conditions that indicate inclement weather outside the vehicle, such as rain, cloud cover, etc. If there is inclement weather, the virtual visor control is disabled at 86 to reduce power demands.
If none of the monitors 60-66 outputs a signal indicating that the screen 14 would not be beneficial for the driver, then the method proceeds to 92, in which the virtual visor system 10 is allowed to operate normally, e.g., able to control the pixels to be in either the transparent mode or the opaque mode.
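By way of a non-limiting illustration, the decision chain of the flowchart could be expressed as in the following Python sketch; the monitor interfaces are hypothetical.

```python
def visor_control_enabled(day_night, sunglasses, weather, others=()) -> bool:
    """Mirror of the flowchart: any active trigger disables the visor control
    (at 86); otherwise normal operation proceeds (at 92)."""
    if day_night.is_night():
        return False
    if sunglasses.detected():
        return False
    if weather.inclement():
        return False
    return all(not m.trigger_active() for m in others)
```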
It should be understood that the steps shown in
While embodiments described herein show the virtual visor system implemented in an automotive vehicle, the teachings of this disclosure can also be applied in other visor settings. For example, the systems described herein can be applied to aircraft to help reduce sun glare or direct sunlight on the pilot's eyes. Also, the systems described herein can be applied to helmets, such as motorcycle helmets. These helmets typically have one or more fold-down visors or shields, and the technology of this disclosure can be implemented into one of those visors.
While one processor 30 is illustrated, it should be understood that references to “a processor” can include one or more processors. In certain embodiments, it may be advantageous or appropriate to include multiple processors that communicate with designated hardware and sensory equipment, but nonetheless are either individually or collectively capable of performing the actions described herein.
The processes, methods, or algorithms disclosed herein can be deliverable to/implemented by a processing device, controller, or computer, which can include any existing programmable electronic control unit or dedicated electronic control unit. Similarly, the processes, methods, or algorithms can be stored as data and instructions executable by a controller or computer in many forms including, but not limited to, information permanently stored on non-writable storage media such as ROM devices and information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media. The processes, methods, or algorithms can also be implemented in a software executable object. Alternatively, the processes, methods, or algorithms can be embodied in whole or in part using suitable hardware components, such as Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software and firmware components.
While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms encompassed by the claims. The words used in the specification are words of description rather than limitation, and it is understood that various changes can be made without departing from the spirit and scope of the disclosure. As previously described, the features of various embodiments can be combined to form further embodiments of the invention that may not be explicitly described or illustrated. While various embodiments could have been described as providing advantages or being preferred over other embodiments or prior art implementations with respect to one or more desired characteristics, those of ordinary skill in the art recognize that one or more features or characteristics can be compromised to achieve desired overall system attributes, which depend on the specific application and implementation. These attributes can include, but are not limited to cost, strength, durability, life cycle cost, marketability, appearance, packaging, size, serviceability, weight, manufacturability, ease of assembly, etc. As such, to the extent any embodiments are described as less desirable than other embodiments or prior art implementations with respect to one or more characteristics, these embodiments are not outside the scope of the disclosure and can be desirable for particular applications.