The present disclosure generally relates to the field of controlling display units. More particularly, the present disclosure relates to a system and a method for adjusting a display of content for a user based on effects of light incident on the user.
Display devices provide visual presentation of data and images. The display devices may be part of vehicles, electronic appliances, and the like. The display devices play a prominent role in providing useful information to a user in many applications. In some applications, natural light affects the display of content from the display devices.
Conventional techniques teach compensating for veiling glare by increasing the brightness of the display 104. The brightness is increased based on an output provided by an ambient light sensor. The ambient light sensor is an additional component assembled in the display 104. Implementation of such sensors involves additional cost. Further, the ambient light sensor may be placed at a distance from the user 102. Hence, the ambient light sensor may not accurately measure the light 101 that is incident particularly on the eyes of the user 102. There is, therefore, a need for a system which is able to accurately compensate for veiling glare in a cost-efficient manner.
The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the disclosure and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
In an embodiment, the present disclosure discloses a system for adjusting a level of luminance of a display. The system comprises a display unit configured to display a content.
Further, the system comprises a capturing unit configured to capture an image of a front view of the display unit. Furthermore, the system comprises a computing unit coupled to the display unit and the capturing unit. The computing unit is configured to receive the image from the capturing unit. Further, the computing unit is configured to determine one or more target regions from a plurality of regions in the image. The one or more target regions are determined based on a weightage assigned to each of the plurality of regions. Furthermore, the computing unit is configured to determine effects of light incident on the one or more target regions. Thereafter, the computing unit is configured to adjust the level of luminance of the display based on the effects of the light. This aspect of the disclosure provides a system for adjusting a level of luminance of a display according to an ambient lighting surrounding the display. Therefore, compensation of veiling glare is achieved.
In an embodiment, the computing unit is configured to adjust the level of luminance of the display comprising adjusting a brightness of the display, a contrast of the display, a colour of the display, a grey level of colour components of the display or any combination thereof. This aspect of the disclosure yields a system to accurately compensate veiling glare by adjusting a level of luminance of a display according to an ambient lighting surrounding the display.
In an embodiment, the computing unit is configured to determine the one or more target regions from the plurality of regions by assigning the weightage based on a priority associated with the plurality of regions, and selecting regions from the plurality of regions with the weightage greater than a pre-defined threshold value as the one or more target regions. The level of luminance of the display is adjusted according to regions surrounding the display.
In an embodiment, the weightage assigned to a region from the plurality of regions is greater when the priority of the region is higher than other regions from the plurality of regions. In this aspect of the disclosure, the regions surrounding the display are categorised according to priority to accurately measure the ambient lighting surrounding the display.
In an embodiment, the priority is based on at least identification of a face of a user and one or more facial organs of the user.
In an embodiment, the effects of the light comprise an intensity of the light, a distribution of the light on the one or more target regions, an aperture of an iris in an eye of the user or any combination thereof.
In an embodiment, the computing unit is further configured to analyse context information related to an automobile; and determine a requirement of adjusting the level of luminance of the display based on the analysis.
In an embodiment, the context information comprises a direction of the automobile, a speed of the automobile, a time data, a location of the automobile or any combination thereof.
In an embodiment, the computing unit is further configured to adjust the level of luminance of the display based on one or more preferences of a user.
In an embodiment, the one or more preferences of the user comprises an age of the user, conditions of the user, a display mode preferred by the user or any combination thereof.
In an embodiment, the present disclosure discloses a method for adjusting a level of luminance of a display using a system as disclosed herein. The method comprises displaying a content, by the display unit. Further, the method comprises capturing, by the capturing unit, an image of a front view of the display unit. Further, the method comprises receiving, by the computing unit, the image from the capturing unit. Further, the method comprises determining, by the computing unit, one or more target regions from a plurality of regions in the image. The one or more target regions are determined based on a weightage assigned to each of the plurality of regions. Furthermore, the method comprises determining, by the computing unit, effects of light incident on the one or more target regions. Thereafter, the method comprises adjusting, by the computing unit, the level of luminance of the display based on the effects of the light. This aspect of the disclosure provides a method for adjusting a level of luminance of a display according to an ambient lighting surrounding the display. Therefore, compensation of veiling glare is achieved.
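The method steps above can be sketched end to end as a minimal illustration. Every helper name here (capture, find_regions, light_effects, set_luminance) is a hypothetical placeholder introduced for illustration only, not an interface defined by the disclosure.

```python
# Illustrative sketch of the disclosed method: capture an image of the
# display's front view, select target regions by weightage, determine the
# effects of incident light, and adjust the display luminance accordingly.
# All four helpers are assumed placeholders, not APIs from the disclosure.

def adjust_display(capture, find_regions, light_effects, set_luminance):
    """Run one pass of the glare-compensation method."""
    image = capture()                        # image of the display's front view
    targets = find_regions(image)            # weightage-based target regions
    effects = light_effects(image, targets)  # e.g. intensity, distribution
    set_luminance(effects)                   # compensate the veiling glare
    return effects
```

In use, each placeholder would be bound to the corresponding unit or module (capturing unit, region determination module, and so on).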
In an embodiment, adjusting the level of luminance of the display comprises adjusting a brightness of the display, a contrast of the display, a colour of the display, a grey level of colour components of the display or any combination thereof. This aspect of the disclosure yields a method of adjusting the level of luminance of the display to accurately compensate veiling glare.
In an embodiment, determining the one or more target regions from the plurality of regions comprises assigning the weightage based on a priority associated with the plurality of regions, and selecting regions from the plurality of regions with the weightage greater than a pre-defined threshold value as the one or more target regions.
In an embodiment, the weightage assigned to a region from the plurality of regions is greater when the priority of the region is higher than other regions from the plurality of regions.
In an embodiment, the priority is based on at least identification of a face of a user and one or more facial organs of the user.
In an embodiment, the effects of the light comprise an intensity of the light, a distribution of the light on the one or more target regions, an aperture of an iris in an eye of a user or any combination thereof.
In an embodiment, the method further comprises analysing context information related to an automobile, and determining a requirement of adjusting the level of luminance of the display based on the analysis, by the computing unit.
In an embodiment, the context information comprises direction of the automobile, speed of the automobile, a time data, a location of the automobile or any combination thereof.
In an embodiment, adjusting level of luminance of the display is based on one or more preferences of a user.
In an embodiment, the one or more preferences of the user comprise an age of the user, conditions of the user, a display mode preferred by the user or any combination thereof.
As used in the present disclosure, the term “display unit” is a unit configured to display a content to a user. For example, the display unit may be implemented in a vehicle to display the content as map, contact list, fuel indications, and the like. In another example, the display unit may be associated with a television, configured to display news, entertainment content, and the like.
The term “capturing unit” may refer to an imaging device or camera configured to capture the image in front view of the display unit. For example, when the display unit is associated with a vehicle, the capturing unit may be installed in the interior of the vehicle. In another example, when the display unit is associated with a television, the capturing unit may be installed in the household environment.
The term “image” may be defined as a picture of an environment in front of the display unit. For example, the image may be a picture of a driver and a passenger sitting next to the driver.
The term “weightage” may be defined as a value assigned to each region from a plurality of regions in the image captured by the capturing unit, based on the effects of light.
The term “one or more target regions” may refer to one or more regions in the image with greater weightage than other regions in the image. The weightage is assigned based on the effects of the light, to measure illuminance at required regions in the image. The display of the content is adjusted based on the effects of the light on the one or more target regions.
The term “effects of the light” may refer to an impact of the light on the user which may cause a change in perception of the displayed content to the user. The effects of the light may comprise an intensity of the light, a distribution of the light on the one or more target regions, an aperture of an iris in an eye of the user or any combination thereof.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
The novel features and characteristics of the disclosure are set forth in the appended claims. The disclosure itself, however, as well as a preferred mode of use, further objectives, and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying figures. One or more embodiments are now described, by way of example only, with reference to the accompanying figures wherein like reference numerals represent like elements and in which:
It should be appreciated by those skilled in the art that any block diagram herein represents conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.
In the present document, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or implementation of the present subject matter described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises . . . a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.
Embodiments of the present disclosure relate to a system for adjusting a level of luminance of a display. A display unit is configured to display a content to the user. A perception of the displayed content may change for the user, due to incidence of ambient light on the user. The system aims to overcome the problem of change of perception of the displayed content to the user. A capturing unit is configured to capture an image of a front view of the display unit. A computing unit is configured to determine target regions in the image. The target regions are determined based on a weightage assigned to regions in the image. Further, the computing unit is configured to determine effects of the light incident on the target regions. Further, the computing unit is configured to adjust the level of luminance of the display based on the effects of the light. Since the weightage of the target regions is considered in the present disclosure, illuminance is measured at the required regions and is accurate. Further, the present disclosure uses image processing techniques for determining the effects of the light rather than light sensors. Hence, accuracy in determining the effects of the light is increased. Also, the additional cost of the light sensors is avoided.
Reference is now made to
The computing unit 207 may include Central Processing Units 209 (also referred to as “CPUs” or “one or more processors 209”), an Input/Output (I/O) interface 210, and a memory 211. The memory 211 may be communicatively coupled to the one or more processors 209. The one or more processors 209 may comprise at least one data processor for executing program components for executing user or system-generated requests. The memory 211 stores instructions, executable by the one or more processors 209, which, on execution, may cause the one or more processors 209 to adjust the level of luminance of the display of the content for the user 202. In an embodiment, the memory 211 may include one or more modules 213 and data 212. The one or more modules 213 may be configured to perform the steps of the present disclosure using the data 212, to adjust the level of luminance of the display of the content for the user 202. In an embodiment, each of the one or more modules 213 may be a hardware unit which may be outside the memory 211 and coupled with the computing unit 207. As used herein, the term modules 213 refers to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a Field-Programmable Gate Array (FPGA), a Programmable System-on-Chip (PSoC), a combinational logic circuit, and/or other suitable components that provide the described functionality. The one or more modules 213, when configured with the described functionality defined in the present disclosure, will result in novel hardware. Further, the I/O interface 210 is coupled with the one or more processors 209, through which an input signal or/and an output signal is communicated. For example, the computing unit 207 may receive the image 208 from the capturing unit 203 via the I/O interface 210.
The computing unit 207 may communicate with the display unit 201 via the I/O interface 210 to provide instruction for adjusting the level of luminance of the display of the content for the user 202. In an embodiment, the computing unit 207, to adjust the level of luminance of the display of the content for the user 202, may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a smartphone, a tablet, e-book readers, a server, a network server, a cloud-based server and the like. An internal architecture 300 of the computing unit 207 to adjust the level of luminance of the display of the content for the user 202 is illustrated using
In one implementation, the modules 213 may include, for example, an input module 307, a region determination module 308, a light effects determination module 309, a display adjust module 310, an analysis module 311, and other modules 312. It will be appreciated that such aforementioned modules 213 may be represented as a single module or a combination of different modules. In one implementation, the data 212 may include, for example, input data 301, weightage data 302, region data 303, light effects data 304, display data 305, and other data 306.
In an embodiment, the input module 307 may be configured to receive the image 208 of a front view of the display unit 201. The capturing unit 203 may be configured to capture the image 208. The input module 307 may be coupled with the capturing unit 203. In an example, when the display unit 201 is associated with a vehicle, the front view of the display unit 201 may comprise a driver, a person next to the driver, a seatbelt, and the like. In another example, when the display unit 201 is associated with a television, the front view of the display unit 201 may comprise one or more users, one or more objects, and the like. In such embodiments, the input module 307 may be configured to receive one or more images of the front view of the display unit 201. In an embodiment, the input module 307 may receive the image 208 at pre-defined time intervals. In another embodiment, the input module 307 may receive the image 208 upon receiving an indication from the user 202. For example, the user 202 may provide the indication to the input module 307 when a readability of the content displayed on the display unit 201 is reduced, due to incidence of the light 204. The image 208 may be stored as the input data 301 in the memory 211. In an embodiment, the input module 307 may pre-process the image 208. Pre-processing may include, but is not limited to, compressing the image 208, removing noise, normalizing, increasing resolution, changing format, and the like.
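One of the pre-processing steps named above, normalization, can be illustrated with a minimal sketch. Treating the image as a flat list of 0-255 grey values is an assumption for illustration; the disclosure does not specify a pixel format.

```python
# Illustrative sketch (an assumption, not the disclosure's implementation):
# normalising pixel intensities of a captured image by linearly stretching
# them to the full 0-255 range, one possible pre-processing step.

def normalise(pixels):
    """Linearly stretch a flat list of 0-255 grey values to span 0-255."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return list(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]
```

Such a stretch makes subsequent histogram-based measurements less sensitive to overall exposure of the capturing unit.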
In an embodiment, the region determination module 308 may be configured to receive the image 208 from the input module 307. The region determination module 308 may determine the one or more target regions from the plurality of regions in the image 208. The one or more target regions may be determined based on a weightage assigned to each of the plurality of regions. The weightage may be assigned based on a priority associated with the plurality of regions. The priority associated with the plurality of regions may be based on effects of the light 204 on the plurality of regions. The priority may be higher when the effects of the light 204 are greater. For example, an eye of the user 202 is highly affected by incident light. Hence, the region containing the eye may have a higher priority. The priority may be lower when the effects of the light 204 are lower. For example, the region containing a seatbelt worn by the user 202 may have a lower priority. Further, the region determination module 308 may select regions from the plurality of regions with the weightage greater than a pre-defined threshold value as the one or more target regions. The weightage assigned to a region from the plurality of regions is greater when the priority of the region is higher than other regions from the plurality of regions. The priority may be based on at least identification of the face of the user 202 and one or more facial organs of the user 202. For example, the plurality of regions in the image 208 may comprise a body of the user 202, a seatbelt of a vehicle, and the like. The light 204 may affect the face of the user 202. More particularly, the light 204 may affect the vision of the user 202. The effect may change a perception of the content displayed on the display. For example, the content displayed on the display unit 201 may be text. The user 202 may perceive characters in the text differently due to the effect of the light 204 on the vision of the user 202.
The eyes of the user 202 may have a priority higher than the seatbelt. The weightage assigned to a region in the image 208 associated with the eyes of the user 202 may be greater than weightages of other regions from the plurality of regions. Hence, the one or more target regions may be the eyes of the user 202. The weightage assigned to the plurality of regions may be stored as the weightage data 302 in the memory 211. The one or more target regions may be stored as the region data 303 in the memory 211.
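The weightage-based selection described above can be sketched as follows. The region labels, the priority scores, and the threshold value are illustrative assumptions; the disclosure leaves all of them as design parameters.

```python
# Hypothetical sketch of weightage-based target-region selection. The
# labels, priority scores, and threshold are assumed values for
# illustration: eyes are most affected by incident light, a seatbelt least.

PRIORITY = {"eyes": 3, "face": 2, "seatbelt": 1}   # higher = more affected

def target_regions(regions, threshold=1.5):
    """Assign a weightage from each region's priority; keep regions above the threshold."""
    weights = {r: PRIORITY.get(r, 0) for r in regions}
    return [r for r, w in weights.items() if w > threshold]
```

With this choice of scores and threshold, the eyes and face are selected as target regions while the seatbelt is discarded, matching the priority ordering described in the text.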
In an embodiment, the light effects determination module 309 may be configured to receive the one or more target regions in the image 208 from the region determination module 308. Further, the light effects determination module 309 may be configured to determine effects of the light 204 incident on the one or more target regions. The effects of the light 204 may comprise at least one of, an intensity of the light 204, distribution of the light 204 on the one or more target regions, and aperture of iris in an eye of the user 202. The light effects determination module 309 may use image processing techniques to determine the effects of the light 204 on the one or more target regions. For example, an intensity of the light 204 may be determined from a histogram of the image 208. A person skilled in the art will appreciate that any known image processing techniques may be used to determine each of the effects of the light 204. The effects of the light 204 incident on the one or more target regions may be stored as the light effects data 304 in the memory 211.
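The histogram-based intensity measurement mentioned above can be sketched minimally. The 256-bin layout (bin i counts pixels of grey value i) and the mean-grey-level estimator are assumptions; the disclosure only states that intensity may be determined from a histogram.

```python
# Minimal sketch: estimating the incident-light intensity in a target
# region as the mean grey level of its 256-bin histogram. The bin layout
# and estimator are illustrative assumptions.

def mean_intensity(histogram):
    """Mean grey level from a 256-bin histogram (bin i counts pixels of value i)."""
    total = sum(histogram)
    if total == 0:
        return 0.0
    return sum(i * n for i, n in enumerate(histogram)) / total
```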
In an embodiment, the display adjust module 310 may be configured to receive the light effects data 304 from the light effects determination module 309. The display adjust module 310 may be configured to adjust the level of luminance of the content displayed to the user 202 based on the effects of the light 204. The display adjust module 310 may be coupled with the display unit 201. The display adjust module 310 may be configured to adjust a brightness of the display, a contrast of the display, a colour of the display, a grey level of colour components of the display, and the like. A person skilled in the art will appreciate that properties of the display other than the above-mentioned properties may be adjusted based on the effects of the light 204. The display adjust module 310 may be configured to adjust the level of luminance of the display based on one or more preferences of the user 202 along with the effects of the light 204. The one or more preferences of the user 202 may comprise an age of the user 202, conditions of the user 202, a display mode preferred by the user 202 or any combination thereof. For example, the readability of the content on the display may reduce with increased age of the user 202. The display may be adjusted to increase the brightness of the display. The conditions of the user 202 may comprise medical conditions such as cataract, corneal edema, and the like. The one or more preferences may be stored in a database 205 shown in
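Combining the measured light intensity with a user preference such as age can be sketched as below. The scaling constant, the age cut-off, the brightness boost, and the 0-100 brightness scale are all illustrative assumptions, not values from the disclosure.

```python
# Illustrative sketch: choosing a display brightness from the measured
# ambient intensity and the user's age. The constants (base level, /4
# scaling, age-60 cut-off, +10 boost, 0-100 scale) are assumptions.

def adjust_brightness(intensity, age, base=50):
    """Raise brightness with ambient intensity; boost further for older users."""
    level = base + intensity // 4          # brighter ambient light -> brighter display
    if age >= 60:                          # readability may reduce with age
        level += 10
    return min(level, 100)                 # clamp to an assumed 0-100 scale
```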
In an embodiment, the system 206 may comprise the analysis module 311, when the system 206 is implemented in an automobile. The analysis module 311 may be configured to analyse context information related to the automobile. The term “context information” may be defined as information related to the automobile required to determine a requirement of adjusting the level of luminance of the display. The context information may comprise at least one of, a direction of the automobile, a speed of the automobile, a time data, and a location of the automobile. The term “time data” may refer to a time of a day, a timestamp, and the like. For example, the time of a day may be morning, afternoon, evening, and night. The timestamp may be 11:30. In another example, the timestamp may be 22:00. Further, the analysis module 311 may be configured to determine a requirement of adjusting the level of luminance of the display based on the analysis. For example, when the speed of the automobile is greater than a threshold value, the effects of the light 204 may vary significantly. The analysis module 311 may determine that adjusting the level of luminance of the display is not required, since the effects of the light 204 may be dynamically varying based on the current environment of the automobile. Accordingly, the light 204 may not affect the user 202. In another example, when the time of a day is night, the analysis module 311 may determine that adjusting the level of luminance of the display is not required, since the display may not be affected due to the light 204 from the sun. Data related to the analysis may be stored as analysis data (not shown in the Figure) in the memory 211.
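The two context checks described in the examples above (high speed, night time) can be sketched as a single decision function. The speed threshold and the night hours are illustrative assumptions; the disclosure leaves both unspecified.

```python
# Hypothetical sketch of the analysis module's decision: skip adjustment
# when the vehicle moves fast enough that glare varies too dynamically, or
# at night when there is no sunlight to compensate. The 90 km/h threshold
# and 20:00-06:00 night window are assumed values.

def adjustment_required(speed_kmh, hour, speed_limit=90):
    """Return True when adjusting the display luminance is worthwhile."""
    if speed_kmh > speed_limit:        # glare varies too dynamically
        return False
    if hour >= 20 or hour < 6:         # night: no sunlight to compensate
        return False
    return True
```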
The other data 306 may store data, including temporary data and temporary files, generated by the one or more modules 213 for performing the various functions of the computing unit 207. The one or more modules 213 may also include the other modules 312 to perform various miscellaneous functionalities of the computing unit 207. The other data 306 may be stored in the memory 211. It will be appreciated that the one or more modules 213 may be represented as a single module or a combination of different modules.
The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method may be implemented in any suitable hardware, software, firmware, or combination thereof.
At step 401, the display unit 201 displays the content to the user 202. The content displayed to the user 202 may be data such as text, image, video, and the like. In an example, when the display unit 201 may be associated with a vehicle, the content displayed to the user 202 may be information such as map, rear obstacle image, and the like. In another example, when the display unit 201 may be associated with a television, the content displayed to the user 202 may be entertainment, news, sports, and the like. A person skilled in the art will appreciate that the content displayed may be other than above-mentioned content displayed to the user 202.
At step 402, the capturing unit 203 captures the image 208 of the front view of the display unit 201. In an embodiment, the capturing unit 203 may capture an entire front view of the display unit 201. In another embodiment, the capturing unit 203 may capture one or more images to cover the entire front view. For example, the one or more images may be captured to capture a face of driver, and a person next to the driver.
At step 403, the computing unit 207 receives the image 208 from the capturing unit 203. The computing unit 207 may be coupled with the capturing unit 203. The computing unit 207 and the capturing unit 203 may communicate over a communication network. In an example, when the display unit 201 is associated with a vehicle, the front view of the display unit 201 may comprise a driver, a person next to the driver, a seatbelt worn by the user 202, and the like. In another example, when the display unit 201 is associated with a television, the front view of the display unit 201 may comprise one or more users, one or more objects, and the like. In such embodiments, the computing unit 207 may be configured to receive one or more images of the front view of the display unit 201. In an embodiment, the computing unit 207 may receive the image 208 at pre-defined time intervals. In another embodiment, the computing unit 207 may receive the image 208 upon receiving an indication from the user 202. For example, the user 202 may provide the indication to the computing unit 207 when a readability of the content displayed on the display unit 201 is reduced, due to incidence of the light 204.
At step 404, the computing unit 207 determines the one or more target regions from the plurality of regions in the image 208. The one or more target regions may be determined based on a weightage assigned to each of the plurality of regions. The computing unit 207 may determine the one or more target regions from the plurality of regions by assigning the weightage based on a priority associated with the plurality of regions. Further, the computing unit 207 may select regions from the plurality of regions with the weightage greater than a pre-defined threshold value as the one or more target regions. The weightage assigned to a region from the plurality of regions is greater when the priority of the region is higher than other regions from the plurality of regions. The priority may be based on at least identification of the face of the user 202 and one or more facial organs of the user 202. A person skilled in the art will appreciate that any image processing technique may be used to determine the one or more target regions. Referring to example 500
At step 405, the computing unit 207 determines the effects of the light 204 incident on the one or more target regions. The effects of the light 204 may comprise the intensity of the light 204, the distribution of the light 204 on the one or more target regions, and the aperture of the iris in the eye of the user 202. The intensity of the light 204 may be an amount of energy transferred to the one or more target regions. The distribution of the light 204 may be a projected pattern of the light 204 on the one or more target regions. The aperture of the iris in the eye of the user 202 indicates dilation of the pupil due to penetration of the light 204 through the lens onto the retina of the eye. The computing unit 207 may use image processing techniques to determine each of the effects of the light 204 on the one or more target regions. For example, the intensity of the light 204 and the distribution of the light 204 may be determined from the histogram of the image 208. Computer vision techniques may be used to determine the aperture of the iris in the eye of the user 202. A person skilled in the art will appreciate that any known techniques may be used to determine each of the effects of the light 204.
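A very loose sketch of estimating the iris aperture is given below. Measuring the fraction of dark pixels in an eye-region crop is an assumed, crude stand-in for the computer-vision pupil detection the disclosure alludes to; the dark-pixel threshold is likewise an assumption.

```python
# Loose illustrative sketch (not the disclosure's method): approximating
# the pupil/iris aperture from an eye-region crop as the fraction of
# pixels darker than an assumed threshold. A dilated pupil yields a
# larger dark fraction.

def pupil_fraction(eye_pixels, dark_threshold=40):
    """Fraction of eye-region grey values dark enough to belong to the pupil."""
    if not eye_pixels:
        return 0.0
    dark = sum(1 for p in eye_pixels if p < dark_threshold)
    return dark / len(eye_pixels)
```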
At step 406, the computing unit 207 adjusts the display of the content for the user 202 based on the effects of the light 204. The computing unit 207 may be coupled with the display unit 201. The computing unit 207 may be configured to adjust at least one of a brightness of the display, a contrast of the display, a colour of the display, a grey level of colour components of the display, and the like. For example, when the intensity of the light 204 is greater than a pre-determined threshold value, the brightness of the display may be increased. Further, the computing unit 207 may be configured to adjust the level of the luminance of the display based on one or more preferences of the user 202. The one or more preferences of the user 202 may comprise at least one of, an age of the user 202, conditions of the user 202, and a display mode preferred by the user 202. For example, the readability of the content on the display may reduce with increased age of the user 202. The display may be adjusted to increase the brightness of the display. The conditions of the user 202 may comprise medical conditions such as cataract, corneal edema, and the like. The one or more preferences may be stored in the database 205. The computing unit 207 may retrieve the one or more user preferences from the database 205. The computing unit 207 may be configured to adjust the level of the luminance of the display of the content such that readability, visibility, and the like, of the content may be increased. Further, the computing unit 207 may be configured to adjust the level of the luminance of the display of the content based on other information in the database 205, when the display is associated with the vehicle. The other information may comprise location and direction of the vehicle, travelling speed, time information, and the like.
The processor 602 may be disposed in communication with one or more input/output (I/O) devices (not shown) via I/O interface 601. The I/O interface 601 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE (Institute of Electrical and Electronics Engineers)-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), Radio Frequency (RF) antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
Using the I/O interface 601, the computer system 600 may communicate with one or more I/O devices. For example, the input device 610 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output device 611 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma display panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
The computer system 600 is connected to the capturing unit 612 and the display unit 613 through a communication network 609. The processor 602 may be disposed in communication with the communication network 609 via a network interface 603. The network interface 603 may communicate with the communication network 609. The network interface 603 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
The communication network 609 includes, but is not limited to, a direct interconnection, an e-commerce network, a peer-to-peer (P2P) network, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, Wi-Fi, and such. The communication network 609 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the communication network 609 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
In some embodiments, the processor 602 may be disposed in communication with a memory 605 (e.g., RAM, ROM, etc., not shown in the Figure).
The memory 605 may store a collection of program or database components, including, without limitation, user interface 606, an operating system 607, web browser 608 etc. In some embodiments, computer system 600 may store user/application data, such as, the data, variables, records, etc., as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
The operating system 607 may facilitate resource management and operation of the computer system 600. Examples of operating systems include, without limitation, APPLE MACINTOSH® OS X, UNIX®, UNIX-like system distributions (E.G., BERKELEY SOFTWARE DISTRIBUTION™ (BSD), FREEBSD™, NETBSD™, OPENBSD™, etc.), LINUX DISTRIBUTIONS™ (E.G., RED HAT™, UBUNTU™, KUBUNTU™, etc.), IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8, 10 etc.), APPLE® IOS™, GOOGLE® ANDROID™, BLACKBERRY® OS, or the like.
In some embodiments, the computer system 600 may implement the web browser 608 stored program component. The web browser 608 may be a hypertext viewing application, for example MICROSOFT® INTERNET EXPLORER™, GOOGLE® CHROME™, MOZILLA® FIREFOX™, APPLE® SAFARI™, etc. Secure web browsing may be provided using Secure Hypertext Transport Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc. Web browsers 608 may utilize facilities such as AJAX™, DHTML™, ADOBE® FLASH™, JAVASCRIPT™, JAVA™, Application Programming Interfaces (APIs), etc. In some embodiments, the computer system 600 may implement a mail server (not shown in Figure) stored program component. The mail server may be an Internet mail server such as Microsoft Exchange, or the like. The mail server may utilize facilities such as ASP™, ACTIVEX™, ANSI™ C++/C#, MICROSOFT® .NET™, CGI SCRIPTS™, JAVA™, JAVASCRIPT™, PERL™, PHP™, PYTHON™, WEBOBJECTS™, etc. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), MICROSOFT® exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system 600 may implement a mail client stored program component. The mail client (not shown in Figure) may be a mail viewing application, such as APPLE® MAIL™, MICROSOFT® ENTOURAGE™, MICROSOFT® OUTLOOK™, MOZILLA® THUNDERBIRD™, etc.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc Read-Only Memories (CD-ROMs), Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
Embodiments of the present disclosure determine the one or more target regions based on a weightage. Since the weightage of each of the plurality of regions is considered in the present disclosure, the illuminance is measured at the required regions and the measurement is accurate.
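As a non-limiting illustration, a weightage-based selection of target regions may be sketched as follows. The region labels, the weights, and the number of regions selected are illustrative assumptions only:

```python
def target_regions(weights: dict, top_k: int = 2) -> list:
    """Select target regions from the weighted regions of the image.

    `weights` maps a region label to its assigned weightage; e.g. a
    region containing the user's eyes may carry the highest weight.
    All labels and weight values here are illustrative assumptions.
    """
    # Rank regions by descending weightage and keep the top_k regions.
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    return [label for label, _ in ranked[:top_k]]
```

The effects of the light would then be determined only on the regions returned by such a selection.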
Further, the present disclosure considers the user preferences to adjust the level of luminance of the display. Since the accuracy of adjusting the level of luminance of the display is increased, the content is sharp and visible. This contributes to safe driving when the display is associated with a vehicle. Further, re-using the driver monitoring camera saves the additional cost of installing a camera.
Further, the present disclosure uses image processing techniques for determining the effects of the light rather than using light sensors. Advantageously, the additional cost of the light sensors is avoided. The effects of light measured using the light sensors may not be accurate, since the light sensors may be placed at a distance from the user. The image processing techniques may increase accuracy in determining the effects of the light. Consequently, the system and method disclosed herein achieve the purpose of adjusting a level of luminance of a display according to the ambient lighting surrounding the display. Therefore, compensation of veiling glare is achieved.
Furthermore, the present disclosure determines a requirement of adjusting the level of luminance of the display based on an analysis of context information. The level of luminance of the display may be adjusted based on the requirement surrounding the display unit, which may vary from situation to situation. This is particularly practical and useful in automotive applications, where the ambient lighting is subject to change depending on whether the motor vehicle is in use during day time, subject to sunlight, or during night time, subject to street lights. Further, since the display is adjusted only when there is a requirement, power of the system may be saved.
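Such a requirement check may be sketched, purely by way of illustration, as a comparison of the current ambient intensity against the intensity at the last adjustment; the tolerance value below is an assumption, not part of the disclosure:

```python
def adjustment_required(ambient_intensity: float,
                        last_intensity: float,
                        tolerance: float = 15.0) -> bool:
    """Decide whether the luminance needs re-adjustment.

    The display is adjusted only when the ambient light has changed
    noticeably since the last adjustment (e.g. driving from daylight
    into a tunnel), which saves power. The tolerance of 15 intensity
    units is an illustrative assumption.
    """
    return abs(ambient_intensity - last_intensity) > tolerance
```

Richer context information (location, direction, time of day) could feed into the same decision in a production system.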
The terms “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean “one or more (but not all) embodiments of the invention(s)” unless expressly specified otherwise.
The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise.
The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the disclosure need not include the device itself.
The illustrated operations of
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the disclosure of the embodiments of the disclosure is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.
Number | Date | Country | Kind |
---|---|---|---|
2104606.5 | Mar 2021 | GB | national |
This U.S. patent application claims the benefit of PCT patent application No. PCT/EP2021/087357, filed Dec. 22, 2021, which claims the benefit of United Kingdom patent application No. GB 2104606.5, filed Mar. 31, 2021, both of which are hereby incorporated by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2021/087357 | 12/22/2021 | WO |