Electronic devices are used by millions of people daily to carry out business, personal, and social operations. Examples of electronic devices include desktop computers, laptop computers, all-in-one devices, tablets, smartphones, wearable smart devices, and gaming systems to name a few. Users execute electronic device functionality and communicate with other users and entities via user interfaces of the electronic devices.
The accompanying drawings illustrate various examples of the principles described herein and are part of the specification. The illustrated examples are given merely for illustration, and do not limit the scope of the claims.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings.
Electronic devices have become commonplace in today's society, and it is not uncommon for an individual to interact with multiple electronic devices on a daily basis. Information is presented to the user, and in some examples collected from the user, via a user interface. In other words, the user interface of an electronic device is the gateway through which the user interacts with the electronic device and with other users through the electronic device. As electronic devices become more ubiquitous, an electronic device that provides a customized presentation of information may enhance its use throughout society.
For example, some users may find a user interface difficult to navigate, which difficulty may prevent the electronic device from providing its intended function, i.e., digital communication and/or digital interaction. That is, an inefficient user interface may be a hindrance to such communication, rather than being a gateway to digital communication. For example, while a particular subset of users may be comfortable with a variety of interfaces, elderly users may not be able to access the full complement of electronic functionality on account of the user interface being inefficient. A similar situation may arise for small children.
As such, the present specification describes a multi-user adaptive interface that may accommodate a diversity of end users by changing the user interface elements automatically based on an automatic detection of the user's age. Specifically, the layout of components on the user interface as well as the size and color of visual assets may be updated based on characteristics of an end user.
Accordingly, the present specification uses machine-learning techniques to detect users and associate them with an age-based group. The age-based group of the user triggers the automatic adaptation, without additional user intervention, of the graphical user interface (GUI) based on the estimated age of a user in front of the electronic device. As such, the present electronic devices and methods may produce dynamic interfaces that can adjust layouts, component disposition, sizes, colors, and other GUI-related components based on the detected user age group.
Specifically, the present specification describes an electronic device. The electronic device includes a camera to capture an image of a user facing the electronic device. An image analyzer of the electronic device determines a characteristic of the user from the image of the user. The electronic device also includes a presentation controller. The presentation controller 1) selects a presentation characteristic based on a determined characteristic of the user and 2) alters a display of the electronic device based on a selected presentation characteristic.
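The cooperation of the camera, image analyzer, and presentation controller described above can be sketched in code. This is an illustrative sketch only; all class names, method names, and the simplified two-way presentation policy are assumptions, not part of the specification.

```python
# Hypothetical sketch of the described device components; names are illustrative.

class ImageAnalyzer:
    """Determines a characteristic of the user (here, age) from an image."""

    def determine_characteristic(self, image):
        # A real analyzer would run a trained model on the image pixels.
        # Here the image dict is assumed to carry a precomputed estimate.
        return image.get("estimated_age")


class PresentationController:
    """Selects a presentation characteristic and alters the display."""

    def select_characteristic(self, age):
        # Simplified two-way policy for illustration only.
        return "large_font" if age is not None and age >= 64 else "default"

    def alter_display(self, display, characteristic):
        display["presentation"] = characteristic
        return display


def adapt_interface(image, display):
    """End-to-end flow: analyze the image, then alter the display."""
    analyzer = ImageAnalyzer()
    controller = PresentationController()
    age = analyzer.determine_characteristic(image)
    choice = controller.select_characteristic(age)
    return controller.alter_display(display, choice)
```

The split into an analyzer and a controller mirrors the two responsibilities named in the description: determining the user characteristic, and selecting and applying the presentation characteristic.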
The present specification also describes a method. According to the method, a video stream of a user facing an electronic device is captured. The video stream is biometrically analyzed to estimate an age group of the user. Based on the estimated age group of the user, presentation characteristics of the user interface of the electronic device are selected and the user interface is altered based on selected presentation characteristics.
The present specification also describes a non-transitory machine-readable storage medium encoded with instructions executable by a processor of an electronic device. The instructions, when executed by the processor, cause the processor to capture an image of a user facing the electronic device and biometrically analyze, via a machine-learning engine, the image to estimate an age of the user. The instructions, when executed by the processor, cause the processor to classify the user into an age group based on an estimated age of the user and select, based on a determined age group of the user, presentation characteristics of a user interface of the electronic device. The instructions are also executable by the processor to alter the user interface of the electronic device based on selected presentation characteristics of the user interface.
In summary, such a system, method, and machine-readable storage medium may, for example 1) provide a user interface tailored for a user based on characteristics of that particular user; 2) adjust the user interface automatically and without user intervention; and 3) automatically detect the user characteristics which trigger the update to the user interface. However, it is contemplated that the devices disclosed herein may address other matters and deficiencies in a number of technical areas, for example.
As used in the present specification and in the appended claims, the term “a number of” or similar language is meant to be understood broadly as any positive number including 1 to infinity.
Turning now to the figures,
In some examples, the image may be captured during biometric authentication of the user. That is, some electronic devices (100) may rely on the camera (102) and/or facial recognition to unlock an electronic device (100). In this example, this same image that is relied on to unlock the electronic device (100) may be used by the image analyzer (104) and presentation controller (106) to 1) estimate an age group of the user and 2) select presentation characteristics, respectively.
The electronic device (100) also includes a component to determine the characteristic of the user from the image of the user. Specifically, the image analyzer (104) may include hardware components such as a processor and/or memory that analyze the image to determine the user characteristic. In a particular example, the user characteristic that is determined is an age of the user. Features that may be indicative of age include the size of the face, facial feature shape, wrinkles, face contour, and facial feature distribution on the face. As such, the image analyzer (104) may analyze aspects and features of the image to estimate an age of the user. Different users have different facial features, and some of those facial features may be indicative of an age of the user. For example, the position and relative spacing of facial features such as the eyes, nose, ears, and teeth may be unique to a user, and size and/or spacing ranges of these features may be indicative of the age of the user. For example, young children may have a smaller head size and may have different eye spacing relative to the overall head size as compared to adults. As such, the image analyzer (104) may capture these facial measurements and use them to estimate an age of the user. For example, each captured facial measurement may be indicative of a predicted age for the user depicted in the image. In an example, the image analyzer (104) may average the individually collected age predictions.
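The averaging of per-measurement age predictions described above might look like the following sketch; the measurement names and the idea that each measurement yields a standalone age prediction are illustrative assumptions.

```python
def estimate_age(predictions):
    """Average the age predicted by each individual facial measurement.

    `predictions` maps a measurement name (e.g. eye spacing relative to
    head size) to the age that measurement alone would suggest.
    Returns None when no measurements are available.
    """
    if not predictions:
        return None
    return sum(predictions.values()) / len(predictions)
```

For example, predictions of 8, 6, and 10 years from three different measurements would average to an estimated age of 8.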
Note that in some examples, the image analyzer (104) in addition to providing an estimate of the age of a user, may also track the face of the user as it moves. That is, a user may not be stationary in front of the electronic device (100). In this example, the image analyzer (104) may track the movement of the user all while collecting data by which the age of the user may be estimated. Note that such estimations may not be precise, but may provide an approximation of the age of the user.
In an example, the image analyzer (104) is a machine-learning image analyzer (104) that estimates the age group of the user based on a training set of data. That is, the measurements and characteristics of a user that are indicative of age may be determined based on measurements taken from a training set of data of users of a known age. For example, deep learning and convolutional neural networks (CNNs) may identify discriminative features directly from the pixels of an image. In general, the image analyzer (104) may detect a face and perform image preprocessing, such as landmark detection and facial alignment, followed by feature extraction, in which relevant features are extracted from the input image, and age classification.
For example, the input to the machine-learning image analyzer (104) may be the image of the user. As described above, there may be a relationship between the age of a user and the ability of the user to access the complement of services provided by the electronic device (100). As such, the machine-learning image analyzer (104) receives an input image and processes the image. Specifically, the machine-learning image analyzer (104) may analyze the pixels of the image or video stream to identify certain features of the user depicted in the image or video stream. Doing so, the image analyzer (104) may extract facial features, such as eyes, ears, nose, mouth, and other facial features from the image. The image analyzer (104) may also determine the relative position and/or distance between different facial features. Characteristics of these features, such as the size, shape, position, and/or color may be compared against a training set of data to estimate the age of the user. That is, a training set may include measurements of these characteristics as they relate to users of a known age.
As such, the measured features of a user facing the electronic device (100) may be compared against measurements from the training set. A similarity between the measurements of the user and measurements from the training set may indicate that the user facing the electronic device (100) is of the same approximate age as an individual in the training set with similar measurements.
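One simple way to realize the comparison described above is a nearest-neighbor lookup over the training measurements. The feature-vector representation and the Euclidean distance metric here are assumptions for illustration; a deployed system would more likely use a trained model as described earlier.

```python
def nearest_age(user_features, training_set):
    """Return the known age of the training sample whose facial
    measurements are closest (Euclidean distance) to the user's.

    `training_set` is a list of (feature_vector, known_age) pairs.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_age, best_dist = None, float("inf")
    for features, age in training_set:
        d = distance(user_features, features)
        if d < best_dist:
            best_age, best_dist = age, d
    return best_age
```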
To facilitate the collection of data to supplement the training set, the camera (102) may be activated during a calibration period. For example, a user may input their age to a system and the camera (102), over a calibration period of time, may capture images of the user such that measurements may be taken by which the age of other users may be estimated. In an example, the calibration period may be a period when the camera (102) is not targeted by an application executing on the electronic device (100). That is, many applications such as video conferencing applications may activate, or target, the camera (102) in executing its intended function. Even when the camera (102) is not targeted by an application, the camera (102) may be activated to capture images whereby the training set of data may be updated. In one particular example, the image analyzer (104) includes an age and gender recognition model implemented as a multitask network, which employs a feature extraction layer and an age regression layer.
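Supplementing the training set during a calibration period, while skipping frames in which an application targets the camera, could be sketched as follows. The frame format, the targeting check, and the sample representation are assumptions.

```python
def collect_calibration_samples(camera_frames, reported_age, camera_is_targeted):
    """During calibration, capture frames only while no application
    targets the camera, pairing each frame with the user's reported age
    so the training set can later be updated with labeled samples."""
    samples = []
    for frame in camera_frames:
        if camera_is_targeted(frame):
            continue  # an application currently owns the camera; skip this frame
        samples.append((frame, reported_age))
    return samples
```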
The electronic device (100) may also include a presentation controller (106). In general, the presentation controller (106) manages the presentation of visual elements on the display of an electronic device (100). In some examples, this may be based on metadata and/or a database that indicates what visual elements are to be presented and how those visual elements are to be presented. The presentation controller (106) may select a presentation characteristic based on a determined characteristic of the user and may alter a display of the electronic device (100) based on a selected presentation characteristic.
That is, as described above, visual information may be presented in any number of ways and user interfaces have different presentation characteristics. The present electronic device (100) automatically updates these presentation characteristics based on an automatically detected user age. That is, rather than relying on user input to update some characteristics, the present electronic device (100) does so automatically and may update a variety of presentation characteristics at the same time.
Examples of presentation characteristics that may be adjusted include a color scheme for the display. For example, younger users may respond better to a brighter display with more colors, whereas an older user may prefer a more muted palette. As another example, the user input elements to be presented on the display may be altered. For example, a user interface to be used for a user who is able to read may include buttons with text such as “home,” “next page,” and “previous page.” A user interface to be used for a younger user may replace this text with graphical indications. In another example, the font size, of the user input elements or others, may be enlarged. In yet another example, the font type may be adjusted. For example, for users learning to read, an upper-case font may be easier to read. As yet another example, instructional text may be added or hidden based on a determined age group for the user of the electronic device (100).
As yet another example, the graphics and/or content to be presented on the display of the electronic device (100) may be altered. Still further, a user interface layout of the display may be selected. That is, the position and relative arrangement of different components of the visual display may be selected. As yet another example, audio content may be presented on the electronic device (100). That is, for users who cannot read or who suffer from visual impairment, which may be indicated by an estimated age group, instead of including textual content, audio content may be provided on the electronic device (100).
In some examples, a first presentation characteristic is associated with both a first age group and a second age group. That is, presentation characteristics may be common among different age classifications. While particular reference is made to a few presentation characteristics, any number of presentation characteristics may be selected and/or altered based on the estimated age group for a user.
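The selection of presentation characteristics per age group can be pictured as a lookup table. The group names and characteristic values below are hypothetical examples; a real device might store this mapping in metadata or a database, as noted above. The shared "muted" color scheme between the two older groups illustrates a single presentation characteristic associated with two age groups.

```python
# Hypothetical mapping from age group to presentation characteristics.
PRESENTATION = {
    "child":  {"color_scheme": "bright", "buttons": "icons", "font": "large_uppercase"},
    "adult":  {"color_scheme": "default", "buttons": "text", "font": "standard"},
    "mature": {"color_scheme": "muted", "buttons": "text", "font": "standard"},
    "senior": {"color_scheme": "muted", "buttons": "text", "font": "large", "audio": True},
}


def select_presentation(age_group):
    """Look up presentation characteristics for an estimated age group,
    falling back to the adult defaults for an unrecognized group."""
    return PRESENTATION.get(age_group, PRESENTATION["adult"])
```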
As such, the present specification relies on machine-learning models to estimate an age of a user and to automatically adapt visual interfaces by, for example, adjusting their layout and components. As compared to other solutions, the present specification selects the changes based on image analysis of a user and may do so without user intervention, whereas other systems rely on a user manually changing the user interface elements.
The presentation controller (106) and the image analyzer (104) may include a processor, an application-specific integrated circuit (ASIC), a semiconductor-based microprocessor, a central processing unit (CPU), a field-programmable gate array (FPGA), and/or another hardware device.
The memory may include a computer-readable storage medium, which computer-readable storage medium may contain, or store, computer-usable program code for use by or in connection with an instruction execution system, apparatus, or device. The memory may take many forms, including volatile and non-volatile memory. For example, the memory may include Random Access Memory (RAM), Read Only Memory (ROM), optical memory disks, and magnetic disks, among others. The executable code may, when executed by the respective component, cause the component to implement at least the functionality described herein.
According to the method (200), a video stream of a user facing the electronic device (100) is captured.
The method (200) may include biometrically analyzing (block 202) the video stream to estimate an age group of the user. That is, as described above, certain facial features may be indicative of an age of the user. As noted above, such an estimation may not be precise, but may classify the user as falling within a particular age group for which certain presentation characteristics are to be selected. Examples of age groups include a 0-14 age group, a 15-47 age group, a 48-63 age group, and an over 64 age group. Each of these age groups may map to particular presentation characteristics. For example, the presentation layout for a user between the ages of 0 and 14 may differ from the presentation layout when a user over the age of 64 is detected facing the electronic device (100).
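The age-group boundaries named above can be applied directly to an estimated age. The handling of age 64 itself, which falls between the stated "48-63" and "over 64" groups, is an assumption here.

```python
def classify_age_group(age):
    """Map an estimated age onto the example age groups from the
    description: 0-14, 15-47, 48-63, and over 64.

    Note: age 64 is not covered by the stated ranges; placing it in the
    oldest group is an assumption for this sketch.
    """
    if age <= 14:
        return "0-14"
    if age <= 47:
        return "15-47"
    if age <= 63:
        return "48-63"
    return "over 64"
```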
In some examples, in addition to being based on biometric information, the estimated age may be determined based on additional information such as content consumed, applications executed, data input, or combinations thereof. This additional input may provide additional data points whereby the electronic device (100) may estimate the age group of the user.
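Combining the biometric estimate with such additional data points could be as simple as a weighted average. The weighting scheme and the idea that each non-biometric signal yields its own age estimate are assumptions, not prescribed by the description.

```python
def combine_estimates(biometric_age, extra_signals, biometric_weight=0.5):
    """Blend the biometric age estimate with age hints derived from
    additional data points (e.g. content consumed, applications
    executed, or data input).

    `extra_signals` is a list of per-signal age estimates; the default
    weight is an illustrative assumption.
    """
    if not extra_signals:
        return biometric_age
    extra_avg = sum(extra_signals) / len(extra_signals)
    return biometric_weight * biometric_age + (1 - biometric_weight) * extra_avg
```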
As described above, a presentation controller (106) may then select presentation characteristics of the user interface based on the estimated age group of the user, and the user interface may be altered based on the selected presentation characteristics.
In the example depicted in
As depicted in
The method (500) may further include capturing (block 502) a video stream of a user facing an electronic device (100).
As described above, the image analyzer (104) may biometrically analyze the video stream to estimate an age group of the user.
Additionally, the presentation characteristics may be selected (block 505) and the user interface altered (block 506) as described above.
In some examples, the method (500) may further include updating (block 507) the user interface in real-time responsive to detecting a second user facing the electronic device (100).
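The real-time update on a user change might be structured as a monitoring loop that re-applies presentation characteristics whenever the estimated age group changes, such as when a second user replaces the first in front of the device. The frame source and the helper callables are hypothetical.

```python
def monitor_users(frames, estimate_group, apply_presentation):
    """Re-apply presentation characteristics whenever the estimated age
    group of the user facing the device changes.

    `estimate_group` maps a frame to an age group; `apply_presentation`
    applies the characteristics for a group and returns what was applied.
    """
    current_group = None
    applied = []
    for frame in frames:
        group = estimate_group(frame)
        if group != current_group:
            current_group = group
            applied.append(apply_presentation(group))
    return applied
```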
The machine-readable storage medium (612) causes the processor to execute the designated function of the instructions (614, 616, 618, 620, 622). The machine-readable storage medium (612) can store data, programs, instructions, or any other machine-readable data that can be utilized to operate the electronic device (100).
Referring to
In summary, such a system, method, and machine-readable storage medium may, for example 1) provide a user interface tailored for a user based on characteristics of that particular user; 2) adjust the user interface automatically and without user intervention; and 3) automatically detect the user characteristics which trigger the update to the user interface. However, it is contemplated that the devices disclosed herein may address other matters and deficiencies in a number of technical areas, for example.