Generally, fifteen percent of the global population may experience some form of disability, and the disability prevalence may be higher in developing countries. Furthermore, one-fifth of that estimated population, or between 110 million and 190 million people, may experience significant disabilities. For example, a disabled person may not perceive content the way it was intended to appear. In the context of digital experiences, a disability includes any condition that makes it more difficult for a person to understand certain content. Hence, a disability need not be a medically related condition. Disabilities in digital experiences could include, for example, color blindness, a bad Internet connection, bad sleep quality or bad memory, and the like, which may cause some information to be lost while viewing the content. Furthermore, a disability may also include astigmatism, cataract, color blindness (deuteranopia, protanopia, tritanopia), using an electronic display with light leakage or with low resolution (<1080p), hearing loss, myopia, being a non-native language speaker, using an electronic device under sunshine, using earphones in a noisy environment, dyslexia, and the like. The aforementioned disabilities may cause greater information loss for disabled users than for non-disabled users.
Conventional systems may disclose aspects of determining content placement based on memorability, utilizing neural network models. For example, a conventional content system may receive content and target user category data identifying target users of the content and may modify one or more features of the content to generate a plurality of content data based on the content. The conventional content system may select a neural network model, based on the target user category data, and may process the plurality of content data, with the neural network model, to determine first and second memorability scores for the plurality of content data and plurality of areas, respectively. The conventional content system may perform one or more actions based on the first memorability scores or the second memorability scores. For example, based on the first memorability scores, the conventional content system may provide information identifying one or more changes to the one or more features (of the content) to increase a likelihood of the one or more target users remembering the content. Based on the second memorability scores, the conventional content system may provide information identifying one or more recommended areas (in the content) for placing content (e.g., placing a logo, placing a graphical object).
Specifically, when content is displayed to a person with a disability in digital experiences, that person may receive less information (image, video, text) from the displayed content, or may have more difficulty understanding certain content, than a non-disabled person. Conventionally, there may be no methods or systems to quantify the amount of information loss for certain/different disabled persons. Further, the conventional methods may not provide content appropriate for a disabled person.
An embodiment of the present disclosure includes a system including an information loss determination engine. A processor may cause the information loss determination engine to, for a given disability, run at least one simulation to simulate how content may be experienced by a user having such disability. The disability may include a human disability, a technical disability, a social disability, and the like. The processor may cause the information loss determination engine to compute information loss based on a comparison of the simulated content with the desired original content. Further, the processor may cause the information loss determination engine to transmit data packets indicative of a content optimization strategy that may be determined based on the determined information loss.
Another embodiment of the present disclosure may include a method for disability simulations and accessibility evaluations of content. The method may include, for a given disability, running at least one simulation to simulate how a content may be experienced by a user having such disability. The method may include computing information loss based on comparison of the simulated content with desired original content. Further, the method may include transmitting data packets indicative of a content optimization strategy that may be determined based on the determined information loss.
Yet another embodiment of the present disclosure may include a non-transitory computer readable medium including machine executable instructions that may be executable by a processor. The processor may, for a given disability, run at least one simulation to simulate how content may be experienced by a user having such disability. The processor may compute information loss based on a comparison of the simulated content with the desired original content. Further, the processor may transmit data packets indicative of a content optimization strategy that may be determined based on the determined information loss.
For simplicity and illustrative purposes, the present disclosure is described by referring mainly to examples thereof. The examples of the present disclosure described herein may be used together in different combinations. In the following description, details are set forth in order to provide an understanding of the present disclosure. It will be readily apparent, however, that the present disclosure may be practiced without limitation to all these details. Also, throughout the present disclosure, the terms “a” and “an” are intended to denote at least one of a particular element. The terms “a” and “an” may also denote more than one of a particular element. As used herein, the term “includes” means includes but not limited to, and the term “including” means including but not limited to. The term “based on” means based at least in part on, the term “based upon” means based at least in part upon, and the term “such as” means such as but not limited to. The term “relevant” means closely connected or appropriate to what is being performed or considered.
Various embodiments describe providing a solution in the form of a system and a method for disability simulations and accessibility evaluations of content.
Exemplary embodiments of the present disclosure have been described in the framework of improved disability simulations and accessibility evaluations of content. In an example, for a given disability, the present disclosure may run at least one simulation to simulate how content may be experienced by a user having such disability. Further, the present disclosure may compute information loss based on a comparison of the simulated content with the desired original content. Further, the present disclosure may transmit data packets indicative of a content optimization strategy that may be determined based on the determined information loss. For each simulated video, the present disclosure may calculate an information loss such as text information loss, image information loss, and audio information loss. The input content can be video, image, or audio. The content may be optimized for the respective disabled user viewing the content. The disability may include, but is not limited to, low resolution screen, astigmatism, using a device under sunshine, color blindness, myopia, low energy state/memory, dyslexia, screen light leakage, cataract, hearing loss, noisy environment, non-native language speaker, bad sleep quality, device virus impact, user disability, bad Internet connection, and the like.
The system 102 may be a hardware device including the processor 104 executing machine-readable program instructions to perform disability simulations and accessibility evaluations of content. Execution of the machine-readable program instructions by the processor 104 may enable the proposed system 102 to perform disability simulations and accessibility evaluations of content. The “hardware” may comprise a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field programmable gate array, a digital signal processor, or other suitable hardware. The “software” may comprise one or more objects, agents, threads, lines of code, subroutines, separate software applications, two or more lines of code or other suitable software structures operating in one or more software applications or on one or more processors. The processor 104 may include, for example, microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuits, and/or any devices that manipulate data or signals based on operational instructions. Among other capabilities, the processor 104 may fetch and execute computer-readable instructions in a memory operationally coupled with the system 102 for performing tasks such as data processing, input/output processing, feature extraction, and/or any other functions. Any reference to a task in the present disclosure may refer to an operation being performed, or that may be performed, on data.
In the example that follows, assume that a user of the system 102 desires to improve disability simulations and accessibility evaluations of content with respect to target users. The user may include an administrator of a website, an administrator of a social media site, an administrator of a social media application, or an administrator of video content (e.g., television content, video on demand content, or online video content), among other examples. The disability simulations and accessibility evaluations of content may serve to quantify the amount of information loss experienced by the target users and to provide disability-friendly content. The content may include, but is not limited to, digital content such as an image, a video, textual information, audio, a webpage, a set of video frames, and the like, and physical content such as store designs, interior/exterior designs, drawings, newspapers/magazines, billboards, and the like. In some implementations, the content may be obtained from a website, a thumbnail image, a poster, a recording, streamed content (real/non-real-time), a camera feed, a social media post, received content, shared content, online content, and the like.
In an example embodiment, the system 102 includes the information loss determination engine 108, which may be executed using the processor 104. The processor 104 may cause the information loss determination engine 108 to, for a given disability, run at least one simulation to simulate how content may be experienced by a user having such disability. In an example embodiment, the content may include, but is not limited to, digital content such as an image, a video, text, audio, a webpage, a set of video frames, and the like, and physical content such as store designs, interior/exterior designs, drawings, newspapers/magazines, billboards, and the like. For the content being a video, the key information components comprise one or more key video frames. The key information components may be determined based on the disability for which the respective at least one simulation is to be executed. In an embodiment, the disability may include, but is not limited to, low resolution screen, astigmatism, using a device under sunshine, color blindness, myopia, low energy state/memory, dyslexia, screen light leakage, cataract, hearing loss, noisy environment, non-native language speaker, bad sleep quality, device virus impact, user disability, bad Internet connection, and the like.
Further, the processor 104 may cause the information loss determination engine 108 to compute information loss based on a comparison of the simulated content with the desired original content. The comparison may be undertaken based on a difference hash (dHash), as sketched below. In an embodiment, the information loss may include, but is not limited to, text information loss, visual information loss, content information loss, audio information loss, video information loss, image information loss, and the like.
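As an illustration of the dHash comparison referenced above, the following is a minimal sketch assuming the Pillow library; the function names are illustrative, not part of the disclosure.

```python
from PIL import Image

def dhash(image: Image.Image, hash_size: int = 8) -> int:
    # Shrink to (hash_size + 1) x hash_size grayscale so each row yields
    # hash_size comparisons between horizontally adjacent pixels.
    img = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    # Number of differing bits; a smaller distance means more similar frames.
    return bin(h1 ^ h2).count("1")

# Example: compare an original frame with its simulated counterpart.
# distance = hamming_distance(dhash(original_frame), dhash(simulated_frame))
```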
In an embodiment, for the computation of information loss, the processor 104 may cause the information loss determination engine 108 to determine key information components that form part of the desired original content. Further, the processor 104 may cause the information loss determination engine 108 to extract the corresponding key information components from the simulated content. Furthermore, the processor 104 may cause the information loss determination engine 108 to compute the information loss based on a comparison of the key information components that form part of the simulated content with those of the desired original content.
In an embodiment, the text information loss may be computed based on annotation and recognition of text that forms part of the content using contrasts (C) and text recognition certitude values (T) for the j-th image pair and the i-th text block pair as shown in equation 1 below:
In the above equation 1, the term ri=1 {font size>3% video height} or 0 {font size≤3% video height}, the term ‘n’ may refer to the number of key frame pairs, and the term ‘m’ may refer to the number of text blocks whose font size>3% of the video height.
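The body of equation 1 is not reproduced in this text. Purely as a hedged sketch, the following combines the quantities the equation is described as using — the recognition certitude drop (T before versus after simulation), the contrast drop (C), and the relevance gate ri — in a simple weighted average; the exact weighting in the disclosure may differ.

```python
def text_information_loss(frame_pairs, p1: float = 0.5) -> float:
    """frame_pairs: one list per key-frame pair, each entry a dict per text
    block, e.g. {"T_orig": 0.98, "T_sim": 0.40, "C_orig": 7.2, "C_sim": 1.8,
    "font_frac": 0.05}  # font height as a fraction of the video height."""
    per_frame = []
    for blocks in frame_pairs:
        relevant = [b for b in blocks if b["font_frac"] > 0.03]  # r_i = 1
        if not relevant:
            continue
        losses = []
        for b in relevant:
            certitude_drop = max(0.0, b["T_orig"] - b["T_sim"])
            contrast_drop = max(0.0, (b["C_orig"] - b["C_sim"]) / b["C_orig"])
            losses.append(p1 * certitude_drop + (1.0 - p1) * contrast_drop)
        per_frame.append(sum(losses) / len(losses))
    return sum(per_frame) / len(per_frame) if per_frame else 0.0
```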
In an embodiment, the image information loss may be computed based on the Peak Signal-to-Noise Ratio (PSNR) of the simulated image/video, and on object recognition using the calculation of an object recognition certitude value (OC) for the j-th image pair and the i-th object detected, as shown in equation 2 below:
In the above equation 2, the term p2 ∈ [0,1] (default 0.5), ri=1 {object size>3% video size} or 0 {object size≤3% video size}, the term ‘n’ may refer to the number of key frame pairs, and the term ‘m’ may refer to the number of objects recognized whose object size>3% of the video size.
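The PSNR term itself is standard; a minimal sketch assuming numpy and 8-bit frames follows. The combination with the object recognition certitude (OC) would mirror the before/after pattern sketched above for the text loss.

```python
import numpy as np

def psnr(original: np.ndarray, simulated: np.ndarray) -> float:
    # Peak Signal-to-Noise Ratio for uint8 images; a higher value means the
    # simulated frame is closer to the original.
    diff = original.astype(np.float64) - simulated.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical frames, no noise introduced
    return 10.0 * np.log10((255.0 ** 2) / mse)
```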
In an embodiment, the audio information loss may be computed based on speech recognition and sound classification, wherein for the i-th speech phrase recognized and j-th sound classified, speech recognition certitude (SC) and music recognition certitude (MC) may be calculated and used to calculate the audio information loss as shown in equation 3 below:
In the above equation 3, the term ‘m’ may refer to the number of speech phrases, the term ‘k’ may refer to the number of soundtracks, and the term p3 ∈ [0,1] (default 0.5).
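Equation 3 is likewise not reproduced here; as a hedged sketch only, the certitude-drop pattern can be applied to the m speech phrases and k soundtracks, mixed by p3 as stated above.

```python
def audio_information_loss(speech_pairs, music_pairs, p3: float = 0.5) -> float:
    """speech_pairs / music_pairs: lists of (certitude_original,
    certitude_simulated) tuples from recognizers run on both audio tracks."""
    def mean_drop(pairs):
        drops = [max(0.0, before - after) for before, after in pairs]
        return sum(drops) / len(drops) if drops else 0.0
    # Mix the speech-recognition (SC) and music-recognition (MC) terms.
    return p3 * mean_drop(speech_pairs) + (1.0 - p3) * mean_drop(music_pairs)
```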
In an embodiment, the general information loss may be computed based on an average of the text information loss, the audio information loss, and the image information loss. The general information loss may be computed based on the joint probability distribution, with prevalence Pj for the j-th disability, as shown in equation 4 below:
L = Σ_{s∈{0,1}^n} [ Π_{j=1}^{n} Pj^(sj) (1−Pj)^(1−sj) ] L(s1, s2, . . . , sn)
In the above equation 4, the term ‘n’ may refer to the number of simulations to be generated, and the term L(s1, s2, . . . , sn) may represent the information loss of the multi-disability simulated content.
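Assuming independent disabilities as described, the joint-probability weighting can be computed by enumerating disability combinations. The following is a minimal sketch, where loss(mask) is a caller-supplied function that simulates the disabilities flagged in the mask (e.g., loss((1, 1, 1, 0, . . . )) for astigmatism, cataract, and color blindness together); this exhaustive form is practical only for small n, which is why equation 5 introduces an approximation.

```python
from itertools import product

def expected_information_loss(P, loss) -> float:
    """P: prevalence P_j per disability; loss: callable taking a 0/1 mask
    and returning the information loss of that multi-disability simulation."""
    n = len(P)
    total = 0.0
    for mask in product((0, 1), repeat=n):
        weight = 1.0
        for p, s in zip(P, mask):
            weight *= p if s else (1.0 - p)  # joint probability of this mask
        total += weight * loss(mask)
    return total
```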
Furthermore, the processor 104 may cause the information loss determination engine 108 to transmit data packets indicative of a content optimization strategy that may be determined based on the determined information loss. In an embodiment, the information loss may be approximated as shown in equation 5 below:
In the above equation 5, the term ‘n’ may refer to the number of simulations to be generated.
In an embodiment, the data 204 may be stored in the memory 106 in the form of various data structures. Additionally, the data 204 can be organized using data models, such as relational or hierarchical data models. The other data 218 may store data, including temporary data and temporary files, generated by the information loss determination engine 108 for performing the various functions of the system 102.
In an embodiment, the data 204 stored in the memory 106 may be processed by the information loss determination engine 108 of the system 102. The information loss determination engine 108 may be stored within the memory 106. In an example, the information loss determination engine 108, communicatively coupled to the processor 104 configured in the system 102, may also be present outside the memory 106 and be implemented as hardware. As used herein, the term “module” refers to an application-specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
In an embodiment, the information loss determination engine 108 may include, for example, a running module 222, a computing module 224, a transmitting module 226, and other modules 228. The other modules 228 may be used to perform various miscellaneous functionalities of the system 102. It will be appreciated that such aforementioned modules may be represented as a single module or a combination of different modules.
In an embodiment, the running module 222 may run via the information loss determination engine 108, for a given disability, at least one simulation to simulate how a content is experienced by a user having the given disability. The at least one simulation may be stored as simulation data 206. The content experienced by the user may be stored as the experience data 208. Further, the computing module 224 may compute information loss based on comparison of the simulated content with desired original content. The computed information loss may be stored as the computation data 210. The comparison of the simulated content may be stored as the comparison data 212. Further, the transmitting module 226 may transmit data packets indicative of a content optimization strategy that is determined based on the determined information loss. The transmitted data packets may be stored as the transmission data 214. The content optimization strategy may be stored as the optimization data 216.
For example, the system 102 can generate a value to quantify the amount of information loss for certain disabled users, and the system 102 may then output disability-friendly content. The input to the system 102 may include, but is not limited to, an advertisement video, a social media post, and the like. Consider a video including different subtitles, different images, colorful images, and the like, as shown in the accompanying video clippings.
The system 102 may calculate the general information loss for each disability of the target user.
For example, consider the low-resolution screen disability (a non-High-Definition (HD) display on a device); the prevalence may be high, with, for example, 70% of users using a relatively low-resolution screen. For these users, when the low-resolution display is used to watch the above video, the video may lose some information. Based on a final output, the system 102 may calculate the general accessibility score for the input video; a low score implies that more information may be lost across the different kinds of disabilities. A content creator may take a week to manually improve, for example, the contrast of the images or the size of the font in the videos. The expected output from the system 102 may include an optimized video, as shown in the accompanying video clippings.
For example, the first step may be to extract the key frames. Normally, a video may have about 25 to 30 frames per second, so even a short video yields a plurality of images to process. The system 102 may extract the key frames, that is, the most important frames in the video. The system 102 may identify key frames using the dHash algorithm; for an input video of, for example, 500 images, the system 102 may finally retain, for example, 17 key-frame images, as shown in the accompanying figure.
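A minimal sketch of this dHash-based key-frame selection follows, reusing the dhash() and hamming_distance() helpers sketched earlier; the 10-bit threshold is an assumption, not a value from the disclosure.

```python
def extract_key_frames(frames, threshold: int = 10):
    """frames: iterable of PIL images in playback order."""
    key_frames, key_indices = [], []
    last_hash = None
    for i, frame in enumerate(frames):
        h = dhash(frame)
        if last_hash is None or hamming_distance(h, last_hash) > threshold:
            key_frames.append(frame)
            key_indices.append(i)  # reuse these timings on simulated content
            last_hash = h
    return key_frames, key_indices
```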
The second step may be to run the rendered simulations on the input video. The simulations may include: astigmatism, which may overlay the same frame multiple times shifted by small distances; cataract, which may whiten the frame and render a spherical blur transformation; color blindness (deuteranopia, protanopia, tritanopia); LCD screens with light leakage, which may require simulating white light leakage on the top and bottom edges; myopia; non-native language speaker; and using the device under sunshine. For example, a deuteranopia simulation is illustrated in the accompanying figure.
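As crude, hedged approximations of two of the listed simulations (not the disclosure's actual renderers), the following sketch uses Pillow; the blur radius and leakage band size are illustrative assumptions.

```python
from PIL import Image, ImageFilter

def simulate_myopia(frame: Image.Image, radius: float = 4.0) -> Image.Image:
    # Approximate uncorrected myopia with a Gaussian blur.
    return frame.filter(ImageFilter.GaussianBlur(radius))

def simulate_light_leakage(frame: Image.Image, band_frac: float = 0.08,
                           strength: float = 0.6) -> Image.Image:
    # Whiten the top and bottom edges to mimic LCD backlight leakage.
    out = frame.convert("RGB").copy()
    w, h = out.size
    band = max(1, int(h * band_frac))
    white = Image.new("RGB", (w, band), "white")
    for y in (0, h - band):
        region = out.crop((0, y, w, y + band))
        out.paste(Image.blend(region, white, strength), (0, y))
    return out
```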
Further, the third step may be to calculate the text information loss. The text information loss combines two different measures. The first may be text annotation and recognition: using AI, the system 102 may recognize and analyze all the text in the images, and for each text recognition result the system 102 may have a certitude about the recognition. The text information loss may be calculated using equation 1 above. The first part of the formula may compare the certitude (T) before and after the simulation to determine whether the text recognition difficulty has changed. The second part may calculate the contrast of each text, since higher contrast may make text much easier for a human to recognize; if the contrast is too low, users may no longer be able to recognize the text. Combining the first and second parts of equation 1, the system 102 may calculate, for all the text in the images, the information loss for the disabled users. If the information loss is very low, then after the simulation the contrast and recognizability of each text have not changed, and there may not be any text information loss. On the contrary, if the text information loss is high, then all the texts may be harder for a human to recognize.
Based on equation 1, the system 102 may calculate the final information loss, which may be, for example, 0.67. This means 67% of the information has been lost in that pair of simulations. Similarly, the image information loss may be calculated using equation 2. For the image information loss, the system 102 may calculate the object recognition certitude, and additionally the system 102 may calculate the Peak Signal-to-Noise Ratio (PSNR), which can evaluate the amount of noise in images. Further, the audio information loss may be calculated using equation 3 above. For the audio information loss, the system 102 may calculate the speech recognition certitude and the music recognition certitude, in order to evaluate speech, background music, and the like. The speech recognition certitude and the music recognition certitude may be used to evaluate the audio quality after the simulation. Finally, the system 102 may combine the text information loss, the image information loss, and the audio information loss to calculate the average of the information losses as the general information loss using equation 4 above. Based on a probability, the system 102 may output the final information loss for the input video. The final general accessibility score for the input video may be calculated using equation 4 above. For example, if the score is low, then the input video may have multiple accessibility issues for users with disabilities.
In some instances, only an image series may be inputted, or only the soundtracks for the video. Here, a list of disabilities to simulate may be selected by the system 102, which performs the visual simulations and the audio simulations on the image series and soundtracks in the video, and calculates the text information loss, the image information loss, and the audio information loss. Based on the three intermediate information losses, the system 102 may calculate the final score based on equation 4. For example, L(1,1,1,0,0, . . . ,0) may represent simulating astigmatism, cataract, and color blindness at the same time on the input video and calculating the general information loss for that simulation. The general information loss for the j-th disability simulation may be provided using equation 6 below:
All the disabilities may be assumed independent, and the prevalence value for the j-th disability may be “Pj”; the information loss may then be calculated based on the joint probability distribution formula as shown in equation 6 above. Based on the final score, the system 102 may provide the content optimization strategy, for example, as shown in the accompanying figure.
For example, a good, disability-friendly video may have an accessibility score of at least 80 out of 100. If the input video has a score of only, for example, 67 out of 100, then the content, for example an advertisement, might have some issues for disabled users. The advertisement company may provide several versions/formats of the advertisement for different disabilities to determine which video has the highest accessibility score.
In some implementations, the system 102 may identify the one or more portions of the video/images using one or more image classification techniques (e.g., a Convolutional Neural Networks (CNNs) technique, a residual neural network (ResNet) technique, a Visual Geometry Group (VGG) technique) and/or an object detection technique (e.g., a Single Shot Detector (SSD) technique, a You Only Look Once (YOLO) technique, and/or a Region-Based Fully Convolutional Networks (R-FCN) technique). In some examples, the one or more portions may include one or more areas of the content (e.g., a top-right area, a bottom half area, a center area, or an entire area), one or more logos present in the content, one or more graphical objects in the content, and the like.
A neural network model (selected by the content system) may include a residual neural network (ResNet) model, a deep learning technique (e.g., a faster regional convolutional neural network (R-CNN)) model, a feed forward neural network model, a radial basis function neural network model, a Kohonen self-organizing neural network model, a recurrent neural network (RNN) model, a convolutional neural network model, a modular neural network model, a deep learning image classifier neural network model, a Convolutional Neural Networks (CNNs) model, and the like.
In some implementations, the neural network model may be trained using training data (e.g., historical and/or current). In some examples, the training data may include different content, data regarding features of the different content, data identifying a user category, content category data regarding categories (e.g., of content) identified by the different content, data regarding different exposure times for the different content to users associated with the user category, time intervals between exposures of the different content, information indicating whether the users remembered the different content, and information identifying areas of the different content remembered by the users (e.g., a top-right area, a bottom half area, a center area, and/or an entire area), among other examples. The categories (identified by the different content) may include goods and services, among other examples. The exposure time may refer to a period of time during which the different content may be exposed (or presented) to the users.
The hardware platform 400 may be a computer system such as the system 102 that may be used with the embodiments described herein. The computer system may represent a computational platform that includes components that may be in a server or another computer system. The computer system may execute, by the processor 405 (e.g., a single or multiple processors) or other hardware processing circuit, the methods, functions, and other processes described herein. These methods, functions, and other processes may be embodied as machine-readable instructions stored on a computer-readable medium, which may be non-transitory, such as hardware storage devices (e.g., RAM (random access memory), ROM (read-only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), hard drives, and flash memory). The computer system may include the processor 405 that executes software instructions or code stored on a non-transitory computer-readable storage medium 410 to perform methods of the present disclosure. The software code includes, for example, instructions to gather data and documents and to analyze documents. In an example, the information loss determination engine 108 may be software codes or components performing these steps.
The instructions from the computer-readable storage medium 410 may be read and stored in storage 415 or in random access memory (RAM). The storage 415 may provide space for keeping static data where at least some instructions could be stored for later execution. The stored instructions may be further compiled to generate other representations of the instructions and dynamically stored in RAM, such as the RAM 420. The processor 405 may read instructions from the RAM 420 and perform actions as instructed.
The computer system may further include the output device 425 to provide at least some of the results of the execution as output, including, but not limited to, visual information to users, such as external agents. The output device 425 may include a display on computing devices and virtual reality glasses. For example, the display may be a mobile phone screen or a laptop screen. GUIs and/or text may be presented as an output on the display screen. The computer system may further include an input device 430 to provide a user or another device with mechanisms for entering data and/or otherwise interacting with the computer system. The input device 430 may include, for example, a keyboard, a keypad, a mouse, or a touchscreen. Each of the output devices 425 and input devices 430 may be joined by one or more additional peripherals. For example, the output device 425 may be used to display results such as bot responses by an executable chatbot.
A network communicator 435 may be provided to connect the computer system to a network and in turn to other devices connected to the network including other clients, servers, data stores, and interfaces, for instance. A network communicator 435 may include, for example, a network adapter such as a LAN adapter or a wireless adapter. The computer system may include a data sources interface 440 to access the data source 445. The data source 445 may be an information resource. As an example, a database of exceptions and rules may be provided as the data source 445. Moreover, knowledge repositories and curated data may be other examples of the data source 445.
At step 502, the method 500A includes inputting the content, such as video, image, or audio. At step 504, the method 500A includes selecting a list of disabilities to be simulated. At steps 506 and 508, the method 500A includes simulating each disability by rendering all the selected disability simulations on the input content; the key frames may be extracted from the video, using the same timings for extracting key frames in every simulated content. At steps 510 and 512, the method 500A includes calculating the text information loss and the image information loss if the input content is video or image. At step 514, the method 500A includes calculating the audio information loss if the input content is video or audio. At step 516, the method 500A includes calculating the final score for the input content by using the information loss approximation formula. At step 518, the method 500A includes delivering the content, in which the key frames that have the highest information loss for each simulation may be optimized using a content optimization strategy.
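A sketch of this overall flow, tying together the illustrative helpers above (extract_key_frames, psnr, simulate_myopia, simulate_light_leakage); the single-disability weighting and the PSNR-to-loss normalization are assumptions for illustration, not the disclosure's exact scoring.

```python
import numpy as np

def evaluate_accessibility(frames, simulations, prevalences):
    """frames: list of PIL images; simulations: (name, fn) pairs;
    prevalences: per-disability weights (a single-disability approximation
    rather than the full joint sum of equation 4)."""
    key_frames, _ = extract_key_frames(frames)
    per_disability = {}
    for (name, simulate), p in zip(simulations, prevalences):
        losses = []
        for orig in key_frames:
            sim = simulate(orig)
            value = psnr(np.asarray(orig.convert("RGB")),
                         np.asarray(sim.convert("RGB")))
            # Illustrative normalization: cap PSNR at 50 dB, map to [0, 1].
            losses.append(1.0 - min(value, 50.0) / 50.0)
        per_disability[name] = p * sum(losses) / len(losses)
    general_loss = sum(per_disability.values()) / len(per_disability)
    return 100.0 * (1.0 - general_loss), per_disability

# Hypothetical usage:
# score, detail = evaluate_accessibility(
#     frames,
#     [("myopia", simulate_myopia), ("light leakage", simulate_light_leakage)],
#     [0.3, 0.1])
```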
At block 522, the method 500B may include running, by the information loss determination engine 108 via the processor 104, for a given disability, at least one simulation to simulate how content may be experienced by a user having such disability.
At block 524, the method 500B may include computing, by the information loss determination engine 108 via the processor 104, information loss based on a comparison of the simulated content with the desired original content.
At block 526, the method 500B may include transmitting, by the information loss determination engine 108 via the processor 104, data packets indicative of a content optimization strategy that may be determined based on the determined information loss.
The order in which the method 500B is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined or otherwise performed in any order to implement the method 500B or an alternate method. Additionally, individual blocks may be deleted from the method 500B without departing from the spirit and scope of the present disclosure described herein. Furthermore, the method 500B may be implemented in any suitable hardware, software, firmware, or a combination thereof, that exists in the related art or that is later developed. The method 500B describes, without limitation, the implementation of the system 102. A person of skill in the art will understand that the method 500B may be modified appropriately for implementation in various manners without departing from the scope and spirit of the disclosure.
One of ordinary skill in the art will appreciate that techniques consistent with the present disclosure are applicable in other contexts as well without departing from the scope of the disclosure.
What has been described and illustrated herein are examples of the present disclosure. The terms, descriptions, and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the spirit and scope of the subject matter, which is intended to be defined by the following claims and their equivalents in which all terms are meant in their broadest reasonable sense unless otherwise indicated.
Number | Date | Country | Kind
---|---|---|---
22305057.6 | Jan 2022 | EP | regional