A disclosed implementation generally relates to a user device, such as a smart telephone.
A user device, such as a smart telephone, portable computer, or camera, may include one or more sensors to collect data regarding a surrounding environment. The sensor may correspond to, for example, a camera to collect image data, a microphone to collect audio data, a gyroscope or accelerometer to collect information regarding a movement of the user device, or a location sensor (such as a global positioning system, or GPS, unit) to collect information regarding a position of the user device. Furthermore, the user device may be programmed to automatically perform an action based on data collected by the sensor. For example, a camera may be programmed to automatically capture an image of a subject when the subject is looking in the direction of the camera.
According to one aspect, a method is provided. The method may include receiving, by a processor associated with a first user device and during a first time period, data to be displayed to a first user; determining, by the processor, whether the first user is engaged in a social interaction with a second user during the first time period; presenting, by the processor, the data for display to the first user during the first time period when the first user and the second user are not engaged in the social interaction during the first time period; when the first user is engaged in the social interaction with the second user during the first time period, determining, by the processor, a second time period associated with a break in the social interaction, wherein the second time period is subsequent to the first time period; and when the first user is engaged in the social interaction with the second user during the first time period, presenting, by the processor, the data for display to the first user during the second time period.
According to another aspect, a device is provided. The device may include a memory configured to store instructions; and a processor configured to execute one or more of the instructions to: receive, during a first time period, data to be displayed to a first user, determine whether the first user is engaged in a social interaction with a second user during the first time period, present the data for display to the first user during the first time period when the first user and the second user are not engaged in the social interaction during the first time period, determine, when the first user is engaged in the social interaction with the second user during the first time period, a second time period associated with a break in the social interaction, wherein the second time period is subsequent to the first time period, and present, when the first user is engaged in the social interaction with the second user during the first time period, the data for display to the first user during the second time period associated with the break.
According to another aspect, a non-transitory computer-readable medium is provided. The non-transitory computer-readable medium may store instructions, the instructions comprising one or more instructions that, when executed by a processor, cause the processor to: receive, during a first time period, data to be displayed to a first user, determine whether the first user is engaged in a social interaction with a second user during the first time period, present the data for display to the first user during the first time period when the first user and the second user are not engaged in the social interaction during the first time period, determine, when the first user is engaged in the social interaction with the second user during the first time period, a second time period associated with a break in the social interaction, wherein the second time period is subsequent to the first time period, and present, when the first user is engaged in the social interaction with the second user during the first time period, the data for display to the first user during the second time period associated with the break.
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
The terms “user,” “consumer,” “subscriber,” and/or “customer” may be used interchangeably. Also, the terms “user,” “consumer,” “subscriber,” and/or “customer” are intended to be broadly interpreted to include a user device or a user of a user device. The term “document,” as referred to herein, includes one or more units of digital content that may be provided to a user. The document may include, for example, a segment of text, a defined set of graphics, a uniform resource locator (URL), a script, a program, an application or other unit of software, a media file (e.g., a movie, television content, music, etc.), or an interconnected sequence of files (e.g., hypertext transfer protocol (HTTP) live streaming (HLS) media files).
First user device 110-A and second user device 110-B may connect to network 120, for example, through a wireless radio link to exchange data. For example, first user device 110-A and/or second user device 110-B may include a portable computing and/or communications device, such as a personal digital assistant (PDA), a smart phone, a cellular phone, a laptop computer with connectivity to a cellular wireless network, a tablet computer, a wearable computer, etc. First user device 110-A and/or second user device 110-B may also include a portable user device such as a camera, watch, fitness tracker, etc. First user device 110-A and/or second user device 110-B may also include non-portable computing devices, such as a desktop computer, consumer or business appliance, set-top devices (STDs), or other devices that have the ability to connect to network 120.
In the example shown in
Status data 103 may include location and/or movement information associated with first user device 110-A and/or second user device 110-B, and first user device 110-A may use the location information to determine whether first user 101 and second user 102 are engaged in a social interaction. For example, first user device 110-A may determine that first user 101 and second user 102 are engaged in a social interaction if first user device 110-A and second user device 110-B are located within a threshold distance (e.g., less than five meters) for more than a threshold duration of time (e.g., more than 10 seconds). In another example, first user device 110-A may determine that first user 101 and second user 102 are engaged in a social interaction if first user device 110-A and second user device 110-B are exchanging status data 103 via a short-range communications protocol (e.g., directly or via a same wireless router) for more than a threshold duration of time. In another example, status data 103 may include a connection request. First user device 110-A may identify a break in the social interaction if first user device 110-A and second user device 110-B move more than a threshold distance apart and/or cease to exchange status data 103 via the short-range communications protocol.
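By way of illustration only, the threshold-based determination described above may be expressed as the following simplified sketch. The type, function names, and threshold values are hypothetical and non-limiting; the distance and duration values mirror the examples given above (five meters, 10 seconds).

```python
from dataclasses import dataclass


@dataclass
class DeviceStatus:
    """Illustrative snapshot derived from status data exchanged between devices."""
    distance_m: float   # estimated distance between the two user devices, in meters
    duration_s: float   # how long the devices have remained within that distance

# Hypothetical thresholds corresponding to the examples in the description.
DISTANCE_THRESHOLD_M = 5.0
DURATION_THRESHOLD_S = 10.0


def in_social_interaction(status: DeviceStatus) -> bool:
    """Infer a social interaction when the devices stay close for long enough."""
    return (status.distance_m < DISTANCE_THRESHOLD_M
            and status.duration_s > DURATION_THRESHOLD_S)
```

In such a sketch, a break in the interaction may be inferred when the same predicate later evaluates to false (e.g., the devices move apart).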
In another example, status data 103 may include information regarding the operation of first user device 110-A and/or second user device 110-B, and first user device 110-A may evaluate a social interaction between first user 101 and second user 102 based on the operation status. For example, first user device 110-A may determine that first user 101 and second user 102 are engaged in a social interaction when first user 101 and second user 102 are in proximity of one another and first user device 110-A and/or second user device 110-B are inactive (e.g., displays 112-A and 112-B are not activated, a user input is not received, or another activity is not performed during a threshold duration of time). Similarly, first user device 110-A may infer a break in a previously detected social interaction when display 112-B is activated (e.g., second user 102 is reading a message) or an activity is performed on second user device 110-B (e.g., second user 102 places a call, activates an application, accesses data, etc.).
In yet another example, first user 101 and second user 102 may be located remotely and status data 103 may relate to interactions between first user 101 and second user 102. For example, status data 103 may include information regarding the status of a communication between first user 101 and second user 102 such as data regarding whether a telephone or video conference channel or session is active. Additionally or alternatively, status data 103 may include information regarding activity in the communications, such as an indication of whether first user 101 and second user 102 are conversing during a threshold time period.
After withholding display data 104 based on detecting a social interaction between first user 101 and second user 102, first user device 110-A may later present display data 104 via display 112-A when a break is detected in the social interaction.
In one implementation, first user device 110-A may selectively present display data 104 based on the expected duration of the break. For example, if display data 104 includes text data, first user device 110-A may determine an estimated amount of time for reading the text data and first user device 110-A may present display data 104 when the expected duration of the break exceeds the expected time for reading the text data.
First user device 110-A may estimate a duration of a break in a social interaction based on the status data 103. For example, first user device 110-A may determine an activity being performed by second user 102, and first user device 110-A may estimate a duration of a break in a social interaction based on the activity. For example, if second user 102 is reading a message, first user device 110-A may estimate the duration of the social break based on an expected time for second user 102 to read the message.
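The gating logic described in the two preceding paragraphs — estimating a reading time for text data and presenting the text only when the expected break is long enough — may be sketched as follows. The words-per-minute figure and function names are illustrative assumptions, not part of the disclosure.

```python
WORDS_PER_MINUTE = 200.0  # assumed average reading speed for an ordinary reader


def reading_time_s(text: str, wpm: float = WORDS_PER_MINUTE) -> float:
    """Estimate how long the text data would take to read, in seconds."""
    return len(text.split()) / wpm * 60.0


def should_present_now(text: str, expected_break_s: float) -> bool:
    """Present display data only if it fits within the expected break duration."""
    return reading_time_s(text) <= expected_break_s
```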
In other implementations described below with respect to
Referring back to
In one implementation, network 120 may include a closed distribution network. The closed distribution network may include, for example, cable, optical fiber, satellite, or virtual private networks that restrict unauthorized alteration of contents delivered by a service provider. Network 120 may also include a network that distributes or makes available services, such as, for example, television services, mobile telephone services, and/or Internet services. Network 120 may be a satellite-based network and/or a terrestrial network.
Document generator 130 may include a component that generates a document presenting display data 104 based on, for example, the expected duration of a social break. Document generator 130 may further generate/modify a document for presenting display data 104 based on a reading speed for first user 101 and/or information specifying data to include/exclude from display data 104.
To generate a document for presenting display data 104, document generator 130 may store an original document and may modify the original document based on the data received from first user device 110-A regarding the interaction. For example, the original document may be designed to be read in a certain length of time. If first user device 110-A determines that the expected break is less than the expected time needed to read the original document, document generator 130 may modify the original document to form a modified document that can be read by first user 101 in less time. For example, document generator 130 may remove one or more sections of the original document, simplify the language, grammar, and/or presentation of the original document, etc., to allow first user 101 to read the resulting display data 104 in less time.
Conversely, if first user device 110-A determines that the expected break is greater than the expected time to read the original document, document generator 130 may modify the original document to generate display data 104 that is longer, more complex, etc. For example, document generator 130 may modify the language, grammar, and/or presentation of the original document to cause first user 101 to take more time to read the resulting display data 104. Additionally or alternatively, document generator 130 may add one or more sections to the original document. For example, document generator 130 may identify one or more key terms (e.g., terms that frequently appear in prominent locations) in the original document and add additional content (e.g., text, images, multimedia content) related to the key terms when generating display data 104. To identify possible content to add to the original document, document generator 130 may generate a search query and use the query to perform a search to identify relevant content on the Internet or in a data repository (e.g., using a search engine).
In another example, pre-prepared documents may be divided into paragraphs, and the paragraphs may be ranked by importance. When generating display data 104, document generator 130 may first include one or more paragraphs ranked as more important and/or exclude one or more paragraphs ranked as less important in display data 104.
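The paragraph-ranking approach above may be sketched as a greedy selection that keeps the most important paragraphs fitting within a target reading time. The names and the words-per-minute assumption are illustrative and non-limiting.

```python
def fit_paragraphs(paragraphs, target_s, wpm=200.0):
    """Greedily keep the highest-ranked paragraphs that fit the target reading time.

    `paragraphs` is a list of (importance, text) pairs, where a larger
    importance value indicates a more important paragraph.
    """
    chosen, used_s = [], 0.0
    # Consider paragraphs in decreasing order of importance.
    for importance, text in sorted(paragraphs, key=lambda p: -p[0]):
        cost_s = len(text.split()) / wpm * 60.0
        if used_s + cost_s <= target_s:
            chosen.append(text)
            used_s += cost_s
    return chosen
```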
In one implementation, document generator 130 may determine the expected time to read the original document and/or generated display data 104 based on statistics (e.g., the average number of words per minute) associated with an ordinary reader. Alternatively, document generator 130 may determine the expected time required to read the original document and/or generated display data 104 based on data received from first user device 110-A. For example, first user device 110-A may determine an amount of time that first user 101 takes to read other documents, and document generator 130 may use this information to determine an individualized reading speed for first user 101 based on the length, complexity, etc. of the other documents.
In one implementation, document generator 130 may dynamically create display data 104 based on the data received from first user device 110-A (e.g., document generator 130 does not create display data 104 from a template). For example, document generator 130 may use document generation software such as Yseop® or Narrative Solutions®. For example, document generator 130 may identify a target group (e.g., an educational level, age, etc.) associated with first user 101 (e.g., based on the available break time) and may generate display data 104 based on attributes of the target group.
It should be further appreciated that although display data 104 is described as being read by first user 101 (e.g., that first user 101 is reviewing text within display data 104), display data 104 may include multimedia content, such as audio and/or video content. Document generator 130 may modify multimedia content based on a length of the break in the social interaction. For example, document generator 130 may remove certain portions (e.g., remove the credits) or may otherwise modify the playtime of the multimedia content (e.g., by modifying an associated playback speed).
Additionally or alternatively to modifying the content included in display data 104, document generator 130 may further modify a writing style for display data 104 to modify the amount of time that it would take for first user 101 to read display data 104. For example, document generator 130 may change the complexity of text within display data 104 (e.g., average number of letters per word, average number of words per sentence, etc.) to change an associated reading time. Document generator 130 may also change the grammar associated with display data 104, such as to vary the sentence structure and placement of terms, modify descriptive clauses, etc. to achieve a desired reading time.
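The complexity measures mentioned above (average number of letters per word, average number of words per sentence) may be computed, for example, as in the following illustrative sketch; the function name and the simple sentence-splitting rule are assumptions for illustration only.

```python
import re


def complexity(text: str) -> dict:
    """Compute simple style metrics a generator could target when rewriting text."""
    # Split sentences on terminal punctuation; a crude but illustrative rule.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    # Count only letter/digit characters so punctuation does not inflate word length.
    letters = sum(len(re.sub(r"\W", "", w)) for w in words)
    return {
        "letters_per_word": letters / len(words),
        "words_per_sentence": len(words) / len(sentences),
    }
```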
First user device 110-A may dynamically detect and monitor a social interaction between first user 101 and second user 102 based on different or additional factors. For example, in environment 100-B shown in
First sensor 116 may include one or more components to detect data regarding first user 101, second user 102, and/or surrounding environment 100-B. For example, first sensor 116 may include a location detector, such as a sensor to receive a global positioning system (GPS) or other location data, or a component to dynamically determine a location of first user device 110-A (e.g., by processing and triangulating data/communication signals received from base stations). Additionally or alternatively, first sensor 116 may include a motion sensor, such as a gyroscope or accelerometer, to determine movement of user device 110.
Additionally or alternatively, first sensor 116 may include a sensor to collect information regarding first user 101, second user 102, and/or environment 100-B. In one example, first sensor 116 may include an audio sensor (e.g., a microphone) to collect audio data associated with first user 101 and/or second user 102, and first user device 110-A may process the audio data to evaluate a social interaction between first user 101 and second user 102. For example, when status data 103 indicates that first user device 110-A and second user device 110-B are within a threshold distance of each other, first user device 110-A may evaluate audio data collected by first sensor 116 to determine whether first user 101 and second user 102 are conversing (e.g., whether speech is detected). First user device 110-A may infer a break in the social interaction if, for example, audio data (e.g., a conversation) from first user 101 and/or second user 102 is not detected by first sensor 116 during a threshold time period. For example, the audio data may be processed to determine if second user 102 is not responding during a threshold time period, and, therefore, not paying attention to first user 101.
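The silence-based break inference described above may be sketched as follows; the function name, the representation of detected speech as a list of timestamps, and the threshold value are illustrative assumptions.

```python
def detect_break(speech_timestamps_s, now_s, silence_threshold_s=15.0):
    """Infer a break when no speech has been detected for the threshold period.

    `speech_timestamps_s` holds the times (in seconds) at which speech from
    either user was last detected by the audio sensor.
    """
    if not speech_timestamps_s:
        return True  # no speech observed at all
    return now_s - max(speech_timestamps_s) > silence_threshold_s
```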
In another example, first sensor 116 may include an image sensor (e.g., a camera) to collect image data associated with first user 101 and/or second user 102. In the example shown in
First user device 110-A may evaluate facial features included in an image of second user's 102 face. For example, first user device 110-A may determine that second user 102 is looking in the direction of first user 101 if the image includes both of second user's 102 eyes, the eyes are not blocked by another facial element (e.g., second user's 102 nose), the eyes are of substantially equal size (e.g., less than 10% different in width), the eyes are at least a threshold distance apart, etc. First user device 110-A may detect a break in the social interaction based on detected changes in the images. For example, first user device 110-A may infer a break in the social interaction if, for example, the image data indicates that second user 102 has turned away from first user 101 (e.g., first sensor data 105 includes image data that does not show both of second user's 102 eyes).
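The facial-feature heuristics above — both eyes visible, eye widths within roughly 10% of each other, and the eyes at least a threshold distance apart — may be sketched as follows. The eye representation, parameter values, and names are illustrative and non-limiting.

```python
from typing import Optional, Tuple

# Illustrative eye detection result: (x, y, width) of a detected eye, in pixels.
Eye = Tuple[float, float, float]


def is_looking_toward_camera(left: Optional[Eye], right: Optional[Eye],
                             max_width_diff: float = 0.10,
                             min_separation_px: float = 20.0) -> bool:
    """Apply the heuristics above: both eyes visible, similar width, far enough apart."""
    if left is None or right is None:  # an eye is blocked or the head is turned away
        return False
    (lx, _, lw), (rx, _, rw) = left, right
    if abs(lw - rw) / max(lw, rw) > max_width_diff:  # widths differ by more than 10%
        return False
    return abs(lx - rx) >= min_separation_px
```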
In another example, first user device 110-A may determine that first user 101 and second user 102 are travelling together (e.g., in a single automobile or a public transportation vehicle such as a bus or train) if first sensor data 105 indicates that both first user device 110-A and second user device 110-B are moving at a common speed and in a common direction. First user device 110-A may determine that first user 101 and second user 102 are engaged in a social interaction while riding on the public transportation, and first user device 110-A may delay presentation of display data 104 until the end of the ride. The estimated time associated with the public transportation vehicle may be set by first user 101 and/or may be determined based on various factors and/or data collected from other sources, such as the distance of the route traversed by the public transportation vehicle, the velocity of the public transportation vehicle, traffic conditions, etc. In one implementation, the estimated time for travelling in the public transportation vehicle may be modified based on a time spent by first user 101 and/or second user 102 on a prior ride on the public transportation vehicle.
In another example, first sensor 116 may include or interface with a sensor device, such as a fitness monitor, that identifies attributes of first user 101, such as the user's heart rate, body temperature, respiration rate, etc. First user device 110-A may use the information regarding first user 101 to further identify associated activities, and first user device 110-A may identify a time (e.g., a break in the activity) to present display data 104 based on the determined activities. For example, if first user 101 has an elevated heart rate and is moving within a particular velocity range, first user device 110-A may determine that first user 101 and second user 102 are walking together, and first user device 110-A may estimate a time when the activity ends based on identifying an expected destination (that is, in turn, identified based on prior movements by first user 101, addresses associated with contacts, etc.) and identifying an amount of time it would take first user 101 to walk to the destination at a current velocity.
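The activity inference and end-of-walk estimate above may be sketched as follows; the heart-rate and velocity bounds, function names, and classification labels are illustrative assumptions only.

```python
def classify_activity(heart_rate_bpm: float, velocity_mps: float) -> str:
    """Crude, illustrative activity classifier based on fitness-monitor data."""
    if heart_rate_bpm > 90 and 1.0 <= velocity_mps <= 2.5:
        return "walking"
    if velocity_mps < 0.2:
        return "stationary"
    return "other"


def walk_eta_s(distance_to_destination_m: float, current_velocity_mps: float) -> float:
    """Time until the expected destination is reached at the current pace."""
    if current_velocity_mps <= 0:
        raise ValueError("velocity must be positive")
    return distance_to_destination_m / current_velocity_mps
```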
In another implementation, first user device 110-A may detect and monitor a social interaction between first user 101 and second user 102 without interfacing with second user device 110-B. For example, in environment 100-C shown in
For example, in environment 100-C in
In another example, first user device 110-A may perform facial analysis of image data included in first sensor data 105 and/or second sensor data 106. For example, first user device 110-A may determine whether first user 101 and second user 102 are smiling or displaying other facial indications associated with a social interaction. First user device 110-A may also perform speech-to-text analysis of audio data included in first sensor data 105 and/or second sensor data 106. For example, first user device 110-A may determine whether first user 101 and second user 102 are uttering greetings or other phrases associated with a social interaction.
As shown in environment 100-C in
Although
Furthermore, one or more components of environment 100 may perform one or more tasks described as being performed by one or more other components of environment 100. For example, document generator 130 may be coupled to or be included as a component of first user device 110-A such that first user device 110-A obtains display data 104 locally (e.g., without exchanging data via network 120). For example, document generator 130 may be an application or component residing on first user device 110-A.
Touch screen 230 may include a component to receive input electrical signals and present a visual output in the form of text, images, videos and/or combinations of text, images, and/or videos which communicate visual information to the user of communications device 200. In one implementation, touch screen 230 may selectively present display data 104. In one implementation, touch screen 230 may display text input into communications device 200, text, images, and/or video received from another device, and/or information regarding incoming or outgoing calls or text messages, emails, media, games, phone books, address books, the current time, etc. Touch screen 230 may also include a component to permit data and control commands to be inputted into communications device 200 via touch screen 230. For example, touch screen 230 may include a pressure sensor to detect touch for inputting content to touch screen 230. Alternatively or in addition, touch screen 230 may include a capacitive or field sensor to detect touch.
Control buttons 240 may include one or more buttons that accept, as input, mechanical pressure from the user (e.g., the user presses a control button or combinations of control buttons) and send electrical signals to a processor (not shown) that may cause communications device 200 to perform one or more operations. For example, control buttons 240 may be used to cause communications device 200 to transmit information.
Microphone 250 may include a component to receive audible information from a user and send, as output, an electrical signal that may be stored by communications device 200, transmitted to another user device, or cause the device to perform one or more operations. In one implementation, microphone 250 may capture audio data related to first user 101 and/or second user 102, and communication device 200 may identify a social interaction and a break in the social interaction based on the audio data.
Camera element 260 may be provided on a front or back side of communications device 200, and may include a component to receive, as input, analog optical signals and send, as output, a digital image or video that can be, for example, viewed on touch screen 230, stored in the memory of communications device 200, discarded and/or transmitted to another communications device 200. In one implementation, camera element 260 may capture image data related to first user 101 and/or second user 102, and communication device 200 may identify a social interaction and a break in the social interaction based on the image data.
Although
As shown in
Continuing with
In operation, AR device 300 may determine actions of first user 101 via sensors 320 (e.g., determining whether first user 101 is moving or staying in one position) and/or capture images (e.g., activate eye cameras 330 to determine when first user 101 is viewing display data 104 and/or activate front camera 340 to collect information regarding second user 102 and/or a surrounding environment). For example, AR device 300 (or another device) may use data collected from eye cameras 330 to identify a time period when first user 101 is looking at second user 102 and may use this information to determine the status of a social interaction between first user 101 and second user 102. In another example, AR device 300 (or another device) may use data collected from eye cameras 330 to identify amounts of time that first user 101 views different portions of a document. Document generator 130 may use this information when generating/modifying display data 104 to achieve a desired reading time. AR device 300 may then selectively present display data 104 (e.g., via projector 350) when a break in a social interaction is detected based on data collected from eye cameras 330 and/or camera 340. For example, projector 350 may provide display data 104 to first user 101 when camera 340 records image data indicating that second user 102 is looking away from first user 101 (e.g., looking toward second user device 110-B or another user). In this way, AR device 300 may selectively present or cause another device (not shown) to selectively present display data 104 in a socially appropriate manner and without disrupting a social interaction between first user 101 and second user 102.
Although
Processing unit 420 may include one or more processors, microprocessors, or other types of processing units that may interpret and execute instructions. Main memory 430 may include a RAM or another type of dynamic storage device that may store information and instructions for execution by processing unit 420. ROM 440 may include a ROM device or another type of static storage device that may store static information and/or instructions for use by processing unit 420. Storage device 450 may include a magnetic and/or optical recording medium and its corresponding drive.
Input device 460 may include a mechanism that permits an operator to input information to device 400, such as a keyboard, a mouse, a pen, a microphone, voice recognition and/or biometric mechanisms, etc. Output device 470 may include a mechanism that outputs information to the operator, including a display, a printer, a speaker, etc.
Communication interface 480 may include any transceiver-like mechanism that enables device 400 to communicate with other devices and/or systems. For example, communication interface 480 may include mechanisms for communicating with another device or system via network 120. For example, if user device 110 is a wireless device, such as a smart phone, communication interface 480 may include, for example, a transmitter that may convert baseband signals from processing unit 420 to radio frequency (RF) signals and/or a receiver that may convert RF signals to baseband signals. Alternatively, communication interface 480 may include a transceiver to perform functions of both a transmitter and a receiver. Communication interface 480 may further include an antenna assembly for transmission and/or reception of the RF signals, and the antenna assembly may include one or more antennas to transmit and/or receive RF signals over the air.
As described herein, device 400 may perform certain operations in response to processing unit 420 executing software instructions contained in a computer-readable medium, such as main memory 430. A computer-readable medium may be defined as a non-transitory memory device. A memory device may include space within a single physical memory device or spread across multiple physical memory devices. The software instructions may be read into main memory 430 from another computer-readable medium or from another device via communication interface 480. The software instructions contained in main memory 430 may cause processing unit 420 to perform processes described herein. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
Although
As shown in
Based on received display data 104, first user device 110-A may determine whether first user 101 is socially interacting with second user 102 (block 520). For example, as described above in the discussion of
As shown in
Otherwise, if there is a social interaction between first user 101 and second user 102 (block 520-Yes), device 110-A may store display data 104 (e.g., in main memory 430) and may monitor the social interaction to identify a break in the social interaction (block 540). For example, first user device 110-A may evaluate status data 103, first sensor data 105, and/or second sensor data 106 to monitor the social interaction based on, for example, a position or movement of first user 101 and second user 102; use of second user device 110-B; facial features of first user 101 and/or second user 102; dialog between first user 101 and second user 102; etc.
After a break in a social interaction is detected in block 540, first user device 110-A may present display data 104 based on identifying the break (block 550). In one example, first user device 110-A may present the original display data 104, as received in block 510, in response to detecting the break. In another example, a notification or a portion (e.g., an excerpt) of display data 104 may be presented to first user 101 based on identifying the break in the social interaction. If the break is very brief, display data 104 may be flashed to first user 101 (e.g., presented in front of first user's 101 eyes for less than a tenth of a second).
In another implementation, display data 104 may be presented during a detected break in a social interaction. For example, contents of display data 104 may be presented to first user 101 during the break, and presentation of the content may cease when the social interaction resumes (e.g., when dialog is detected, second user 102 is looking in the direction of first user 101, etc.). In another example, presentation of display data 104 may vary based on the duration of the social break. For example, first user device 110-A may present the original display data 104, as received in block 510, during the break and may cease presenting display data 104 after the break (e.g., when the social interaction resumes). Presentation of display data 104 may then resume when another break in the social interaction is identified.
In another example, the format of display data 104 may be modified so that it is presented in a less conspicuous manner to first user 101. For example, if first user 101 and second user 102 are in visual contact (e.g., first user 101 and second user 102 are in close proximity and/or are communicating via a video conference through user devices 110-A and 110-B or other devices), display data 104 may be converted into audio content and audibly played to first user 101 in a manner that would not be noticeable to second user 102 (e.g., so that first user 101 can maintain eye contact and/or display data 104 is not visible to second user 102).
In blocks 530 and/or 550, content from display data 104 may be presented by first user device 110-A to first user 101. Alternatively, first user device 110-A may send instructions to cause another device to present a portion of display data 104. For example, first user device 110-A may send (e.g., via a short range communications protocol such as Bluetooth® or WiFi®) instructions causing another device, such as a display device or a speaker, to present a portion of display data 104. Thus, a first device may detect the break, and a second, different device may present display data 104.
As shown in
Continuing with
In another example, first user device 110-A may estimate a duration of the break based on movements of second user 102. For example, as shown in
Additionally or alternatively, first user device 110-A may estimate first distance 710-A and second distance 710-B based on eye sizes for second user 102 in image data captured by first sensor 116. For example, image data captured by first sensor 116 may show second user 102 as having relatively larger eyes or other facial features as second user 102 moves closer to first sensor 116. First user device 110-A may estimate third distance 730-A based on comparing eye sizes for second user 102. For example, if one of second user's 102 eyes is relatively smaller or is partially blocked by second user's 102 nose or other facial feature, first user device 110-A may determine that second user 102 is turned by an angle away from first user device 110-A, and the amount of the angle can be estimated based on the size difference.
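As one illustrative, non-limiting sketch of estimating the turn angle from the eye-size difference, the code below assumes a simplified model in which the eye farther from the camera is foreshortened roughly by the cosine of the head's yaw angle. That cosine model, and the use of eye widths in pixels, are assumptions made for illustration; the disclosure does not specify a particular geometric model.

```python
import math

# Hypothetical sketch: estimate how far second user 102 has turned from
# the relative apparent sizes of the two eyes in captured image data.
# The cosine-foreshortening model is a simplifying assumption.

def estimated_yaw_degrees(left_eye_width, right_eye_width):
    """Estimate head yaw (degrees) from apparent eye widths (pixels).

    Under the assumed model, the ratio of the smaller to the larger
    apparent eye width approximates cos(yaw), so equal widths give a
    yaw of 0 and a strongly foreshortened eye gives a large yaw.
    """
    smaller = min(left_eye_width, right_eye_width)
    larger = max(left_eye_width, right_eye_width)
    if larger <= 0:
        raise ValueError("eye widths must be positive")
    ratio = smaller / larger
    return math.degrees(math.acos(ratio))
```

Under this model, equal eye widths yield 0 degrees, and an eye appearing half as wide as the other yields roughly 60 degrees.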
As shown in
In one example, a break is detected only when second user 102 has turned away from first user 101 by more than a threshold angle (e.g., more than 30 degrees). The particular threshold angle can be dynamically determined by recording images of second user 102 and determining, for example, a threshold head angle associated with an end of dialog, a movement away from first user 101, use of second user device 110-B, or other indications of a break in a social interaction. Thus, the threshold angle may vary to reflect different types of social interactions. For example, in some cultures, a younger second user 102 may turn away from first user 101 as a sign of respect, even when second user 102 is in a social interaction with first user 101 (i.e., no break is occurring).
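The threshold-angle comparison, with a threshold learned from previously recorded head angles, can be sketched as follows. The choice of the median as the learned statistic, and the 30-degree fallback, are illustrative assumptions consistent with the example above.

```python
# Hypothetical sketch: flag a break only when the turn-away angle
# exceeds a threshold, where the threshold may be learned from head
# angles recorded at observed ends of dialog. The median statistic and
# the 30-degree default are illustrative assumptions.

def learn_threshold_angle(end_of_dialog_angles, default=30.0):
    """Derive a threshold from head angles recorded at dialog endings."""
    if not end_of_dialog_angles:
        return default                 # fall back to a fixed threshold
    ordered = sorted(end_of_dialog_angles)
    return ordered[len(ordered) // 2]  # median observed turn-away angle


def is_turned_away_break(current_angle, threshold_angle):
    """A break is flagged only beyond the (possibly learned) threshold."""
    return current_angle > threshold_angle
```

Because the threshold is derived from the observed interaction, a user who habitually turns away (e.g., as a sign of respect) raises the learned threshold, so such turns are not misclassified as breaks.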
Although
Referring back to
For example, document generator 130 may modify a layout (e.g., to change the position of images, charts, page breaks, text size, etc.) of the original document presenting display data 104 to achieve a desired reading time. For example, if first user 101 takes some time to view certain types of images (e.g., images of a certain size, color, content, etc.), document generator 130 may add images of that type when generating display data 104 that first user 101 can read in a longer time, or may remove images of that type to generate display data 104 that first user 101 can read in a shorter time.
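As one illustrative sketch of how document generator 130 might size a document toward a target reading time, the code below estimates reading time from a word count and an image count, then solves for the number of lingered-on images to include. The per-word and per-image time constants are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch: adjust content toward a target reading time by
# adding or removing images of a type first user 101 tends to linger on.
# The rate constants below are illustrative assumptions.

WORDS_PER_SECOND = 4      # assumed reading speed of first user 101
SECONDS_PER_IMAGE = 5     # assumed dwell time per lingered-on image


def estimated_reading_seconds(word_count, image_count):
    """Estimate total reading time for text plus images."""
    return word_count / WORDS_PER_SECOND + image_count * SECONDS_PER_IMAGE


def adjust_image_count(word_count, target_seconds):
    """Return an image count bringing the estimate near the target.

    Time left over after reading the text is filled (or freed) by
    adding or removing images of the lingered-on type.
    """
    remaining = target_seconds - word_count / WORDS_PER_SECOND
    return max(0, round(remaining / SECONDS_PER_IMAGE))
```

For instance, under these assumed constants, a 400-word document with two such images is estimated at 110 seconds; to fill a 120-second break, the generator would include four images instead.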
While a series of blocks has been described with regard to processes 500 and 600 shown in
It will be apparent that systems and methods, as described above, may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the implementations. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code—it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein.
Further, certain portions, described above, may be implemented as a component or logic that performs one or more functions. A component or logic, as used herein, may include hardware, such as a processor, an ASIC, or an FPGA, or a combination of hardware and software (e.g., a processor executing software).
It should be emphasized that the terms “comprises” and “comprising,” when used in this specification, are taken to specify the presence of stated features, integers, steps or components but do not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
No element, act, or instruction used in the present application should be construed as critical or essential to the implementations unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.