Embodiments generally relate to monitoring and adapting computing environments/smart work spaces to provide healthy interactions for the users. More particularly, embodiments relate to monitoring and adapting ergonomics in ubiquitous computing environments/smart work spaces with inputs and outputs that are mobile, distributed, and dynamic to provide healthy interactions for the individuals that use them.
Conventional ergonomic solutions may monitor the posture of an individual within a workstation and alert the individual to make adjustments. Additional ergonomic challenges may occur, however, in ubiquitous computing environments where inputs and outputs are mobile, distributed, and dynamic.
The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
In the following detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.
Technology to monitor and adapt ergonomics in ubiquitous computing environments/smart work spaces is described herein. A system may monitor users in smart work spaces, taking into account ergonomic and comfort factors when determining when and where to project inputs and outputs within the smart work spaces to provide a ubiquitous computing environment. The system may balance ergonomics against convenience based on incidence and length of time of interactions to promote good posture and prevent repetitive trauma (injuries that occur over time because of repeated bad actions). In some embodiments, a system may include one or more processors and one or more modules to be executed by the one or more processors to provide a dynamic and ergonomically sound smart work space well suited for its users. In another embodiment, a semiconductor package apparatus may include a substrate with logic coupled to the substrate, the logic to provide a dynamic and ergonomically sound smart work space well suited for its users.
Human characteristics, such as, but not limited to, height, weight, body proportions, and health conditions of the users may be taken into account when determining the placement of inputs and outputs to provide healthy interactions between the users and the system. Characteristics of the smart work spaces and the type of tasks being performed, such as, but not limited to, length of time of interactions, incidence/rate of interactions, location of the user(s), height differences among multiple users, body posture, neck position, viewing distance and direction of activity and relevant objects, potential for repetitive trauma (injuries that occur over time because of repeated bad actions), etc., may also be taken into account when determining the placement of inputs and outputs to provide healthy interactions between the users and the system. These and other aspects of the present disclosure will be more fully described below.
Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (B and C); (A and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (B and C); (A and C); or (A, B, and C).
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). As used herein, the terms “logic” and “module” may refer to, be part of, or include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs having machine instructions (generated from an assembler and/or a compiler), a combinational logic circuit, and/or other suitable components that provide the described functionality.
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, it may not be included or may be combined with other features.
The input devices 102a-n may include sensors, cameras, microphones, chemical sensors, proximity sensors, keyboards, touch surfaces, projected input devices, etc., or any combination thereof. Cameras may include various types, such as, for example, two-dimensional (2D), three-dimensional (3D), depth, arrays, and/or fish-eye cameras. In one embodiment, the input devices 102a-n may include single sensors forming a wireless sensor network, with each sensor spatially distributed within the smart space to monitor physical and/or environmental conditions. Each sensor may have the ability to pass its data through the network to the ergonomic control system 104. Information from the ergonomic control system 104 may also be passed to the input devices 102a-n. In another embodiment, input devices 102a-n may include arrays of sensors forming a wireless sensor network, with each array spatially distributed within the smart space to monitor physical and/or environmental conditions. Yet, in another embodiment, the input devices 102a-n may include both single sensors and arrays of sensors. Array sensors may be in a certain geometric pattern, such as, for example, linear and circular, or they may be randomly spaced.
The ergonomic system architecture 100 includes a plurality of output devices 106a-n. Output devices 106a-n may include, but are not limited to, projected displays, liquid crystal displays (LCD), light-emitting diode (LED) displays, speakers, smell generators, drones/robots, haptic surfaces, etc., or any combination thereof.
In one embodiment, the ergonomic control system 104 may be an edge server located in the home. In another embodiment, the ergonomic control system 104 may be on a server in the cloud. In yet another embodiment, the ergonomic control system 104 may be on a local server at a company, university, college, or any entity having at least one smart work space.
The ergonomic control system 104 comprises a computing system 108, a user profile database 110, a context engine 112, a task modeler 114, a posture modeler 116, an activity history database 118, a hazard evaluation module 120, a trade-off module 122, and an application 124 (e.g., providing content and/or information).
The computing system 108 may be used to perform user recognition using various means, such as, for example, video and audio. In one embodiment, user recognition may occur using video. In such an embodiment, an image of a user may be taken via a video source, such as, for example, one or more cameras from the input devices 102a-n, and face recognition techniques may be performed on the image to identify the user. Face recognition techniques are well known to those skilled in the relevant art(s). In another embodiment, user recognition may occur using audio. In such an embodiment, a user's voice may be captured via a microphone source, such as, for example, one or more microphones from the input devices 102a-n, and speaker recognition techniques may be performed on the voice captured to identify the user. Speaker recognition techniques are also well known to those skilled in the relevant art(s).
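By way of illustration only, the following Python sketch shows how such recognition might be dispatched between video and audio sources. The recognize_face and recognize_speaker helpers are hypothetical stand-ins for any off-the-shelf face and speaker recognition routines and are not part of the disclosed system.

```python
# Hypothetical sketch only: recognize_face() and recognize_speaker() stand in
# for any face/speaker recognition library and are not part of the disclosure.
def identify_user(frame=None, audio_clip=None,
                  recognize_face=None, recognize_speaker=None):
    """Return a user id from a camera frame and/or microphone clip, or None."""
    if frame is not None and recognize_face is not None:
        user_id = recognize_face(frame)        # e.g., face-embedding lookup
        if user_id is not None:
            return user_id
    if audio_clip is not None and recognize_speaker is not None:
        return recognize_speaker(audio_clip)   # e.g., speaker-embedding lookup
    return None
```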
The computing system 108 may also create a user profile for a newly recognized user. User profiles may include body dimension information, such as, for example, estimated height, weight, and age, health issues, such as, for example, hearing and vision, and other personal information. User profiles may be stored in the user profile database 110. The computing system 108 may also update user profiles periodically. The updates may be made manually by the user or with the help of an administrator of the ergonomic control system 104, or automatically when the system detects a change with respect to a user. The computing system 108 is further described below.
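A minimal sketch of what a stored profile might look like is given below; the field names and the update mechanism are illustrative assumptions rather than the disclosed schema of the user profile database 110.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserProfile:
    """Illustrative record for the user profile database 110 (field names assumed)."""
    user_id: str
    estimated_height_cm: float = 0.0
    estimated_weight_kg: float = 0.0
    estimated_age: int = 0
    health_issues: List[str] = field(default_factory=list)  # e.g., "low vision"

    def update(self, **changes) -> None:
        # Manual, administrator-driven, or automatic updates all funnel here.
        for key, value in changes.items():
            if hasattr(self, key):
                setattr(self, key, value)
```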
The context engine 112 receives input information from the input devices 102a-n as context information to determine location, identity, activity, and/or time using well-known context aware techniques. The context engine 112 collects the input information, analyzes the information, and provides context information to various components of the ergonomic control system 104 to aid the system 100 in providing ergonomics and comfort to the user(s) of the smart work space. For example, the context engine 112 determines user locations and the activity to be performed within the smart work space. With the information received from one or more camera input devices 102 within the smart work space, the context engine 112 may determine the proximity of one or more users to resources, such as, for example, screens and keyboards within the smart work space. The information from the one or more camera input devices 102 may also allow the context engine 112 to determine the state of a user's hands (i.e., whether a user's hands are messy or clean), obstructions to a user's senses, such as, for example, whether the user is wearing headphones, gloves, or sunglasses, and the availability of wearable or mobile devices the system may leverage. This information may be useful when determining what type of inputs and outputs to provide to a user. For example, one would not want to provide a touch surface if the user's hands are messy. With the information received from one or more microphone input devices 102 within the smart work space, the context engine 112 may detect whether a conversation is taking place. The context engine 112 may also view scheduled activities on a user's calendar. This information may be used, in conjunction with other sensed data, to determine the activity to be performed.
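The following sketch gathers the kinds of context information described above into a single record; the field names are assumptions chosen for illustration, not the disclosed interface of the context engine 112.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ContextSnapshot:
    """Illustrative bundle of context produced by the context engine 112."""
    user_locations: Dict[str, Tuple[float, float]] = field(default_factory=dict)
    activity: str = ""                                      # e.g., "cooking"
    hands_messy: Dict[str, bool] = field(default_factory=dict)
    sense_obstructions: Dict[str, List[str]] = field(default_factory=dict)  # e.g., ["headphones"]
    conversation_in_progress: bool = False
    available_wearables: Dict[str, List[str]] = field(default_factory=dict)
```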
The task modeler 114 may receive input from the application 124 and from the activity history database 118. The input from the application 124 may relate to the task that will be performed. The task determines what information may need to be displayed and what input the user may need to provide. For example, if the task is cooking, then the application may be a recipe. The system will then determine the user interfaces needed to display the recipe to the user(s)/cook(s) preparing it.
The activity history database 118 stores information about how long previous interactions took, such as, for example, how long a user held a specific posture while doing an interaction. The activity history database 118 also stores information on the previous history of injuries caused by a particular posture. This information is also taken into consideration by the posture modeler 116 (to be discussed below) when predicting postures for users to perform an interaction in the smart space.
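As an illustration of how such history might be stored and queried, the sketch below assumes a simple record per interaction and a naive similarity criterion (matching task names); both are assumptions, not the disclosed design of the activity history database 118.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List, Tuple

@dataclass
class ActivityRecord:
    """Illustrative entry in the activity history database 118."""
    task: str                # e.g., "cooking"
    interaction: str         # e.g., "view projected recipe"
    duration_s: float        # how long the interaction took
    posture: str             # posture held during the interaction
    posture_hold_s: float    # how long that posture was held
    injury_reported: bool = False

def estimate_interaction_stats(history: List[ActivityRecord],
                               task: str) -> Tuple[int, float]:
    """Rough incidence and average duration estimated from similar past tasks."""
    similar = [r for r in history if r.task == task]
    if not similar:
        return 0, 0.0
    return len(similar), mean(r.duration_s for r in similar)
```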
Task modeler 114 estimates the length of time of interactions and the incidence of interactions based on the activity history. In one embodiment, the task modeler 114 creates a list of potential interaction alternatives for inputs and outputs, including where touch surfaces should be placed as well as the location of projected information. Convenience of the location of the input(s) and output(s) (also referred to as the “user interfaces”), especially proximity to the users, may be strongly considered by the task modeler 114. The ergonomic control system 104 may predict the best input(s) and output(s) (i.e., user interfaces) to use for a given task and continue monitoring the interaction and situational context, in turn, adjusting the locations of the input(s) and output(s) accordingly or providing suggestions and alternatives. For example, in group settings people may come and go or different people may start interacting with the system. Scenarios in which people are moving require constant monitoring and adjusting of the locations of input(s) and output(s) (i.e., user interfaces). Also, when multiple people work together, the system may optimize input(s) and output(s) across all people involved, taking into account each person's characteristics, past histories, and user profiles.
In ambient computing environments, the length of time of an interaction and incidence of a required interaction may vary a great deal. In ambient computing environments where the length of time of an interaction is relatively short and the incidence of a required interaction is minimal, the location of input(s) and output(s) may lean toward convenience rather than ergonomics. In ambient computing environments where the length of time of an interaction is longer and the incidence of a required interaction occurs regularly or repeats several times, the location of input(s) and output(s) may lean toward ergonomically sound decisions rather than convenience. For example, less comfortable outputs, if relatively rare, may be allowed in a trade-off for convenience, safety, visibility, or other factors.
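A minimal sketch of this lean between convenience and ergonomics is shown below; the numeric thresholds are placeholders, since the disclosure only states the qualitative trade-off.

```python
def interaction_bias(incidence: int, duration_s: float,
                     rare: int = 3, short_s: float = 60.0) -> str:
    """Return which factor to favor when placing an input/output.

    The thresholds (3 occurrences, 60 seconds) are arbitrary placeholders;
    the disclosure only states that short, rare interactions may favor
    convenience while longer or repeated interactions favor ergonomics.
    """
    if incidence <= rare and duration_s <= short_s:
        return "convenience"
    return "ergonomics"
```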
The hazard evaluation module 120 determines whether the placement of system input(s) and output(s) (i.e., user interfaces) may cause the user to have contact with dangerous surfaces, fire, sharp tools, etc. For example, if one of the alternatives would place the display of the recipe on or near the hot stove top, the hazard evaluation module 120 may flag that alternative so that it is removed from consideration.
The trade-off module 122 receives inputs from the context engine 112, the posture modeler 116, the task modeler 114, and the hazard evaluation module 120, and evaluates each potential interaction alternative. The trade-off module 122 uses a formula or set of heuristics for choosing from the list of potential interaction alternatives. Each alternative may be weighted or scored for desired use. In one embodiment, each interaction may be scored with normalized scores as: (incidence + length of time + posture health + convenience) × hazard. The interactions with the highest scores may be used for a given task. In this example, a ‘0’ for hazard, indicating a bad score, multiplies the entire score by zero, thus removing that interaction from the consideration list.
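The sketch below implements that heuristic under the assumption that all factors are normalized to the range 0 to 1 and weighted equally; the disclosure does not fix particular weights or ranges.

```python
def score_alternative(incidence: float, duration: float,
                      posture_health: float, convenience: float,
                      hazard: int) -> float:
    """Score one interaction alternative per the heuristic above.

    All factors are assumed to be normalized to [0, 1]; hazard is 1 for a
    safe alternative and 0 for a hazardous one, so a hazardous alternative
    scores 0 and drops out of consideration.
    """
    return (incidence + duration + posture_health + convenience) * hazard

# Example: a frequent, lengthy, ergonomically sound, convenient, safe display
# score_alternative(0.8, 0.7, 0.9, 0.6, hazard=1) == 3.0
```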
Note that some of the above-mentioned functions may be executed in different modules. For example, instead of the task modeler determining convenience of projected displays, the trade-off module may include that as part of the trade-off formula.
The application 124 provides the system with information relating to the task to be performed. For example, if the task is cooking, the application 124 may be a recipe application that provides the recipe content to be displayed.
Embodiments of the system track users in a smart space to ensure that ambient computing inputs and outputs are projected into locations that provide a ubiquitous computing environment that is comfortable and prevents bad posture and repetitive trauma (injuries that occur over time due to repeated actions). The system takes into account ergonomic and comfort factors such as, but not limited to, length of time of interactions, incidence/rate of interactions, history of previous injuries, locations of users, viewing distance and direction of attended activity and relevant objects, body posture, neck position, occlusions from users point of view, messy hands, potential for repetitive trauma injuries, height differences among co-viewers of a display, comfort and safety measures when interacting/touching surfaces of various temperatures, health conditions of individuals, such as, for example, physical challenges of the user whether temporary or permanent, obstruction of display or touch surfaces by people or objects, etc. Ambient computing input(s) and output(s) may include projected images or other displays, wearable displays that may include augmented reality, such as, for example, a head mounted display with augmented reality, dynamic touch surfaces, for example, touch gestures captured by one or more cameras, audio outputs, which may include localized 3D audio, haptic output on surfaces, chemical output for odor generation, interaction of environment with wearable devices on the body for input and output, and other types of input(s) and output(s) that may be projected.
For example, computer program code to carry out operations shown in the method 500 may be written in any combination of one or more programming languages, including an object-oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. Additionally, logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, state setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry, and/or other structural components that are native to hardware (e.g., host processor, central processing unit (CPU), microcontroller, etc.).
The method 500 for monitoring and adjusting a smart work space to provide healthy ergonomic interactions begins in block 502, where the process immediately proceeds to block 504. In block 504, the ergonomic control system determines a task to be performed in a smart work space. This process is described in further detail below.
In block 506, task modeling is performed. This includes determining the projected input and output needed to perform the task. Task modeling is described in further detail below.
In block 508, optimal placement of inputs and outputs based on ergonomics and comfort is determined. Placement of projected inputs and outputs depends on such factors as incidence, length of time of interaction, posture health, convenience, and safety. This process is described in further detail below.
In block 510, the selected input and output are projected into the smart workspace. The process then proceeds to block 512.
In block 512, the activity history database is updated. The updates include such information as, for example, the interaction, how long the interaction took, and how long a user held a specific posture while doing the interaction. In block 512, user profiles are also updated. At this time, any information gleaned from the interaction that is helpful to know about the user for future interactions is recorded in the user profile. The process then proceeds to block 514.
In block 514, the smart workspace is continuously monitored for changes that may affect the placement of the inputs and outputs. For example, a note placed on a counter may now need to be placed on a wall due to the movement of the user. The process proceeds to decision block 516.
In decision block 516, it is determined whether inputs or outputs need to be changed. This includes not only changing the type of input and output, but also changing the position of one or more inputs and outputs. If it is determined that changes need to be made, the process proceeds back to block 506 to perform task modeling.
Returning to decision block 516, if it is determined that no changes need to be made at this time, the process proceeds back to block 514, where the ergonomic control system 104 continuously monitors the smart workspace for changes.
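Taken together, blocks 504 through 516 amount to a monitor-and-adjust loop. The sketch below mirrors that loop; the methods on the system object are hypothetical placeholders for the blocks described above, not a disclosed interface.

```python
import time

def ergonomic_control_loop(system, poll_interval_s: float = 1.0) -> None:
    """Illustrative monitor-and-adjust loop mirroring blocks 504-516.

    The methods on `system` (determine_task, model_task, place_interfaces,
    project, update_history, detect_changes) are hypothetical placeholders.
    """
    task = system.determine_task()                  # block 504
    while True:
        plan = system.model_task(task)              # block 506
        placement = system.place_interfaces(plan)   # block 508
        system.project(placement)                   # block 510
        system.update_history(task, placement)      # block 512
        while not system.detect_changes():          # blocks 514 and 516
            time.sleep(poll_interval_s)
        # a change was detected: loop back to task modeling (block 506)
```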
The method 524 for determining a task to be performed in a smart workspace begins in block 522, and immediately proceeds to block 524. In block 524, the ergonomic control system 104 continuously receives input data from a plurality of input sensors, cameras, and microphones (102a-n). The plurality of input sensors, cameras, and microphones (102a-n) are strategically placed throughout the smart work space. Using technologies such as, for example, face recognition, speech/speaker recognition, and context awareness, the context engine 112 may discern context information about the smart workspace from the input sensors, cameras, and microphones (102a-n). The process proceeds to block 526.
In block 526, the users in the smart workspace are identified. Users may be identified through face recognition techniques using video input from one or more cameras. Users may also be identified through speaker recognition techniques using voice input from one or more microphones. Face recognition and speaker recognition techniques are well known to those skilled in the relevant art(s). The process proceeds to decision block 528.
In decision block 528, it is determined if there are any new users of the smart workspace. If there are new users of the smart workspace, the process proceeds to block 530.
In block 530, user profiles for the new users are created. The process then proceeds to decision block 532.
Returning to decision block 528, if there are no new users, the process proceeds to decision block 532.
In decision block 532, it is determined if there are any user profile updates. If there are any user profile updates, the process proceeds to block 534. In block 534, user profiles are updated. The process then proceeds to block 536.
Returning to decision block 532, if there are no user profile updates to be made, the process proceeds to block 536.
In block 536, the task to be performed is identified using context engine 112. Context engine 112 uses well known context aware techniques to analyze the input data collected from the input sensors, cameras, and microphones (102a-n) in the smart workspace. User locations may also be determined using the context engine as well as the input cameras and microphones.
The method 540 for performing task modeling begins in block 542, and immediately proceeds to block 544. In block 544, the task modeler 114 determines what information needs to be displayed in order to perform the identified task. The task modeler 114 uses information that it receives from the application 124 as well as activity history from the activity history database 118 to determine what information may need to be displayed to perform the task. For example, if the task modeler 114 receives content indicating a recipe to make lasagna, the task modeler can review the activity history database 118 to find previous cooking tasks to see what was displayed. The process then proceeds to block 546.
In block 546, the task modeler 114 determines what inputs the user may need to provide the ergonomic control system 104. Again, the task modeler 114 may use information that it receives from the application 124 as well as activity history from the activity history database 118 to determine what inputs the user may need to provide the system. The task modeler 114 may look through the activity history database 118 to find previous tasks of a similar kind to determine what inputs a user might need. The process then proceeds to block 548.
In block 548, the task modeler 114 may estimate incidence/rate of interactions and length of time of interactions. The task modeler 114 may utilize the activity history database 118 and the input from the application 124 to provide this estimate. The process then proceeds to block 550.
In block 550, the task modeler 114 also creates a list of potential interaction alternatives for inputs and outputs (i.e., user interfaces). Convenience of the inputs and outputs, especially proximity to the users, is strongly considered by the task modeler 114. In instances where the rate of interactions is low and the length of time of interactions is short, the task modeler 114 may lean toward providing projected inputs and outputs that are placed in convenient locations that temporarily compromise optimal ergonomic position. On the other hand, instances in which the rate of interactions is high and the length of time of interactions is long will cause the task modeler 114 to lean more toward providing ergonomically sound positions within the smart workspace. The process then proceeds to block 552.
In block 552, hazard evaluations are performed for each item on the list of potential interaction alternatives using the hazard evaluation module 120. The hazard evaluation module 120 determines whether projected inputs and outputs may cause a user to have contact with dangerous surfaces, fire, sharp tools, and other dangerous items. If an item on the list of potential interaction alternatives is found to be hazardous, the item is removed from the list.
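A minimal sketch of that filtering step follows; the is_hazardous predicate is a placeholder for whatever checks the hazard evaluation module 120 performs.

```python
def filter_hazards(alternatives, is_hazardous):
    """Drop any interaction alternative flagged as unsafe.

    `is_hazardous` is a placeholder predicate standing in for the checks
    performed by the hazard evaluation module 120 (e.g., proximity to a hot
    stove top or sharp tools).
    """
    return [alt for alt in alternatives if not is_hazardous(alt)]
```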
The method 564 for determining optimal placement of inputs and outputs based on ergonomics and comfort begins in block 562, where it immediately proceeds to block 564. In block 564, a posture modeler 116 uses user profile information and task modeling information to predict user postures to perform an interaction with the system. An interaction may be, for example, viewing a projected display or keying in information using a projected input device. The posture modeler 116 includes information about healthy postures and the length of time allowed for various postures. The posture modeler 116 uses this information along with body dimensions for each user (obtained from the user's user profile) and the information from the task modeler 114 (estimated rate of interaction/incidence and length of time of interaction) to predict user postures to perform an interaction in block 566. The posture modeler 116 also estimates user gaze direction to enable the task modeler 114 to choose projected display locations in block 568. The posture modeler 116 may also take into account voice commands, visual cues such as neck rubbing, or less conscious verbal utterances that may indicate fatigue. The process then proceeds to block 570.
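One possible, greatly simplified posture-health estimate is sketched below; the neck-angle and hold-time limits are placeholder values, since the disclosure specifies only that healthy postures and allowable hold times are weighed.

```python
def posture_health(neck_angle_deg: float, hold_time_s: float,
                   max_comfortable_angle: float = 15.0,
                   max_hold_s: float = 300.0) -> float:
    """Illustrative posture-health score in [0, 1] (1.0 is ideal).

    The angle and hold-time limits are placeholder values; the disclosure
    only states that healthy postures and allowable hold times are weighed.
    """
    angle_penalty = min(abs(neck_angle_deg) / max_comfortable_angle, 1.0)
    time_penalty = min(hold_time_s / max_hold_s, 1.0)
    return max(0.0, 1.0 - 0.5 * (angle_penalty + time_penalty))
```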
In block 570, trade-off analysis is performed using the trade-off module 122. The trade-off module 122 receives input from the context engine 112, task modeler 114, posture modeler 116, and hazard evaluation module 120 in block 572. The context engine 112 provides information such as the location of the users within the smart workspace, the task, whether a user's hands are messy or clean (this information is useful to determine whether the user can input information through a touch surface), and proximity to resources such as, for example, screens and keyboards. The task modeler 114 provides a list of potential interaction alternatives for inputs and outputs. The posture modeler 116 may provide the user posture predictions as well as the estimated user gaze direction for the users in the smart workspace. The hazard evaluation module may provide a “0” or a “1” to indicate whether a given potential interaction alternative is hazardous. If the indication is a “0”, the potential interaction alternative is hazardous. If the indication is a “1”, the potential interaction alternative is not hazardous. In block 574, the trade-off module 122 applies a formula or set of heuristics for choosing from the list of potential interaction alternatives. Each alternative may be weighted or scored for desired use. In one embodiment, each interaction may be scored with normalized scores as: (incidence + length of time of interaction + posture health + convenience) × hazard. If the hazard is indicated as a “0”, the resulting score will be zero for that interaction, thus removing it from the consideration list. The interactions with the highest scores may be used for a given task. The process then proceeds to block 576.
In block 576, the projected input(s) and/or output(s) are selected.
Note that there may be instances in which a projected input or output is not needed due to the smart workspace already having an input or output device available for use during the task. In this instance, an LED on or near the device may be used as an indicator to alert the user to view the device or key in data using the device.
The method 580 starts at a point where the system has identified an interaction and now needs to determine the inputs/outputs to use. The process begins in block 581, where the process immediately proceeds to block 582.
In block 582, the system receives the required interaction. The process then proceeds to block 583.
In block 583, the system monitors the location of the users in the smart workspace. In some embodiments, the context engine 112 may determine user locations based on the information received from the input devices 102a-n. The process then proceeds to block 584.
In block 584, the user profiles of the users in the smart workspace are retrieved from the user profile database. The process then proceeds to block 585.
In block 585, the user profiles are used by the posture modeler 116 to model user postures for the interaction. The process then proceeds to block 586.
In block 586, the task modeler 114 estimates the incidence of interaction. In an embodiment, the task modeler 114 may search the activity history database 118 to find interactions of a similar nature to see what the incidence of interaction was and make an estimate of the incidence of interaction based on previous similar interactions. The process then proceeds to block 587.
In block 587, the task modeler 114 estimates the length of time of interaction in a manner similar to estimating the incidence of interaction as described above. The process then proceeds to block 588.
In block 588, a list of acceptable inputs/outputs is reviewed. The process then proceeds to block 589.
In block 589, convenience factors are determined for the list of acceptable inputs and outputs. For example, if the incidence is low and the length of interaction is short, then a more convenient location for projection of an input or output may be chosen. If, for example, the incidence is high and the length of interaction is long, then a location that provides good ergonomics, such as posture, neck position, etc., may be chosen. The process proceeds to block 590.
In block 590, information such as incidence, length of time of interaction, posture health, and convenience are provided as inputs to the trade-off module 122. The trade-off module analyzes each of the acceptable inputs/outputs on the list using the formula given above, or another formula that takes into consideration all of the factors previously discussed, such as, for example, incidence, length of time of interaction, posture health, and convenience. The trade-off module 122 scores each acceptable input/output on the list and selects the inputs/outputs with the highest scores. The process then proceeds to block 592.
In block 592, the hazard evaluation module determines whether the selected inputs and outputs from the trade-off module 122 may cause a user to have contact with dangerous surfaces, fire, sharp tools, or other hazards. If any of the selected inputs and outputs would be a danger to a user, the system has identified a safety issue. The process then proceeds to decision block 593.
In decision block 593, it is determined if a safety issue has been identified. If a safety issue has been identified, then the process proceeds to block 594.
In block 594, the list of acceptable inputs/outputs is updated by removing the entry that has resulted in a safety issue. In some embodiments, the system may look for an alternative as a replacement to add to the list.
Returning to block 593, if a safety issue has not been identified, then the process proceeds to block 595. In block 595, the system provides the inputs/outputs as identified by the trade-off module 122.
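Blocks 588 through 595 can be read as a score-check-retry loop. The following sketch reflects that reading; the score function and is_hazardous predicate are placeholders consistent with the earlier sketches, not a disclosed implementation.

```python
def select_io(alternatives, score, is_hazardous):
    """Illustrative selection loop for blocks 588-595 of method 580.

    `alternatives` is the list of acceptable inputs/outputs, `score` is a
    scoring function such as the one sketched earlier, and `is_hazardous`
    stands in for the hazard evaluation module 120.
    """
    candidates = list(alternatives)
    while candidates:
        best = max(candidates, key=score)      # block 590: highest score wins
        if not is_hazardous(best):             # blocks 592-593: safety check
            return best                        # block 595: provide the I/O
        candidates.remove(best)                # block 594: drop unsafe entry
    return None                                # nothing safe remains
```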
The network interface circuitry 610 may receive sensor input data from a plurality of input devices such as, for example, the input devices 102a-n.
The processor core 800 is shown including execution logic 850 having a set of execution units 855-1 through 855-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 850 performs the operations specified by code instructions.
After completion of execution of the operations specified by the code instructions, back end logic 860 retires the instructions of the code 805. In one embodiment, the processor core 800 allows out of order execution but requires in order retirement of instructions. Retirement logic 865 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 800 is transformed during execution of the code 805, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 825, and any registers (not shown) modified by the execution logic 850.
The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in
Each processing element 1070, 1080 may include at least one shared cache 1896a, 1896b. The shared cache 1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b, respectively. For example, the shared cache 1896a, 1896b may locally cache data stored in a memory 1032, 1034 for faster access by components of the processor. In one or more embodiments, the shared cache 1896a, 1896b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.
While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of the processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array. For example, additional processing element(s) may include additional processor(s) that are the same as the first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package.
The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088.
The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076 and 1086, respectively.
In turn, I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited.
Note that other embodiments are contemplated. For example, instead of the point-to-point architecture described above, other interconnect or bus topologies may be used.
Example 1 may include an ergonomic control system comprising network interface circuitry to receive sensor input data from a plurality of input sensor devices, a processor coupled to the network interface circuitry, one or more memory devices coupled to the processor, the one or more memory devices including instructions, which when executed by the processor, cause the system to determine a task to be performed in a smart work space, perform task modeling, wherein task modeling includes determining one or more user interfaces involved with the task, determine one or more placements for the one or more user interfaces based on one or more ergonomic conditions, an incidence of an interaction, and a length of time of the interaction, and position the one or more user interfaces into the smart work space in accordance with the determined one or more placements.
Example 2 may include the ergonomic control system of Example 1, wherein the instructions, when executed, cause the computing system to predict user postures to perform the interaction using user profile information and task modeling information, wherein if a user does not have a user profile, the instructions, when executed, further cause the computing system to create a user profile for the user.
Example 3 may include the ergonomic control system of Example 1, wherein the instructions to cause the computing system to perform task modeling further includes instructions to cause the computing system to determine what information is to be displayed to perform the task, determine what inputs the user is to provide to the system to perform the task, estimate the incidence of the interaction and the length of time of the interaction, and create a list of potential interaction alternatives based on convenience of the one or more user interfaces.
Example 4 may include the ergonomic control system of Example 3, wherein the instructions, when executed, cause the computing system to evaluate each of the potential interaction alternatives on the list to determine whether any safety hazards are involved, wherein if a potential interaction alternative includes at least one safety hazard, the instructions, when executed, further cause the computing system to remove the potential interaction alternative from the list.
Example 5 may include the ergonomic control system of Example 4, wherein each of the potential interaction alternatives on the list is weighted and scored as a function of the incidence of the interaction, the length of time of the interaction, a posture health parameter, a user convenience parameter, and a hazard parameter, wherein the potential interaction alternatives with highest scores are used as the one or more user interfaces.
Example 6 may include the ergonomic control system of Example 3, wherein the instructions to cause the computing system to estimate the incidence and length of time of the interaction include further instructions, that when executed, cause the computing system to review entries of similar tasks in an activity history database, the activity history database includes information on how long previous interactions took and how long a user held a specific posture while performing the interaction and if any injuries occurred based on the specific posture.
Example 7 may include the ergonomic control system of Example 1, wherein user interface inputs include one or more of projected input devices, dynamic touch surfaces, keyboards, and wearable device environmental inputs.
Example 8 may include the ergonomic control system of Example 1, wherein user interface outputs include one or more of projected images, projected displays, physical displays, drones, robots, augmented reality wearable displays, speakers, audio outputs, haptic surfaces, odor generation outputs, and wearable device environmental outputs.
Example 9 may include the ergonomic control system of any one of Examples 1 to 8, wherein if the incidences are relatively minimal, the length of time of the interaction is relatively short and placement of the one or more user interfaces is for a relatively short period of time, further instructions, when executed, cause the computing system to position the one or more user interfaces in a location having a relatively high convenience to a user performing the task.
Example 10 may include the ergonomic control system of any one of Examples 1 to 8, wherein if there are multiple incidences, the length of time of interaction is relatively long and placement of the one or more user interfaces is for a relatively long period of time, further instructions, when executed, cause the computing system to position the one or more user interfaces in a location that provides a relatively high ergonomic result to a user performing the task.
Example 11 may include an ergonomic work space apparatus comprising a substrate, and logic coupled to the substrate, wherein the logic includes one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the substrate to determine a task to be performed in a smart work space, perform task modeling, wherein task modeling includes determining one or more user interfaces involved with the task, determine one or more placements for the one or more user interfaces based on one or more ergonomic conditions, an incidence of an interaction, and a length of time of the interaction, and position the one or more user interfaces into the smart work space in accordance with the determined one or more placements.
Example 12 may include the apparatus of Example 11, wherein the logic coupled to the substrate is to perform posture modeling, wherein posture modeling includes predicting user postures to perform the interaction using user profile information and task modeling information, wherein if a user does not have a user profile, the logic coupled to the substrate is to create a user profile for the user.
Example 13 may include the apparatus of Example 11, wherein the logic coupled to the substrate to perform task modeling further includes logic coupled to the substrate to determine what information is to be displayed to perform the task, determine what inputs the user is to provide to the system to perform the task, estimate the incidence of the interaction and the length of time of the interaction, and create a list of potential interaction alternatives based on convenience of the one or more user interfaces.
Example 14 may include the apparatus of Example 13, wherein the logic coupled to the substrate is to evaluate each potential interaction alternative on the list to determine whether any safety hazards are involved, wherein if a potential interaction alternative includes at least one safety hazard, the logic coupled to the substrate is to remove the potential interaction alternative from the list.
Example 15 may include the apparatus of Example 14, wherein each potential interaction alternative on the list is weighted and scored as a function of the incidence of the interaction, the length of time of the interaction, a posture health parameter, a user convenience parameter, and a hazard parameter, wherein the potential interaction alternatives with highest scores are used as the one or more user interfaces.
Example 16 may include the apparatus of Example 13, wherein the logic coupled to the substrate to estimate the incidence and length of time of the interaction includes logic coupled to the substrate to review entries of similar tasks in an activity history database, the activity history database includes information on how long previous interactions took and how long a user held a specific posture while performing the interaction and if any injuries occurred based on the specific posture.
Example 17 may include the apparatus of Example 11, wherein user interface inputs include one or more of projected input devices, dynamic touch surfaces, keyboards, and wearable device environmental inputs.
Example 18 may include the apparatus of Example 11, wherein user interface outputs include one or more of projected images, projected displays, physical displays, drones, robots, augmented reality wearable displays, speakers, audio outputs, haptic surfaces, odor generation outputs, and wearable device environmental outputs.
Example 19 may include the apparatus of any one of Examples 11 to 18, wherein if the incidences are relatively minimal, the length of time of the interaction is relatively short and placement of the one or more user interfaces is for a relatively short period of time, the logic coupled to the substrate to position the one or more user interfaces in a location having a relatively high convenience to a user performing the task.
Example 20 may include the apparatus of any one of Examples 11 to 18, wherein if there are multiple incidences, the length of time of interaction is relatively long and placement of the one or more user interfaces is for a relatively long period of time, the logic coupled to the substrate to position the one or more user interfaces in a location that provides a relatively high ergonomic result to a user performing the task.
Example 21 may include a method of providing smart work spaces in ubiquitous computing environments, comprising determining a task to be performed in a smart work space, performing task modeling, wherein task modeling includes determining one or more user interfaces involved with the task, determining one or more placements for the one or more user interfaces based on one or more ergonomic conditions, an incidence of an interaction, and a length of time of the interaction, and positioning the one or more user interfaces into the smart work space in accordance with the determined one or more placements.
Example 22 may include the method of Example 21, further comprising performing posture modeling, wherein posture modeling includes predicting user postures to perform the interaction using user profile information and task modeling information, wherein if a user does not have a user profile, the method further comprising creating a user profile.
Example 23 may include the method of Example 21, wherein task modeling includes determining what information is to be displayed to perform the task, determining what inputs the user is to provide to the system to perform the task, estimating the incidence of the interaction and the length of time of the interaction, and creating a list of potential interaction alternatives based on convenience of the one or more user interfaces.
Example 24 may include the method of Example 23, further comprising evaluating each potential interaction alternative on the list to determine whether any safety hazards are involved, wherein if a potential interaction alternative includes at least one safety hazard, removing the potential interaction alternative from the list.
Example 25 may include the method of Example 24, wherein each potential interaction alternative on the list is weighted and scored as a function of the incidence of the interaction, the length of time of the interaction, a posture health parameter, a user convenience parameter, and a hazard parameter, wherein the potential interaction alternatives with highest scores are used as the one or more user interfaces.
Example 26 may include the method of Example 23, wherein estimating the incidence and length of time of the interaction includes reviewing entries of similar tasks in an activity history database, the activity history database includes information on how long previous interactions took and how long a user held a specific posture while performing the interaction and if any injuries occurred based on the specific posture.
Example 27 may include the method of Example 21, wherein user interface inputs include one or more of projected input devices, dynamic touch surfaces, keyboards, and wearable device environmental inputs.
Example 28 may include the method of Example 21, wherein user interface outputs include one or more of projected images, projected displays, physical displays, drones, robots, augmented reality wearable displays, speakers, audio outputs, haptic surfaces, odor generation outputs, and wearable device environmental outputs.
Example 29 may include the method of any one of Examples 21 to 28, wherein if the incidences are relatively minimal, the length of time of the interaction is relatively short and placement of the one or more user interfaces is for a relatively short period of time, positioning the one or more user interfaces in a location having a relatively high convenience to a user performing the task.
Example 30 may include the method of any one of Examples 21 to 28, wherein if there are multiple incidences, the length of time of interaction is relatively long and placement of the one or more user interfaces is for a relatively long period of time, positioning the one or more user interfaces in a location that provides a relatively high ergonomic result to a user performing the task.
Example 31 may include at least one computer readable storage medium comprising a set of instructions, which when executed by a computing system, cause the computing system to determine a task to be performed in a smart work space, perform task modeling, wherein task modeling includes determining one or more user interfaces involved with the task, determine one or more placements for the one or more user interfaces based on one or more ergonomic conditions, an incidence of an interaction, and a length of time of the interaction, and position the one or more user interfaces into the smart work space in accordance with the determined one or more placements.
Example 32 may include the at least one computer readable storage medium of Example 31, wherein the instructions, when executed, cause the computing system to predict user postures to perform the interaction using user profile information and task modeling information, wherein if a user does not have a user profile, the instructions, when executed, further cause the computing system to create a user profile for the user.
Example 33 may include the at least one computer readable storage medium of Example 31, wherein the instructions to cause the computing system to perform task modeling further includes instructions to cause the computing system to determine what information is to be displayed to perform the task, determine what inputs the user is to provide to the system to perform the task, estimate the incidence of the interaction and the length of time of the interaction, and create a list of potential interaction alternatives based on convenience of the one or more user interfaces.
Example 34 may include the at least one computer readable storage medium of Example 33, wherein the instructions, when executed, cause the computing system to evaluate each potential interaction alternative on the list to determine whether any safety hazards are involved, wherein if a potential interaction alternative includes at least one safety hazard, the instructions, when executed, further cause the computing system to remove the potential interaction alternative from the list.
Example 35 may include the at least one computer readable storage medium of Example 34, wherein each potential interaction alternative on the list is weighted and scored as a function of the incidence of the interaction, the length of time of the interaction, a posture health parameter, a user convenience parameter, and a hazard parameter, wherein the potential interaction alternatives with highest scores are used as the one or more user interfaces.
Example 36 may include the at least one computer readable storage medium of Example 33, wherein the instructions to cause the computing system to estimate the incidence and length of time of the interaction include further instructions, that when executed, cause the computing system to review entries of similar tasks in an activity history database, the activity history database includes information on how long previous interactions took and how long a user held a specific posture while performing the interaction and if any injuries occurred based on the specific posture.
Example 37 may include the at least one computer readable storage medium of Example 31, wherein user interface inputs include one or more of projected input devices, dynamic touch surfaces, keyboards, and wearable device environmental inputs.
Example 38 may include the at least one computer readable storage medium of Example 31, wherein user interface outputs include one or more of projected images, projected displays, physical displays, drones, robots, augmented reality wearable displays, speakers, audio outputs, haptic surfaces, odor generation outputs, and wearable device environmental outputs.
Example 39 may include the at least one computer readable storage medium of any one of Examples 31 to 38, wherein if the incidences are relatively minimal, the length of time of the interaction is relatively short and placement of the one or more user interfaces is for a relatively short period of time, further instructions, when executed, cause the computing system to position the one or more user interfaces in a location having a relatively high convenience to a user performing the task.
Example 40 may include the at least one computer readable storage medium of any one of Examples 31 to 38, wherein if there are multiple incidences, the length of time of interaction is relatively long and placement of the one or more user interfaces is for a relatively long period of time, further instructions, when executed, cause the computing system to position the one or more user interfaces in a location that provides a relatively high ergonomic result to a user performing the task.
Example 41 may include the at least one computer readable storage medium of Example 31, comprising further instructions, which when executed by the computing system, cause the computing system to continuously monitor the smart workspace for movement or changes by the user(s), update an activity history database with information about the task and the one or more user interfaces used, and if needed, update user profiles.
Example 42 may include the at least one computer readable storage medium of Example 41, wherein if the one or more user interfaces need to be changed based on movements or changes, the instructions, when executed, further cause the computing system to perform the task modeling to select one or more replacement user interfaces.
Example 43 may include the apparatus of Example 11, wherein the logic coupled to the substrate further to continuously monitor the smart workspace for movement or changes by the user(s), update an activity history database with information about the task and the one or more user interfaces used, and if needed, update user profiles.
Example 44 may include the apparatus of Example 43, wherein if the one or more user interfaces need to be changed based on movements or changes, the logic coupled to the substrate further to perform the task modeling to select one or more replacement user interfaces.
Example 45 may include the method of Example 21, further comprising continuously monitoring the smart workspace for movement or changes by the user(s), updating an activity history database with information about the task and the one or more user interfaces used, and if needed, updating user profiles.
Example 46 may include the method of Example 45, wherein if the one or more user interfaces need to be changed based on movements or changes, the method further comprising performing the task modeling to select one or more replacement user interfaces.
Example 47 may include the ergonomic control system of Example 1, comprising further instructions, which when executed by the computing system, cause the computing system to continuously monitor the smart workspace for movement or changes by the user(s), update an activity history database with information about the task and the one or more user interfaces used, and if needed, update user profiles.
Example 48 may include the ergonomic control system of Example 47, wherein if one or more user interfaces need to be changed based on movements or changes, the instructions, when executed, further cause the computing system to perform the task modeling to select one or more replacement user interfaces.
Example 49 may include at least one computer readable medium comprising a set of instructions, which when executed by a computing system, cause the computing system to perform the method of any one of Examples 21 to 30 and 45 to 46.
Example 50 may include an apparatus comprising means for performing the method of any one of Examples 21 to 30 and 45 to 46.
Example 51 may include the ergonomic control system of Example 1, wherein the one or more interfaces include one or more inputs or outputs.
Example 52 may include the ergonomic control system of Example 51, wherein the one or more inputs or outputs include one or more projected inputs or outputs.
Example 53 may include the apparatus of Example 11, wherein the one or more interfaces include one or more inputs or outputs.
Example 54 may include the apparatus of Example 53, wherein the one or more inputs or outputs include one or more projected inputs or outputs.
Example 55 may include the method of Example 21, wherein the one or more interfaces include one or more inputs or outputs.
Example 56 may include the method of Example 55, wherein the one or more inputs or outputs include one or more projected inputs or outputs.
Example 57 may include the at least one computer readable storage medium of Example 31, wherein the one or more interfaces include one or more inputs or outputs.
Example 58 may include the at least one computer readable storage medium of Example 57, wherein the one or more inputs or outputs include one or more projected inputs or outputs.
Embodiments are applicable for use with all types of semiconductor integrated circuit (“IC”) chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe Example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.
The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
As used in this application and in the claims, a list of items joined by the term “one or more of” may mean any combination of the listed terms. For example, the phrases “one or more of A, B or C” may mean A; B; C; A and B; A and C; B and C; or A, B and C.
Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular Examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.