With recent advancements in computer architecture, it has become increasingly difficult to compare the performance of central processing units (CPUs) by reviewing their specifications alone. Various benchmarking tests have been developed that allow different CPUs to be compared with respect to one another. The terms benchmarks, benchmarking, benchmark testing, or the like refer to tools designed to measure the performance of CPUs of electronic devices, such as mobile phones, mobile computing devices, mobile internet devices, tablet computers, laptop computers, video game consoles, portable media players, peripheral devices, internet capable appliances, and/or smart televisions, among others to provide some examples. These tools can run specific tasks or simulations that stress the CPUs to assess their performance. Oftentimes, benchmarks are designed to mimic a particular type of workload on the CPUs. These benchmarks can be classified as synthetic benchmarks that execute specifically created programs that impose the workloads on the CPUs or application benchmarks that execute real-world programs, such as video games to provide an example, on the CPUs. These application benchmarks provide application benchmark scores that reflect the ability of the CPUs to handle real-world tasks of these real-world programs, such as video editing, three-dimensional graphical rendering, and/or data analysis, among others. These application benchmark scores can be compared for different CPUs to compare the performance of these different CPUs with respect to each other.
Some embodiments of this disclosure describe a method for operating a Central Processing Unit (CPU). The method includes estimating specific timeframes that workloads are to be completed to determine workload completion windows; identifying a process that is performing workloads from among processes that are being executed by the CPU over workload completion windows; provisioning a performance state from among different performance states to execute the process to complete workloads over the workload completion windows; determining whether the workloads being performed by the process are deadline-bound workloads; and executing, based on determining the workloads are the deadline-bound workloads, the workloads in accordance with the performance state.
In some embodiments, the estimating can include identifying the specific timeframes that coincide with swapping between a visible buffer and a working buffer within a frame buffer of a Graphics Processing Unit (GPU).
In some embodiments, the identifying can include identifying candidate processes from among processes that are representative of deadline-bound workloads over workload completion windows; estimating workloads completed by candidate processes over workload completion windows; statistically measuring variances of workloads completed by candidate processes over workload completion windows; and identifying the process as being a candidate process from among candidate processes having a lowest variance from among variances.
In some embodiments, the provisioning can include provisioning the performance state that optimizes power consumption or performance of the CPU while completing workloads over the workload completion windows. In these embodiments, the provisioning can include provisioning the performance state that optimizes power consumption or performance of the CPU while completing workloads over the workload completion windows less a deadline margin.
In some embodiments, the method can further include switching, in response to determining a compute-bound workload, from the performance state to a utilization-based control for the process to perform the compute-bound workload; and executing the compute-bound workload in accordance with the utilization-based control. In these embodiments, the method can further include provisioning the performance state to execute the process to complete workloads over the workload completion windows in response to completing the compute-bound workload.
Some embodiments of this disclosure describe a computing device having a Graphics Processing Unit (GPU) and a Central Processing Unit (CPU). The GPU has a visible buffer to store a visible video frame that is being displayed and a working buffer to store a working video frame that is currently being prepared by the GPU. The GPU can swap the visible buffer and the working buffer at specific timeframes in response to the working video frame being completed. The CPU can estimate specific timeframes that workloads are to be completed to determine workload completion windows, identify a process that is performing workloads from among processes that are being executed by the CPU over workload completion windows, provision a performance state from among different performance states to execute the process to complete workloads over the workload completion windows, determine that workloads being performed by the process are deadline-bound workloads, and execute, based on determining the workloads are the deadline-bound workloads, the workloads in accordance with the performance state.
In some embodiments, the CPU can identify the specific timeframes that coincide with swapping between the visible buffer and the working buffer.
In some embodiments, the CPU can identify candidate processes from among processes that are representative of deadline-bound workloads over workload completion windows; estimate workloads completed by candidate processes over workload completion windows; statistically measure variances of workloads completed by candidate processes over workload completion windows; and identify the process as being a candidate process from among candidate processes having a lowest variance from among variances.
In some embodiments, the CPU can provision the performance state that optimizes power consumption or performance of the CPU while completing workloads over the workload completion windows. In these embodiments, the CPU can provision the performance state that optimizes power consumption or performance of the CPU while completing workloads over the workload completion windows less a deadline margin.
In some embodiments, the CPU can switch, in response to determining a compute-bound workload, from the performance state to a utilization-based control for the process to perform the compute-bound workload; and execute the compute-bound workload in accordance with the utilization-based control. In these embodiments, the CPU can provision the performance state to execute the process to complete workloads over the workload completion windows in response to completing the compute-bound workload.
Some embodiments of this disclosure describe a System on Chip (SoC) having a Graphics Processing Unit (GPU), a memory, and a Central Processing Unit (CPU). The CPU can estimate specific timeframes that workloads are to be completed to determine workload completion windows, identify a process that is performing workloads from among processes that are being executed by the CPU over workload completion windows, provision a performance state from among different performance states to execute the process to complete workloads over the workload completion windows, determine that workloads being performed by the process are deadline-bound workloads, and execute, based on determining the workloads are the deadline-bound workloads, the workloads in accordance with the performance state.
In some embodiments, the CPU can identify the specific timeframes that coincide with swapping between a visible buffer and a working buffer within a frame buffer of the GPU.
In some embodiments, the CPU can identify candidate processes from among processes that are representative of deadline-bound workloads over workload completion windows; estimate workloads completed by candidate processes over workload completion windows; statistically measure variances of workloads completed by candidate processes over workload completion windows; and identify the process as being a candidate process from among candidate processes having a lowest variance from among variances.
In some embodiments, the CPU can provision the performance state that optimizes power consumption or performance of the CPU while completing workloads over the workload completion windows. In these embodiments, the CPU can provision the performance state that optimizes power consumption or performance of the CPU while completing workloads over the workload completion windows less a deadline margin.
In some embodiments, the CPU can switch, in response to determining a compute-bound workload, from the performance state to a utilization-based control for the process to perform the compute-bound workload; and execute the compute-bound workload in accordance with the utilization-based control. In these embodiments, the CPU can provision the performance state to execute the process to complete workloads over the workload completion windows in response to completing the compute-bound workload.
This Summary is provided merely for illustrating some embodiments to provide an understanding of the subject matter described herein. Accordingly, the above-described features are merely examples and should not be construed to narrow the scope or spirit of the subject matter in this disclosure. Other features, aspects, and advantages of this disclosure will become apparent from the following Detailed Description, Figures, and Claims.
The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the disclosure and, together with the description, further serve to explain the principles of the disclosure and enable a person of skill in the relevant art(s) to make and use the disclosure.
The disclosure is described with reference to the accompanying drawings. In the drawings, like reference numbers can indicate identical or functionally similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
Before describing an exemplary processor, such as an exemplary central processing unit (CPU) to provide an example, the workloads of this exemplary processor are first generally described. The workloads refer to the amount of processing, computing, and/or data handling, among others, that is expected to be executed by the exemplary processor. The workloads of the exemplary processor can encompass processes, tasks, operations, demands, threads, or the like that are placed on the resources of the exemplary processor, such as processing power, clock speed, number of cores, and/or cache memory, among others to provide some examples. The workloads of the exemplary processor can vary widely, from simple workloads such as word processing, to complex workloads such as video rendering, scientific simulations, and/or database queries, among others. In some embodiments, the complex workloads can include deadline-bound workloads and/or compute-bound workloads. The deadline-bound workloads refer to workloads that need to be completed within specific timeframes. These workloads are often time-sensitive and can require careful planning and execution to ensure that they are finished on time. Examples of the deadline-bound workloads can include real-time simulation, video rendering, audio synthesis, and/or network packet processing, among others. The compute-bound workloads refer to workloads that are primarily limited by the resources of the exemplary processor. The overall performance of the compute-bound workloads is often constrained by the speed at which the exemplary processor can execute these workloads.
Systems, methods, and apparatuses disclosed herein can operate in different performance states that provide different energy-performance tradeoffs and, in some embodiments, can dynamically switch between these different performance states. These systems, methods, and apparatuses can estimate specific timeframes that workloads are to be completed. These systems, methods, and apparatuses can identify one or more processes that are being executed to perform the workloads. These systems, methods, and apparatuses can dynamically provision one or more performance states from among these different performance states to execute the one or more processes to complete the workloads within the specific timeframes. These systems, methods, and apparatuses can dynamically provision the one or more performance states for the one or more processes that optimize power consumption and/or performance while completing the workloads within the specific timeframes.
As part of the DVFM, the performance controller 112 can estimate specific timeframes that the workloads are to be completed to determine workload completion windows. As to be described in further detail below, these workload completion windows can be used to monitor the workloads being executed by the CPU 108. In some embodiments, the performance controller 112 can estimate one or more target performance requirements for the workloads, such as target frame rate, expressed in frames per second (FPS), target frame time, target load time, target latency, target render resolution, target texture quality, target graphic settings, target utilization, and/or target memory usage, among others. In some embodiments, the one or more target performance requirements can impose implicit constraints on the specific timeframes to complete the workloads. In these embodiments, the performance controller 112 can utilize these implicit constraints on the specific timeframes to complete the workloads to determine the workload completion windows. As discussed above, the GPU 102 can further include the frame buffer 110 including the front buffer, also referred to as the visible buffer, and the back buffer, also referred to as the working buffer. In some embodiments, the swaps between the visible buffer and the working buffer, as described above, can coincide with the specific timeframes that the workloads are to be completed. In these embodiments, the frame buffer 110 can notify the performance controller 112 of these swaps between the visible buffer and the working buffer as described above to determine the workload completion windows. In these embodiments, the performance controller 112 can determine the workload completion windows that have starting points and/or ending points that coincide with the swaps between the visible buffer and the working buffer. For example, a first swap between the visible buffer and the working buffer can represent a starting point for the workload completion windows to begin the workloads and a second swap between the visible buffer and the working buffer can represent an ending point for the workload completion windows to complete the workloads.
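Purely as an illustration, and not as a description of any particular driver interface, the following C sketch shows one way the workload completion windows could be derived from buffer-swap notifications; the type and function names (completion_window_t, on_buffer_swap, and so on) are hypothetical assumptions made for this example.

#include <stdint.h>

typedef struct {
    uint64_t start_ns;  /* timestamp of the swap that opens the window  */
    uint64_t end_ns;    /* timestamp of the swap that closes the window */
} completion_window_t;

static uint64_t last_swap_ns;   /* timestamp of the previous buffer swap   */
static int      have_last_swap; /* nonzero once a first swap has been seen */

/* Hypothetical callback invoked on each swap between the visible buffer and
 * the working buffer. Returns 1 and fills *window when a completed workload
 * completion window (the interval between two consecutive swaps) is
 * available, and 0 on the very first swap. */
int on_buffer_swap(uint64_t swap_timestamp_ns, completion_window_t *window)
{
    int have_window = 0;

    if (have_last_swap) {
        window->start_ns = last_swap_ns;       /* first swap begins the window */
        window->end_ns   = swap_timestamp_ns;  /* second swap ends the window  */
        have_window = 1;
    }
    last_swap_ns   = swap_timestamp_ns;
    have_last_swap = 1;
    return have_window;
}

In this sketch, successive windows share their endpoints, consistent with the example above in which a first swap represents a starting point for the workload completion windows and a second swap represents an ending point.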
After determining the workload completion windows, the performance controller 112 can identify one or more processes, tasks, operations, demands, threads, or the like, simply referred to as one or more processes for convenience, that are being executed by the CPU 108 to perform the workloads. In some embodiments, the CPU 108 can execute multiple processes to perform multiple workloads that are placed on the resources of the computing device 100. In these embodiments, the performance controller 112 can identify the one or more processes that are being executed by the CPU 108 to perform the workloads from among the multiple processes that are being executed by the CPU 108. In some embodiments, the performance controller 112 can analyze the multiple processes to perform the multiple workloads over the workload completion windows to identify one or more candidate processes that are representative of the deadline-bound workloads over the workload completion windows. In these embodiments, the performance controller 112 can estimate the workloads completed by the one or more candidate processes over the workload completion windows in terms of, for example, availability, response time, processing speed, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, power consumption, and/or compression ratio, among others. In some embodiments, the performance controller 112 can statistically measure the workloads completed by the one or more candidate processes over the workload completion windows, in terms of, for example, a mean, a median, a mean square, a root mean square, a variance, and/or a norm, among others, to identify the one or more processes that are being executed by the CPU 108 to perform the workloads. In these embodiments, the performance controller 112 can compare these statistics for the workloads completed by the one or more candidate processes over the workload completion windows with a deadline-bound workloads threshold to identify the one or more processes that are being executed by the CPU 108 to perform the deadline-bound workloads. For example, the deadline-bound workloads can be characterized as having a low variance for the workloads completed over the workload completion windows. In this example, the performance controller 112 can identify those processes from among the one or more candidate processes with the lowest variances, for example, less than approximately five (5) percent, as being the one or more processes that are being executed by the CPU 108 to perform the deadline-bound workloads.
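For illustration only, the following sketch shows one possible way to identify the deadline-bound process by comparing the variability of work completed per workload completion window; the structure process_stats_t, the fixed window count, and the interpretation of the approximately five (5) percent figure above as a mean-normalized (relative) variance threshold are assumptions made for this example rather than a definitive implementation.

#include <stddef.h>

#define MAX_WINDOWS 64

typedef struct {
    int    pid;                 /* identifier of the candidate process      */
    double work[MAX_WINDOWS];   /* work completed in each completion window */
    size_t num_windows;         /* number of completion windows observed    */
} process_stats_t;

/* Relative variance (variance divided by the squared mean) of the work
 * completed per window, so that the result can be compared to a percentage. */
static double relative_variance(const process_stats_t *p)
{
    double mean = 0.0, var = 0.0;
    size_t i;

    if (p->num_windows == 0)
        return 0.0;
    for (i = 0; i < p->num_windows; i++)
        mean += p->work[i];
    mean /= (double)p->num_windows;
    if (mean == 0.0)
        return 0.0;
    for (i = 0; i < p->num_windows; i++) {
        double d = p->work[i] - mean;
        var += d * d;
    }
    var /= (double)p->num_windows;
    return var / (mean * mean);
}

/* Return the pid of the candidate with the lowest relative variance below the
 * illustrative five (5) percent threshold, or -1 if no candidate qualifies. */
int pick_deadline_bound_process(const process_stats_t *candidates, size_t count)
{
    double best_variance = 0.05;
    int    best_pid      = -1;
    size_t i;

    for (i = 0; i < count; i++) {
        double rv = relative_variance(&candidates[i]);
        if (rv < best_variance) {
            best_variance = rv;
            best_pid      = candidates[i].pid;
        }
    }
    return best_pid;
}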
After identifying the one or more processes, the performance controller 112 can dynamically provision the one or more performance states that can be implemented by the CPU 108 to execute the one or more processes to complete the workloads within the specific timeframes. In some embodiments, the performance controller 112 can dynamically provision the one or more performance states having the correct energy-performance tradeoff, as described above, to perform the workloads. In some embodiments, the performance controller 112 can dynamically provision the one or more performance states for the one or more processes that optimize power consumption and/or performance of the CPU 108 while completing the workloads within the specific timeframes. Alternatively, or in addition, the performance controller 112 can dynamically provision the one or more performance states for the one or more processes that optimize power consumption and/or performance of the CPU 108 while completing the workloads within the specific timeframes less a deadline margin, for example, twenty (20) percent. In these embodiments, the deadline margin can allow for fluctuations in completing the workloads while avoiding frame loss.
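As a sketch only, the selection of a performance state against the workload completion window, less the deadline margin, could look like the following; the table of performance states, the capacity and power figures, and the use of the twenty (20) percent margin above are illustrative assumptions rather than values of any particular CPU.

#include <stddef.h>

typedef struct {
    double capacity_per_ms;  /* work units this state can complete per millisecond */
    double power_mw;         /* representative power cost of this state            */
} perf_state_t;

/* Illustrative table ordered from most power-efficient to highest performance. */
static const perf_state_t perf_states[] = {
    { 1.0,  150.0 },
    { 2.0,  400.0 },
    { 3.5, 1000.0 },
};

#define NUM_PERF_STATES (sizeof(perf_states) / sizeof(perf_states[0]))

/* Pick the most power-efficient state whose capacity still completes the
 * estimated work within the window, less the deadline margin; fall back to
 * the highest-performance state if none suffices. */
size_t pick_perf_state(double estimated_work, double window_ms, double deadline_margin)
{
    double budget_ms = window_ms * (1.0 - deadline_margin);
    size_t i;

    for (i = 0; i < NUM_PERF_STATES; i++) {
        if (perf_states[i].capacity_per_ms * budget_ms >= estimated_work)
            return i;
    }
    return NUM_PERF_STATES - 1;
}

For example, a sixteen (16) ms window with a twenty (20) percent deadline margin leaves a budget of approximately 12.8 ms within which the provisioned performance state is to complete the estimated work.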
After dynamically provisioning the one or more performance states, the CPU 108 can implement the one or more performance states to execute the one or more processes to complete the workloads within the specific timeframes. In some embodiments, the performance controller 112 can monitor the one or more processes that are being executed by the CPU 108 to verify that the workloads are the deadline-bound workloads. As described above, the workloads that are placed on the resources of the computing device 100 can be the deadline-bound workloads or the compute-bound workloads. In some embodiments, the performance controller 112 can monitor the workloads being placed on the resources of the computing device 100 to determine whether these workloads are the deadline-bound workloads or the compute-bound workloads. In some embodiments, the performance controller 112 can monitor the CPU-only workloads, also referred to as CPU serialization, to determine whether the workloads that are placed on the resources of the computing device 100 are the compute-bound workloads. In these embodiments, the one or more processes to perform the workloads can be executed by the GPU 102 only, the CPU 108 only, and/or a combination of the GPU 102 and the CPU 108. In these embodiments, the performance controller 112 can compare a percentage of time that the one or more processes are executed by the CPU 108 only to the total workloads to be performed by the computing device 100 to determine whether the workloads being executed by the one or more processes are the compute-bound workloads. In these embodiments, the performance controller 112 can compare the percentage of time to a variable threshold, for example, ninety (90) percent, to determine whether the workloads being executed by the one or more processes are the compute-bound workloads. After determining the workloads to be the compute-bound workloads, the performance controller 112 can switch from the one or more performance states to a utilization-based control to complete the compute-bound workloads. In some embodiments, the utilization-based control involves dynamically provisioning the resources of the computing device 100 based upon their usage. For example, if the usage of the resources of the computing device 100 is high, for example, near one hundred (100) percent utilization, the performance controller 112 can allocate more resources to critical tasks or processes. Otherwise, the performance controller 112 might throttle down certain processes to save power or allocate resources to other tasks. After completing the compute-bound workloads, the performance controller 112 can once again dynamically provision the one or more performance states that can be implemented by the CPU 108 to execute the one or more processes to complete the workloads within the specific timeframes as described above, and can continue to monitor the one or more processes that are being executed by the CPU 108 to verify that the workloads are the deadline-bound workloads. Alternatively, after determining the workloads to be the deadline-bound workloads, the CPU 108 can continue to execute the one or more processes in accordance with the one or more performance states to complete the workloads within the specific timeframes.
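The check that triggers the switch to utilization-based control could be sketched as follows, purely for illustration; the control-mode enumeration, the time measurements, and the interpretation of the ninety (90) percent figure above as a fractional threshold are assumptions made for this example.

typedef enum {
    CONTROL_DEADLINE_BOUND,    /* keep executing at the provisioned performance state */
    CONTROL_UTILIZATION_BASED  /* switch to utilization-based control                 */
} control_mode_t;

/* Compare the fraction of the workload that is executed by the CPU only
 * (CPU serialization) against a variable threshold, for example 0.90, to
 * decide whether the workload is compute-bound. */
control_mode_t select_control_mode(double cpu_only_time_ms, double total_time_ms,
                                   double threshold)
{
    double cpu_only_fraction;

    if (total_time_ms <= 0.0)
        return CONTROL_DEADLINE_BOUND;

    cpu_only_fraction = cpu_only_time_ms / total_time_ms;
    return (cpu_only_fraction >= threshold) ? CONTROL_UTILIZATION_BASED
                                            : CONTROL_DEADLINE_BOUND;
}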
Exemplary Operation of an Exemplary Central Processing Unit (CPU) within the Exemplary Electronic Device
At operation 202, the operational control flow 200 can estimate specific timeframes that the workloads are to be completed to determine workload completion windows. In some embodiments, the operational control flow 200 can estimate one or more target performance requirements for the workloads, such as target frame rate, expressed in frames per second (FPS), target frame time, target load time, target latency, target render resolution, target texture quality, target graphic settings, target utilization, and/or target memory usage, among others. In some embodiments, the one or more target performance requirements can impose implicit constraints on the specific timeframes to complete the workloads. In these embodiments, the operational control flow 200 can utilize these implicit constraints on the specific timeframes to complete the workloads to determine the workload completion windows in a substantially similar manner as described above. For example, the workloads can include video rendering workloads for a video game that generates video frames to be displayed by a display, such as the display 104 to provide an example. In this example, the video game being executed by the operational control flow 200 can specify a target frame rate of sixty (60) FPS to generate the video frames. As such, the operational control flow 200 is to generate one video frame approximately every sixteen (16) milliseconds (ms) to satisfy the target frame rate of sixty (60) FPS. In this example, the operational control flow 200 can use these approximately every sixteen (16) ms timeframes to determine the workload completion windows to monitor the workloads. In some embodiments, the operational control flow 200 can receive notifications of swaps between a visible buffer and a working buffer of a GPU, such as the GPU 102 to provide an example. In these embodiments, the operational control flow 200 can determine the workload completion windows that have starting points and/or ending points that coincide with the swaps between the visible buffer and the working buffer. For example, a first swap between the visible buffer and the working buffer can represent a starting point for the workload completion windows to begin the workloads and a second swap between the visible buffer and the working buffer can represent an ending point for the workload completion windows to complete the workloads.
At operation 204, the operational control flow 200 can identify a process, a task, an operation, a demand, a thread, or the like, simply referred to as a process for convenience, that is being executed by the one or more processors to perform the workloads. In some embodiments, the operational control flow 200 can identify the process that is being executed by the one or more processors to perform the workloads from among multiple processes that are being executed by the one or more processors in a substantially similar manner as described above. From the example above, the operational control flow 200 can identify the process that is being executed by the one or more processors to execute the video rendering workloads for the video game from among multiple processes that are being executed by the one or more processors, such as word processing, web browsing, email clients, spreadsheet software, media players, text editors, programming integrated development environments (IDEs), file compression/decompression tools, security software, operating system utilities, video games, three-dimensional modeling and rendering software, video editing software, machine learning and deep learning, scientific simulations, and/or cryptocurrency mining, among others. In this example, the operational control flow 200 can analyze these multiple processes over the workload completion windows from operation 202 to identify one or more candidate processes that include the video rendering workloads for the video game and other processes, such as three-dimensional modeling and rendering software, video editing software, machine learning and deep learning, scientific simulations, and/or cryptocurrency mining, among others, that are representative of the deadline-bound workloads over the workload completion windows from operation 202. In these embodiments, the operational control flow 200 can estimate the workloads completed by the one or more candidate processes over the workload completion windows from operation 202 in terms of, for example, availability, response time, processing speed, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, power consumption, and/or compression ratio, among others. From the above example, the operational control flow 200 can measure the average work completed by the video rendering workloads over the workload completion windows from operation 202 for the video game and the other processes, such as the three-dimensional modeling and rendering software, the video editing software, the machine learning and deep learning, the scientific simulations, and/or the cryptocurrency mining, among others. In some embodiments, the operational control flow 200 can statistically measure the workloads completed by the one or more candidate processes over the workload completion windows from operation 202, in terms of, for example, a mean, a median, a mean square, a root mean square, a variance, and/or a norm, among others, to identify the one or more processes that are being executed by the one or more processors to perform the workloads. 
From the above example, the operational control flow 200 can measure the variances of the average work completed by the video rendering workloads for the video game and the other processes, such as the three-dimensional modeling and rendering software, the video editing software, the machine learning and deep learning, the scientific simulations, and/or the cryptocurrency mining, among others, over the workload completion windows from operation 202. In this example, the operational control flow 200 can identify the process from among the one or more candidate processes with the lowest variance, for example, less than approximately five (5) percent, as being the process that is being executed by the one or more processors to execute the video rendering workloads for the video game. Typically, in this example, the video rendering workloads for the video game are relatively constant workloads over the workload completion windows from operation 202, having the lowest variance over these workload completion windows when compared to the other processes, which are variable workloads having higher variances over the workload completion windows from operation 202.
At operation 206, the operational control flow 200 dynamically provisions a performance state that can be implemented by the one or more processors to execute the identified process from operation 204 to complete the workloads within the specific timeframes from operation 202. In some embodiments, the operational control flow 200 can dynamically provision the performance state having the correct energy-performance tradeoff, as described above, to perform the workloads. In some embodiments, the operational control flow 200 can dynamically provision the performance state for the process from operation 204 that optimizes power consumption and/or performance of the one or more processors while completing the workloads within the specific timeframes in a substantially similar manner as described above. From the example above, the operational control flow 200 can dynamically provision the performance state for the process from operation 204 that executes the video rendering workloads that optimizes power consumption and/or performance of the one or more processors while completing the video rendering workloads within the specific timeframes from operation 202.
At operation 208, the operational control flow 200 determines whether the workloads being performed by the process from operation 204 are deadline-bound workloads. As described above, the workloads being performed by the process from operation 204 can be the deadline-bound workloads or the compute-bound workloads. In some embodiments, the operational control flow 200 can monitor the workloads being performed by the process from operation 204 to determine whether these workloads are the deadline-bound workloads or the compute-bound workloads in a substantially similar manner as described above. The operational control flow proceeds to operation 210 when the workloads being performed by the process from operation 204 are deadline-bound workloads. Otherwise, the workloads being performed by the process from operation 204 are compute-bound workloads and the operational control flow proceeds to operation 212.
At operation 210, the operational control flow 200 performs the workloads in accordance with the performance state from operation 206. The operational control flow 200 can revert to operation 208 to continue to determine whether these workloads being performed by the process from operation 204 are deadline-bound workloads in a substantially similar manner as described above.
At operation 212, the operational control flow 200 switches to utilization-based control for the process from operation 204 to perform the compute-bound workloads in a substantially similar manner as described above. The operational control flow 200 can thereafter revert to operation 206 to once again provision the performance state that can be implemented by the one or more processors to execute the process from operation 204 to complete the workloads within the specific timeframes from operation 202.
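Tying operations 202 through 212 together, the loop below is a compact sketch of the operational control flow 200, provided only for illustration; the helper functions are stubs included solely so that the sketch is self-contained, and they stand in for the mechanisms described above rather than for any real interface.

#include <stddef.h>

/* Stub helpers so that this sketch compiles on its own; each stands in for a
 * mechanism described above and is not a real API. */
static void   wait_for_completion_window(void)      { /* block until a buffer swap */ }
static int    identify_deadline_bound_pid(void)     { return 1234; /* illustrative */ }
static size_t provision_perf_state(int pid)         { (void)pid; return 1; }
static int    workload_is_deadline_bound(int pid)   { (void)pid; return 1; }
static void   run_at_perf_state(int pid, size_t s)  { (void)pid; (void)s; }
static void   run_with_utilization_control(int pid) { (void)pid; }

/* Operations 202-212 as a loop: estimate the completion window, identify the
 * process, provision a performance state, and then branch on whether the
 * workloads are deadline-bound or compute-bound. */
void performance_control_loop(int iterations)
{
    int i;

    for (i = 0; i < iterations; i++) {
        wait_for_completion_window();               /* operation 202 */
        int pid = identify_deadline_bound_pid();    /* operation 204 */
        size_t state = provision_perf_state(pid);   /* operation 206 */

        if (workload_is_deadline_bound(pid))        /* operation 208 */
            run_at_perf_state(pid, state);          /* operation 210 */
        else
            run_with_utilization_control(pid);      /* operation 212, then back to 206 */
    }
}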
Exemplary Workload Completion Windows that can be Determined by the Exemplary CPU
Exemplary Processes that are being Executed by the Exemplary CPU
Exemplary Performance States that can be Implemented by the Exemplary CPU
Embodiments of the disclosure can be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the disclosure can also be implemented as instructions stored on one or more computer-readable mediums, which can be read and executed by one or more processors. A computer-readable medium can include any mechanism for storing or transmitting information in a form readable by a computer (e.g., computing circuitry). For example, a computer-readable medium can include non-transitory computer-readable mediums such as read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and others. As another example, the computer-readable medium can include transitory computer-readable mediums such as electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Further, firmware, software applications, routines, and instructions have been described as executing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software applications, routines, instructions, etc.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the disclosure as contemplated by the inventor(s), and thus, are not intended to limit the disclosure and the appended claims in any way.
The disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately executed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan considering the teachings and guidance.
The breadth and scope of the disclosure should not be limited by any of the above-described exemplary embodiments but should be defined only in accordance with the following claims and their equivalents.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should only occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, collection of, or access to, certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different personal data types in each country.