Semantic segmentation is used to determine masks for images, the masks representing the positions and shapes of different objects and a background within an image. Masks may then be used to analyze or modify an image, such as by removing, replacing, or changing content associated with a particular mask without interfering with other objects in the image. Conventional methods for performing semantic segmentation are resource-intensive, requiring significant computational resources, time, and data.
The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items or features.
While implementations are described in this disclosure by way of example, those skilled in the art will recognize that the implementations are not limited to the examples or figures described. It should be understood that the figures and detailed description thereto are not intended to limit implementations to the particular form disclosed but, on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope as defined by the appended claims. The headings used in this disclosure are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean “including, but not limited to”.
Semantic segmentation is a process by which an image is analyzed to determine masks (e.g., sets of pixels) that correspond to the locations and shapes of objects and a background of the image. The determined masks may then be used for a variety of image analyses or image editing operations. For example, the pixels of an image that correspond to a particular mask may be removed, modified, or replaced with other pixels, and so forth, without interfering with the pixels that correspond to other masks and represent other objects. Example applications of such a process include modifying an image of a human by adding, replacing, or changing a characteristic of clothing, hair, skin, or a background while retaining the human in the image, or modifying an image of a room by adding, replacing, or changing a characteristic of furniture, decorative objects, walls, floors, or ceilings while retaining other objects in the image.
Some conventional methods for performing semantic segmentation use machine learning systems, such as neural networks and computer vision techniques, to analyze the pixels of an image to determine semantic relationships. For example, a group of adjacent pixels having the same or a similar color may be determined to be part of the same object, while an abrupt change in color between adjacent pixels may represent the edge of an object. Use of machine learning systems in this manner requires a significant number of annotated training images for each type of object to be identified. Additionally, these techniques may be subject to inaccuracy when used to analyze complex or cluttered images, or images in which different objects have similar colors or other similar characteristics.
Described in this disclosure are techniques for determining masks that comprise sets of pixels within an image that are associated with the same semantic segment. These techniques use fewer computational resources than conventional methods and do not require supervised training or use of annotated training data. A first image is processed using a Generative Adversarial Network (GAN) or another type of machine learning system, such as a neural network, deep learning network, convolutional network, transformer network, and so forth, to generate a set of second images. The set of images generated by the machine learning system may then be analyzed to determine changes in color or other visual characteristics across the set of images. Sets of pixels that change in a similar manner across the images may be determined to be part of the same semantic segment and included in the same mask, while pixels that change differently relative to one another may be part of different segments that are included in different masks. Mask data is then generated that enables a set of pixels that correspond to a particular object or background (e.g., a mask) to be removed, replaced, or modified without interfering with other sets of pixels in the image that are associated with other objects.
Conceptually, the first image may have a set of structural (e.g., semantic) characteristics, such as the locations and shapes of objects and backgrounds depicted in the first image. The first image may also have a set of style characteristics, such as the color or other visual characteristics of pixels (e.g., luminance, brightness, chrominance). The GAN or other machine learning system may conceptually determine a set of layers based on the first image, each layer corresponding to a structural characteristic or a style characteristic of the first image. For example, a GAN may be provided with a layer value that indicates at least a portion of the style characteristics and does not indicate the structural characteristics to cause the GAN to generate images that retain the structural characteristics of the first image, but modify one or more of the style characteristics of the first image. Other types of machine learning systems may be provided with other types of parameters to cause generated images to retain the structural characteristics of the first image while modifying the style characteristics of the image. For example, each alternate image generated by a GAN or other type of system may include the same objects and background at the same positions within the alternate image as depicted in the first image, but the colors or other characteristics of the pixels in each alternate image may differ from those of the first image.
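For illustration, the style-mixing idea may be sketched as follows, assuming a StyleGAN-like generator exposed as two callables: mapping, which is assumed to return one latent code per synthesis layer, and synthesis, which renders an image from those codes, along with per-layer codes w_source recovered for the first image (for example, by GAN inversion). These names and interfaces are assumptions for illustration rather than a specific library API.

    import numpy as np

    def style_mixed_images(mapping, synthesis, w_source, num_images=50,
                           style_layers=slice(8, None), rng=None):
        """Generate images that keep the structure encoded by w_source while
        randomizing only the later (style) layers, so colors and textures vary."""
        rng = np.random.default_rng(rng)
        images = []
        for _ in range(num_images):
            z = rng.standard_normal(512)          # random latent seed vector
            w_random = mapping(z)                 # assumed shape: (num_layers, 512)
            w_mixed = np.array(w_source, copy=True)
            # Early layers (structure) are kept from the input image; later layers
            # (style) are taken from the random sample.
            w_mixed[style_layers] = w_random[style_layers]
            images.append(synthesis(w_mixed))
        return images

The split between structural and style layers (here, the first eight layers versus the rest) is itself an illustrative choice and would depend on the particular generator used.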
Using this style-mixing process, the GAN, or another type of machine learning system, may generate any number of images. In one implementation, the machine learning system may be used to generate 50 images, each image having generally the same semantic structure as the first image, but differing style characteristics, such as pixel-wise color correlation across different images. Based on the images generated by the machine learning system, a tensor may be determined, such as by concatenating the color value, or another value for another type of pixel characteristic, at each pixel location. As such, the tensor may, for at least a subset of the pixels in each of the images generated by the machine learning system, associate the location of the pixel with a corresponding color value. A clustering algorithm, such as k-means clustering, may then be used to determine sets of pixels for which changes in color (or other pixel characteristics) occur similarly across the set of images. For example, a first set of pixels associated with a first change in color value across the set of images may represent a first segment, while a second set of pixels associated with a second change in color value across the set of images may represent a second segment. Pixels within the same set may change color similarly across different images, while pixels in different sets may change differently from one another across different images. For example, pixels within the same set may be associated with changes in a color value or other characteristic value that are within a threshold range of the changes associated with other pixels in the set, while pixels in different sets may be associated with changes in values that are outside of the threshold range relative to one another.
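A minimal sketch of this tensor-and-clustering step is shown below, assuming the generated images are available as same-sized (H, W, 3) arrays and using k-means from scikit-learn; the helper name and the choice of k are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    def masks_from_generated_images(images, k=6):
        """Cluster pixels by how their colors change across the generated images."""
        stack = np.stack(images)                    # (N, H, W, 3)
        n, h, w, c = stack.shape
        # Concatenate each pixel's color across all N images into one feature vector,
        # so pixels that change together end up close in feature space.
        features = stack.transpose(1, 2, 0, 3).reshape(h * w, n * c)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
        return labels.reshape(h, w)                 # one integer mask label per pixel

Each integer label in the returned array corresponds to one candidate mask, such as an object or a portion of the background.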
In some cases, the machine learning system may be used to generate a first number of images, and a determination may be made regarding the differentiation between regions of pixels across the images. For example, if at least a threshold number of masks are not determined using the techniques described previously, the machine learning system may be used to generate a second number of images that is greater than the first number, and the process may be repeated. However, if masks representing segments within an image can be determined using a smaller number of images generated by the machine learning system, this may conserve time and computational resources.
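One way this escalation could be expressed is sketched below, reusing the masks_from_generated_images helper sketched above; generate_images is an assumed wrapper around the machine learning system, and the batch sizes and thresholds are illustrative values rather than prescribed ones.

    import numpy as np

    def segment_with_escalation(generate_images, image, k=6, min_masks=3,
                                min_pixels=100, counts=(10, 25, 50)):
        for num_images in counts:
            images = generate_images(image, num_images)
            mask = masks_from_generated_images(images, k=k)
            # Count clusters that are populated enough to be treated as distinct masks.
            sizes = np.bincount(mask.ravel(), minlength=k)
            if np.count_nonzero(sizes >= min_pixels) >= min_masks:
                return mask        # sufficient differentiation; stop early to save compute
        return mask                # otherwise return the result from the largest batch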
In some cases, multiple machine learning systems may be used to generate images based on an input image. For example, if the accuracy of one or more GANs is not known, use of multiple GANs that are configured to generate images using the same or a similar seed value may result in a homogenized set of images that reduces the effect of errors or inaccuracies associated with one or more of the GANs.
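Pooling the outputs of several generators might be sketched as follows, assuming each generator is wrapped in a callable with the signature generate(image, seed, num_images); the signature is an assumption for illustration.

    def pooled_generated_images(generators, image, seed=0, per_generator=20):
        # Each generator contributes images produced from the same seed, so the
        # idiosyncrasies of any single model are diluted in the pooled set.
        images = []
        for generate in generators:
            images.extend(generate(image, seed=seed, num_images=per_generator))
        return images    # downstream clustering treats the pooled set uniformly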
In some implementations, the techniques described previously may be used to determine segments based on a portion of an image, rather than an entire image. For example, a complex scene such as an image of a room within a structure may include multiple furniture objects having similar colors or other characteristics. In some cases, a machine learning system may modify the colors or other characteristics of these objects in a similar manner when generating alternate images. To prevent different objects from being clustered within the same semantic segment, an object recognition system may be used to determine objects that are present within the first image. A region that includes an object, such as a bounding box, may be determined, and the machine learning system may be used to generate alternate images based on the determined region. Changes in characteristics of pixels across the generated images may then be used to identify a set of pixels in the region of the image that corresponds to the object and one or more sets of pixels that do not correspond to the object (e.g., a background or other object). This process may be repeated to determine multiple regions in the image that include an object, and respective mask data for each region. Individual identification of objects in this manner may enable masks to be generated that allow for removal, replacement, or modification of individual objects within the image.
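The region-by-region flow might be sketched as follows, assuming detect_objects returns bounding boxes as (x0, y0, x1, y1) tuples and reusing the helpers sketched earlier; the names are illustrative stand-ins rather than a particular object detection API.

    def masks_per_region(image, detect_objects, generate_images, k=3, num_images=25):
        region_masks = {}
        for index, (x0, y0, x1, y1) in enumerate(detect_objects(image)):
            crop = image[y0:y1, x0:x1]                       # bounding box for one object
            generated = generate_images(crop, num_images)    # alternate images of the region only
            region_masks[index] = {
                "box": (x0, y0, x1, y1),
                "mask": masks_from_generated_images(generated, k=k),
            }
        return region_masks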
Implementations described herein may therefore enable mask data to be determined for an image using fewer computational resources than conventional methods and without requiring supervised training or use of annotated training data. The techniques described herein may be used with any type of GAN or other machine learning system that is configured to generate images based on an input image, without requiring training or additional training data. For example, the changes in pixels across multiple images that are used to determine a mask may be determined using the same techniques, independent of the type of machine learning system that was used to generate the images.
In some implementations, the input image 108 may be received from a separate computing device. For example, a user accessing the system may provide the input image 108 for generation of mask data 102, such that the input image 108 may be analyzed or edited using the mask data 102. In other implementations, the input image 108 may be stored in association with the system (e.g., in a computing device or data storage accessible to the image generation module 110), and may be selected based on user input. In still other implementations, the input image 108 may be stored in association with the system or with a separate computing device, and the system may be configured to automatically access the input image 108 for generation of mask data 102. For example, the input image 108 may include an image depicting one or more items available for purchase using a website, and mask data 102 may be generated to enable the input image 108 to be modified by users of the website.
The example input image 108 shown in
The image generation module 110 may be configured to determine a set of generated images 112 based on the input image 108 by modifying various characteristics of the input image 108 in a random or pseudo-random manner. For example, a seed value 114 provided to the image generation module 110 may at least partially determine the characteristics of the input image 108 that are modified to determine the generated images 112. In some implementations, the image generation module 110 may determine layers associated with the input image 108, each layer representing a particular structural characteristic or style characteristic of the input image 108. For example, a GAN may be configured to determine the layers associated with an input image 108 using a mapping network, while a synthesis network of the GAN is used to determine generated images 112 by modifying one or more layers based in part on the seed value 114. In some implementations, the image generation module 110 may be provided with image generation parameters 116 that cause the generated images 112 determined using the image generation module 110 to retain the structural characteristics of the input image 108 while modifying one or more style characteristics of the input image 108. For example, when using a GAN to determine the generated images 112, an image generation parameter 116 may include a layer value that indicates the style characteristics of the input image 108 and does not indicate the structural characteristics.
An image analysis module 118 may determine characteristics data 120 based on the generated images 112. In some implementations, the characteristics data 120 may include a tensor or another set of values that is determined by concatenating the pixel characteristics of the generated images 112 at pixel locations for at least a portion of the pixels in the generated images 112. For example, the characteristics data 120 may associate a pixel identifier 122(1) for a first pixel of the generated images 112 with a set of color values 124, each color value 124 indicating the color of the pixel (e.g., an RGB or LAB color space value). The pixel identifier 122(1) may include any type of data that may be used to differentiate a particular pixel from other pixels in the generated images 112. In some implementations, the pixel identifier 122(1) may indicate the location of a pixel within the generated images 112, such as a coordinate value indicating the horizontal and vertical position of the pixel. The first pixel identifier 122(1) may be associated with a first color value 124(1) representing the color of a first pixel in a first generated image 112, a second color value 124(2) representing the color of the first pixel in a second generated image 112, and any number of additional color values 124(X), each color value 124 representing the color of the first pixel in a respective generated image 112. Similarly,
A clustering module 126 may determine mask data 102 based on the characteristics data 120. In some implementations, the characteristics data 120 may include a tensor, and the clustering module 126 may use k-means clustering across the pixels of the generated images 112. The clustering module 126 may determine sets of pixels where, within the set of pixels, the pixels change color similarly across the generated images 112, while compared to other pixels outside of the set, the pixels change color differently. As such, a first set of pixels that changes color similarly across the generated images 112 may be associated with a first mask that represents a first object 104 or background 106, while a second set of pixels that changes color similarly across the generated images 112 may be associated with a second mask that represents a second object 104 or background.
Mask data 102 may be stored in association with the input image 108 and used in subsequent image analysis or image editing processes. For example, mask data 102 may be used to determine the locations of objects 104 and a background 106 within the input image 108, which may facilitate object recognition. As another example, mask data 102 may be used to remove, replace, or modify portions of the input image 108, such as by removing or replacing a particular object 104 without affecting pixels associated with other objects 104 depicted in the input image 108.
At 208, the first image 206(1) may be processed using an object recognition system that may determine regions 202 of the first image 206(1) that include objects 104. For example, an object recognition system may be trained to identify the presence of various types of objects within images. The object recognition system may determine specific regions of the first image 206(1), such as bounding boxes, that include objects 104 determined using the object recognition system. For example,
At 210, a region 202 of the first image 206(1) may be provided to a machine learning system as an input image 108, and generated images 112 may be received from the machine learning system. For example,
At 212, characteristics data 120 may be determined based on the generated images 112. The characteristics data 120 may represent a pixel characteristic for at least a subset of the pixels in the generated images 112. For example, as described with regard to
At 214, mask data 102(1) may be determined that indicates sets of pixels for which the changes in pixel characteristics across the generated images 112 are within a threshold range of one another. For example, k-means clustering or another type of clustering algorithm may be used to determine sets of pixels where, within the set of pixels, the pixels change in color or another characteristic similarly across the generated images 112, while compared to other pixels outside of the set, the pixels change differently. Continuing the example, a first set of pixels that changes color similarly across the generated images 112 may be associated with a first mask that represents the object 104(3) depicted in the input image 108. A second set of pixels that changes color similarly across the generated images 112 may represent a portion of the background 106 depicted in the input image 108. Mask data 102(1) that represents any number of sets of pixels associated with similar changes in color or another characteristic may be determined using a clustering module 126.
As shown in
In some implementations, additional processing, such as a foreground identification process, may be performed based on the mask data 102 for the first image 206(1). For example, determination of mask data 102 alone may not indicate the pixels within an image that are likely to be of interest to a user. A foreground identification process may be used to determine which sets of pixels indicated in the mask data 102 correspond to objects 104, and which sets of pixels correspond to a background 106. In some implementations, a corner minority approach may be used to determine foreground and background pixels. For example, for a given region 202, the corner minority approach may assume that an object 104 is located at the approximate center of the region 202, while the corners of the region 202 primarily include background pixels. Sets of pixels indicated in the mask data 102 having pixel characteristics that correspond to the pixels located in the corners of the region 202, within a threshold, may be classified as segments associated with the background 106, while other sets of pixels indicated in the mask data 102 may be classified as segments associated with a foreground object 104. In some cases, other characteristics such as a size or portion of a frame occupied by a set of pixels, color diversity of pixels, the presence or absence of particular colors, and so forth may be used to determine foreground objects 104. In other implementations, a saliency approach may be used to determine foreground pixels, in which a saliency map is used to approximate a probability for each pixel to belong to a foreground in a region 202. A pre-defined Gaussian heat map peaked at the center of an image may be used as the saliency map. Where Y is a given mask, and given a predefined threshold θ_saliency and cluster index m ∈ {1, . . . , k}, foreground clusters may be identified by examining the average saliency for all pixels within the cluster m. Specifically, cluster m is a foreground cluster if Equation 1, below, is true.
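Based on this description, Equation 1 may take a form such as the following, where Y_m denotes the set of pixels in Y assigned to cluster m and S(p) denotes the saliency map value at pixel p; this form is inferred from the surrounding description rather than quoted verbatim:

    (1 / |Y_m|) Σ_{p ∈ Y_m} S(p) > θ_saliency        (Equation 1)

That is, cluster m is treated as a foreground cluster when the average saliency of its pixels exceeds the predefined threshold θ_saliency.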
At 218, input may be received indicating a modification to a region 202 of the first image 206(1). For example, one possible use of the mask data 102 may include permitting portions of the first image 206(1) that corresponds to a particular object 104 or background 106 to be removed, replaced, or modified, such as by changing a color or pattern associated with an object 104 or background 106, removing an object 104, or replacing an object 104 or background 106 with an alternate object 104 or background 106, without affecting pixels associated with other objects 104. Continuing the example,
At 222, based on the input and the mask data 102 for the region 202, one or more pixels associated with the region 202 may be modified. For example, a second image 206(2) may be generated that replaces the third object 104(3) with an alternate object 104, without affecting the pixels associated with other objects 104 depicted in the image. As a result, the mask data 102 may allow objects 104 to be selectively removed, replaced, or modified, such as when a user is accessing a website associated with the purchase or lease of objects 104.
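A minimal sketch of this modification step is shown below, assuming the mask data is available as a per-pixel label array and that replacement is an image of the same size containing the alternate content; both assumptions are for illustration.

    import numpy as np

    def replace_segment(image, mask_labels, target_label, replacement):
        """Replace only the pixels belonging to one mask, leaving all others untouched."""
        edited = image.copy()
        selected = mask_labels == target_label       # boolean mask for the chosen segment
        edited[selected] = replacement[selected]     # e.g., an alternate object or texture
        return edited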
At 304, at least a portion of the first image 206(1) may be provided to one or more machine learning systems. As described previously, in some implementations, the machine learning system(s) may include a GAN. In some cases, multiple machine learning systems may be used, while in other cases, a single machine learning system may be used. As described with regard to
At 306, a set of second images (e.g., generated images 112) may be received from the machine learning system(s), each second image having the first set of characteristics and a respective third set of characteristics that include visual characteristics of the pixels in the second images. The respective third sets of characteristics may differ from the second set of characteristics associated with the first image 206(1). For example, a generated image 112 determined by the machine learning system(s) may retain the shapes and locations of the objects 104 depicted in the first image 206(1), but may modify the color or other visual characteristics of one or more pixels that are presented in the generated images 112 relative to the first image 206(1).
At 308, based on the set of second images, at least a first set of pixels associated with a first change and a second set of pixels associated with a second change in the respective third sets of characteristics may be determined. As described previously, sets of pixels that change in a similar manner across the generated images 112 may be determined to be part of the same semantic segment, while pixels that change differently relative to one another may be part of different segments. In some implementations, characteristics data 120, such as a tensor or other data structure, may be determined by concatenating the pixel characteristics of the generated images 112 at pixel locations for at least a portion of the pixels in the generated images 112. For example, the characteristics data 120 may associate a pixel identifier 122 for at least a subset of pixels in the generated images 112 with corresponding color values 124, or other values indicative of a pixel characteristic, for at least a subset of the generated images 112. K-means clustering or another type of clustering algorithm may be used to determine sets of pixels where, within the set of pixels, the pixels change color similarly across the generated images 112, while compared to other pixels outside of the set, the pixels change color differently.
At 310, first mask data 102 may be generated based on the first set of pixels and second mask data 102 may be generated based on the second set of pixels. The mask data 102 may associate mask identifiers 128 that represent particular masks, semantic segments, objects 104, or backgrounds 106, with corresponding sets of pixel identifiers 130 that represent the pixels included in a semantic segment of the first image 206(1). The mask data 102 may be used to modify pixels associated with a particular object 104 or background 106 without modifying pixels associated with different objects 104.
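One possible layout for mask data that associates mask identifiers with sets of pixel identifiers is sketched below, using (row, column) coordinates as pixel identifiers; the layout is illustrative rather than a required schema.

    import numpy as np

    def mask_data_from_labels(mask_labels):
        """Group pixel identifiers under the mask identifier of the segment they belong to."""
        mask_data = {}
        for mask_id in np.unique(mask_labels):
            rows, cols = np.nonzero(mask_labels == mask_id)
            mask_data[int(mask_id)] = list(zip(rows.tolist(), cols.tolist()))
        return mask_data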
At 312, based on input to modify the second set of pixels, a third image may be generated that includes the first set of pixels and a third set of pixels at the location of the second set of pixels. For example, as described with regard to
One or more power supplies 404 may be configured to provide electrical power suitable for operating the components of the computing device 402. In some implementations, the power supply 404 may include a rechargeable battery, fuel cell, photovoltaic cell, power conditioning circuitry, and so forth.
The computing device 402 may include one or more hardware processor(s) 406 (processors) configured to execute one or more stored instructions. The processor(s) 406 may include one or more cores. One or more clock(s) 408 may provide information indicative of date, time, ticks, and so forth. For example, the processor(s) 406 may use data from the clock 408 to generate a timestamp, trigger a preprogrammed action, and so forth.
The computing device 402 may include one or more communication interfaces 410, such as input/output (I/O) interfaces 412, network interfaces 414, and so forth. The communication interfaces 410 may enable the computing device 402, or components of the computing device 402, to communicate with other computing devices 402 or components of the other computing devices 402. The I/O interfaces 412 may include interfaces such as Inter-Integrated Circuit (I2C), Serial Peripheral Interface bus (SPI), Universal Serial Bus (USB) as promulgated by the USB Implementers Forum, RS-232, and so forth.
The I/O interface(s) 412 may couple to one or more I/O devices 416. The I/O devices 416 may include any manner of input devices or output devices associated with the computing device 402. For example, I/O devices 416 may include touch sensors, displays, touch sensors integrated with displays (e.g., touchscreen displays), keyboards, mouse devices, microphones, image sensors, cameras, scanners, speakers or other types of audio output devices, haptic devices, printers, and so forth. In some implementations, the I/O devices 416 may be physically incorporated with the computing device 402. In other implementations, I/O devices 416 may be externally placed.
The network interfaces 414 may be configured to provide communications between the computing device 402 and other devices, such as the I/O devices 416, routers, access points, and so forth. The network interfaces 414 may include devices configured to couple to one or more networks including local area networks (LANs), wireless LANs (WLANs), wide area networks (WANs), wireless WANs, and so forth. For example, the network interfaces 414 may include devices compatible with Ethernet, Wi-Fi, Bluetooth, ZigBee, Z-Wave, 5G, LTE, and so forth.
The computing device 402 may include one or more buses or other internal communications hardware or software that allows for the transfer of data between the various modules and components of the computing device 402.
As shown in
The memory 418 may include one or more operating system (OS) modules 420. The OS module 420 may be configured to manage hardware resource devices such as the I/O interfaces 412, the network interfaces 414, the I/O devices 416, and to provide various services to applications or modules executing on the processors 406. The OS module 420 may implement a variant of the FreeBSD operating system as promulgated by the FreeBSD Project; UNIX or a UNIX-like operating system; a variation of the Linux operating system as promulgated by Linus Torvalds; the Windows operating system from Microsoft Corporation of Redmond, Washington, USA; or other operating systems.
One or more data stores 422 and one or more of the following modules may also be associated with the memory 418. The modules may be executed as foreground applications, background tasks, daemons, and so forth. The data store(s) 422 may use a flat file, database, linked list, tree, executable code, script, or other data structure to store information. In some implementations, the data store(s) 422 or a portion of the data store(s) 422 may be distributed across one or more other devices including other computing devices 402, network attached storage devices, and so forth.
A communication module 424 may be configured to establish communications with one or more other computing devices 402. Communications may be authenticated, encrypted, and so forth.
The memory 418 may additionally store an image determination module 426. In some implementations, the image determination module 426 may receive images from external sources, such as images provided by users or administrators for analysis and generation of mask data 102. In other implementations, the image determination module 426 may access images stored in the data store 422, or obtain images from other computing devices or data storage, for generation of mask data 102 associated with the accessed images. For example, the image determination module 426 may be configured to automatically generate mask data 102 for images associated with a website or other collection of interfaces.
The memory 418 may also store an object recognition module 428. The object recognition module 428 may be trained to determine regions 202 of an image that include one or more types of objects 104. For example, the object recognition module 428 may include one or more image recognition systems, object detectors, zero-shot object detectors, and so forth. Use of an object recognition module 428 to determine regions 202 of an image that include objects 104 may be useful for complex or cluttered images where a foreground or object 104 of interest may not be defined by a type of object or its appearance and alignment. Additionally, use of an object recognition module 428 may prevent classification of multiple objects 104 within the same semantic segment, such as if two objects 104 in an image have a similar color or other visual characteristic.
The memory 418 may store the image generation module 110. The image generation module 110 may be configured to determine a set of generated images 112 based on an input image 108 by modifying various characteristics of the input image 108 in a random or pseudo-random manner. In some implementations, one or more of a seed value 114 or an image generation parameter 116, such as a layer value, may be used to at least partially control the characteristics of the input image 108 that are modified and the type of modifications that are used to determine the generated images 112. In some implementations, the image generation module 110 may include a GAN or other type of machine learning system, such as a neural network, deep learning network, convolutional network, transformer network, and so forth. Additionally, in some implementations, multiple machine learning systems may be used to determine generated images 112.
The memory 418 may additionally store the image analysis module 118. The image analysis module 118 may determine characteristics data 120 based on a set of generated images 112. Characteristics data 120 may associate identifiers, such as pixel locations, for at least a subset of the pixels in the generated images 112, with corresponding values indicative of the values of a pixel characteristic across at least a subset of the generated images 112, such as a color value 124. In some implementations, the characteristics data 120 may include a tensor.
The memory 418 may also store the clustering module 126. The clustering module 126 may determine mask data 102 based on the characteristics data 120, or in some implementations, based on the generated images 112. In some implementations, the clustering module 126 may use k-means clustering across the pixels of the generated images 112 to determine sets of pixels where, within a set, the pixels change in color or another characteristic similarly across the generated images 112, while compared to other pixels outside of the set, the pixels change differently. The mask data may associate a mask identifier 128 that represents a particular mask, region 202, or object 104 with a corresponding set of pixels that may represent an object 104 or background 106 within the input image 108.
The memory 418 may store an interface module 430. The interface module 430 may generate user interfaces that include images or other data for presentation on one or more computing devices 402. For example, the interface module 430 may present an image for which mask data 102 was generated and receive user input 220 indicating one or more modifications to the image. Based on the mask data 102, the interface module 430 may determine an alternate image based on the modifications indicated in the user input 220, such as by removing, replacing, or modifying one or more objects 104 or a background 106 associated with the presented image.
Other modules 432 may also be present in the memory 418. For example, other modules 432 may include training modules to train the object recognition module 428, image generation module 110, clustering module 126, and so forth. Other modules 432 may include permission or authorization modules for modifying data associated with the computing device 402, such as threshold values, configurations or settings, training data, and so forth. Other modules 432 may also include encryption modules to encrypt and decrypt communications between computing devices 402, authentication modules to authenticate communications sent or received by computing devices 402, and so forth. Other modules 432 may additionally include modules for performing foreground identification processes to determine, based on mask data 102, the pixels within an image that are likely to be associated with foreground objects 104 or backgrounds within the image.
Other data 434 within the data store(s) 422 may include configurations, settings, preferences, and default or threshold values associated with computing devices 402, training data associated with machine learning modules, interface data for generation of user interfaces, and so forth. Other data 434 may also include encryption keys and schema, access credentials, and so forth.
The processes discussed in this disclosure may be implemented in hardware, software, or a combination thereof. In the context of software, the described operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more hardware processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. Those having ordinary skill in the art will readily recognize that certain steps or operations illustrated in the figures above may be eliminated, combined, or performed in an alternate order. Any steps or operations may be performed serially or in parallel. Furthermore, the order in which the operations are described is not intended to be construed as a limitation.
Embodiments may be provided as a software program or computer program product including a non-transitory computer-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform processes or methods described in this disclosure. The computer-readable storage medium may be one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, and so forth. For example, the computer-readable storage media may include, but are not limited to, hard drives, optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. Further, embodiments may also be provided as a computer program product including a transitory machine-readable signal (in compressed or uncompressed form). Examples of transitory machine-readable signals, whether modulated using a carrier or unmodulated, include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, including signals transferred by one or more networks. For example, the transitory machine-readable signal may comprise transmission of software by the Internet.
Separate instances of these programs can be executed on or distributed across any number of separate computer systems. Although certain steps have been described as being performed by certain devices, software programs, processes, or entities, this need not be the case, and a variety of alternative implementations will be understood by those having ordinary skill in the art.
Additionally, those having ordinary skill in the art will readily recognize that the techniques described above can be utilized in a variety of devices, environments, and situations. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.