With the increasing ability of GANs to generate images that are hardly distinguishable from real photographs, there have been numerous attempts to estimate the latent code for which the network generates an image that looks very close to a given input photo. By manipulating that code in a specific direction, one may alter the appearance of the input photo in a specific way while retaining the original visual features, e.g., adding more hair to a bald person while retaining the person's identity.
This has led to attempts to find an intuitive projection of latent codes into a space in which important visual features are disentangled so that the user can perform edits locally. When the visual features are disentangled, modifications can be made to one visual feature without affecting the other visual features. Unfortunately, there is usually a tradeoff between obtaining an accurate reconstruction and providing excellent disentanglement.
Previous techniques assume that the entire image is represented by a single latent code. However, considering all pixels in the input image imposes a large number of constraints, which can make the code estimation difficult. This leads to less realistic results and/or features that are not adequately disentangled, making editing of the images more difficult.
Introduced here are techniques/technologies that relate to segmented image generation. An image generation system includes one or more generator networks that are trained to generate realistic images. In some embodiments, the input image is divided into multiple regions using a segmentation mask. Each region may be generated separately using a different generator. As no one generator is responsible for generating the entire image, each projection results in a more accurate reconstruction of the portion of the input image it represents. Once the projections have been performed, individual segments of the image can be edited by exploring the latent spaces. This provides more accurate results while also eliminating the risk that an edit made to one portion of the image changes a different portion of the image.
Once the segments have been generated, an output image is constructed by stitching the segments together. A segmentation loss can be used to adjust the segment boundaries to improve image quality. In some embodiments, image editing is performed iteratively, with the output of the differentiable pipeline serving as the new input image. Additionally, in some embodiments, any segments that have not been edited may be taken directly from the corresponding segments of the input image, resulting in pixel-perfect accuracy for those portions of the output image. Once all edits are complete, the resulting generated output image is returned to the user.
Additional features and advantages of exemplary embodiments of the present disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of such exemplary embodiments.
The detailed description is described with reference to the accompanying drawings in which:
Many image editing algorithms these days are framed as latent space optimization of differentiable models, such as pre-trained generative adversarial networks (GANs). For example, the Smart Portrait feature of ADOBE® PHOTOSHOP® enables the face of a person to be edited by first retrieving a most similar face from the latent space of a StyleGAN network and then exploring its neighborhood in the latent space in order to make the desired edits.
Retrieving the exact face of a real person is, however, a hard problem. Embodiments address this problem through segmentation. For example, better results (e.g., a more realistic likeness of the input image) can be achieved on both retrieval and editing if the output image is divided into multiple segments. For example, an input image of a face may be divided into a mouth segment, an eyes segment, a nose segment, etc. A different latent code may then be optimized for each segment. As a result, when an image is reconstructed, each segment can be generated separately, and the final image composed from the generated segments and/or segments of the original input image. This reduces the number of constraints, which enables estimation of latent codes that produce a more accurate segment, as compared to a single latent code for an entire image. This also ensures precise localization, where changes in the latent code of one segment cannot affect other segments of the image since they are separately generated. This helps to retain fidelity of the original image. Additionally, this technique generalizes to more complicated editing tasks using different generators other than StyleGAN, or even a combination of multiple completely different generators.
One or more embodiments include a framework which combines segmentation and latent optimization in order to accomplish a general image editing task defined as optimization. The generative capacity of modern neural networks is high, but not unlimited. In image editing using GANs, optimization typically includes a step called projection, which searches the latent space of the GAN for the face (or other object) closest in appearance to that shown in the input image.
Projection can often find a similar face (or other object), but not an exact match. This results in an output image that, while resembling the input, is still noticeably different from the input to a human observer. There have been a number of attempts to improve this projection process, for example, by tweaking the optimization process (e.g., by initializing multiple times or transforming the space over which the optimization happens). Alternatively, the optimization may be replaced or supplemented with a pre-trained encoder that attempts to project into “better” regions of latent space (e.g., training a separate model to help improve the projection step). Another attempted improvement applies post-processing to repair the identity after the fact; however, this is time and resource intensive. Additionally, attempts have been made to finetune the underlying generative model to improve projection. These techniques are applied in state-of-the-art solutions, which nevertheless still produce problematic results.
As discussed, because the generative capacity of the network is ultimately limited, prior techniques which attempt to generate an entire new image often produce a result that is close but still perceptibly different in ways that, in the case of face editing in particular, change the identity of the subject. This results in the user receiving a final result which shows a different person's face. As a result, a significant amount of post-processing is required to make the result acceptable.
Using segmentation, embodiments improve performance significantly over prior techniques by breaking the problem up into easier-to-solve chunks. Specifically, a single subject of an input image (e.g., a face, or other object) can be segmented and generated using a plurality of generators. As such, the portion of the image corresponding to the subject, and not the entire image, is segmented and then generated. Additionally, these prior optimizations may still be used with various embodiments to further improve the results. Further, by introducing segmentation into the optimization loop, embodiments allow existing image generation differentiable optimization pipelines to produce much better results in terms of final cost, by breaking the problem up in the image domain into smaller segments that can each be solved more easily.
In some embodiments, this is performed by introducing a segmentation layer into the optimization and thus using multiple copies of a generative model (or of different generative models) to solve the problem per region. Specifically, in the face editing scenario, this results in more accurate projection, with superior identity preservation and the ability to cleanly spatially disentangle partial edits. As discussed, by segmenting the object being generated, the search is less constrained. Accordingly, the most accurate projection for a specific segment of the image may result in an overall image that does not closely resemble the input image. For example, while the generated eyes segment may closely match the input image, the rest of the generated face may not resemble the input at all. This allows for the most accurate segment to be used rather than having to attempt to identify the projection that matches the entirety of the image.
In a traditional optimization pipeline of this kind, a generator creates an image, and then a loss function evaluates it. The partial gradients of the loss are then evaluated and backpropagated to the generator's latent vector, iterating until convergence. Unlike traditional optimization pipelines, embodiments insert a segmentation and compositing layer into the pipeline. This layer can combine images from multiple generators (which may or may not be copies of the same model) with different latent codes and combine them into a single image by assigning pixels from different generators to different segments of the subject. Loss is then evaluated and gradients backpropagated to each generator's latent code.
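As a rough illustration of such a segmentation and compositing layer, the following PyTorch-style sketch combines the outputs of several generators into a single image using per-segment weight masks; the tensor shapes and the weighted-sum formulation are illustrative assumptions rather than the exact layer used in any particular embodiment.

```python
import torch

def composite(generated, masks):
    """Combine per-segment generator outputs into one image.

    generated: list of n tensors, each (3, H, W), one image per generator
    masks:     tensor (n, H, W) of per-pixel segment weights that sum to 1
               across the segment dimension at every pixel
    """
    stacked = torch.stack(generated, dim=0)   # (n, 3, H, W)
    weights = masks.unsqueeze(1)              # (n, 1, H, W), broadcast over channels
    return (weights * stacked).sum(dim=0)     # (3, H, W) composited image
```

Because the composite is a differentiable function of every generator's output, the loss gradients discussed below can flow through it to each generator's latent code.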
Segmentation can be fixed, or dynamically computed based on the actual pixel values of all the images to minimize stitching errors. In some embodiments, the segmentation is itself differentiable. This enables the ability to optimize the latent code for improved segmentation results. This also allows part of the image to be completely fixed to facilitate localized edits; in such case, pixels from the original image are preserved explicitly. Accordingly, those portions of the output image perfectly match the input image, with only specific segments of the output image being generated.
Embodiments result in improved image quality, as each generator can more accurately replicate a segment of the subject of an image than the entire image. Additionally, images can be generated more quickly, as the quality improvements do not require the post processing of prior solutions.
Generative Adversarial Networks (GANs) are a type of machine learning technique which learns to generate new data that is similar to the data on which it was trained. For example, a GAN trained on images of cats will generate new images of cats, a GAN trained on images of faces will generate new images of faces, etc. With the increasing ability of GANs to generate images that are hardly distinguishable from real photographs, there have been numerous attempts to estimate the latent code for which the network generates an image that looks very close to a given input photo. By manipulating that code in a specific direction, one may alter the appearance of the input photo in a specific way while retaining the original visual features, e.g., adding more hair to a bald person while retaining the person's identity.
As shown in
The input image may be segmented by segmentation manager 104. In some embodiments, segmentation manager 104 may include a pretrained semantic segmentation model. For example, a semantic segmentation model trained to segment faces may be used on input images of faces. Different semantic segmentation models may be used for segmenting input images of different objects. Additionally, or alternatively, segmentation manager 104 can provide a user interface through which the user can identify one or more segments in the input image (e.g., the user can “paint” all or portions of a segmentation mask on the input image).
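One possible way to obtain such a mask automatically is sketched below; the parsing-model interface and the part-label indices are hypothetical placeholders and would depend on the particular pretrained face-parsing (or other semantic segmentation) network used.

```python
import torch

def segmentation_masks(parsing_model, image, part_labels):
    """Build one binary mask per facial part from a pretrained parsing model.

    parsing_model: any model returning per-pixel class logits of shape (1, C, H, W)
    image:         input tensor of shape (1, 3, H, W)
    part_labels:   e.g. {"mouth": [11, 12, 13], "eyes": [4, 5]} -- these indices
                   are placeholders; real values depend on the model's label set
    """
    with torch.no_grad():
        logits = parsing_model(image)              # (1, C, H, W)
    classes = logits.argmax(dim=1)[0]              # (H, W) per-pixel class labels
    return {name: torch.isin(classes, torch.tensor(ids)).float()
            for name, ids in part_labels.items()}
```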
The input image 102 and the segmentation mask generated by segmentation manager 104 can then be provided to segmented optimization manager 106 at numeral 2. As shown in
The output image 114 is then output by the image generation system 100 at numeral 3. In some embodiments, the output image is returned to the user (e.g., displayed in a user interface of a digital design system, stored in a storage location accessible to the user, etc.). The user can then use the image as desired (e.g., incorporated into another document, etc.). Additionally, as discussed, in some embodiments, the input image 102 is a frame of an input video. In some embodiments, once one frame of the input video has been processed, then the next frame of the input video may be processed. This processing may continue until the entire input video has been processed. Alternatively, the user may specify a certain portion of the input video (e.g., by scene, by timestamps, etc.) to be processed by the image generation system 100. Once the specified portion of the input video has been processed (e.g., once each frame of the portion of the input video has been processed), then processing of the input video ends.
Optionally, in some embodiments, the images can be iteratively edited. For example, the output image 114 can then become the input image 102, and a further editing process can be performed, as shown at numeral 4. The output of a first edit becomes the input to a second edit. This enables users to make specific changes one at a time, which can provide a simpler and more intuitive editing process for the user.
Each GAN 302A-302D generates an image which has been optimized to reproduce just the segment that GAN is responsible for. For example, each GAN performs a projection into its latent space of the segment it is responsible for, to identify a latent code that produces an image having a similar segment. As such, the generated image may not resemble the input image except for the one segment. For example, if the input image is of a face, and the GAN is responsible for generating a segment corresponding to the mouth, the generated image may have a similar mouth but no other similar features (e.g., the input face and the generated face, as a whole, may not resemble one another). The resulting images are then provided to segmentation layer 304, which stitches (e.g., composites) the appropriate segments from the images to create generated image 306. This is provided to a loss function 308 (such as an image loss function) which calculates a loss representing a difference between the input image and the generated image. The loss gradients are then backpropagated to each of the GANs, as shown via dotted lines, and used to optimize over their latent spaces to select another latent code that produces an image with a lower loss. This may be repeated a set number of times, until the loss is below a threshold value, or based on another quality metric.
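A minimal sketch of this per-segment projection loop is shown below, assuming each generator maps a latent vector to a full-resolution image and that a simple pixel loss is used; in practice, perceptual losses (e.g., LPIGS-style metrics such as LPIPS) and per-layer StyleGAN latents are common substitutions, so the specifics here are illustrative.

```python
import torch
import torch.nn.functional as F

def project_segments(generators, target, masks, steps=500, lr=0.05):
    """Optimize one latent code per generator so that each generator
    reproduces only the segment of the target image assigned to it.

    generators: list of n frozen generator callables, z -> (1, 3, H, W)
    target:     (3, H, W) input image
    masks:      (n, H, W) segmentation weights, one channel per segment
    """
    latents = [torch.randn(1, 512, requires_grad=True) for _ in generators]
    opt = torch.optim.Adam(latents, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        images = [g(z)[0] for g, z in zip(generators, latents)]
        stacked = torch.stack(images, dim=0)               # (n, 3, H, W)
        composed = (masks.unsqueeze(1) * stacked).sum(0)   # stitch segments together
        loss = F.mse_loss(composed, target)                # image loss on the composite
        loss.backward()                                    # gradients reach every latent code
        opt.step()
    return latents
```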
For example, if the mouth of the face depicted in the input image is to be edited (e.g., to change the expression), then the input image and a mask indicating the mouth area of the image are provided. The generator 302A then optimizes for a mouth that resembles the segment of the input image that includes the mouth. The overall image generated by generator 302A may not closely resemble the input, and may look quite different, but the segments will resemble one another. Once the latent code has been found that produces an image with a similar segment, then the latent space can be explored to identify the desired changes to the segment. Once identified, the resulting image, input image, and mask, can be provided to the segmentation layer, which stitches the images together. This retains the pixel data of the original image except for the replaced segment which is taken from the image generated by generator 302A. As a result, the features within the segment are disentangled from the rest of the image, enabling changes to be made to the segment without affecting features of the rest of the generated image (as they are taken from the original image). As in the example of
The resulting gradients are then backpropagated and used to update the boundaries of the input image mask. In some embodiments, the gradients are backpropagated to the segmentation manager 104 which can update the segmentation of the input image. This process iterates and optimizes both the boundaries of the mask, and the closeness of the segment of the generated image to the corresponding segment of the input image. Once optimized, the segment of the image generated by generator 302A and the input image 102 are stitched together by segmentation layer 304 and provided as output image 114.
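One way to keep the segmentation itself differentiable, so that gradients from the image loss can also move the mask boundaries, is to parameterize the mask as per-pixel logits and relax it with a softmax. The sketch below shows one possible parameterization under assumed sizes, not the specific formulation of any embodiment.

```python
import torch

n_segments, H, W = 3, 256, 256

# Per-pixel logits for the segmentation; optimizing these moves the boundaries.
mask_logits = torch.zeros(n_segments, H, W, requires_grad=True)

def soft_masks(logits, temperature=0.1):
    """Relaxed segment assignment; lower temperature gives sharper boundaries."""
    return torch.softmax(logits / temperature, dim=0)
```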
Additionally, an input image 102 and input image mask 400 are provided to image generation system 100. As discussed, in some embodiments, each GAN 600A-600C is responsible for generating an image that has been optimized such that a particular segment is visually close to the corresponding segment of the input image. By limiting any given GAN to matching a particular segment, rather than recreating the entire image, the quality of that segment can be greatly improved. Additionally, in the example of
As illustrated in
Additionally, the user interface manager 902 allows users to request the image generation system 900 to edit the generated image such as by changing features of the objects depicted in the input image. For example, where the input image includes a representation of a person's face, the user can request that the image generation system change the age, hair or eye color, expression, hairstyle, facial hair, etc. In some embodiments, the user interface manager 902 enables the user to view the resulting output image and/or request further edits to the image.
As illustrated in
As illustrated in
The function of S(I) depends on the method which provides the input based on I. In the case of gradient descent on some image loss L, S(I) can be a compositing function which combines the region of interest defined by Sk with I, such that the per-pixel loss outside of the region is 0. Some embodiments use a feed-forward code estimation, which may be implemented by providing a mask as part of the input and altering the training regime of the code-providing network.
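A sketch of such a compositing function is given below: the generated segment is pasted into the original image, so a per-pixel loss against the original is zero everywhere outside the region of interest. The function and argument names are illustrative.

```python
def segment_composite(generated, original, region_mask):
    """S(I): keep the region of interest from the generated image and the
    original pixels everywhere else, so per-pixel loss outside the region is 0.

    generated, original: (3, H, W) tensors; region_mask: (H, W), 1 inside Sk
    """
    m = region_mask.unsqueeze(0)                   # broadcast over channels
    return m * generated + (1.0 - m) * original
```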
Compositing the projected segments back into the final image poses some challenges, as in general, the methods of projecting do not guarantee there will be continuity on segment boundaries. One way to improve continuity is by ensuring the segments are partially overlapping—or equivalently, the projection loss function is conditioned to additionally consider a small neighborhood around the segment. If there are areas of the input image that do not need to be edited, then these areas can be excluded from the projection altogether and substituted with their corresponding portions of I.
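Overlapping segments (or, equivalently, extending the projection loss to a small neighborhood around each segment) can be approximated by dilating each segment mask before the loss is evaluated, for example as in the following sketch, where the dilation radius is an assumed parameter:

```python
import torch.nn.functional as F

def dilate_mask(region_mask, pixels=8):
    """Grow a binary (H, W) segment mask by `pixels` so the projection loss
    also covers a small neighborhood around the segment boundary."""
    k = 2 * pixels + 1
    m = region_mask.unsqueeze(0).unsqueeze(0)                        # (1, 1, H, W)
    return F.max_pool2d(m, kernel_size=k, stride=1, padding=pixels)[0, 0]
```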
Additionally, to achieve even better inversion, embodiments can fine-tune the underlying model for each segment, using techniques like Pivotal Tuning. Much like in the projection task, fine-tuning is made considerably easier by the reduced constraints offered by only considering the region of interest (e.g., the particular segment associated with that network).
Once the projection has been obtained, latent space edits are performed to achieve natural-looking changes in the input image, such as changing a person's gaze or haircut. Composites which reach the best editability are achieved when S is close to a semantic segmentation, such as one including individual facial features, similar to the mask described above with respect to
The edits performed can include locally global edits, where all of the segments are changed in the same direction, in hopes of achieving a consistent change. For example, embodiments can perform edits of the form Xk ← Xk + αD for each segment k, with known directions D, and then compose the final image using S.
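Such a locally global edit amounts to shifting every segment's latent code by the same scaled direction before recomposing the image with S; a one-line sketch is shown below, where the direction vector D is assumed to come from a known attribute-editing method and is given.

```python
def global_edit(latents, direction, alpha):
    """Apply the same known edit direction D to every segment's latent code:
    edited_k = X_k + alpha * D, after which the image is recomposed using S."""
    return [z + alpha * direction for z in latents]
```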
Another possible image modification is layered editing, where segments are optimized sequentially, edited, and the final image becomes the new I for the next iteration of changes. This type is shown in
When compositing an edited image, even when the edits are consistent on all segments at the same time, continuity between segments will almost surely be affected to some degree, and thus correction is necessary. Accordingly, in some embodiments image editing techniques, such as Poisson image editing, can be used between the segments to hide minor discrepancies, such that the final image still looks natural.
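For instance, OpenCV's seamless cloning (an implementation of Poisson image editing) can be used to hide seams where a generated segment meets the surrounding pixels. The sketch below assumes 8-bit BGR images and a binary segment mask, and is only one of several possible blending choices.

```python
import cv2
import numpy as np

def blend_segment(generated_bgr, surrounding_bgr, mask):
    """Poisson-blend a generated segment into the surrounding image.

    generated_bgr, surrounding_bgr: uint8 BGR images of the same size
    mask: uint8 single-channel array, 255 inside the segment, 0 elsewhere
    """
    ys, xs = np.nonzero(mask)
    center = (int(xs.mean()), int(ys.mean()))    # anchor the clone at the mask centroid
    return cv2.seamlessClone(generated_bgr, surrounding_bgr, mask, center,
                             cv2.NORMAL_CLONE)
```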
Additionally, as discussed, the segmentation S does not need to be held constant throughout the search for X. The n projections can be batch-processed, and if an iterative optimization method is used, the state of the total projection at each step can be observed. This offers an opportunity to check whether any changes should be applied to S (e.g., changing the boundaries of the segments) in an attempt to obtain a better overall result.
In some embodiments, multiple goals of refinement can be defined. For example, one goal is to avoid segment lines passing through low-frequency features. There are many ways to achieve this automatic refinement of the segmentation. For example, level sets in the difference of the gradient images of adjacent segments can be found and the segments adjusted to minimize this metric along a new hard segment boundary. Similarly, S should be allowed to change during editing, which becomes especially helpful when performing edits that change features near a segment boundary, as the neighboring segments are no longer likely to be consistent.
As illustrated in
As illustrated in
As illustrated in
As further illustrated in
Each of the components 902-912 of the image generation system 900 and their corresponding elements (as shown in
The components 902-912 and their corresponding elements can comprise software, hardware, or both. For example, the components 902-912 and their corresponding elements can comprise one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices. When executed by the one or more processors, the computer-executable instructions of the image generation system 900 can cause a client device and/or a server device to perform the methods described herein. Alternatively, the components 902-912 and their corresponding elements can comprise hardware, such as a special purpose processing device to perform a certain function or group of functions. Additionally, the components 902-912 and their corresponding elements can comprise a combination of computer-executable instructions and hardware.
Furthermore, the components 902-912 of the image generation system 900 may, for example, be implemented as one or more stand-alone applications, as one or more modules of an application, as one or more plug-ins, as one or more library functions or functions that may be called by other applications, and/or as a cloud-computing model. Thus, the components 902-912 of the image generation system 900 may be implemented as a stand-alone application, such as a desktop or mobile application. Furthermore, the components 902-912 of the image generation system 900 may be implemented as one or more web-based applications hosted on a remote server. Alternatively, or additionally, the components of the image generation system 900 may be implemented in a suite of mobile device applications or “apps.” To illustrate, the components of the image generation system 900 may be implemented as part of an application, or suite of applications, including but not limited to ADOBE CREATIVE CLOUD, ADOBE PHOTOSHOP, ADOBE ACROBAT, ADOBE ILLUSTRATOR, ADOBE LIGHTROOM and ADOBE INDESIGN. “ADOBE”, “CREATIVE CLOUD,” “PHOTOSHOP,” “ACROBAT,” “ILLUSTRATOR,” “LIGHTROOM,” and “INDESIGN” are either registered trademarks or trademarks of Adobe Inc. in the United States and/or other countries.
As illustrated in
As illustrated in
As illustrated in
In some embodiments, the method further includes receiving a request to edit a first portion of the input image, determining a segment of the input image corresponding to the first portion of the input image, generating, by a generator corresponding to the segment of the input image, an edited image by exploring the latent space associated with the generator, and generating an edited output image by compositing the edited image with the input image.
In some embodiments, the plurality of generators are clones of a generator model. In some embodiments, the plurality of generators includes two or more different generator models. In some embodiments, the segmentation mask is dynamically updated based on a stitching loss calculated by a stitching layer.
In some embodiments, receiving an input image and a segmentation mask further includes processing the input image using a semantic segmentation model to generate the segmentation mask. In some embodiments, receiving an input image and a segmentation mask further includes receiving an input identifying at least one segment of the segmentation mask via a user interface, wherein the input includes painting the at least one segment on a representation of the input image.
Although
Similarly, although the environment 1100 of
As illustrated in
Moreover, as illustrated in
In addition, the environment 1100 may also include one or more servers 1104. The one or more servers 1104 may generate, store, receive, and transmit any type of data. For example, a server 1104 may receive data from a client device, such as the client device 1106A, and send the data to another client device, such as the client device 1102B and/or 1102N. The server 1104 can also transmit electronic messages between one or more users of the environment 1100. In one example embodiment, the server 1104 is a data server. The server 1104 can also comprise a communication server or a web-hosting server. Additional details regarding the server 1104 will be discussed below with respect to
As mentioned, in one or more embodiments, the one or more servers 1104 can include or implement at least a portion of the image generation system. In particular, the image generation system can comprise an application running on the one or more servers 1104 or a portion of the image generation system can be downloaded from the one or more servers 1104. For example, the image generation system can include a web hosting application that allows the client devices 1106A-1106N to interact with content hosted at the one or more servers 1104. To illustrate, in one or more embodiments of the environment 1100, one or more client devices 1106A-1106N can access a webpage supported by the one or more servers 1104. In particular, the client device 1106A can run a web application (e.g., a web browser) to allow a user to access, view, and/or interact with a webpage or website hosted at the one or more servers 1104.
Upon the client device 1106A accessing a webpage or other web application hosted at the one or more servers 1104, in one or more embodiments, the one or more servers 1104 can provide access to one or more digital images (e.g., the input image data, such as camera roll or an individual's personal photos) stored at the one or more servers 1104. Moreover, the client device 1106A can receive a request (i.e., via user input) to generate and/or edit an image similar to the input image and provide the request to the one or more servers 1104. Upon receiving the request, the one or more servers 1104 can automatically perform the methods and processes described above to generate the requested image. The one or more servers 1104 can provide the resulting image to the client device 1106A for display to the user.
As just described, the image generation system may be implemented in whole, or in part, by the individual elements 1102-1108 of the environment 1100. It will be appreciated that although certain components of the image generation system are described in the previous examples with regard to particular elements of the environment 1100, various alternative implementations are possible. For instance, in one or more embodiments, the image generation system is implemented on any of the client devices 1106A-N. Similarly, in one or more embodiments, the image generation system may be implemented on the one or more servers 1104. Moreover, different components and functions of the image generation system may be implemented separately among client devices 1106A-1106N, the one or more servers 1104, and the network 1108.
Embodiments of the present disclosure may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present disclosure also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. In particular, one or more of the processes described herein may be implemented at least in part as instructions embodied in a non-transitory computer-readable medium and executable by one or more computing devices (e.g., any of the media content access devices described herein). In general, a processor (e.g., a microprocessor) receives instructions, from a non-transitory computer-readable medium, (e.g., a memory, etc.), and executes those instructions, thereby performing one or more processes, including one or more of the processes described herein.
Computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are non-transitory computer-readable storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the disclosure can comprise at least two distinctly different kinds of computer-readable media: non-transitory computer-readable storage media (devices) and transmission media.
Non-transitory computer-readable storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to non-transitory computer-readable storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that non-transitory computer-readable storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general-purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. In some embodiments, computer-executable instructions are executed on a general-purpose computer to turn the general-purpose computer into a special purpose computer implementing elements of the disclosure. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Embodiments of the present disclosure can also be implemented in cloud computing environments. In this description, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources. For example, cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources. The shared pool of configurable computing resources can be rapidly provisioned via virtualization and released with low management effort or service provider interaction, and then scaled accordingly.
A cloud-computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud-computing model can also expose various service models, such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). A cloud-computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. In this description and in the claims, a “cloud-computing environment” is an environment in which cloud computing is employed.
In particular embodiments, processor(s) 1202 includes hardware for executing instructions, such as those making up a computer program. As an example, and not by way of limitation, to execute instructions, processor(s) 1202 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 1204, or a storage device 1208 and decode and execute them. In various embodiments, the processor(s) 1202 may include one or more central processing units (CPUs), graphics processing units (GPUs), field programmable gate arrays (FPGAs), systems on chip (SoC), or other processor(s) or combinations of processors.
The computing device 1200 includes memory 1204, which is coupled to the processor(s) 1202. The memory 1204 may be used for storing data, metadata, and programs for execution by the processor(s). The memory 1204 may include one or more of volatile and non-volatile memories, such as Random Access Memory (“RAM”), Read Only Memory (“ROM”), a solid state disk (“SSD”), Flash, Phase Change Memory (“PCM”), or other types of data storage. The memory 1204 may be internal or distributed memory.
The computing device 1200 can further include one or more communication interfaces 1206. A communication interface 1206 can include hardware, software, or both. The communication interface 1206 can provide one or more interfaces for communication (such as, for example, packet-based communication) between the computing device and one or more other computing devices 1200 or one or more networks. As an example and not by way of limitation, communication interface 1206 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. The computing device 1200 can further include a bus 1212. The bus 1212 can comprise hardware, software, or both that couples components of computing device 1200 to each other.
The computing device 1200 includes a storage device 1208, which includes storage for storing data or instructions. As an example, and not by way of limitation, storage device 1208 can comprise a non-transitory storage medium described above. The storage device 1208 may include a hard disk drive (HDD), flash memory, a Universal Serial Bus (USB) drive, or a combination of these or other storage devices. The computing device 1200 also includes one or more input or output (“I/O”) devices/interfaces 1210, which are provided to allow a user to provide input (such as user strokes) to, receive output from, and otherwise transfer data to and from the computing device 1200. These I/O devices/interfaces 1210 may include a mouse, keypad or a keyboard, a touch screen, camera, optical scanner, network interface, modem, other known I/O devices or a combination of such I/O devices/interfaces 1210. The touch screen may be activated with a stylus or a finger.
The I/O devices/interfaces 1210 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O devices/interfaces 1210 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.
In the foregoing specification, embodiments have been described with reference to specific exemplary embodiments thereof. Various embodiments are described with reference to details discussed herein, and the accompanying drawings illustrate the various embodiments. The description above and drawings are illustrative of one or more embodiments and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of various embodiments.
Embodiments may be embodied in other specific forms without departing from their spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. For example, the methods described herein may be performed with less or more steps/acts or the steps/acts may be performed in differing orders. Additionally, the steps/acts described herein may be repeated or performed in parallel with one another or in parallel with different instances of the same or similar steps/acts. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
In the various embodiments described above, unless specifically noted otherwise, disjunctive language such as the phrase “at least one of A, B, or C,” is intended to be understood to mean either A, B, or C, or any combination thereof (e.g., A, B, and/or C). As such, disjunctive language is not intended to, nor should it be understood to, imply that a given embodiment requires at least one of A, at least one of B, or at least one of C to each be present.
This application claims the benefit of U.S. Provisional Application No. 63/245,628, filed Sep. 17, 2021, and U.S. Provisional Application No. 63/248,268, filed Sep. 24, 2021, which are hereby incorporated by reference.