Computer users have used various drawing tools for creating art. Commonly, such art is created on the display screen of a computer's audiovisual display by use of a mouse. An artist can generate images by moving a cursor across the display screen and by performing a series of point-and-click actions. The artist may also use a keyboard or the mouse for selecting colors to decorate elements within the generated images. In addition, art applications include various editing tools for adding or changing colors, shapes, and the like.
Systems and methods are needed whereby an artist can use computer input devices other than a mouse and keyboard for creating art. Further, it is desirable to provide systems and methods that increase the degree of interactivity a user perceives when creating art.
Disclosed herein are systems and methods for decorating a display environment. In one embodiment, a user may decorate a display environment by making one or more gestures, using voice commands, using a suitable interface device, and/or combinations thereof. A voice command can be detected for user selection of an artistic feature, such as, for example, a color, a texture, an object, and/or a visual effect for decorating in a display environment. For example, the user can speak a desired color choice for coloring an area or portion of a display environment, and the speech can be recognized as selection of the color. Alternatively, the voice command can select one or more of a texture, an object, or a visual effect for decorating the display environment. The user can also gesture for selecting or targeting a portion of the display environment for decoration. For example, the user can make a throwing motion with his or her arm for selecting the portion of the display environment. In this example, the selected portion can be an area on a display screen of an audiovisual device that may be contacted by an object if thrown by the user at the speed and trajectory of the user's throw. Next, the selected portion of the display environment can be altered based on the selected artistic feature. The user's motions can be reflected in the display environment on an avatar. In addition, a virtual canvas or three-dimensional object can be displayed in the display environment for decoration by the user.
In another embodiment, a portion of a display environment may be decorated based on a characteristic of a user's gesture. A user's gesture may be detected by an image capture device. For example, the user's gesture may be a throwing movement, a wrist movement, a torso movement, a hand movement, a leg movement, an arm movement, or the like. A characteristic of the user's gesture may be determined. For example, one or more of a speed, a direction, a starting position, an ending position, and the like associated with the movement may be determined. Based on one or more of these characteristics, a portion of the display environment for decoration may be selected. The selected portion of the display environment may be altered based on the characteristic(s) of the user's gesture. For example, a position of the selected portion in the display environment, a size of the selected portion, and/or a pattern of the selected portion may be based on the speed and/or the direction of a throwing motion of the user.
In yet another embodiment, a captured image of an object can be used in a manner of stenciling for decorating in a display environment. An image of the object may be captured by an image capture device. An edge of at least a portion of the object in the captured image may be determined. A portion of the display environment may be defined based on the determined edge. For example, an outline of an object, such as the user, may be determined. In this example, the defined portion of the display environment can have a shape matching the outline of the user. The defined portion may be decorated, such as, for example, by coloring, by adding texture, and/or by a visual effect.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
The systems, methods, and computer readable media for decorating a display environment in accordance with this specification are further described with reference to the accompanying drawings in which:
As will be described herein, a user may decorate a display environment by making one or more gestures, using voice commands, and/or using a suitable interface device. According to one embodiment, a voice command can be detected for user selection of an artistic feature, such as, for example, a color, a texture, an object, and a visual effect. For example, the user can speak a desired color choice for coloring an area or portion of a display environment, and the speech can be recognized as selection of the color. Alternatively, the voice command can select one or more of a texture, an object, or a visual effect for decorating the display environment. The user can also gesture for selecting a portion of the display environment for decoration. For example, the user can make a throwing motion with his or her arm for selecting the portion of the display environment. In this example, the selected portion can be an area on a display screen of an audiovisual device that may be contacted by an object if thrown by the user at the speed and trajectory of the user's throw. Next, the selected portion of the display environment can be altered based on the selected artistic feature.
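One way the contact point of a thrown object could be estimated is by projecting the hand's position and velocity at release onto the plane of a virtual canvas. The following is a minimal illustrative sketch, not part of the disclosed embodiments; the canvas depth, gravity model, and function names are assumptions introduced only for illustration.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])  # m/s^2, y axis points up (assumed)
CANVAS_DEPTH = 3.0                     # assumed distance (m) from user to the virtual canvas plane (z axis)

def throw_target(release_pos, release_vel):
    """Estimate where a thrown object would strike the canvas plane z = CANVAS_DEPTH.

    release_pos, release_vel: 3-vectors for the hand position (m) and velocity (m/s)
    at the moment of release. Returns (x, y) on the canvas, or None if the throw
    never reaches the plane.
    """
    vz = release_vel[2]
    if vz <= 1e-6:                     # throw is not moving toward the canvas
        return None
    t = (CANVAS_DEPTH - release_pos[2]) / vz          # time of flight to the plane
    impact = release_pos + release_vel * t + 0.5 * GRAVITY * t ** 2
    return float(impact[0]), float(impact[1])

# Example: a throw from shoulder height, angled slightly upward and to the right.
print(throw_target(np.array([0.0, 1.4, 0.0]), np.array([1.0, 2.0, 4.0])))
```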
In another embodiment, a portion of a display environment may be decorated based on a characteristic of a user's gesture. A user's gesture may be detected by an image capture device. For example, the user's gesture may be a throwing movement, a wrist movement, a torso movement, a hand movement, a leg movement, an arm movement, or the like. A characteristic of the user's gesture may be determined. For example, one or more of a speed, a direction, a starting position, an ending position, and the like associated with the movement may be determined. Based on one or more of these characteristics, a portion of the display environment for decoration may be selected. The selected portion of the display environment may be altered based on the characteristic(s) of the user's gesture. For example, a position of the selected portion in the display environment, a size of the selected portion, and/or a pattern of the selected portion may be based on the speed and/or the direction of a throwing motion of the user.
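A sketch of how measured gesture characteristics might drive the selected portion is shown below; the scaling constants, canvas dimensions, and mapping are hypothetical, illustrating only that faster throws could yield larger splats and that the throw direction could offset the splat's position.

```python
def select_portion(speed, direction_xy, canvas_w=1920, canvas_h=1080):
    """Map gesture characteristics to a decorated portion of the display.

    speed:        hand speed (m/s) at release
    direction_xy: (dx, dy) direction of the throw in the canvas plane
    Returns a dict describing the portion (center, radius, pattern).
    """
    # Hypothetical tuning: splat grows with speed, capped at a quarter of the canvas height.
    radius = min(40 + 60 * speed, canvas_h / 4)
    # Offset the splat from the canvas center along the throw direction.
    cx = canvas_w / 2 + direction_xy[0] * canvas_w / 4
    cy = canvas_h / 2 - direction_xy[1] * canvas_h / 4   # screen y grows downward
    # Faster throws produce a more scattered pattern.
    pattern = "scattered" if speed > 3.0 else "solid"
    return {"center": (cx, cy), "radius": radius, "pattern": pattern}

print(select_portion(4.2, (0.5, 0.3)))
```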
In yet another embodiment, a captured image of an object can be used in a manner of stenciling for decorating in a display environment. An image of the object may be captured by an image capture device. An edge of at least a portion of the object in the captured image may be determined. A portion of the display environment may be defined based on the determined edge. For example, an outline of an object, such as the user, may be determined. In this example, the defined portion of the display environment can have a shape matching the outline of the user. The defined portion may be decorated, such as, for example, by coloring, by adding texture, and/or by a visual effect.
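One way the stencil portion could be derived is sketched below; it assumes the capture device already yields a binary foreground mask of the user or object (for example from depth-based segmentation), which is not stated in the text. The edge is taken as the foreground pixels bordering the background, and the stencil region is the filled mask.

```python
import numpy as np

def outline_and_stencil(foreground_mask):
    """Given a boolean HxW mask of the user/object, return (edge, stencil).

    edge:    boolean mask of foreground pixels that touch the background
    stencil: the foreground mask itself, usable as the portion to decorate
    """
    m = foreground_mask.astype(bool)
    interior = m.copy()
    # A pixel is interior only if all four neighbors are also foreground.
    interior[1:, :]  &= m[:-1, :]
    interior[:-1, :] &= m[1:, :]
    interior[:, 1:]  &= m[:, :-1]
    interior[:, :-1] &= m[:, 1:]
    edge = m & ~interior
    return edge, m

# Example with a small square "object".
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
edge, stencil = outline_and_stencil(mask)
print(edge.astype(int))
```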
As shown in
As shown in
According to one embodiment, the system 10 may be connected to the audiovisual device 16. The audiovisual device 16 may be any type of display system, such as a television, a monitor, a high-definition television (HDTV), or the like that may provide game or application visuals and/or audio to a user such as the user 18. For example, the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that may provide audiovisual signals associated with the game application, non-game application, or the like. The audiovisual device 16 may receive the audiovisual signals from the computing environment 12 and may then output the game or application visuals and/or audio associated with the audiovisual signals to the user 18. According to one embodiment, the audiovisual device 16 may be connected to the computing environment 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, or the like.
As shown in
In one embodiment, the computing environment 12 may recognize an open and/or closed position of a user's hand for timing the release of paint in the virtual environment. For example, as described above, an avatar can be controlled to “throw” paint onto the canvas 17. The avatar's movement can mimic the throwing motion of the user. During the throwing motion, the release of paint from the avatar's hand to throw the paint onto the canvas can be timed to correspond to when the user opens his or her hand. For example, the user can begin the throwing motion with a closed hand for “holding” paint. In this example, at any time during the user's throwing motion, the user can open his or her hand to control the avatar to release the paint held by the avatar such that it travels towards the canvas. The speed and direction of the paint on release from the avatar's hand can be directly related to the speed and direction of the user's hand at the moment the hand is opened. In this way, the throwing of paint by the avatar in the virtual environment can correspond to the user's motion.
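A sketch of this release-timing logic follows, assuming a per-frame hand sample that carries a recognized open/closed state and a velocity estimate; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HandSample:
    is_open: bool          # recognized open/closed state of the user's hand
    position: tuple        # (x, y, z) of the hand in user space
    velocity: tuple        # (vx, vy, vz) estimated from recent frames

class PaintThrow:
    """Holds virtual paint while the hand is closed; releases it when the hand opens."""

    def __init__(self):
        self.holding = False

    def update(self, sample: HandSample):
        """Returns (position, velocity) at the release moment, else None."""
        if not sample.is_open:
            self.holding = True            # a closed hand "holds" the paint
            return None
        if self.holding:                   # hand just opened: release the paint
            self.holding = False
            return sample.position, sample.velocity
        return None

# The returned position/velocity could feed the targeting sketch shown earlier.
thrower = PaintThrow()
thrower.update(HandSample(False, (0.0, 1.4, 0.0), (0.0, 0.0, 0.0)))
print(thrower.update(HandSample(True, (0.1, 1.5, 0.2), (1.0, 2.0, 4.0))))
```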
In another embodiment, rather than applying paint onto the canvas 17 with a throwing motion or in combination with this motion, a user can move his or her wrist in a flicking motion to apply paint to the canvas. For example, the computing environment 12 can recognize a rapid wrist movement as being a command for applying a small amount of paint onto a portion of the canvas 17. The avatar's movement can reflect the user's wrist movement. In addition, an animation can be rendered in the display environment such that it appears that the avatar is using its wrist to flick paint onto the canvas. The resulting decoration on the canvas can be dependent on the speed and/or direction of motion of the user's wrist movement.
In another embodiment, user movements may be recognized only in a single plane in the user's space. The user may provide a command such that his or her movements are only recognized by the computing environment 12 in an X-Y plane, an X-Z plane, or the like with respect to the user such that the user's motion outside of the plane is ignored. For example, if only movement in the X-Y plane is recognized, movement in the Z-direction is ignored. This feature can be useful for drawing on a canvas by movement of the user's hand. For example, the user can move his or her hand in the X-Y plane, and a line corresponding to the user's movement may be generated on the canvas with a shape that directly corresponds to the user's movement in the X-Y plane. Further, in an alternative, limited movement may be recognized in other planes for effecting alterations as described herein.
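A sketch of restricting recognized motion to a single plane follows, assuming hand positions are reported as (x, y, z) coordinates in the user's space; out-of-plane motion is simply dropped before the movement is interpreted. The function name and reference value are illustrative assumptions.

```python
def lock_to_plane(position, plane="xy", reference=(0.0, 0.0, 0.0)):
    """Project a 3-D hand position into the chosen plane, ignoring the other axis.

    plane: "xy" ignores depth (z); "xz" ignores height (y).
    reference: value substituted for the ignored coordinate.
    """
    x, y, z = position
    if plane == "xy":
        return (x, y, reference[2])
    if plane == "xz":
        return (x, reference[1], z)
    raise ValueError(f"unsupported plane: {plane}")

# The hand drifts toward the camera (z changes), but only x/y motion reaches the canvas logic.
print(lock_to_plane((0.3, 1.2, 0.8), plane="xy"))   # -> (0.3, 1.2, 0.0)
```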
System 10 may include a microphone or other suitable device to detect voice commands from a user for use in selecting an artistic feature for decorating the canvas 17. For example, a plurality of artistic features may each be defined, stored in the computing environment 12, and associated with voice recognition data for its selection. A color and/or graphics of a cursor 13 may change based on the audio input. In an example, a user's voice command can change a mode of applying decorations to the canvas 17. The user may speak the word “red,” and this word can be interpreted by the computing environment 12 as being a command to enter a mode for painting the canvas 17 with the color red. Once in the mode for painting with a particular color, a user may then make one or more gestures for “throwing” paint with his or her hand(s) onto the canvas 17. The avatar's movement can mimic the user's motion, and an animation can be rendered such that it appears that the avatar is throwing the paint onto the canvas 17.
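A sketch of associating recognized voice commands with decoration modes follows, assuming the speech recognizer produces a plain text token; the vocabulary and mode structure below are hypothetical.

```python
# Hypothetical vocabulary of recognized words mapped to decoration modes.
VOICE_MODES = {
    "red":     {"kind": "color",   "value": (255, 0, 0)},
    "blue":    {"kind": "color",   "value": (0, 0, 255)},
    "stripes": {"kind": "texture", "value": "striped"},
    "putty":   {"kind": "object",  "value": "putty_ball"},
}

class DecorationState:
    def __init__(self):
        self.mode = None

    def on_voice_command(self, word: str):
        """Switch modes when a recognized word matches the vocabulary."""
        mode = VOICE_MODES.get(word.lower())
        if mode is not None:
            self.mode = mode
        return self.mode

state = DecorationState()
print(state.on_voice_command("Red"))   # subsequent throw gestures would then paint in red
```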
As shown in
According to another example embodiment, a 3-D camera may be used to indirectly determine a physical distance from the image capture device 20 to the user's hand by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging. This information may also be used to determine movement of the user's hand and/or other user movement.
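The underlying relationship for such time-of-flight measurement is the round-trip travel time of the emitted light. The sketch below is illustrative only and idealizes a single timed pulse rather than the shuttered light pulse imaging the text mentions.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the target given the round-trip time of an emitted light pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A roughly 20 ns round trip corresponds to a hand about 3 m from the capture device.
print(distance_from_round_trip(20e-9))
```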
In another example embodiment, the image capture device 20 may use a structured light to capture gesture information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as grid pattern or a stripe pattern) may be projected onto the scene via, for example, the IR light component 24. Upon striking the surface of the user's hand, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 26 and/or the RGB camera 28 and may then be analyzed to determine a physical distance from the capture device to the user's hand and/or other body part.
According to another embodiment, the capture device 20 may include two or more physically separated cameras that may view a scene from different angles, to obtain visual stereo data that may be resolved to generate gesture information.
The capture device 20 may further include a microphone 30. The microphone 30 may include transducers or sensors that may receive and convert sound into electrical signals. According to one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the system 10. Additionally, the microphone 30 may be used to receive audio signals that may also be provided by the user to control the activity and/or appearance of an avatar, and/or a mode for decorating a canvas or other portion of a display environment.
In an example embodiment, the capture device 20 may further include a processor 32 that may be in operative communication with the image camera component 22. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions that may include instructions for receiving the user gesture-related images, determining whether a user's hand or other body part may be included in the gesture image(s), converting the image into a skeletal representation or model of the user's hand or other body part, or any other suitable instruction.
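A sketch of a minimal skeletal representation such a processor could produce from the gesture images is given below; the joint names and structure are illustrative assumptions, not the disclosure's actual model.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Skeleton:
    """Tracked joint positions for one user, keyed by joint name."""
    joints: Dict[str, Vec3] = field(default_factory=dict)

    def hand_relative_to_shoulder(self, side: str = "right") -> Vec3:
        hand = self.joints[f"{side}_hand"]
        shoulder = self.joints[f"{side}_shoulder"]
        return tuple(h - s for h, s in zip(hand, shoulder))

skeleton = Skeleton(joints={
    "right_shoulder": (0.2, 1.45, 0.0),
    "right_hand":     (0.5, 1.30, 0.6),
})
print(skeleton.hand_relative_to_shoulder("right"))  # how far the hand reaches forward
```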
The capture device 20 may further include a memory component 34 that may store the instructions that may be executed by the processor 32, images or frames of images captured by the 3-D camera or RGB camera, any other suitable information, images, or the like. According to an example embodiment, the memory component 34 may include random access memory (RAM), read only memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component. As shown in
As shown in
Additionally, the capture device 20 may provide the user gesture information and images captured by, for example, the 3-D camera 26 and/or the RGB camera 28, and a skeletal model that may be generated by the capture device 20 to the computing environment 12 via the communication link 36. The computing environment 12 may then use the skeletal model, gesture information, and captured images to, for example, control an avatar's appearance and/or activity. For example, as shown, in
A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, a RAM (Random Access Memory). In one example, the GPU 108 may be a widely-parallel general purpose processor (known as a general purpose GPU or GPGPU).
The multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118. The USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive, etc. The media drive 144 may be internal or external to the multimedia console 100. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100. The media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
The system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
The front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.
The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnects (PCI) bus, PCI-Express bus, etc.
When the multimedia console 100 is powered ON, application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.
The multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.
When the multimedia console 100 is powered ON, a set amount of hardware resources is reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's point of view.
In particular, the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., popups) are displayed by using a GPU interrupt to schedule code to render the popup into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of the application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.
After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager (described below) controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device. The application manager preferably controls the switching of the input stream, without the gaming application's knowledge, and a driver maintains state information regarding focus switches. The cameras 27, 28 and capture device 20 may define additional input devices for the console 100.
In
The computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
The computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. The remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in
When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
At 510, one or more of the user's gestures and/or the user's voice commands are detected for targeting or selecting a portion of a display environment. For example, an image capture device may capture a series of images of a user while the user makes one or more of the following movements: a throwing movement, a wrist movement, a torso movement, a hand movement, a leg movement, an arm movement, or the like. The detected gestures may be used in selecting a position of the selected portion in the display environment, a size of the selected portion, a pattern of the selected portion, and/or the like. Further, a computing environment may recognize that the combination of the user's positions in the captured images corresponds to a particular movement. In addition, the user's movements may be processed for detecting one or more movement characteristics. For example, the computing environment may determine a speed and/or direction of the arm's movement based on a positioning of an arm in the captured images and the time elapsed between two or more of the images. In another example, the computing environment may detect a position characteristic of the user's movement in one or more of the captured images. In this example, a user movement's starting position, ending position, intermediate position, and/or the like may be detected for selecting a portion of the display environment for decoration.
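A sketch of deriving such movement characteristics from tracked positions in successive captured images follows, assuming each tracked sample carries a timestamp; the helper name and sample format are hypothetical.

```python
import math

def movement_characteristics(samples):
    """samples: list of (timestamp_seconds, (x, y, z)) for a tracked body part.

    Returns start/end positions, average speed (m/s), and overall direction.
    """
    if len(samples) < 2:
        return None
    (t0, start), (t1, end) = samples[0], samples[-1]
    elapsed = t1 - t0
    displacement = [e - s for s, e in zip(start, end)]
    distance = math.sqrt(sum(d * d for d in displacement))
    speed = distance / elapsed if elapsed > 0 else 0.0
    direction = [d / distance for d in displacement] if distance > 0 else [0.0, 0.0, 0.0]
    return {"start": start, "end": end, "speed": speed, "direction": direction}

# Three frames of a tracked hand, 0.1 s apart.
frames = [(0.00, (0.0, 1.2, 0.0)), (0.10, (0.2, 1.3, 0.3)), (0.20, (0.5, 1.4, 0.8))]
print(movement_characteristics(frames))
```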
In an embodiment, using the one or more detected characteristics of the user's gesture, a portion of the display environment may be selected for decoration in accordance with a selected artistic feature at 505. For example, if a user selects a color mode for coloring red and makes a throwing motion as shown in
At 515, the selected portion of the display environment is altered based on the selected artistic feature. For example, the selected portion of the display environment can be colored red or any other color selected by the user using the voice command. In another example, the selected portion may be decorated with any other two-dimensional imagery selected by the user, such as a striped pattern, a polka dot pattern, any color combination, any color mixture, or the like.
An artistic feature may be any imagery suitable for display within a display environment. For example, two-dimensional imagery may be displayed within a portion of the display environment. In another example, the imagery may appear to be three-dimensional to a viewer. Three-dimensional imagery can appear to have texture and depth to a viewer. In another example, an artistic feature can be an animation feature that changes over time. For example, the imagery can appear organic (e.g., a plant or the like) and grow over time within the selected portion and/or into other portions of the display environment.
In one embodiment, a user can select a virtual object for use in decorating in the display environment. The object can be, for example, putty, paint, or the like for creating a visual effect at a portion of the display environment. For example, after selection of the object, an avatar representing the user can be controlled, as described herein, to throw the object at the portion of the display environment. An animation of the avatar throwing the object can be rendered, and the effect of the object hitting the targeted portion can be displayed. For example, a ball of putty thrown at a canvas can flatten on impact with the canvas and render an irregular, three-dimensional shape of the putty. In another example, the avatar can be controlled to throw paint at the canvas. In this example, an animation can show the avatar picking up paint out of a bucket, and throwing the paint at the canvas such that the canvas is painted in a selected color in an irregular, two-dimensional shape.
In an embodiment, the selected artistic feature may be an object that can be sculpted by user gestures or other input. For example, the user may use a voice command or other input for selecting an object that appears three-dimensional in a display environment. In addition, the user may select an object type, such as, for example, clay that can be molded by user gestures. Initially, the object can be spherical in shape, or any other suitable shape for molding. The user can then make gestures that can be interpreted for molding the shape. For example, the user can make a patting gesture for flattening a side of the object. Further, the object can be considered a portion of the display environment that can be decorated by coloring, texturing, a visual effect, or the like, as described herein.
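A sketch of how a patting gesture might flatten one side of a moldable object is shown below; the object is represented simply as per-axis scale factors, a deliberately simplified stand-in for a real sculptable mesh, and all names are hypothetical.

```python
class MoldableObject:
    """A crude stand-in for a sculptable object: a sphere with per-axis scales."""

    def __init__(self):
        self.scale = {"x": 1.0, "y": 1.0, "z": 1.0}

    def pat(self, axis: str, strength: float):
        """Flatten the object along `axis`; harder pats flatten more (floor at 20%)."""
        self.scale[axis] = max(0.2, self.scale[axis] - 0.1 * strength)

clay = MoldableObject()
clay.pat("z", strength=2.0)   # a firm pat on the front face flattens it
print(clay.scale)
```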
At 610, an edge of at least a portion of the object in the captured image is determined. The computing environment can be configured to recognize an outline of the user or another object. The outline of the user or object can be stored in the computing environment and/or displayed on a display screen of an audiovisual display. In an example, a portion of an outline of the user or another object can be determined or recognized. In another example, the computing environment can recognize features in the user or object, such as an outline of a user's shirt, or partitions between different portions in an object.
In one embodiment, a plurality of images of the user or of another object can be captured over a period of time, and an outline of the captured images can be displayed in the display environment in real time. The user can provide a voice command or other input for storing the displayed outline for display. In this way, the user can be provided with real-time feedback on the current outline prior to capturing the image for storage and display.
At 615, a portion of a display environment is defined based on the determined edge. For example, a portion of the display environment can be defined to have a shape matching the outline of the user or another object in the captured image. The defined portion of the display environment can then be displayed. For example,
At 620, the defined portion of the display environment is decorated. For example, the defined portion may be decorated in any of the various ways described herein, such as, by coloring, by texturing, by adding a visual effect, or the like. Referring again to
Referring to
Referring to
In one embodiment, a user may utilize voice commands, gestures, or other inputs for adding and removing components or elements in a display environment. For example, shapes, images, or other artistic features contained in image files may be added to or removed from a canvas. In another example, the computing environment may recognize a user input as being an element in a library, retrieve the element, and display the element in the display environment for alteration and/or placement by the user. In addition, objects, portions, or other elements in the display environment may be identified by voice commands, gestures, or other inputs, and a color or other artistic feature of the identified object, portion, or element may be changed. In another example, a user may select to enter modes for utilizing a paint bucket, a single blotch feature, a fine swath, or the like. In this example, selection of the mode determines the type of artistic feature rendered in the display environment when the user makes a recognized gesture.
In one embodiment, gesture controls in the artistic environment can be augmented with voice commands. For example, a user may use a voice command for selecting a section within a canvas. In this example, the user may then use a throwing motion to throw paint, generally in the section selected using the voice command.
In another embodiment, a three-dimensional drawing space can be converted into a three-dimensional and/or two-dimensional image. For example, the canvas 17 shown in
In one embodiment, the computing environment may dynamically determine a screen position of a user in the user's space by analyzing one or more of the user's shoulder position, reach, stance, posture, and the like. For example, the user's shoulder position may be coordinated with the plane of a canvas surface displayed in the display environment such that the user's shoulder position in the virtual space of the display environment is parallel to the plane of the canvas surface. The user's hand position relative to the user's shoulder position, stance, and/or screen position may be analyzed for determining whether the user intends to use his or her virtual hand(s) to interact with the canvas surface. For example, if the user reaches forward with his or her hand, the gesture can be interpreted as a command for interacting with the canvas surface for altering a portion of the canvas surface. The avatar can be shown to extend its hand to touch the canvas surface in a movement corresponding to the user's hand movement. Once the avatar's hand touches the canvas surface, the hand can affect elements on the canvas, such as, for example, by moving colors (or paint) appearing on the surface. Further, in the example, the user can move his or her hand to effect a movement of the avatar's hand to smear or mix paint on the canvas surface. The visual effect, in this example, is similar to finger painting in a real environment. In addition, a user can select to use his or her hand in this way to move artistic features in the display environment. Further, for example, the movement of the user in real space can be translated to the avatar's movement in the virtual space such that the avatar moves around a canvas in the display environment.
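A sketch of deciding whether a forward reach should be treated as touching the canvas surface follows, assuming the canvas plane is parallel to the user's shoulders along the z axis and using a hypothetical reach threshold and color-mixing rule.

```python
REACH_THRESHOLD = 0.35   # assumed forward reach (m) beyond the shoulder that counts as a touch

def touches_canvas(hand: tuple, shoulder: tuple) -> bool:
    """True when the hand extends far enough toward the canvas (along z) to interact."""
    forward_reach = hand[2] - shoulder[2]
    return forward_reach > REACH_THRESHOLD

def smear(canvas_color, hand_color, blend=0.5):
    """Mix the color already on the canvas with the color carried on the avatar's hand."""
    return tuple(round(c * (1 - blend) + h * blend) for c, h in zip(canvas_color, hand_color))

if touches_canvas(hand=(0.5, 1.3, 0.6), shoulder=(0.2, 1.45, 0.0)):
    print(smear((255, 0, 0), (0, 0, 255)))   # red and blue mix under the avatar's hand
```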
In another example, the user can use any portion of the body for interacting with a display environment. Other than use of his or her hand, the user may use feet, knees, head, or other body part for effecting an alteration to a display environment. For example, a user may extend his or her foot, similar to moving a hand, for causing the avatar's knee to touch a canvas surface, and thereby, alter an artistic feature on the canvas surface.
In one embodiment, a user's torso gestures may be recognized by the computing environment for effecting artistic features displayed in the display environment. For example, the user may move his or her body back-and-forth (or in a “wiggle” motion) to effect artistic features. The torso movement can distort an artistic feature, or “swirl” a displayed artistic feature.
In one embodiment, an art assist feature can be provided for analyzing current artistic features in a display environment and for determining user intent with respect to these features. For example, the art assist feature can ensure that there are no empty, or unfilled, portions in the display environment or a portion of the display environment, such as, for example, a canvas surface. Further, the art assist feature can “snap” together portions in the display environment.
In one embodiment, the computing environment maintains an editing toolset for editing decorations or art generated in a display environment. For example, the user may undo or redo input results (e.g., alterations of display environment portions, color changes, and the like) using a voice command, a gesture, or other input. In other examples, a user may layer artistic features in the display environment, zoom, stencil, and/or apply/reject for fine work. Input for using the toolset may be by voice commands, gestures, or other inputs.
In one embodiment, the computing environment may recognize when a user does not intend to create art. In effect, this feature can pause the creation of art in the display environment by the user, so the user can take a break. For example, the user can generate a recognized voice command, gesture, or the like for pausing. The user can resume the creation of art by a recognized voice command, gesture, or the like.
In yet another embodiment, art generated in accordance with the disclosed subject matter may be replicated on real world objects. For example, a two-dimensional image created on the surface of a virtual canvas may be replicated onto a poster, coffee mug, calendar, and the like. Such images may be downloaded from a user's computing environment to a server for replication of a created image onto an object. Further, the images may be replicated on virtual world objects such as an avatar, a display wallpaper, and the like.
It should be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered limiting. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or the like. Likewise, the order of the above-described processes may be changed.
Additionally, the subject matter of the present disclosure includes combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or processes disclosed herein, as well as equivalents thereof.