This disclosure relates generally to computer graphics, and more particularly but not exclusively, relates to methods, systems, and computer readable media to provide graphical representations of clothing over an underlying graphical object, such as layered clothing for three-dimensional (3D) avatars in a virtual experience.
Multi-user electronic gaming or other types of virtual experience environments may involve the use of avatars, which represent the users in the virtual experience. Three-dimensional (3D) avatars differ in geometry/shape from one avatar to another. For example, avatars may have different body shapes (e.g., tall, short, muscular, thin, etc.), may be of different types (e.g., male, female, human, animal, alien, etc.), may have any number and types of limbs, etc. Avatars may be customizable with multiple pieces of clothing and/or accessories worn by the avatar (e.g., shirt worn over the torso, jacket worn over the shirt, scarf worn over the jacket, hat worn over the head, etc.).
It may be difficult to obtain satisfactory visual results, in a computationally efficient manner, when layering or otherwise fitting pieces of clothing onto an avatar.
Some implementations were conceived in light of the above.
The background description provided herein is for the purpose of presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Implementations of the present disclosure relate to fitting clothing onto three-dimensional (3D) avatars. The fitting may use a variety of techniques to obtain high quality results while minimizing human labor and computational resources. As an example, the fitting may fit shoes onto an avatar with improved visual quality. For example, the techniques may include identifying a reference region, determining a reference dimension of the reference region, identifying a first region of a clothing item to be fitted (the first region having a first dimension), identifying at least one second dimension of the clothing item, identifying a relationship between the dimensions, changing the first dimension to correspond to the reference dimension, and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension.
A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by a data processing apparatus, cause the apparatus to perform the actions.
According to one aspect, a computer-implemented method to provide fitted clothing on three-dimensional (3D) avatars is provided, the method comprising: identifying a reference region of an avatar body of an avatar; determining a reference dimension of the reference region of the avatar body; identifying a first region of a clothing item that is to be fitted to the reference region of the avatar body, wherein the first region has a first dimension; identifying at least one second dimension of the clothing item; determining at least one first relationship between the first dimension and the at least one second dimension; changing the first dimension to correspond to the reference dimension of the reference region of the avatar body so as to fit the first region of the clothing item to the reference region of the avatar body, wherein changing the first dimension correspondingly causes a scaling in size of the clothing item; and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension, wherein the at least one first relationship between the first dimension and the at least one second dimension is maintained after completion of the changing of the at least one second dimension.
Various implementations of the computer-implemented method are described herein.
In some implementations, the clothing item comprises a shoe.
In some implementations, identifying the reference region of the avatar body comprises identifying an ankle of the avatar body, determining the reference dimension of the reference region of the avatar body comprises determining a diagonal of the ankle, identifying the first region of the clothing item that is to be fitted to the reference region of the avatar body comprises identifying a mouth of the shoe that is to be fitted to the ankle of the avatar body, wherein the first dimension comprises a diagonal of the mouth of the shoe, identifying the at least one second dimension of the clothing item comprises identifying a height of the shoe, determining the at least one first relationship between the first dimension and the at least one second dimension comprises determining a ratio between the diagonal of the mouth of the shoe and the height of the shoe, changing the first dimension to correspond to the reference dimension of the reference region of the avatar body comprises changing the diagonal of the mouth of the shoe to match the diagonal of the ankle, and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension comprises changing the height of the shoe.
In some implementations, identifying the at least one second dimension of the clothing item further comprises identifying a toe length of the shoe, determining the at least one first relationship further comprises determining a ratio between the toe length of the shoe and the height of the shoe, and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension comprises changing the toe length, wherein the ratio between the toe length of the shoe and the height of the shoe is maintained after completion of the changing of the toe length.
In some implementations, the computer-implemented method further comprises performing a tapering of a toe of the shoe, wherein the tapering is based at least in part on an original toe length and an original width of the shoe prior to changing the first dimension and changing the at least one second dimension.
In some implementations, the at least one first relationship comprises a ratio between the first dimension and the at least one second dimension, the ratio being within a predetermined range of values.
In some implementations, changing the first dimension and the at least one second dimension of the clothing item comprises modifying a cage of the clothing item.
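By way of illustration only, the following minimal sketch (hypothetical Python names; the cage geometry is simplified to scalar dimensions) shows the proportional scaling described above: the mouth diagonal of the shoe is changed to match the ankle diagonal, and the height and toe length are then changed so that the original ratios among the dimensions are maintained.

    from dataclasses import dataclass

    @dataclass
    class ShoeDimensions:
        mouth_diagonal: float  # first dimension: diagonal of the shoe mouth
        height: float          # a second dimension: overall shoe height
        toe_length: float      # another second dimension: toe length

    def fit_shoe_to_ankle(shoe: ShoeDimensions, ankle_diagonal: float) -> ShoeDimensions:
        # Determine the relationships (ratios) before any scaling.
        height_to_mouth = shoe.height / shoe.mouth_diagonal
        toe_to_height = shoe.toe_length / shoe.height

        # Change the first dimension to correspond to the reference dimension.
        new_mouth = ankle_diagonal

        # Change the second dimensions so that the original ratios are maintained.
        new_height = new_mouth * height_to_mouth
        new_toe_length = new_height * toe_to_height
        return ShoeDimensions(new_mouth, new_height, new_toe_length)

For example, fitting ShoeDimensions(2.0, 5.0, 3.0) to an ankle diagonal of 2.5 yields ShoeDimensions(2.5, 6.25, 3.75), preserving the height-to-mouth ratio (2.5) and the toe-to-height ratio (0.6).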
According to another aspect, a non-transitory computer-readable medium is provided. The non-transitory computer-readable medium has instructions stored thereon that, responsive to execution by a processing device, cause the processing device to perform operations comprising: identifying a reference region of an avatar body of an avatar; determining a reference dimension of the reference region of the avatar body; identifying a first region of a clothing item that is to be fitted to the reference region of the avatar body, wherein the first region has a first dimension; identifying at least one second dimension of the clothing item; determining at least one first relationship between the first dimension and the at least one second dimension; changing the first dimension to correspond to the reference dimension of the reference region of the avatar body so as to fit the first region of the clothing item to the reference region of the avatar body, wherein changing the first dimension correspondingly causes a scaling in size of the clothing item; and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension, wherein the at least one first relationship between the first dimension and the at least one second dimension is maintained after completion of the changing of the at least one second dimension.
Various implementations of the non-transitory computer-readable medium are described herein.
In some implementations, the clothing item comprises a shoe.
In some implementations, identifying the reference region of the avatar body comprises identifying an ankle of the avatar body, determining the reference dimension of the reference region of the avatar body comprises determining a diagonal of the ankle, identifying the first region of the clothing item that is to be fitted to the reference region of the avatar body comprises identifying a mouth of the shoe that is to be fitted to the ankle of the avatar body, wherein the first dimension comprises a diagonal of the mouth of the shoe, identifying the at least one second dimension of the clothing item comprises identifying a height of the shoe, determining the at least one first relationship between the first dimension and the at least one second dimension comprises determining a ratio between the diagonal of the mouth of the shoe and the height of the shoe, changing the first dimension to correspond to the reference dimension of the reference region of the avatar body comprises changing the diagonal of the mouth of the shoe to match the diagonal of the ankle, and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension comprises changing the height of the shoe.
In some implementations, identifying the at least one second dimension of the clothing item further comprises identifying a toe length of the shoe, determining the at least one first relationship further comprises determining a ratio between the toe length of the shoe and the height of the shoe, and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension comprises changing the toe length, wherein the ratio between the toe length of the shoe and the height of the shoe is maintained after completion of the changing of the toe length.
In some implementations, the operations further comprise performing a tapering of a toe of the shoe, wherein the tapering is based at least in part on an original toe length and an original width of the shoe prior to changing the first dimension and changing the at least one second dimension.
In some implementations, the at least one first relationship comprises a ratio between the first dimension and the at least one second dimension, the ratio being within a predetermined range of values.
In some implementations, changing the first dimension and the at least one second dimension of the clothing item comprises modifying a cage of the clothing item.
According to another aspect, a system is disclosed, comprising: a memory with instructions stored thereon; and a processing device, coupled to the memory, the processing device configured to access the memory, wherein the instructions when executed by the processing device cause the processing device to perform operations including: identifying a reference region of an avatar body of an avatar; determining a reference dimension of the reference region of the avatar body; identifying a first region of a clothing item that is to be fitted to the reference region of the avatar body, wherein the first region has a first dimension; identifying at least one second dimension of the clothing item; determining at least one first relationship between the first dimension and the at least one second dimension; changing the first dimension to correspond to the reference dimension of the reference region of the avatar body so as to fit the first region of the clothing item to the reference region of the avatar body, wherein changing the first dimension correspondingly causes a scaling in size of the clothing item; and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension, wherein the at least one first relationship between the first dimension and the at least one second dimension is maintained after completion of the changing of the at least one second dimension.
Various implementations of the system are described herein.
In some implementations, the clothing item comprises a shoe.
In some implementations, identifying the reference region of the avatar body comprises identifying an ankle of the avatar body, determining the reference dimension of the reference region of the avatar body comprises determining a diagonal of the ankle, identifying the first region of the clothing item that is to be fitted to the reference region of the avatar body comprises identifying a mouth of the shoe that is to be fitted to the ankle of the avatar body, wherein the first dimension comprises a diagonal of the mouth of the shoe, identifying the at least one second dimension of the clothing item comprises identifying a height of the shoe, determining the at least one first relationship between the first dimension and the at least one second dimension comprises determining a ratio between the diagonal of the mouth of the shoe and the height of the shoe, changing the first dimension to correspond to the reference dimension of the reference region of the avatar body comprises changing the diagonal of the mouth of the shoe to match the diagonal of the ankle, and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension comprises changing the height of the shoe.
In some implementations, identifying the at least one second dimension of the clothing item further comprises identifying a toe length of the shoe, determining the at least one first relationship further comprises determining a ratio between the toe length of the shoe and the height of the shoe, and changing the at least one second dimension of the clothing item to scale the clothing item along the at least one second dimension comprises changing the toe length, wherein the ratio between the toe length of the shoe and the height of the shoe is maintained after completion of the changing of the toe length.
In some implementations, the operations further comprise performing a tapering of a toe of the shoe, wherein the tapering is based at least in part on an original toe length and an original width of the shoe prior to changing the first dimension and changing the at least one second dimension.
In some implementations, the at least one first relationship comprises a ratio between the first dimension and the at least one second dimension, the ratio being within a predetermined range of values.
According to yet another aspect, portions, features, and implementation details of the systems, methods, and non-transitory computer-readable media may be combined to form additional aspects, including some aspects which omit and/or modify some components or features (or portions of individual components or features), include additional components or features, and/or include other modifications, and all such modifications are within the scope of this disclosure.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative implementations described in the detailed description, drawings, and claims are not meant to be limiting. Other implementations may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. Aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are contemplated herein.
References in the specification to “some implementations,” “an implementation,” “an example implementation,” etc. indicate that the implementation described may include a particular feature, structure, or characteristic, but every implementation may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same implementation. Further, when a particular feature, structure, or characteristic is described in connection with an implementation, such feature, structure, or characteristic may be effected in connection with other implementations whether or not explicitly described.
The present disclosure describes techniques to automatically fit clothing items onto an avatar, such as fitting a clothing item over a body part of the avatar and/or over an underlying piece of clothing being worn by the avatar (e.g., so as to provide layered clothing). In this manner, precisely fitted clothing item(s) are tailored for the avatar, thereby giving the avatar a more stylized appearance.
The fitting (including tailoring) techniques may be applied to avatars that are used in a virtual experience. Such virtual experiences are sometimes described herein in the context of an electronic game. It is understood that such implementations described in the context of electronic games are for purposes of convenience in providing examples and illustrations.
The fitting techniques described herein can be used for other types of virtual experiences in a three-dimensional (3D) environment that may not necessarily involve an electronic game having one or more players represented by avatars. Examples of virtual experiences may include a virtual reality (VR) conference, a 3D session (e.g., an online lecture or other type of presentation involving 3D avatars), an augmented reality (AR) session, or other types of 3D environments in which one or more users are represented in the 3D environment by one or more 3D avatars.
With layered clothing, an automated cage-to-cage fitting technique may be used for 3D avatars. The technique allows any body geometry to be fitted with any clothing geometry, including enabling layers of clothing to be fitted over underlying layer(s) of clothing, thereby providing customization without the limits imposed by pre-defined geometries and without requiring complex computations to make a clothing item compatible with arbitrary avatar body shapes or other clothing items.
The cage-to-cage fitting may also be performed automatically by a gaming platform or gaming software (or other virtual experience platform/software that operates to provide a 3D environment), without requiring avatar creators (also referred to as avatar body creators, or body creators) or clothing item creators to perform complex computations. The terms “clothing” or “piece of clothing” or other analogous terminology used herein are understood to include graphical representations of clothing and accessories, and any other item that can be placed on an avatar in relation to specific parts of an avatar cage.
At runtime during a game or other virtual experience session, a player/user accesses a body library to select a particular avatar body and accesses a clothing library to select pieces of clothing to place on the selected body. A 3D environment platform that presents avatars implements the cage-to-cage fitting techniques to adjust (by suitable deformations, determined automatically) a piece of clothing to conform to the shape of the body, thereby automatically fitting the piece of clothing onto the body (and any intermediate layers, if worn by the avatar).
When the piece of clothing is fitted over the body and/or underlying piece of clothing, the techniques described herein may be performed to deform or otherwise fit the piece of clothing more precisely to the avatar, such as in terms of scale (e.g., proportionality), shape, etc. The user may further select an additional piece of clothing to fit over an underlying piece of clothing, with the additional piece of clothing being deformed to match the geometry of the underlying piece of clothing.
The implementations described herein are based on the concept of “cages” and “meshes.” A body “mesh” (or “render mesh”) is the actual visible geometry of an avatar, including graphical representations of body parts such as arms, legs, torso, head parts, etc., and may be of arbitrary shape, size, and geometric topology. Analogously, a clothing “mesh” (or “render mesh”) may be any arbitrary mesh that graphically represents a piece of clothing, such as a shirt, pants, hat, shoes, etc., or parts thereof.
By comparison, a “cage” represents an envelope of feature points around the avatar body that is simpler than the body mesh and has only a weak correspondence to the vertices of the body mesh. As is explained in further detail below, a cage may be used to represent not only a set of feature points on an avatar body, but also a set of feature points on a piece of clothing.
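By way of illustration only, a render mesh and an associated cage might be represented as follows (a hypothetical Python sketch, not a prescribed format), with the cage holding far fewer feature points than the mesh has vertices and only a loose correspondence between the two:

    from dataclasses import dataclass, field

    Point3 = tuple[float, float, float]

    @dataclass
    class RenderMesh:
        # The actual visible geometry: arbitrary shape, size, and topology.
        vertices: list[Point3]
        triangles: list[tuple[int, int, int]]

    @dataclass
    class Cage:
        # A simpler envelope of feature points around the body or clothing item.
        feature_points: list[Point3]
        # Weak correspondence: each feature point may influence several nearby
        # render-mesh vertices, rather than mapping 1:1 onto mesh vertices.
        influenced_vertices: dict[int, list[int]] = field(default_factory=dict)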
The system architecture 100 (also referred to as “system” herein) includes online virtual experience server 102, data store 120, client devices 110a, 110b, and 110n (generally referred to as “client device(s) 110” herein), and developer devices 130a and 130n (generally referred to as “developer device(s) 130” herein). Virtual experience server 102, data store 120, client devices 110, and developer devices 130 are coupled via network 122. In some implementations, client device(s) 110 and developer device(s) 130 may refer to the same or same type of device.
Online virtual experience server 102 can include, among other things, a virtual experience engine 104, one or more virtual experiences 106, and graphics engine 108. In some implementations, the graphics engine 108 may be a system, application, or module that permits the online virtual experience server 102 to provide graphics and animation capability. In some implementations, the graphics engine 108 may perform one or more of the operations described below in connection with the flowcharts.
A developer device 130 can include a virtual experience application 132, and input/output (I/O) interfaces 134 (e.g., input/output devices). The input/output devices can include one or more of a microphone, speakers, headphones, display device, mouse, keyboard, game controller, touchscreen, virtual reality consoles, etc.
System architecture 100 is provided for illustration. In different implementations, the system architecture 100 may include the same, fewer, more, or different elements configured in the same or different manner.
In some implementations, network 122 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi® network, or wireless LAN (WLAN)), a cellular network (e.g., a 5G network, a Long Term Evolution (LTE) network, etc.), routers, hubs, switches, server computers, or a combination thereof.
In some implementations, the data store 120 may be a non-transitory computer readable memory (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data. The data store 120 may also include multiple storage components (e.g., multiple drives or multiple databases) that may also span multiple computing devices (e.g., multiple server computers). In some implementations, data store 120 may include cloud-based storage.
In some implementations, the online virtual experience server 102 can include a server having one or more computing devices (e.g., a cloud computing system, a rackmount server, a server computer, cluster of physical servers, etc.). In some implementations, the online virtual experience server 102 may be an independent system, may include multiple servers, or be part of another system or server.
In some implementations, the online virtual experience server 102 may include one or more computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, and/or hardware components that may be used to perform operations on the online virtual experience server 102 and to provide a user with access to online virtual experience server 102. The online virtual experience server 102 may also include a website (e.g., a web page) or application back-end software that may be used to provide a user with access to content provided by online virtual experience server 102. For example, users may access online virtual experience server 102 using the virtual experience application 112 on client devices 110.
In some implementations, virtual experience session data are generated via online virtual experience server 102, virtual experience application 112, and/or virtual experience application 132, and are stored in data store 120. With permission from virtual experience participants, virtual experience session data may include associated metadata, e.g., virtual experience identifier(s); device data associated with the participant(s); demographic information of the participant(s); virtual experience session identifier(s); chat transcripts; session start time, session end time, and session duration for each participant; relative locations of participant avatar(s) within a virtual experience environment; purchase(s) within the virtual experience by one or more participant(s); accessories utilized by participants; etc.
In some implementations, online virtual experience server 102 may be a type of social network providing connections between users or a type of user-generated content system that allows users (e.g., end-users or consumers) to communicate with other users on the online virtual experience server 102, where the communication may include voice chat (e.g., synchronous and/or asynchronous voice communication), video chat (e.g., synchronous and/or asynchronous video communication), or text chat (e.g., 1:1 and/or N:N synchronous and/or asynchronous text-based communication). A record of some or all user communications may be stored in data store 120 or within virtual experiences 106. The data store 120 may be utilized to store chat transcripts (text, audio, images, etc.) exchanged between participants.
In some implementations, the chat transcripts are generated via virtual experience application 112 and/or virtual experience application 132 and are stored in data store 120. The chat transcripts may include the chat content and associated metadata, e.g., text content of chat with each message having a corresponding sender and recipient(s); message formatting (e.g., bold, italics, loud, etc.); message timestamps; relative locations of participant avatar(s) within a virtual experience environment; accessories utilized by virtual experience participants; etc. In some implementations, the chat transcripts may include multilingual content, and messages in different languages from different sessions of a virtual experience may be stored in data store 120.
In some implementations, chat transcripts may be stored in the form of conversations between participants based on the timestamps. In some implementations, the chat transcripts may be stored based on the originator of the message(s).
In some implementations of the disclosure, a “user” may be represented as a single individual. However, other implementations of the disclosure encompass a “user” (e.g., creating user) being an entity controlled by a set of users or an automated source. For example, a set of individual users federated as a community or group in a user-generated content system may be considered a “user.”
In some implementations, online virtual experience server 102 may be a virtual gaming server. For example, the gaming server may provide single-player or multiplayer games to a community of users that may access or interact with virtual experiences using client devices 110 via network 122. In some implementations, virtual experiences (including virtual realms or worlds, virtual games, or other computer-simulated environments) may be two-dimensional (2D) virtual experiences, three-dimensional (3D) virtual experiences (e.g., 3D user-generated virtual experiences), virtual reality (VR) experiences, or augmented reality (AR) experiences, for example. In some implementations, users may participate in interactions (such as gameplay) with other users. In some implementations, a virtual experience may be experienced in real-time with other users of the virtual experience.
In some implementations, virtual experience engagement may refer to the interaction of one or more participants using client devices (e.g., 110) within a virtual experience (e.g., 106) or the presentation of the interaction on a display or other output device (e.g., 114) of a client device 110. For example, virtual experience engagement may include interactions with one or more participants within a virtual experience or the presentation of the interactions on a display of a client device.
In some implementations, a virtual experience 106 can include an electronic file that can be executed or loaded using software, firmware or hardware configured to present the virtual experience content (e.g., digital media item) to an entity. In some implementations, a virtual experience application 112 may be executed and a virtual experience 106 rendered in connection with a virtual experience engine 104. In some implementations, a virtual experience 106 may have a common set of rules or common goal, and the environment of a virtual experience 106 shares the common set of rules or common goal. In some implementations, different virtual experiences may have different rules or goals from one another.
In some implementations, virtual experiences may have one or more environments (also referred to as “virtual experience environments” or “virtual environments” herein) where multiple environments may be linked. An example of an environment may be a three-dimensional (3D) environment. The one or more environments of a virtual experience 106 may be collectively referred to as a “world” or “virtual experience world” or “gaming world” or “virtual world” or “universe” herein. An example of a world may be a 3D world of a virtual experience 106. For example, a user may build a virtual environment that is linked to another virtual environment created by another user. A character of the virtual experience may cross a virtual border to enter the adjacent virtual environment.
It may be noted that 3D environments or 3D worlds use graphics that use a three-dimensional representation of geometric data representative of virtual experience content (or at least present virtual experience content to appear as 3D content whether or not 3D representation of geometric data is used). 2D environments or 2D worlds use graphics that use two-dimensional representation of geometric data representative of virtual experience content.
In some implementations, the online virtual experience server 102 can host one or more virtual experiences 106 and can permit users to interact with the virtual experiences 106 using a virtual experience application 112 of client devices 110. Users of the online virtual experience server 102 may play, create, interact with, or build virtual experiences 106, communicate with other users, and/or create and build objects (e.g., also referred to as “item(s)” or “virtual experience objects” or “virtual experience item(s)” herein) of virtual experiences 106.
For example, in generating user-generated virtual items, users may create characters, decoration for the characters, one or more virtual environments for an interactive virtual experience, or build structures used in a virtual experience 106, among others. In some implementations, users may buy, sell, or trade virtual experience objects, such as in-platform currency (e.g., virtual currency), with other users of the online virtual experience server 102. In some implementations, online virtual experience server 102 may transmit virtual experience content to virtual experience applications (e.g., 112). In some implementations, virtual experience content (also referred to as “content” herein) may refer to any data or software instructions (e.g., virtual experience objects, virtual experience, user information, video, images, commands, media item, etc.) associated with online virtual experience server 102 or virtual experience applications. In some implementations, virtual experience objects (e.g., also referred to as “item(s)” or “objects” or “virtual objects” or “virtual experience item(s)” herein) may refer to objects that are used, created, shared or otherwise depicted in virtual experiences 106 of the online virtual experience server 102 or virtual experience applications 112 of the client devices 110. For example, virtual experience objects may include a part, model, character, accessories, tools, weapons, clothing, buildings, vehicles, currency, flora, fauna, components of the aforementioned (e.g., windows of a building), and so forth.
It may be noted that the online virtual experience server 102 hosting virtual experiences 106 is provided for purposes of illustration. In some implementations, online virtual experience server 102 may host one or more media items that can include communication messages from one user to one or more other users. With user permission and express user consent, the online virtual experience server 102 may analyze chat transcript data to improve the virtual experience platform. Media items can include, but are not limited to, digital video, digital movies, digital photos, digital music, audio content, melodies, website content, social media updates, electronic books, electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, really simple syndication (RSS) feeds, electronic comic books, software applications, etc. In some implementations, a media item may be an electronic file that can be executed or loaded using software, firmware or hardware configured to present the digital media item to an entity.
In some implementations, a virtual experience 106 may be associated with a particular user or a particular group of users (e.g., a private virtual experience), or made widely available to users with access to the online virtual experience server 102 (e.g., a public virtual experience). In some implementations, where online virtual experience server 102 associates one or more virtual experiences 106 with a specific user or group of users, online virtual experience server 102 may associate the specific user(s) with a virtual experience 106 using user account information (e.g., a user account identifier such as username and password).
In some implementations, online virtual experience server 102 or client devices 110 may include a virtual experience engine 104 or virtual experience application 112. In some implementations, virtual experience engine 104 may be used for the development or execution of virtual experiences 106. For example, virtual experience engine 104 may include a rendering engine (“renderer”) for 2D, 3D, VR, or AR graphics, a physics engine, a collision detection engine (and collision response), sound engine, scripting functionality, animation engine, artificial intelligence engine, networking functionality, streaming functionality, memory management functionality, threading functionality, scene graph functionality, or video support for cinematics, among other features. The components of the virtual experience engine 104 may generate commands that help compute and render the virtual experience (e.g., rendering commands, collision commands, physics commands, etc.). In some implementations, virtual experience applications 112 of client devices 110 may work independently, in collaboration with virtual experience engine 104 of online virtual experience server 102, or a combination of both.
In some implementations, both the online virtual experience server 102 and client devices 110 may execute a virtual experience engine/application (104 and 112, respectively). The online virtual experience server 102 using virtual experience engine 104 may perform some or all of the virtual experience engine functions (e.g., generate physics commands, rendering commands, etc.), or offload some or all of the virtual experience engine functions to virtual experience engine 104 of client device 110. In some implementations, each virtual experience 106 may have a different ratio between the virtual experience engine functions that are performed on the online virtual experience server 102 and the virtual experience engine functions that are performed on the client devices 110. For example, the virtual experience engine 104 of the online virtual experience server 102 may be used to generate physics commands in cases where there is a collision between at least two virtual experience objects, while the additional virtual experience engine functionality (e.g., generate rendering commands) may be offloaded to the client device 110. In some implementations, the ratio of virtual experience engine functions performed on the online virtual experience server 102 and client device 110 may be changed (e.g., dynamically) based on virtual experience engagement conditions. For example, if the number of users engaging in a particular virtual experience 106 exceeds a threshold number, the online virtual experience server 102 may perform one or more virtual experience engine functions that were previously performed by the client devices 110.
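By way of illustration only, the threshold-based reallocation described above might be sketched as follows (hypothetical names and function set; an actual platform may weigh many more conditions):

    def assign_engine_functions(num_users: int, threshold: int) -> dict[str, str]:
        # When engagement exceeds the threshold, the server takes over engine
        # functions (e.g., rendering-command generation) that clients
        # previously performed.
        if num_users > threshold:
            return {"physics": "server", "rendering_commands": "server"}
        return {"physics": "server", "rendering_commands": "client"}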
For example, users may be playing a virtual experience 106 on client devices 110, and may send control instructions (e.g., user inputs, such as right, left, up, down, user selection, or character position and velocity information, etc.) to the online virtual experience server 102. Subsequent to receiving control instructions from the client devices 110, the online virtual experience server 102 may send experience instructions (e.g., position and velocity information of the characters participating in the group experience or commands, such as rendering commands, collision commands, etc.) to the client devices 110 based on the control instructions. For instance, the online virtual experience server 102 may perform one or more logical operations (e.g., using virtual experience engine 104) on the control instructions to generate experience instruction(s) for the client devices 110. In other instances, online virtual experience server 102 may pass one or more of the control instructions from one client device 110 to other client devices (e.g., from client device 110a to client device 110b) participating in the virtual experience 106. The client devices 110 may use the experience instructions and render the virtual experience for presentation on the displays of client devices 110.
In some implementations, the control instructions may refer to instructions that are indicative of actions of a user's character within the virtual experience. For example, control instructions may include user input to control action within the experience, such as right, left, up, down, user selection, gyroscope position and orientation data, force sensor data, etc. The control instructions may include character position and velocity information. In some implementations, the control instructions are sent directly to the online virtual experience server 102. In other implementations, the control instructions may be sent from a client device 110 to another client device (e.g., from client device 110b to client device 110n), where the other client device generates experience instructions using the local virtual experience engine 104. The control instructions may include instructions to play a voice communication message or other sounds from another user on an audio device (e.g., speakers, headphones, etc.), for example voice communications or other sounds generated using the audio spatialization techniques as described herein.
In some implementations, experience instructions may refer to instructions that enable a client device 110 to render a virtual experience, such as a multiparticipant virtual experience. The experience instructions may include one or more of user input (e.g., control instructions), character position and velocity information, or commands (e.g., physics commands, rendering commands, collision commands, etc.).
In some implementations, characters (or virtual experience objects generally) are constructed from components, one or more of which may be selected by the user, that automatically join together to aid the user in editing.
In some implementations, a character is implemented as a 3D model and includes a surface representation used to draw the character (also known as a skin or mesh) and a hierarchical set of interconnected bones (also known as a skeleton or rig). The rig may be utilized to animate the character and to simulate motion and action by the character. The 3D model may be represented as a data structure, and one or more parameters of the data structure may be modified to change various properties of the character, e.g., dimensions (height, width, girth, etc.); body type; movement style; number/type of body parts; proportion (e.g., shoulder and hip ratio); head size; etc.
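By way of illustration only, such a data structure might be sketched as follows (hypothetical Python; the actual representation is implementation-specific):

    from dataclasses import dataclass

    @dataclass
    class Bone:
        name: str
        parent: "Bone | None" = None  # hierarchical, interconnected bones (rig)

    @dataclass
    class Character:
        skin_vertices: list[tuple[float, float, float]]  # surface representation (skin/mesh)
        skeleton: list[Bone]                             # used to animate the character
        height: float = 1.0
        width: float = 1.0
        shoulder_hip_ratio: float = 1.0  # proportion parameter

    # Modifying parameters of the data structure changes the character's
    # properties, e.g.:
    #   character.height = 1.2
    #   character.shoulder_hip_ratio = 0.9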
One or more characters (also referred to as an “avatar” or “model” herein) may be associated with a user where the user may control the character to facilitate a user's interaction with the virtual experience 106.
In some implementations, a character may include components such as body parts (e.g., hair, arms, legs, etc.) and accessories (e.g., t-shirt, glasses, decorative images, tools, etc.). In some implementations, body parts of characters that are customizable include head type, body part types (arms, legs, torso, and hands), face types, hair types, and skin types, among others. In some implementations, the accessories that are customizable include clothing (e.g., shirts, pants, hats, shoes, glasses, etc.), weapons, or other tools.
In some implementations, for some asset types, e.g., shirts, pants, etc., the online virtual experience platform may provide users access to simplified 3D virtual object models that are represented by a mesh of a low polygon count, e.g., between about 20 and about 30 polygons.
In some implementations, the user may also control the scale (e.g., height, width, or depth) of a character or the scale of components of a character. In some implementations, the user may control the proportions of a character (e.g., blocky, anatomical, etc.). It may be noted that in some implementations, a character may not include a character virtual experience object (e.g., body parts, etc.) but the user may control the character (without the character virtual experience object) to facilitate the user's interaction with the virtual experience (e.g., a puzzle game where there is no rendered character game object, but the user still controls a character to control in-game action).
In some implementations, a component, such as a body part, may be a primitive geometrical shape such as a block, a cylinder, a sphere, etc., or some other primitive shape such as a wedge, a torus, a tube, a channel, etc. In some implementations, a creator module may publish a user's character for view or use by other users of the online virtual experience server 102. In some implementations, creating, modifying, or customizing characters, other virtual experience objects, virtual experiences 106, or virtual experience environments may be performed by a user using an I/O interface (e.g., developer interface) and with or without scripting (or with or without an application programming interface (API)). It may be noted that for purposes of illustration, characters are described as having a humanoid form. It may further be noted that characters may have any form such as a vehicle, animal, inanimate object, or other creative form.
In some implementations, the online virtual experience server 102 may store characters created by users in the data store 120. In some implementations, the online virtual experience server 102 maintains a character catalog and virtual experience catalog that may be presented to users. In some implementations, the virtual experience catalog includes images of virtual experiences stored on the online virtual experience server 102. In addition, a user may select a character (e.g., a character created by the user or other user) from the character catalog to participate in the chosen virtual experience. The character catalog includes images of characters stored on the online virtual experience server 102. In some implementations, one or more of the characters in the character catalog may have been created or customized by the user. In some implementations, the chosen character may have character settings defining one or more of the components of the character.
In some implementations, a user's character can include a configuration of components, where the configuration and appearance of components and more generally the appearance of the character may be defined by character settings. In some implementations, the character settings of a user's character may at least in part be chosen by the user. In other implementations, a user may choose a character with default character settings or character settings chosen by other users. For example, a user may choose a default character from a character catalog that has predefined character settings, and the user may further customize the default character by changing some of the character settings (e.g., adding a shirt with a customized logo). The character settings may be associated with a particular character by the online virtual experience server 102.
In some implementations, the client device(s) 110 may each include computing devices such as personal computers (PCs), mobile devices (e.g., laptops, mobile phones, smart phones, tablet computers, or netbook computers), network-connected televisions, gaming consoles, etc. In some implementations, a client device 110 may also be referred to as a “user device.” In some implementations, one or more client devices 110 may connect to the online virtual experience server 102 at any given moment. It may be noted that the number of client devices 110 is provided as illustration. In some implementations, any number of client devices 110 may be used.
In some implementations, each client device 110 may include an instance of the virtual experience application 112, respectively. In one implementation, the virtual experience application 112 may permit users to use and interact with online virtual experience server 102, such as control a virtual character in a virtual experience hosted by online virtual experience server 102, or view or upload content, such as virtual experiences 106, images, video items, web pages, documents, and so forth. In one example, the virtual experience application may be a web application (e.g., an application that operates in conjunction with a web browser) that can access, retrieve, present, or navigate content (e.g., virtual character in a virtual environment, etc.) served by a web server. In another example, the virtual experience application may be a native application (e.g., a mobile application, app, virtual experience program, or a gaming program) that is installed and executes local to client device 110 and allows users to interact with online virtual experience server 102. The virtual experience application may render, display, or present the content (e.g., a web page, a media viewer) to a user. In an implementation, the virtual experience application may also include an embedded media player (e.g., a Flash® or HTML5 player) that is embedded in a web page.
According to aspects of the disclosure, the virtual experience application may be an online virtual experience server application for users to build, create, edit, upload content to the online virtual experience server 102 as well as interact with online virtual experience server 102 (e.g., engage in virtual experiences 106 hosted by online virtual experience server 102). As such, the virtual experience application may be provided to the client device(s) 110 by the online virtual experience server 102. In another example, the virtual experience application may be an application that is downloaded from a server.
In some implementations, each developer device 130 may include an instance of the virtual experience application 132, respectively. In one implementation, the virtual experience application 132 may permit a developer user(s) to use and interact with online virtual experience server 102, such as control a virtual character in a virtual experience hosted by online virtual experience server 102, or view or upload content, such as virtual experiences 106, images, video items, web pages, documents, and so forth. In one example, the virtual experience application may be a web application (e.g., an application that operates in conjunction with a web browser) that can access, retrieve, present, or navigate content (e.g., virtual character in a virtual environment, etc.) served by a web server. In another example, the virtual experience application may be a native application (e.g., a mobile application, app, virtual experience program, or a gaming program) that is installed and executes local to developer device 130 and allows users to interact with online virtual experience server 102. The virtual experience application may render, display, or present the content (e.g., a web page, a media viewer) to a user. In an implementation, the virtual experience application may also include an embedded media player (e.g., a Flash® or HTML5 player) that is embedded in a web page.
According to aspects of the disclosure, the virtual experience application 132 may be an online virtual experience server application for users to build, create, edit, upload content to the online virtual experience server 102 as well as interact with online virtual experience server 102 (e.g., provide and/or engage in virtual experiences 106 hosted by online virtual experience server 102). As such, the virtual experience application may be provided to the developer device(s) 130 by the online virtual experience server 102. In another example, the virtual experience application 132 may be an application that is downloaded from a server. Virtual experience application 132 may be configured to interact with online virtual experience server 102 and obtain access to user credentials, user currency, etc. for one or more virtual experiences 106 developed, hosted, or provided by a virtual experience developer.
In some implementations, a user may log in to online virtual experience server 102 via the virtual experience application. The user may access a user account by providing user account information (e.g., username and password) where the user account is associated with one or more characters available to participate in one or more virtual experiences 106 of online virtual experience server 102. In some implementations, with appropriate credentials, a virtual experience developer may obtain access to virtual experience objects, such as in-platform currency (e.g., virtual currency), avatars, special powers, or accessories, that are owned by or associated with other users.
In general, functions described in one implementation as being performed by the online virtual experience server 102 can also be performed by the client device(s) 110, or a server, in other implementations if appropriate. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together. The online virtual experience server 102 can also be accessed as a service provided to other systems or devices through suitable application programming interfaces (APIs), and thus is not limited to use in websites.
The body cage 200 comprises a plurality of feature points 202 that define or otherwise identify or correspond to the shape of the mannequin. In some implementations, the feature points 202 are formed by the vertices of segments/sides 204 of multiple polygons (or other geometric shapes) on the mannequin.
The body cage 200 corresponds to one example avatar body. Cages may be provided for any arbitrary avatar body shape or clothing shape. For example, a body cage 300 may correspond to an avatar body having a different shape than the body represented by the body cage 200.
In some implementations, for bandwidth and performance/efficiency purposes or other reason(s), the number of feature points of a cage may be reduced to a smaller number, such as 475 feature points (or some other number of feature points). Furthermore, in some implementations, the feature points (vertices) in a body cage may be arranged into a plurality of groups (e.g., 15 groups) that each represent a portion of the body shape.
More particularly, each of the 15 groups may correspond to a respective body part, such as an upper arm, a lower arm, or the torso, and each group provides the subset of feature points against which a corresponding part of a clothing item is deformed. Moreover, this separation into multiple groups enables fitting for avatar bodies that lack one or more body parts. For example, an avatar body may lack a left arm, such that its body cage has no feature points for the left upper arm and left lower arm.
When a jacket is subsequently selected for fitting over that 3D avatar, the right lower arm, right upper arm, and torso of the jacket may be deformed to fit over the corresponding right lower arm, right upper arm, and torso of the 3D avatar (body mannequin), while the left lower arm and the left upper arm of the jacket are not deformed (e.g., they remain rigid in their original form from their parent space) since there is no left arm cage in the body mannequin to deform against.
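By way of illustration only, a minimal sketch of this group-wise behavior is shown below (hypothetical Python names; the per-group deformation is reduced to a simple centroid translation for brevity, whereas an actual implementation would deform the cage geometry point by point):

    Point3 = tuple[float, float, float]

    def _translate_to_centroid(points: list[Point3], target: list[Point3]) -> list[Point3]:
        # Placeholder deformation: move the group so its centroid matches the
        # target group's centroid.
        cx, cy, cz = (sum(c) / len(points) for c in zip(*points))
        tx, ty, tz = (sum(c) / len(target) for c in zip(*target))
        dx, dy, dz = tx - cx, ty - cy, tz - cz
        return [(x + dx, y + dy, z + dz) for (x, y, z) in points]

    def deform_clothing_groups(
        clothing: dict[str, list[Point3]],
        body: dict[str, list[Point3]],
    ) -> dict[str, list[Point3]]:
        result = {}
        for name, points in clothing.items():
            if name in body:
                result[name] = _translate_to_centroid(points, body[name])
            else:
                # No matching body-cage group (e.g., a missing left arm):
                # the clothing group remains rigid in its original form.
                result[name] = points
        return result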
The clothing layer 500 includes an inner cage (not separately illustrated) having feature points that are mapped to the corresponding feature points of the body cage 400.
In some implementations, this mapping includes mapping the feature points of the inner cage of the clothing layer 500 directly onto the coordinate locations of the corresponding feature points of the arms and torso of the body cage 400. Such mapping may involve a 1:1 correspondence when both cages have the same number of feature points, and the mapping may be n:1 or 1:n (wherein n is an integer greater than 1), in which case multiple feature points in one cage may be mapped to the same feature point of the other cage (or some feature points may be unmapped).
The clothing layer 500 further includes an outer cage having feature points that are spaced apart from and linked to the corresponding feature points of its inner cage of the clothing layer 500. The feature points of the outer cage of the clothing layer 500 define or are otherwise located along the external surface contours/geometry of the jacket, so as to define features such as a hood 504, cuffs 506, straight-cut torso 508, etc. of the jacket.
According to various implementations, the spatial distances (e.g., a spatial distance between a feature point of the inner cage of the clothing layer 500 and a corresponding feature point of the outer cage of the clothing layer 500) are kept constant during the course of fitting the clothing layer 500 over an outer cage of an existing layer (or avatar body). In this manner, the feature points of the inner cage of the clothing layer 500 may be mapped to the feature points of the body cage 400, so as to “fit” the inside of the jacket over the avatar's torso and arms.
Then, with the distances between the feature points of the inner cage of the clothing layer 500 and the corresponding feature points of the outer cage of the clothing layer 500 kept constant, the outer contours of the jacket may also be deformed to follow the shape of the avatar body. This results in at least partial preservation of the visual appearance (graphical representation) of the hood, cuffs, straight-cut torso, and other surface features of the jacket, while at the same time matching the shape of the avatar body.
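Stated differently, once the inner cage has been fitted, the outer cage may be reconstructed by re-applying the preserved inner-to-outer offsets. A brief sketch (hypothetical names; NumPy arrays of shape (N, 3) holding N paired feature points) is:

    import numpy as np

    def fit_outer_cage(inner_orig: np.ndarray,
                       outer_orig: np.ndarray,
                       inner_fitted: np.ndarray) -> np.ndarray:
        # The inner-to-outer offsets encode surface features such as the hood
        # and cuffs; keeping them constant preserves those features while the
        # garment follows the body shape.
        offsets = outer_orig - inner_orig
        return inner_fitted + offsets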
In some implementations, additional clothing layers may be placed over other clothing layer(s) (e.g., in response to user selection). Placing a clothing layer results in a new outermost cage, such as an outer cage 600 formed from the combination of the avatar body and the fitted jacket.
For example, the exposed outer surfaces 602 of the jacket (formed by the body, hood, and sleeves of the jacket) provide a set of feature points and the exposed legs, hands, head, and part of the chest of the body that are not covered by the jacket provide another set of feature points, and these two sets of feature points (combined) provide the feature points of the outer cage 600.
The feature points of the outer cage 600 in
For instance, additional feature points may be computed for the outer cage 600 encompassed by the jacket area (as compared to the outer cage of the clothing layer 500 of
In operation, if the user provides input to fit an additional clothing layer (such as an overcoat or other article of clothing) over the jacket (clothing layer 500) and/or over other parts of the avatar body, then the feature points of the inner cage of such additional clothing layer are mapped to the corresponding feature points of the outer cage 600. Deformation may thus be performed in a manner similar to that described with respect to
Thus, in accordance with the examples of
For example, the feature points may be vertices with both position and texture space coordinates. Texture space coordinates are usually each expressed in a range [0, 1] for the U and V coordinates. The texture space may be thought of as an “unwrapped” normalized coordinate space for the vertices. By performing the correspondence of the two sets of vertices in the UV space, rather than using their positions, vertex-to-vertex correspondence may be performed in the normalized space, thereby removing the hard requirement of an exact vertex-to-vertex index mapping.
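For illustration only, the following Python sketch establishes correspondence in UV space with a brute-force nearest-neighbor search; a spatial index (e.g., a KD-tree) would be more typical in practice, and the function name and sample coordinates are assumptions.

```python
# A minimal sketch of establishing vertex correspondence in UV space
# rather than by vertex index, assuming each vertex carries normalized
# (u, v) texture coordinates in [0, 1].

def uv_correspondence(uvs_a, uvs_b):
    """For each vertex in set A, find the closest vertex in set B by
    UV distance, removing the need for matching vertex indices."""
    pairs = {}
    for ia, (ua, va) in enumerate(uvs_a):
        ib = min(range(len(uvs_b)),
                 key=lambda j: (uvs_b[j][0] - ua) ** 2 +
                               (uvs_b[j][1] - va) ** 2)
        pairs[ia] = ib
    return pairs

print(uv_correspondence([(0.1, 0.1), (0.9, 0.9)],
                        [(0.88, 0.91), (0.12, 0.08), (0.5, 0.5)]))
# -> {0: 1, 1: 0}: matched by UV proximity, not by index order
```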
In the techniques for layering clothing, each avatar body and clothing item is thus associated with an “inner cage” and an “outer cage.” In the case of the avatar body, the inner cage represents a default “mannequin” (and different mannequins may be provided for different avatar body shapes) and the “outer cage” of the avatar body represents the envelope around the shape of the avatar body. For the clothing items, the “inner cage” represents the inner envelope that is used to define how the clothing item wraps around an underlying body (or around a body with prior clothing layers already fitted on it), and the “outer cage” represents the way that the next layer of clothing is wrapped around this particular clothing item when worn on the avatar body.
Each shoe 700 and 702 may have a different style, brand, shape, color, size, design, or other appearance-related characteristic. For example, the shape, color, design, etc. of a particular shoe 700 or shoe 702 may correspond to the branding and product line of various shoe manufacturers. As other examples, the shoe 700 or shoe 702 may be a customized shoe generated by a graphical artist or other clothing creator who creates graphical objects that are placed in a library for consumption by virtual experience users, and such a shoe may not necessarily have branding that is specifically tied to an actual/physical shoe provided by a shoe manufacturer/vendor.
As illustrated in the examples of
One reason for the misshapen/distorted appearance in
With respect to the shoe examples of
For example, the appearance of the shoe 700 and the shoe 702 may be iconic to the shoe manufacturer/vendor, and any unintentional distortion may adversely affect the branding (e.g., product recognition, marketing, etc.). As another example, a graphics artist (who places template shoes in a library for consumption by users, and who spends a great amount of effort in the design and creation of the shoes) may have a negative reaction, when the graphics artist sees his/her work badly distorted when fitted by a user onto an avatar.
A second issue is that the final shape and proportions of the shoe are controlled primarily by the avatar's outer cage, as explained above. A consequence of this is that the creator of the shoe (e.g., the above graphics artist) may have no control over the shape of the wrapped shoe. That is, the graphics artist has no control over the outer cage, which may vary from one situation to another depending on the particular shape/size of the avatar being used by the user, the underlying clothing item(s) being fitted onto the avatar, etc.
The graphics artist may create the inner cage for a clothing item but has no control over how such inner cage may be mapped to an outer cage (of an underlying avatar body part or clothing item) during the wrapping/deformation process so as to fit the clothing item onto the underlying avatar body part or item of clothing.
One possible technique to address the above issues is a rigid approach to fitting a clothing item onto an avatar.
However, while a naïve “as rigid as possible” approach may preserve the original shape or other visual characteristics, the resulting proportions or other visual appearance may be unappealing. In the example of
Nonetheless, the result is visually unappealing, because the shoes 702 have a height that goes up to the knees of the avatar 900 and the shoes 702 appear like clown shoes that are too large as compared to the size and shape of the avatar 900 and its feet, even though the shoes 702 do properly fit the ankles of the avatar 900.
In the example of
To address the above and/or other issues, the implementations of the fitting technique described herein divide the shoe into multiple regions: the toe (e.g., the toe and bridge) region, and the heel (e.g., the arch, heel, and mouth) region. The mouth of the shoe is constrained to fit the ankle of the avatar, which correspondingly constrains the length and width of the heel region, but the shoe may scale more freely in height, and the toe of the shoe may scale more freely in length and taper as well. These various scales may be computed to preserve certain aspect ratios from the original shoe as closely as possible.
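As a non-limiting sketch of this region split, the following Python snippet partitions shoe vertices into toe and heel regions with a plane across the shoe's length axis; the coordinate convention (x running from heel to toe) and the split fraction are illustrative assumptions.

```python
# A minimal sketch of splitting a shoe mesh into toe and heel regions
# along its length axis, assuming vertices as (x, y, z) tuples.

def split_shoe_regions(vertices, split_fraction=0.5):
    xs = [v[0] for v in vertices]
    split_x = min(xs) + split_fraction * (max(xs) - min(xs))
    toe = [v for v in vertices if v[0] >= split_x]   # toe + bridge
    heel = [v for v in vertices if v[0] < split_x]   # arch, heel, mouth
    return toe, heel

toe, heel = split_shoe_regions(
    [(0.0, 0, 0), (0.2, 0.1, 0), (0.7, 0.05, 0), (1.0, 0.0, 0)])
print(len(toe), len(heel))  # -> 2 2
```

Once split, the heel region can be constrained by the ankle fit while the toe region scales and tapers more freely, as described above.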
Referring first to
A top view 1100b in
Next in
More particularly, the operation previously illustrated in
For additional context, some avatars have extremely flat feet (e.g., appearing paper thin or having “pancake feet”). Performing a scaling and/or deformation of a shoe, so as to conform the height of the shoe solely to the height/thickness of the avatar's feet (so as to match the ankle height), while being fitted to the ankle size at the mouth of the shoe, may result in a highly distorted/misshapen flat shoe.
Therefore, in the implementation illustrated in
As an illustration, the original height of the shoe 702 may be 1 cm (as a non-limiting example) and the original mouth diagonal of the shoe may be 3 cm (as a non-limiting example), prior to the scaling performed in
In some implementations, the user may additionally adjust the height of the shoe 702 (e.g., upwards or downwards) in a manner that deviates from the base ratio of 1:3. For instance, some avatars may have relatively wider or narrower ankles, relatively flatter or thicker feet, etc. The additional adjustment by the user may be used to more closely conform the height of the shoe to the foot, while still being constrained to some extent by the ratio. In such implementations, minimum and maximum limits may be configured so as to avoid extreme changes in height that may begin to unintentionally distort the visual appearance of the shoe 702.
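By way of non-limiting illustration, the following Python sketch combines the ratio-preserving height scaling with a clamped user adjustment; the clamp limits (0.8 to 1.25) and the user factor are illustrative assumptions, while the 1 cm : 3 cm figures follow the example above.

```python
# A minimal sketch of the height-scaling step with a clamped user
# adjustment. Clamp limits are illustrative assumptions.

def scale_height(orig_height, orig_diag, new_diag,
                 user_factor=1.0, min_factor=0.8, max_factor=1.25):
    """Scale shoe height to preserve the height:mouth-diagonal ratio
    (e.g., 1:3), then apply a clamped user adjustment."""
    base_height = orig_height * (new_diag / orig_diag)  # keep the ratio
    factor = max(min_factor, min(max_factor, user_factor))
    return base_height * factor

# Mouth diagonal shrinks from 3 cm to 1.5 cm to fit a slim ankle, so
# height scales from 1 cm to 0.5 cm; the user then thickens it by 10%.
print(scale_height(1.0, 3.0, 1.5, user_factor=1.1))  # -> 0.55
```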
Next in
In more detail, the shoe 702 in previous
For instance, if the proportion of the toe length versus shoe height of the original template shoe 702 (prior to any scaling or deformation) is 2 cm (as a non-limiting example) to 1 cm (as a non-limiting example) (a ratio of 2:1), then the toe length 1302 in
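For further non-limiting illustration, the following Python sketch applies the 2:1 toe-length-to-height proportion from the example above; the function name is an assumption.

```python
# A minimal sketch of toe-length scaling that follows a height change
# so the original toe-length:height ratio is preserved.

def scale_toe_length(orig_toe_len, orig_height, new_height):
    ratio = orig_toe_len / orig_height   # 2:1 for the template shoe
    return ratio * new_height            # keep the proportion

print(scale_toe_length(2.0, 1.0, 0.5))  # -> 1.0 (the 1 cm in the text)
```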
According to various implementations, one or more of the ratios described herein may comprise a ratio within a range of ratios. For instance, there may be a minimum ratio and maximum ratio, and scaling is permitted so that the resulting ratio falls within the maximum and minimum. In other implementations, the minimum and maximum ratios may be permitted deviations or tolerances from a base ratio. Furthermore, a user may further adjust the scaling within the permitted range of ratios (or even outside of the permitted ratios), depending on the resulting visual effect preferred by the user.
In some implementations, the relationship between dimensions/sizing may be based on some other methodology that may not necessarily involve the ratios such as described herein by example. For instance, if the diagonal 1108 has been resized from 3 cm to 2 cm (e.g., a reduction of 33%), then such a change may be used as a constraint specifying that the height of the shoe may be reduced to a maximum of 33%, plus some amount of permitted deviation. As another example, a more elaborate mathematical formula may be used to relate the height to the diagonal, the height to the toe length, etc., with or without further adjustment factors or permitted deviations.
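A minimal Python sketch of this percentage-based alternative follows, using the 3 cm to 2 cm example above; the tolerance value and function name are illustrative assumptions.

```python
# A minimal sketch of the percentage-based constraint: the relative
# change applied to the mouth diagonal bounds the relative change
# allowed for the height, plus a permitted deviation.

def constrain_height_reduction(orig_height, orig_diag, new_diag,
                               requested_height, tolerance=0.05):
    """Allow the height to shrink by at most the diagonal's relative
    reduction plus a permitted deviation."""
    diag_reduction = 1.0 - (new_diag / orig_diag)   # e.g., ~0.33
    max_reduction = diag_reduction + tolerance      # e.g., ~0.38
    min_height = orig_height * (1.0 - max_reduction)
    return max(requested_height, min_height)

# Diagonal resized from 3 cm to 2 cm (a ~33% reduction): a request to
# halve the height is clamped to the permitted ~38% reduction.
print(round(constrain_height_reduction(1.0, 3.0, 2.0, 0.5), 3))  # -> 0.617
```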
A top view 1300b in
Nevertheless, the shoe 702 may have a blunt “squashed toe” appearance after the toe length scaling is performed, such as illustrated in the top view 1300b. One reason for this appearance is that in the top view 1100b of
Referring to
In example layered clothing techniques previously described above, an outer cage is generated over the avatar's foot. With respect to
Accordingly, in the implementations of the fitting method disclosed herein, measurements of the avatar (e.g., ankle diagonal, toe/foot length, foot height, etc.) and of the corresponding regions (e.g., toe region, heel region, diagonal of the mouth, etc.) of the template shoe are first computed or otherwise obtained, such as illustrated in
First, the inner cage of the template shoe may be deformed using the measurements in this vector field, and then the deformed inner cage replaces/deletes any outer cage that the avatar's foot may have. This inner cage deformation operation is sufficient to deform the shoe to approximately the correct fit for the foot.
The reshaped cage of the shoe may also be used to deform other layers above and below the shoe so that such layers fit the new shoe shape. This technique may be performed, for example, if there is a preference for open toes, to reshape the foot, to add any layers that might go over the shoe (such as extra-long pants), and so forth.
To correct any RBF interpolation/extrapolation errors, the vector field may be used again in some implementations to deform (reshape) the original shoe geometry more precisely, which then replaces the wrap-deformed shoe geometry that may have errors. For instance,
Accordingly, appropriate corrections using the measurements in the vector field may be applied everywhere on the shoe to reshape the shoe (such as explained with reference to
The vector field of some implementations may be a piecewise vector field. For example, the pieces of the vector field may include a scaling factor for the toes, a scaling factor for the ankle which may be different from the scaling factor for the toes, a transition between these scaling factors, the corresponding different deformation fields for these scaling factors, regions, etc.
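For illustration only, the following Python sketch shows one possible piecewise scaling field along the shoe's length axis: one factor near the toes, a different factor near the ankle, and a linear transition between them. All names, coordinates, and factors are illustrative assumptions, and the per-vertex scale about the origin is a simplification of a full deformation field.

```python
# A minimal sketch of a piecewise scaling field along the shoe's
# length axis, with a smooth transition between regions.

def piecewise_scale(x, toe_x, ankle_x, toe_factor, ankle_factor):
    """Return the local scale factor at position x, blending linearly
    across the transition zone between the ankle and toe regions."""
    if x >= toe_x:
        return toe_factor
    if x <= ankle_x:
        return ankle_factor
    t = (x - ankle_x) / (toe_x - ankle_x)   # 0 at ankle, 1 at toe
    return ankle_factor + t * (toe_factor - ankle_factor)

def deform(vertices, **kw):
    """Apply the per-region scale to each vertex about the origin."""
    return [tuple(c * piecewise_scale(v[0], **kw) for c in v)
            for v in vertices]

print(deform([(1.0, 0.2, 0.1), (0.0, 0.3, 0.1), (0.5, 0.25, 0.1)],
             toe_x=0.8, ankle_x=0.2, toe_factor=0.7, ankle_factor=0.9))
```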
The example method 2000 may include one or more operations illustrated by one or more blocks, such as blocks 2002 to 2014. The various blocks of the method 2000 and/or of any other process(es) described herein may be combined into fewer blocks, divided into additional blocks, supplemented with further blocks, and/or eliminated based upon the implementation.
The method 2000 of
At block 2002, a reference region of an avatar body is identified. The reference region may be used as one constraint in scaling a clothing item. For a shoe, an example of a reference region of the avatar body is the ankle of the avatar body. Block 2002 may be followed by block 2004.
At block 2004, a dimension of the reference region, such as the diagonal of the ankle for a shoe, is determined. For other types of clothing/accessories (e.g., hats, gloves, watches, shirts, pants, etc.), other reference dimensions may be determined, such as neck circumference, finger length and circumference, wrist circumference, torso width/circumference/height, arm length, ear size, distance between eye pupils, etc. Block 2004 may be followed by block 2006.
At block 2006, a region of a clothing item, such as the mouth of the shoe (prior to scaling or other deformation) that is to be fitted to the ankle of the avatar body, is identified. The mouth of the shoe may have a first dimension, such as the length of a diagonal across the mouth of the shoe. For other types of clothing/accessories (e.g., hats, gloves, watches, shirts, pants, etc.), the first region and first dimension may be, respectively: an opening of a glove and a diagonal across or a circumference of the opening; the neck opening of a shirt and a diagonal across or the circumference of the opening; the opening or bill of a hat and a diagonal across or a circumference of the opening/bill; etc.
Other values may be used for the first dimension, such as radius, diameter, triangulation or other projection points, etc., which may not necessarily involve a diagonal and instead may be based on some other single- or multi-coordinate (e.g., x, y, z, etc.) computation. Block 2006 may be followed by block 2008.
At block 2008, one or more other dimensions of the clothing item (e.g., a shoe) prior to scaling or other deformation, such as the height, width, toe length, etc., are identified. Block 2008 may be followed by block 2010.
At block 2010, a first relationship (such as a ratio) between the first dimension (e.g., a dimension of the mouth of the shoe, such as the diagonal) and a second dimension (e.g., the height of the shoe) is determined. The relationship may be established/determined via some other technique (alternatively or additionally to a ratio), such as a mathematical formula (other than a basic ratio), interpolation, multiple ratios, etc., with or without deviation/adjustment values.
For instance, for five fingers or toes there may be four (or some other number of) ratios. For a wing, there may be a “folded” to “unfolded” ratio. For an arm, there may be a forearm length to elbow-to-shoulder length ratio. These are just some examples that are possible. Block 2010 may be followed by block 2012.
At block 2012, the dimension(s) of the clothing item (e.g., a shoe) are changed so as to fit the mouth of the shoe to the ankle of the avatar, for example via isotropic scaling in the context of a shoe. In some implementations, scaling may be performed in a non-isotropic manner, such that some regions of the clothing item may be scaled by some amount and other regions of the clothing item are scaled by some other (different) amount or not scaled at all. Block 2012 may be followed by block 2014.
At block 2014, the clothing item is scaled along the second dimension, such as a shoe being scaled along its height. The first relationship (e.g., ratio) between the first dimension (e.g., the diagonal of the mouth of the shoe) and the second dimension (e.g., the height of the shoe) is maintained (i.e., kept) within a range of values after completion of the scaling. For example, the changing that may occur in block 2012 and block 2014 may include modifying a cage of the clothing item.
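As a non-limiting illustration, the following Python sketch condenses blocks 2002 through 2014 for a shoe: measure the ankle and shoe, derive the ratio, isotropically fit the mouth, then rescale the height while keeping the diagonal-to-height ratio within a tolerance of the original. The dictionary representation, the tolerance, and the numeric values are illustrative assumptions.

```python
# A minimal end-to-end sketch of blocks 2002-2014 for a shoe.

def fit_shoe(ankle_diag, shoe, height_adjust=1.0, tol=0.1):
    """Fit the mouth to the ankle by isotropic scaling, then re-scale
    the height, keeping the diagonal:height ratio within `tol` of the
    original template ratio."""
    ratio = shoe["mouth_diag"] / shoe["height"]          # block 2010
    s = ankle_diag / shoe["mouth_diag"]                  # block 2012
    fitted = {k: v * s for k, v in shoe.items()}
    h = fitted["height"] * height_adjust                 # block 2014
    lo_h = fitted["mouth_diag"] / (ratio * (1 + tol))
    hi_h = fitted["mouth_diag"] / (ratio * (1 - tol))
    fitted["height"] = min(max(h, lo_h), hi_h)
    return fitted

template = {"mouth_diag": 3.0, "height": 1.0, "toe_length": 2.0}
print(fit_shoe(1.5, template))
# -> {'mouth_diag': 1.5, 'height': 0.5, 'toe_length': 1.0}
print(fit_shoe(1.5, template, height_adjust=1.3)["height"])
# -> ~0.556: the user's request is clamped to stay near the 3:1 ratio
```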
One or more further operations, such as scaling the toe length of the shoe, performing tapering, removing distortions, blending, applying user adjustments to the scaling within permissible ranges, etc. may also be performed for the shoe, and analogously for other types of clothing items.
While the method 2000 has been described with reference to an avatar that has a foot with associated dimensions (e.g., ankle dimensions) and a shoe, which is an accessory that has corresponding dimensions (e.g., shoe height, diagonal of a mouth of the shoe, etc.), it may be understood that, in various implementations, the method 2000 may be performed for any avatar with any type of body parts with corresponding dimensions.
For example, an avatar may have other types of body parts (e.g., fins, wings, etc.) or multiple numbers of body parts (e.g., fewer or more limbs, twenty hands, ten feet, three heads, etc.). Further, body parts may be associated with corresponding dimensions (e.g., a length of a forearm; a minimum or maximum circumference of a forearm; a wrist with a wrist circumference; a wing with a folded length and one or more unfolded lengths; or any other type of dimension).
In various implementations, reference dimensions may be one-dimensional (e.g., length, width, circumference, etc.); two dimensional (e.g., area of wing, area of torso, etc.); three-dimensional (e.g., volume of torso, etc.), and corresponding dimensions of the clothing item or accessory that is to be fitted may be used.
With other clothing items or accessories, other reference regions may be identified (e.g., wrist for watches, wristbands, or other wrist-worn items; hand for gloves or other hand-worn items; fingers for rings; neck for neck-worn items such as scarves; torso; or any other part of the avatar body). The dimensions for these clothing items and accessories may include a diameter or circumference of the wrist band of the wrist-worn item, a diameter or circumference of the part of a ring that fits over a finger, the length of a bill of a cap, etc.
The example method 2100 may include one or more operations illustrated by one or more blocks, such as blocks 2102 to 2114. The various blocks of the method 2100 and/or of any other process(es) described herein may be combined into fewer blocks, divided into additional blocks, supplemented with further blocks, and/or eliminated based upon the implementation.
The method 2100 of
The operations performed in method 2100 of
At block 2102, an ankle of an avatar body may be identified. The reference region (here, the ankle) may be used as one constraint in scaling a clothing item. Thus, for a shoe, an example of a reference region of the avatar body is the ankle of the avatar body. Block 2102 may be followed by block 2104.
At block 2104, the diagonal of the ankle may be determined as a reference dimension for fitting a shoe. Block 2104 may be followed by block 2106.
At block 2106, the mouth of the shoe (prior to scaling or other deformation) that is to be fitted to the ankle of the avatar body, may be identified. The mouth of the shoe may have a first dimension, such as the length of a diagonal across the mouth of the shoe. Block 2106 may be followed by block 2108.
At block 2108, a height of the shoe prior to scaling or other deformation may be identified as a second dimension of the clothing item (the shoe). Block 2108 may be followed by block 2110.
At block 2110, a ratio between the diagonal of the mouth of the shoe and the height of the shoe may be determined. The relationship may be established/determined via some other technique (alternatively or additionally to a ratio), such as a mathematical formula (other than a basic ratio), interpolation, multiple ratios, etc., with or without deviation/adjustment values. For instance, for five toes there may be four (or some other number of) ratios. Block 2110 may be followed by block 2112.
At block 2112, the diagonal of the mouth of the shoe may be changed so as to fit the mouth of the shoe to the diagonal of the ankle of the avatar, for example via isotropic scaling in the context of a shoe. In some implementations, scaling may be performed in a non-isotropic manner, such that some regions of the clothing item (the shoe) may be scaled by some amount and other regions of the clothing item (the shoe) are scaled by some other (different) amount or not scaled at all. Block 2112 may be followed by block 2114.
At block 2114, the height of the shoe may be changed. The first relationship (e.g., ratio) between the first dimension (e.g., the diagonal of the mouth of the shoe) and the second dimension (e.g., the height of the shoe) may be maintained (i.e., kept) within a certain range of values after completion of the scaling.
The example method 2200 may include one or more operations illustrated by one or more blocks, such as blocks 2202 to 2208. The various blocks of the method 2200 and/or of any other process(es) described herein may be combined into fewer blocks, divided into additional blocks, supplemented with further blocks, and/or eliminated based upon the implementation.
The method 2200 of
At block 2202, a toe length of the shoe is identified. Such a toe length is used subsequently when preserving certain aspect ratios from the original shoe as closely as possible. Block 2202 may be followed by block 2204.
At block 2204, a ratio between the toe length of the shoe and the height of the shoe is determined. Such a ratio is used subsequently when preserving certain aspect ratios from the original shoe as closely as possible. Block 2204 may be followed by block 2206.
At block 2206, the toe length of the shoe is changed while maintaining the ratio between the toe length of the shoe and the height of the shoe after changing the toe length. Hence, the toe length may be scaled to match the avatar's toe length while being constrained relative to the original toe length, the original height, and the new height. Maintaining this relationship prevents the toe from being crushed more than a default or user-defined amount. Block 2206 may be followed by block 2208.
At block 2208, a toe of the shoe is tapered. For example, to improve the shape of the squashed toe, a width taper may be applied. Such a width taper may be computed relative to the original toe length and width and the new toe length and width. A magnitude of this taper may also be user-defined. After the tapering, an improved version of the shoe is available.
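For illustration only, the following Python sketch applies such a width taper with a linear falloff toward the tip; the coordinate convention (x along the length, z as width about the centerline), the falloff shape, and the magnitude are illustrative assumptions.

```python
# A minimal sketch of the width taper at block 2208: vertices are
# narrowed progressively toward the tip of the toe.

def taper_toe(vertices, toe_start_x, tip_x, magnitude):
    """Narrow the toe toward the tip: a vertex at the tip is pulled
    toward the centerline (z = 0) by `magnitude`."""
    out = []
    for x, y, z in vertices:
        if x <= toe_start_x:
            out.append((x, y, z))   # heel region untouched
            continue
        t = (x - toe_start_x) / (tip_x - toe_start_x)  # 0..1 along toe
        out.append((x, y, z * (1.0 - magnitude * t)))
    return out

# Vertices near the tip are narrowed the most, softening the
# "squashed toe" profile left by the length scaling.
print(taper_toe([(0.5, 0, 0.4), (0.9, 0, 0.4), (1.0, 0, 0.4)],
                toe_start_x=0.6, tip_x=1.0, magnitude=0.5))
```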
Processor 2302 can be one or more processors and/or processing circuits to execute program code and control basic operations of the computing device 2300. A “processor” includes any suitable hardware and/or software system, mechanism or component that processes data, signals or other information. A processor may include a system with a general-purpose central processing unit (CPU), multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a particular geographic location or have temporal limitations. For example, a processor may perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing may be performed at different times and at different locations, by different (or the same) processing systems. A computer may be any processor in communication with a memory.
Memory 2304 is typically provided in computing device 2300 for access by the processor 2302, and may be any suitable processor-readable storage medium, e.g., random access memory (RAM), read-only memory (ROM), Electrical Erasable Read-only Memory (EEPROM), Flash memory, etc., suitable for storing instructions for execution by the processor, and located separate from processor 2302 and/or integrated therewith. Memory 2304 can store software operating on the computing device 2300 by the processor 2302, including an operating system 2308, virtual experience application 2310, a fitting and tailoring application 2312, and other applications (not shown). In some implementations, application 2310 and/or application 2312 can include instructions that enable processor 2302 to perform the functions (or control the functions of) described herein, e.g., some or all of the methods described with respect to
For example, applications 2310 can include a fitting and tailoring application 2312, which as described herein can fit and tailor clothing to avatars within an online virtual experience server (e.g., 102). Elements of software in memory 2304 can alternatively be stored on any other suitable storage location or computer-readable medium. In addition, memory 2304 (and/or other connected storage device(s)) can store instructions and data used in the features described herein. Memory 2304 and any other type of storage (magnetic disk, optical disk, magnetic tape, or other tangible media) can be considered “storage” or “storage devices.”
I/O interface 2306 can provide functions to enable interfacing the computing device 2300 with other systems and devices. For example, network communication devices, storage devices (e.g., memory and/or data store 120), and input/output devices can communicate via interface 2306. In some implementations, the I/O interface can connect to interface devices including input devices (keyboard, pointing device, touchscreen, microphone, camera, scanner, etc.) and/or output devices (display device, speaker devices, printer, motor, etc.).
The audio/video input/output devices 2314 can include a user input device (e.g., a mouse, etc.) that can be used to receive user input, a display device (e.g., screen, monitor, etc.) and/or a combined input and display device, that can be used to provide graphical and/or visual output.
For ease of illustration,
A user device can also implement and/or be used with features described herein. Example user devices can be computer devices including some similar components as the computing device 2300, e.g., processor(s) 2302, memory 2304, and I/O interface 2306. An operating system, software and applications suitable for the client device can be provided in memory and used by the processor. The I/O interface for a client device can be connected to network communication devices, as well as to input and output devices, e.g., a microphone for capturing sound, a camera for capturing images or video, a mouse for capturing user input, a gesture device for recognizing a user gesture, a touchscreen to detect user input, audio speaker devices for outputting sound, a display device for outputting images or video, or other output devices. A display device within the audio/video input/output devices 2314, for example, can be connected to (or included in) the computing device 2300 to display images pre- and post-processing as described herein, where such display device can include any suitable display device, e.g., an LCD, LED, or plasma display screen, CRT, television, monitor, touchscreen, 3-D display screen, projector, or other visual display device. Some implementations can provide an audio output device, e.g., voice output or synthesis that speaks text.
One or more methods described herein (e.g., method 2000, 2100, and/or 2200) can be implemented by computer program instructions or code, which can be executed on a computer. For example, the code can be implemented by one or more digital processors (e.g., microprocessors or other processing circuitry), and can be stored on a computer program product including a non-transitory computer readable medium (e.g., storage medium), e.g., a magnetic, optical, electromagnetic, or semiconductor storage medium, including semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), flash memory, a rigid magnetic disk, an optical disk, a solid-state memory drive, etc. The program instructions can also be contained in, and provided as, an electronic signal, for example in the form of software as a service (SaaS) delivered from a server (e.g., a distributed system and/or a cloud computing system). Alternatively, one or more methods can be implemented in hardware (logic gates, etc.), or in a combination of hardware and software. Example hardware can be programmable processors (e.g., Field-Programmable Gate Array (FPGA), Complex Programmable Logic Device), general purpose processors, graphics processors, Application Specific Integrated Circuits (ASICs), and the like. One or more methods can be performed as part of or component of an application running on the system, or as an application or software running in conjunction with other applications and operating systems.
One or more methods described herein can be run in a standalone program that can be run on any type of computing device, a program run on a web browser, a mobile application (“app”) run on a mobile computing device (e.g., cell phone, smart phone, tablet computer, wearable device (wristwatch, armband, jewelry, headwear, goggles, glasses, etc.), laptop computer, etc.). In one example, a client/server architecture can be used, e.g., a mobile computing device (as a client device) sends user input data to a server device and receives from the server the final output data for output (e.g., for display). In another example, all computations can be performed within the mobile app (and/or other apps) on the mobile computing device. In another example, computations can be split between the mobile computing device and one or more server devices.
Although the description has been described with respect to particular implementations thereof, these particular implementations are merely illustrative, and not restrictive. Concepts illustrated in the examples may be applied to other examples and implementations.
The functional blocks, operations, features, methods, devices, and systems described in the present disclosure may be integrated or divided into different combinations of systems, devices, and functional blocks as would be known to those skilled in the art. Any suitable programming language and programming techniques may be used to implement the routines of particular implementations. Different programming techniques may be employed, e.g., procedural or object-oriented. The routines may execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, the order may be changed in different particular implementations. In some implementations, multiple steps or operations shown as sequential in this specification may be performed at the same time.
This application claims priority to U.S. Provisional Application No. 63/531,885, entitled “AUTOMATIC FITTING AND TAILORING FOR STYLIZED AVATARS,” filed on Aug. 10, 2023, the content of which is incorporated herein in its entirety.