Method and device for setting a multi-user virtual reality chat environment

Information

  • Patent Grant
  • Patent Number
    11,138,780
  • Date Filed
    Friday, August 28, 2020
  • Date Issued
    Tuesday, October 5, 2021
Abstract
A method for setting a multi-user virtual reality chat environment. The method adjusts a distance between avatars of participants in the VR chat scene from a first distance to a second distance according to a detected volume of voice signals received from the participants.
Description
BACKGROUND
1. Technical Field

The disclosure relates to computer techniques, and more particularly to a method for providing a multi-user virtual reality chat environment.


2. Description of Related Art

A virtual reality (VR) system may provide various VR functions to support immersive user experiences. While a major part of VR adoption has been within the gaming community for playing video games, VR application is not limited to the art of video gaming. As technologies supporting social activities, such as social networks, advance, building connections between VR and social technologies may unlock promising synergy. Hence, designing VR functions for social activities has become an important research area in this field.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an embodiment of a virtual reality system of the disclosure;



FIG. 2 is a block diagram of an alternative embodiment of a virtual reality system of the disclosure;



FIG. 3 is a block diagram of an embodiment of a virtual reality device executing a method of the disclosure;



FIG. 4 is a block diagram of an embodiment of a plurality of participants joining a chat environment through a standalone VR server;



FIG. 5 is a block diagram of an embodiment of a plurality of participants joining a chat environment provided by a VR system of a participant;



FIG. 6 is a block diagram of an embodiment of a method of the disclosure;



FIG. 7 is a block diagram of a VR chat scene;



FIG. 8 is a block diagram of a VR chat scene;



FIG. 9 is a block diagram of a VR chat scene;



FIG. 10 is a block diagram of a VR chat scene; and



FIG. 11 is a block diagram showing VR chat scene adjustment.





DETAILED DESCRIPTION

The disclosure provides a method for setting a multi-user virtual reality chat environment executable by an electronic device. The electronic device executing the disclosed method may be a VR device, a game console, a personal computer, or a smart phone.


With reference to FIG. 1, a VR system 110 for setting up a multi-user virtual reality chat environment has a head-mounted display (HMD) 102 and a processing unit 101. The processing unit 101 outputs one or more virtual reality (VR) scenes to the HMD 102. The HMD 102 receives and displays the VR scene to an active user. The HMD 102 may be disposed on a wearable VR headset. The HMD 102 provides VR visuals to the active user when the wearable VR headset is put on and covers the vision of the active user. The HMD 102 has a display 121, an audio output 122, optical blocks 123, locators 124, position sensors 125, and an inertial measurement unit (IMU) 126. An exemplary embodiment of the HMD 102 may be referenced to a VR headset as disclosed in US patent publication No. 20170131774.


In some embodiments, the processing unit 101 may be a computer, a VR server, a smart phone, a gaming console, or any device capable of controlling and driving the HMD 102. An exemplary embodiment of the processing unit 101 may be referenced to a VR console as disclosed in US patent publication No. 20170131774. The processing unit 101 includes an application store 111, a tracking module 112, and a VR engine 113. The application store 111 stores VR applications. Each of the VR applications provides at least one VR scene. The tracking module 112 tracks and outputs movement and positions of the HMD 102 to the VR engine 113. The VR engine 113 determines a position of an avatar of the active user in the VR scene.


The processing unit 101 may connect to the HMD 102 through a wireless communication channel. Alternatively, the processing unit 101 may connect to the HMD 102 through a wire-lined communication channel. The processing unit 101 may be rigidly coupled to the HMD 102 such that the processing unit 101 and the HMD 102 act as a rigid entity. With reference to FIG. 2, for example, a VR device 110d is an embodiment of the VR system 110 wherein the processing unit 101 and the HMD 102 are rigidly coupled. Alternatively, the processing unit 101 may be non-rigidly coupled to the HMD 102 such that the HMD 102 is mobile relative to the processing unit 101.


With reference to FIG. 3, the method of the disclosure may be implemented by computer programs stored in storage media, such as mass storage 903 in a device 900. The computer programs implementing the method, when loaded into a memory 902 by a processor 901, direct the processor 901 in the device 900 to execute the method for setting up the multi-user virtual reality chat environment. The processor 901 communicates with other entities through a networking interface 904.


With reference to FIG. 4, each of VR systems 110a, 110b, and 110c is an embodiment of the VR system 110. A VR server 120, for example, may be an embodiment of the processing unit 101. An instance of the application store 111 in VR server 120 provides a VR application among a plurality of VR applications to the VR systems 110a, 110b, and 110c. An instance of the tracking module 112 in VR server 120 may track positions and movement of a user 130a of the VR system 110a, a user 130b of the VR system 110b, and a user 130c of the VR system 110c. An instance of the VR engine 113 in VR server 120 may determine positions of avatars of the users 130a, 130b, and 130c and place the avatars in a scene of the provided VR application.


With reference to FIG. 5, for example, the processing unit 101 in the VR system 110b may function as the VR server 120. Any processing unit 101 that is in charge of executing a method of setting a multi-user VR chat environment as disclosed in FIG. 6 is referred to as a primary processing unit. For example, the primary processing unit is the VR server 120 in FIG. 4, and the primary processing unit is the processing unit 101 in the VR system 110b in FIG. 5. Alternatively, either one of the processing units 101 in the VR system 110a or 110c in FIG. 5 may serve as the primary processing unit according to user operations.


With reference to FIG. 6, a primary processing unit receives user data of the users 130a, 130b, and 130c (Block S1) and creates personal profiles of the users 130a, 130b, and 130c based on the received user data (Block S2). The personal profile of a user, for example, may include a portrait image, gender, height, and weight of the user. The primary processing unit creates an avatar for each of the users 130a, 130b, and 130c based on the personal profiles of the users 130a, 130b, and 130c (Block S3). The primary processing unit receives an invitation to join a chat, for example, from the VR system 110a, and transfers the invitation to the VR systems 110b and 110c (Block S4). The primary processing unit allows the users to select a virtual reality scene from a plurality of virtual reality scenes, then sets up the selected VR scene of a VR application associated with the chat (Block S5). The primary processing unit allows each of the users 130a, 130b, and 130c to join the chat as a participant and allows the avatars of the users 130a, 130b, and 130c to enter the selected VR scene associated with the chat (Block S6).
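The flow of Blocks S1 through S6 can be sketched in code. This is a minimal illustration only; the class and field names (Profile, Avatar, ChatSession) are hypothetical and not taken from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    # Block S2: a personal profile built from received user data
    name: str
    gender: str = ""
    height_cm: float = 0.0
    weight_kg: float = 0.0

@dataclass
class Avatar:
    # Block S3: an avatar created from a personal profile
    profile: Profile
    position: tuple = (0.0, 0.0)

@dataclass
class ChatSession:
    scene: str = ""
    participants: list = field(default_factory=list)  # avatars in the scene

    def select_scene(self, scenes, choice):
        # Block S5: pick one VR scene from the plurality of VR scenes
        if choice not in scenes:
            raise ValueError(f"unknown scene: {choice}")
        self.scene = choice

    def join(self, profile):
        # Blocks S3/S6: create an avatar for the user and admit it to the chat
        avatar = Avatar(profile)
        self.participants.append(avatar)
        return avatar

session = ChatSession()
session.select_scene({"beach", "cafe"}, "cafe")
a = session.join(Profile("user130a"))
b = session.join(Profile("user130b"))
```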


With reference to FIG. 7, the primary processing unit allocates an avatar 201a of the user 130a and an avatar 201b of the user 130b into the VR chat scene 200 in Block S7.


The primary processing unit monitors user voices and processes background music in Block S8. As an example, the primary processing unit receives voice signals of the users 130a and 130b respectively from the VR system 110a and 110b.


The primary processing unit obtains a first volume value x1 representing the volume of the voice signals associated with a speaking participant, such as the user 130a. The primary processing unit adjusts a distance between a first avatar of a first participant and a second avatar of a second participant in the VR chat scene from a first distance y1 to a second distance y2 according to the first volume value x1. The primary processing unit rearranges the selected VR scene and the avatars (Block S9). In the Block S9, the primary processing unit reduces the distance between the first avatar of the first participant and the second avatar of the second participant in the VR chat scene if at least one of the first volume value from the first participant and a second volume value from the second participant exceeds a volume threshold value x2. For example, the primary processing unit obtains the second distance y2 from a formula (1):










y2 = y1 / (2 × 10^((x1 − x2) / 10))  (1)
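Under one plausible reading of the flattened formula (1), the second distance divides the first distance by 2·10^((x1−x2)/10), so a voice louder than the threshold pulls the avatars closer. The following sketch assumes that reading; the function name and units are illustrative:

```python
def second_distance(y1, x1, x2):
    """Sketch of formula (1), read as y2 = y1 / (2 * 10**((x1 - x2) / 10)).

    y1: current distance between the two avatars (first distance)
    x1: measured volume value of the speaking participant (e.g. in dB)
    x2: the volume threshold value

    With this reading, a voice at exactly the threshold halves the
    distance, and a louder voice shrinks it further.
    """
    return y1 / (2 * 10 ** ((x1 - x2) / 10))

# Volume equal to the threshold: the distance is halved.
d_equal = second_distance(4.0, 60.0, 60.0)   # 2.0
# Volume above the threshold: the avatars move closer still.
d_loud = second_distance(4.0, 70.0, 60.0)
```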







Referring to FIG. 7, the primary processing unit interprets the definition of a free operating area 202a enclosing the avatar 201a of the user 130a and a free operating area 202b enclosing the avatar 201b of the user 130b.


In the Block S9, the primary processing unit rearranges the VR scene and prevents the first avatar of the first participant from entering a second free operating area of the second participant, and prevents the second avatar of the second participant from entering a first free operating area of the first participant while reducing the distance between the first avatar and the second avatar in the VR chat scene. With reference to FIG. 8, for example, the primary processing unit prevents the avatar 201a of the user 130a from entering the free operating area 202b of the user 130b and prevents the avatar 201b of the user 130b from entering the free operating area 202a while reducing the distance between the avatar 201a and the avatar 201b in the VR chat scene 200.
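Treating each free operating area as a circle around its avatar (an assumption made for this sketch; the disclosure does not fix the shape), the exclusion rule reduces to a lower bound on the center-to-center distance:

```python
def clamp_distance(target, r1, r2):
    """Keep avatars outside each other's free operating areas.

    target: desired center-to-center distance (e.g. y2 from formula (1))
    r1, r2: radii of the first and second free operating areas

    Since neither avatar may enter the other's area, the centers can
    never be closer than r1 + r2; the approach stops at that boundary.
    """
    return max(target, r1 + r2)

clamp_distance(0.5, 1.0, 1.0)  # stopped at the area boundary: 2.0
clamp_distance(5.0, 1.0, 1.0)  # target already outside both areas: 5.0
```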


The primary processing unit determines whether an additional participant joins the chat (Block S10). With reference to FIG. 9, when a third participant, such as the user 130c, asks or is invited to join the chat, the primary processing unit allows the user 130c to join the chat (Block S6), places an avatar 201c of the user 130c into the VR chat scene 200 (Block S7), and receives voice signals of the users 130a, 130b, and 130c respectively from the VR systems 110a, 110b, and 110c (Block S8). The primary processing unit rearranges the VR scene and the avatars (Block S9). The primary processing unit identifies the definition of a free operating area 202c as a third free operating area enclosing the avatar 201c of the user 130c. In the Block S9, the primary processing unit rearranges the VR scene and prevents the third avatar of the third participant from entering the first free operating area, and the first avatar from entering the third free operating area, while reducing the distance between the third avatar and the first avatar in the VR chat scene. The primary processing unit prevents the third avatar of the third participant from entering the second free operating area, and the second avatar from entering the third free operating area, while reducing the distance between the third avatar and the second avatar in the VR chat scene.


The primary processing unit may relocate relative positions of the first avatar, the second avatar, and the third avatar according to the formula (1) while reducing the distance between the first avatar of the first participant and the second avatar of the second participant in the VR chat scene. Similarly, the primary processing unit may allow more participants to join the chat.


The primary processing unit receives the selection of an active chat mode among a plurality of chat modes from a participant, such as the user 130a. When a whisper mode is selected and activated as the active chat mode, the primary processing unit receives the selection of a recipient participant solely receiving voice signals associated with the user 130a in the whisper mode (Block S11). The selection may be issued from the VR system 110a by the user 130a. For example, the primary processing unit is notified by the VR system 110a that the user 130a selects the user 130c as the recipient participant solely receiving voice signals associated with the user 130a. The primary processing unit prompts a message asking whether the user 130b agrees to change position and step back for the user 130c. With reference to FIG. 10, if the user 130b agrees to change position and step back for the user 130c, the primary processing unit rearranges positions of the avatars 201c and 201b and creates a channel dedicated to audio communication between the VR systems 110a and 110c.
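The dedicated audio channel of the whisper mode can be sketched as a two-member channel that delivers voice signals only between the speaker's and the recipient's VR systems. The class and method names here are hypothetical:

```python
class WhisperChannel:
    """A channel dedicated to audio communication between two VR systems.

    Only the speaker and the chosen recipient are members; voice
    signals from anyone else are dropped.
    """

    def __init__(self, speaker, recipient):
        self.members = {speaker, recipient}

    def deliver(self, sender, voice):
        # Returns the list of systems that receive the voice signal.
        if sender not in self.members:
            return []  # non-members cannot inject audio into the channel
        return [m for m in self.members if m != sender]

# The user of VR system 110a whispers solely to the user of VR system 110c.
channel = WhisperChannel("110a", "110c")
channel.deliver("110a", "hello")   # only "110c" hears it
channel.deliver("110b", "hello")   # excluded: delivers to no one
```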


The primary processing unit may adjust the dimension of the chat scene 200 according to the number of avatars in the scene 200, that is, the number of users joined to the chat. With reference to FIG. 11, a VR scene 200a is an instance of the chat scene 200 before the adjustment, and a VR scene 200b is an instance of the chat scene 200 after the adjustment. The dimension of the chat scene 200a is measured by a width Y1 and a length X1. The dimension of the chat scene 200b is measured by a width Y2 and a length X2. The primary processing unit may complete the scene adjustment by transitioning the scene 200a to the VR scene 200b in a transition period T.


The primary processing unit may determine the transition period T to complete the scene adjustment using a lookup table. For example, the primary processing unit uses Table 1 to retrieve one index D among indices D1, D2, D3, D4, and D5.










TABLE 1

                        X1
(X2 − X1)    >200    200~150    150~100    100~50    <50
>100          D1       D2         D3         D4       D5
100~75        D2       D3         D4         D5       D5
75~50         D3       D4         D5         D5       D5
50~25         D3       D4         D5         D5       D5
<25           D3       D4         D5         D5       D5









The primary processing unit obtains a parameter S according to the following formula (2):









S = (X1 − X2) / X1  (2)







The primary processing unit uses the parameter S and Table 2 to retrieve one value K among time values K1, K2, K3, K4, and K5 and designates K to be the value of the transition period T. The primary processing unit performs and completes the VR scene adjustment within the period T.
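The two-stage lookup can be sketched as follows. The table contents mirror Tables 1 and 2; the bucketing helper and the use of |X1 − X2| / X1 (keeping S positive for the lookup) are assumptions of this sketch, not specified by the disclosure:

```python
# Table 1: rows are (X2 - X1) buckets >100, 100~75, 75~50, 50~25, <25;
# columns are X1 buckets >200, 200~150, 150~100, 100~50, <50.
TABLE1 = [
    ["D1", "D2", "D3", "D4", "D5"],
    ["D2", "D3", "D4", "D5", "D5"],
    ["D3", "D4", "D5", "D5", "D5"],
    ["D3", "D4", "D5", "D5", "D5"],
    ["D3", "D4", "D5", "D5", "D5"],
]
# Table 2: rows are the index D; columns are S buckets
# >0.4, 0.4~0.3, 0.3~0.2, 0.2~0.1, <0.1.
TABLE2 = {
    "D1": ["K1", "K2", "K3", "K3", "K3"],
    "D2": ["K2", "K3", "K4", "K4", "K4"],
    "D3": ["K3", "K4", "K5", "K5", "K5"],
    "D4": ["K4", "K5", "K5", "K5", "K5"],
    "D5": ["K5", "K5", "K5", "K5", "K5"],
}

def bucket(value, bounds):
    # bounds sorted descending; returns 0 if value > bounds[0],
    # 1 if value > bounds[1], ..., len(bounds) otherwise.
    for i, b in enumerate(bounds):
        if value > b:
            return i
    return len(bounds)

def transition_period(x1, x2):
    """Retrieve D from Table 1, then the period K from Table 2."""
    row = bucket(x2 - x1, [100, 75, 50, 25])
    col = bucket(x1, [200, 150, 100, 50])
    d = TABLE1[row][col]
    s = abs(x1 - x2) / x1  # formula (2), kept positive for the lookup
    s_col = bucket(s, [0.4, 0.3, 0.2, 0.1])
    return TABLE2[d][s_col]
```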










TABLE 2

                        S
D        >0.4    0.4~0.3    0.3~0.2    0.2~0.1    <0.1
D1        K1       K2         K3         K3        K3
D2        K2       K3         K4         K4        K4
D3        K3       K4         K5         K5        K5
D4        K4       K5         K5         K5        K5
D5        K5       K5         K5         K5        K5









In conclusion, the present application discloses methods for setting a multi-user virtual reality chat environment, adjusting a distance between avatars of participants in the VR chat scene from a first distance to a second distance according to the detected volume of the participants. The VR scene adjustment may proceed in a limited time period determined based on a lookup table. The method allows whispering between two participants in the VR chat scene in a whisper mode.


It is to be understood, however, that even though numerous characteristics and advantages of the disclosure have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in detail, especially in matters of shape, size, and arrangement of parts within the principles of the present disclosure to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.

Claims
  • 1. A device for setting a multi-user virtual reality chat environment comprising: one or more processors; a memory coupled to the one or more processors providing a plurality of virtual reality (VR) scenes, the memory storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising: allowing selection of a VR chat scene from the plurality of virtual reality scenes; allowing a first participant and a second participant to join the selected VR chat scene; placing a first avatar of the first participant and a second avatar of the second participant into the selected VR chat scene; receiving voice signals associated with the first participant and voice signals associated with the second participant; obtaining a first volume value representing a volume of the voice signals associated with the first participant and a second volume value representing a volume of the voice signals associated with the second participant; reducing a distance between the first avatar of the first participant and the second avatar of the second participant in the VR chat scene from a first distance to a second distance based on the obtained first volume value and the second volume value; determining a time period to complete the reducing of the distance according to a difference between the first distance and the second distance, wherein the time period is determined using a lookup table; and rearranging a position of at least one of the first avatar and the second avatar when the reducing of the distance is completed according to the determined time period so as to create a channel dedicated to an audio communication between a virtual reality system of the first avatar and a virtual reality system of the second avatar.
  • 2. The device of claim 1, further comprising: reducing the distance between the first avatar of the first participant and the second avatar of the second participant in the VR chat scene if at least one of the first volume value and the second volume value exceeds a volume threshold value.
  • 3. The device of claim 1, further comprising: defining a first free operating area enclosing the first avatar of the first participant and a second free operating area enclosing the second avatar of the second participant, and preventing the first avatar of the first participant from entering the second free operating area and the second avatar of the second participant from entering the first free operating area while reducing the distance between the first avatar of the first participant and the second avatar of the second participant in the VR chat scene.
  • 4. The device of claim 3, further comprising: allowing a third participant to join the virtual reality chat; placing a third avatar of the third participant into the VR chat scene; defining a third free operating area enclosing the third avatar of the third participant; and preventing the third avatar of the third participant from entering the first free operating area and the second free operating area and the first avatar of the first participant and the second avatar of the second participant from entering the third free operating area while reducing a distance between the third avatar of the third participant and the second avatar of the second participant in the VR chat scene.
  • 5. The device of claim 4, further comprising: relocating relative positions of the first avatar, the second avatar, and the third avatar while reducing the distance between the first avatar of the first participant and the second avatar of the second participant in the VR chat scene.
  • 6. The device of claim 4, further comprising: receiving a selection of an active chat mode among a plurality of chat modes from the first participant, and receiving a selection of a recipient participant solely receiving voice signals associated with the first participant from the first participant when the active chat mode is a whisper mode.
US Referenced Citations (53)
Number Name Date Kind
6241609 Rutgers Jun 2001 B1
6784901 Harvey Aug 2004 B1
8187093 Hideya May 2012 B2
8424075 Walsh Apr 2013 B1
8653349 White Feb 2014 B1
9195305 Markovic Nov 2015 B2
9311742 Glover Apr 2016 B1
9729820 Holmes Aug 2017 B1
10168768 Kinstner Jan 2019 B1
10181218 Goetzinger, Jr. Jan 2019 B1
10225656 Kratz Mar 2019 B1
10275098 Clements Apr 2019 B1
10768776 Roche Sep 2020 B1
10776933 Faulkner Sep 2020 B2
10846898 Lee Nov 2020 B2
11054272 Bejot Jul 2021 B2
20010044725 Matsuda Nov 2001 A1
20020013813 Matsuoka Jan 2002 A1
20030234859 Malzbender Dec 2003 A1
20040109023 Tsuchiya Jun 2004 A1
20080294721 Berndt Nov 2008 A1
20090106670 Berndt Apr 2009 A1
20090240359 Hyndman Sep 2009 A1
20100077034 Alkov Mar 2010 A1
20100077318 Alkov Mar 2010 A1
20110269540 Gillo Nov 2011 A1
20120069131 Abelow Mar 2012 A1
20120131478 Maor May 2012 A1
20130083154 Kim Apr 2013 A1
20130155169 Hoover Jun 2013 A1
20130218688 Roos Aug 2013 A1
20130321568 Suzuki Dec 2013 A1
20160320847 Coleman Nov 2016 A1
20170034226 Bostick Feb 2017 A1
20170123752 Nadler May 2017 A1
20170132845 Everman, II May 2017 A1
20170326457 Tilton Nov 2017 A1
20170359467 Norris Dec 2017 A1
20180005439 Evans Jan 2018 A1
20180015362 Terahata Jan 2018 A1
20180045963 Hoover Feb 2018 A1
20180123813 Milevski May 2018 A1
20180350144 Rathod Dec 2018 A1
20190026071 Tamaoki Jan 2019 A1
20190217198 Clark Jul 2019 A1
20190320143 Izumihara Oct 2019 A1
20190349464 Ma Nov 2019 A1
20190387299 Evans Dec 2019 A1
20200099891 Valli Mar 2020 A1
20200311995 Lee Oct 2020 A1
20200371737 Leppanen Nov 2020 A1
20200394829 Lee Dec 2020 A1
20210037063 Takahashi Feb 2021 A1
Related Publications (1)
Number Date Country
20200394829 A1 Dec 2020 US
Continuations (1)
Number Date Country
Parent 16367388 Mar 2019 US
Child 17005623 US