Claims
- 1. An audio user-interfacing method in which items are represented in an audio field by corresponding synthesized sound sources from where sounds related to the items appear to emanate, the user being able also to hear real-world sounds from the environment; the method including the step of cyclically changing the position in said audio field of the or each synthesized sound source of a group of at least one synthesized sound source whereby to assist the user in distinguishing sounds emanating from the sound source from said real-world sounds.
- 2. A method according to claim 1, wherein the said group of at least one sound source is associated with an audio-field reference relative to which the member sound sources of the group are positioned, the audio-field reference being offset relative to a presentation reference determined by a mounting configuration of audio output devices used to synthesize said sound sources such as to world-stabilise the audio-field reference as the user moves; the or each group sound source representing a corresponding augmented reality service that has an associated real-world location, and the or each group sound source being positioned, on average, relative to the audio-field reference such that for a user located in a notional reference position, the sound source lies on average in the same direction as the associated real-world location.
- 3. A method according to claim 2, wherein the or each sound source of said group is given a cyclic change in position by cyclically varying the offset of the associated audio-field reference.
- 4. A method according to claim 2, wherein the or each sound source of said group is given a cyclic change in position by cyclically varying its position relative to the associated audio-field reference.
- 5. A method according to claim 1, wherein the cyclic change in position of the sound source takes the form of linear oscillations.
- 6. A method according to claim 1, wherein the cyclic change in position of the sound source takes the form of circular movements.
- 7. An audio user-interfacing method in which items are represented in an audio field by corresponding synthesized sound sources from where sounds related to the items appear to emanate, the user being able also to hear real-world sounds from the environment; the method including the step of applying a distinctive presentation effect to the item-related sounds emanating from a group of at least one synthesized sound source whereby to assist the user in distinguishing these sounds from said real-world sounds; said group of at least one sound source being associated with an audio-field reference relative to which the sound sources of the group are positioned, and the audio-field reference being moved relative to a presentation reference determined by a mounting configuration of audio output devices used to synthesize said sound sources such as to impart an underlying stabilisation to the audio-field reference as the user moves, said distinctive presentation effect being that movement of the audio-field reference to impart said underlying stabilisation is done only at intervals.
- 8. A method according to claim 7, wherein the audio-field reference, between being moved to impart said underlying stabilisation, has a stabilisation corresponding to that inherent to the presentation reference.
- 9. A method according to claim 7, wherein the or each group sound source represents an augmented reality service that has an associated real-world location, said underlying stabilisation being a world stabilisation and the or each group sound source being positioned relative to the audio-field reference such that for a user located in a notional reference position, the sound source lies in the same direction as the associated real-world location when the audio-field reference has just been moved to effect world stabilisation of the sound source.
- 10. An audio user-interfacing method in which items are represented in an audio field by corresponding synthesized sound sources from where sounds related to the items appear to emanate, the user being able also to hear real-world sounds from the environment; the method involving applying a distinctive presentation effect to the item-related sounds emanating from a group of at least one synthesized sound source whereby to assist the user in distinguishing these sounds from said real-world sounds; said distinctive presentation effect being an underlying stabilisation to which the group of sound sources is only periodically updated.
- 11. Apparatus for providing an audio user interface in which items are represented in an audio field by corresponding synthesized sound sources from where sounds related to the items appear to emanate, the apparatus comprising:
a rendering-position determining arrangement for determining, for each said sound source, an associated rendering position at which the sound source is to be synthesized to sound in the audio field, the rendering-position determining arrangement including a unit for cyclically changing the position of each sound source whereby to assist the user in distinguishing these sounds from real-world sounds heard from the environment; and a rendering subsystem, including audio output devices, for generating an audio field in which said sound sources are synthesized at their associated rendering positions, the audio output devices being such as to permit the user also to hear real-world sounds from the environment.
- 12. Apparatus according to claim 11, wherein the rendering-position determining arrangement further includes:
a setting arrangement for setting the location of the or each group sound source relative to an audio-field reference, the unit for cyclically changing the position of each sound source being arranged to impart a cyclic variation to said location; a control arrangement for controlling an offset between the audio-field reference and a presentation reference, the presentation reference being determined by a mounting configuration of the audio output devices; and a deriving arrangement operative to derive the rendering position of the or each group sound source based on its location relative to the audio-field reference and said offset.
- 13. Apparatus according to claim 11, wherein the rendering-position determining arrangement further includes:
a setting arrangement for setting the location of the or each group sound source relative to an audio-field reference; a control arrangement for controlling an offset between the audio-field reference and a presentation reference, the presentation reference being determined by a mounting configuration of the audio output devices, and the unit for cyclically changing the position of each sound source being arranged to impart a cyclic variation to said offset; and a deriving arrangement operative to derive the rendering position of the or each group sound source based on its location relative to the audio-field reference and said offset.
- 14. Apparatus according to claim 11, wherein the or each group sound source represents a corresponding augmented reality service that has an associated real-world location, the rendering-position determining arrangement being arranged to world-stabilise the audio-field reference and to position the or each group sound source relative to the audio-field reference such that for a user located in a notional reference position, the sound source lies on average in the same direction as the corresponding said real-world location.
- 15. Apparatus for providing an audio user interface in which items are represented in an audio field by corresponding synthesized sound sources from where sounds related to the items appear to emanate, the apparatus comprising:
a rendering-position determining arrangement for determining, for each said sound source, an associated rendering position at which the sound source is to be synthesized to sound in the audio field, the rendering-position determining arrangement comprising:
a setting arrangement for setting the location of the or each said group sound source relative to an audio-field reference; a control arrangement for controlling an offset between the audio-field reference and a presentation reference, the presentation reference being determined by a mounting configuration of the audio output devices; and a deriving arrangement operative to derive the rendering position of the or each group sound source based on its location relative to the audio-field reference and said offset; the control arrangement being arranged to control said offset such as to impart an underlying stabilisation to the audio-field reference as the user moves, with changes to said offset only being done at intervals; and a rendering subsystem, including audio output devices, for generating an audio field in which said sound sources are synthesized at their associated rendering positions, the audio output devices being such as to permit the user also to hear real-world sounds from the environment.
- 16. Apparatus according to claim 15, wherein the control arrangement is further arranged such that between changes in said offset effected to impart said underlying stabilisation, the audio-field reference is given a stabilisation corresponding to that inherent to the presentation reference.
- 17. Apparatus according to claim 15, wherein the or each group sound source represents a corresponding augmented reality service that has an associated real-world location, the rendering-position determining arrangement being operative to world-stabilise the audio-field reference and to position the or each group sound source relative to the audio-field reference such that, just after an offset change to update the world stabilisation of the audio-field reference, the sound source lies in the same direction as the corresponding said real-world location for a user located in a notional reference position.
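
The method claims above (claims 1-6, with apparatus counterparts 11-14) turn on giving each synthesized source a small cyclic change of position so the user can tell it apart from real-world sounds. The following is a minimal sketch of that idea, not the patent's implementation: the Python names, the azimuth/elevation model, and the default amplitude and period are all assumptions introduced for illustration.

```python
import math

def dithered_position(base_azimuth_deg, base_elevation_deg, t_s,
                      amplitude_deg=5.0, period_s=2.0, circular=False):
    """Illustrative cyclic variation of a synthesized source's position.

    With circular=False the source oscillates linearly in azimuth
    (cf. claim 5); with circular=True it traces a small circle in
    azimuth/elevation (cf. claim 6). Averaged over a cycle the source
    stays at its base position, so on average it still lies in the
    direction of its associated real-world location (cf. claim 2).
    """
    phase = 2.0 * math.pi * t_s / period_s
    if circular:
        az = base_azimuth_deg + amplitude_deg * math.cos(phase)
        el = base_elevation_deg + amplitude_deg * math.sin(phase)
    else:
        az = base_azimuth_deg + amplitude_deg * math.sin(phase)
        el = base_elevation_deg
    return az % 360.0, el
```

The same dither could equally be added to the offset of the audio-field reference (claim 3) or to each source's position relative to that reference (claim 4); only the term to which the cyclic variation is applied changes.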
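Claims 7-10 and 15-17 instead make the distinctive effect an underlying stabilisation that is applied only at intervals: the offset between the audio-field reference and the head-mounted presentation reference is recomputed periodically rather than continuously, so between updates the field simply moves with the user's head. Again a hedged sketch, assuming an azimuth-only model and illustrative names:

```python
class IntervalStabilizer:
    """Illustrative interval-based world stabilisation (cf. claims 7-10, 15-17)."""

    def __init__(self, interval_s=1.0):
        self.interval_s = interval_s   # how often the offset is refreshed
        self.offset_deg = 0.0          # audio-field ref vs. presentation ref
        self._last_update_s = None

    def rendering_azimuth(self, source_azimuth_deg, head_azimuth_deg, now_s):
        # Recompute the offset only at intervals; between updates the audio
        # field inherits the head-stabilised presentation reference (cf. claim 8).
        if (self._last_update_s is None
                or now_s - self._last_update_s >= self.interval_s):
            self.offset_deg = -head_azimuth_deg   # re-world-stabilise
            self._last_update_s = now_s
        # Rendering position = source location relative to the audio-field
        # reference plus the controlled offset (cf. claims 12, 13 and 15).
        return (source_azimuth_deg + self.offset_deg) % 360.0
```

Just after each offset update, a source again lies in the direction of its associated real-world location for a user at the notional reference position (cf. claims 9 and 17).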
Priority Claims (2)

| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 0102230.0 | Jan 2001 | GB | |
| 0127754.0 | Nov 2001 | GB | |
Parent Case Info
[0001] This application is a continuation-in-part of our earlier U.S. patent application Ser. No. 10/058,052, filed Jan. 29, 2002.
Continuation in Parts (1)

| Relation | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 10058052 | Jan 2002 | US |
| Child | 10355262 | Jan 2003 | US |