Embodiments of the present disclosure relate to playback. Some relate to playback in virtual reality or augmented reality displays.
Some electronic devices, such as some mobile devices, are configured to provide playback of video and/or audio content.
It would be desirable to improve playback by an electronic device.
According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for: receiving a video stream; segmenting the video stream into a plurality of display areas; recording the video stream for a subset of the plurality of display areas; and enabling user playback control of the recorded video stream within the subset of the plurality of display areas without enabling user playback control of the video stream outside of the subset of the display areas.
In some examples, segmenting the video stream into a plurality of display areas comprises determining at least one object in the video stream.
In some examples, the video stream is a live video stream of a real-world scene.
In some examples, segmenting the video stream is performed prior to an event in the subset of the plurality of display areas of the video stream and wherein enabling user playback control within the subset of the plurality of display areas enables a user to review the event.
In some examples, enabling user playback control comprises enabling a user to watch again footage from the video stream that has already been presented in the field of view of the user.
In some examples, enabling user playback control within the subset of the plurality of display areas comprises enabling at least one of: pausing, rewinding, fast-forwarding, and adjusting playback speed of at least one display area within the subset of display areas.
In some examples, the means are configured to reposition display of at least one display area under user playback control.
In some examples, the means are configured to, when the video stream is viewed separately by a plurality of users, enable independent playback control of at least one display area for different users.
In some examples, the means are configured to provide the segmented video stream to at least one display device.
In some examples, the means are configured to discard the video stream for area or areas outside of the subset of areas.
According to various, but not necessarily all, embodiments there is provided a display device comprising an apparatus as described herein.
According to various, but not necessarily all, embodiments there is provided an electronic device comprising an apparatus as described herein and at least one input configured to receive the video stream.
According to various, but not necessarily all, embodiments there is provided a method comprising: receiving a video stream; segmenting the video stream into a plurality of display areas; recording the video stream for a subset of the plurality of display areas; and enabling user playback control of the recorded video stream within the subset of the plurality of display areas without enabling user playback control of the video stream outside of the subset of the display areas.
In some examples, segmenting the video stream into a plurality of display areas comprises determining at least one object in the video stream.
In some examples, the video stream is a live video stream of a real-world scene.
In some examples, segmenting the video stream is performed prior to an event in the subset of the plurality of display areas of the video stream and wherein enabling user playback control within the subset of the plurality of display areas enables a user to review the event.
In some examples, enabling user playback control comprises enabling a user to watch again footage from the video stream that has already been presented in the field of view of the user.
In some examples, enabling user playback control within the subset of the plurality of display areas comprises enabling at least one of: pausing, rewinding, fast-forwarding, and adjusting playback speed of at least one display area within the subset of display areas.
In some examples, the method comprises repositioning display of at least one display area under user playback control.
In some examples, the method comprises, when the video stream is viewed separately by a plurality of users, enabling independent playback control of at least one display area for different users.
In some examples, the method comprises providing the segmented video stream to at least one display device.
In some examples, the method comprises discarding the video stream for area or areas outside of the subset of areas.
According to various, but not necessarily all, embodiments there is provided a computer program comprising instructions for causing an apparatus to perform: receiving a video stream; segmenting the video stream into a plurality of display areas; recording the video stream for a subset of the plurality of display areas; and enabling user playback control of the recorded video stream within the subset of the plurality of display areas without enabling user playback control of the video stream outside of the subset of the display areas.
In some examples, segmenting the video stream into a plurality of display areas comprises determining at least one object in the video stream.
In some examples, the video stream is a live video stream of a real-world scene.
In some examples, segmenting the video stream is performed prior to an event in the subset of the plurality of display areas of the video stream and wherein enabling user playback control within the subset of the plurality of display areas enables a user to review the event.
In some examples, enabling user playback control comprises enabling a user to watch again footage from the video stream that has already been presented in the field of view of the user.
In some examples, enabling user playback control within the subset of the plurality of display areas comprises enabling at least one of: pausing, rewinding, fast-forwarding, and adjusting playback speed of at least one display area within the subset of display areas.
In some examples, the computer program comprising instructions for causing an apparatus to perform repositioning display of at least one display area under user playback control.
In some examples, the computer program comprising instructions for causing an apparatus to perform, when the video stream is viewed separately by a plurality of users, enabling independent playback control of at least one display area for different users.
In some examples, the computer program comprising instructions for causing an apparatus to perform providing the segmented video stream to at least one display device.
In some examples, the computer program comprising instructions for causing an apparatus to perform discarding the video stream for area or areas outside of the subset of areas.
According to various, but not necessarily all, embodiments there is provided an apparatus comprising at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: receiving a video stream; segmenting the video stream into a plurality of display areas; recording the video stream for a subset of the plurality of display areas; and enabling user playback control of the recorded video stream within the subset of the plurality of display areas without enabling user playback control of the video stream outside of the subset of the display areas.
According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for performing at least part of one or more methods disclosed herein.
According to various, but not necessarily all, embodiments there is provided examples as claimed in the appended claims.
The description of a function and/or action should additionally be considered to also disclose any means suitable for performing that function and/or action.
Some examples will now be described with reference to the accompanying drawings in which:
Examples of the disclosure relate to apparatus, methods and/or computer programs for and/or involved in playback.
Some examples of the disclosure relate to apparatus, methods and/or computer programs for enabling user playback control of a subset of display areas of a video stream.
The following description and FIGs describe various examples of an apparatus 10 comprising means for: receiving a video stream 12; segmenting the video stream 12 into a plurality of display areas 14; recording the video stream 12 for a subset 16 of the plurality of display areas 14; and enabling user playback control of the recorded video stream 12 within the subset 16 of the plurality of display areas 14 without enabling user playback control of the video stream 12 outside of the subset 16 of the display areas 14.
In examples, the apparatus 10 is configured to receive a video stream 12, for example from a remote apparatus and/or device such as a remote camera.
In examples, the apparatus 10 is configured to receive the video stream 12 from a local apparatus and/or device such as a local and/or integrated camera.
In examples, the apparatus 10 is configured to receive one or more user inputs 30 for playback control of at least one display area 14 of the video stream 12. The apparatus can be configured to receive the one or more user inputs 30 in any suitable way.
In examples, the apparatus 10 can be comprised and/or integrated in an electronic device 28. See, for example,
In examples, the video stream 12 is a live video stream of a real-world scene. For example, a video stream 12 of a live lecture, a video stream of a live sporting event and so on.
In examples, the apparatus 10 is configured to segment the video stream 12 into a plurality of display areas 14. For example, segmenting the video stream 12 into a plurality of display areas 14 can comprise determining at least one object 20 in the video stream 12.
As used herein, the term “determining” (and grammatical variants thereof) can include, at least: calculating, computing, processing, deriving, investigating, looking up (for example, looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (for example, receiving information), accessing (for example, accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing, and the like.
In examples, the apparatus 10 is configured to segment the video stream 12 prior to an event 22 in a subset 16 of the plurality of display areas 14 of the video stream 12. For example, with regard to a live lecture, the segmentation can occur prior to a particular slide being shown on a whiteboard. For example, with regard to a live sporting event, the segmentation can occur prior to a goal being scored and so on.
In examples, the apparatus 10 is configured to record the video stream 12 for a subset 16 of the plurality of display areas 14. For example, the apparatus 10 can be configured to store the video stream 12 for a subset 16 of the plurality of display areas 14 in local and/or remote memory.
In examples, the apparatus 10 is configured to enable user playback control of the recorded video stream 12 within the subset 16 of the plurality of display areas 14 without enabling user playback control of the video stream 12 outside of the subset 16 of the display areas 14.
In examples, the apparatus 10 is configured to enable user playback control of the recorded video stream 12 within the subset 16 of the plurality of display areas 14 while playing the video stream 12 outside of the subset 16 of the display areas 14 without user playback control.
In examples, enabling user playback control comprises enabling a user to watch again footage from the video stream 12 that has already been presented in the field of view of the user.
In examples, while watching again footage from the video stream 12 that has already been presented in the field of view of the user, the user can continue seeing the video stream 12 outside of the subset 16 of the display areas 14 in real time.
In examples, the apparatus 10 is configured to reposition display of at least one display area 14 under user playback control. For example, the apparatus 10 can receive one or more user inputs 30 and, based at least in part on the received one or more user inputs 30, reposition display of at least one display area 14 under user playback control.
In examples, the apparatus 10 is configured to provide the segmented video stream 12 to at least one display device 26. For example, the apparatus 10 can be configured to provide the segmented video stream 12 to a plurality of display devices 26, such as a plurality of virtual reality display devices, and to enable independent playback control of at least one display area 14 for different users.
In examples, the apparatus 10 is configured to control a display device 26 to display the video stream 12 with associated playback control as described herein.
In examples, the apparatus 10 is configured to discard the video stream, after display, for area or areas 14 outside of the subset 16 of areas 14.
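By way of a non-limiting illustration only, the overall behaviour described above for the apparatus 10 could be sketched in software roughly as follows. This is a minimal sketch under assumptions that are not part of the disclosure: the class and method names (PlaybackApparatus, on_frame and so on) are invented for illustration, and frames are assumed to be NumPy image arrays such as those produced by OpenCV.

```python
# Minimal, illustrative sketch only; names and data layout are assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DisplayArea:
    area_id: str                          # e.g. 'A', 'B', 'C'
    bbox: tuple                           # (x, y, width, height) within the frame
    under_playback_control: bool = False


class PlaybackApparatus:
    """Receives a stream, segments it and records only a subset of display areas."""

    def __init__(self, subset_ids: List[str]):
        self.subset_ids = subset_ids                  # the subset 16 of areas 14
        self.areas: Dict[str, DisplayArea] = {}       # all display areas 14
        self.recordings: Dict[str, list] = {}         # recorded frames per subset area

    def on_frame(self, frame):
        if not self.areas:                            # segment once, prior to any event 22
            self.areas = {a.area_id: a for a in self.segment(frame)}
        for area_id, area in self.areas.items():
            if area_id in self.subset_ids:            # record only the subset
                x, y, w, h = area.bbox
                self.recordings.setdefault(area_id, []).append(
                    frame[y:y + h, x:x + w].copy())
        self.display(frame)                           # areas outside the subset are
                                                      # shown once and then discarded

    def segment(self, frame) -> List[DisplayArea]:    # any suitable segmentation technique
        raise NotImplementedError

    def display(self, frame):                         # hand the frame to a display device 26
        pass
```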
In examples, the apparatus 10 can comprise any number of additional elements not illustrated in the example of
In the illustrated example, the electronic device 28 comprises means 34 for receiving a video stream 12, a display 32 and an apparatus 10 as described in relation to
The electronic device 28 can comprise any suitable electronic device 28. For example, the electronic device 28 can comprise any suitable processing and/or display device.
In examples, the electronic device 28 can comprise any suitable personal device, and/or any suitable mobile device, and/or any suitable wearable device and so on. In examples, electronic device 28 comprises a virtual reality display device and/or an augmented reality display device.
In examples, the electronic device 28 can be considered an apparatus.
The electronic device 28 can comprise any suitable means 34 for receiving a video stream 12, for example any suitable antenna and/or receiver and/or transceiver and so on.
In examples, the means 34 for receiving a video stream 12 can comprise any suitable means for generating and/or creating a video stream 12, for example one or more cameras and/or one or more volumetric capture devices.
In examples, the electronic device 28 can comprise any suitable display 32, such as any suitable virtual reality and/or augmented reality display.
In the example of
However, in examples, the display is remote and/or separate from the electronic device 28 and the electronic device 28 can comprise means configured to transmit the video stream 12, for example one or more antennas, and/or transmitters and/or transceivers and so on.
In examples, the electronic device 28 comprises a virtual reality display device 26, such as a head mounted virtual reality display device 26, and is, for example, configured to display a video stream 12 from a remote location.
Accordingly, in examples, the electronic device 28 can provide a telepresence for a user of the virtual reality display device 26 at the location of the camera(s) from which the video stream 12 is provided. For example, a telepresence from a camera and/or volumetric capture device, such as a 360 degree camera, at a location in a lecture theatre or classroom can be provided.
Accordingly, examples of the disclosure provide for user playback control of a video stream 12 of a location at which the user is not present.
In some examples, the electronic device 28 comprises an augmented reality display device 26, such as a head mounted augmented reality display device 26, and is, for example, configured to record at least part of a scene being witnessed in person by a user of the electronic device 28.
Accordingly, examples of the disclosure provide for user playback control of a video stream 12 of a location at which the user is present.
In examples, the electronic device 28 can comprise any number of additional elements not illustrated in the example of
Additionally, or alternatively, one or more elements of the electronic device 28 illustrated in the example of
As illustrated in the example of
In examples, the electronic device 28 can be considered to comprise at least one input 36 configured to receive the video stream 12.
Accordingly, in examples,
One or more features discussed in relation to
In examples, method 300 can be considered a method 300 of enabling user playback control of a video stream 12.
In examples, method 300 can be considered a method 300 of reducing memory requirements for enabling user playback control of a video stream 12.
In examples, method 300 can be considered a method 300 for enabling user playback control of a subset 16 of display areas 14 of a video stream 12.
In examples, method 300 can be performed by any suitable apparatus comprising any suitable means for performing the method 300.
In examples, method 300 can be performed by the apparatus of
At block 302, method 300 comprises receiving a video stream 12.
In examples, receiving a video stream 12 can be performed in any suitable way using any suitable method.
In examples, the video stream 12 can be received from a remote device, for example from a device at a location in which a user is not present.
In examples, the video stream 12 can be received from a local device, for example from a device at a location in which a user is present.
In examples, the video stream 12 can be received, directly or indirectly, from any suitable camera device/system, such as, for example, a 360 degree camera device/system configured to provide a substantially 360 degree field of view.
In examples, the video stream 12 can be received, directly or indirectly, from any suitable storage, such as any suitable memory. For example, the video stream 12 can be recorded and stored in memory and receiving the video stream 12 can comprise retrieving the video stream 12 from the memory. In examples, the memory can be local or remote.
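As one hedged example of how receiving the video stream 12 at block 302 could be implemented, frames of a live stream could be read one at a time with OpenCV; the stream URL below is a placeholder rather than a real endpoint, and any capture source could be used instead.

```python
# Illustrative sketch of reading a video stream 12 frame-by-frame with OpenCV.
import cv2

capture = cv2.VideoCapture("rtsp://example.invalid/live")  # placeholder URL; use 0 for a local camera
while True:
    ok, frame = capture.read()      # frame: H x W x 3 BGR image array when ok is True
    if not ok:
        break                       # stream ended or could not be read
    # ... pass the frame on to segmentation, recording and display ...
capture.release()
```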
In examples, a video stream 12 can be considered information configured to allow control of at least one display 32 to display a video. In examples, a video can be considered to comprise associated audio information.
In examples, a video stream 12 can be considered audio and/or visual data configured to be rendered and/or presented by an apparatus and/or device.
Accordingly, in examples, a video stream can comprise audio data without visual data.
In examples, a video stream 12 can be considered information for presentation to a user via one or more devices, such as one or more display devices.
In examples, a video stream 12 can be considered information captured by one or more systems comprising one or more cameras.
In examples, the video stream 12 is a live video stream 12 of a real-world scene. For example, the video stream 12 can comprise information of a live real-world scene recorded and/or captured by a system comprising one or more cameras and/or one or more audio capture devices, such as one or more microphones.
For example, the video stream 12 can be a live video stream 12 of a sporting event such as a football match.
For example, the video stream 12 can be a live video stream 12 of a meeting, for example a business meeting, or a lecture such as a school lesson. See, for example,
In examples, the video stream 12 can be provided to at least one user who is not present at the real-world scene in real-time to allow the user to be virtually present, for example using a virtual reality display device 26.
In examples, the video stream 12 is a live video stream of a location at which a user is present and at which the user is, for example, using an augmented reality display device 26.
At block 304, method 300 comprises segmenting the video stream 12 into a plurality of display areas 14.
In examples, segmenting the video stream 12 into a plurality of display areas 14 can be performed in any suitable way using any suitable method. In examples, the video stream 12 can be segmented into any suitable number of display areas 14.
For example, any suitable image segmentation technique or techniques can be used.
In examples, segmenting the video stream 12 into a plurality of display areas 14 is based, at least in part, on user input. For example, a user can provide input to indicate one or more display areas 14 and/or one or more objects 20.
In examples, segmenting the video stream 12 into a plurality of display areas 14 is, at least partly, automated.
In examples, segmenting the video stream 12 into a plurality of display areas 14 is performed automatically, for example by detecting objects, without user input and/or intervention.
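As a hedged illustration of one possible automatic segmentation, salient regions of a frame could be located with standard OpenCV operations and each bounding rectangle treated as a display area 14. This is only one of many suitable techniques; the edge-detection thresholds, the minimum area and the DisplayArea structure below are assumptions made for this sketch.

```python
# One possible, non-limiting way to segment a frame into display areas 14 automatically;
# any other object detection or image segmentation technique could be substituted.
import cv2
from dataclasses import dataclass


@dataclass
class DisplayArea:
    area_id: str
    bbox: tuple               # (x, y, width, height)


def segment_frame(frame, min_area=2500):
    """Return a list of DisplayArea objects found in a single frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                          # crude saliency proxy
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    areas = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_area:                                  # ignore very small regions
            continue
        areas.append(DisplayArea(area_id=chr(ord('A') + len(areas)), bbox=(x, y, w, h)))
    return areas
```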
A display area 14 can be considered a display portion, and/or a display segment, and/or a display section, and/or a display part, and/or a display zone and so on.
In examples a display area 14 can be considered a sub-area of the overall area to be displayed when displaying the video stream 12.
In examples, a display area 14 can be considered a part of the video stream 12 that is viewable by a user when the video stream 12 is displayed.
By way of example, reference is made to
In the example of
In the illustrated example of
In examples, the video stream 12, when viewed, can have any suitable size and/or shape, as indicated by the three dots to the sides of the rectangle in the example of
In examples, a user may not be able to view all of the viewable area of the video stream 12 at once. For example, a user using a head mounted virtual reality display device 26 or augmented reality display device 26 may have to turn their head to be able to move the field of view 24 of the user and to be able to view different parts of the video stream 12.
In the example of
In the example of
In examples, the display areas 14 can vary with time. For example, the size and/or shape and/or position of one or more of the display areas 14 can vary with time. For example, the size and/or shape and/or positions of one or more of the display areas 14 can vary at different times in the video stream 12.
In examples, pixels of the video stream 12 are assigned and/or allocated to different display areas 14.
In examples, segmenting the video stream 12 into a plurality of display areas 14 comprises determining at least one object 20 in the video stream 12.
Determining at least one object 20 in the video stream 12 can be performed in any suitable way using any suitable method. For example, any suitable object detection technique or techniques can be used.
In examples, determining at least one object 20 in the video stream 12 is based, at least in part, on user input. For example, a user can provide one or more inputs to indicate one or more objects 20 in the video stream 12.
In examples, pixels of the video stream 12 are assigned to different objects 20.
In examples, different display areas 14 can be associated with and/or assigned to different objects 20 determined in the video stream 12.
By way of example, reference is made to
The example of
In the example of
For example, in the illustrated example, the areas of the video stream 12 associated with the person, the bed and the window pane can be determined as display areas 14 and so on.
In examples, display areas 14 can be determined around one or more objects 20 determined in a video stream 12. For example, in the example of
Segmenting the video stream 12 into a plurality of display areas 14 can be performed at any suitable time. For example, segmenting the video stream 12 into a plurality of display areas 14 can be performed at the start of the video stream 12.
In examples, segmenting the video stream 12 can be performed as soon as possible. For example, determining one or more objects 20 in the video stream 12 can be performed when one or more objects 20 become present in the video stream 12.
With reference to the example of
Accordingly, in examples, the video stream 12 can be segmented any number of times for any suitable reason. In examples the video stream 12 can be re-segmented based, at least in part, on the content of the video stream 12, such as removal and/or introduction of one or more objects, changes to one or more objects and so on.
In examples, segmenting the video stream 12 is performed prior to an event 22 in the video stream 12. See, for example,
Referring back to
In examples, recording the video stream 12 for a subset of the plurality of display areas 14 can be performed in any suitable way using any suitable method.
In examples, recording the video stream 12 for a subset 16 of the plurality of display areas 14 comprises saving and/or storing information/data from the video stream 12 for the subset 16 of the plurality of display areas 14.
In examples, recording the video stream 12 for a subset 16 of the plurality of display areas 14 comprises saving and/or storing information/data to enable user playback control within and/or of the subset 16 of the plurality of display areas 14.
In examples, the information/data for the subset 16 of the plurality of display areas 14 can be saved and/or stored in any suitable way. For example, the information/data for the subset 16 of the plurality of display areas 14 can be saved and/or stored in local and/or remote memory.
Accordingly, in examples, method 300 comprises discarding the video stream 12 for area or areas 14 outside of the subset 16 of areas 14. In examples, this provides a large reduction in storage requirements for the video stream 12. This can be particularly relevant with regard to 360 degree video and/or high resolution video.
Areas 14 outside of the subset 16 of areas 14 can be considered areas 14 not included in the subset 16 of areas 14.
In examples, the subset 16 of areas 14 can be determined in any suitable way using any suitable method.
In examples, the subset 16 of areas 14 can be automatically determined and/or determined based, at least in part, on one or more user inputs 30.
For example, the subset 16 of areas 14 can be determined based, at least in part, on one or more objects 20 recognised in the video stream 12.
For example, in the example of
For example, a user can provide one or more inputs to indicate which area 14 or areas 14 should be included in the subset 16 of display areas 14.
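A hedged sketch of recording only the subset 16 (and discarding everything else) is given below: the display areas chosen for the subset, whether automatically or via one or more user inputs 30, each get a bounded frame buffer, and pixels outside the subset are never stored. The buffer length, frame rate and DisplayArea type are illustrative assumptions, not requirements.

```python
# Illustrative sketch of recording the video stream 12 only for the subset 16 of areas 14.
from collections import deque

FPS = 30                      # assumed frame rate, for illustration only
RECORD_SECONDS = 120          # assumed retention window; a design choice, not a claim


class SubsetRecorder:
    def __init__(self, all_areas, subset_ids):
        # keep only the display areas selected for the subset 16
        self.subset = {a.area_id: a for a in all_areas if a.area_id in subset_ids}
        self.buffers = {area_id: deque(maxlen=FPS * RECORD_SECONDS)
                        for area_id in self.subset}

    def record(self, frame):
        """Store the pixels of the subset areas; the rest of the frame is discarded."""
        for area_id, area in self.subset.items():
            x, y, w, h = area.bbox
            self.buffers[area_id].append(frame[y:y + h, x:x + w].copy())
        # nothing outside the subset is retained, which is where the storage
        # saving described above comes from
```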
Referring again to the example of
In the example of
However, in other examples, areas ‘A’, ‘E’ and ‘C’ can represent the subset 16 of the display areas 14 and so on.
Accordingly, in the example of
In examples, segmenting the video stream 12 is performed prior to an event 22 in the subset 16 of the plurality of display areas 14 of the video stream 12. By way of example, reference is made to
In the example of
At time t1, the segmenting of the video stream 12 is performed. For example, the objects 20 in
At time t2, which is after t1, an event 22 occurs in the video stream 12. The event 22 occurs until time t3, as indicated by the dashed line in the example of
In examples, the event 22 can be any suitable event 22, for example particular information being shown on a whiteboard, a particular sentence being spoken, a goal being scored and so on.
Accordingly, it can be seen from the example of
In examples, occurrence of the event 22 is not predetermined. For example, during a football match that is being recorded, it is not known if one or more goals will be scored.
Referring back to
Consequently,
In examples, block 308 can be considered to comprise enabling user playback control of the recorded video stream 12 within the subset 16 of the plurality of display areas 14 while playing the video stream outside of the subset 16 of the display areas 14 without user playback control.
In examples, block 308 can be performed in any suitable way using any suitable method.
Enabling user playback control can be considered enabling a user to control playback of a video stream 12 in any suitable way, for example, using any suitable user input or inputs. For example, one or more control interfaces can be provided to a user to control playback of the recorded video stream 12 within the subset 16 of the plurality of display areas 14.
In examples, user playback control can be enabled for the subset 16 of the plurality of display areas 14 but a user can choose to control playback within one or more of the subset 16 of the plurality of display areas 14 individually.
For example, with regard to
In examples, enabling user playback control within the subset 16 of the plurality of display areas 14 comprises enabling at least one of: pausing, rewinding, fast-forwarding and adjusting playback speed of at least one display area 14 within the subset 16 of display areas 14.
In examples, rewinding is intended to comprise any change in temporal position backwards from a current temporal play position of the video stream 12.
For example, with regard to
In examples, fast-forwarding is intended to comprise any change in temporal position forwards from a current temporal play position of the video stream 12.
For example, with regard to
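As a non-limiting sketch of block 308, each display area 14 in the subset 16 could carry its own playback cursor, so that the area can be paused, rewound, fast-forwarded or played at a different speed while the rest of the video stream 12 continues to play live. The class below and its method names are assumptions made for illustration.

```python
# Illustrative per-area playback state; frames are the recorded frames for one area 14.
class AreaPlayback:
    def __init__(self, frames):
        self.frames = frames                 # recorded frames for this display area
        self.cursor = float(len(frames))     # index of the next recorded frame to show
        self.speed = 1.0                     # 1.0 = normal speed, 0.0 = paused
        self.live = True                     # True: simply mirror the live stream

    def pause(self):
        self.live, self.speed = False, 0.0

    def rewind(self, n_frames):
        self.live = False
        self.cursor = max(0.0, self.cursor - n_frames)

    def fast_forward(self, n_frames):
        self.cursor = min(float(len(self.frames)), self.cursor + n_frames)
        self.live = self.cursor >= len(self.frames)     # caught up with real time again

    def set_speed(self, speed):
        self.live, self.speed = False, float(speed)

    def next_frame(self, live_area_frame):
        if self.live or not self.frames:
            return live_area_frame                      # not under playback control: show live
        frame = self.frames[min(int(self.cursor), len(self.frames) - 1)]
        self.cursor += self.speed
        return frame
```

In such a sketch, a renderer would call next_frame() once per output frame for every display area 14: areas that are not under user playback control simply return the live content, while controlled areas return recorded frames from their own cursor position.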
In examples, enabling user playback control comprises enabling a user to watch again footage from the video stream 12 that has already been presented in the field of view 24 of the user.
For example, the footage may have previously been presented in a display area 14 that the user was looking at while the footage was presented.
For example, the footage may be of an event the user has witnessed live and wishes to review using an augmented reality display device 26. Accordingly, in examples, enabling a user to watch again footage from the video stream 12 that has already been presented in the field of view 24 of the user can be considered to comprise enabling a user to watch footage that occurred in the user's real world field of view.
In examples, segmenting the video stream 12 is performed prior to an event 22 in the subset 16 of the plurality of display areas 14 of the video stream 12 and enabling user playback control within the subset 16 of the plurality of display areas 14 enables a user to review the event 22.
For example, with regard to
In examples, method 300 comprises repositioning display of at least one display area 14 under user playback control.
In examples, repositioning display of at least one display area 14 under user playback control can be performed in any suitable way using any suitable method.
For example, a user can provide one or more user inputs 30 to control repositioning of the display of at least one display area 14.
By way of example, reference is made to
In the example of
In the example of
However, in order to not miss new information provided on the whiteboard, the user repositions the display area 14 of the whiteboard so that a real-time version of the whiteboard can be seen, in addition to the whiteboard display area 14 that is under user playback control.
This is illustrated in the example of
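A minimal sketch of such repositioning, assuming NumPy image arrays and a position supplied via one or more user inputs 30, is shown below; the function name and the simple overlay compositing are assumptions for illustration only.

```python
# Illustrative sketch: draw a recorded (e.g. rewound) copy of a display area 14 at a
# user-chosen position on top of the live frame, leaving the live area 14 in place.
def compose_repositioned(live_frame, recorded_area_frame, new_position):
    out = live_frame.copy()                      # live content, including the live area
    x, y = new_position                          # chosen via one or more user inputs 30
    h, w = recorded_area_frame.shape[:2]
    out[y:y + h, x:x + w] = recorded_area_frame  # rewound copy at its new position
    return out                                   # (bounds checking omitted for brevity)
```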
In examples, method 300 comprises, when the video stream 12 is viewed separately by a plurality of users, enabling independent playback control of at least one display area 14 for different users.
Accordingly, in examples, different users receiving the video stream 12 can control one or more display areas independently of each other.
For example, in the example of
In examples, method 300 comprises providing an indication to at least one user that at least one other user is using user playback control of at least one display area 14.
In examples, a user can control one or more display areas 14 within the subset 16 differently and independently from each other. For example, a user can rewind display area ‘A’ in
For example, a user may rewind display areas ‘C’ and ‘D’ of
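A hedged sketch of such independent control is given below: each user is given their own playback state per display area 14 (reusing the illustrative AreaPlayback class and recorder sketched above), so that rewinding an area for one user does not change how any other user sees that area. All names are assumptions for illustration.

```python
# Illustrative per-user, per-area playback control; AreaPlayback and the recorder are
# the sketches given earlier in this description.
class MultiUserPlayback:
    def __init__(self, recorder):
        self.recorder = recorder       # provides recorder.buffers[area_id]
        self.per_user = {}             # user_id -> {area_id: AreaPlayback}

    def playback_for(self, user_id, area_id):
        areas = self.per_user.setdefault(user_id, {})
        if area_id not in areas:
            # playback state is per user; the recorded buffer itself is shared
            areas[area_id] = AreaPlayback(self.recorder.buffers[area_id])
        return areas[area_id]

    def rewind(self, user_id, area_id, n_frames):
        self.playback_for(user_id, area_id).rewind(n_frames)   # affects only this user
```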
Examples of the disclosure are advantageous and/or provide technical benefits.
For example, examples of the disclosure allow for user playback control of significant areas of a video stream, while reducing the amount of information that must be stored.
For example, discarding the video stream outside of the display area(s) for which playback control is provided and/or is to be provided significantly reduces the storage requirements for the video stream.
For example, examples of the disclosure allow for a user to review one or more events without losing connection with a real-time scene.
For example, examples of the disclosure allow a user to ‘pause’ one or more areas of a scene/video so that the user can focus on that area or those areas, while the remaining areas continue playing in real time. For example, a user in a meeting can review a previous slide on a display without losing view of the present slide and/or of the other participants continuing in the meeting. Without this functionality the user would either have to skip some of the discussion in order to catch up, or would lag behind the other participants and be unable to join in with the real-time discussion.
For example, multiple users can review/share an old view of one or more areas/objects and return to real time without disturbing other users.
Implementation of a controller 830 may be as controller circuitry. The controller 830 may be implemented in hardware alone, have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware).
As illustrated in
The processor 832 is configured to read from and write to the memory 834. The processor 832 may also comprise an output interface via which data and/or commands are output by the processor 832 and an input interface via which data and/or commands are input to the processor 832.
The memory 834 stores a computer program 836 comprising computer program instructions (computer program code) that controls the operation of the apparatus when loaded into the processor 832. The computer program instructions, of the computer program 836, provide the logic and routines that enable the apparatus to perform the methods illustrated in
The apparatus therefore comprises: at least one processor 832; and at least one memory 834 including computer program code 836, the at least one memory 834 and the computer program code 836 configured to, with the at least one processor 832, cause the apparatus at least to perform: receiving a video stream 12; segmenting the video stream 12 into a plurality of display areas 14; recording the video stream 12 for a subset 16 of the plurality of display areas 14; and enabling user playback control of the recorded video stream 12 within the subset 16 of the plurality of display areas 14 without enabling user playback control of the video stream 12 outside of the subset 16 of the display areas 14.
As illustrated in
Computer program instructions for causing an apparatus to perform at least the following or for performing at least the following: receiving a video stream 12; segmenting the video stream 12 into a plurality of display areas 14; recording the video stream 12 for a subset 16 of the plurality of display areas 14; and enabling user playback control of the recorded video stream 12 within the subset 16 of the plurality of display areas 14 without enabling user playback control of the video stream 12 outside of the subset 16 of the display areas 14.
The computer program instructions may be comprised in a computer program, a non-transitory computer readable medium, a computer program product, a machine readable medium. In some but not necessarily all examples, the computer program instructions may be distributed over more than one computer program.
Although the memory 834 is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.
In examples, the memory 834 comprises a random-access memory 858 and a read only memory 860. In examples, the computer program 836 can be stored in the read only memory 860.
Although the processor 832 is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable. The processor 832 may be a single core or multi-core processor.
References to ‘computer-readable storage medium’, ‘computer program product’, ‘tangibly embodied computer program’ etc. or a ‘controller’, ‘computer’, ‘processor’ etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
As used in this application, the term ‘circuitry’ may refer to one or more or all of the following:
(a) hardware-only circuitry implementations (such as implementations in only analog and/or digital circuitry) and
(b) combinations of hardware circuits and software, such as (as applicable):
(i) a combination of analog and/or digital hardware circuit(s) with software/firmware and
(ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and
(c) hardware circuit(s) and or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g. firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
The blocks illustrated in the
Where a structural feature has been described, it may be replaced by means for performing one or more of the functions of the structural feature whether that function or those functions are explicitly or implicitly described.
Thus, the apparatus can comprise means for: receiving a video stream 12; segmenting the video stream 12 into a plurality of display areas 14; recording the video stream 12 for a subset 16 of the plurality of display areas 14; and enabling user playback control of the recorded video stream 12 within the subset 16 of the plurality of display areas 14 without enabling user playback control of the video stream 12 outside of the subset 16 of the display areas 14.
In examples, an apparatus can comprise means for performing one or more methods, and/or at least part of one or more methods, as disclosed herein.
In examples, an apparatus can be configured to perform one or more methods, and/or at least part of one or more methods, as disclosed herein.
The term ‘comprise’ is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use ‘comprise’ with an exclusive meaning then it will be made clear in the context by referring to “comprising only one…” or by using “consisting”.
In this description, reference has been made to various examples. The description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the term ‘example’ or ‘for example’ or ‘can’ or ‘may’ in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some of or all other examples. Thus ‘example’, ‘for example’, ‘can’ or ‘may’ refers to a particular instance in a class of examples. A property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class. It is therefore implicitly disclosed that a feature described with reference to one example but not with reference to another example, can where possible be used in that other example as part of a working combination but does not necessarily have to be used in that other example.
Although examples have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the claims.
Features described in the preceding description may be used in combinations other than the combinations explicitly described above.
Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.
Although features have been described with reference to certain examples, those features may also be present in other examples whether described or not.
The term ‘a’ or ‘the’ is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising a/the Y indicates that X may comprise only one Y or may comprise more than one Y unless the context clearly indicates the contrary. If it is intended to use ‘a’ or ‘the’ with an exclusive meaning then it will be made clear in the context. In some circumstances the use of ‘at least one’ or ‘one or more’ may be used to emphasise an inclusive meaning but the absence of these terms should not be taken to infer any exclusive meaning.
The presence of a feature (or combination of features) in a claim is a reference to that feature or (combination of features) itself and also to features that achieve substantially the same technical effect (equivalent features). The equivalent features include, for example, features that are variants and achieve substantially the same result in substantially the same way. The equivalent features include, for example, features that perform substantially the same function, in substantially the same way to achieve substantially the same result.
In this description, reference has been made to various examples using adjectives or adjectival phrases to describe characteristics of the examples. Such a description of a characteristic in relation to an example indicates that the characteristic is present in some examples exactly as described and is present in other examples substantially as described.
Whilst endeavouring in the foregoing specification to draw attention to those features believed to be of importance it should be understood that the Applicant may seek protection via the claims in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not emphasis has been placed thereon.
Number | Date | Country | Kind
---|---|---|---
21383080.5 | Nov 2021 | EP | regional

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2022/082204 | 11/17/2022 | WO |