The present disclosure relates to a mobile electronic device configured for operation with an arrangement of mobile electronic devices to provide a virtual playout screen for visual media, a media server configured for communication with an arrangement of mobile electronic devices that provide a virtual playout screen for visual media, corresponding methods, and corresponding computer program products.
Mobile electronic devices, such as mobile phones, are increasingly being used for video services, such as gaming applications, social media applications, live peer-to-peer video streaming communications (e.g., Facetime, etc.), and entertainment applications (e.g., Netflix, YouTube, etc.). Mobile electronic devices necessarily have limitations on their display size, resolution, and processing capabilities in order to facilitate mobility, aesthetics, and longer battery life. A concept called “Junkyard Jumbotron” has been proposed for transforming the display devices of a group of mobile electronic devices into a larger virtual display that is used to display an image. This concept requires a user to send a photo of the arranged mobile electronic devices to a network server, which analyzes the photo to determine the layout of the display devices and how to slice up and distribute an image to the mobile electronic devices for collective display. This concept requires certain joint operational capabilities of the mobile electronic devices and the connected server and requires involvement of users, which creates significant limitations to deploying this concept for use with visual media playout.
Some embodiments disclosed herein are directed to a mobile electronic device that is configured for operation with an arrangement of mobile electronic devices to provide a virtual playout screen for visual media. The mobile electronic device includes a wireless network interface circuit, a movement sensor, a display device, a processor, and a memory. The wireless network interface circuit is configured for communication through a wireless communication link. The movement sensor is configured to sense movement of the mobile electronic device. The processor is operationally connected to the display device, the movement sensor, and the wireless network interface circuit. The memory stores program code that is executed by the processor to perform operations. The operations include generating, based on tracking movement indicated by the movement sensor while the mobile electronic device is being moved, a movement vector identifying direction and distance that the mobile electronic device has been moved from a reference location to a playout location where the display device will form a component of the virtual playout screen. The operations provide the movement vector to a media splitting module that determines how to split the visual media into a set of cropped portions for display on assigned ones of the mobile electronic devices based on the movement vector. The operations obtain a cropped portion of the visual media that has been assigned by the media splitting module to the mobile electronic device, and then display the cropped portion of the visual media on the display device.
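For purposes of illustration only, the following non-limiting sketch shows one way the movement vector and a cropped-portion assignment could be represented in software; the Python class and field names are hypothetical and are not part of this disclosure.

```python
from dataclasses import dataclass

@dataclass
class MovementVector:
    """Direction and distance moved from the reference location (hypothetical representation)."""
    dx_mm: float         # translation along the x axis, in millimeters
    dy_mm: float         # translation along the y axis
    dz_mm: float         # translation along the z axis (depth), if tracked
    rotation_deg: float  # rotation about the display normal (e.g., 90 for landscape)

@dataclass
class CroppedPortion:
    """Region of the source media assigned to one device's display (hypothetical)."""
    device_id: str
    left: int            # crop rectangle in source-media pixel coordinates
    top: int
    width: int
    height: int
```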
A potential advantage of these operations is that a more optimally configured virtual playout screen can be more automatically created through use of the movement vectors generated from output of the movement sensor. These operations can be performed independently of any server, such as by a master one of the mobile electronic devices, and/or may be performed by a combination of a media server and the mobile electronic devices. This enables many more options for what component of the system determines the layout of the mobile electronic devices that will provide the virtual playout screen and for what component of the system splits the visual media into the set of cropped portions for display through the mobile electronic devices.
Some other embodiments disclosed herein are directed to a media server configured for communication with an arrangement of mobile electronic devices that provide a virtual playout screen for visual media. The media server includes a network interface circuit, a processor, and a memory. The network interface circuit is configured for communication with the mobile electronic devices. The processor is operationally connected to the network interface circuit. The memory stores program code that is executed by the processor to perform operations. The operations include receiving movement vectors from the mobile electronic devices, where each of the movement vectors identifies direction and distance that one of the mobile electronic devices has been moved from a reference location to a playout location where a display device of the mobile electronic device will form a component of the virtual playout screen. The operations split the visual media into a set of cropped portions for display on assigned ones of the mobile electronic devices based on the movement vectors. The operations also route the cropped portions of the visual media toward the assigned ones of the mobile electronic devices for display.
Other mobile electronic devices, media servers, and corresponding methods and computer program products according to embodiments of the inventive subject matter will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional mobile electronic devices, media servers, methods, and computer program products be included within this description, be within the scope of the present inventive subject matter, and be protected by the accompanying claims. Moreover, it is intended that all embodiments disclosed herein can be implemented separately or combined in any way and/or combination.
Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying drawings. In the drawings:
Inventive concepts will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of various present inventive concepts to those skilled in the art. It should also be noted that these embodiments are not mutually exclusive. Components from one embodiment may be tacitly assumed to be present or used in another embodiment.
Some embodiments are directed to methods and operations by mobile electronic devices and a media server to display visual media through a virtual playout screen that is provided by an arrangement of mobile electronic devices. Methods and operations for splitting the visual media into a set of cropped portions that are assigned for playback by the set of mobile electronic devices may be performed by one of the mobile electronic devices functioning as a master device or may be performed by the media server. These approaches enable many more options for how the system determines the present layout of the mobile electronic devices to provide the virtual playout screen for the visual media, which can reduce the technical complexity and cost of implementing virtual playout screens for both software developers and end-users.
First Aspects of Visual Media Splitting and Display Operations:
Various operations are now described in the context of the non-limiting embodiment of
The MDs 110 can be configured to sense the user's triggering event in many alternative ways. One way that an MD can sense the event is by sensing when a user taps on a topmost one of the stacked MDs 110 or when a user taps on a table or other structure supporting the MDs 110. Another way that an MD can sense the event is by sensing an audible trigger, such as a knock sound, clap sound, spoken command, or other user audible command. Still another way that an MD can sense the event is through receiving a defined user input through a user interface of the MD, such as a touch screen interface and/or mechanical button. Each of the MDs 110 may operate to separately identify occurrence of the triggering event, or only one of the MDs 110, such as a master MD, may operate to identify occurrence of the triggering event and then notify the other MDs 110 through a wireless communication link that the triggering event has been sensed.
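As a non-limiting illustration of the tap-sensing alternative, the sketch below flags a trigger event when the accelerometer magnitude briefly spikes above a threshold; the threshold and spike-duration values are assumptions chosen for illustration only.

```python
import math

GRAVITY = 9.81        # m/s^2
TAP_THRESHOLD = 3.0   # assumed excess acceleration (m/s^2) distinguishing a tap
TAP_MAX_SAMPLES = 5   # a tap is a short spike, not a sustained motion

def detect_tap(samples):
    """Return True if the accelerometer samples contain a short spike
    characteristic of a user tapping the device housing or its support.
    `samples` is a list of (ax, ay, az) tuples in m/s^2."""
    spike = 0
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - GRAVITY) > TAP_THRESHOLD:
            spike += 1
        elif spike:
            # Spike ended: accept it as a tap only if it was brief.
            return spike <= TAP_MAX_SAMPLES
    return False
```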
The MDs track their movement while being moved responsive to determining or being informed of the triggering event, and stop tracking their movement and generate a movement vector responsive to another event. For example, the application may display a prompt to the user with a movement start/stop icon which is selected (e.g., tapped) to initiate tracking of movement by the MDs 110 and which is further selected to cease tracking movement and generate movement vectors, to initiate further operations by one or more of the applications and/or by a media server to determine how to split a visual media into a set of cropped portions for display on assigned ones of the MDs 110 based on the tracked movement, and to cause the cropped portions to be distributed to the assigned ones of the MDs 110 for playout through the virtual playout screen. Alternatively, each of the MDs 110 may generate a movement vector that is communicated to the master MD or to the media server when the MD has remained stationary for at least a threshold time after being moved.
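The stationary-timeout alternative could be implemented along the following lines; this is a minimal sketch, and the dwell time and motion tolerance are assumed values.

```python
import time

STATIONARY_THRESHOLD_S = 2.0  # assumed dwell time before the vector is finalized
MOTION_EPSILON = 0.05         # assumed m/s^2 deviation treated as "no movement"

class MovementTracker:
    def __init__(self):
        self.last_motion_time = time.monotonic()
        self.finalized = False

    def on_accel_sample(self, deviation_from_gravity):
        """Feed the deviation of the accelerometer magnitude from gravity."""
        now = time.monotonic()
        if abs(deviation_from_gravity) > MOTION_EPSILON:
            self.last_motion_time = now
        elif not self.finalized and now - self.last_motion_time >= STATIONARY_THRESHOLD_S:
            self.finalized = True
            self.report_movement_vector()  # e.g., to the master MD or media server

    def report_movement_vector(self):
        print("stationary threshold reached: generate and report movement vector")
```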
Further example operations are now explained in the context of the example movements of the MDs 110 from the initial stacked arrangement shown in
In particular, the user next repeats the process using the now topmost MD2 by making an input to trigger the event which causes the MD2 to begin tracking its movement relative to the reference location 120 while the user moves (rotates to landscape mode and translates) MD2 to the left of MD1 with a side edge of MD2 aligned with a bottom edge of MD1, and then makes the other input to stop tracking movement and generate a movement vector. The user next repeats the process using the now topmost MD3 by making an input to trigger the event which causes MD3 to begin tracking its movement relative to the reference location 120 while the user moves MD3 to have a side edge below and immediately adjacent to the lower side edge of MD2 and the bottom edge of MD1 and rotated to landscape mode, and then makes the other input to stop tracking movement and generate a movement vector. The user next repeats the process using the now topmost MD4 by making an input to trigger the event which causes MD4 to begin tracking its movement relative to the reference location 120 while the user moves MD4 to the right of MD1 with a side edge of MD4 aligned with a bottom edge of MD1 and rotated in landscape mode, and then makes the other input to stop tracking movement and generate a movement vector. The user next repeats the process using the remaining MD5 by making the input which triggers the event which causes MD5 to begin tracking its movement relative to the reference location 120 while the user moves MD5 below MD1 and MD4 and rotated in landscape mode, and then making the other input to stop tracking movement and generate a movement vector.
Although the MD movements illustrated between
For the example operations, MD1 is assumed to operate as a master as explained below. MD1 may be selected from among the MDs 110 to operate as a master device based on comparison of one or more capabilities of the MDs 110. For example, MD1 may be selected as the master based on it having the greatest processing capabilities, highest quality of service link to the media server 200, and/or another highly ranked capability. Alternatively or additionally, the media server 200 may select the master device from among the MDs 110 and/or the user may select which of the MDs 110 will operate as the master.
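A minimal sketch of such capability-based master selection follows; the capability fields and their weights are hypothetical examples rather than values prescribed by this disclosure.

```python
def select_master(devices):
    """Pick the master MD by a weighted ranking of capabilities.
    `devices` maps a device id to a dict of normalized (0..1) scores;
    the weighting here is an assumption, not prescribed by the disclosure."""
    def score(caps):
        return (0.5 * caps.get("processing", 0.0)
                + 0.3 * caps.get("link_qos", 0.0)
                + 0.2 * caps.get("battery", 0.0))
    return max(devices, key=lambda d: score(devices[d]))

# Example: MD1 wins on processing power and quality of service to the server.
devices = {
    "MD1": {"processing": 0.9, "link_qos": 0.8, "battery": 0.6},
    "MD2": {"processing": 0.5, "link_qos": 0.7, "battery": 0.9},
}
assert select_master(devices) == "MD1"
```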
Referring to
MD2-MD5 may separately communicate with MD1 using any wireless communication protocol, although a low latency protocol such as a device-to-device communication protocol may be particularly beneficial. The wireless communication protocol may, for example, use side-link communication features of LTE or New Radio (NR) or may use a cellular radio interface for communications through a radio base station, which may preferably have a relatively low communication round-trip time (e.g., less than 3 ms for NR).
MD1 identifies occurrence of a trigger event, which as described above may correspond to a movement sensor (e.g., accelerometer) sensing a user tapping on a housing or supporting table of MD1, or may correspond to a touchscreen interface or physical switch sensing a defined user input, or may correspond to an audio input such as a spoken command identified via Apple's Siri feature. To reduce the likelihood of falsely identified trigger events, MD1 may require a defined sequence to be sensed, such as a distinct knock sequence. Responsive to sensing the trigger event, MD1 starts tracking its movement via a movement sensor and may communicate 314 a movement tracking command to MD2-MD5 to cause them to start tracking their movement when the user separately moves each of MD2-MD5 to where the user desires the respective display devices to form components of the virtual playout screen. Alternatively, as described above for
Responsive to identifying 312 the trigger event, MD1 may notify 316 the user to move at least MD1 to its desired location for the virtual playout screen. MD1 tracks its movement via the movement sensor while being moved by the user. Responsive to the user entering another input and/or sensing no further movement during a threshold elapsed time, MD1 generates 318 a movement vector identifying direction and distance that MD1 has been moved from a reference location 120 to a playout location where the display device will form a component of the virtual playout screen, based on tracking movement indicated by the movement sensor while being moved. The movement vector may indicate the distance and direction along one or more axes that MD1 moved from the reference location 120 to the final resting location. The movement vector may additionally or alternatively indicate rotation(s) of MD1 about one or more axes relative to the reference location 120. MD2-MD5 similarly track their movement when separately moved by the user to their respective locations to form the virtual playout screen, and generate 320-326 respective movement vectors indicating their locations relative to the reference location 120. MD2-MD5 can separately report 328-334 their generated movement vectors to MD1, which serves as the master according to this example embodiment.
MD1 provides the movement vectors to a media splitting module that determines how to split the visual media into a set of cropped portions for display on assigned ones of MD1-MD5 based on their respective movement vectors, and the determination may further be based on the individual display characteristics of each of MD1-MD5. According to the embodiment of
MD1 operating as the master initiates coordinated playout of the visual media using the cropped portions thereof and the arrangement of the virtual playout screen. The operations for performing the coordinated playout can vary depending upon which element of the system generates the cropped portions of the visual media.
Referring to
According to the first scenario, in which the master MD1 generates the cropped portions of the visual media, MD1 can perform the operations of the media splitting module to determine how to split the visual media into the set of cropped portions for display on assigned ones of MD1-MD5, perform the operations to split the visual media into the cropped portions, and then distribute the assigned ones of the cropped portions to MD2-MD5 for display. MD1 may receive the visual media as a file or as a stream from the media server 200 or may have the visual media preloaded in a local memory. The distribution may be performed through a low latency protocol such as a device-to-device communication protocol, although other communication protocols may be used such as explained above.
According to the second scenario, the master MD1 can perform the operations of the media splitting module to determine how to split the visual media into the set of cropped portions for display on assigned ones of MD1-MD5, which results in generating splitting instructions. MD1 sends the splitting instructions to MD2-MD5 for their respective use in performing the operations to split the visual media into the respective cropped portions that are to be locally displayed on their display devices. MD1-MD5 may receive the visual media as a file or as a stream from the media server 200 or may have the visual media preloaded in a local memory. Alternatively, each of MD1-MD5 can operate in a coordinated manner to perform the operations of the media splitting module to determine how to split the visual media into the set of cropped portions for display on assigned ones of MD1-MD5, which can result in generating the splitting instructions that each of them uses to control how the visual media is split into the cropped portions.
According to the third scenario, the media server 200 generates the cropped portions of the visual media from a copy in local memory for distribution to MD1-MD5. The media server 200 may locally perform the operations of the media splitting module to determine how to split the visual media into the set of cropped portions, may receive splitting instructions from MD1 that identify how the visual media is to be split for all of MD1-MD5, or may receive splitting instructions individually from each of MD1-MD5 that identify how the visual media is to be split for the individual MD. For example, MD1 may operate to perform 336 the media splitting module operations to determine how many cropped portions are to be generated and characteristics (e.g., size, aspect ratio, resolution, etc.) of the cropped portions, which results in generating the splitting instructions, and provide the splitting instructions to the media server 200 to perform the media splitting operation and subsequent sending of the cropped portions to the assigned ones of MD1-MD5. The media server 200 may send each of the cropped portions addressed for transmission directly to the assigned one of MD1-MD5, or may communicate all of the cropped portions addressed to MD1 for forwarding to the assigned ones of the other MD2-MD5.
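To make the splitting instructions concrete, the following sketch derives a crop rectangle for each MD from its reported translation and display size; the coordinate convention and the uniform pixels-per-millimeter scale are simplifying assumptions, and all names are hypothetical.

```python
def build_splitting_instructions(vectors, displays, px_per_mm=5.0):
    """Map each MD's movement vector to a crop rectangle in source-media pixels.
    `vectors` maps device id -> (dx_mm, dy_mm) translation from the reference location;
    `displays` maps device id -> (width_mm, height_mm) of the display.
    A uniform px_per_mm scale across devices is a simplifying assumption."""
    # Shift so the leftmost/topmost display edge maps to pixel (0, 0).
    min_x = min(vectors[d][0] for d in vectors)
    min_y = min(vectors[d][1] for d in vectors)
    instructions = {}
    for dev, (dx, dy) in vectors.items():
        w_mm, h_mm = displays[dev]
        instructions[dev] = {
            "left": round((dx - min_x) * px_per_mm),
            "top": round((dy - min_y) * px_per_mm),
            "width": round(w_mm * px_per_mm),
            "height": round(h_mm * px_per_mm),
        }
    return instructions
```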
Regarding the third scenario,
Each of MD1-MD5 generates 404 a movement vector when it is moved to its virtual screen location, and then reports 406 its movement vector to the media server 200. The media server 200 performs the operations of the media splitting module to determine 408 how to split the visual media into the set of cropped portions. The media server 200 generates the cropped portions of the visual media and then routes 410 the cropped portions to the assigned MD1-MD5. MD1-MD5 receive and display their respective assigned cropped portions of the visual media, and timing can be controlled for when a cropped portion of an individual picture or of a video frame is displayed so that display occurs with timing synchronization across the set of MDs 110.
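One possible approach to the described timing synchronization is to schedule each cropped frame for display at a common presentation time, assuming the MDs share a synchronized clock (e.g., via NTP); that shared-clock assumption, and the function names below, are illustrative only.

```python
import time

def display_at(presentation_time, frame, render):
    """Schedule a cropped frame for display at a shared presentation time.
    `presentation_time` is an absolute epoch time agreed across the MDs."""
    delay = presentation_time - time.time()
    if delay > 0:
        time.sleep(delay)  # coarse sleep; a real player would use a media clock
    render(frame)          # hand the cropped frame to the display pipeline

# Example: the master (or server) stamps each frame with epoch time + fixed offset,
# and every MD calls display_at() with the same stamp for its own cropped portion.
```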
MD1-MD5 operate to display 340-348 their assigned cropped portion of the visual media so that the collection of cropped portions is played out through the virtual playout screen. In the example of
Moreover, it is noted that MD3 and MD5 have larger display areas than MD1, MD2 and MD4, which the media splitting module operations can be aware of and use when deciding on size, aspect ratio, and/or resolution for MD3 and MD5 versus MD1, MD2 and MD4.
Although various operations have been disclosed in the context of using five MDs 110 such as in the manner of
1) For two MDs:
2) For three MDs:
3) For four MDs:
As explained above, aspects of the methods and operations that have been described are not limited to particular disclosed embodiments, but instead are intended to be applicable to any system that can benefit from splitting of digital media for display through a set of MDs that form components of a virtual playout screen. Aspects of these further embodiments are now more generally described with regard to
Referring to
As explained above regarding
As explained above, one of the MDs can operate as a master. Referring to
The master MD may identify 312 occurrence of a trigger event indicative of a user being ready to move individual ones of the MDs from a stacked-on-top-of-each-other arrangement associated with the reference location to an arrangement spaced apart from the reference location and configured to provide the virtual playout screen for playout of the visual media. The master MD, responsive to identification of the occurrence of the trigger event, communicates 314 a command to the other MDs via a wireless network interface circuit that initiates generation of respective movement vectors by the other MDs when moved to the spaced apart arrangement relative to the reference location, and initiates generation of the movement vector by the master MD. Alternatively, each of the MDs may separately identify occurrence of a trigger event. The operation to identify occurrence of a trigger event can include identifying occurrence of a momentary vibration that is characteristic of a physical tap by the user on a portion of the master MD or receipt of a defined input from the user via a user input interface of the master MD.
The master MD may be selected to operate as the master to perform the operations of the media splitting module, based on comparison of a media processing capability that is provided by each of the MDs.
The operation of the media splitting module to determine 336 how to split the visual media into the set of cropped portions can include determining scaling ratios to be applied to scale respective ones of the cropped portions of the visual media for display on assigned ones of the MDs based on media processing capabilities of the assigned ones of the MDs. The media processing capability can include at least one of: display size, display resolution, display bezel size, display color temperature, display brightness, processor speed, memory capacity, and communication quality of service for receiving a cropped portion of the visual media.
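A sketch of deriving such scaling ratios from display capabilities is shown below; normalizing to the lowest pixel density so that the cropped portions appear at a uniform physical size is one assumed policy among many possible ones.

```python
def scaling_ratios(device_caps, target_ppi=None):
    """Compute a scale factor per MD so cropped portions appear at a uniform
    physical size. `device_caps` maps device id -> dict with 'ppi'
    (pixels per inch of the display). Normalizing to the lowest-density
    display is a simplifying assumption."""
    reference_ppi = target_ppi or min(c["ppi"] for c in device_caps.values())
    return {dev: caps["ppi"] / reference_ppi for dev, caps in device_caps.items()}

caps = {"MD1": {"ppi": 460}, "MD3": {"ppi": 264}}  # e.g., a phone and a tablet
print(scaling_ratios(caps))  # MD1's crop is rendered at ~1.74x pixel scale
```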
The master MD may determine from the movement vectors when a condition is satisfied indicating that one of the MDs has moved at least a threshold distance. Responsive to determining that the condition is satisfied, the master MD may initiate repetition of performance of the operations of the media splitting module to determine how to split the visual media into the set of cropped portions for display on assigned ones of the MDs based on the movement vectors.
The master MD may determine when a condition is satisfied indicating that one of the other MDs is no longer available to operate to display a component of the virtual playout screen. The master MD may respond to the condition becoming satisfied by removing the one of the other MDs from a listing of available MDs. Moreover, responsive to determining that the condition has become satisfied, the master MD may initiate repetition of performance of the operations of the media splitting module to determine how to split the visual media into the set of cropped portions for display on assigned ones of the listing of available MDs.
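Both re-splitting conditions could be monitored with logic along the following lines; the movement threshold and the heartbeat timeout used to detect an unavailable MD are assumed values, and all names are hypothetical.

```python
import math
import time

MOVE_THRESHOLD_MM = 20.0   # assumed distance that invalidates the current split
HEARTBEAT_TIMEOUT_S = 5.0  # assumed silence interval before an MD is dropped

def needs_resplit(old_vectors, new_vectors, last_heartbeat, available):
    """Return True if the split must be recomputed: an MD moved at least the
    threshold distance, or an MD stopped responding and was removed from
    the listing of available MDs."""
    changed = False
    for dev, (x0, y0) in old_vectors.items():
        x1, y1 = new_vectors.get(dev, (x0, y0))
        if math.hypot(x1 - x0, y1 - y0) >= MOVE_THRESHOLD_MM:
            changed = True
    now = time.time()
    for dev in list(available):
        if now - last_heartbeat.get(dev, 0) > HEARTBEAT_TIMEOUT_S:
            available.remove(dev)  # drop from the listing of available MDs
            changed = True
    return changed
```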
As explained above, the master MD may operate to split the visual media into the set of cropped portions and route the cropped portions to the assigned ones of the MDs. Referring to the associated operations shown in
When the master MD is operating according to
When the media splitting module operations are performed by a media server, the master MD can operate to communicate the movement vector to the media server via a wireless network interface circuit so that the media server can perform the operations of the media splitting module. The master MD can also receive the cropped portion of the visual media from the media server via the wireless network interface circuit.
In some further embodiments the splitting operation 802 can include determining scaling ratios to be applied to scale respective ones of the cropped portions of the visual media for display on assigned ones of the MDs based on media processing capabilities of the assigned ones of the MDs. The media processing capability can include at least one of: display size, display resolution, display bezel size, display color temperature, display brightness, processor speed, memory capacity, and communication quality of service for receiving a cropped portion of the visual media from the media server.
According to some other aspects, the master MD can delegate responsibility for performing the media splitting module operations to another one of the MDs based on one or more defined rules. For example, the master MD may delegate those operations to another MD that has one or more media processing capabilities that better satisfy a defined rule than the master MD and the other MDs, such as by having one or more of a faster processing speed, greater memory capacity, better communication quality of service for receiving the visual media, etc.
According to some other aspects, the virtual screen application can provide guidance to a user for how to more optimally arrange the MDs to create the virtual playout screen. For example, the application may use the display characteristics of the MDs to compute an optimal arrangement or a set of recommended arrangements for how the MDs should be arranged. In one embodiment, the application determines the optimal arrangement and/or recommended arrangements based on any one or more of the following: the physical sizes of the MD displays, the MD physical sizes, the MD display aspect ratios, the MD display resolutions, and/or the MD display framing widths and/or thicknesses. For example, the arrangement may be computed to require the shortest distances and/or the least amount of rotation during the user's relocation of the MDs to become arranged in the optimal or recommended arrangement as components of the virtual playout screen. The application may determine an amount of overlap for one or more of the MDs by one or more other MDs, such as by having smaller phones overlap a portion or portions of a tablet computer display. The application may display instructions or other visual indicia and/or provide audible guidance to the user for how to rearrange the MDs to create the virtual playout screen.
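As one non-limiting illustration of such guidance, the sketch below scores candidate arrangements by total translation distance plus a rotation penalty and recommends the lowest-cost one; the cost weighting is an assumption chosen for illustration.

```python
import math

ROTATION_COST_MM = 50.0  # assumed penalty equating a 90-degree rotation to 50 mm of travel

def arrangement_cost(current, candidate):
    """Score a candidate arrangement by how far the user must move each MD.
    Both dicts map device id -> (x_mm, y_mm, rotation_deg)."""
    total = 0.0
    for dev, (x0, y0, r0) in current.items():
        x1, y1, r1 = candidate[dev]
        total += math.hypot(x1 - x0, y1 - y0)           # translation distance
        total += (abs(r1 - r0) / 90.0) * ROTATION_COST_MM  # rotation penalty
    return total

def recommend(current, candidates):
    """Return the candidate arrangement with the lowest relocation cost."""
    return min(candidates, key=lambda c: arrangement_cost(current, c))
```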
Adapting to Movement or Loss of an MD that is Part of a Virtual Playout Screen:
According to some other aspects, the virtual screen application can trigger repetition of the operations for splitting the visual media into the cropped portions responsive to determining that one or more of the MDs has been relocated and/or responsive to determining that one or more of the MDs is no longer available for such use.
The media server may redetermine how to split the visual media into a set of cropped portions responsive to determining that at least one of the MDs has been moved. In one embodiment, the operations by the media server include determining from the movement vectors when a condition is satisfied indicating that one of the MDs has moved at least a threshold distance. Responsive to determining that the condition is satisfied, the media server repeats performance of the operation 802 of splitting the visual media into the set of cropped portions for display on assigned ones of the MDs.
Alternatively or additionally, the media server may redetermine how to split the visual media into a set of cropped portions responsive to determining that one of the MDs is no longer available. In one embodiment, the operations by the media server include determining when a condition is satisfied indicating that one of the MDs is no longer available to operate to display a component of the virtual playout screen. The operations remove that one of the MDs from a listing of available MDs. Responsive to determining that the condition is satisfied, the media server repeats performance of the operation 802 of splitting the visual media into the set of cropped portions for display on assigned ones of the MDs among the listing of available MDs.
Adjusting Media Displayed on MDs Based on their Depths:
According to some other aspects, as explained above the movement sensors and tracking operations can be configured to track movement with respect to any number of axes, such as along three orthogonal axes and rotations about any one or more of the axes. Thus, for example, a user may rearrange the MDs 110 to provide a non-planar three-dimensional arrangement of the MDs 110 to create the virtual playout screen. The operations of the media splitting module can compute from the movement vectors the depths as perpendicular distances between the major planar surfaces of the display devices of the MDs, and can perform responsive operations when generating the cropped components, such as scaling any one or more of the zoom ratio (e.g., magnification), physical size, pixel resolution, or aspect ratio of the cropped portions that are assigned to various ones of the MDs based on their respective depths. For example, in one embodiment the operations can proportionally increase the zoom of the image displayed on an MD based on the distance by which it is farther from the user than the major planar surface of a closer MD.
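The described depth compensation could be approximated with a simple perspective scale, as sketched below; the pinhole model and the nominal viewing distance are assumptions for illustration.

```python
def depth_zoom(depth_mm, viewer_distance_mm=500.0):
    """Zoom factor that makes content on an MD recessed by `depth_mm`
    behind the front-most display plane appear the same size to the viewer.
    Assumes a simple pinhole model and a nominal 500 mm viewing distance."""
    return (viewer_distance_mm + depth_mm) / viewer_distance_mm

# Example: an MD lying 100 mm behind the others zooms its crop by 1.2x.
print(depth_zoom(100.0))  # 1.2
```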
According to some other aspects, the MDs may be configured to allow a user to adjust the zoom magnification of a cropped component of the visual media on one of the MDs and to responsively cause the other MDs to adjust their zoom magnifications of the respective cropped components of visual media that they separately display. For example, in one embodiment the user may use an outward pinch gesture to zoom-in on the cropped component being displayed on one of the MDs to cause that MD and the other MDs to appear to simultaneously and proportionally zoom-in on their respective displayed cropped components. The user may similarly use an inward pinch gesture to zoom-out on the cropped component being displayed on one of the MDs to cause that MD and the other MDs to appear to simultaneously and proportionally zoom-out on their respective displayed cropped components.
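Propagating a pinch gesture so that all MDs zoom in concert could be structured as follows; the broadcast mechanism is abstracted behind a callback, and all names are hypothetical.

```python
class SharedZoom:
    """Keeps zoom magnification consistent across all MDs: a pinch on any
    one device updates the shared factor and broadcasts it to the others."""
    def __init__(self, broadcast):
        self.factor = 1.0
        self.broadcast = broadcast  # e.g., device-to-device message to peer MDs

    def on_pinch(self, scale_delta):
        """scale_delta > 1 for an outward pinch (zoom in), < 1 for inward."""
        self.factor = max(1.0, self.factor * scale_delta)
        self.broadcast({"type": "zoom", "factor": self.factor})

    def on_peer_zoom(self, message):
        """Apply a zoom factor received from the MD where the gesture occurred."""
        self.factor = message["factor"]
```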
Some or all operations described above as being performed by the MDs and/or the media server may alternatively be performed by another node that is part of a cloud computing resource. For example, those operations can be performed as a network function that is close to the edge, such as in a cloud server or a cloud resource of a telecommunications network operator, e.g., in a CloudRAN or a core network, and/or may be performed by a cloud server or a cloud resource of a media provider, e.g., iTunes service provider.
In one embodiment the movement sensor 930 includes a multi-axis accelerometer that outputs data indicating sensed accelerations along orthogonal axes. The operation to generate, e.g., 318-326 in
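A minimal dead-reckoning sketch of that generation operation follows: gravity-compensated accelerations are integrated twice to obtain the translation component of the movement vector. Drift correction and sensor fusion (e.g., with a gyroscope) that a practical implementation would need are omitted, and the sampling values are assumptions.

```python
def integrate_movement(samples, dt):
    """Double-integrate accelerometer samples into a displacement vector.
    `samples` is a list of (ax, ay, az) accelerations in m/s^2 with gravity
    already removed (e.g., via the platform's linear-acceleration sensor);
    `dt` is the sampling interval in seconds."""
    vx = vy = vz = 0.0
    x = y = z = 0.0
    for ax, ay, az in samples:
        vx += ax * dt; vy += ay * dt; vz += az * dt  # acceleration -> velocity
        x += vx * dt;  y += vy * dt;  z += vz * dt   # velocity -> displacement
    return (x, y, z)  # meters moved from the reference location

# Example: 1 m/s^2 along x for 1 s at 100 Hz yields roughly 0.5 m of travel.
print(integrate_movement([(1.0, 0.0, 0.0)] * 100, 0.01))
```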
In another embodiment the movement sensor 930 includes a camera that outputs video. The operation to generate, e.g., 318-326 in
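Where the movement sensor is a camera, the translation could instead be estimated visually. The sketch below accumulates mean dense optical flow between successive grayscale frames using OpenCV's Farneback routine; the flow-to-millimeter calibration constant is an assumed value.

```python
import cv2
import numpy as np

PX_TO_MM = 0.4  # assumed calibration: millimeters of device travel per pixel of flow

def track_translation(frames):
    """Estimate planar device translation by accumulating dense optical flow
    over a sequence of 8-bit grayscale frames from the device camera."""
    total = np.zeros(2)
    prev = frames[0]
    for frame in frames[1:]:
        flow = cv2.calcOpticalFlowFarneback(
            prev, frame, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        total += flow.reshape(-1, 2).mean(axis=0)  # mean (dx, dy) in pixels
        prev = frame
    # The scene moves opposite to the device, hence the sign flip.
    return -total * PX_TO_MM  # (x_mm, y_mm) component of the movement vector
```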
In the above-description of various embodiments of present inventive concepts, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of present inventive concepts. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which present inventive concepts belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
When an element is referred to as being “connected”, “coupled”, “responsive”, or variants thereof to another element, it can be directly connected, coupled, or responsive to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected”, “directly coupled”, “directly responsive”, or variants thereof to another element, there are no intervening elements present. Like numbers refer to like elements throughout. Furthermore, “coupled”, “connected”, “responsive”, or variants thereof as used herein may include wirelessly coupled, connected, or responsive. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Well-known functions or constructions may not be described in detail for brevity and/or clarity. The term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that although the terms first, second, third, etc. may be used herein to describe various elements/operations, these elements/operations should not be limited by these terms. These terms are only used to distinguish one element/operation from another element/operation. Thus, a first element/operation in some embodiments could be termed a second element/operation in other embodiments without departing from the teachings of present inventive concepts. The same reference numerals or the same reference designators denote the same or similar elements throughout the specification.
As used herein, the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components or functions but do not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions or groups thereof. Furthermore, as used herein, the common abbreviation “e.g.”, which derives from the Latin phrase “exempli gratia,” may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item. The common abbreviation “i.e.”, which derives from the Latin phrase “id est,” may be used to specify a particular item from a more general recitation.
Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).
These computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks. Accordingly, embodiments of present inventive concepts may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor such as a digital signal processor, which may collectively be referred to as “circuitry,” “a module” or variants thereof.
It should also be noted that in some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Finally, other blocks may be added/inserted between the blocks that are illustrated, and/or blocks/operations may be omitted without departing from the scope of inventive concepts. Moreover, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
Many variations and modifications can be made to the embodiments without substantially departing from the principles of the present inventive concepts. All such variations and modifications are intended to be included herein within the scope of present inventive concepts. Accordingly, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended examples of embodiments are intended to cover all such modifications, enhancements, and other embodiments, which fall within the spirit and scope of present inventive concepts. Thus, to the maximum extent allowed by law, the scope of present inventive concepts is to be determined by the broadest permissible interpretation of the present disclosure including the following examples of embodiments and their equivalents, and shall not be restricted or limited by the foregoing detailed description.