This invention relates to a method and apparatus for encoding image data.
In the field of computer graphics, there is often a need to provide protection and security against image data being copied. For example, in the automotive industry a typical cluster and infotainment system uses computer graphics made up of multiple layers of artwork to display information etc. to a user. In some cases, around 10 GB of textures may be combined to generate a composite image to display to a user, with many masking layers being used to provide desired shadow and light effects. A significant amount of work, often carried out by a dedicated team of people, is required to generate the artwork, optimise the layers and introduce the details (e.g. gradients, shadows, light effects, etc.) in order to produce the desired, uniform visual effect. Accordingly, there is significant commercial value in the image data used within such cluster and infotainment systems.
In a cluster and infotainment system, such a composite image scene may be generated from a large number of individual graphics objects, for example representing background images, foreground images, icons for messages, errors and warnings, dials, etc. Furthermore, such a composite image scene may be adapted depending on various conditions, such as whether it is daytime or night time, or a selected profile (e.g. comfort, sport, etc.). As the amount of information required to be displayed by modern cluster and infotainment systems increases, the amount of image data required to enable such information to be displayed increases significantly.
A problem with the implementation illustrated in
A prior art solution proposed in Chapter 36 of “GPU Gems 3”, edited by Hubert Nguyen, comprises providing an AES (Advanced Encryption Standard) implementation within the GPU 140, thereby allowing the GPU 140 to perform the task of decrypting the image data. In this manner, the read/write accesses by the CPU 130 would no longer be required, reducing the bus overhead as well as reducing the load on the CPU 130. However, such an implementation would require a high-end GPU, which is not feasible within embedded applications such as those used in the automotive industry for cluster and infotainment systems.
The present invention provides a method of encoding image data defining a graphics object, a processing device for encoding image data defining graphics objects, a method of generating image data for display, a processing device for generating image data for display and a non-transitory computer program product having executable program code stored therein as described in the accompanying claims.
Specific embodiments of the invention are set forth in the dependent claims.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
Further details, aspects and embodiments of the invention will be described, by way of example only, with reference to the drawings. In the drawings, like reference numbers are used to identify like or functionally similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
The present invention will now be described with reference to the accompanying drawings, and in particular with reference to a multi-core system on chip comprising a general processing unit arranged to generate image data for display in accordance with some examples of one aspect of the present invention. However, it will be appreciated that such an aspect of the present invention is not limited to being implemented within a system on chip device, and may equally be implemented within alternative forms of processing devices, and in particular within alternative forms of data processing integrated circuit devices such as, say, general purpose microprocessors, microcontrollers, network processors, and digital signal processors (DSPs).
Furthermore, because the illustrated embodiments of the present invention may, for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained to any greater extent than that considered necessary, as illustrated below, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.
In summary, some examples of the present invention comprise a method and apparatus for encoding image data defining a graphics object. The method comprises partitioning the graphics object into a plurality of sub-images, deriving digital image data for each sub-image, the digital image data defining the respective sub-image, deriving sub-image position data defining the relative positioning of the sub-images within the graphics object, scrambling the digital image data for the plurality of sub-images, encrypting the sub-image position data, and outputting encoded image data defining the graphics object, the encoded image data comprising the scrambled sub-image data and the encrypted sub-image position data.
In this manner, the original graphics object may be recreated by de-scrambling the sub-image data in accordance with the (un-encrypted) sub-image position data, and re-combining the sub-images in their appropriate relative positions. However, because the sub-image position data is encrypted, recreating the original graphics object from the encoded data is made difficult without the ability to decrypt the encrypted sub-image position data. Thus, protection and security are provided for the graphics object against the image data being copied.
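By way of a purely illustrative, non-limiting sketch, the encoding steps summarised above may be expressed in Python-like form as follows. All names (e.g. encode_graphics_object, tile_w) and the fixed-size tiling are assumptions made for the purpose of illustration only, and the encryption step is represented by an abstract encrypt_bytes() routine rather than any specific cipher.

```python
import random

def partition(obj, obj_w, obj_h, tile_w, tile_h):
    """Split a graphics object (a 2-D list of pixel values) into equally sized sub-images."""
    tiles = []
    for y in range(0, obj_h, tile_h):
        for x in range(0, obj_w, tile_w):
            tile = [row[x:x + tile_w] for row in obj[y:y + tile_h]]
            tiles.append(((x, y), tile))          # (relative position, sub-image pixel data)
    return tiles

def encode_graphics_object(obj, obj_w, obj_h, tile_w, tile_h, encrypt_bytes):
    """Partition, scramble and output encoded image data; only the position data is encrypted."""
    tiles = partition(obj, obj_w, obj_h, tile_w, tile_h)
    order = list(range(len(tiles)))
    random.shuffle(order)                          # scrambling function: a random permutation
    scrambled = [tiles[i][1] for i in order]       # scrambled sub-image data
    positions = [tiles[i][0] for i in order]       # sub-image position data, in scrambled order
    pos_blob = b"".join(x.to_bytes(2, "big") + y.to_bytes(2, "big") for (x, y) in positions)
    return scrambled, encrypt_bytes(pos_blob)      # scrambled tiles + encrypted position data
```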
Significantly, only the sub-image position data for each graphics object is encrypted, representing only a small fraction of the complete information for each graphics object. As such, the processing overhead required for decryption, a processing heavy task, is considerably reduced compared with that of the prior art solution illustrated in
Referring now to
Each of the processor cores 210, 241 may be configured to execute instructions and to process data according to a particular instruction set architecture (ISA), such as x86, PowerPC, SPARC™, MIPS™, and ARM™, for example. Those of ordinary skill in the art also understand the present invention is not limited to any particular manufacturer's microprocessor design. The processor core may be found in many forms including, for example, any 32-bit or 64-bit microprocessor manufactured by Freescale™, Motorola™, Intel™, AMD™, Sun™ or IBM™. However, any other suitable single or multiple microprocessors, microcontrollers, or microcomputers may be utilized. In addition, the term “core” refers to any combination of hardware, software, and firmware typically configured to provide a processing functionality with respect to information obtained from or provided to associated circuitry and/or modules (e.g., one or more peripherals, as described below). Such cores include, for example, digital signal processors (DSPs), central processing units (CPUs), microprocessors, and the like. These cores are often also referred to as masters, in that they often act as a bus master with respect to any associated peripherals.
The processor cores 210 and accelerator(s) 241 are in communication with the interconnect bus 250, which manages data flow between the cores 210, 241 and memory. The interconnect bus 250 may be configured to concurrently accommodate a large number of independent accesses that are processed on each clock cycle, and enables communication of data requests from the processor cores 210, 241 to system memory 263 and/or an on-chip non-volatile memory 262, as well as data responses therefrom. In selected embodiments, the interconnect bus 250 may include logic (such as multiplexers or a switch fabric, for example) that allows any core 210, 241 to access any bank of memory, and that conversely allows data to be returned from any memory bank to any core 210, 241. The interconnect bus 250 may also include logic to queue data requests and/or responses, such that requests and responses do not block other activity while waiting for service. Additionally, the interconnect bus 250 may be configured as a chip-level arbitration and switching system (CLASS) to arbitrate conflicts that may occur when multiple cores attempt to access a memory or vice versa.
Memory controller 261 is arranged to provide access to the optional SoC internal (on-chip) non-volatile memory 262 or system memory 263. For example, memory controller 261 may be configured to manage the transfer of data between the multi-core SoC 205 and system memory 263. In some embodiments, multiple instances of memory controller 261 may be implemented, with each instance configured to control a respective bank of system memory 263. Memory controller 261 may be configured to interface to any suitable type of system memory, such as Double Data Rate or Double Data Rate 2 or Double Data Rate 3 Synchronous Dynamic Random Access Memory (DDR/DDR2/DDR3 SDRAM), or Rambus DRAM (RDRAM), for example. In some embodiments, memory controller 261 may be configured to support interfacing to multiple different types of system memory. In addition, the Direct Memory Access (DMA) controller 242 may be provided which controls the direct data transfers to and from system memory 263 via memory controller 261.
In the illustrated example, the multi-core SoC 205 comprises a dedicated GPU 200. The GPU 200 may be configured to manage the transfer of data between itself and the rest of the multi-core SoC 205, for example through the interconnect bus 250. The GPU 200 may include one or more processor cores for supporting hardware accelerated graphics generation. The graphics generated by the GPU 200 may be outputted to one or more displays via any display interface such as low-voltage differential signalling (LVDS), high definition multimedia interface (HDMI), digital visual interface (DVI) and the like.
Referring now to
The next part of the example encoding process illustrated in
In some example embodiments, it is contemplated that the process of scrambling the digital sub-image data 420 may additionally comprise rotating some or all of the sub-images. Additionally/alternatively, it is contemplated that if the digital sub-image data 420 comprises the same information for two or more of the sub-images, the scrambled digital image data defining the plurality of sub-images 422 need only comprise one instance of the digital sub-image data 420 defining the two or more sub-images. In such a case, the sub-image position data 430 may define multiple positions for a single instance of digital sub-image data 420 within the original graphics object 410.
The example encoding process illustrated in
(i) scrambled digital image data defining a plurality of sub-images 422; and
(ii) encrypted sub-image position data 430.
The original graphics object 410 may easily be recreated by de-scrambling the sub-image data 422 in accordance with the (un-encrypted) sub-image position data 425, and re-combining the sub-images 420 in their appropriate relative positions. However, because the sub-image position data 425 is encrypted, recreating the original graphics object 410 from the encoded data 305 is made extremely difficult without the ability to decrypt the encrypted sub-image position data 430. For example, assume a case where an attacker is able to take pictures of a displayed (composite) image, and to capture the scrambled digital image data 422 defining the sub-images 420 for a graphics object 410. If the attacker were to employ a brute force algorithm to reconstruct the original graphics object 410, for the illustrated simplified example of
In some examples, it is contemplated that different scrambling functions are applied to the sub-image data 420 for different graphics objects. In this manner, even if an attacker manages to obtain the sub-image position data 425 for descrambling the scrambled sub-image data 422 for one graphics object 410, the obtained sub-image position data 425 will not enable the attacker to de-scramble the scrambled sub-image data 422 for all graphics objects 410. For example, as described above the scrambling function may comprise sequentially performing a swap operation on the sub-image data for each sub-image 420, whereby the sub-image data for the subject sub-image is swapped with the sub-image data for another, randomly selected, sub-image 420. As such, the swapping of sub-images 420, and thus the scrambling of the sub-image data 420, is performed in a substantially random manner.
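A minimal sketch of such a swap-based scrambling function is given below, assuming the sub-image data is held as a Python list and that a per-object seed selects a different scrambling function for each graphics object; the names used are illustrative only.

```python
import random

def scramble_sub_images(sub_images, seed=None):
    """Scramble sub-image data by sequentially swapping each sub-image with another,
    randomly selected, sub-image; a different seed yields a different scrambling
    function for each graphics object."""
    rng = random.Random(seed)
    order = list(range(len(sub_images)))
    for i in range(len(order)):
        j = rng.randrange(len(order))              # randomly selected partner sub-image
        order[i], order[j] = order[j], order[i]
    scrambled = [sub_images[k] for k in order]     # order[] records where each sub-image came from,
    return scrambled, order                        # from which the position data can be derived
```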
The encryption of the sub-image position data 425 may be performed based on any suitable encryption methodology. One simple example encryption approach comprises a key correlation approach. However, it will be appreciated that the present invention is not limited to any specific encryption methodology, and a person skilled in the art would easily be able to implement such encryption based on various different encryption strategies.
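Purely as an illustration of encrypting the (comparatively small) sub-image position data with an off-the-shelf symmetric cipher, a sketch using the Python cryptography package is given below; the key handling shown is hypothetical and stands in for the secure key storage described elsewhere herein.

```python
from cryptography.fernet import Fernet   # off-the-shelf symmetric (AES-based) cipher

def encrypt_position_data(position_data: bytes, key: bytes) -> bytes:
    """Encrypt only the sub-image position data; the bulk sub-image pixel data
    remains un-encrypted (merely scrambled), keeping decryption overhead small."""
    return Fernet(key).encrypt(position_data)

def decrypt_position_data(token: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(token)

# Hypothetical usage:
# key = Fernet.generate_key()            # would be held in a secure area of memory
# encrypted = encrypt_position_data(pos_blob, key)
```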
In the example illustrated in
Furthermore, it will be appreciated that the present invention is not limited to partitioning the graphics object based on dissection points. For example, the graphics object 410 may be partitioned based on predefined or dynamically (e.g. randomly) defined sub-image block sizes.
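A hedged sketch of such a dynamically defined partitioning is given below, in which randomly chosen dissection points along each axis define a grid of differently sized sub-image blocks; the function names and parameters are assumptions made for illustration only.

```python
import random

def random_dissection_points(length, n_cuts, rng):
    """Pick n_cuts distinct dissection points strictly inside (0, length)."""
    return [0] + sorted(rng.sample(range(1, length), n_cuts)) + [length]

def random_partition_grid(width, height, n_cuts_x, n_cuts_y, seed=None):
    """Return the (x, y, w, h) rectangles of a randomly defined sub-image grid."""
    rng = random.Random(seed)
    xs = random_dissection_points(width, n_cuts_x, rng)
    ys = random_dissection_points(height, n_cuts_y, rng)
    return [(x0, y0, x1 - x0, y1 - y0)
            for y0, y1 in zip(ys, ys[1:])
            for x0, x1 in zip(xs, xs[1:])]
```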
Referring back to
In the example illustrated in FIGS. 3 and 4, only the sub-image position data 425 for each graphics object 410 is encrypted, representing only a small fraction of the complete information for each graphics object 410, e.g. 2-3%. As such, the processing overhead required for decryption, a processing heavy task, is considerably reduced compared with that of the prior art solution illustrated in
Having decrypted the encrypted sub-image position data 430 for a graphics object 410, the de-scrambling of the scrambled sub-image data 422 for the graphics object 410 is a relatively simple task, requiring minimal processing overhead. Significantly, the processing overhead required for the reduced decryption and de-scrambling of the encoded image data 305 in the example illustrated in FIGS. 3 and 4 is sufficiently reduced as compared with the prior art solution of
Accordingly, in the example illustrated in
In some alternative examples, encryption/decryption techniques may be implemented that do not require a decryption key to be stored for decrypting the encrypted sub-image position data. For example, physically unclonable functions may be embedded within hardware.
The GPU 200 may then generate a composite image to be displayed based on the individual graphics objects, for example representing background images, foreground images, icons for messages, errors and warnings, dials, etc. Furthermore, such a composite image may be adapted depending on various conditions, such as whether it is daytime or night time, or a selected profile (e.g. comfort, sport), etc. The image data 310 for the composite image is then made available to a display controller 320 for displaying on a display 330, for example by way of the GPU 200 writing the image data 310 for the composite image back to system memory 263.
In the example illustrated in
Referring now to
Referring now to
In the illustrated example, the decryption module 620 and the scrambling module 625 have been illustrated as comprising discrete components for ease of understanding. However, it will be appreciated that they may equally be implemented jointly within a single hardware component.
The scrambling module 625 writes the encoded image data 305 for each graphics object comprising the scrambled digital sub-image data and the encrypted sub-image position data into system memory 263, from where it may be processed etc. for display. In the example illustrated in
The encoded image data 305 for each graphics object to be displayed may then be read by the GPU 200. The GPU 200 is then able to decrypt the encrypted sub-image position data using the decryption key within the secure area of memory 350 and recombine the sub-images to obtain the graphics objects. The GPU 200 may then generate a composite image to be displayed based on the individual graphics objects, for example representing background images, foreground images, icons for messages, errors and warnings, dials, etc. Furthermore, such a composite image may be adapted depending on various conditions, such as whether it is daytime or night time, or a selected profile (e.g. comfort, sport), etc. The image data 310 for the composite image is then made available to a display controller 320 for displaying on a display 330, for example by way of the GPU 200 writing the image data 310 for the composite image back to system memory 263. In some alternative embodiments, the image data 310 for the composite image may be made available to the display controller 320 by way of writing the image data 310 for the composite image to a secure area of memory, such as performed in the example illustrated in
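Complementing the encoding sketch given earlier in this description, the decoding performed on the display side could, in a purely illustrative form, look as follows; decrypt_bytes stands in for whatever decryption the chosen cipher requires, and the object dimensions are assumed to be known to the decoder.

```python
def decode_graphics_object(scrambled, encrypted_positions, obj_w, obj_h, decrypt_bytes):
    """Recreate the original graphics object from scrambled sub-image data and
    encrypted sub-image position data (inverse of the encoding sketch above)."""
    pos_blob = decrypt_bytes(encrypted_positions)
    positions = [(int.from_bytes(pos_blob[i:i + 2], "big"),
                  int.from_bytes(pos_blob[i + 2:i + 4], "big"))
                 for i in range(0, len(pos_blob), 4)]
    obj = [[None] * obj_w for _ in range(obj_h)]   # empty canvas for the recreated object
    for (x, y), tile in zip(positions, scrambled):
        for dy, row in enumerate(tile):
            obj[y + dy][x:x + len(row)] = row      # place each sub-image at its decrypted position
    return obj
```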
Referring now to
The method then moves on to 760, where the digital sub-image data for the plurality of sub-images is scrambled. As described in greater detail above with reference to
The method then ends at 790.
Referring now to
The method then ends at 870.
In accordance with some alternative examples of the present invention, it is contemplated that compression may be applied to the digital image data for at least some of the sub-images into which a graphics object is partitioned, to reduce the size of the encoded data for a graphics object.
In the illustrated example, the graphics object 910 is partitioned into sub-images 980, each of which is defined on the basis of at least one geometric primitive. Each of the geometric primitives provides a two-dimensional surface, which is to be filled with an image detail of the unprocessed image in accordance with the positioning of the sub-image 980. A texture mapping operation is applied, by way of a texture lookup, to determine the texture pixels, the so-called texels, from a texture image with which to fill the surface. The texture image data represents the data basis to which the texture mapping operation is applied to fill the surface. In other words, a two-dimensional texture image is mapped onto the surface of the graphic primitive. The two-dimensional texture image is defined in texture image space, and the texture lookup, which constitutes the texture mapping, is performed using interpolated texture coordinates to determine the pixels with which to fill the surface of the graphic primitive.
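To make the texture lookup concrete, a minimal sketch of a nearest-neighbour lookup of a texel from a two-dimensional texture image using normalised, interpolated texture coordinates is given below; this is a generic illustration and not the behaviour of any particular graphics subsystem.

```python
def texture_lookup_nearest(texture, u, v):
    """Nearest-neighbour texture lookup.

    'texture' is a 2-D list of texels; (u, v) are normalised texture coordinates
    in [0, 1] interpolated over the surface of the geometric primitive."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]
```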
The image data corresponding to each sub-image (in accordance with the positioning of the sub-image within the image space) may further be analysed in order to determine whether the image data thereof is compressible or not. In response to the analysis result, a texture image is generated for each of the sub-images. For example, the change of pixel values over the image data subset may be describable on the basis of a texture mapping operation in the pixel value space of the texture image. Texture mapping operations may include, for instance, interpolation operations such as nearest neighbour sampling, linear interpolation, bilinear interpolation, cubic interpolation and bi-cubic interpolation. Thus, data may be considered to be compressible if the retrieved pixel values of the subset of image data are describable on the basis of a texture mapping operation and selected pixel values of the subset of image data. Two examples are described below for the sake of a deeper understanding:
1. Image Data Subset with Similar Pixel Values:
The subset of the image data for the graphics object 910 corresponding to the sub-image 980 may be retrieved, and it is determined from the subset of image data whether the pixel values of the retrieved subset of image data have the same pixel value or have similar values. Similar pixel values should be understood to mean that the pixel values differ within a predefined distance (colour range) in colour space. This means that the predefined distance in colour space allows for defining a measure of similarity of the pixel values. Quantifying metrics are known in the art to determine the difference or distance between two colours in colour space. For instance, such metrics make use of the Euclidean distance in a device independent colour space.
If the pixel values have the same value or similar pixel values then data of a texture image in compressed form may be defined, which has a single pixel with a pixel value corresponding to the same pixel value or a pixel value representative of the pixel values differing within a predefined distance in colour space. The representative pixel value may be an average pixel value with respect to the colour space. Hence, the texture image may comprise a single pixel having assigned the pixel value resulting from the analysis of the subset of image data. Texture mapping data is assigned to the geometry data of the sub-image, which texture mapping data enables mapping the defined compressed texture image onto the surface of the at least one geometric primitive of the sub-image.
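A minimal sketch of this first case follows, assuming RGB pixel tuples and a Euclidean distance in colour space as the similarity metric; the threshold value and function names are illustrative assumptions only.

```python
import math

def is_flat(pixels, max_distance=4.0):
    """True if all pixel values lie within max_distance (in colour space) of the
    average pixel value, i.e. the subset has the same or similar pixel values."""
    n = len(pixels)
    avg = tuple(sum(p[c] for p in pixels) / n for c in range(3))
    return all(math.dist(p, avg) <= max_distance for p in pixels)

def compress_flat_sub_image(pixels):
    """If compressible, the texture image collapses to a single representative texel."""
    if not is_flat(pixels):
        return None                                   # caller falls back to an uncompressed texture
    n = len(pixels)
    return [tuple(round(sum(p[c] for p in pixels) / n) for c in range(3))]   # one-pixel texture image
```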
2. Image Data Subset with Colour Gradient:
The subset of the image data for the graphics object 910 corresponding to the sub-image 980 may be retrieved and it is determined from the subset of image data whether the pixel values of the retrieved subset of image data change in accordance with a colour gradient extending over the range of the image data subset. The change of pixel values may follow a colour gradient within a predefined range of variation described by a distance in colour space.
If the pixel values are determined to show a colour gradient extending over the sub-image that is describable by initial gradient values and final gradient values, then data of a texture image in compressed form is defined, which has pixels with pixel values corresponding to the initial gradient values and pixels with pixel values corresponding to the final gradient values. In particular, depending on the interpolation operation used, one or two initial gradient values (and correspondingly one or two final gradient values) are required. The texture mapping operation reconstructs the colour gradient by interpolation between the initial gradient values and the final gradient values to obtain the values lying in-between within the texture pixel space. The interpolation operation may be a linear interpolation operation on the basis of one initial gradient value and one final gradient value, a cubic interpolation operation on the basis of one initial gradient value and one final gradient value, a bi-linear interpolation operation on the basis of one or two initial gradient values and one or two final gradient values, or a bi-cubic interpolation operation on the basis of one or two initial gradient values and one or two final gradient values. Hence, the texture image may comprise only two to four pixels having assigned the pixel values resulting from the analysis of the subset of image data. Texture mapping data is assigned to the geometry data of the sub-image, which texture mapping data enables mapping the defined compressed texture image onto the surface of the at least one geometric primitive of the sub-image.
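Correspondingly, a sketch of the second case follows: the sub-image is tested against a bilinear colour gradient spanned by its four corner pixel values and, where the test succeeds, only those corner texels are retained; reconstruction is then a bilinear interpolation across the primitive surface. The tolerance and names are illustrative assumptions.

```python
import math

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation between four corner pixel values (RGB tuples)."""
    return tuple((1 - u) * (1 - v) * c00[k] + u * (1 - v) * c10[k]
                 + (1 - u) * v * c01[k] + u * v * c11[k] for k in range(3))

def compress_gradient_sub_image(tile, tolerance=4.0):
    """Keep only the four corner texels if the tile is describable as a bilinear
    colour gradient within 'tolerance' (a distance in colour space)."""
    h, w = len(tile), len(tile[0])
    c00, c10 = tile[0][0], tile[0][w - 1]              # initial gradient values
    c01, c11 = tile[h - 1][0], tile[h - 1][w - 1]      # final gradient values
    for y in range(h):
        for x in range(w):
            u = x / (w - 1) if w > 1 else 0.0
            v = y / (h - 1) if h > 1 else 0.0
            if math.dist(tile[y][x], bilerp(c00, c10, c01, c11, u, v)) > tolerance:
                return None                            # not a gradient; use uncompressed texture data
    return [c00, c10, c01, c11]                        # 2x2 texture image of gradient values
```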
If the subset of the image data for the graphics object 910 corresponding to the sub-image 980 is considered not to be compressible (e.g., the pixel values of the image data subset are not describable on the basis of a texture mapping operation and selected pixel values of the image data subset), data of a texture image in uncompressed form may be defined. The data of the texture image in uncompressed form comprises pixels with pixel values corresponding to the pixel values of the retrieved subset of image data. Texture mapping data is assigned to the geometry data of the sub-image, which texture mapping data enables mapping the defined texture image onto the at least one geometric primitive of the sub-image.
Referring now to
Having partitioned the graphics object into sub-images, the method moves on to 1025 where, for each sub-image, at least one geometric primitive is defined, at 1033. The at least one geometric primitive may be defined by assigning a set of vertices, which defines the positioning of the at least one geometric primitive in the image domain with respect to the image space. Accordingly, the at least one geometric primitive represents the geometry of the respective sub-image and the positioning of the respective sub-image with respect to the image space. Each vertex comprises, for instance, two-dimensional position information, i.e. a coordinate defining the positions x and y with respect to a predefined coordinate origin of the image space, which may for instance be located at the top left corner of the image as exemplarily illustrated in
The so-called geometric primitive is a basic geometric object supported by the target graphics subsystem. For instance, a sub-image may be defined to comprise N×M pixels of the image. The set of vertices may comprise four vertices, which define two triangle primitives sharing two vertices. Provided that only rectangular sub-images are defined, two vertices may be sufficient to define such a rectangular sub-image, e.g. the coordinates 990 and 993 illustratively shown in
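As a hedged illustration of such geometry data, the sketch below defines a rectangular N×M sub-image either by four vertices (two triangle primitives sharing two vertices) or, where only rectangular sub-images occur, by two opposite corner coordinates; the data layout shown is an assumption rather than a prescribed format.

```python
from typing import List, NamedTuple, Tuple

class Vertex(NamedTuple):
    x: int   # position relative to the image-space origin (e.g. the top left corner of the image)
    y: int

def rect_sub_image_vertices(x: int, y: int, n: int, m: int) -> List[Vertex]:
    """Four vertices defining two triangle primitives that share two vertices."""
    return [Vertex(x, y), Vertex(x + n, y), Vertex(x, y + m), Vertex(x + n, y + m)]

def rect_sub_image_corners(x: int, y: int, n: int, m: int) -> Tuple[Vertex, Vertex]:
    """Two opposite corners suffice when only rectangular sub-images are defined."""
    return Vertex(x, y), Vertex(x + n, y + m)
```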
Further, the subset of image data of the graphics object corresponding to the respective sub-image is retrieved, at 1034, and the pixel values of the retrieved subset of data of the image are analysed in order to determine whether the image data of the subset is compressible on the basis of a texture mapping operation in pixel value space, at 1035.
If the pixels of the retrieved subset of the image data are re-constructible or derivable from one or more selected pixels on the basis of a texture mapping operation in pixel value space, with the selected pixels being input parameters to the texture mapping operation, a texture image in compressed form is defined. This means that texture image data is defined, at 1050, comprising the selected pixels which are representative of the pixels of the retrieved subset of the image data. Texture mapping data (e.g. coordinates) is then generated, at 1055, for the set of vertices defining the analysed sub-image to point to the selected pixels of the texture image for the analysed sub-image. In this manner, compressed image data is derived defining the sub-image, comprising the geometry data defining the geometric primitive, the texture image data comprising the selected pixels, and texture mapping data for mapping the texture image data to the selected pixels.
Otherwise, if the pixels of the retrieved subset of the image data are not re-constructible or derivable from selected pixels representative of the pixels of the retrieved subset of the image data, texture image data in uncompressed form is defined. This means that texture image data is defined, at 1040, comprising the pixels with values of the subset of image data corresponding to the analysed sub-image. Texture mapping data (e.g. coordinates) is then generated, at 1045, for the set of vertices defining the analysed sub-image to point to the texture image data for this analysed sub-image. The subset of data of the uncompressed image, which corresponds to the analysed sub-image, may be copied or extracted to create the texture image data. In this manner, uncompressed image data is derived defining the sub-image, comprising the geometry data defining the geometric primitive, the texture image data representing the pixels within the retrieved subset of image data of the graphics object corresponding to the sub-image, and texture mapping data for mapping the texture image data to the pixels within the sub-image.
Once the image data for the last sub-image of the graphics object has been generated, at 1030, the method moves on to 1060, where the digital sub-image data for the plurality of sub-images is scrambled. As described in greater detail above with reference to
The method then ends at 1090.
In this manner, the method illustrated in
In some examples, and as illustrated in
Examples of such a sub-image data compression technique are described in greater detail in the Applicant's co-pending U.S. patent application Ser. No. 14/310,005 filed on 20 Jun. 2014, the content of which is incorporated herein by reference. It will be appreciated that the present invention is not limited to such examples of sub-image data compression techniques, and it is contemplated that any suitable form of image data compression may additionally/alternatively be implemented for compressing the digital image data defining some or all of the sub-images. In particular, it is contemplated that such compressed image data is not limited to compressed image data comprising geometry data, texture mapping data and texture image data. For example, raw pixel data may be compressed using well known data compression techniques that exploit statistical redundancy within the data being compressed.
Referring now to
The invention may be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or enabling a programmable apparatus to perform functions of a device or system according to the invention.
A computer program is a list of instructions such as a particular application program and/or an operating system. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
The computer program may be stored internally on a tangible and non-transitory computer readable storage medium or transmitted to the computer system via a computer readable transmission medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to an information processing system. The tangible and non-transitory computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; non-volatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.
A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.
The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices. When executing the computer program, the computer system processes information according to the computer program and produces resultant output information via I/O devices.
In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the scope of the invention as set forth in the appended claims and that the claims are not limited to the specific examples described above.
The connections as discussed herein may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections. The connections may be illustrated or described in reference to being a single connection, a plurality of connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example, separate unidirectional connections may be used rather than bidirectional connections and vice versa. Also, a plurality of connections may be replaced with a single connection that transfers multiple signals serially or in a time multiplexed manner. Likewise, single connections carrying multiple signals may be separated out into various different connections carrying subsets of these signals. Therefore, many options exist for transferring signals.
Any arrangement of components to achieve the same functionality is effectively ‘associated’ such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as ‘associated with’ each other such that the desired functionality is achieved, irrespective of architectures or intermediary components. Likewise, any two components so associated can also be viewed as being ‘operably connected,’ or ‘operably coupled,’ to each other to achieve the desired functionality.
Furthermore, those skilled in the art will recognize that the boundaries between the above described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed in additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.
However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms ‘a’ or ‘an,’ as used herein, are defined as one or more than one. Also, the use of introductory phrases such as ‘at least one’ and ‘one or more’ in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles ‘a’ or ‘an’ limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases ‘one or more’ or ‘at least one’ and indefinite articles such as ‘a’ or ‘an.’ The same holds true for the use of definite articles. Unless stated otherwise, terms such as ‘first’ and ‘second’ are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.