The present invention relates to a method and an apparatus for receiving and displaying electronic messages and, more particularly, but not exclusively to a method and a portable apparatus for receiving and displaying electronic messages.
One of the most popular communication technologies that have been developed for mobile communications systems is text messaging. Text messaging services allow communication that is based on typed text between two or more mobile users.
The most common communication that provides such a service is the short message service (SMS). The SMS allows mobile users to receive text messages via wireless communication devices, including SMS-capable cellular mobile phones. Mobile and stationary users may send an electronic message by entering text and a destination address of a recipient user who is either a mobile or a non-mobile user.
Another example for such a communication service is a mobile instant messaging (MIM) service. The MIM service allows real-time communication that is based on typed text between two or more mobile users. The text is conveyed via one or more cellular networks.
Generally, an emoticon is represented in a text format by combining the characters of a keyboard or keypad. Recent developments allow the inclusion of icons indicative of emotions, which may be referred to as emoticons, into the text. Such emoticons may include a smiling figure, a frowning figure, a laughing figure, a crying figure, a figure with outstretched arms, and other figures expressing various feelings. A graphic emoticon is transmitted to a mobile communication terminal by first selecting one of the graphic emoticons, which are stored in a user's mobile communication terminal as image data. Subsequently, the selected graphic emoticon is transmitted to another mobile communication terminal using a wireless data service.
For example, U.S. Patent Application No. 2007/0101005, published May 3, 2007, discloses an apparatus and method for transmitting emoticons in mobile communication terminals. The apparatus and the method include receiving a transmission request message in a first mobile communication terminal, the transmission request message related to a first graphic emoticon and including identification information for the first graphic emoticon, identifying a second graphic emoticon according to the transmission request message, and transmitting the second graphic emoticon to a second mobile communication terminal, wherein the second graphic emoticon comprises image data in a format decodable by the second mobile communication terminal.
In addition, in recent years, standards have been introduced for services such as the multimedia message service (MMS) and the enhanced message service (EMS). These telephony messaging standards, which allow sending messages with multimedia objects, such as images, audio, video, and rich text, have become very common. The MMS and EMS allow the message sender to send an entertaining message that includes an image or a video that visually expresses his or her feelings or thoughts and visually presents a certain subject matter.
A number of developments have been designed to provide services using the MMS and EMS standards. For example, U.S. Patent Application No. 2004/0121818, published Jun. 24, 2004, discloses a system, an apparatus and a method for providing MMS ringing images on mobile calls. In one embodiment, a ringing image comprises a combination of sound and images/video with optional textual information and a presentation format. The method includes receiving an incoming call from an originating mobile station; receiving an MMS message associated with the incoming call that contains ringing image data including image data and ring tone data; presenting the ringing image data to a user of the terminating mobile station; and, in response to presentation of the ringing image data, receiving an indication from the user to answer the incoming call.
Though such services improve the user experience of receiving electronic messages, they require adjusted devices and additional network capabilities. In addition, sending and displaying an MMS rather than a plain SMS requires more bandwidth for transmitting the message and more computational complexity for rendering it. Moreover, these services do not inter-operate with existing SMS services in a seamless manner.
In view of the foregoing discussion, there is a need for a system that can overcome the drawbacks of these new services and provide new advanced capabilities.
According to one aspect of the present invention there is provided a mobile apparatus for receiving an electronic message including a text message from a sender. The mobile apparatus comprises a contact records repository that comprises a plurality of user identifiers, one or more of which is associated with a digital image. The mobile apparatus further comprises a text analysis module configured for identifying predefined expressions in the received text message, an image-editing module configured for matching one of the user identifiers with the sender and editing the associated digital image to correspond with the identified predefined expression, and an output module configured for outputting the edited digital image.
According to another aspect of the present invention there is provided a method for editing an electronic message comprising a text message. The method comprises a) receiving the electronic message from a sender via a wireless network, b) matching the sender with one of a plurality of user identifiers, each user identifier being associated with a digital image, c) identifying a predefined expression in the text message, and d) editing at least one of the digital images to accord with the predefined expression, the at least one edited digital image being associated with the matched user identifier.
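By way of a non-limiting illustration, steps (a) through (d) of the method may be sketched in Python as follows. All names in this sketch, including the expression table, the contacts mapping, and the helper functions, are hypothetical and are provided for illustration only; they are not part of the claimed method.

```python
# Illustrative sketch of steps (a)-(d); all names are hypothetical.
EXPRESSIONS = {":)": "smile", ":(": "frown"}  # predefined expressions

def handle_message(sender_id, text, contacts):
    """contacts maps user identifiers to digital images (step b)."""
    image = contacts.get(sender_id)                       # (b) match sender
    found = [e for e in EXPRESSIONS if e in text]         # (c) identify expressions
    if image is not None and found:
        image = edit_image(image, EXPRESSIONS[found[0]])  # (d) edit image
    return image, text

def edit_image(image, expression):
    # Placeholder for the image-editing module: associate the image
    # with the instruction set matching the identified expression.
    return {"base": image, "applied": expression}
```

In this sketch the edited image and the original text are returned together, reflecting the option of displaying the edited digital image with or instead of the message text.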
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.
Implementation of the method and the apparatus of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and the apparatus of the present invention, several selected steps could be implemented by hardware, by software on any operating system of any firmware, or by a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and the apparatus of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
In the drawings:
The present embodiments comprise a mobile apparatus, such as a cellular phone, for receiving electronic messages, such as SMSs and IMs, from a sender that is connected to a network, such as a cellular or a computer network. The mobile apparatus comprises a receiving module for receiving the electronic message and a contact records repository with a number of user identifiers, each associated with a digital image that preferably depicts the face of a related contact person and a background. Optionally, the mobile apparatus is a cellular phone and the user identifiers are members of its contact list or address book. The mobile apparatus further comprises a text analysis module and an image-editing module. In use, when the receiving module receives an electronic message from a sender, it forwards the electronic message to the text analysis module, which analyzes the electronic message and matches one of the user identifiers with the sender. Then, the image-editing module edits the matched digital image according to an analysis of the text in the received message. The edited and matched digital image may now be displayed together with or instead of the text in the message. In such a manner, when a certain sender sends an electronic message to the mobile apparatus, his or her face, which is depicted in the matched digital image, the background, or both, may be edited to reflect the content of the text in his or her message. Such an embodiment provides a more vivid experience to the user of the mobile device. For example, a message comprising a text may be presented in association with an edited version of the digital image of the sender that is animated to reflect his or her sadness. An edited digital image may be understood as a manipulated digital image, an animated digital image, a digital image with added graphical objects, a sequence of edited digital images, or any combination thereof.
Editing a digital image may be understood as animating the digital image, manipulating the digital image, generating a sequence of digital images, adding graphical objects to the digital image, or any combination of these actions.
The principles and operation of an apparatus and method according to the present invention may be better understood with reference to the drawings and accompanying description.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. In addition, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
A network may be understood as a cellular network, a computer network, a wireless IP-based network, a WLAN, or any combination thereof.
A sender may be understood as a mobile phone, a dual-mode phone, a personal digital assistant (PDA), or any other system or facility that is capable of providing information transfer between persons and equipment.
An electronic message may be understood as an SMS, an MIM, an email, or any other message that comprises analyzable text.
A mobile device may be understood as a mobile phone, a dual-mode phone, a personal digital assistant (PDA), or any other portable device or facility that is capable of receiving electronic messages.
Reference is now made to
The text analysis module 7 is designed to identify predefined expressions, such as words, terms, sentences, and emoticons in the text message. Optionally, text analysis module 7 is designed to identify predefined expressions such as a certain font. Each one of the predefined expressions is associated with a set of instructions, which is designed to animate or manipulate a digital image that depicts a figure in a manner that the figure, the background of the figure, or both visually express the predefined expressions, preferably as described below.
As described above, the contact records repository 4 comprises a number of digital images of a number of contact persons. In one embodiment of the present invention, the contact records repository 4 is the contact list of the mobile device 1. Each one of the digital images is associated with a user identifier such as a network user ID, for example a phone number or a subscriber ID. In such a manner, the user of the mobile device 1 may be able to upload a digital image that is, in the mind of the contact list owner, closely related to the contact person who has the network user ID. In one embodiment of the present invention, each one of the network user IDs in the contact list is associated with a digital image, a sequence of digital images, such as a video file or both.
As commonly known, each electronic message includes a network user ID that indicates the address of the sender. The text analysis module 7 uses the network user ID of the sender to identify a digital image that is associated with a respective network user ID in the contact records repository 4. The identified digital image, which may be referred to as the matching digital image, preferably depicts the face of the sender.
In particular, the electronic message may be an SMS, an MIM, or any other type of electronic message that comprises analyzable text. As commonly known, the SMS point-to-point (SMS-PP) and SMS cell broadcast (SMS-CB) protocols, which are defined respectively in the GSM 03.40 and GSM 03.41 recommendations, incorporated herein by reference, allow electronic text messages to be transmitted to a mobile device in a specified geographical area. SMSs may be transmitted via different protocols, such as signaling system No. 7 (SS7), incorporated herein by reference, within the standard GSM MAP framework, or via the transmission control protocol/internet protocol (TCP/IP) within the same standard. Messages are sent with the additional MAP operation forward_short_message, whose payload is limited by the constraints of the signaling protocol to precisely 140 bytes. Characters in languages such as Arabic, Chinese, Korean, Japanese, or Slavic languages are encoded using the 16-bit UCS-2 character encoding. Each electronic message includes a text that comprises a number of characters, such as letters, numbers, symbols, and emoticons. The text analysis module 7 is designed to analyze the characters in the text message and to identify predefined letters or strings therein. Optionally, the text analysis module 7 is designed to identify predefined emoticons in the text message. Optionally, the text analysis module 7 is designed to identify certain terms, words, or sentences in the text message. The identification may be a straightforward identification that is based on a matching table, as described below with reference to
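The 140-byte payload constraint can be illustrated directly: under the 16-bit UCS-2 encoding, each character occupies two bytes, so a single payload carries at most 70 characters. The following Python sketch checks this using UTF-16-BE, which coincides with UCS-2 for characters in the Basic Multilingual Plane; the function name is an illustrative assumption, and the sketch does not model the 7-bit GSM default alphabet.

```python
# A 140-byte SMS payload carries at most 70 characters when the text
# is encoded as 16-bit UCS-2 (2 bytes per character). UTF-16-BE
# coincides with UCS-2 for Basic Multilingual Plane characters.
PAYLOAD_LIMIT = 140  # bytes, per the forward_short_message constraint

def fits_in_single_sms(text: str) -> bool:
    """Check whether a UCS-2-encoded text fits one SMS payload."""
    return len(text.encode("utf-16-be")) <= PAYLOAD_LIMIT

print(fits_in_single_sms("\u3053\u3093\u306b\u3061\u306f" * 14))  # 70 chars -> True
print(fits_in_single_sms("a" * 71))                               # 71 chars -> False
```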
As described above, the image-editing module 3 is designed to animate or to manipulate the digital image that is associated with the respective network user ID. Optionally, the image-editing module 3 animates a face area in the digital image that depicts the face of the sender. In such an embodiment, the image-editing module 3 delimits the face area before it is animated, as further described below. Optionally, the animation or manipulation is defined using a face mask, such as a basic generic mask, for example a two dimensional (2D) generic mask or a three dimensional (3D) generic mask. In use, the basic generic mask is positioned over a face area that is identified in the matching digital image.
Optionally, the image-editing module 3 is designed to apply lip movement on the face in the associated digital image according to one or more of the identified predefined expressions within the text messages. In such a manner, the figure in the digital image may be animated to express the identified predefined expressions. For example, the figure in the digital image may be given a lip movement that stands for a certain facial expression, such as a smile, or with a set of lip movements that animates the figure in the digital image to look as though he or she is saying the identified predefined expressions.
Optionally, the image-editing module 3 is designed to apply graphic effects, object animation, 2D and 3D animations to predefined objects, and 2D and 3D image manipulations, which are associated with the sender.
Optionally, such a sender dependent animation is based on the network ID number of the sender. For example, one background may be animated for a sender that calls using a public switched telephone network (PSTN) and a different background for a sender that calls using a cellular network. In another example, a different animation is provided according to the area dialing code of the sender. Optionally, such a sender dependent animation is based on the analysis of information that is stored in the contact list of the mobile device 1 or associated with his or her user identifier.
For example, the animation is determined according to the caller group of the sender. Optionally, such a sender dependent animation is based on the time the electronic message has been received. Animation may also be understood as including sound, such as voice clips and sound effects taken from a designated sound effect library. Optionally, the animation is changed on a random basis, in a manner that the same electronic message from the same sender may be animated differently according to a deterministic or a random rule.
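A sender-dependent selection of this kind may be sketched as a simple priority chain, as follows. The caller group name, area code prefix, and background identifiers below are hypothetical examples introduced only for illustration.

```python
# Hypothetical sketch of sender-dependent animation selection: caller
# group takes priority, then area dialing code, then a random choice
# so the same sender may receive different animations.
import random

def select_background(sender_number, caller_group=None, rng=random):
    """Pick a background by caller group, area code, or random rule."""
    if caller_group == "family":
        return "home_background"
    if sender_number.startswith("+1212"):  # example area dialing code
        return "city_background"
    return rng.choice(["default_a", "default_b"])
```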
Reference is now made to
In order to allow the manipulation of the face area using the generic mask 100, the face area has to be identified in the digital image. Optionally, the device further comprises a face detection module that detects the face area within the boundaries of the digital image. The face area delimits the face that is depicted in the digital image. Preferably, in order to support the delimitation of the face area, the contrast between the face area and the rest of the image is sharpened.
Preferably, the HSV color space is used to identify the area of the digital image where the face is found. The delimitation of the face area is based on color information of the color pixels of the digital image. Statistical analysis of large data sets has shown that the hue distribution of human skin falls within a certain range. Such a range thus provides a common hue level that can be used to identify those color pixels that represent human skin. The common hue level may thus be used to detect a cluster of color pixels that represents the skin of the face in the digital image.
Preferably, the saturation level of each pixel may be used in addition to the hue level in order to augment the determination of whether the pixel represents human skin or not. Optionally, the used hue level is in a range determined in relation to a shifted Hue space. The delimitation of the face area is preferably performed once, optionally when the digital image is uploaded to the contact records repository. As the boundaries of the face are set only once, such an embodiment reduces the computational complexity of the editing of the digital image.
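The hue-and-saturation test described above may be sketched as a per-pixel classifier, as follows. The numeric hue and saturation ranges are illustrative assumptions only; the disclosure does not specify particular values.

```python
# Sketch of hue/saturation-based skin-pixel detection. The numeric
# ranges are illustrative assumptions, not values from the disclosure.
import colorsys

def is_skin_pixel(r, g, b, hue_range=(0.0, 0.14), min_sat=0.2, max_sat=0.7):
    """Classify an RGB pixel (0-255 channels) as skin by hue and saturation."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue_range[0] <= h <= hue_range[1] and min_sat <= s <= max_sat

def face_cluster(pixels):
    """Return coordinates of skin pixels in a {(x, y): (r, g, b)} map."""
    return {xy for xy, rgb in pixels.items() if is_skin_pixel(*rgb)}
```

The cluster of coordinates returned by `face_cluster` corresponds to the cluster of color pixels from which the face area may be delimited, for example by taking its bounding box.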
After the face area has been detected, a movement vector that comprises a rotation value, an x-scale value and a y-scale value, is identified according to a transformation from the generic mask to the face area. Optionally, the transformation is generalized, for example to provide a projection transformation such as one that allows face pan.
The movement vector is used to match between the vertexes 101 and respective pixels or sub-pixels on the face in the digital image. After the vertexes have been matched, the generic mask 100 may be used to manipulate the face area in the digital image. As depicted in
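One possible way to derive the movement vector from a mask-to-face correspondence is sketched below, using the eye centres and mouth centre as landmarks: the rotation comes from the angle between the eye lines, the x-scale from the ratio of eye distances, and the y-scale from the ratio of eye-to-mouth distances. The landmark choice and function names are illustrative assumptions.

```python
# Sketch of deriving the movement vector (rotation, x-scale, y-scale)
# mapping the generic mask onto the detected face area; the landmark
# choice (eye centres and mouth centre) is a hypothetical example.
import math

def movement_vector(mask_eyes, mask_mouth, face_eyes, face_mouth):
    """mask_eyes/face_eyes: ((lx, ly), (rx, ry)); *_mouth: (x, y)."""
    (mlx, mly), (mrx, mry) = mask_eyes
    (flx, fly), (frx, fry) = face_eyes
    # Rotation: difference between the face and mask eye-line angles.
    rotation = math.atan2(fry - fly, frx - flx) - math.atan2(mry - mly, mrx - mlx)
    # x-scale: ratio of inter-eye distances.
    x_scale = math.hypot(frx - flx, fry - fly) / math.hypot(mrx - mlx, mry - mly)
    def eye_mouth_dist(eyes, mouth):
        cx = (eyes[0][0] + eyes[1][0]) / 2.0
        cy = (eyes[0][1] + eyes[1][1]) / 2.0
        return math.hypot(mouth[0] - cx, mouth[1] - cy)
    # y-scale: ratio of eye-to-mouth distances.
    y_scale = eye_mouth_dist(face_eyes, face_mouth) / eye_mouth_dist(mask_eyes, mask_mouth)
    return rotation, x_scale, y_scale
```

A generalized (for example projective) transformation, as mentioned above for face pan, would require more landmark correspondences than this similarity-style sketch uses.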
As described above, one or more of the digital images may be avatars or graphical objects. In such an embodiment, the face area is not delimited, the mask is preferably not correlated to the depicted face, and the animation is performed according to a set of instructions that animates the depicted figure according to predefined parameters.
Reference is now made to
Clearly, as described above, the depicted cellular phone 201 is a nonbinding example of a mobile device and other mobile devices may be used.
Another example of image manipulation, which is done on another matching digital image, is provided in
Reference is now made jointly to
Optionally, the editing of the digital image is performed by adding graphical objects, as shown at 401, 402, and 403 to predefined points in the digital image, according to a set of instructions that is associated with one or more predefined expressions in the received electronic message. Each one of the graphical objects may comprise a texture that is preferably placed in a predefined position in relation to the face in the image. Optionally, the graphic objects are positioned in a predefined position on the generic mask or at a predefined distance therefrom. Optionally, one or more graphical objects are displayed sequentially, for example in a cyclical manner. For example, as shown in
Optionally, the editing of the digital image is performed by changing the background of the digital image. As described above, the face area is detected and delimited either in a preprocessing step or during the process of receiving a related electronic message. In such an embodiment, one or more backgrounds are associated with different characters, emoticons, numbers or symbols that may appear in the text of the electronic message. For example,
Reference is now made, once again, to
Optionally, in order to improve the performance of the editing, the differences between the generic mask 100 and each one of the different faces may be compensated. In one embodiment of the present invention, the vertexes are divided into a number of groups. Optionally, the mesh 100 is divided into a group of 20 vertexes that defines the boundaries of the face 101, a group that defines the mouth area 104, and a group that defines the eyes area 105. The movements of the vertexes in the mouth area group are scaled in the x direction by the mouth length and in the y direction by the distance between the eyes and the mouth. The movements of the vertexes in the eyes area group are scaled in both the x direction and the y direction by the distance between the eyes. For all other vertexes, the movement is scaled by the distance between the eyes in the x direction and by the distance between the eyes and the mouth in the y direction. Optionally, the eye closing animation is limited in order to avoid overlapping between the upper and the lower parts.
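The group-wise scaling rules above can be summarised in a short sketch, as follows. The group labels and function name are illustrative assumptions; displacements are taken as fractions of the reference lengths.

```python
# Sketch of group-wise scaling of mask vertex displacements; group
# labels ("mouth", "eyes", other) and names are hypothetical.
def scale_movement(group, dx, dy, mouth_length, eye_distance, eye_mouth_distance):
    """Scale a vertex displacement (dx, dy) by its group's reference lengths."""
    if group == "mouth":
        # Mouth vertexes: x by mouth length, y by eye-to-mouth distance.
        return dx * mouth_length, dy * eye_mouth_distance
    if group == "eyes":
        # Eye vertexes: both axes by the inter-eye distance.
        return dx * eye_distance, dy * eye_distance
    # All other vertexes: x by eye distance, y by eye-to-mouth distance.
    return dx * eye_distance, dy * eye_mouth_distance
```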
As described above, the generic mask is used for editing the digital image of the contact person that corresponds to the received electronic message according to the text message thereof. In order to allow the generation of the edited digital image, a certain digital image has to be matched and the vertexes of the generic mask may be correlated with pixels or sub-pixels in the digital image, as described above. Preferably, the edited digital image is presented with the text of the electronic message to the user of the mobile device. Optionally, if no digital image has been matched, a default digital image is edited by the image-editing module. Likewise, if an image is available but the vertexes of the mask have not been successfully correlated with pixels or sub-pixels of the matching digital image, or the image cannot be used for any other reason, such a default image can be used instead.
Reference is now made to
After the electronic message is received, the text message is analyzed, as shown at 508. As described above, one or more predefined expressions such as text sections, words, terms, sentences, or emoticons are defined and stored, preferably in the memory of the mobile device.
Optionally, a data structure, such as a lookup table (LUT) is used for storing a list of predefined expressions in association with image editing instructions. An exemplary LUT is depicted in
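Such a LUT may be sketched as a mapping from predefined expressions to instruction identifiers, as follows. The entries and instruction names are hypothetical examples, not the contents of the depicted LUT.

```python
# Illustrative lookup table pairing predefined expressions with
# image-editing instructions; entries are hypothetical examples.
EDIT_LUT = {
    ":)": ("animate_smile",),
    ":(": ("animate_frown",),
    "happy birthday": ("add_object:balloons", "background:party"),
}

def instructions_for(text):
    """Collect the editing instructions for every expression found in the text."""
    ops = []
    for expression, actions in EDIT_LUT.items():
        if expression in text.lower():
            ops.extend(actions)
    return ops
```

A single message may thus trigger several instructions, for example a lip animation together with an added graphical object and a changed background.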
Reference is now made to
Reference is now made, once again, to
After the associated digital image has been edited, as described above with reference to
It is expected that during the life of this patent many relevant devices and systems will be developed and the scope of the terms herein, particularly of the terms cellular phone, mobile device, electronic message, text message, and SMS are intended to include all such new technologies a priori.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents, and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.