The present disclosure relates to the field of sharing information over a network. More particularly, the present disclosure relates to projecting an image based on content transmitted from a remote location.
In recent years, IoT (Internet of Things) technologies and devices have attracted increasing attention. In the IoT age, every device is expected to be connectable to a network, and a projector is no exception.
A system for projecting a content image, the system comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to perform operations including: receiving a plurality of content data each including at least text data transmitted from a content provider; receiving a plurality of instruction data each indicating a display format; selecting first content data from the received plurality of content data; generating a first image based on the selected first content data in accordance with first instruction data which is one of the plurality of instruction data and corresponds to the first content data; and projecting, by a projector, the generated first image, wherein the first image includes at least characters generated by editing text data in accordance with a display format indicated by the first instruction data.
A method for projecting a content image, the method comprising: receiving a plurality of content data each including at least text data transmitted from a content provider; receiving a plurality of instruction data each indicating a display format; selecting first content data from the received plurality of content data; generating a first image based on the selected first content data in accordance with first instruction data which is one of the plurality of instruction data and corresponds to the first content data; and projecting, by a projector, the generated first image, wherein the first image includes at least characters generated by editing text data in accordance with a display format indicated by the first instruction data.
A system for projecting a content image, the system comprising: a content manager that communicates with a plurality of content providers generating content data including at least text data via a network and a plurality of projectors projecting an image generated based on the content data generated by the content providers; wherein the content manager receives the content data and brief instruction data with respect to the received content data, translates the brief instruction data into specific instruction data that instructs a method for generating an image based on the content data, and transmits the content data and the specific instruction data with respect to the content data to the projectors.
In view of the foregoing, the present disclosure, through one or more of its various aspects, embodiments and/or specific features or sub-components, is thus intended to bring out one or more of the advantages as specifically noted below.
Methods described herein are non-limiting illustrative examples, and as such are not intended to require or imply that any particular process of any embodiment be performed in the order presented. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the processes, and these words are instead used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the”, is not to be construed as limiting the element to the singular.
The content managing system 200 includes a content server 210, and may further include a computer 220 for inputting information to the content server 210. The content server 210 manages, controls, and operates contents as described hereinafter in detail. Each of the content providers 300A and 300B transmits its contents (e.g., news or ads) to the content server 210 via the network 10. The content server 210 then receives the contents and stores them in a memory. The content providers 300A and 300B may also transmit instructions which correspond to the transmitted contents and indicate display formats for projecting those contents, as well as preset instructions for managing the contents and/or instructions. The content server 210 then receives the instructions and the preset instructions and stores them in a memory. The computer 220 may have a display and input devices such as a mouse, a keyboard, a touch panel, and the like. The computer 220, as well as the content providing group 300, may input the contents, the instructions, or the preset instructions to the content server 210.
The content output system 100 includes a projector 110, and may further include a speaker 120. The projector 110 receives contents and instructions from the content server 210 via the network 10, generates an image based on one of the received contents, and then projects the generated image on a wall. The projector 110 may be attached and fixed to a ceiling or a wall in a certain room, and may project the generated image onto surfaces other than the wall. In addition, when the contents include music data, the speaker 120 may output the music.
The processor 211 may translate the received instruction data into specific instruction data that designates a display format in detail, based on a preset instruction. For instance, the content provider 300A transmits content data A with brief instruction data A to the content server 210. In this case, the content server 210 holds preset instruction data A corresponding to the content provider 300A and preset instruction data B corresponding to the content provider 300B. The preset instruction data A and B are registered in advance by their respective providers. The content providers 300A and 300B may transmit their preset instruction data A and B so that the processor 211 stores them in the memory 212. Alternatively, the operator of the content managing system 200 may input the preset instruction data A in response to a request from the content provider 300A and/or the preset instruction data B in response to a request from the content provider 300B. In such a situation, when the processor 211 receives the brief instruction data A with respect to the content data A from the content provider 300A, the processor 211 confirms the source of the brief instruction data A, and then translates the brief instruction data A into the specific instruction data A in accordance with the preset instruction data A corresponding to the content provider 300A. The specific instruction data B is translated similarly. The specific instruction data A (or B) designates a display format with respect to the content data A (or B) in greater detail. The system 1000 can therefore reduce communication traffic between the content providing group 300 and the content managing system 200, because the content providing group 300 does not need to send specific instruction data to the content server 210 every time. The content providing group 300 may, however, still transmit specific instruction data to the content server 210 directly.
For instance, specific instruction data D with respect to content data D is directly transmitted from another content provider, other than the content providers 300A and 300B, who has not registered preset instruction data. In addition, the processor 211 may generate a time schedule for each content (and store it in the memory 212).
The projector 110 includes a processor 111 as a main controller for the content output system 100, a memory 112 for storing contents received from the content server 210, a network interface (IF) 113 as a transmitter and/or receiver, a light projector (light sources) 114 for projecting an image generated based on one of the contents in the memory 112, and a lens unit 115 composed of a plurality of lenses, which may be movable. The processor 111 receives, via the network IF 113, a plurality of content data with corresponding specific instruction data transmitted from the content server 210 via the network 10. In addition, the processor 111 may receive the time schedule generated by the content server 210. The processor 111 then selects one of the contents stored in the memory 112 according to the time schedule, generates an image based on the selected content and its specific instruction data, and causes the light projector 114 to project the generated image via the lens unit 115. As a result, the content output system 100 can let people see an image showing information sent from a remote place. The generated image may be stored in the memory 112 in volatile or non-volatile form, or deleted after being projected.
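The time-schedule selection performed by the processor 111 can be sketched as follows; the schedule format, slot times, and content identifiers here are illustrative assumptions, not part of the disclosure.

```python
from datetime import time

# Time schedule as received from the content server 210:
# (slot start, slot end, content identifier). Values are hypothetical.
TIME_SCHEDULE = [
    (time(14, 0), time(15, 0), "content_A"),
    (time(15, 0), time(16, 0), "content_B"),
]

def select_content(now):
    """Return the content identifier scheduled for the given time of day."""
    for start, end, content_id in TIME_SCHEDULE:
        if start <= now < end:
            return content_id
    return None  # nothing scheduled for this time slot

print(select_content(time(14, 30)))  # content_A
```

The selected identifier would then be used to look up the content and its specific instruction data in the memory 112 before image generation.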
The projector 110 further includes an input interface (IF) 119 for receiving image data from a peripheral computer 130, which may or may not be a part of the content output system 100. As described below, the processor 111 may also project a content while projecting the image data input by the peripheral computer 130. Alternatively, the processor 111 may stop projecting a content when projecting the image data input by the peripheral computer 130. The projector 110 may further include an output interface (IF) 116 for outputting data with respect to content data stored in the memory 112. For instance, if a piece of content data includes music data, the projector 110 may output the music data to a speaker 120 to play it. In addition, in a case where the processor 111 generates an image based on content data including music data, the processor 111 may control the light projector 114 and the speaker 120 such that both devices work at the same time. In other words, the processor 111 may cause the speaker 120 to play the music data while the image generated based on that content is projected. The projector 110 may further include a camera 117 for capturing images and a human sensor for detecting a person near the projector 110.
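The simultaneous operation of the light projector 114 and the speaker 120 might be coordinated as in this minimal sketch; the device classes and their methods are hypothetical stand-ins for the actual hardware interfaces.

```python
import threading

class Projector:
    """Hypothetical stand-in for the light projector 114."""
    def __init__(self):
        self.projected = None
    def project(self, image):
        self.projected = image

class Speaker:
    """Hypothetical stand-in for the speaker 120."""
    def __init__(self):
        self.playing = None
    def play(self, music):
        self.playing = music

def present(content, projector, speaker):
    """Project the content image and, when music data is attached,
    start playback on the speaker at the same time."""
    t = threading.Thread(target=projector.project, args=(content["image"],))
    t.start()
    if content.get("music") is not None:
        speaker.play(content["music"])  # runs while projection starts
    t.join()
```

A real controller would of course drive hardware rather than set attributes; the point is only that projection and playback are triggered together rather than sequentially.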
As a first example, a case of a full instruction is explained. Original instruction data 20A, which is provided by the content providing group 300 and transmitted to the content managing system 200, includes the full instruction. In this case, the processor 211 does not perform a translation, because the original instruction data 20A, as the full instruction, already contains all of the information designating all of the items (elements) X, Y and Z necessary for determining a display format of the content corresponding to the original instruction data 20A. Thus the processor 211 does not change any information in the original instruction data 20A. In other words, the specific instruction data 22A is substantially the same information as the original instruction data 20A.
As a second example, a case of a partial instruction is explained. Original instruction data 20B, which is provided by the content providing group 300 and transmitted to the content managing system 200, is translated into specific instruction data 22B by the processor 211. The original instruction data 20B includes a partial instruction designating the item X, but not the items Y and Z. In this case, the processor 211 checks preset instruction data which is stored in the memory 212 and registered by the content provider that is the source of the original instruction data 20B, because the original instruction data 20B, as a partial instruction, does not include all of the information for determining a display format. In the second example, the preset instruction data defines standard settings for each item based on requirements from the content provider 300 that is the source of the original instruction data 20B, and instructs that the standard settings be used for any items whose display format is not designated in the original instruction data 20B. Therefore, when generating the specific instruction data, the item X is determined from the original instruction data 20B. On the other hand, the standard settings are applied to the items Y and Z because the original instruction data 20B does not designate them. Thus the standard settings are not limited to a specific item; the standard settings may be applicable to any item.
As a third example, a case with no instruction for a display format is explained. Original instruction data 20C, which is provided by the content providing group 300 and transmitted to the content managing system 200, is translated into the specific instruction data 22C by the processor 211. The original instruction data 20C does not include any instruction for a display format; in other words, it does not designate the items X, Y, and Z. In this case, standard settings defined by preset instruction data which is stored in the memory 212 and registered by the content provider that is the source of the original instruction data 20C are used, as in the second example. Thus the standard settings are applied to all of the items X, Y and Z when generating the specific instruction data 22C.
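The translation behavior of the first through third examples can be illustrated with a single fallback routine, assuming each instruction is represented as a dictionary over the items X, Y and Z; all names and setting values here are hypothetical.

```python
# Standard settings registered in advance as preset instruction data
# for one content provider. Values are illustrative placeholders.
STANDARD_SETTINGS = {"X": "std_x", "Y": "std_y", "Z": "std_z"}

def to_specific(original):
    """Fill any item the original instruction leaves undesignated
    with the provider's registered standard setting."""
    specific = dict(STANDARD_SETTINGS)
    specific.update(original)  # designated items take precedence
    return specific

# First example (full instruction): nothing changes.
full = to_specific({"X": "x1", "Y": "y1", "Z": "z1"})
# Second example (partial instruction): Y and Z are filled in.
partial = to_specific({"X": "x1"})
# Third example (no instruction): all items use the standard settings.
none = to_specific({})
```

In each case the result is complete specific instruction data, which is why the projector side never needs to handle a partially specified display format.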
As a fourth example, a case of a category instruction is explained. Original instruction data 20D, which is provided by the content providing group 300 and transmitted to the content managing system 200, is translated into the specific instruction data 22D by the processor 211. The original instruction data 20D merely designates a category of a content (such as politics, sports, movies, or the like). However, preset instruction data 24D stored in the memory 212 includes a plurality of patterns defining specific instruction data for each category (politics, sports and movies). Thereby, when the processor 211 receives the original instruction data 20D indicating "politics" as the category, the processor 211 selects the pattern corresponding to "politics" and registers the selected pattern as the specific instruction data 22D.
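The category-based selection might look like the following sketch, assuming the preset instruction data 24D is a table of per-category patterns; the pattern contents are illustrative placeholders.

```python
# Preset instruction data 24D: one display-format pattern per category.
# All items and values are hypothetical examples.
PRESET_PATTERNS = {
    "politics": {"X": "serif", "Y": "dark",  "Z": "headline"},
    "sports":   {"X": "sans",  "Y": "green", "Z": "scoreboard"},
    "movies":   {"X": "sans",  "Y": "black", "Z": "poster"},
}

def from_category(original):
    """Select the registered pattern matching the category named in
    the original instruction data and use it as the specific
    instruction data."""
    return dict(PRESET_PATTERNS[original["category"]])

print(from_category({"category": "politics"}))
```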
As a fifth example, a case of a keyword instruction is explained. Original instruction data 20E, which is provided by the content providing group 300 and transmitted to the content managing system 200, is translated into the specific instruction data 22E by the processor 211. Upon receiving the original instruction data 20E as the keyword instruction, the processor 211 extracts keywords 26E from text data of the content. The processor 211 may then determine a category in accordance with the extracted keywords 26E. For example, if the processor 211 determines the category of the content corresponding to the original instruction data 20E to be "politics", the processor 211 selects the specific instruction corresponding to "politics" from among the instruction patterns (for politics, sports, and movies) stored in the memory 212. The selected specific instruction data is then registered as the specific instruction data 22E. As another option, the processor 211 may generate the specific instruction data 22E based on the extracted keywords 26E without determining a category.
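One possible realization of the keyword instruction is sketched below, assuming a simple overlap count between words of the content text and per-category keyword sets; the keyword lists and the scoring rule are illustrative assumptions, not the disclosed method itself.

```python
# Hypothetical per-category keyword sets held by the content server.
CATEGORY_KEYWORDS = {
    "politics": {"president", "presidential", "debate", "election"},
    "sports":   {"match", "score", "team"},
    "movies":   {"film", "actor", "premiere"},
}

def determine_category(text):
    """Pick the category whose keyword set overlaps the content
    text the most; return None when no keyword matches."""
    words = set(text.lower().split())
    best, best_hits = None, 0
    for category, keywords in CATEGORY_KEYWORDS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = category, hits
    return best

print(determine_category("Final presidential debate is today"))  # politics
```

The determined category could then index the same per-category patterns used in the fourth example to produce the specific instruction data 22E.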
The text data 50 includes title text data corresponding to “Final presidential debate is today” and subtitle text data corresponding to “The debate, on Oct. 19, in Las Vegas, would focus on a specific problem which is . . . ” in
In addition, the image 51 includes “BG IMG 4” as a background image behind the characters. A plurality of background images may be stored in the memory 112 in advance so that the processor 111 can select one of the background images in response to an instruction when generating an image. Further, the image 51 includes a QR code 51A as link information for connecting to a web page showing detailed information about the content of the image 51. In this case, the image 51 discloses merely a short topic of political news. If a person, however, reads the link information with his/her personal device (e.g., a mobile phone, a tablet, or a PC), the personal device can be transferred to the web page displaying the detailed information, such as the whole text of that news.
The image 52 includes a news IMG 52B as a partial image, which is displayed with characters and does not fully overlap the characters. The partial image 52B may be transmitted from the content providing group 300 to the projector 110 via the content server 210 as a part of the specific instruction data. The image 53 includes a provider IMG 53B as a partial image, which is displayed with characters and does not fully overlap the characters. The provider IMG 53B may correspond to a logo or a trademark of the content provider of the image 53. The provider IMG 53B may be stored in the memory 112 in advance. In addition, the specific instruction data for the image 53 designates “today” as a keyword to be emphasized in red. Thus “today” within the title text of the image 53 is rendered in red for emphasis. Similarly, “today” within the title text of the image 54 is emphasized in bold. The image 54 also includes a URL 54A as link information for connecting to a web page showing detailed information about the content of the image 54, as does the image 51.
In addition, the projector 110 may transmit a signal, as link information, by visible light communication instead of a QR code or URL. Thus the processor 111 may control the light projector 114 to broadcast, within a predetermined range, a certain light pattern which provides specific link information. The light pattern may then be captured by a camera of a mobile device (e.g., a mobile phone) and used to access a web page showing detailed information.
In step S41, the processor 111 sets options to be selected based on a time group. For instance, it may be time group TG1, which is between 2 PM and 3 PM, as shown in
The emergency content is, for example, breaking news that the content providers 300 want to distribute as soon as possible. Thus the emergency content has a higher priority than regular contents, such as the contents A-H, during the time group TG1. Accordingly, when the processor 111 receives the emergency content from the content server 210 or directly from the content providers 300 (Yes in step S47), the processor 111 generates a new image based on the emergency content with a specific instruction. The memory 112 holds, in advance, the specific instruction indicating a special or unique display format for such an emergency. Alternatively, the specific instruction may be transmitted together with the emergency content from the content server 210 or directly from the content providers 300. After receiving the emergency content at step S47, the processor 111 projects the newly generated image based on the emergency content at step S43. Therefore, the processor 111 changes the projected image to the new image based on the emergency content even if the content display mode is still on at step S44, the predetermined time has not elapsed since projecting an image at step S45, or the time group has not changed to the next one at step S46. In other words, the new image based on the emergency content may interrupt the projection of regular content.
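The priority handling described above can be sketched as a small priority queue in which emergency content preempts regular content; the priority scale and content names are illustrative assumptions.

```python
import heapq

class ContentQueue:
    """Priority queue of pending contents.
    Lower number = higher priority (0 = emergency)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps arrival order within a priority

    def push(self, content, priority):
        heapq.heappush(self._heap, (priority, self._seq, content))
        self._seq += 1

    def next_content(self):
        """Return the highest-priority pending content, or None."""
        return heapq.heappop(self._heap)[2] if self._heap else None

q = ContentQueue()
q.push("content_A", priority=1)      # regular content of time group TG1
q.push("breaking_news", priority=0)  # emergency content arrives later
print(q.next_content())  # the emergency content is projected first
```

Re-checking this queue before each projection slot reproduces the interrupt behavior of steps S47 and S43: a later-arriving emergency item still jumps ahead of waiting regular content.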
As illustrated in
Moreover, the computer system 2100 includes at least one of a main memory 2106 and a static memory 2108. The main memory 2106 and the static memory 2108 can communicate with each other via a bus 2110. Memories described herein are tangible storage mediums that can store data and executable instructions, and are non-transitory during the time instructions are stored therein. Again, as used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period of time. The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a particular carrier wave or signal or other forms that exist only transitorily in any place at any time. The memories are an article of manufacture and/or machine component. Memories described herein are computer-readable mediums from which data and executable instructions can be read by a computer. Memories as described herein may be random access memory (RAM), read only memory (ROM), flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, tape, compact disk read only memory (CD-ROM), digital versatile disk (DVD), floppy disk, blu-ray disk, or any other form of storage medium known in the art. Memories may be volatile or non-volatile, secure and/or encrypted, unsecure and/or unencrypted.
As shown, the computer system 2100 may further include a video display device 2112, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, or a cathode ray tube (CRT). The video display device 2112 may be integrated with or physically separate from the components of the computer system 2100 described herein. For example, the video display device 2112 may comprise the display or signage.
Additionally, the computer system 2100 may include an input device 2114, such as a keyboard/virtual keyboard or touch-sensitive input screen or speech input with speech recognition. The computer system 2100 may also include a cursor control device 2116, such as a mouse or touch-sensitive input screen or pad, a microphone, etc. The computer system 2100 may also include a signal generation device 2118, such as a speaker or remote control, a disk drive unit 2120, and a network interface device 2122.
In a particular embodiment, as depicted in
As described above, a system for projecting a content image comprises: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to perform operations including: receiving a plurality of content data each including at least text data transmitted from a content provider; receiving a plurality of instruction data each indicating a display format; selecting first content data from the received plurality of content data; generating a first image based on the selected first content data in accordance with first instruction data which is one of the plurality of instruction data and corresponds to the first content data; and projecting, by a projector, the generated first image, wherein the first image includes at least characters generated by editing text data in accordance with a display format indicated by the first instruction data.
As described above, in the system, the operations further include: selecting second content data from the received plurality of content data; generating a second image based on the selected second content data in accordance with second instruction data which is another one of the plurality of instruction data and corresponds to the second content data; stopping projecting the generated first image; and projecting the generated second image.
As described above, in the system, the first instruction data include a plurality of items to determine the display format, and the first image is generated using a standard item as one of the items to determine the display format when at least one of the plurality of items is not designated in the first instruction data, wherein the standard item is applicable to the plurality of content data transmitted from the content provider.
As described above, in the system, the first instruction data indicate a category of the first content data, and the first image is generated in accordance with one of a plurality of preset instruction data corresponding to the category indicated by the first instruction data.
As described above, in the system, the operations further include: selecting second content data from the received plurality of content data; and generating a second image based on the selected second content data in accordance with standard instruction data stored in the memory when instruction data corresponding to the second content data is not received, wherein the standard instruction data is applicable to another one of the plurality of content data transmitted from the content provider.
As described above, in the system, the first image includes text information and link information providing access to more information than the text information.
As described above, in the system, the first content data is selected by a first method during a first time period, and the second content data is selected by a second method during a second time period.
As described above, in the system, the operations further include: receiving third content data having a higher priority than the first content data from one of the content providers while projecting the first image generated based on the first content data; generating a third image based on the third content data; stopping projecting the first image; and projecting the generated third image.
As described above, a method for projecting a content image comprises: receiving a plurality of content data each including at least text data transmitted from a content provider; receiving a plurality of instruction data each indicating a display format; selecting first content data from the received plurality of content data; generating a first image based on the selected first content data in accordance with first instruction data which is one of the plurality of instruction data and corresponds to the first content data; and projecting, by a projector, the generated first image, wherein the first image includes at least characters generated by editing text data in accordance with a display format indicated by the first instruction data.
As described above, the method further comprises: selecting second content data from the received plurality of content data; generating a second image based on the selected second content data in accordance with second instruction data which is another one of the plurality of instruction data and corresponds to the second content data; stopping projecting the generated first image; and projecting the generated second image.
As described above, in the method, the first instruction data include a plurality of items to determine the display format, and the first image is generated using a standard item as one of the items to determine the display format when at least one of the plurality of items is not designated in the first instruction data, wherein the standard item is applicable to the plurality of content data transmitted from the content provider.
As described above, in the method, the first instruction data indicate a category of the first content data, and the first image is generated in accordance with one of a plurality of preset instruction data corresponding to the category indicated by the first instruction data.
As described above, the method further comprises: selecting second content data from the received plurality of content data; and generating a second image based on the selected second content data in accordance with standard instruction data stored in the memory when instruction data corresponding to the second content data is not received, wherein the standard instruction data is applicable to another one of the plurality of content data transmitted from the content provider.
As described above, in the method, the first image includes text information and link information providing access to more information than the text information.
As described above, in the method, the first content data is selected by a first method during a first time period, and the second content data is selected by a second method during a second time period.
As described above, a system for projecting a content image comprising: a content manager that communicates with a plurality of content providers generating content data including at least text data via a network and a plurality of projectors projecting an image generated based on the content data generated by the content providers; wherein the content manager receives the content data and brief instruction data with respect to the received content data, translates the brief instruction data into specific instruction data that instructs a method for generating an image based on the content data, and transmits the content data and the specific instruction data with respect to the content data to the projectors.
It is noted that the foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the present invention. While the present invention has been described with reference to exemplary embodiments, it is understood that the words which have been used herein are words of description and illustration, rather than words of limitation. Changes may be made, within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of the present invention in its aspects. Although the present invention has been described herein with reference to particular structures, materials and embodiments, the present invention is not intended to be limited to the particulars disclosed herein; rather, the present invention extends to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims.
The present invention is not limited to the above described embodiments, and various variations and modifications may be possible without departing from the scope of the present invention.
Number | Date | Country
---|---|---
62412431 | Oct 2016 | US