The present disclosure relates to an information processing device, an information processing method, and a program.
In recent years, the amount of information handled by a user has been increasing along with an increase in communication speed and computing power of information processing devices. However, there is a limit to the screen size of an information processing device. Therefore, various proposals have been made to efficiently display information using a limited screen area.
Frequently performing user operations, such as enlarging, shrinking, or scrolling, in order to present many pieces of information in a limited screen area increases the burden on the user. In view of this, the burden on the user can be reduced by assisting in the generation of layouts that provide information efficiently. Regarding such automatic layout assistance, Patent Literature 1, for example, discloses a system for assisting in the operation of laying out newspaper advertisements on a page based on past record information.
However, where information is laid out on the display screen of an information processing device, the arrangement of the display area itself may change. In view of this, the present disclosure proposes an information processing device, an information processing method, and a program capable of displaying information in a suitable manner for a characteristic of a display area.
According to the present disclosure, there is provided an information processing device including an obtaining section configured to obtain a content item to be displayed in a first display area of a display screen, and an image generating section configured to generate a display image by laying out the content item based on an arrangement of the first display area.
Further, according to the present disclosure, there is provided an information processing method including obtaining a content item to be displayed in a first display area of a display screen, and generating a display image by laying out the content item based on an arrangement of the first display area.
Further, according to the present disclosure, there is provided a program for causing a computer to function as an information processing device including an obtaining section configured to obtain a content item to be displayed in a first display area of a display screen, and an image generating section configured to generate a display image by laying out the content item based on an arrangement of the first display area.
As described above, according to the present disclosure, it is possible to display information in a suitable manner for a characteristic of a display area.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the appended drawings. Note that, in this specification and the drawings, elements that have substantially the same function and structure are denoted with the same reference signs, and repeated explanation is omitted.
Note that descriptions will be made in the following order.
1. Overview
2. Functional configuration
3. Example arrangement of display area
4. Display example and operation example
5. Layout process
6. Operation example
7. Hardware configuration example
<1. Overview>
First, referring to
Referring to
The present disclosure provides a method for efficiently displaying the sub-information. In order to efficiently provide the sub-information, it is preferred that the sub-display area DA2 is initially a very small partial area of the display screen, which can be enlarged by a user operation. That is, the arrangement of the sub-display area DA2 is changed in response to an operation. In view of this, it is preferred that the sub-information is laid out based on the arrangement of the sub-display area DA2.
As described above, the information processing device 100 generates a layout of an advertisement from the content data 40 based on the arrangement of the sub-display area DA2. For an object to be advertised, an advertiser often has poster image data or a campaign Web page already produced. In view of this, the present disclosure enables an advertisement to be generated using such content data 40 without the advertiser having to newly produce data for a Web advertisement. The information processing device 100 can generate the display image to be displayed in the sub-display area DA2 by cutting out a partial area of the content data 40 or re-arranging part objects included in the content data 40. Thus, it is possible to significantly reduce the burden on the advertiser. For example, as the advertiser inputs a URL (Uniform Resource Locator) indicating the location where the content data 40 is stored and presses an auto-create button 22 on the advertisement creating screen 20 shown in
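For illustration only, the following Python sketch shows one way such reuse of existing content data could look: an advertiser's poster image is center-cropped to the aspect ratio of the sub-display area and resized to fit. The helper name crop_to_area and the center-crop rule are assumptions introduced here, not the method disclosed above.

```python
# Hypothetical sketch: reuse an advertiser's existing poster image as a
# sub-display-area advertisement by center-cropping it to the area's aspect
# ratio and resizing it. The helper name and the center-crop rule are
# assumptions, not the disclosed algorithm.
from PIL import Image

def crop_to_area(poster_path: str, area_w: int, area_h: int) -> Image.Image:
    img = Image.open(poster_path)
    src_w, src_h = img.size
    target_ratio = area_w / area_h
    if src_w / src_h > target_ratio:
        # Source is wider than the area: trim the left and right edges.
        new_w = int(src_h * target_ratio)
        left = (src_w - new_w) // 2
        box = (left, 0, left + new_w, src_h)
    else:
        # Source is taller than the area: trim the top and bottom edges.
        new_h = int(src_w / target_ratio)
        top = (src_h - new_h) // 2
        box = (0, top, src_w, top + new_h)
    return img.crop(box).resize((area_w, area_h))

# Usage: a banner-like sub-display area occupying a narrow strip of the screen.
# banner = crop_to_area("poster.png", area_w=640, area_h=120)
```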
<2. Functional Configuration>
Now, referring to
The information processing device 100 may be an information processing device such as a mobile phone, a PHS (Personal Handyphone System), a portable music player device, a portable video processing device, or a portable game device, for example. Alternatively, the information processing device 100 may be an information processing device such as a PC (Personal Computer), a household video processing device (a DVD recorder, a VCR, or the like), a PDA (Personal Digital Assistant), a household game device, or a household electric appliance.
The information processing device 100 primarily includes a content analysis section 105, a display area analysis section 110, an image generating section 115, a display section 120, and an operation information obtaining section 125.
(Content Analysis Section 105)
The content analysis section 105 is an example of a content analysis information obtaining section for obtaining analysis information of the content data 40. That is, while the description below is directed to a case where the information processing device 100 has a function of analyzing the content data 40, the present technique is not limited to such an example. For example, the information processing device 100 may have a function of obtaining content analysis information, which is the result of an analysis of the content data 40 by an external device.
The content analysis section 105 generates content analysis information obtained by analyzing characteristics of the content data 40. The content analysis section 105 is capable of analyzing the attributes of part objects included in the content data 40. Here, where the content data 40 is HTML data, the part objects may be image data or text data included in the content data 40. Where the content data 40 is one piece of image data, the content analysis section 105 may generate part objects by analyzing the image data and trimming parts included in the image data.
Note that the attributes analyzed by the content analysis section 105 may be static attributes or dynamic attributes. Examples of the analyzed attributes include the size, shape, type, target, presence/absence of an event, display history, etc. For example, the size of a part object may be represented by an absolute size or by the proportion occupied by the part object with respect to the entire content data 40. The type of a part object may be image, text, button, counter, etc., for example. The target of a part object indicates whether the content of the part object is associated with the entire content data 40 or with one or more items included in the content data 40. For example, a logo mark is associated with the entire content data 40, and text data indicating a price is associated with one or more items included in the content data 40. The presence/absence of an event of a part object indicates whether there is an event to be triggered in response to an operation made on the part object. For example, an event may be a transition to a Web page, a voting event, or the like. Where a fashion-related advertisement is displayed, as illustrated in
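For concreteness, the attributes enumerated above could be held in a simple per-object record, as in the hedged Python sketch below; the class and field names (PartObject, target, area_ratio, and so on) are illustrative assumptions, not a format prescribed by the present disclosure.

```python
# Illustrative data model for the per-object attributes described above.
# The class name, field names, and enum members are assumptions.
from dataclasses import dataclass, field
from enum import Enum, auto

class ObjectType(Enum):
    IMAGE = auto()
    TEXT = auto()
    BUTTON = auto()
    COUNTER = auto()

class Target(Enum):
    WHOLE_CONTENT = auto()   # e.g. a logo mark associated with the entire content data 40
    ITEM = auto()            # e.g. text data indicating a price for one item

@dataclass
class PartObject:
    object_id: str
    object_type: ObjectType
    target: Target
    width: int                     # absolute size in pixels
    height: int
    area_ratio: float              # proportion occupied within the entire content data 40
    has_event: bool = False        # e.g. a tap triggers a transition to a Web page
    display_history: list = field(default_factory=list)  # past displays (dynamic attribute)

# Example: a logo image associated with the entire content item.
logo = PartObject("logo", ObjectType.IMAGE, Target.WHOLE_CONTENT,
                  width=200, height=80, area_ratio=0.05)
```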
(Display Area Analysis Section 110)
The display area analysis section 110 has a function of analyzing the arrangement state of the sub-display area DA2. The sub-display area DA2 changes its arrangement state in response to a user operation. Therefore, the display area analysis section 110 is capable of analyzing the arrangement state of the sub-display area DA2 each time the arrangement state changes. Now, the arrangement of the sub-display area DA2 analyzed by the display area analysis section 110 includes the size, position, shape, etc., of the sub-display area DA2. The size of the sub-display area DA2 analyzed may be represented by an absolute value or may be represented by a proportion of the sub-display area with respect to the entire display screen. Where a plurality of sub-display areas DA2 are included, the relative position between the sub-display areas DA2 may be analyzed. The display area analysis section 110 is capable of supplying arrangement information of the sub-display area DA2.
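As a minimal sketch of the arrangement information described above, the following snippet computes the absolute size, the proportion with respect to the entire display screen, the position, and the relative positions between a plurality of sub-display areas; the Rect type and the dictionary keys are assumptions introduced for illustration.

```python
# A minimal sketch of the arrangement information: absolute size, proportion
# of the entire display screen, position, and relative positions between a
# plurality of sub-display areas. The Rect type and keys are assumptions.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def analyze_arrangement(screen: Rect, areas: list) -> list:
    info = []
    for i, a in enumerate(areas):
        info.append({
            "size": (a.w, a.h),                                   # absolute size
            "screen_ratio": (a.w * a.h) / (screen.w * screen.h),  # proportion of the screen
            "position": (a.x, a.y),
            # Relative offsets to the other sub-display areas, if any.
            "relative_to_others": [(b.x - a.x, b.y - a.y)
                                   for j, b in enumerate(areas) if j != i],
        })
    return info

# Usage: one small sub-display area in the lower-right corner of a 1280x720 screen.
# analyze_arrangement(Rect(0, 0, 1280, 720), [Rect(1080, 620, 200, 100)])
```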
(Image Generating Section 115)
The image generating section 115 is capable of generating, from the content data 40, a display image to be displayed in the sub-display area DA2, based on the arrangement information of the sub-display area DA2 supplied from the display area analysis section 110. Based on the arrangement information of the sub-display area DA2, the image generating section 115 may generate a display image to be displayed in the sub-display area DA2 by cutting out a portion of the content data 40. The image generating section 115 may also generate a display image to be displayed in the sub-display area DA2 by laying out part objects included in the content data 40.
The image generating section 115 may generate a display image based on the content analysis information supplied from the content analysis section 105. In particular, the image generating section 115 can generate a display image based on the attributes of a part object. For example, the image generating section 115 can generate a layout of a display image based on whether the part object is associated with the entire content item or with an item. For example, in a state where the sub-display area DA2 is displayed in a very small part of the display screen, the image generating section 115 can increase the weight of a part object that is associated with the entire content item. In a state where an enlarging operation has been performed on the sub-display area DA2 so that a fashion item or an example of coordination in the content data 40 is displayed, the image generating section 115 can increase the weight of a part object that is associated with an item. When a layout is generated again, the image generating section 115 can also decrease the weight of a part object that was already displayed while the sub-display area DA2 occupied a very small part of the display screen, for example.
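The weighting behavior described above can be pictured with the short, hedged sketch below, in which whole-content objects are favored while the sub-display area is small, item-level objects are favored after an enlarging operation, and previously displayed objects are demoted; the numeric weights and dictionary keys are assumptions, not values taken from the present disclosure.

```python
# A minimal weighting sketch following the rules above. The numeric weights
# and dictionary keys are assumptions made for illustration.
def score(obj: dict, area_enlarged: bool) -> float:
    weight = 1.0
    if not area_enlarged and obj["target"] == "whole_content":
        weight += 2.0                                  # small area: logo-like objects first
    if area_enlarged and obj["target"] == "item":
        weight += 2.0                                  # enlarged area: item-level objects first
    weight -= 0.5 * obj.get("times_displayed", 0)      # demote objects already shown
    return weight

def pick_objects(objects: list, area_enlarged: bool, limit: int) -> list:
    return sorted(objects, key=lambda o: score(o, area_enlarged), reverse=True)[:limit]

# Usage with two illustrative part objects.
objects = [{"id": "logo", "target": "whole_content"},
           {"id": "price", "target": "item", "times_displayed": 1}]
print([o["id"] for o in pick_objects(objects, area_enlarged=False, limit=1)])  # ['logo']
```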
(Display Section 120)
The display section 120 may include a display control section for controlling the display of a display image generated by the image generating section 115, and a display device, for example. The display section 120 can also control the display of the display screen so that the arrangement of the display area is changed based on operation information obtained by the operation information obtaining section 125. While the description herein assumes that the information processing device 100 has a display device and that the display section 120 therefore includes the display device, the present technique is not limited to such an example. For example, where the information processing device 100 does not include a display device, the display section 120 may be a display control section.
(Operation Information Obtaining Section 125)
The operation information obtaining section 125 may include an input section for allowing a user to input information, and an input control circuit for generating an input signal based on the input, for example. Examples of the input section include a touch panel, a mouse, a keyboard, a button, a microphone, a switch, and a lever.
An example of the functions of the information processing device 100 according to the present embodiment has been described above. Each component described above may be implemented by using a general-purpose member or circuit, or may be implemented by hardware specialized for the function of the component. The function of each component may also be implemented by an arithmetic unit such as a CPU (Central Processing Unit) reading a control program, which describes procedures for implementing the function, from a storage medium such as a ROM (Read Only Memory) or a RAM (Random Access Memory) storing the control program, and interpreting and executing the program. Therefore, the configuration to be used may be changed as necessary depending on the level of technology at the time of carrying out the present embodiment. Note that an example of the hardware configuration of the information processing device 100 will be described later in detail.
Note that a computer program for implementing the functions of the information processing device 100 according to the present embodiment as described above may be produced and installed on a personal computer, or the like. It is also possible to provide a computer-readable recording medium having such a computer program stored therein. The recording medium may be a magnetic disk, an optical disk, a magneto optical disk, a flash memory, or the like, for example. The computer program described above may be distributed via a network, for example, without using a recording medium.
<3. Example Arrangement of Display Area>
Next, referring to
As shown in Pattern 3, the sub-information may be arranged in a lower layer under the main information. In this case, the sub-information arranged at the location of a window provided in the main display area DA1 for exposing the lower layer is displayed. As shown in Pattern 4, the sub-information may be arranged in an upper layer over the main information. As shown in Pattern 5, the sub-information may be arranged in the same layer as the main information so that the sub-information surrounds the main information in an L shape. As shown in Pattern 6, the sub-information may be arranged in a corner area of the rectangular area in which the main information is displayed. As shown in Pattern 7, the sub-information may be arranged in a circular frame area provided within the main display area.
As described above, the main information and the sub-information may be arranged in the same layer or may be arranged in different layers. The shape of the sub-display area DA2 is not limited to the example shown in
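Purely as an illustration, the arrangement patterns described above could be encoded as configuration values such as the following; the enumeration name and member names are assumptions that simply mirror the pattern numbers in the text.

```python
# Illustrative configuration values mirroring Patterns 3 to 7 above;
# the enumeration and member names are assumptions.
from enum import Enum

class SubAreaArrangement(Enum):
    LOWER_LAYER_WINDOW = 3   # lower layer, exposed through a window in the main display area DA1
    UPPER_LAYER_OVERLAY = 4  # upper layer over the main information
    SAME_LAYER_L_SHAPE = 5   # same layer, surrounding the main information in an L shape
    CORNER_OF_MAIN_AREA = 6  # corner of the rectangular main display area
    CIRCULAR_FRAME = 7       # circular frame area within the main display area
```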
<4. Display Example and Operation Example>
Next, referring to
Referring first to
Referring next to
Referring next to
As shown in
<5. Layout Process>
Referring next to
Here, a method for automatically laying out an object in a limited area (the sub-display area DA2) will be illustrated. The object to be laid out here may be text, an image, a video, a pattern, etc., for example.
For this, the information processing device 100 analyzes objects included in a content item and the characteristics of the sub-display area DA2. The content analysis section 105 extracts objects included in the content item. For example, where the content item is HTML content, the content analysis section 105 can analyze image data included in the HTML file. For example, the content analysis section 105 can analyze an image file I1 to extract an effective area in the image data. Image data sometimes includes blank areas. Therefore, the content analysis section 105 may extract, as one object, an effective area A of the image data where information is included. The content analysis section 105 may analyze the center-of-gravity position of the effective area A. The content analysis section 105 can also analyze a feature point P in the effective area A. The feature point P may be the face position P1 if a human is included in the image, as shown in the image I1, for example. Where the image shows a car, the feature point P may be the position of the emblem. The content analysis section 105 can also analyze the color of the image, the font size, the type (a human, scenery, text), the target, the presence/absence of an event, etc. A text area may be extracted as an effective area A2, as in an image file I2, and the center of gravity of the effective area A2 is defined as C2. Alternatively, where no blank area is included in the original image data, as in the image file I3, the content analysis section 105 may determine that the entire image file I3 is the effective area, and analyze the center of gravity C3 of the image file I3. When analyzing an HTML file, the content analysis section 105 not only analyzes the objects, but also analyzes the tree structure, syntax, style, colors being used, etc., of the HTML.
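A hedged sketch of the effective-area analysis described above is shown below: it finds the non-blank region of an image, its bounding box, and its center of gravity. Treating near-white pixels as blank is an assumption made for illustration, and feature-point detection such as locating a face position P1 is omitted.

```python
# Hedged sketch of the effective-area analysis: find the non-blank region
# of an image, its bounding box, and its center of gravity. Treating
# near-white pixels as blank is an assumption; feature-point detection
# (such as locating a face) is omitted.
import numpy as np
from PIL import Image

def analyze_effective_area(path: str, blank_threshold: int = 245):
    gray = np.asarray(Image.open(path).convert("L"))   # grayscale pixel values
    mask = gray < blank_threshold                      # True where the pixel carries information
    ys, xs = np.nonzero(mask)
    if xs.size == 0:                                   # fully blank image: no effective area
        return None
    bbox = (int(xs.min()), int(ys.min()),              # left, top,
            int(xs.max()) + 1, int(ys.max()) + 1)      # right, bottom
    centroid = (float(xs.mean()), float(ys.mean()))    # center of gravity of the effective area
    return {"effective_area": bbox, "center_of_gravity": centroid}
```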
The display area analysis section 110 also analyzes the characteristic of the sub-display area DA2. For example, the display area analysis section 110 can analyze the shape, size and display DPI (Dots Per Inch) of the sub-display area DA2.
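Combining the two analyses, the following sketch fits an object's effective area into a sub-display area whose physical size and display DPI are known; the uniform-scaling, centered-placement rule and the function name fit_object are assumptions introduced for illustration.

```python
# A follow-on sketch that fits an object's effective area into a sub-display
# area of known physical size and DPI. The uniform-scaling, centered-placement
# rule and the function name are assumptions.
def fit_object(effective_w: int, effective_h: int,
               area_w_inch: float, area_h_inch: float, dpi: int) -> dict:
    area_w_px = int(area_w_inch * dpi)          # convert physical size to pixels
    area_h_px = int(area_h_inch * dpi)
    scale = min(area_w_px / effective_w, area_h_px / effective_h)
    draw_w, draw_h = int(effective_w * scale), int(effective_h * scale)
    # Center the scaled object inside the sub-display area.
    offset = ((area_w_px - draw_w) // 2, (area_h_px - draw_h) // 2)
    return {"scale": scale, "size": (draw_w, draw_h), "offset": offset}

# Usage: a 400x300 effective area placed in a 2.0 x 0.5 inch strip at 160 DPI.
# fit_object(400, 300, 2.0, 0.5, 160)
```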
Next, as shown in
<6. Operation Example>
Referring next to
Referring first to
On the other hand, if it is determined in step S105 that an animation is in progress, the area animation and the layout are next updated (S125). The layout updated here refers to the layout of objects in the sub-display area DA2.
On the other hand, if it is determined in step S100 that a touch is in progress, it is next determined whether the sub-display area DA2 is being displayed (S130). Now, if the sub-display area DA2 is being displayed, the area animation and the layout are updated (S135).
After these processes are performed, it is determined whether the display has ended (S140). If the display has not ended, the operation returns to step S100 to repeat the process described above.
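The flow described above can be summarized by the simplified, hedged sketch below; the step numbers follow the text, while the state object and its method names are assumptions, and branches not described in the text are omitted.

```python
# A simplified, hedged sketch of the update loop above. The step numbers
# follow the text (S100, S105, S125, S130, S135, S140); the `state` object
# and its method names are assumptions, and undescribed branches are omitted.
def run_display_loop(state):
    while True:
        if state.touch_in_progress():                      # S100: touch in progress?
            if state.sub_area_visible():                   # S130: sub-display area shown?
                state.update_area_animation_and_layout()   # S135: update animation and layout
        elif state.animation_in_progress():                # S105: animation in progress?
            state.update_area_animation_and_layout()       # S125: update animation and layout
        if state.display_ended():                          # S140: end of display?
            break                                          # otherwise repeat from S100
```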
Referring next to
Then, the image generating section 115 generates a layout based on the content item analysis result of step S200 and the display area analysis result of step S210 (S215). Then, the image generating section 115 generates a display image to be displayed on the display device (S220). Then, the display section 120 outputs the generated display image using the display device (S225). Then, it is determined whether the display has ended (S230), and the process described above is repeated until the display ends.
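The generation flow of steps S200 to S230 can likewise be pictured as a simple pipeline; in the hedged sketch below, the callables stand in for the content analysis section 105, the display area analysis section 110, the image generating section 115, and the display section 120, and their names and signatures are assumptions rather than an actual API.

```python
# A hedged end-to-end sketch of steps S200 to S230. The callables stand in
# for the sections of the functional configuration; their names and
# signatures are assumptions.
def display_pipeline(content_data, sub_display_area,
                     analyze_content, analyze_area, generate_layout,
                     render, output, display_ended):
    while not display_ended():                            # S230: repeat until the display ends
        analysis = analyze_content(content_data)          # S200: analyze the content item
        arrangement = analyze_area(sub_display_area)      # S210: analyze the display area
        layout = generate_layout(analysis, arrangement)   # S215: generate a layout
        image = render(layout)                            # S220: generate the display image
        output(image)                                     # S225: output via the display device
```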
<7. Hardware Configuration Example>
Referring next to
Now, an example of a configuration of the information processing device 100 will be described. Referring to
(Telephone Network Antenna 817)
The telephone network antenna 817 is an example of an antenna having a function of wirelessly connecting to a mobile telephone network for telephone calls and communications. The telephone network antenna 817 is capable of supplying call signals received via the mobile telephone network to the telephone processing section 819.
(Telephone Processing Section 819)
The telephone processing section 819 has a function of performing various signal processes on signals transmitted/received by the telephone network antenna 817. For example, the telephone processing section 819 is capable of performing various signal processes on the audio signal received via the microphone 857 and encoded by the encoder 855, and supplying it to the telephone network antenna 817. The telephone processing section 819 is also capable of performing various signal processes on the audio signal supplied from the telephone network antenna 817, and supplying it to the decoder 851.
The GPS antenna 821 is an example of an antenna for receiving signals from positioning satellites. The GPS antenna 821 is capable of receiving GPS signals from a plurality of GPS satellites, and inputting the received GPS signals to the GPS processing section 823.
The GPS processing section 823 is an example of a calculation section for calculating position information based on signals received from positioning satellites. The GPS processing section 823 calculates the current position information based on a plurality of GPS signals input from the GPS antenna 821, and outputs the calculated position information. Specifically, the GPS processing section 823 calculates the position of each GPS satellite from the orbit data of the GPS satellite, and calculates the distance from each GPS satellite to the information processing device 100 based on the time difference between the transmission time and the reception time of the GPS signal. Then, the current three-dimensional position can be calculated based on the position of each GPS satellite and the calculated distance from each GPS satellite to the information processing device 100. Note that the orbit data of the GPS satellite used herein may be included in the GPS signal, for example. Alternatively, the orbit data of the GPS satellite may be obtained from an external server via the Wifi antenna 825.
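The positioning principle described above, namely deriving ranges from signal travel times and solving for a three-dimensional position from several satellite positions and ranges, can be illustrated by the textbook-style sketch below; it is not the implementation of the GPS processing section 823, and receiver clock bias and other real-world corrections are deliberately ignored.

```python
# A textbook-style trilateration sketch (not the GPS processing section 823
# itself): ranges come from signal travel time, and the receiver position is
# refined from several satellite positions and ranges. Receiver clock bias
# and other real-world corrections are deliberately ignored.
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def pseudorange(t_transmit: float, t_receive: float) -> float:
    return C * (t_receive - t_transmit)    # distance from signal travel time

def solve_position(sat_positions: np.ndarray, ranges: np.ndarray,
                   guess=(0.0, 0.0, 0.0), iterations: int = 10) -> np.ndarray:
    """Gauss-Newton refinement of the receiver position (no clock-bias term)."""
    x = np.array(guess, dtype=float)
    for _ in range(iterations):
        diffs = x - sat_positions              # vectors from each satellite to the guess
        dists = np.linalg.norm(diffs, axis=1)  # ranges predicted from the guess
        residuals = ranges - dists
        jacobian = diffs / dists[:, None]      # derivative of each predicted range w.r.t. x
        dx, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        x += dx
    return x
```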
The Wifi antenna 825 is an antenna having a function of transmitting/receiving communication signals to/from a wireless LAN (Local Area Network) communication network in compliance with the Wifi specifications, for example. The Wifi antenna 825 is capable of supplying the received signal to the Wifi processing section 827.
The Wifi processing section 827 has a function of performing various signal processes on signals supplied from the Wifi antenna 825. The Wifi processing section 827 is capable of supplying, to the CPU 839, a digital signal generated from the supplied analog signal.
The geomagnetic sensor 829 is a sensor for detecting the geomagnetism as a voltage value. The geomagnetic sensor 829 may be a 3-axis geomagnetic sensor for detecting the geomagnetism in each of the X-axis direction, the Y-axis direction and the Z-axis direction. The geomagnetic sensor 829 is capable of supplying the detected geomagnetism data to the CPU 839.
The acceleration sensor 831 is a sensor for detecting acceleration as a voltage value. The acceleration sensor 831 may be a 3-axis acceleration sensor for detecting the acceleration along the X-axis direction, the acceleration along the Y-axis direction, and the acceleration along the Z-axis direction. The acceleration sensor 831 is capable of supplying the detected acceleration data to the CPU 839.
The gyro sensor 833 is a type of a measuring instrument for detecting the angle or the angular velocity of an object. The gyro sensor 833 may be a 3-axis gyro sensor for detecting, as a voltage value, the velocity (angular velocity) at which the rotational angle changes about the X axis, the Y axis and the Z axis. The gyro sensor 833 is capable of supplying the detected angular velocity data to the CPU 839.
The atmospheric pressure sensor 835 is a sensor for detecting the ambient atmospheric pressure as a voltage value. The atmospheric pressure sensor 835 is capable of detecting the atmospheric pressure at a predetermined sampling frequency, and supplying the detected atmospheric pressure data to the CPU 839.
The image pickup section 837 has a function of recording a still image or a moving picture through a lens under the control of the CPU 839. The image pickup section 837 may store the recorded image in the storage section 859.
The CPU 839 functions as an arithmetic processing device and a control device, and controls the overall operation within the information processing device 100 by various programs. The CPU 839 may be a microprocessor. The CPU 839 is capable of implementing various functions by various programs.
The ROM 841 is capable of storing programs, operation parameters, etc., used by the CPU 839. The RAM 843 is capable of temporarily storing a program to be used while being executed by the CPU 839, and parameters, or the like, which appropriately vary during the execution.
The operation section 847 has a function of generating an input signal for performing a desired operation. The operation section 847 may include an input section for inputting information, such as a touch sensor, a mouse, a keyboard, a button, a microphone, a switch and a lever, for example, and an input control circuit for generating an input signal based on the input and outputting it to the CPU 839, etc.
The display section 849 is an example of an output device, and may be a display device such as a liquid crystal display (LCD: Liquid Crystal Display) device, an organic EL (OLED: Organic Light Emitting Diode) display device, or the like. The display section 849 is capable of providing information by displaying a screen.
The decoder 851 has a function of performing a decoding, an analog conversion, etc., of the input data under the control of the CPU 839. The decoder 851 is capable of, for example, performing a decoding, an analog conversion, etc., of the audio data which has been input via the telephone network antenna 817 and the telephone processing section 819 to output the audio signal to the speaker 853. The decoder 851 is also capable of, for example, performing a decoding, an analog conversion, etc., of the audio data which has been input via the Wifi antenna 825 and the Wifi processing section 827 to output the audio signal to the speaker 853. The speaker 853 is capable of outputting sound based on the audio signal supplied from the decoder 851.
The encoder 855 has a function of performing a digital conversion, an encoding, etc., of the input data under the control of the CPU 839. The encoder 855 is capable of performing a digital conversion, an encoding, etc., of the audio signal which is input from the microphone 857 to output audio data. The microphone 857 is capable of collecting sound to output the sound as an audio signal.
The storage section 859 is a device for data storage, and may include a storage medium, a recording device for recording data on a storage medium, a reading device for reading out data from a storage medium, a deleting device for deleting data recorded on a storage medium, etc. The storage medium may be, for example, a nonvolatile memory such as a flash memory, an MRAM (Magnetoresistive Random Access Memory), an FeRAM (Ferroelectric Random Access Memory), a PRAM (Phase change Random Access Memory), and an EEPROM (Electrically Erasable and Programmable Read Only Memory), a magnetic recording medium such as an HDD (Hard Disk Drive), or the like.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, whilst the present invention is not limited to the above examples, of course. A person skilled in the art may find various alterations and modifications within the scope of the appended claims, and it should be understood that they will naturally come under the technical scope of the present invention.
For example, while an advertisement is displayed in a sub-display area in the embodiment described above, the present technique is not limited to such an example. For example, the sub-display area may be a dictionary display area for displaying description text for a word displayed in the main display area DA1. In the embodiment described above, the advertisement displayed in the sub-display area may be of content unrelated to the main information displayed in the main display area, or information related to the main information may be extracted and displayed in the sub-display area. An advertisement as sub-information may also be information that reflects the user's preferences based on results of a learning function.
While it is assumed in the above embodiment that processes such as generating a display image are performed on the information processing device 100, which is the client unit, the technical scope of the present disclosure is not limited to such an example. Some of the functions of the information processing device 100 may be implemented on a server connected to the client unit via a network. Such a server can perform processes such as analyzing a content item, analyzing a display area, or generating a display image, for example, in response to an instruction transmitted from the client unit, and transmit a display image or a display control signal to the client unit. Such embodiments are also included within the technical scope of the present disclosure.
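A hedged client-side sketch of such a client/server split is shown below; the endpoint URL, the JSON fields, and the response format are all hypothetical, since the present disclosure does not define a concrete protocol.

```python
# A hedged client-side sketch of the client/server split described above.
# The endpoint URL, the JSON fields, and the response format are all
# hypothetical; no concrete protocol is defined by the present disclosure.
import requests

def request_display_image(server: str, content_url: str, area: dict) -> bytes:
    """Ask a layout server to generate a display image for a sub-display area."""
    resp = requests.post(
        f"{server}/generate-display-image",     # hypothetical endpoint
        json={"content_url": content_url,       # where the content data 40 is stored
              "area": area},                    # e.g. {"w": 320, "h": 80, "dpi": 160}
        timeout=10,
    )
    resp.raise_for_status()
    return resp.content                         # encoded display image bytes

# Usage (hypothetical server and URLs):
# img = request_display_image("https://layout.example.com",
#                             "https://example.com/poster.png",
#                             {"w": 320, "h": 80, "dpi": 160})
```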
Note that steps listed in flow charts in the present specification not only include those processes that are performed chronologically in the order they are listed, but also include those processes that may not be performed chronologically but are performed in parallel or individually. It is understood that even steps that are processed chronologically can be in some cases performed in a different order as necessary.
Additionally, the present technology may also be configured as below.
(1)
An information processing device including:
an obtaining section configured to obtain a content item to be displayed in a first display area of a display screen; and
an image generating section configured to generate a display image by laying out the content item based on an arrangement of the first display area.
(2)
The information processing device according to (1), wherein the content item includes sub-information different from main information displayed in the display screen.
(3)
The information processing device according to (2), wherein the sub-information is information subordinate to the main information.
(4)
The information processing device according to (2) or (3), wherein the first display area is arranged in a lower layer under a second display area in which the main information is displayed.
(5)
The information processing device according to any one of (1) to (4), wherein
the display screen includes a plurality of first display areas, and
the image generating section lays out the content item based on an arrangement of the plurality of first display areas.
(6)
The information processing device according to (5), wherein the image generating section determines an angle at which the object is arranged based on the arrangement of the plurality of first display areas.
(7)
The information processing device according to any one of (1) to (6), wherein
a size of the first display area is changed in response to an operation, and
the image generating section generates the display image in accordance with the size of the first display area.
(8)
The information processing device according to any one of (1) to (7), wherein
the obtaining section obtains objects included in the content item, and
the image generating section lays out the content item by determining an arrangement of the objects.
(9)
The information processing device according to (8), wherein the image generating section generates the display image including one or more of the objects included in the content item.
(10)
The information processing device according to (8) or (9), wherein the image generating section generates the display image in which the objects are arranged based further on attributes of the objects.
(11)
The information processing device according to (9) or (10), wherein
a size of the first display area is enlarged in response to an enlarging operation, and
before the enlarging operation, the image generating section generates the display image which preferentially includes objects associated with the entire content item.
(12)
The information processing device according to (11), wherein after the enlarging operation, the image generating section determines a degree of priority of each object based on a display history of the object.
(13)
An information processing method including:
obtaining a content item to be displayed in a first display area of a display screen; and
generating a display image by laying out the content item based on an arrangement of the first display area.
(14)
A program for causing a computer to function as an information processing device including:
an obtaining section configured to obtain a content item to be displayed in a first display area of a display screen; and
an image generating section configured to generate a display image by laying out the content item based on an arrangement of the first display area.
Number | Date | Country | Kind
--- | --- | --- | ---
2012-027026 | Feb 2012 | JP | national

Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/JP2012/083824 | 12/27/2012 | WO | 00