This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2020-129302 filed Jul. 30, 2020.
The present disclosure relates to an information processing apparatus and a non-transitory computer readable medium.
For example, an application for adding augmented reality (AR) content to an image taken in a city is available on smartphones. For example, services such as a service of overlaying information concerning a building or a shop (hereinafter referred to as a “building or the like”) on top of the building or the like within an image, a service of overlaying a translation of a text on top of the text within an image, and a service of overlaying a virtual creature or the like on an image are available.
See, for example, Japanese Unexamined Patent Application Publication No. 2013-125328.
It is expected that the number of pieces of information and the amount of information associated with a subject within an image will increase in the future as the quality of such services improves. However, displaying too many pieces of information or too much information within the limited space of a display surface deteriorates viewability of information, resulting in poor accessibility to target information.
Aspects of non-limiting embodiments of the present disclosure relate to a technique of giving information associated with a subject within a taken image in various manners as compared with a case where such information is given in a uniform manner.
Aspects of certain non-limiting embodiments of the present disclosure overcome the above disadvantages and/or other disadvantages not described above. However, aspects of the non-limiting embodiments are not required to overcome the disadvantages described above, and aspects of the non-limiting embodiments of the present disclosure may not overcome any of the disadvantages described above.
According to an aspect of the present disclosure, there is provided an information processing apparatus including a processor configured to display an image taken by an imaging device in each of a plurality of regions provided within a display surface and set, for each of the plurality of regions, which information is to be displayed in association with a position of a subject within the image.
Exemplary embodiments of the present disclosure will be described in detail based on the following figures, wherein:
Exemplary embodiments of the present disclosure are described below with reference to the drawings.
System Configuration
The network system 1 includes a mobile terminal 10 operated by a user and a server (hereinafter referred to as an “AR server”) 20 that offers an AR service to the mobile terminal 10.
The AR service is a service of reflecting content (hereinafter referred to as “AR content”) created by a computer in an image of the real world taken by the mobile terminal 10. The AR content in the present exemplary embodiment is an image.
The AR service in the present exemplary embodiment becomes available in response to an instruction given from a user on the mobile terminal 10. This instruction is, for example, an instruction to execute an application program (hereinafter referred to as an “app”) for using the AR service. In a case where the AR service is offered as a cloud service, access to the AR server 20 is also an instruction from a user.
In the present exemplary embodiment, a marker-less AR service that does not need a marker to display AR content is intended. Note, however, that the AR service may be a location-based AR service that uses only positional information.
The AR content is displayed in association with a subject recognized as a target of the AR service in an image taken by the mobile terminal 10. In this respect, the AR content in the present exemplary embodiment is an example of information displayed in association with a position of a subject. An image reflecting the AR content is also referred to as an “AR composite image”. A user sees an AR composite image displayed on the mobile terminal 10.
The mobile terminal 10 according to the present exemplary embodiment communicates with the AR server 20 through generations of mobile communication systems or a network combining a wireless local area network (LAN) and the Internet. Which path is used for communication is decided depending on the environment in which the mobile terminal 10 is used or by the user's selection.
In the present exemplary embodiment, the mobile communication systems are classified into the third generation (i.e., 3G) and the fourth generation (i.e., 4G) of a relatively low communication speed and the fifth generation (i.e., 5G) and the sixth generation (i.e., 6G) of a relatively high communication speed.
In the present exemplary embodiment, a user can use the AR service in real time without restriction in a case where 5G or 6G communication is available. Meanwhile, in a case where only 3G or 4G communication is available, there may be delays or restrictions on content when a user uses the AR service.
In the present exemplary embodiment, any one of IEEE 802.11a, 11b, 11g, 11n, 11ac, 11ad, and 11ax is used as the wireless LAN. Note, however, that the entire communication path may be a wired path. Even the speed of communication over a wired path depends on the environment in which the mobile terminal 10 is used.
Assumed examples of the mobile terminal 10 according to the present exemplary embodiment include a smartphone, a tablet terminal, a gaming console, and a wearable terminal.
The mobile terminal 10 is a computer and includes a camera, a display, and a communication module. Although only one mobile terminal 10 is illustrated in
The AR server 20 according to the present exemplary embodiment works together with the mobile terminal 10 and causes an AR composite image to be displayed on the mobile terminal 10. Although a single AR server 20 is illustrated in
Configurations of Devices
Configuration of Mobile Terminal 10
The mobile terminal 10 illustrated in
In the present exemplary embodiment, the internal memory 109 and the external memory 110 are semiconductor memories. The internal memory 109 has a read only memory (ROM) in which a Basic Input Output System (BIOS) and the like are stored and a random access memory (RAM) used as a first storage device. The CPU 101 and the internal memory 109 constitute a computer. The CPU 101 uses the RAM as a program work space. In the external memory 110, firmware and apps are stored.
The display 102 is, for example, an organic Electro Luminescent (EL) display or a liquid crystal display. The display 102 according to the present exemplary embodiment is provided on a single substrate. In the present exemplary embodiment, an image and other kinds of information are displayed on a surface (i.e., a display surface) of the display 102. Examples of the image include an image (hereinafter referred to as a “taken image”) taken by the camera 106. The display 102 according to the present exemplary embodiment is neither bendable nor foldable.
The film sensor 103 is disposed on the surface of the display 102. The film sensor 103 does not hinder observation of information displayed on the display 102 and detects a position operated by a user on the basis of a change in capacitance.
The GPS module 104 is used to measure a position of the mobile terminal 10. Note that the position of the mobile terminal 10 may be measured by using sensor technology other than the GPS module 104. For example, the position of the mobile terminal 10 may be measured on the basis of a Bluetooth (Registered Trademark) signal or a WiFi signal received from a beacon.
The inertial sensor 105 is, for example, a 6-axis sensor that detects an acceleration and an angular velocity. A posture of the mobile terminal 10 can be detected by using the inertial sensor 105. In addition, the mobile terminal 10 may include a geomagnetic sensor. A direction in which an image is taken may be specified by using the geomagnetic sensor.
The camera 106 uses, for example, a Complementary Metal Oxide Semiconductor (CMOS) image sensor or a Charge-Coupled Device (CCD) image sensor. In the present exemplary embodiment, the camera 106 is integral with the mobile terminal 10. The mobile terminal 10 may include plural cameras 106. The camera 106 is an example of an imaging device.
The microphone 107 is a device that converts user's voice or ambient sound into an electric signal. In the present exemplary embodiment, the microphone 107 is used to receive a voice instruction from a user. For example, the microphone 107 is used to receive an instruction concerning the number of regions used to display AR content and the kind of AR content displayed in each region.
The speaker 108 is a device that converts an electric signal into sound and outputs the sound.
The communication module 111 has a module that supports a mobile communication system and a module that supports a wireless LAN.
The CPU 101 monitors a state of communication of the communication module 111 (see
This determining process is performed, for example, on the basis of a used communication method and an effective speed of communication with the AR server 20. For example, in a case where the speed of communication is lower than a predetermined speed, the CPU 101 obtains a negative result in step 1. Conversely, in a case where the speed of communication is higher than a predetermined speed, the CPU 101 obtains a positive result in step 1.
In a case where a positive result is obtained in step 1, the CPU 101 determines whether or not a display mode is a mode for displaying an image taken by the camera 106 (see
In a case where a positive result is obtained in both of steps 1 and 2, the CPU 101 reflects AR content designated by the user in an image displayed in each region (step 3). In the present exemplary embodiment, the number of regions is two. The user may preset three or more regions. However, setting regions means dividing the display surface, and therefore setting too many regions relative to the area of the display surface may undesirably deteriorate viewability of AR content. In view of this, in the present exemplary embodiment, the number of regions in which the same image taken by the camera 106 is displayed is two.
The AR content displayed in each region can be designated by the user. By designating AR content displayed in each region, the number of pieces of AR content displayed in each region can be reduced, and association between AR content and a subject can be easily checked. Note that the AR content displayed in each region can be set in advance.
In the present exemplary embodiment, the user can give an instruction for changing a size of each region. Although the two regions basically have the same size, the sizes of the regions may be changed on the basis of an instruction from the user. For example, the size of one region may be set to 60% of the display surface and the size of the other region may be set to 40% of the display surface.
In addition, the user can perform a pinch-out gesture of increasing a distance between two fingers to enlarge an image displayed in a region and a pinch-in gesture of decreasing a distance between two fingers to reduce an image displayed in a region while keeping the sizes of the regions.
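The pinch-gesture behavior above can be sketched as a scale-factor computation. This is a hypothetical illustration, not code from the disclosure; the function name and the clamping bounds are assumptions.

```python
# Hypothetical pinch-zoom scale: the zoom factor for the image displayed in a
# region is the ratio of the current to the initial distance between the two
# touch points. The clamping bounds are illustrative assumptions.
def pinch_scale(initial_distance: float, current_distance: float,
                min_scale: float = 0.5, max_scale: float = 4.0) -> float:
    scale = current_distance / initial_distance
    return max(min_scale, min(max_scale, scale))
```

A pinch-out (fingers moving apart) yields a factor above 1 and enlarges the image; a pinch-in yields a factor below 1 and reduces it, while the sizes of the regions themselves stay fixed.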
In a case where a negative result is obtained in step 1 or 2, the CPU 101 displays AR content in a single region (step 4). This case corresponds to a situation where it is difficult to reflect AR content instantly due to a low communication speed or a situation where the user wants AR content to be displayed in a single region.
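Steps 1 through 4 can be summarized as a small decision function. The sketch below is illustrative only; the 10 Mbps threshold separating low-speed (3G/4G) from high-speed (5G/6G) communication and all names are assumptions, not values from the disclosure.

```python
# Assumed cutoff separating relatively low-speed (3G/4G) from relatively
# high-speed (5G/6G) communication; the disclosure does not give a number.
SPEED_THRESHOLD_MBPS = 10.0

def decide_display_mode(effective_speed_mbps: float,
                        two_region_mode_selected: bool) -> int:
    """Return the number of regions in which AR content is displayed."""
    # Step 1: is the effective communication speed high enough?
    if effective_speed_mbps < SPEED_THRESHOLD_MBPS:
        return 1  # step 4: fall back to displaying AR content in a single region
    # Step 2: has the user selected the mode for displaying in two regions?
    if not two_region_mode_selected:
        return 1  # step 4
    # Step 3: reflect the designated AR content in each of the two regions
    return 2
```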
Configuration of AR server 20
The AR server 20 illustrated in
In the present exemplary embodiment, the semiconductor memory 202 is constituted by a ROM in which a BIOS and the like are stored and a RAM used as a first storage device. The CPU 201 and the semiconductor memory 202 constitute a computer. The CPU 201 uses the RAM as a program work space.
The hard disk device 203 is an auxiliary storage device in which an operating system and apps are stored. The apps include a program used to offer the AR service.
Furthermore, the hard disk device 203 according to the present exemplary embodiment stores therein, for example, a database used for detection of a subject included in an image taken by the mobile terminal 10 (see
The subject according to the present exemplary embodiment is a thing recognized as an object, for example, by artificial intelligence among things included in an image taken by the camera 106 (see
The CPU 201 acquires information on a position of the mobile terminal 10 and image data of a taken image through communication with the mobile terminal 10 (see
In a case where the position of the mobile terminal 10 is known, the accuracy of recognition of a subject is sometimes improved. Note, however, that in a case where a subject is characteristic, the position where the image was taken can be specified from image data alone. In a case where the image contains a unique building or landscape, the position of the mobile terminal 10 can be specified.
In a case where the AR service is translation of text information taken as a subject, information on a position where the subject is taken is unnecessary. That is, information on the position is not necessarily needed depending on the kind of AR service used by the user.
Although image data of images taken by the camera 106 (see
Alternatively, only some of image data extracted through preprocessing performed by the mobile terminal 10 may be uploaded from the mobile terminal 10 to the AR server 20 or information indicative of characteristics of an image extracted through preprocessing of the mobile terminal 10 may be uploaded from the mobile terminal 10 to the AR server 20.
In an environment in which 3G or 4G is used for communication between the mobile terminal 10 and the AR server 20, AR content can be offered in a shorter time by reducing an amount of uploaded information.
Next, the CPU 201 acquires AR content that has been associated or is to be associated with a subject extracted from the image data (step 12) and transmits the acquired AR content and a position where the AR content is associated within the image (step 13).
For example, in a case where the subject is a building or a natural object, information associated with the subject or retrieved information about the specified building or the like is transmitted as AR content. Meanwhile, in a case where the subject is text information, a text converted from the subject or translation of the subject is transmitted as AR content.
The CPU 201 transmits a position where the transmitted AR content is associated within the image.
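The server-side flow of steps 11 through 13 might be sketched as follows. Here `recognize_subjects` and the content store are hypothetical stand-ins for the subject-detection database and the stored AR content, not APIs named in the disclosure.

```python
# Hypothetical sketch of the AR server's processing (steps 11-13).
# recognize_subjects(image_data, terminal_position) is assumed to return
# (subject_id, position_in_image) pairs; content_store maps subject ids
# to AR content. Both are illustrative stand-ins.
def serve_ar_request(image_data, terminal_position, content_store,
                     recognize_subjects):
    results = []
    # Step 11: detect subjects in the uploaded image
    for subject_id, position_in_image in recognize_subjects(image_data,
                                                            terminal_position):
        # Step 12: look up AR content associated with the subject
        content = content_store.get(subject_id)
        if content is not None:
            # Step 13: return the content with its in-image anchor position
            results.append((content, position_in_image))
    return results
```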
Screen Examples
Screen examples in the first exemplary embodiment are described below with reference to
Screen Example 1
In
In
In
In
As described above, the number of pieces of AR content associated with a single object will increase in the future as the quality of services improves.
In the present exemplary embodiment, the screen 120 is displayed, for example, when a setting button (not illustrated) or a specific position is tapped. On the screen 120, three options are displayed. The three options are a mode for displaying AR content in two regions, a mode for displaying AR content in a single region, and a mode for displaying no AR content.
The mode for displaying AR content in a single region is a mode for displaying an image taken by the mobile terminal 10 and AR content associated with a subject within the image on the display surface as illustrated in
The mode for displaying AR content in two regions is a mode for displaying an image taken by the mobile terminal 10 and AR content related to a subject within the image in two regions within the display surface, respectively. In the present exemplary embodiment, the display surface is divided into left and right regions that have the same size.
The mode for displaying no AR content is a mode for displaying only an image taken by the mobile terminal 10 on the display 102 as illustrated in
In the case of the screen 120, a checkbox corresponding to the mode for displaying AR content in two regions has been checked. Accordingly, the screen 121 is used to set AR content to be displayed in the left and right two regions.
On the screen 121, options concerning AR content to be displayed in the left and right two regions are displayed. Hereinafter, a region on a left side of the display surface is referred to as a “left region”, and a region on a right side of the display surface is referred to as a “right region”.
In
Among the options, “FOOD”, “CLOTHING”, “REAL ESTATE”, “SIGHTSEEING”, “CLEANING”, “SUPERMARKET”, “BARBER”, “THEATER”, “OFFICE”, and “ACCOMMODATION” are examples of kinds of subjects and are examples of standards used to determine which AR content is to be displayed.
Among the options, “DISPLAY ADS”, “DISPLAY MENUS”, “WITHIN 10 M”, and “10 M OR FARTHER AWAY” are examples of standards used to determine which AR content is to be displayed.
These options may be given irrespective of a taken image or may be given in accordance with a taken image. In the present exemplary embodiment, the AR server 20 recognizes a subject included in a taken image. Note, however, that the mobile terminal 10 may recognize a subject.
The option “DISPLAY ADS” is an option for causing advertisements, which are subcontent, to be included in AR content.
The option “DISPLAY MENUS” is an option for causing menus to be included in AR content.
The option “WITHIN 10 M” is an option for restricting displayed AR content to AR content of subjects within 10 m from the mobile terminal 10 in a direction in which the image is taken.
The option “10 M OR FARTHER AWAY” is an option for restricting displayed AR content to AR content of subjects that are 10 m or farther away from the mobile terminal 10 in a direction in which the image is taken. A distance to a subject may be a rough value.
For example, in a case where a position of a subject is registered in the AR server 20, a distance between the mobile terminal 10 and the subject can be calculated by using information on a position measured by the mobile terminal 10. In a case where the mobile terminal 10 is provided with a module for Light Detection and Ranging (LiDAR) or in a case where the mobile terminal 10 has an app for measuring a distance to a subject on the basis of a taken image, the distance to the subject can be measured by the mobile terminal 10 alone.
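As one way of computing the terminal-to-subject distance from two registered positions, the haversine great-circle formula could be used. This is a sketch under the assumption that both positions are given as (latitude, longitude) pairs in degrees; the disclosure does not specify a formula.

```python
import math

# Minimal sketch of estimating the distance to a subject whose position is
# registered in the AR server, using the position measured by the terminal.
# The haversine formula and (lat, lon)-in-degrees inputs are assumptions.
def distance_m(lat1, lon1, lat2, lon2, earth_radius_m=6_371_000.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def within_10_m(terminal, subject):
    """Implements the 'WITHIN 10 M' standard for one terminal/subject pair."""
    return distance_m(*terminal, *subject) <= 10.0
```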
Although two standards (“WITHIN 10 M” and “10 M OR FARTHER AWAY”) are prepared as standards concerning a distance and no upper limit is set in
Note that, the threshold value “10 M” is merely an example, and one or more other threshold values may be used. Furthermore, a distance that gives a displayed range may be changed or designated by a user.
Settings using the screen 120 and the screen 121 illustrated in
In the former case, the mobile terminal 10 uses the above settings for selection of AR content to be displayed in the regions from among the AR content given by the AR server 20.
In the latter case, the AR server 20 uses the above settings for selection of AR content that satisfies a standard designated by the user. In this case, the AR server 20 gives only selected AR content to the mobile terminal 10, and the mobile terminal 10 causes the given AR content to be displayed in a corresponding region.
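Selection against the user-designated standards (kind of subject, distance range) might look like the following sketch, whether it runs on the terminal or on the AR server 20. The item fields `kind` and `distance_m` are assumed for illustration, not specified in the disclosure.

```python
# Hypothetical filtering of AR content by per-region standards: a set of
# subject kinds (e.g. "FOOD", "CLOTHING") and optional distance bounds
# corresponding to "WITHIN 10 M" / "10 M OR FARTHER AWAY".
def select_for_region(ar_items, kinds=None, max_distance_m=None,
                      min_distance_m=None):
    selected = []
    for item in ar_items:  # each item: dict with "kind" and "distance_m" keys
        if kinds is not None and item["kind"] not in kinds:
            continue
        if max_distance_m is not None and item["distance_m"] > max_distance_m:
            continue
        if min_distance_m is not None and item["distance_m"] < min_distance_m:
            continue
        selected.append(item)
    return selected
```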
Although AR content to be displayed on the display 102 is designated by the user in the present exemplary embodiment, AR content to be displayed on the display 102 may be decided in accordance with a subject within an image by the AR server 20.
Screen Example 2
In
In
In
Meanwhile, the number of pieces of AR content displayed over the image in the left region 102A and the number of pieces of AR content displayed over the image in the right region 102B are smaller than those in
Although viewability of AR content is improved in the case of
In the screen example illustrated in
In
Although viewability of AR content in the right region 102B, whose size is reduced, may decrease in
Screen Example 3
In
In the left region 102A illustrated in
In the right region 102B illustrated in
The menus and advertisements displayed as AR content are registered in the AR server 20.
These pieces of information are, for example, registered by shops in advance, registered by a provider of the AR service in advance, registered by a user of the AR service, or collected from the Internet.
Screen Example 4
Also in
In
In
In a case where 5G communication, which allows a user to transmit and receive a large volume of data without perceiving a delay, is available, an image taken by the camera 106 (see
Meanwhile, when communication switches from 5G to 4G, the mobile terminal 10 switches to a mode for displaying AR content in a single region. Furthermore, the mobile terminal 10 limits the number of displayed pieces of AR content.
By setting a limit on the number of displayed pieces of AR content, the AR content can follow the taken image better.
In the case of 4G communication, the number of pieces of AR content displayed on the display 102 is limited to a predetermined threshold value or less. The threshold value may be set by an individual user. A threshold value to which the number of displayed pieces of AR content is limited may vary depending on displayed AR content.
The number of displayed pieces of AR content may be set in advance or may be set by a user.
AR content displayed after switching to 4G communication may be random, may be limited to one for each kind, or may be limited to AR content of a specific kind. This also can be set by a user.
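The limiting behavior after switching to 4G could be sketched as below. Because both the threshold and the one-per-kind policy are user-settable per the description above, the defaults and field names here are assumptions.

```python
# Hypothetical limit on displayed AR content when communication switches from
# 5G to 4G: optionally keep only one piece per kind, then cap the total count
# at a threshold. Field names are illustrative assumptions.
def limit_ar_content(items, threshold, one_per_kind=False):
    if one_per_kind:
        seen, picked = set(), []
        for item in items:
            if item["kind"] not in seen:
                seen.add(item["kind"])
                picked.append(item)
        items = picked
    return items[:threshold]
```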
Screen Example 5
The “edit” includes new registration. The screen illustrated in
On the edit screen, a screen for receiving entry of a comment is displayed in the left region 102A of the display 102, and an enlarged image of a subject with which the comment is to be associated is displayed in the right region 102B. The edit screen is an example of information used for setting of AR content associated with a subject.
In
In the present exemplary embodiment, the entered comment can be associated not with an entire subject but with a specific position of the subject. In the example of
Information on a comment on a subject entered by the mobile terminal 10 and a position where the comment is associated is sent from the mobile terminal 10 to the AR server 20 and is stored in the AR server 20. The comment is given as AR content from the AR server 20 to the mobile terminal 10.
In a case where plural pieces of AR content are registered for a single subject, overlap among these pieces of AR content reduces viewability for the user.
In view of this, in the present exemplary embodiment, the mobile terminal 10 is provided with a function for improving viewability of AR content.
The CPU 101 determines whether or not plural pieces of AR content are associated with the same subject when acquiring AR content from the AR server 20 (see
In the present exemplary embodiment, AR content registered by a user of the AR service is shared among plural users using the same service. Note, however, that displayed AR content may be managed for each user and AR content edited by other users may be excluded from displayed AR content.
In a case where a positive result is obtained in step 21, the CPU 101 adjusts positions of the plural pieces of AR content so that the plural pieces of AR content do not overlap one another on the screen (step 22). This processing improves viewability of AR content. Note that the adjustment of the positions depends on the size of the display 102 of the mobile terminal 10. The larger the size is, the easier reduction of overlap among the plural pieces of AR content is.
In a case where no AR content or only a single piece of AR content is associated with a single subject, the CPU 101 obtains a negative result in step 21. In this case, the CPU 101 does not adjust a position of the AR content.
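The position adjustment of step 22 might be sketched as stacking overlapping display rectangles vertically. Treating each piece of AR content as an axis-aligned rectangle, and the push-down strategy itself, are assumptions for illustration; the disclosure only states that positions are adjusted so the pieces do not overlap.

```python
# Minimal sketch of step 22: place AR content rectangles top to bottom and
# push a rectangle below any previously placed one it overlaps.
def adjust_positions(boxes, gap=4):
    """boxes: list of (x, y, w, h); returns boxes shifted to avoid overlap."""
    adjusted = []
    for x, y, w, h in sorted(boxes, key=lambda b: b[1]):
        for ax, ay, aw, ah in adjusted:
            overlaps = x < ax + aw and ax < x + w and y < ay + ah and ay < y + h
            if overlaps:
                y = ay + ah + gap  # push below the previously placed box
        adjusted.append((x, y, w, h))
    return adjusted
```

As noted above, how much overlap can be removed depends on the size of the display 102: a larger display surface leaves more room to spread the rectangles apart.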
Examples of Configurations of System and Device
In the network system 1A according to the present exemplary embodiment, a mobile terminal 10A whose display surface is deformable is connected to an AR server 20.
A body 11 of the mobile terminal 10A has two body panels 11A and 11B and a hinge 12. The two body panels 11A and 11B are connected to each other with the hinge 12 interposed therebetween. The hinge 12 is an example of a bending part. The hinge 12 is located so as to divide the display 102 into halves in a longitudinal direction. In
Although the hinge 12 is illustrated as an example of a bending part in the present exemplary embodiment, the bending part may be, for example, a deformable material such as plastic, resin, or rubber, or a connecting fitting constituted by movable components.
The hinge 12 used in the present exemplary embodiment can be bent so that a surface on which the display 102 is provided becomes a ridge side. The display 102 and a film sensor 103 used in the present exemplary embodiment are made of a foldable material. The display 102 and the film sensor 103 are, for example, provided on a film-shaped plastic substrate.
In the present exemplary embodiment, deformation of the outer shape of the mobile terminal 10A is associated with display settings of AR content illustrated in
Specifically, the unfolded shape of the mobile terminal 10A is associated with the mode for displaying AR content in a single region, and the folded shape of the mobile terminal 10A is associated with the mode for displaying AR content in two regions.
In the present exemplary embodiment, a left side and a right side of the display 102 with respect to the position of the hinge 12 are a left region 102A and a right region 102B, respectively.
The mobile terminal 10A illustrated in
Screen Examples
Screen examples according to the second exemplary embodiment are described below with reference to
Screen Example 1
The shape of
The example illustrated in
Screen Example 2
In the case of
Screen Example 3
The screen example illustrated in
In
Screen Example 4
A change from
This example is a combination of the screen example 2 and the screen example 3 and shows a case where a mode is changed depending on whether the display 102 is unfolded for the first time or the second time.
In
Screen Example 5
The screen example illustrated in
As described above, according to the mobile terminal 10A whose display 102 is deformable, the deformation can be used to change a display mode. The edit screen is an example of information used to set AR content associated with a subject.
In the above exemplary embodiments, a case where only one display is provided on a single side of a body has been described. In the present exemplary embodiment, a case where a display is provided on both of a front surface and a rear surface of a body will be described.
The mobile terminal 10B used in the present exemplary embodiment is configured such that a first display 102 is provided on a front surface side of a body 11, and a second display 102C is provided on a rear surface side of the body 11. A size of the second display 102C is about half of a size of the first display 102.
The display 102 of the mobile terminal 10B used in the present exemplary embodiment is also deformable as with the display 102 of the mobile terminal 10A used in the second exemplary embodiment. However, the mobile terminal 10B according to the present exemplary embodiment is folded so that the surface on which the first display 102 is provided becomes a valley side. That is, the hinge 12A operates in a direction opposite to the hinge 12 used in the second exemplary embodiment.
In the present exemplary embodiment, the mobile terminal 10B is folded so that the first display 102 becomes a valley side. Accordingly, when the mobile terminal 10B is folded, only the second display 102C is observable from an outside. Accordingly, in the state where the mobile terminal 10B is folded, an image is displayed only on the second display 102C.
In the present exemplary embodiment, an image is displayed only on the second display 102C in the state where the mobile terminal 10B is folded. Meanwhile, an image is displayed only on the first display 102 in a state where the mobile terminal 10B is unfolded.
Also in the present exemplary embodiment, deformation of the mobile terminal 10B may be associated with display settings as in the mobile terminal 10A (see
Specifically, display settings employed in the state where an image is displayed only on the second display 102C may correspond to the display settings employed in the state where the mobile terminal 10A is folded, and display settings employed in the state where an image is displayed only on the first display 102 may correspond to the display settings employed in the state where the mobile terminal 10A is unfolded.
The exemplary embodiments of the present disclosure have been described above, but the technical scope of the present disclosure is not limited to the scope described in the above exemplary embodiments. It is apparent from the claims that various changes or modifications of the above exemplary embodiments are also encompassed within the technical scope of the present disclosure.
(1) The above exemplary embodiments have discussed a case where AR content is an image. However, sound may be used as AR content. For example, a sound effect or explanatory sound according to a subject may be added. Alternatively, MR content may be added as information associated with a subject within an image taken by the camera 106 instead of the AR content.
The MR content is information that creates mixed reality (MR), whose degree of fusion with the real world is higher than that of AR. In mixed reality, MR content is arranged in a real space like an object in the real world, and therefore plural users can recognize the MR content from plural directions at the same time. Examples of the MR content include a signboard, a traffic sign, and a direction board. Further examples of the MR content include image information that changes according to an observation position, like an object in the real world.
(2) In the above exemplary embodiments, part or all of processing executed by the AR server 20 (see
Similarly, part or all of processing executed by the mobile terminal 10 etc. in the above description may be executed by the AR server 20.
(3) The above exemplary embodiments have discussed a case where the mobile terminal 10A has a single deformable display 102 (see
The mobile terminal 10C illustrated in
In the case of the mobile terminal 10C, two body panels 11A and 11B are attached to a hinge 13 so as to be rotatable in both directions. The hinge 13 has a rotary shaft to which the body panel 11A is attached so as to be rotatable in both directions and a rotary shaft to which the body panel 11B is attached so as to be rotatable in both directions. This allows the mobile terminal 10C to be folded so that the two displays 102 face each other and allows the mobile terminal 10C to be folded so that each of the two displays 102 faces an outer side.
The body panel 11A, the body panel 11B, and the display 102 used in the present exemplary embodiment have high rigidity and are not deformable.
(4) Although the display surface is made deformable into plural shapes by connecting the body panel 11A (see
According to the mobile terminal 10D illustrated in
The mobile terminal 10E illustrated in
According to the mobile terminal 10E illustrated in
(5) Although the display 102 (see
According to this display method, an original image hidden by the AR content can be easily checked.
(6) Although a case where the camera 106 (see
(7) In the embodiments above, the term “processor” refers to hardware in a broad sense. Examples of the processor include general processors (e.g., CPU: Central Processing Unit) and dedicated processors (e.g., GPU: Graphics Processing Unit, ASIC: Application Specific Integrated Circuit, FPGA: Field Programmable Gate Array, and programmable logic device).
In the embodiments above, the term “processor” is broad enough to encompass one processor or plural processors in collaboration which are located physically apart from each other but may work cooperatively. The order of operations of the processor is not limited to one described in the embodiments above, and may be changed.
The foregoing description of the exemplary embodiments of the present disclosure has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, thereby enabling others skilled in the art to understand the disclosure for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the disclosure be defined by the following claims and their equivalents.
Foreign Application Priority Data

2020-129302 | Jul. 30, 2020 | JP | national

Foreign Patent Documents

JP 2013-125328 | Jun. 2013
JP 2013-183333 | Sep. 2013
JP 2015-075832 | Apr. 2015
JP 2019-125345 | Jul. 2019
WO 2013/088819 | Jun. 2013

Other References

Makoto Tomioka et al., "Information Processor, Information Processing Method, Program and System", JP 2019-125345 (2019).
"Office Action of Japan Counterpart Application", issued on Jan. 23, 2024, with English translation thereof, pp. 1-6.

Publication

US 2022/0038637 A1 | Feb. 2022