This application is based on Japanese Patent Application No. 2016-003011 filed on Jan. 8, 2016, the contents of which are hereby incorporated by reference.
Field of the Invention
The present invention relates to an image forming apparatus such as an MFP (Multi-Functional Peripheral) and a technique relevant thereto.
Description of the Background Art
As a user interface displayed on an operation part of an image forming apparatus such as an MFP, in addition to a system using an operation screen originally provided in the MFP, a system using a user-specific operation screen (my panel screen) customized for each user has been used (see Japanese Patent Application Laid-Open Gazette No. 2012-168819 (Patent Document 1)).
Further, there is a technique in which a separate user interface (implemented by software) is used on each of two different platforms constructed in the MFP. One of these two user interfaces (the first user interface) is a user interface (referred to also as a "standard user interface") operating on a platform for controlling the MFP (referred to also as a "standard platform"). The other (the second user interface) is, for example, a user interface (hereinafter referred to also as an "IWS (Internal Web Server) user interface") operating on an "IWS platform". Herein, the IWS platform refers to a platform for transmitting/receiving information between a web server which is provided inside the MFP and a web browser which is provided inside the MFP.
The first user interface (standard user interface) has, for example, many operation screens, in which display buttons and the like for receiving many types of operations are provided.
Further, in the second user interface (e.g., the IWS user interface), a user-specific operation screen (customized screen) which is customized for each user can be used. When each user customizes an operation screen (panel screen) in accordance with his own preference, the user can use a convenient operation screen (customized screen) that takes into account, for example, the frequency of use of each button.
In the conventional IWS user interface (customized screen), however, only main buttons can be customizably arranged, and some buttons cannot be used in the conventional IWS user interface (customized screen).
Herein, in order to transmit an instruction content given by a user operation (user manipulation) in the IWS user interface (customized screen) on the IWS platform, an interface (software interface) for transmitting/receiving information between the standard platform and the IWS platform is generally provided. In more detail, the interface is provided as an API (Application Programming Interface) or the like.
Conventionally, however, due to various circumstances, the APIs (APIs for cooperation between the two platforms) are prepared in advance only for some of the buttons, i.e., only for main buttons, in the standard user interface. In other words, among all the buttons in the standard user interface, there are some buttons for which corresponding APIs (APIs for cooperation between the two platforms) are not prepared. As a result, a button for which the corresponding API is not prepared cannot be used in the conventional IWS user interface. For this reason, as described above, some buttons cannot be used in the conventional IWS user interface.
Additionally providing (additionally generating) APIs for cooperation between the two platforms is one proposal for reducing the number of buttons which cannot be used in the customized screen. It is not preferable, however, to additionally provide a corresponding API for cooperation between the two platforms for each of the many buttons, since this requires a large number of development steps.
It is an object of the present invention to provide a technique which makes it possible to eliminate any limitation caused by whether or not an action instruction code for cooperation (an API for cooperation, or the like) between two user interfaces exists, and to arrange relatively diverse buttons in a customized screen of one of the two user interfaces.
The present invention is intended for an image forming apparatus having a first user interface operating on a first platform and a second user interface operating on a second platform and capable of being customized by a user. According to a first aspect of the present invention, the image forming apparatus comprises a determination part for determining recognition information which is information to be used for recognizing an instruction content given by a user operation, among action instruction information based on the user operation and information on an operation position of the user operation, which are two different types of information, when the user operation is performed in a customized screen of the second user interface, a conversion part for converting operation position information in the second user interface into operation element information which is information of a corresponding operation element in a corresponding operation screen of the first user interface when the information on the operation position of the user operation is determined as the recognition information, and a recognition part for recognizing the instruction content given by the user operation on the basis of at least one of the action instruction information and the operation element information.
The present invention is also intended for a non-transitory computer-readable recording medium. According to a second aspect of the present invention, the non-transitory computer-readable recording medium records therein a computer program to be executed by a computer embedded in an image forming apparatus to realize a first user interface operating on a first platform and a second user interface operating on a second platform and capable of being customized by a user, to cause the computer to perform the steps of a) determining recognition information which is information to be used for recognizing an instruction content given by a user operation, among action instruction information based on the user operation and information on an operation position of the user operation, which are two different types of information, when the user operation is performed in a customized screen of the second user interface, b) converting operation position information in the second user interface into operation element information which is information of a corresponding operation element in a corresponding operation screen of the first user interface when the information on the operation position of the user operation is determined as the recognition information, and c) recognizing the instruction content given by the user operation on the basis of at least one of the action instruction information and the operation element information.
According to a third aspect of the present invention, the non-transitory computer-readable recording medium records therein a computer program to be executed by a computer embedded in an image forming apparatus to realize a first user interface operating on a first platform and a second user interface operating on a second platform and capable of being customized by a user, to cause the computer to perform the steps of a) determining information to be transferred from the second platform on which the second user interface operates to the first platform on which the first user interface operates, among information on an operation position of a user operation and action instruction information based on the user operation, which are two different types of information, when the user operation is performed in a customized screen of the second user interface, b) converting operation position information indicating the operation position of the user operation in the second user interface into operation element information on a corresponding operation screen in the first user interface and generating the operation element information as the information on the operation position of the user operation, when the information on the operation position of the user operation is determined to be transferred to the first platform, and c) transferring at least one of the action instruction information and the operation element information obtained after conversion in the step b), which is the information determined in the step a), from the second platform to the first platform, as recognition information which is information to be used in the first user interface for recognizing an instruction content given by the user operation performed in the second user interface.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
Hereinafter, the preferred embodiments of the present invention will be described with reference to the accompanying drawings.
<1-1. Overall Configuration>
The constituent elements 10 and 70 in the present system 1 are communicably connected to each other via a network 108. The network 108 includes a LAN (Local Area Network), the internet, and the like. The connection to the network 108 may be wired or wireless.
In the client computer 70, an application software program (hereinafter, also referred to simply as an "application") is installed. In more detail, an application for generating a user interface (UI) (in more detail, a customized screen) to be displayed on a touch panel 25 (this application is also referred to as a UI builder (customized screen generation application)) and the like are installed. By using the UI builder, a user can generate a customized screen. Further, when data and the like of the customized screen are transmitted to the image forming apparatus 10 by using the UI builder, the user can use the customized screen in the image forming apparatus 10.
<1-2. Constitution of Image Forming Apparatus>
The MFP 10 is an apparatus (also referred to as a multifunction machine) having a scanner function, a copy function, a facsimile function, a box storage function, and the like. Specifically, as shown in the functional block diagram, the MFP 10 comprises an image reading part 2, a printing part 3, a communication part 4, a storage part 5, an operation part 6, a controller 9, and the like, and implements the various functions by operating these constituent parts in combination.
The image reading part 2 is a processing part which optically reads (in other words, scans) an original manuscript placed on a predetermined position of the MFP 10 and generates image data of the original manuscript (also referred to as an “original manuscript image” or a “scan image”). The image reading part 2 is also referred to as a scanning part.
The printing part 3 is an output part which prints out an image to various media such as paper on the basis of the data on an object to be printed.
The communication part 4 is a processing part capable of performing facsimile communication via public networks or the like. Further, the communication part 4 is also capable of performing communication (network communication) via a communication network.
The storage part 5 is a storage unit such as a hard disk drive (HDD) or the like.
The operation part 6 comprises an operation input part 6a for receiving an operation input which is given to the MFP 10 and a display part 6b for displaying various information thereon.
The MFP 10 is provided with a substantially plate-like operation panel part 6c. The operation panel part 6c has the touch panel 25, which serves as a part of the operation input part 6a and also serves as a part of the display part 6b.
The controller 9 is a control unit for generally controlling the MFP 10. The controller 9 is a computer system which is embedded in the MFP 10 and comprises a CPU, various semiconductor memories (RAM and ROM), and the like. The controller 9 causes the CPU to execute a predetermined software program (hereinafter, also referred to simply as a program) stored in the ROM (e.g., EEPROM (registered trademark)), to thereby implement various processing parts. Further, the program (in more detail, a group of program modules) may be installed in the MFP 10 via the network. Alternatively, the program may be recorded in one of various portable recording media (in other words, various non-transitory computer-readable recording media), such as a USB memory or the like, and read out from the recording medium to be installed in the MFP 10.
In the MFP 10 (in more detail, in the controller 9), two platforms, i.e., a platform P1 for controlling the MFP (standard platform) and an IWS platform P2, are constructed.
The platform for controlling the MFP (standard platform) P1 is a platform for controlling various operations of the MFP.
The IWS platform P2 is a platform capable of transmitting/receiving information between a web server (internal server) provided inside the MFP and a web browser provided inside the MFP. The web browser displayed on the touch panel 25 transmits, to the web server, various information (input information) (information of a touch operation position, and the like) received by an input operation to the touch panel 25, and the web server performs a process (a setting process, a screen changing process, and the like) based on the various information.
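As a purely illustrative sketch of this browser-to-internal-web-server exchange, the following minimal Python server accepts touch-position information posted by the browser and returns an acknowledgment; the endpoint path "/touch" usage, the JSON field names, and the port are assumptions made only for this sketch and are not part of any actual MFP interface.

```python
# Illustrative sketch of the exchange described above: the built-in web browser
# posts touch-position information to the internal web server of the IWS
# platform, which then runs its own process (setting, screen changing, and the
# like). All names here are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class IwsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        x, y = payload.get("x"), payload.get("y")   # touch operation position
        # ...the setting process / screen-changing process would run here...
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps({"received": [x, y]}).encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), IwsHandler).serve_forever()
```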
Further, the platforms P1 and P2 are each constructed as a set of program modules.
Furthermore, on the two platforms P1 and P2, the user interfaces UI1 and UI2 operate, respectively. The user interfaces UI1 and UI2 are each a user interface implemented by software.
On the platform (standard platform) P1 for controlling the MFP, a first user interface (also referred to as a “standard user interface”) UI1 operates. In the standard user interface UI1, various operation screens (MFP panel screens) which are generated for the MFP 10 in advance are (selectively) displayed.
On the other hand, on the IWS platform P2, a second user interface (IWS user interface) UI2 operates. The IWS user interface UI2 has a customized screen (also referred to as a "my panel screen"). In the customized screen, at least one display element (also referred to as an "operation element (manipulation element)") selected from among a plurality of display elements (e.g., display buttons and the like) of the first user interface UI1 can be arranged in accordance with the user's preference (customizably arranged). In the IWS user interface UI2, a customized screen for each user is displayed on the touch panel 25, and an operation on the MFP 10 is performed by using the customized screen.
Further, in the UI builder executed by the client computer 70, the customized screen (screen data and the like of the customized screen) is generated in accordance with the operation of the user, and an application P11 for the MFP 10 (also referred to as an "application using the customized screen") is prepared. Then, the application P11 using the customized screen, the screen data of the customized screen, an API information table 510 (described later), a coordinate conversion table 520 (described later), and the like are transmitted from the client computer 70 (UI builder) to the MFP 10. When the application P11 using the customized screen is executed in the MFP 10 by using the screen data, the data tables 510 and 520, and the like, the user can use the customized screen in the MFP 10.
Specifically, the controller 9 executes the program to thereby implement various processing parts including an input control part 11, a display control part 12, a communication control part 13, a determination part 14, an information transmitting/receiving part 15, a conversion part 16, and a recognition part 17.
The input control part 11 is a control part for controlling an operation input to the operation input part 6a (the touch panel 25 or the like). For example, the input control part 11 controls an operation for receiving an operation input (a specification input from the user, or the like) to an operation screen displayed on the touch panel 25.
The display control part 12 is a processing part for controlling a display operation on the display part 6b (the touch panel 25 or the like). The display control part 12 displays the operation screen or the like for operating the MFP 10 on the touch panel 25.
The communication control part 13 is a processing part for controlling a communication operation with other apparatus(es) (the client computer 70 or/and the like) in cooperation with the communication part 4 and the like. The communication control part 13 has a transmission control part for controlling transmission of various data and a reception control part for controlling reception of various data.
When a user operation (user manipulation) is performed in the customized screen of the IWS user interface UI2, the determination part 14 serves as a processing part for determining information (hereinafter, also referred to as "recognition information") to be used for recognizing an instruction content given by the user operation. In other words, the determination part 14 is a processing part for determining information (hereinafter, also referred to as "transmission/reception target information") to be transferred to the standard platform P1 when the user operation is performed in the IWS user interface UI2. The recognition information (or the transmission/reception target information) is determined, for example, from among two types of information (described later). One of the two types of information is information on an operation position (manipulation position) of the user operation (e.g., operation position information indicating the operation position (coordinate information of a touch position, or the like)). The other type of information is action instruction information, i.e., information instructing an action of the MFP 10 based on the user operation (e.g., an action instruction code (in more detail, an action instruction code formed by using an API)).
The information transmitting/receiving part 15 is a processing part for transferring the information determined by the determination part 14 from the IWS platform P2 to the standard platform P1. In other words, the information transmitting/receiving part 15 is a processing part for transferring the recognition information (in more detail, the information to be used for causing the standard user interface UI1 to recognize the instruction content given by the user operation in the IWS user interface UI2) to the standard platform P1.
The conversion part 16 is a processing part for converting the operation position information in the IWS user interface UI2 into operation element information of a corresponding operation screen (information of a corresponding operation element in the corresponding operation screen, or the like) in the standard user interface UI1. The operation element information obtained after the conversion includes, for example, a screen ID of the corresponding operation screen, a representative position (center position or the like) of the corresponding operation element (corresponding display element) (a display button or the like), and the like.
The recognition part 17 is a processing part for recognizing the instruction content (instruction content intended by the user) given by the user operation in the IWS user interface UI2, on the basis of the information transferred from the IWS platform P2 to the standard platform P1, or the like.
<1-3. Constitution of User Interfaces UI1 and UI2>
Next, the two types of user interfaces UI1 and UI2 will be described.
In the screen 210 of the standard user interface UI1 (a basic operation screen relating to the copy function), various setting buttons, such as a button 212 for setting "Full Color Copy" and a button 213 for setting "Monochrome Copy", are provided.
Further, when the user intends to perform an "Application Setting" process (in more detail, for example, setting of "Frame Erase"), the user presses an application-setting button 219 disposed on the lower right in the screen 210. In response to this pressing operation, the screen 220 (an application setting screen including a frame-erase button 225) is displayed on the touch panel 25. When the user further presses the frame-erase button 225 in the screen 220, the screen 230 (a detailed setting screen for "Frame Erase") is displayed on the touch panel 25.
On the other hand, the IWS user interface UI2 provides a customized screen 310 in which a plurality of software buttons (e.g., buttons 311 to 315) are arranged.
In this customized screen 310, the user can arrange, for example, buttons which he uses with relatively high frequency. When the user presses the frame-erase button 313, for example, the screen 230 can be immediately displayed on the touch panel 25.
If the same setting process is performed by using the above-described standard user interface UI1, it is necessary for the user to take a procedure of looking for the application-setting button 219 in the screen 210, pressing the button 219 to display the screen 220, looking for the frame-erase button 225 in the screen 220, and then pressing the button 225 to display the screen 230.
In contrast to the above case, when the customized screen 310 is used, the user can immediately display the screen 230 and perform the setting of "Frame Erase" with a smaller number of operations.
Actually, this customized screen 310 is a screen operating on the IWS user interface UI2. As described earlier, there are some buttons (functions) which cannot be used in the conventional IWS user interface. As to the “Frame Erase” button (function), for example, no API (interface between the two platforms) corresponding to the button (in detail, the function assigned to the button) is prepared, and therefore the “Frame Erase” button cannot be used in the conventional IWS user interface.
On the other hand, in the present preferred embodiment, when the button for which no corresponding API (interface between the platforms P1 and P2) is prepared (the button for which the corresponding API is undefined) is pressed, touch coordinates in the customized screen 310 are transferred from the IWS platform P2 to the standard platform P1. Then, the standard platform P1 uses the coordinate conversion table 520 to convert the touch coordinates in the IWS user interface UI2 into the operation element information in the standard user interface UI1. As described above, the operation element information obtained after the conversion includes, for example, the screen ID of the corresponding operation screen, the representative position (center position or the like) of the corresponding display element (the display button or the like), and the like. Then, on the basis of the operation element information obtained after the conversion, the standard platform P1 understands (recognizes) the instruction content given by the user operation in the IWS user interface UI2. With this method, it is not necessary to additionally define the API, and the standard user interface UI1 can understand the instruction content of the user instruction. Hereinafter, such an aspect of the present invention will be described in detail.
<1-4. Operation>
<Generation of Customized Screen by UI Builder of Computer 70, Etc.>
First, the customized screen 310 is generated by using the UI builder of the client computer 70 in accordance with the user operation.
It is assumed herein that respective corresponding APIs for some buttons 311 (K1), 312 (K2), and 315 (K5) among the five software buttons (software keys) 311 to 315 arranged in the customized screen 310 are defined in advance, whereas no corresponding APIs are defined for the remaining buttons 313 (K3) and 314 (K4).
In the UI builder, for example, in accordance with the user operation, respective positions (arrangement positions) of the buttons in the customized screen 310 are specified. Further, the user of the UI builder specifies buttons to be associated with the buttons in the customized screen 310, among a plurality of buttons in a plurality of operation screens of the standard user interface UI1. For example, the user specifies the button 212 in the screen 210 as the button corresponding to the button 311, the button 213 in the screen 210 as the button corresponding to the button 312, and the button 225 in the screen 220 as the button corresponding to the button 313.
Then, the UI builder determines whether or not the APIs corresponding to the specification target buttons (212, 213, 225, and the like) (in other words, the specification source buttons (311, 312, 313, and the like)) are defined. Specifically, the UI builder determines whether or not the instruction content given by the button operation on each specification target (in other words, the button operation on the corresponding specification source) corresponds to a defined API. Then, the UI builder registers (stores) the content based on the determination result into the API information table 510.
In more detail, in the API information table 510, arrangement information (herein, the upper left coordinates and the lower right coordinates) of each of the buttons 311 to 315 (K1 to K5) in the customized screen 310 is stored.
Further, in the API information table 510, information (i.e., “Defined/Not”) indicating whether the API (corresponding API) corresponding to each button is defined or not is also stored. Further, when the corresponding API is defined, the corresponding API itself is associated with the button and stored therein.
In other words, in the API information table 510, it is defined whether or not the user operation on each button is assigned to any action instruction code in advance. Further, when the user operation on a button is assigned to an action instruction code in advance, the action instruction code (corresponding API) to which the user operation is assigned is also associated with the button and stored therein. Specifically, in the API information table 510, defined is a correspondence between the display position (arrangement position) of each of some display elements (buttons and the like) in the IWS user interface UI2 and the action instruction code (action instruction command) corresponding to the display element.
As to the button 311, for example, the information indicating that the corresponding API is present (“Defined”) and the corresponding API itself (IWS_set_color_copy), being associated with the arrangement information (the upper left coordinates (XS1, YS1) and the lower right coordinates (XE1, YE1)) of the button 311 (K1), are registered.
Further, as to the button 312, the information indicating that the corresponding API is present (“Defined”) and the corresponding API itself (IWS_set_mono_copy), being associated with the arrangement information (the upper left coordinates (XS2, YS2) and the lower right coordinates (XE2, YE2)) of the button 312 (K2), are registered.
Furthermore, as to the button (start button) 315, the information indicating that the corresponding API is present (“Defined”) and the corresponding API itself (IWS_start_button_on), being associated with the arrangement information (the upper left coordinates (XS5, YS5) and the lower right coordinates (XE5, YE5)) of the button 315 (K5), are registered.
On the other hand, as to the button 313, the information indicating that the corresponding API is not present (“Not”), being associated with the arrangement information (the upper left coordinates (XS3, YS3) and the lower right coordinates (XE3, YE3)) of the button 313 (K3), is registered.
Similarly, as to the button 314, the information indicating that the corresponding API is not present (“Not”), being associated with the arrangement information (the upper left coordinates (XS4, YS4) and the lower right coordinates (XE4, YE4)) of the button 314 (K4), is registered.
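To make the structure of the API information table 510 concrete, the following is a minimal Python sketch. The numeric coordinates are placeholders for the symbolic values (XSn, YSn) and (XEn, YEn) used above, and the record layout and the helper lookup_api() are assumptions made only for illustration; the API names follow the examples in the text.

```python
# Sketch of the API information table 510: each entry holds the arrangement
# information (upper left / lower right coordinates) of a button in the
# customized screen 310 and, when defined, the corresponding API (action
# instruction code).
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApiTableEntry:
    key: str                       # button identifier (K1, K2, ...)
    top_left: tuple[int, int]      # (XSn, YSn)
    bottom_right: tuple[int, int]  # (XEn, YEn)
    api: Optional[str]             # corresponding API, or None when "Not" defined

API_INFORMATION_TABLE_510 = [
    ApiTableEntry("K1", (10, 10), (110, 60), "IWS_set_color_copy"),    # button 311
    ApiTableEntry("K2", (120, 10), (220, 60), "IWS_set_mono_copy"),    # button 312
    ApiTableEntry("K3", (10, 80), (110, 130), None),                   # button 313 (undefined)
    ApiTableEntry("K4", (120, 80), (220, 130), None),                  # button 314 (undefined)
    ApiTableEntry("K5", (230, 80), (330, 130), "IWS_start_button_on"), # button 315
]

def lookup_api(x: int, y: int) -> Optional[ApiTableEntry]:
    """Return the entry whose arrangement area contains the touch position (x, y)."""
    for entry in API_INFORMATION_TABLE_510:
        (xs, ys), (xe, ye) = entry.top_left, entry.bottom_right
        if xs <= x <= xe and ys <= y <= ye:
            return entry
    return None
```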
Further, in the coordinate conversion table 520, for each button in the customized screen 310 for which the corresponding API is not defined, operation element information of the corresponding operation screen in the standard user interface UI1 is stored in association with the arrangement information of the button.
Specifically, as to the button 313 (K3), for example, the operation element information of the corresponding operation screen in the standard user interface UI1, being associated with the arrangement information (the upper left coordinates (XS3, YS3) and the lower right coordinates (XE3, YE3)) of the button 313 (K3) in the customized screen 310, is stored (defined). In more detail, the information of the button 225 in the standard user interface UI1, which corresponds to the instruction content given by the button 313 in the customized screen 310, is stored in the coordinate conversion table 520. More specifically, the screen ID "011" (screen identification information) of the screen 220 having the button 225 (the screen 220 to which the button 225 belongs) and the representative position (herein, the center position (XC3, YC3)) of the button 225 in the screen 220 are stored, being associated with the arrangement information of the button 313 (K3).
As to the button 314, contents of the same type are stored.
Further, as described later, this coordinate conversion table 520 serves as a conversion table used for converting the operation position information (coordinate position or the like) in the IWS user interface UI2 into the operation element information (the screen ID of the screen including the corresponding button and the representative position coordinates of the corresponding button) of the corresponding operation screen in the standard user interface UI1.
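The coordinate conversion table 520 and the conversion it supports can likewise be sketched in Python. Concrete numbers are placeholders for the symbolic values in the text; only the screen ID "011" and the button correspondences follow the description above. The convert() helper returns a list because, as used in the third preferred embodiment, one touch position may correspond to plural operation elements.

```python
# Sketch of the coordinate conversion table 520: for a button of the customized
# screen 310 whose corresponding API is undefined, the table associates the
# button's arrangement area with operation element information of the standard
# user interface UI1 (the screen ID of the corresponding operation screen and
# the representative (center) position of the corresponding button).
from dataclasses import dataclass

@dataclass
class ConversionEntry:
    top_left: tuple[int, int]            # (XS3, YS3): arrangement in screen 310
    bottom_right: tuple[int, int]        # (XE3, YE3)
    screen_id: str                       # e.g., "011" for the screen 220
    representative_pos: tuple[int, int]  # e.g., (XC3, YC3): center of button 225

COORDINATE_CONVERSION_TABLE_520 = [
    ConversionEntry((10, 80), (110, 130), "011", (400, 300)),  # button 313 -> button 225
    # ...an entry of the same type would be registered for the button 314...
]

def convert(x: int, y: int) -> list[tuple[str, tuple[int, int]]]:
    """Convert operation position information (Xt, Yt) into operation element
    information (screen ID and representative position) of the standard UI."""
    return [(e.screen_id, e.representative_pos)
            for e in COORDINATE_CONVERSION_TABLE_520
            if e.top_left[0] <= x <= e.bottom_right[0]
            and e.top_left[1] <= y <= e.bottom_right[1]]
```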
The UI builder generates these pieces of information (the API information table 510, the coordinate conversion table 520, and the like) and transmits the information, together with the application P11 using the customized screen, to the MFP 10 (Step S2).
When the MFP 10 receives the application P11 using the customized screen, the API information table 510, the coordinate conversion table 520, the screen data of the customized screen, and the like from the client computer 70, the MFP 10 stores these in the storage part 5.
Further, after that, in a state where the application P11 using the customized screen is started up (described later), the coordinate conversion table 520 is held in the standard platform P1 (so as to be usable by the standard platform P1), and the API information table 510 is held in the IWS platform P2 (so as to be usable by the IWS platform P2).
<Operation in MFP 10>
In the MFP 10, at a certain timing (in accordance with a predetermined operation by the user, immediately after the power-on, or the like), the IWS platform P2 is started up by the active standard platform P1. Further, after that, in accordance with the user operation or the like, the application P11 using the customized screen is started up on the IWS platform P2 (Step S3).
After the application P11 using the customized screen is started up, the following process operation is performed.
First, in the IWS platform P2, the customized screen 310 which is generated by the UI builder in advance is displayed on the touch panel 25 through the application P11 using the customized screen (in more detail, the web browser thereof). Then, when the user operation is performed on the customized screen 310, the user operation is detected by the touch panel 25 and the detailed information is transmitted from the application P11 using the customized screen to the IWS platform P2 (Step S11). Specifically, the IWS platform P2 acquires the information (operation position information) of the operation position of the user operation in the customized screen 310 (for example, the coordinate information of the touch position (press position) of the touch operation (pressing operation)).
Next, in the IWS platform P2, the process of Steps S11 to S16 is performed as described below.
Specifically, when the IWS platform P2 determines, with reference to the API information table 510, that the instruction content given by the user operation in the IWS user interface UI2 (Step S11) is assigned to specific action instruction information in advance, the IWS platform P2 determines that the specific action instruction information should be transferred to the standard platform P1 (Step S12). In other words, the specific action instruction information is determined as the data (recognition information) to be transferred. In this case, the process goes to Step S13, and the specific action instruction information (in more detail, the specific action instruction code (formed, for example, by using a specific API designed for a specific action instruction)) is selected as the data to be transferred. Then, the process goes to Step S16. In Step S16, the specific action instruction information is transferred from the IWS platform P2 to the standard platform P1. In other words, the instruction content given by the user operation is directly transferred to the standard platform P1 in the form of "action instruction information".
When a pressing operation (touch operation) on the full-color copy button 311 (K1) is performed in the customized screen 310, for example, the following operation is performed. First, the IWS platform P2 determines, with reference to the API information table 510, that the pressing operation on the button 311 is assigned in advance to the specific API ("IWS_set_color_copy"). The IWS platform P2 therefore determines the action instruction code using this specific API as the information (recognition information) to be transferred, and transfers the action instruction code from the IWS platform P2 to the standard platform P1 (Steps S13 and S16).
As another case, also when a pressing operation (touch operation) on the start button 315 (K5) is performed in the customized screen 310, the same type of operation is performed. First, the IWS platform P2 determines, with reference to the API information table 510, that the pressing operation on the button 315 is assigned in advance to the specific API ("IWS_start_button_on"). The IWS platform P2 therefore determines the action instruction code using this specific API as the information to be transferred, and transfers the action instruction code to the standard platform P1 (Steps S13 and S16).
On the other hand, when the user operation in the IWS user interface UI2 is not assigned to any action instruction code in advance, the IWS platform P2 determines that the information on the operation position of the user operation should be transferred to the standard platform P1 (Step S12). In other words, the information on the operation position of the user operation is determined as the data (recognition information) to be transferred. Then, the information on the operation position of the user operation (herein, the operation position information (in more detail, the coordinate information of the touch position)) is selected as the data to be transferred (Step S14), and this information is transferred from the IWS platform P2 to the standard platform P1 (Step S16). In other words, the instruction content given by the user operation is, so to speak, indirectly transferred from the IWS platform P2 to the standard platform P1 in the form of "information on the operation position of the user operation (herein, operation position information)".
When a pressing operation (touch operation) on the frame-erase button 313 (K3) is performed in the customized screen 310, for example, the following operation is performed. First, the IWS platform P2 determines, with reference to the API information table 510, that the pressing operation on the button 313 is not assigned to any action instruction code in advance (i.e., that the corresponding API is undefined). The IWS platform P2 therefore determines the touch coordinates (Xt, Yt) in the customized screen 310 as the information to be transferred, and transfers the touch coordinates (Xt, Yt) to the standard platform P1 (Steps S14 and S16).
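The determination and transfer on the IWS platform P2 side (Steps S12 to S16) can be summarized by the following sketch, which reuses the lookup_api() helper from the sketch of the API information table 510 above; the function names on_touch_in_customized_screen() and transfer_to_standard_platform() and the payload format are hypothetical, introduced only to make the control flow concrete.

```python
# Sketch of Steps S12 to S16 of the first preferred embodiment (IWS platform P2 side).
def on_touch_in_customized_screen(x: int, y: int) -> None:
    entry = lookup_api(x, y)                       # Step S12: consult table 510
    if entry is not None and entry.api is not None:
        # Step S13: the defined API (action instruction code) is the data to transfer.
        payload = ("action_instruction", entry.api)
    else:
        # Step S14: the operation position information (touch coordinates) is the data.
        payload = ("operation_position", (x, y))
    transfer_to_standard_platform(payload)         # Step S16

def transfer_to_standard_platform(payload) -> None:
    # Placeholder for the actual transfer from the IWS platform P2 to the
    # standard platform P1.
    print("transferred to standard platform P1:", payload)
```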
Next, in the standard platform P1, the process of Steps S17 to S19 is performed. Specifically, the standard platform P1 recognizes the instruction content given by the user operation in the IWS user interface UI2 on the basis of the information transferred from the IWS platform P2, and performs a process in accordance with the instruction content.
In more detail, when the information transferred from the IWS platform P2 is the "action instruction information", the process goes from Step S17 to Step S19, and the standard platform P1 recognizes the instruction content given by the user operation (Step S11) on the basis of the action instruction information.
When the action instruction information using the specific API (“IWS_set_color_copy”) is transferred from the IWS platform P2 to the standard platform P1, for example, the following operation is performed. Specifically, the standard platform P1 recognizes, on the basis of the specific API, that the instruction content given by the user operation (Step S11) indicates the setting of “Full Color Copy” (the instruction content indicates that the mode relating to “Color” of the copy function should be set to the “Full Color Copy” mode). Then, on the basis of the instruction content, the standard platform P1 performs a “Full Color Copy” setting process (a process of setting the mode relating to “Color” of the copy function to the “Full Color Copy” mode).
As another case, when the action instruction information (start instruction information) using the specific API (“IWS_start_button_on”) is transferred from the IWS platform P2 to the standard platform P1, the following operation is performed. Specifically, the standard platform P1 recognizes, on the basis of the specific API (specific action instruction code), that the instruction content given by the user operation (Step S11) indicates the “start instruction”. Then, on the basis of the instruction content, the standard platform P1 performs the process (process of starting a copy operation) based on the “start instruction”.
On the other hand, when the information transferred from the IWS platform P2 is the "information on the operation position (operation position information)", the process goes to Step S18. In Step S18, the standard platform P1 converts the operation position information into the operation element information on the corresponding operation screen in the standard user interface UI1 by using the coordinate conversion table 520, and the process then goes to Step S19.
When the coordinates (Xt, Yt) of a position inside the button 313 (K3) in the customized screen 310 are transferred as the operation position information (touch coordinates), for example, the following operation is performed. Specifically, on the basis of the coordinate conversion table 520, the operation position information (touch coordinate position (Xt, Yt)) is converted into the operation element information described below.
More specifically, the standard platform P1 determines, with reference to the coordinate conversion table 520, that the coordinates (Xt, Yt) fall within the range specified by the arrangement information (the upper left coordinates (XS3, YS3) and the lower right coordinates (XE3, YE3)) of the button 313 (K3), and converts the coordinates (Xt, Yt) into the operation element information associated with the button 313 (K3), i.e., the screen ID "011" of the corresponding operation screen 220 and the representative position (XC3, YC3) of the corresponding button 225 in the screen 220.
In other words, the operation position information (Xt, Yt) of the user operation on a specific display element (for example, the display button 313 in the customized screen 310) in the IWS user interface UI2 is converted into the operation element information (in more detail, information of a specific corresponding display element corresponding to the specific display element (target element of the user operation)) in the corresponding operation screen (e.g., the screen 220). The corresponding operation screen is a screen (e.g., the screen 220) including the corresponding display element (e.g., the display button 225) corresponding to the specific display element (the display button 313 or the like).
Then, on the basis of the operation element information obtained after the conversion, the standard platform P1 recognizes that the instruction content given by the user operation (Step S11) is the same as the instruction content given by the pressing operation on the position (XC3, YC3) in the screen having the screen ID "011" (the screen 220), i.e., the pressing operation on the corresponding button 225, and performs a process in accordance with the recognized instruction content (Step S19).
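The standard platform P1 side of this flow (Steps S17 to S19) can be sketched as follows, reusing the convert() helper from the sketch of the coordinate conversion table 520 above; the recognizer function names are hypothetical stand-ins for the MFP's internal setting and screen-changing processes.

```python
# Sketch of Steps S17 to S19 of the first preferred embodiment (standard platform P1 side).
def on_transfer_from_iws(payload) -> None:
    kind, value = payload
    if kind == "action_instruction":               # Step S17 -> Step S19
        recognize_and_execute_api(value)
    else:                                          # Step S17 -> Step S18
        for screen_id, position in convert(*value):
            # Step S19: treat the result as a press at `position` in the screen
            # `screen_id` of the standard user interface UI1
            # (e.g., the button 225 in the screen 220).
            recognize_and_execute_press(screen_id, position)

def recognize_and_execute_api(api: str) -> None:
    print("recognized instruction via API:", api)  # e.g., the "Full Color Copy" setting

def recognize_and_execute_press(screen_id: str, position) -> None:
    print(f"recognized press at {position} in screen {screen_id}")  # e.g., "Frame Erase"
```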
After that, when it is determined in Step S20 that the process should continue, the process goes back to Step S11, and the above-described operation (Steps S11 to S19) is repeatedly performed. On the other hand, when it is determined in Step S20 that the process should be ended, the process is ended.
As described above, in the operation of the first preferred embodiment, when the user operation is performed in (the customized screen 310 of) the IWS user interface UI2 (Step S11), it is determined which of the two types of information, i.e., the operation position information on the operation position of the user operation and the action instruction information based on the user operation, should be transferred to the standard platform P1 (Step S12). Then, in accordance with the determination result, either one of the operation position information and the action instruction information is transferred from the IWS platform P2 to the standard platform P1 (Steps S13, S14, and S16).
With this operation, it is possible to eliminate any limitation caused by whether or not an action instruction code for cooperation (an API for cooperation, or the like) between the two user interfaces UI1 and UI2 exists, and to arrange relatively diverse buttons in the customized screen 310 of the IWS user interface UI2.
In more detail, when the “action instruction information” is transferred from the IWS platform P2, the standard platform P1 recognizes the instruction content given by the user operation on the basis of the action instruction information (API) (Steps S17 and S19). Therefore, when the API designed to indicate the instruction content given by the user operation is defined in advance, the instruction content is directly transmitted to the standard platform P1 by using the defined API (action instruction information). That is to say, the standard platform P1 can directly recognize the instruction content given by the user operation. In other words, the button for which the corresponding API is defined can be disposed in the customized screen 310.
On the other hand, when the "operation position information" is transferred from the IWS platform P2, the operation position information is converted, by the conversion process using the coordinate conversion table 520, into the operation element information (the representative position of the corresponding button, the screen ID of the screen to which the corresponding button belongs, and the like) on the corresponding operation screen in the standard user interface UI1 (Steps S17 and S18). Then, the standard platform P1 recognizes the instruction content given by the user operation (Step S11) in the IWS user interface UI2, on the basis of the operation element information obtained after the conversion. Therefore, even when a corresponding API designed to indicate the instruction content given by the user operation is not defined, the standard platform P1 can recognize the instruction content given by the user operation, on the basis of the position information of the touch operation on the button in the customized screen 310, and the like. In other words, relatively diverse buttons for each of which a corresponding API is not defined can also be arranged in the customized screen 310.
The second preferred embodiment is a variation of the first preferred embodiment. Hereinafter, description will be made, centering on the difference between the first and second preferred embodiments.
In the above-described first preferred embodiment, when there is no defined API corresponding to the action instruction given by the user operation (in other words, when the information on the operation position of the user operation is determined as the recognition information), the operation position information of the user operation is transferred from the IWS platform P2 to the standard platform P1. Then, the standard platform P1 converts the operation position information into the operation element information on the corresponding operation screen in the standard user interface UI1 by using the coordinate conversion table 520.
On the other hand, in the second preferred embodiment, when there is no defined API corresponding to the action instruction given by the user operation, the IWS platform P2 converts the operation position information of the user operation into the operation element information on the corresponding operation screen in the standard user interface UI1 by using the coordinate conversion table 520. After that, the operation element information obtained after the conversion is transferred from the IWS platform P2 to the standard platform P1. In other words, before transferring the information from the IWS platform P2 to the standard platform P1, the IWS platform P2 performs the conversion process.
Also in the second preferred embodiment, first, the customized screen generation process and the like by the UI builder of the computer 70 are performed in the same manner as in the first preferred embodiment.
After that, the application P11 using the customized screen is started up in the MFP 10. In the second preferred embodiment, however, in a state where the application P11 using the customized screen is started up (described later), both the coordinate conversion table 520 and the API information table 510 are held in the IWS platform P2 (so as to be usable by the IWS platform P2).
Specifically, in the second preferred embodiment, after the application P11 using the customized screen is started up, a process operation similar to that of the first preferred embodiment is performed. In more detail, the process of Steps S31 to S36 is first performed mainly by the IWS platform P2.
In Step S32, however, the IWS platform P2 determines, with reference to the API information table 510, which of the two types of information, i.e., the information on the operation position of the user operation (in more detail, the information obtained after the conversion process using the coordinate conversion table 520) and the action instruction information based on the user operation, should be transferred to the standard platform P1 of the standard user interface UI1. In other words, the IWS platform P2 determines one of the two types of information, i.e., the action instruction information based on the user operation and the information on the operation position of the user operation, as the recognition information.
Further, in the second preferred embodiment, when it is determined that the information on the operation position of the user operation should be used as the recognition information, the IWS platform P2 converts the operation position information of the user operation into the operation element information on the corresponding operation screen in the standard user interface UI1 by using the coordinate conversion table 520 (Steps S34 and S35).
When the coordinates (Xt, Yt) of a position inside the button 313 (K3) in the customized screen 310 are acquired as the operation position information (touch coordinates), for example, the operation position information (touch coordinate position (Xt, Yt)) is converted into the operation element information described below, on the basis of the coordinate conversion table 520.
More specifically, the IWS platform P2 determines, with reference to the coordinate conversion table 520, that the coordinates (Xt, Yt) fall within the range specified by the arrangement information of the button 313 (K3), and converts the coordinates (Xt, Yt) into the operation element information associated with the button 313 (K3), i.e., the screen ID "011" of the corresponding operation screen 220 and the representative position (XC3, YC3) of the corresponding button 225 in the screen 220.
When it is determined in Step S32 that the information on the operation position of the user operation should be transferred to the standard platform P1, in the next Step S36, the IWS platform P2 transfers the information on the operation position of the user operation (in more detail, the information obtained after the conversion (the above-described operation element information)) to the standard platform P1.
On the other hand, when it is determined in Step S32 that the action instruction information based on the user operation should be transferred to the standard platform P1, in Step S36, the IWS platform P2 transfers the action instruction information (action instruction code) to the standard platform P1, like in Step S16.
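The difference from the first preferred embodiment on the IWS platform P2 side (Steps S32 to S36) can be sketched as follows, again reusing lookup_api(), convert(), and transfer_to_standard_platform() from the earlier sketches; the function name and payload format remain illustrative assumptions.

```python
# Sketch of Steps S32 to S36 of the second preferred embodiment: the conversion
# with the coordinate conversion table 520 is performed on the IWS platform P2
# side before the transfer to the standard platform P1.
def on_touch_second_embodiment(x: int, y: int) -> None:
    entry = lookup_api(x, y)                             # Step S32: consult table 510
    if entry is not None and entry.api is not None:
        payload = ("action_instruction", entry.api)      # Step S33
    else:
        # Steps S34 and S35: convert on the IWS platform P2 side.
        payload = ("operation_element", convert(x, y))
    transfer_to_standard_platform(payload)               # Step S36
```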
After the above-described process of Steps S31 to S36 is performed mainly by the IWS platform P2, the standard platform P1 performs a process of Steps S37 to S39.
Specifically, first, the standard platform P1 recognizes the instruction content given by the user operation in the IWS user interface UI2 and performs the process in accordance with the instruction content, on the basis of the information transferred from the IWS platform P2 (Steps S37 to S39).
In more detail, when the information transferred from the IWS platform P2 is the "action instruction information", the process goes from Step S37 to Step S39, and the same process as that of Steps S17 and S19 in the first preferred embodiment is performed. Specifically, the standard platform P1 recognizes the instruction content given by the user operation (Step S31) on the basis of the action instruction information.
On the other hand, when the information transferred from the IWS platform P2 is the information on the operation position of the user operation (in more detail, the screen ID of the corresponding screen and the representative position information of the corresponding button in the screen) (the operation element information obtained after the conversion using the coordinate conversion table 520), the process goes to Step S39. In this case, in Step S39, the standard platform P1 recognizes the instruction content given by the user operation (Step S31) on the basis of the operation element information obtained after the conversion, which is transferred from the IWS platform P2.
When the coordinates (Xt, Yt) of the touch position inside the button 313 (K3) in the customized screen 310 are converted, in Step S35, into the operation element information including the screen ID "011" of the corresponding operation screen in the standard user interface UI1 and the representative coordinate position information "(XC3, YC3)" of the corresponding button in that screen, and then the operation element information is transferred from the IWS platform P2 to the standard platform P1 in Step S36, for example, the following operation is further performed. Specifically, the standard platform P1 recognizes, on the basis of the operation element information, that the instruction content given by the user operation (Step S31) is the same as the instruction content given by the pressing operation on the position (XC3, YC3) in the screen having the screen ID "011" (the screen 220), i.e., the pressing operation on the corresponding button 225. Then, the standard platform P1 performs a process in accordance with the recognized instruction content (Step S39).
After that, when it is determined in Step S40 that the process should continue, the process goes back to Step S31, and the above-described operation (Steps S31 to S39) is repeatedly performed. On the other hand, when it is determined in Step S40 that the process should be ended, the process is ended.
As described above, in the operation of the second preferred embodiment, when the user operation is performed in (the customized screen 310 of) the IWS user interface UI2 (Step S31), first, it is determined which of the two types of information, i.e., the information on the operation position of the user operation and the action instruction information based on the user operation, should be transferred to the standard platform P1 (Step S32). When it is determined that the information on the operation position should be transferred, the operation position information of the user operation in the IWS user interface UI2 is converted, by the conversion process using the coordinate conversion table 520, into the operation element information (the screen ID, the representative position of the corresponding button, and the like) on the corresponding operation screen in the standard user interface UI1 (Steps S34 and S35). After that, in accordance with the determination result in Step S32, either the operation element information or the action instruction information is transferred from the IWS platform P2 to the standard platform P1 (Steps S33 to S36).
With this operation, it is possible to eliminate any limitation caused by whether or not an action instruction code for cooperation (an API for cooperation, or the like) between the two user interfaces UI1 and UI2 exists, and to arrange relatively diverse buttons in the customized screen 310 of the IWS user interface UI2.
In more detail, when the “action instruction information” is transferred from the IWS platform P2, the standard platform P1 recognizes the instruction content given by the user operation on the basis of the action instruction information (API) (Steps S37 and S39). Therefore, when the API designed to indicate the instruction content given by the user operation is defined in advance, the instruction content is directly transmitted to the standard platform P1 by using the defined API (action instruction information). That is to say, the standard platform P1 can directly recognize the instruction content given by the user operation. In other words, the button for which the corresponding API is defined can be disposed in the customized screen 310.
On the other hand, when the "operation element information (after the conversion)" is transferred from the IWS platform P2, the standard platform P1 recognizes the instruction content given by the user operation (Step S31) in the IWS user interface UI2, on the basis of the operation element information obtained after the conversion. Therefore, even when a corresponding API designed to indicate the instruction content given by the user operation is not defined, the standard platform P1 can recognize the instruction content given by the user operation, on the basis of the position information of the touch operation on the button in the customized screen 310, and the like. In other words, relatively diverse buttons for each of which a corresponding API is not defined can also be arranged in the customized screen 310.
Further, though one aspect of the present invention in which the button 225 in the screen 220 is assigned to the button 313 in the customized screen 310 has been described in the above-described preferred embodiments, this is only one exemplary case, and the present invention is not limited to this.
There may be a case, for example, where the button 231 in the screen 230 is assigned to the button 313 in the customized screen 310 and, when the pressing operation (touch operation) on the button 313 is performed, the same setting process (i.e., the setting process of "Frame Erase") as that in response to the pressing operation on the button 231 is performed. Alternatively, there may be another case where the button 233 in the screen 230 is assigned to the button 313 in the customized screen 310 and, when the pressing operation (touch operation) on the button 313 is performed, the same setting process (i.e., the setting process of "Entire Frame" (a setting process "using the same set value (erase width) for top, bottom, left, and right")) as that in response to the pressing operation on the button 233 is performed. Further, as the "erase width", for example, a default value (10 mm or the like) may be used.
Though one aspect of the present invention in which a single instruction content is assigned to each button in the customized screen 310 has been described in the above-described preferred embodiments, the present invention is not limited to this, and a plurality of instruction contents may be assigned to a single button. For example, three setting processes, i.e., a setting process of "Full Color Copy", a setting process of "Frame Erase (Entire Frame by 10 mm (default value))", and a setting process of "Booklet", may be assigned to a single button 316 (K6) in the customized screen 310.
In the third preferred embodiment, the case where these three setting processes are assigned to the single button 316 in the UI builder will be described.
Also in the third preferred embodiment, basically the same process as that of the first preferred embodiment is performed. In the third preferred embodiment, however, when the pressing operation (touch operation) on the button 316 (K6) is performed in the customized screen 310, the IWS platform P2 first decomposes the content assigned to the button 316 into the plurality of instructions (the plurality of processes) (Step S51), and then performs the determination process and the subsequent processes for each of the instructions.
In more detail, the IWS platform P2 broadly classifies the plurality of instructions (the plurality of processes) assigned to the single button 316 (K6) into two groups: setting processes for which the corresponding APIs are already defined and setting processes for which the corresponding APIs are undefined. Then, for each of the two groups, the corresponding operation described below is performed.
As to the setting process in the case where the corresponding API is already defined, the process operations of Steps S12, S13, S16, S17, and S19 described in the first preferred embodiment are performed.
Specifically, first, on the basis of the API information table 510 (FIG. 16), it is determined that for the first setting process (the setting process on "Full Color Copy"), the corresponding API ("IWS_set_color_copy") is already defined (Step S12). Then, for the first setting process, the IWS platform P2 determines the action instruction code using the API as the information to be transferred to the standard platform P1, and transfers the action instruction code to the standard platform P1 (Steps S13 and S16). Receiving the action instruction code, the standard platform P1 recognizes that the instruction content given by the user operation includes the setting instruction of "Full Color Copy", and performs the "Full Color Copy" setting process (Steps S17 and S19).
On the other hand, as to the setting process in the case where the corresponding API is undefined, the process operations of Steps S12, S14, S16, S17, S18, and S19 described in the first preferred embodiment are performed.
Specifically, first, on the basis of the API information table 510 (FIG. 16), it is determined that for the second setting process (the setting process on “Frame Erase (Entire Frame by 10 mm (default value))”) and the third setting process (the setting process on “Booklet”), there is no defined API (Step S12). Then, for these two setting processes, the IWS platform P2 determines the touch coordinates (Xt, Yt) in the customized screen 310 as the information to be transferred to the standard platform P1, and transfers the touch coordinates (Xt, Yt) to the standard platform P1 (Steps S14 and S16).
Receiving the touch coordinates (Xt, Yt), the standard platform P1 determines that "the information on the operation position (in more detail, the operation position information)" is transferred from the IWS platform P2, and converts the information by using the coordinate conversion table 520 (Steps S17 and S18). Specifically, the touch coordinates (Xt, Yt) are converted into two pieces of operation element information, i.e., the operation element information on the button 231 (having the representative position (XC6, YC6)) in the screen with the screen ID "052" and the operation element information on the corresponding button (having the representative position (XC7, YC7)) in the screen with the screen ID "053".
Then, the standard platform P1 recognizes each of the two pieces of operation element information (Step S19). Specifically, on the basis of one piece of operation element information (the operation element information on the button 231 in the screen with the screen ID "052", which has the representative position (XC6, YC6)), the standard platform P1 recognizes that the instruction content given by the user operation includes the setting instruction of "Frame Erase". Further, on the basis of the other piece of operation element information (the operation element information on the button in the screen with the screen ID "053", which has the representative position (XC7, YC7)), the standard platform P1 recognizes that the instruction content given by the user operation includes the setting instruction of "Booklet".
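A rough sketch of this per-instruction handling is given below. The registration structure BUTTON_316_INSTRUCTIONS is an assumed, simplified representation of the workflow registered by the UI builder, and transfer_to_standard_platform() is the placeholder introduced in the earlier sketch; the plural conversion of the single touch position is then carried out by the coordinate conversion table 520 on the standard platform P1 side as described above.

```python
# Sketch of the third preferred embodiment: the instructions registered to the
# single button 316 are decomposed (Step S51), defined APIs are transferred as
# action instruction codes, and the touch coordinates are transferred for the
# instructions whose APIs are undefined.
BUTTON_316_INSTRUCTIONS = [
    {"name": "Full Color Copy", "api": "IWS_set_color_copy"},      # defined API
    {"name": "Frame Erase (Entire Frame by 10 mm)", "api": None},  # undefined
    {"name": "Booklet", "api": None},                              # undefined
]

def on_touch_button_316(x: int, y: int) -> None:
    instructions = BUTTON_316_INSTRUCTIONS          # Step S51: decompose the content
    for instruction in instructions:
        if instruction["api"] is not None:
            # Defined API: transferred directly as an action instruction code.
            transfer_to_standard_platform(("action_instruction", instruction["api"]))
    if any(i["api"] is None for i in instructions):
        # Undefined APIs: the touch coordinates are transferred once; the
        # coordinate conversion table 520 on the standard platform P1 side maps
        # them to the plural pieces of corresponding operation element
        # information ("Frame Erase" and "Booklet").
        transfer_to_standard_platform(("operation_position", (x, y)))
```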
With such operations, in the MFP 10, the plurality of processes (three setting processes) assigned to the single button 316 are performed automatically and successively in response to the pressing operation of the button 316. In more detail, the process indicated by one instruction (the full-color setting process) corresponding to the defined API and the processes indicated by two instructions ("Frame Erase" and "Booklet") not corresponding to any defined API are successively performed.
Thus, in the case where a plurality of instructions are assigned to the single button 316 in the IWS user interface UI2, the information to be transferred to the standard platform P1 (in other words, the recognition information) is determined for each of the plurality of instructions (Steps S51 and S12). Specifically, for the first instruction, the action instruction code (API or the like) is determined as the information to be transferred to the standard platform P1. On the other hand, for each of the second and third instructions, the information on the operation position of the user operation (touch coordinates) is determined as the information to be transferred to the standard platform P1. In other words, both the two kinds of information assigned to the single button 316 are transferred from the IWS platform P2 to the standard platform P1.
Then, on the basis of the information transferred for each of the plurality of instructions (in other words, the information determined as the recognition information for each of the plurality of instructions), the standard platform P1 recognizes the content of each of the plurality of instructions given by the user operation. In detail, when the standard platform P1 receives the action instruction code for one of the plurality of instructions, the standard platform P1 recognizes the instruction content given by the user operation on the basis of the action instruction code. Further, when the standard platform P1 receives the touch coordinates in the customized screen 310 for one of the plurality of instructions, the standard platform P1 recognizes the instruction content given by the user operation on the basis of the touch coordinates. In more detail, the standard platform P1 recognizes one or more instruction contents (for example, two instructions (“Frame Erase” and “Booklet”)) corresponding to one or more pieces of operation element information obtained after the conversion of the touch coordinates (operation position information), as some of the instruction contents given by the user operation.
With such operations, it is possible to collectively register a series of processes (also referred to as a workflow process) to a single button and perform the plurality of processes by pressing the single button. Especially, even in the case where the plurality of processes include a process relating to an undefined API, it is possible to perform the series of processes by pressing one button.
Further, especially in the case where the touch coordinate information is transferred from the IWS platform P2 to the standard platform P1, when a plurality of processes (e.g., a plurality of setting processes corresponding to undefined APIs) are assigned to the touch coordinates, the touch coordinates are converted into the plurality of pieces of corresponding operation element information by using the coordinate conversion table 520. Then, the standard platform P1 recognizes the contents of the plurality of processes on the basis of the plurality of pieces of operation element information.
With such operations, in the case of collectively registering a series of processes to a single button and performing the plurality of processes by pressing the single button, the plurality of processes can include even two or more processes relating to the undefined APIs.
Further, though the modification of the first preferred embodiment has been described in the third preferred embodiment, this is only one exemplary case.
For example, the same modification can be made on the second preferred embodiment. In this case, unlike in the third preferred embodiment, the IWS platform P2 may perform the conversion of the touch coordinates in the IWS user interface UI2 into the operation element information in the standard user interface UI1 by using the coordinate conversion table 520 (like in the second preferred embodiment) (see Step S35 and the like).
Specifically, as to one or more instructions relating to the undefined APIs among the plurality of instructions (the plurality of processes) obtained by the decomposition process of Step S51, the IWS platform P2 converts the operation position information (touch coordinates) of the user operation into one or more pieces of corresponding operation element information by using the coordinate conversion table 520.
Furthermore, the one or more pieces of operation element information are transferred from the IWS platform P2 to the standard platform P1. Then, on the basis of the received one or more pieces of operation element information (for example, the respective screen IDs of the plurality of corresponding operation screens relating to the plurality of operations, the respective representative positions of the plurality of corresponding buttons relating to the plurality of operations, and the like), the standard platform P1 recognizes the contents of the one or more processes. In other words, the one or more instruction contents corresponding to the one or more pieces of operation element information are recognized as some of the instruction contents given by the user operation. Then, on the basis of the recognition results, the standard platform P1 may perform the one or more processes.
Though the preferred embodiments of the present invention have been described above, the present invention is not limited to the above-described exemplary cases.
In the above-described preferred embodiments, for example, the method of transferring the information from the IWS platform P2 to the standard platform P1 is changed depending on whether or not the API is defined. Further, in the coordinate conversion table 520, only the conversion information on some buttons (the buttons corresponding to the undefined APIs) in the customized screen 310 is specified.
Not limited to the above cases, however, the transfer of the information from the IWS platform P2 to the standard platform P1 may always be performed through the conversion process using the coordinate conversion table 520, regardless of whether or not the API is defined. In that case, however, it is required that the coordinate conversion table 520 include not only the conversion information on some buttons (the buttons corresponding to the undefined APIs) in the customized screen 310 but also the conversion information on a relatively large number of buttons (e.g., all the buttons) relating to the customized screen 310. In terms of reducing the amount of information in the coordinate conversion table 520, the aspects shown in the above-described preferred embodiments are preferable.
While the invention has been shown and described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is therefore understood that numerous modifications and variations can be devised without departing from the scope of the invention.