The present disclosure relates to KVM systems and methods, and more particularly to a KVM system and method which enables a user to select text or alphanumeric information appearing in a video frame on a display associated with a KVM appliance, to convert the selected portion of information appearing in the video frame into a text output, and to copy the text output into other applications, documents or web pages.
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
The traditional hardware-based KVM (keyboard, video and mouse) redirection over IP method relies on capturing a video output signal from a target system, usually a target computer or server, repackaging and compressing the video output signal, and sending it back across an IP network to a client computer to display the screen content on the client computer's display screen. The client computer may be a desktop, laptop, tablet, smartphone, or any other form of personal computing device having a display screen or in communication with a display device. This traditional hardware-based KVM system does not make use of any software, drivers or agents installed and running on the remote target computer or server. The transmitted data displayed on the client computer's display screen is of a graphical nature, meaning it is a matrix of pixels that builds the textual and non-textual screen content, similar to the way pixels build a photo.
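The capture-compress-transmit path described above can be sketched as follows. The framing format and function names here are illustrative assumptions for discussion, not part of the disclosed appliance:

```python
import struct
import zlib

def package_frame(frame_pixels: bytes, frame_id: int) -> bytes:
    # Compress the captured raw pixel data and prepend a small header
    # (frame id + payload length) so it can be sent across an IP network.
    payload = zlib.compress(frame_pixels)
    header = struct.pack(">II", frame_id, len(payload))
    return header + payload

def unpack_frame(packet: bytes) -> tuple[int, bytes]:
    # Client side: split off the header and restore the pixel matrix
    # that will be painted onto the client's display screen.
    frame_id, length = struct.unpack(">II", packet[:8])
    return frame_id, zlib.decompress(packet[8:8 + length])
```

Note that what arrives at the client is only a pixel matrix; nothing in this path carries the displayed characters as text, which is the limitation addressed below.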
Such traditional hardware-based KVM solutions are also referred to as “agentless” KVM solutions and are usually preferred by IT administration professionals. Agentless KVM solutions are therefore solutions in which no special software is installed on the target computer or server being remotely accessed by a client computer (i.e., by a user using his/her personal computing device).
Agentless KVM solutions, while commonly employed at the present time, have a significant limitation. This is the inability of the client computer to select and extract text content that is visible in the video image frame being displayed on the user's display screen, and to further use and process it as text, for example, by copying it into other documents or onto the clipboard of the client's personal computing device. A typical example where such functionality is desired is when the remote computer display screen displays, for example, a textual or alphanumeric error number, a log number, a serial number, a software version number or BIOS version number, a software license number, one or more phone numbers, or possibly a hyperlink that the operator of the KVM solution would like to extract and use for further processing, possibly with one or more other applications.
As a result, the user at the client computer is not able to use and process such text in the video image frame for further consumption, or to pass the text into other applications. In order for the user to be able to use text being displayed in the video image frame on his/her personal computing device, the user typically has to resort to using an agent-based KVM solution, for example VNC (Virtual Network Computing), RDP (Remote Desktop Protocol) or another remote desktop solution. Such agent-based solutions, however, usually require installation of software onto the target computer or server, which is generally considered undesirable from an IT management standpoint due to security, complexity, and other issues.
Accordingly, a need exists to enable users to extract and use important and/or helpful text or alphanumeric information being presented in a video frame on a display of a user's device which is running an agentless KVM application, to enhance user productivity when accessing a target computer or server during a KVM session.
This section provides a general summary of the disclosure and is not a comprehensive disclosure of its full scope or all of its features.
In one aspect the present disclosure relates to a method for selecting and copying one or more characters of at least one of text or alphanumeric information appearing within a video image frame being displayed on a display of a client computing device, during a keyboard, video and mouse (KVM) session in which the client computing device is being used. The method may comprise accessing a KVM appliance using a client computing device being operated by a user, wherein the client computing device is running a KVM application, and using the KVM application to control the KVM appliance to communicate with a target computer. The method may further include using the KVM appliance to supply a video frame received from the target computer to a display operatively associated with the client computing device. The video frame contains pixels making up the video frame, with the pixels in the video frame forming at least one text or alphanumeric character. The method may further include receiving an input from a user controllable control component of the client computing device. The input defines a portion of the video frame selected by the user which includes the at least one text or alphanumeric character to be converted into a text output. The method may further include using an optical character recognition (OCR) software application to convert the at least one text or alphanumeric character into the text output, and using the client computing device to copy the text output for subsequent use by the user.
In another aspect the present disclosure relates to a method for selecting and copying one or more characters of at least one of text or alphanumeric information appearing within a video image frame being displayed on a display of a client computing device, during a keyboard, video and mouse (KVM) session in which the client computing device is being used. The method may comprise accessing a KVM appliance using a client computing device being operated by a user, wherein the client computing device is running a KVM application. The method may further include using the KVM application to control the KVM appliance to communicate with a target computer, and using the KVM appliance to supply a video frame received from the target computer to a display operatively associated with the client computing device. The video frame contains pixels making up the video frame, with the pixels in the video frame forming at least one text or alphanumeric character. The method may further include receiving an input from a user controllable control component operatively associated with the client computing device. The input defines a portion of the video frame selected by the user which includes the at least one text or alphanumeric character to be converted into ASCII text output and copied. The method may further include using an optical character recognition (OCR) software application stored in a memory of the client computing device and running on the client computing device to recognize and convert the selected at least one text or alphanumeric character into a text output, receiving a COPY command created using the client computing device, and in response to receiving the COPY command, copying the text output for subsequent use by the user.
In still another aspect the present disclosure relates to a method for selecting and copying one or more characters of at least one of text or alphanumeric information appearing within a video image frame being displayed on a display of a client computing device, during a keyboard, video and mouse (KVM) session in which the client computing device is being used. The method may comprise accessing a KVM appliance using a client computing device being operated by a user, wherein the client computing device is running a KVM application. The method may also include using the KVM application to control the KVM appliance to communicate with a target computer, and using the KVM appliance to supply a video frame received from the target computer to a display operatively associated with the client computing device. The video frame contains pixels making up the video frame, with the pixels in the video frame forming at least one text or alphanumeric character. The method may also include receiving an input from a user controllable control component operatively associated with the client computing device. The input highlights a portion of the video frame selected by the user which includes the at least one text or alphanumeric character to be converted into ASCII text output and copied. The method may further include using an optical character recognition (OCR) software application stored in a memory of the client computing device and running on the client computing device to recognize and convert the selected at least one text or alphanumeric character into an ASCII text output. 
The method may also include using a clipboard of the client computing device to receive the ASCII text output in response to receiving a COPY command input by the user of the client computing device, receiving a PASTE command initiated by the user from the user controllable control component, and pasting the ASCII text output into at least one of a selected application, a selected document or a selected web page.
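The select-OCR-copy-paste sequence recited in the aspects above can be sketched as follows. This is a minimal illustration assuming a frame represented as rows of characters, an injectable OCR callable, and hypothetical function names; a real implementation would operate on pixel data and an actual OCR engine:

```python
class Clipboard:
    # Minimal stand-in for the client computing device's internal clipboard.
    def __init__(self) -> None:
        self._text = ""

    def copy(self, text: str) -> None:
        self._text = text

    def paste(self) -> str:
        return self._text

def extract_selected_text(frame, selection, ocr):
    # selection = (x, y, width, height): the rectangular portion of the
    # displayed video frame highlighted by the user.
    x, y, w, h = selection
    region = [row[x:x + w] for row in frame[y:y + h]]
    return ocr(region)

def copy_selection(frame, selection, ocr, clipboard):
    # COPY command: OCR-convert the highlighted region and place the
    # resulting ASCII text output on the clipboard for a later PASTE.
    text = extract_selected_text(frame, selection, ocr)
    clipboard.copy(text)
    return text
```

A subsequent PASTE into an application, document or web page then simply reads the clipboard contents.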
Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations or embodiments and are not intended to limit the scope of the present disclosure.
Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings, wherein:
Example embodiments will now be described more fully with reference to the accompanying drawings.
Referring to
The KVM appliance 12 is typically in communication with a network 18, which may be a local area network or a wide area network. For simplicity, this connection will be referred to throughout the following discussion as “network 18”. It will be appreciated that the KVM appliance 12 may communicate with the target computer 16 through a separate local area network (not shown), rather than a direct hard-wired connection as shown in
The system 10 may further include an OCR software application 20 and a KVM software application 21 both loaded and running within a memory 22 (e.g., RAM, ROM, etc.) of a client computing device 24. The client computing device 24 may be a user's laptop, computing tablet, desktop computer, smartphone, or any other personal electronic device capable of running the KVM software application 21. The client computing device 24 may have a built-in display 26 (e.g., LCD, LED, etc.) or optionally may be using an external display (not shown). The client computing device 24 typically also includes some form of user controllable control component, for example a graphical user interface (GUI) device such as a keyboard 28 and/or touchpad 30 or external mouse 30a physically connected to the client computing device 24 such as via a USB connection. Optionally, a touch display feature may be used in place of the touchpad 30 or external mouse 30a to enable the user to select a portion of information appearing on the display 26 by using a finger being moved on the display 26. The keyboard 28 may be a physical keyboard, as depicted in
The client computing device 24 also may include an internal clipboard 32 onto which information selected using the touchpad 30 can be copied and pasted into an application, document or web page that the user is accessing (or will access in the future). The client computing device 24 communicates text (e.g., ASCII text) to, and receives text (e.g., ASCII text) back from, the KVM appliance 12 via the network 18. The client computing device 24 also receives a video signal back from the KVM appliance 12 over the network 18 which is displayed as a video frame on the display 26.
The system 10 provides a highly valuable feature of enabling the user to use the touchpad 30 to highlight a user selected portion of the video frame being displayed on the display 26, to OCR convert the text or alphanumeric information within the selected portion of the video frame to usable text information, and to copy the text information onto the clipboard 32 for subsequent use in an application, document or web page that the user accesses. This is accomplished by the user accessing the touchpad 30 and using one or more fingers to highlight just the portion of the information 34 within the video frame that the user wishes to convert to ASCII text, which in this example is portion 34a denoted by a dashed line. Once highlighted, the user may also select, using the touchpad 30 or a separate control on the client computing device 24, to “COPY” the selected portion of video onto the clipboard 32. The execution of the COPY command by the user invokes use of the OCR software application 20. The OCR software application 20 may be started upon the KVM application detecting that the user has selected (i.e., highlighted) a certain portion of video on the display 26, or possibly even once the COPY command has been received, and any one of these implementations may be used with the system 10.
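The two timing choices described above for starting the OCR software application (upon detecting a highlighted selection, or only once the COPY command is received) can be sketched as follows. The class and method names are assumptions made for illustration, and the OCR step is an injectable callable rather than a specific engine:

```python
class SelectionController:
    # Illustrative sketch: the OCR conversion may run as soon as a region
    # is highlighted (start_on_select=True), or be deferred until the
    # COPY command arrives (start_on_select=False).
    def __init__(self, ocr, start_on_select=False):
        self.ocr = ocr
        self.start_on_select = start_on_select
        self.selection = None
        self.pending = None     # OCR result awaiting a COPY command
        self.clipboard = ""

    def on_select(self, frame, rect):
        self.selection = (frame, rect)
        self.pending = self._run_ocr() if self.start_on_select else None

    def on_copy(self):
        # Either reuse the eagerly computed result or OCR-convert now.
        text = self.pending if self.pending is not None else self._run_ocr()
        self.clipboard = text
        self.pending = None
        return text

    def _run_ocr(self):
        frame, (x, y, w, h) = self.selection
        region = [row[x:x + w] for row in frame[y:y + h]]
        return self.ocr(region)
```

Either variant yields the same clipboard contents; the eager variant simply hides the OCR latency behind the user's COPY action.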
At this point the user will have the selected information 34a OCR converted and copied onto the clipboard 32. It will then be possible to paste the selected information 34a automatically, electronically, into a selected document or into a selected application which the user subsequently opens, or into a web page that the user has accessed or is about to access, simply by using the “PASTE” command which is common to many applications. This completely eliminates the risk of any error by the user in manually transcribing the selected information 34a. Importantly, this also provides the user with a means to select information appearing in a video frame on the display 26 which is not known to the user beforehand (e.g., a BIOS version number, serial number, etc.). While some preexisting systems have provided the capability to OCR convert certain information appearing on a display, such systems have required that the specific information be programmed or otherwise input into the OCR application beforehand. The present system 10 and method of operation are not limited to the user knowing the exact information to be OCR converted beforehand; essentially any text or alphanumeric information which appears on the display 26 can be selected by the user for OCR conversion and then copied into a different application or a document for subsequent use.
It is also important to note that the process by which the user uses the system 10 is intuitive and does not necessitate any complex procedures for the user to carry out when selecting and OCR converting select portions of text or alphanumeric information appearing on the display 26, and then copying the OCR converted text into a different application. As such, the system 10 enables text and alphanumeric information appearing on the display 26 to be OCR converted into ASCII text output and used by the user in other applications in a virtually seamless manner. Moreover, this capability exists at any time while the user is using the system 10, and is therefore not limited to capturing text or alphanumeric information during only bootup or shut down operations.
Referring now to
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
When an element or layer is referred to as being “on,” “engaged to,” “connected to,” or “coupled to” another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly engaged to,” “directly connected to,” or “directly coupled to” another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
This application claims the benefit of priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 63/354,850, filed on Jun. 23, 2022. The entire disclosure of the above application is incorporated herein by reference.
Number | Date | Country
---|---|---
63354850 | Jun 2022 | US