The present invention relates generally to computers, including laptop type computers and desktop type computers, and in particular to power management in computers.
Laptop computers, commonly referred to as notebook computers or simply “notebooks”, have become a popular format for computers. Notebook computers are portable and have processing and storage capacities comparable to desktop systems, and thus constitute a truly viable alternative to desktops. However, a serious shortcoming among notebook computers is the limited power capacity of their batteries. Consequently, power management in notebook computers has become essential to battery life. Power management in desktop systems is also an increasing concern from the point of view of reducing power waste, reducing power bills, and so on.
Power management refers to the different power states of the system, whether it is a notebook computer or a desktop unit. The power states typically refer to the power state of the computer, but may also refer to the power state of its components such as the monitor or display, a hard drive, and so on.
For a notebook computer, managing the power state of the CPU (central processing unit) is important to its battery life. Commonly used power states include the ready state, in which the computer is fully powered up and ready for use. A low power state attempts to conserve battery life by reducing power to the system. For example, a suspend state typically refers to a low power state in which power to most of the devices in the computer is removed. A hibernate state is an extreme form of low power state in which the state of the machine (e.g., its register contents, RAM, I/O caches, and so on) is saved to disk, and then power is removed from the CPU and memory, as well as the devices.
Resumption of the ready power state from a suspend state is typically effected by a keyboard press or mouse movement. Either action generates a signal that can be detected by wakeup logic, which then generates signals to bring the other components to the active and working state. Waking up from the hibernate state involves performing a boot sequence. When the system is booted, the state information previously stored to the disk is read to restore the system to the state just prior to hibernation.
As shown by the illustrative embodiments disclosed herein, the present invention relates to a power control method and apparatus for a computer. An image capture device operates to detect the presence of motion in its field of view. The computer normally operates in a full power state. In accordance with one aspect of the present invention, when the computer is idle, it is placed in a low power state; however, the image device continues to operate. When motion is detected by the image capture device, it generates a wake-up signal. The wake-up signal serves to restore the computer to the full power state.
In accordance with another aspect of the invention, if the image capture device determines that there is insufficient motion for a predetermined period of time, it can generate a suspend signal. The suspend signal serves to put the computer in the low power state.
Following is a brief description of the drawings that are used to explain specific illustrative embodiments of the present invention:
The computer system 100 shown in
CPU (central processing unit, 112) is a typical data processing component that executes program instructions stored in memory. Random access memory (RAM, 114) is a typical memory component for storing data and program instructions for access by the CPU 112. A hard drive 108 is a typical high capacity storage device for storing data and programs.
An image capture device 102 is shown connected to a USB (universal serial bus) controller 104. The image capture device will typically be some form of digital camera, capable of capturing and storing images. The image capture device 102 according to the present invention will be discussed in further detail with respect to the identified figures. As will be explained, the image capture device 102 can be configured as an internal device or an external peripheral.
The USB interface is commonly used in contemporary computer devices, and thus is the interface that was contemplated for the image capture device 102 at the time of the present invention. The USB controller logic 104 operates according to the USB specification. However, it will be appreciated that any other suitable interface logic can be used to practice the present invention.
Logic/firmware is typically provided in the computer to detect when the computer should be transitioned to a low power state, and then to perform a sequence of operations to place the computer into a low power state. For example, a known power management technique is the Advanced Configuration and Power Interface (ACPI), and is used in contemporary Intel®-based personal computers (PCs). ACPI provides a standard for implementing power management in PCs. Of course, it is understood that the present invention can be used with other power management techniques and other hardware platforms.
The wake-up logic 106 may include functionality for determining system inactivity. The measure of “activity”, or a lack of activity, is implementation specific and will vary from one manufacturer to another. Typically, however, activity is based on the number and/or frequency of asserted interrupts, accesses to specific areas in memory, disk activity, and so on. When inactivity is detected for a period of time, the logic 106 can initiate a sequence to place the computer in a low power state.
The image capture device 102′ shown in
The connection 134 of the image capture device 102″ to the computer can represent a wireless connection between the image capture device and the computer. Contemporary standards include Bluetooth® (IEEE 802.15.1). An infrared (IR) connection is also possible. It can be appreciated that the USB controller 104 and the interface 126 would then be replaced with suitably configured circuitry to provide electrical and signaling support for a wireless interface. Similarly, the connection 134 can be a wired standard other than USB; e.g., FireWire (IEEE 1394).
Referring now to
In the case of a laptop computer having an internal image capture device such as shown in
The optics section 202 images the light gathered in its field of view onto an image capture element (or array) 204. The image capture element 204 can be a charge-coupled device (CCD) or a CMOS-based device. Of course, other image acquisition technologies can be used.
A memory 206 is typically connected to the image capture element 204 so that an image that has been acquired by the image capture element can be converted and stored to memory. The conversion typically involves conversion circuitry 208 which reads out the content of the image capture element 204 and converts the information to a suitable format for processing by the processor 212. The memory 206 can be configured to store some number of images for subsequent processing.
As will be discussed below, in accordance with one embodiment of the present invention, the firmware/logic 214 will comprise a hardware implementation of the algorithms used to perform image processing operations for detecting motion. In this embodiment, the firmware/logic 214 is integrated in an ASIC-based (application specific integrated circuit) implementation which performs the image processing. Alternative hardware implementations (e.g., SoC, system on chip) integrate blocks of the image processing logic on a single chip.
In accordance with another embodiment of the present invention, the processor 212 performs processing of images stored in the memory 206 according to a method embodied in the firmware/logic 214. In this embodiment, the firmware/logic 214 may comprise primarily program instructions burned into a ROM (read-only memory). The images stored in the memory 206 would be fed to the processor 212 as a video stream and processed by executing instructions stored in the firmware/logic 214.
USB logic is provided to interface the image capture device 102 in a suitable manner to the computer, as discussed above in connection with
As noted above, the connection of the image capture device 102 to the computer can be made wirelessly. Contemporary standards include Bluetooth® (IEEE 802.15.1). An infrared (IR) connection is also possible. In such cases, it is understood that the “connector” 224 shown in
Operation of the present invention as embodied in the systems shown in
A computer typically exists in one of the following power states: READY, SUSPENDED, and HIBERNATE. In the READY state, the computer is fully powered up and ready for use. Typically there is no distinction between whether the computer is active or idle, only that the computer is fully powered.
The SUSPENDED state is a power state which is generally considered to be the lowest level of power consumption available that still preserves operational data; e.g., register contents, status registers, paging registers, and so on. The SUSPENDED state can be initiated either by the system BIOS or by higher-level software above the BIOS. The system BIOS may place the computer into the SUSPENDED state without notification if it detects a situation which requires an immediate response, such as in a laptop when the battery charge falls to a critically low level. When the computer is in the SUSPENDED state, the CPU cannot execute instructions since power is not provided to all parts of the computer.
Some computers implement a HIBERNATE state in which the data state of the computer is saved to disk and then power to the computer is removed; i.e., the computer is turned off. When power is restored, the computer performs the normal boot sequence. After booting, the computer remembers (e.g., by way of a file) that it was in the HIBERNATE state and reads the stored state from the disk. This effectively restores the machine to its operating state just prior to the time the HIBERNATE was initiated.
Referring to
If a determination is made (step 304) that the system has been inactive, then an attempt will be made to enter the low power SUSPENDED state. This is typically achieved by logic that monitors the activity and signals the OS (operating system). In a step 306, the OS makes a determination whether it is OK to transition to the SUSPENDED state. This typically involves the OS signaling the device drivers and applications to ask if it is OK to suspend. If a driver or application rejects the suspend request, then the system resumes monitoring the system activity, step 302.
If it is determined that it is OK to suspend, then the OS signals the drivers and applications in a step 308 that a suspend is going to occur. The drivers and applications can then take action to save their state, if necessary. When the OS determines that the drivers and applications have taken steps to save their state (e.g., receiving a positive indication, or simply assumes that the state has been saved), then the OS will initiate a suspend sequence, in a step 310. This may involve invoking some functionality in the BIOS to sequence the machine to the SUSPENDED state.
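The negotiation of steps 302–310 can be illustrated with a minimal sketch. The names used here (Driver, try_suspend, enter_suspend) are illustrative only and are not drawn from the specification:

```python
# Hypothetical sketch of the suspend negotiation (steps 302-310).

class Driver:
    """Stand-in for a device driver or application polled by the OS."""
    def __init__(self, name, can_suspend=True):
        self.name = name
        self.can_suspend = can_suspend
        self.state_saved = False

    def query_suspend(self):
        # Step 306: the driver may reject the suspend request.
        return self.can_suspend

    def prepare_suspend(self):
        # Step 308: the driver saves its state before the transition.
        self.state_saved = True

def try_suspend(drivers, enter_suspend):
    """Return True if the system was suspended, False if rejected."""
    if not all(d.query_suspend() for d in drivers):
        return False            # rejected: resume activity monitoring (step 302)
    for d in drivers:
        d.prepare_suspend()     # step 308: notify and save state
    enter_suspend()             # step 310: e.g., BIOS sequences to SUSPENDED
    return True
```

A single rejection returns control to the activity-monitoring loop, matching the flow described above.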
Referring to
In a step 402, the image capture device 102 captures an image (image acquisition) and saves it to its memory 206. In a step 404, a motion detection process is performed by suitable analysis of the stored images. It can be appreciated that the memory 206 must be initially “primed” with enough images so that the action step 404 can be performed; e.g., typically, the most recently captured (acquired) image is compared with the previously captured image. If, in a step 406, it is determined that there is no motion, then processing proceeds to step 402 where another image is captured. Though not shown in the figure, it can be appreciated that this process can be interrupted and stopped when the system resumes full power mode. Also, a suitable time delay between image captures can be provided, either as a hardcoded value or more typically as a user configurable parameter.
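The capture-and-compare loop of steps 402–406 can be sketched as follows; frames are modeled as flat lists of pixel values, and frames_differ is a simplified stand-in for the motion detection of step 404:

```python
# Illustrative sketch of the capture/compare loop (steps 402-406).
# The difference test and threshold value are simplifying assumptions.

def frames_differ(prev, curr, threshold=10):
    # Count pixels whose absolute difference exceeds the threshold.
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > threshold)
    return changed > 0

def detect_motion(frames):
    """Scan a stream of frames; return the index of the first frame
    in which motion relative to the prior frame is detected."""
    prev = None
    for i, frame in enumerate(frames):
        # The memory must be "primed" with at least one prior image.
        if prev is not None and frames_differ(prev, frame):
            return i             # step 406: motion detected
        prev = frame
    return None                  # no motion in the stream
```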
If, in step 406, it is determined that motion has been detected, then the image capture device 102 will generate (step 408) a suitable signal that can be detected by the computer. In the situation where the image capture device 102 is an internal device as illustrated in
In the case of a USB-compliant image capture device, step 408 may be an operation where the imaging device issues a USB Remote Wake-up command to the USB controller 104. The USB controller 104 can then respond to the command accordingly to resume from the SUSPENDED state. The USB controller resumes USB traffic, and all the devices on the bus leave the SUSPENDED state. The USB tree is now functional, and the OS informs the application of the device responsible for the wake-up, after which each application responds according to its internal design.
In the SUSPENDED state, the device requires a 500 μA power source. For an internal device, this can easily be provided by the manufacturer of the motherboard. However, in the case of an external USB device, it is usual to cut off power to external devices in the SUSPENDED state, so it typically is not possible for the external device to issue a USB Remote Wake-up command to the USB controller. However, where an external USB-compliant image capture device is employed in a notebook design which provides some minimum power to certain external devices in the SUSPENDED state, the image capture device can operate according to the steps shown in
In the case of the HIBERNATE power state, a computer having an internal image capture device (102′,
Referring to
If motion is detected within the predetermined period of time, then processing continues to step 502 to capture the next image. If, on the other hand, there has been no motion for a sufficient amount of time, then processing proceeds according to the flow shown in
As an alternative to using multiple interrupts, the image capture device can be configured to be associated with a memory address that maps to an interrupt register that is accessible by the device. The image capture device firmware/logic can then load a suitable value in the interrupt register to indicate the power state to which the computer should transition. Thus, it is possible to transition to low power configurations other than SUSPEND and HIBERNATE. For example, the interrupt handler can simply reduce the clock speed of the CPU. The LCD display can be turned off, or its brightness can be adjusted to a predetermined brightness level or a level based on detected ambient light levels. The hard disk can be slowed or stopped. These transitions can be configured via a suitable user interface.
In the case of a USB-compliant image capture device that is configured to be an external device, the imaging device can cause a USB SUSPEND or a HIBERNATION transition via the application software.
Referring to
The algorithm incorporated in the present invention is based on edge detection. The motion detection algorithm (MDA) software solution comprises the following process:
Images (frames) that are stored in the image capture device 102 are scaled down in size and input to the algorithm; e.g., an image size of 80×60 pixels can be used. Edge detection is performed using the Canny edge detection technique, where three main operations are performed:
The low pass filter removes noise inherently present in the image acquisition process, especially in low-light conditions. The gradient of the low-passed image is computed by convolving it with the Sobel operator (Eq. 1):

    | -1  0  1 |
    | -2  0  2 |   (Eq. 1)
    | -1  0  1 |
This operation is performed both vertically and horizontally, thus enhancing vertical and horizontal derivatives respectively, whose absolute values are summed up to obtain a final gradient image. The no-maximum removal operation is a technique that facilitates locating the edges in the gradient image. An example of the result of a Canny edge detection action is shown in
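The three operations described above can be sketched in simplified form. The following is a minimal illustration, not the implementation of the MDA: a 3×3 mean filter stands in for the low-pass stage, both Sobel derivatives are summed into a gradient image, and a global threshold stands in for the no-maximum removal step. Images are small 2-D lists of grayscale values, and the threshold value is an assumption:

```python
# Simplified Canny-style edge detection: low-pass, Sobel gradient,
# and a global threshold (a stand-in for no-maximum removal).

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical derivative

def convolve3x3(img, kernel):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

def mean3x3(img):
    # Low-pass stage: replace each interior pixel with its 3x3 mean.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3)) // 9
    return out

def edge_mask(img, threshold=128):
    smooth = mean3x3(img)
    gx = convolve3x3(smooth, SOBEL_X)
    gy = convolve3x3(smooth, SOBEL_Y)
    h, w = len(img), len(img[0])
    # Absolute values of both derivatives are summed into a gradient
    # image, which is then thresholded into a binary edge mask.
    return [[1 if abs(gx[y][x]) + abs(gy[y][x]) > threshold else 0
             for x in range(w)] for y in range(h)]
```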
Difference Processing
The current and previous edge images are then compared using an XOR operator. The comparison produces zeros where edges have not moved, and nonzero values at the locations where edges do not overlay, meaning that motion has taken place. An example of the result obtained by such an operator is shown in
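On binary edge masks, the difference step reduces to a per-pixel XOR, which can be sketched directly:

```python
# XOR of two binary edge masks: 1 marks a location where the edges of
# consecutive frames do not overlay, i.e., where motion has occurred.

def edge_xor(prev_mask, curr_mask):
    return [[a ^ b for a, b in zip(pr, cr)]
            for pr, cr in zip(prev_mask, curr_mask)]
```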
Detection of Active Regions
The XOR image is used to detect which parts of the scene are moving. As shown in
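Active-cell detection can be sketched as follows: the XOR image is partitioned into cells (8×8 pixels, as in the hardware implementation described later), and a cell is deemed active when its count of set pixels exceeds a threshold. The threshold value used here is illustrative:

```python
# Sketch of active-cell detection over the XOR image.

def active_cells(xor_img, cell=8, threshold=4):
    """Return a grid of 1/0 flags, one per cell, marking moving regions."""
    h, w = len(xor_img), len(xor_img[0])
    grid = []
    for cy in range(0, h, cell):
        row = []
        for cx in range(0, w, cell):
            # Count set pixels inside this cell.
            count = sum(xor_img[y][x]
                        for y in range(cy, min(cy + cell, h))
                        for x in range(cx, min(cx + cell, w)))
            row.append(1 if count > threshold else 0)
        grid.append(row)
    return grid
```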
Detection of ROI
The detection of the active cells is the main result of the motion detection algorithm. Once this information is available, different things can be done, according to different goals. In the software implementation of the entire MDA, the aim that was considered was to establish an area of the image that is most likely to contain the head of the subject (based on the amount and the structure of motion) in a typical webcam use scenario.
In this case, it is fundamental that the scenario is well defined, as well as the main goal of the algorithm, since it is in this part that heuristics and smart hypotheses play an important role in obtaining the desired result. To accomplish this task, the algorithm acts as depicted in
With respect to
As scanning continues, active cell 617 is encountered. Compared to active cell 612, cell 617 is deemed not too far from cell 612. This causes a counter to be incremented, and the event is called a “hit”. As can be seen in
Continuing with the algorithm, refer now to
The foregoing discussion described how spatial information is taken into account to estimate the subject's head position. The complete software MDA also uses temporal information: the position of the head estimated in the current frame is compared to the one found for the previous frame. If the two positions are too far apart, then the new position is ignored, and the head position is maintained at the previous coordinates.
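The temporal check can be sketched as a simple distance gate; the distance threshold used here is an illustrative assumption, as the specification does not state a value:

```python
# Temporal smoothing of the estimated head position: a new estimate
# too far from the previous one is rejected and the previous
# coordinates are kept.

def smooth_head_position(prev_pos, new_pos, max_jump=20):
    if prev_pos is None:
        return new_pos                  # first frame: accept the estimate
    dx = new_pos[0] - prev_pos[0]
    dy = new_pos[1] - prev_pos[1]
    if dx * dx + dy * dy > max_jump * max_jump:
        return prev_pos                 # too far apart: keep previous position
    return new_pos
```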
Next, with reference to
An acquired image is fed into a downscaler 722 to produce a down-scaled (or down-sampled) version of the image from the video stream; in a particular implementation, its dimensions are 80×60 pixels. This downscaled image serves as an input to the algorithm. In part (a), each line of the image is low-pass filtered by a low-pass filter stage 702 using a single-line horizontal mask. In a particular embodiment, the low-pass filter 702 is a simple mean filter, i.e., a filter that computes the mean value of adjacent pixels. This differs from the software solution, where a 5×5 Gaussian low-pass filter was used. By processing only one line at a time, line-memory need is minimized.
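The single-line mean filter of stage 702 can be sketched as follows; a 1×3 window is assumed here, since the actual mask width is not specified above:

```python
# Sketch of the single-line horizontal mean filter (stage 702):
# each output pixel is the mean of adjacent pixels on the same line,
# so only one line of memory is needed.

def mean_filter_line(line):
    out = line[:]                        # borders pass through unchanged
    for x in range(1, len(line) - 1):
        out[x] = (line[x - 1] + line[x] + line[x + 1]) // 3
    return out
```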
In part (b), the data stream is fed into a Sobel operator stage 704 to find the edges of the image, as in the software approach. In the hardware adaptation, however, we just consider the result of the convolution between the low-pass filtered image obtained in the previous step and the vertical Sobel mask of Eq. 1. To perform this operation, three lines of the image are necessary. A two-line buffer 724 provides the 2×3 pixels from the previous two lines, while the remaining 1×3 pixels of the current (third) line are provided “on-the-fly” from the low-pass filter stage 702. In this step, we obtain information only on the vertical edges. This is deemed to be sufficient, as it has been observed that the vertical edges contain most of the information needed to detect interesting motion.
Once the vertical edges are detected, a global edge threshold is applied to the image in order to obtain a binary image. This operation is performed by a comparator stage 706 and corresponds to the no-maximum removal that is performed in the Canny edge detector of the software approach, but is much simpler and thus less accurate. Nonetheless, it was found to produce good results for our overall goal.
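Stages 704 and 706 together can be sketched as follows: three consecutive lines (two from the line buffer 724, one arriving from the filter stage 702) are convolved with the Sobel mask and the result is thresholded into one line of the binary edge image. The horizontal-derivative form of the Sobel kernel, which responds to vertical edges, is assumed here, and the threshold value is illustrative:

```python
# Sketch of stages 704/706: Sobel convolution over three lines,
# followed by a global threshold (comparator stage 706).

SOBEL_MASK = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges

def sobel_edge_line(line_above, line_mid, line_below, threshold=100):
    """Produce one line of the binary edge image from three input lines."""
    lines = [line_above, line_mid, line_below]
    w = len(line_mid)
    out = [0] * w
    for x in range(1, w - 1):
        g = sum(SOBEL_MASK[j][i] * lines[j][x + i - 1]
                for j in range(3) for i in range(3))
        out[x] = 1 if abs(g) > threshold else 0
    return out
```

Note that a purely horizontal edge produces no response with this mask, consistent with the observation above that only vertical edges are retained.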
The result of the foregoing stages is an edge mask image (vertical edges in this case). In part (c), this image is compared to the previous one, which is stored in an edge image memory 712, by an XOR operator stage 708 corresponding to the XOR operation in the software solution. This comparison is performed in a pixel-by-pixel fashion, feeding the pipeline stage.
At this point, the hardware solution has at its disposal the information of the XOR image. This image will certainly be different from the XOR image obtained by the software solution, since a different process was used to obtain the images for performing the XOR operation. However, once produced and stored in memory, the XOR result can be used directly by the software solution in a transparent way to compute the active-cell image. In the hardware solution, however, we have memory constraints, and thus it is not practical to store this image. Consequently, the active-cell image is computed on the fly, as the XOR values enter the pipeline stage.
To do this, a certain number of N-bit registers 742 is used to count the number of active pixels within a line. Each register will store the number of active pixels within N consecutive pixels. Since in this implementation we set the dimension of the cell to 8×8 pixels—as in the software case—we use ten eight-bit registers to store the information of a line, which is 80 pixels wide.
When eight lines are processed, the information contained in the registers is the number of active pixels in an 8×8 block, i.e., the same information we had in the software implementation. This information is thresholded via a comparator 734 and each cell is then deemed to be active or inactive based on the programmable cell activity threshold. Control logic 732 for the N-bit registers 742 sets the registers to zero in preparation for processing the next eight lines.
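The register bank and comparator of the two paragraphs above can be sketched as follows, modeling the ten counters as a Python list. One call processes one band of eight lines, after which the caller clears the state simply by calling again with the next band, mirroring the reset performed by control logic 732:

```python
# Sketch of stages 742/734: ten counters accumulate active pixels per
# 8-pixel column group across eight lines; each count is thresholded
# into an active/inactive cell flag.

def active_cell_band(xor_lines, cell=8, threshold=4):
    """xor_lines: eight binary lines of equal width (one cell band).
    Returns one row of active-cell flags."""
    width = len(xor_lines[0])
    counters = [0] * (width // cell)     # models the N-bit registers 742
    for line in xor_lines:
        for x, bit in enumerate(line):
            counters[x // cell] += bit   # accumulate active pixels per group
    # Comparator 734: threshold each counter into an activity flag.
    return [1 if c > threshold else 0 for c in counters]
```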
At the end of this process, we obtain an active-cell image, which can be stored in memory and used again by the software solution. As was done for the XOR image, we process this image on the fly. The hardware implementation cannot efficiently perform the same head-detection algorithm used in software, since the comparisons and the multiple scans that would be required do not fit a hardware solution efficiently.
For this reason, we have chosen to simply compute a motion centroid, i.e., the average x and y coordinates of the active cells, since this can be done as active cells are detected. To do this, we use two supplementary registers 744a, 744b to accumulate the positions of the detected active cells. A centroid computation stage 736 performs the operations to determine the centroid. Once the entire image is scanned, these registers will contain the sum of the x and y coordinates of the active cells. Using an additional counter 744c that counts the number of active cells, a simple division performed by the computation stage 736 produces the coordinates of the centroid.
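The accumulate-and-divide operation of registers 744a/744b, counter 744c, and stage 736 can be sketched as:

```python
# Sketch of the centroid stage 736: accumulate x and y coordinates of
# active cells (registers 744a/744b) and their count (counter 744c),
# then divide to obtain the motion centroid.

def motion_centroid(active_grid):
    sum_x = sum_y = count = 0
    for y, row in enumerate(active_grid):
        for x, cell in enumerate(row):
            if cell:
                sum_x += x           # register 744a
                sum_y += y           # register 744b
                count += 1           # counter 744c
    if count == 0:
        return None                  # no motion detected in this frame
    return (sum_x / count, sum_y / count)
```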