As wireless devices become physically smaller and more complex, they do little to reflect their state or to present a direct affordance for programming and configuration. For example, consider the scenario of replacing a wireless light switch. Suppose that the wireless light switch is configured not only to turn on a specific set of outside lights when a physical switch is thrown, but also to activate lights when motion sensors detect a possible visitor in the darkness and to activate lights automatically based on the time of day. In this example, a fairly simple device contains significant programming and configuration information. To replace the wireless light switch, this programming and configuration information is typically looked up and/or transferred from the wireless light switch to an intermediate device, such as a computer. The information is subsequently transferred from the intermediate device to the replacement switch. In some cases, this type of transfer may require the user to have specific knowledge of how to look up and transfer the information, and to perform several time-consuming actions to replace the wireless light switch.
As another example, consider the seemingly simple task of replacing one wireless device with another, such as the scenario in which an old personal digital assistant (PDA) is replaced with a new PDA. In the current state of the art, to transfer the state and configuration information of the old PDA into the new PDA, a user may be required to use an intermediate device (e.g., a computer) to program and configure the new PDA. Specifically, the old PDA is docked on a computer, the information from the old PDA is transferred to the computer using the keyboard and other input devices associated with the computer, and the old PDA is subsequently undocked. Further, the old PDA's state and configuration must be known to the user; for example, the configuration of the old PDA may include which other devices the old PDA communicates with and how this communication is performed. Upon determining this information, the user can effectively copy the state and configuration information into the new PDA: the new PDA is docked, and the information is transferred to it. Thus, again, the user may be required to determine the specifics of each PDA in order to effectively transfer the state and content from the old PDA to the new PDA.
In general, in one aspect, the invention relates to a method for transferring digital content, comprising defining a first region of space associated with a first device and a second region of space associated with a second device, wherein the first device comprises digital content to be transferred to the second device, performing a first action within the first region, obtaining the digital content to be transferred from the first device in response to performing the first action to obtain captured digital content, performing a second action within the second region, and transferring the captured digital content to the second device in response to performing the second action.
In general, in one aspect, the invention relates to a system for transferring digital content, comprising a first device comprising digital content to be transferred to a second device, a user configured to perform a first action resulting in the capture of the digital content to be transferred from the first device to obtain captured digital content, wherein the user is further configured to perform a second action resulting in the transfer of the captured digital content to the second device, and a detection object configured to detect and interpret the first action and the second action.
In general, in one aspect, the invention relates to a method for removing digital content, comprising defining a first region of space associated with a first device, wherein the first device comprises digital content to be removed from the first device, performing a first action within the first region, denoting the digital content to be removed from the first device in response to performing the first action, and removing the digital content from the first device in response to performing the first action.
Other aspects of the invention will be apparent from the following description and the appended claims.
Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency. Further, the use of “ST” in the drawings is equivalent to the use of “Step” in the detailed description below.
In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid obscuring the invention.
In general, embodiments of the invention relate to providing a user interface for transferring digital content directly from one device to another. Specifically, one or more embodiments of the invention relate to the direct manipulation of software by performing actions that result in the transfer of digital content from one device to another device, or the removal of digital content from a device. More specifically, embodiments of the invention relate to capturing software by performing actions that are interpreted by a detection object, and transferring or removing the captured digital content by performing similar actions.
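By way of overview, the following Python sketch models this capture, transfer, and removal flow under stated assumptions; the class names, action vocabulary, and content representation are illustrative inventions for this sketch and are not part of the disclosure itself.

```python
class Device:
    """A device holding digital content (program, state, configuration)."""
    def __init__(self, device_id, content=None):
        self.device_id = device_id
        self.content = content or {}


class DetectionObject:
    """Detects and interprets user actions, mediating capture and transfer."""
    def __init__(self):
        self.captured = None  # digital content currently "held" by the user

    def on_action(self, action, device):
        if action == "grab":       # first action: capture content from device
            self.captured = dict(device.content)
        elif action == "drop":     # second action: transfer captured content
            device.content.update(self.captured or {})
        elif action == "discard":  # removal: delete the denoted content
            device.content.clear()


# Example: moving a wireless light switch's configuration to a replacement.
old_switch = Device("switch-old", {"schedule": "dusk-to-dawn", "motion": True})
new_switch = Device("switch-new")
detector = DetectionObject()
detector.on_action("grab", old_switch)  # gesture performed near the old switch
detector.on_action("drop", new_switch)  # gesture performed near the new switch
print(new_switch.content)  # {'schedule': 'dusk-to-dawn', 'motion': True}
```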
In one embodiment of the invention, digital content is obtained from Device A (100) by detecting and interpreting actions performed by a user in the presence of Device A (100). In one embodiment of the invention, an action may be a physical gesture, such as one or more rotations of a hand in a clockwise/counterclockwise direction, a grabbing motion of a hand, a dropping motion of a hand, the blink of an eye, etc. Further, actions performed by a user may involve a voice recognition interface, where a user uses his/her voice to obtain digital content from Device A (100) and transfer digital content to Device B (102).
In one embodiment of the invention, actions are performed by a user in a defined region of space associated with Device A (100) (i.e., Region A (104)).
Those skilled in the art will appreciate that a defined region of space associated with a device may be specified in any unit (e.g., metric units, English units, etc.). In one embodiment of the invention, a defined region of space may be specified as a percentage of the dimensions of the device with which it is associated. For example, if a particular device is 12 centimeters by 15 centimeters, then the height of the defined region of space associated with the device may be a percentage of the length and width of the device. Alternatively, in one embodiment of the invention, a defined region of space may be unrelated to the associated device's measurements. For example, the region of space may extend beyond the edge of the device. Further, if the device has a very large surface area, the height of the defined region of space may be short, whereas if the device is small, the height may be greater. Those skilled in the art will appreciate that the defined region of space associated with a particular device may be located in any region that is spatially near the device from which digital content is to be obtained or transferred. For example, a defined region of space associated with a device may be to the side of the device, beneath the device, etc. Further, those skilled in the art will appreciate that the defined regions of space associated with different devices may be of different dimensions and located in different areas relative to each device.
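One possible way to compute such a defined region is sketched below in Python, assuming an axis-aligned box above the device; the helper names, the reference area, and the inverse footprint-to-height scaling are illustrative assumptions, not a definitive implementation.

```python
def defined_region(x_cm, y_cm, width_cm, length_cm,
                   reference_area=180.0, base_height=10.0):
    """Return an axis-aligned box (x0, y0, z0, x1, y1, z1) hovering above a
    device whose corner sits at (x_cm, y_cm). Larger footprints get shorter
    regions and smaller footprints taller ones, per the inverse relationship
    suggested above."""
    area = width_cm * length_cm
    height_cm = base_height * reference_area / area
    return (x_cm, y_cm, 0.0, x_cm + width_cm, y_cm + length_cm, height_cm)


def in_region(point, region):
    """True if a 3-D point (px, py, pz) lies inside the defined region."""
    px, py, pz = point
    x0, y0, z0, x1, y1, z1 = region
    return x0 <= px <= x1 and y0 <= py <= y1 and z0 <= pz <= z1


region_a = defined_region(0, 0, 12, 15)  # the 12 cm x 15 cm device above
print(in_region((6, 7, 3), region_a))    # True: a gesture inside Region A
```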
Upon determining that Device A (100) is near, the detection object (108) determines the unique identity of Device A (100). In one embodiment of the invention, the detection object (108) may read a bar code or an RFID tag on each device, perform an optical recognition task, or communicate directly with Device A (100) over a network to determine the unique identity of Device A (100). Those skilled in the art will appreciate that there may be several other ways for the detection object to determine a unique ID associated with another device, such as using an Ethernet cable to determine a device's IP address, etc.
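A minimal sketch of how such identity resolution might be layered is given below; the reader functions are hypothetical stand-ins for real RFID, bar-code, and network drivers, and the probe ordering is an assumption for illustration.

```python
from types import SimpleNamespace


def read_rfid_tag(device):
    return getattr(device, "rfid_tag", None)    # passive: works unpowered

def scan_bar_code(device):
    return getattr(device, "bar_code", None)    # optical recognition

def query_over_network(device):
    return getattr(device, "ip_address", None)  # e.g., via an Ethernet link


def resolve_identity(device):
    """Try passive mechanisms first, then direct network communication."""
    for probe in (read_rfid_tag, scan_bar_code, query_over_network):
        unique_id = probe(device)
        if unique_id is not None:
            return unique_id
    raise LookupError("unable to determine a unique ID for the device")


print(resolve_identity(SimpleNamespace(rfid_tag="A1B2C3")))  # A1B2C3
```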
Further, in one embodiment of the invention, the detection object (108) may include a mechanism to detect and interpret the actions performed by a user. For example, in one embodiment of the invention, the detection object (108) may detect physical gestures using embedded accelerometers that are able to parse the actions being performed by a user. In one embodiment of the invention, if power is unavailable to one or more devices, the detection object (108) may use passive methods to determine unique device IDs to identify those devices. Those skilled in the art will appreciate that the detection object may be any device capable of communicating with other devices and may or may not be worn by a user.
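By way of illustration, the following Python sketch parses accelerometer samples into gestures; the two-gesture vocabulary and the thresholds are illustrative assumptions (a practical detection object would likely use a trained recognizer rather than fixed cutoffs).

```python
def classify_gesture(samples):
    """Classify a list of (ax, ay, az) accelerometer samples, in g units.
    A sharp upward spike is read as a 'grab'; a downward spike as a 'drop'."""
    if not samples:
        return None
    peak_up = max(az for _, _, az in samples)
    peak_down = min(az for _, _, az in samples)
    if peak_up > 1.5:      # strong upward acceleration: grabbing motion
        return "grab"
    if peak_down < -1.5:   # strong downward acceleration: dropping motion
        return "drop"
    return None


print(classify_gesture([(0.0, 0.0, 0.1), (0.0, 0.0, 2.0), (0.0, 0.0, 0.2)]))  # grab
```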
Further, in one embodiment of the invention, performing actions in the defined regions of space of two or more devices may result in establishing a direct connection between the devices. In this case, once a user has established a direct connection, the devices may communicate directly with each other to transfer digital content to/from one or more devices. For example, in one embodiment of the invention, Device A (100), Device B (102), and the detection object (108) may form a network (e.g., a mesh network, a cluster network, etc.), where each device is directly connected to each of the other devices and all the devices can communicate directly with each other. In this case, actions performed in a defined region of space associated with Device A (100) or Device B (102) are parsed by the detection object (108) and may result in establishing a direct link between Device A (100) and Device B (102).
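As a rough sketch of this pairing behavior, the code below links two devices after consecutive actions are parsed in their respective regions; the LinkBroker name and the two-step protocol are assumptions made for illustration only.

```python
class LinkBroker:
    """Pairs devices into direct links as actions are parsed in their regions."""
    def __init__(self):
        self.pending = None  # device whose region saw the first action
        self.links = []      # established (device, device) pairs

    def action_in_region(self, device_id):
        """Two consecutive in-region actions establish a direct link."""
        if self.pending is None:
            self.pending = device_id
        else:
            self.links.append((self.pending, device_id))
            self.pending = None


broker = LinkBroker()
broker.action_in_region("device-a")  # gesture in Region A
broker.action_in_region("device-b")  # gesture in Region B
print(broker.links)                  # [('device-a', 'device-b')]
```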
In one embodiment of the invention, when a user performs an action in Region A (104), the detection devices (e.g., Camera A (130), Camera B (132)) record the action performed, and the associated computer system captures the digital content corresponding to the action (i.e., a handle to, or copy of, the program information, state information, data, etc. is obtained).
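The distinction between obtaining a handle to the digital content and obtaining a copy of it might be sketched as follows; the capture function and its copy-versus-handle semantics are illustrative assumptions rather than the disclosed mechanism.

```python
import copy
from types import SimpleNamespace


def capture(device, as_copy=True):
    """Return either an independent deep copy of the device's digital
    content or a live handle (reference) to it."""
    return copy.deepcopy(device.content) if as_copy else device.content


device_a = SimpleNamespace(content={"program": "lights-on-at-dusk"})
snapshot = capture(device_a)               # copy: survives later device changes
handle = capture(device_a, as_copy=False)  # handle: tracks the live content
```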
Those skilled in the art will appreciate that the user experience (i.e., the experience of the user transferring or removing digital content) involves performing actions spatially near one or more devices to mark, capture, transfer, and/or remove digital content. Physically, a user performs an action near a device and captures the digital content, represented as a cloud of ectoplasm hovering over the device. Subsequently, the user may transfer the content in the same manner, i.e., by physically performing an action that drops the captured cloud of ectoplasm into another device. Thus, the user experience is represented by the metaphor of capturing and transferring a cloud of ectoplasm, as if the digital content were something a user could simply grab out of the air near one device and project into another.
At this stage, a determination is made whether the digital content is to be held (Step 310) or immediately transferred to Device B. In one embodiment of the invention, a user may decide to hold the digital content captured from Device A (Step 310), in which case the digital content may be stored in a user device (Step 312). For example, if Device A and Device B are far away from each other, or the digital content is to be transferred at some later time, the user may store the digital content captured from Device A in a user device, such as a wrist band, ring, wristwatch, a computer system associated with a detection device, etc. In this case, the user device may include memory to store the digital content captured from Device A.
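A minimal sketch of this hold-or-transfer branch (Steps 310 and 312), assuming a hypothetical UserDevice class standing in for a wrist band, ring, or wristwatch with storage:

```python
class UserDevice:
    """Wearable store (wrist band, ring, wristwatch) for captured content."""
    def __init__(self):
        self.memory = []             # storage for held digital content

    def hold(self, content):
        self.memory.append(content)  # Step 312: store in the user device

    def release(self):
        return self.memory.pop() if self.memory else None


def after_capture(content, hold, user_device, transfer_to_b):
    if hold:                         # Step 310: hold for a later transfer
        user_device.hold(content)
    else:
        transfer_to_b(content)       # otherwise transfer to Device B now


wristwatch = UserDevice()
after_capture({"state": "on"}, hold=True, user_device=wristwatch,
              transfer_to_b=lambda c: None)
print(wristwatch.release())          # {'state': 'on'}
```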
Embodiments of the invention provide a user interface for a user to transfer digital content from one or more devices to one or more different devices, without the use of a display screen or keyboard. Further, one or more embodiments of the invention allow a user to directly manipulate software to obtain and transfer digital content with the use of physical gestures that are interpreted by another device.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.
This application is a continuation of U.S. patent application Ser. No. 11/111,390, entitled “METHOD AND APPARATUS FOR TRANSFERRING DIGITAL CONTENT” and filed on Apr. 21, 2005. Accordingly, this application claims the benefit of U.S. patent application Ser. No. 11/111,390 under 35 U.S.C. §120.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 11111390 | Apr 2005 | US |
| Child | 13372284 | | US |