Photo frames are traditionally provided as a means for securing and displaying memories in the form of photos. Photos are often gifted to friends and family in photo frames to serve as mementos of particular events or relationships. A user can generally look at a photo in a frame and reminisce about particular events or relationships. Oftentimes, however, memories can fade, and individual perspectives of a particular event or relationship may differ. Accordingly, there is a need for a photo frame that allows one or more users to capture their own audible comments or perspectives on the photo. Additionally, because photos are interchangeable, there is also a need for a photo frame that associates the audible comments with user-definable portions of the photo.
Embodiments of the invention are defined by the claims below, not this summary. A high-level overview of various aspects of the invention is provided here to introduce the disclosure and a selection of concepts that are further described in the detailed description section below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in isolation as an aid in determining the scope of the claimed subject matter.
In brief and at a high level, this disclosure describes, among other things, a photo frame and method for storing audio tracks and associating them with user-defined touch zones on a photo supported by the frame. The photo frame includes a capacitive sensor array that provides a backing on which a user can place a photo. The capacitive sensor array is coupled to a microprocessor that enables the creation of user-defined touch zones by allowing the user to encircle an area of the photo with a touch gesture. The frame also includes a microphone for recording an audio track corresponding to each user-defined touch zone. The audio tracks and corresponding touch zones are stored in a memory. In some embodiments, the microprocessor can determine a selected touch zone based on the location of the user's touch and, based on that location, play back the audio track corresponding to the selected touch zone.
This summary is provided to introduce a selection of concepts in a simplified form. These concepts are further described below in the detailed description of the preferred embodiments. Various other aspects and advantages of the present invention will be apparent from the following detailed description of the preferred embodiments and the accompanying drawing figures.
Illustrative embodiments of the invention are described in detail below with reference to the attached drawing figures.
The drawing figures do not limit the present invention to the specific embodiments disclosed and described herein. The drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the preferred embodiments.
The subject matter of select embodiments of the invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of the claims. Rather, the claimed subject matter might be embodied in other ways to include different components, steps, or combinations thereof, similar to the ones described in this document, in conjunction with other present or future technologies. Terms should not be interpreted as implying any particular order among or between the various steps disclosed herein unless and except when the order of individual steps is explicitly described.
Methods and devices are described herein for recording, storing and associating audio tracks with user-defined touch zones corresponding to areas of a photo supported on a frame. In particular, one aspect of the invention is directed to a photo frame. The photo frame includes a capacitive sensor array including a set of capacitive sensors, wherein each capacitive sensor in the set is operable to detect touch inputs; a memory operable for storing one or more user-defined touch zones, wherein each user-defined touch zone corresponds to a unique subset of capacitive sensors in the set; a microprocessor coupled to the capacitive sensor array and the memory, wherein the microprocessor is configured to generate each of the one or more user-defined touch zones by detecting a plurality of touch inputs encircling a unique subset of capacitive sensors; and a microphone coupled to the microprocessor and operable to record an audio track to the memory corresponding to each of the one or more user-defined touch zones.
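By way of illustration only, and not limitation, the following Python sketch models the apparatus described above as simple data structures: a grid of capacitive sensors, a collection of user-defined touch zones, and one stored audio track per zone. The class and field names (e.g., PhotoFrame, touch_zones, audio_tracks) are assumptions made for this sketch and are not part of the claimed subject matter.

```python
# Illustrative sketch only; names and structure are assumptions, not the claimed design.
from dataclasses import dataclass, field

@dataclass
class PhotoFrame:
    rows: int                                          # assumed grid layout of the sensor array
    cols: int
    touch_zones: dict = field(default_factory=dict)    # zone_id -> set of (row, col) sensors
    audio_tracks: dict = field(default_factory=dict)   # zone_id -> recorded audio bytes

frame = PhotoFrame(rows=8, cols=10)
frame.touch_zones[1] = {(2, 3), (2, 4), (3, 3), (3, 4)}   # a zone covering four sensors
frame.audio_tracks[1] = b"...recorded audio..."           # the track associated with zone 1
print(frame.touch_zones[1], len(frame.audio_tracks))
```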
Another aspect of the invention is directed to a method of associating an audio track with a user-defined touch zone of a photo supported by a photo frame. The method includes receiving a plurality of touch inputs, generally from a single touch gesture, through the photo placed over a capacitive sensor array, the plurality of touch inputs encircling a unique subset of capacitive sensors on the capacitive sensor array; generating, with a microprocessor in a recording mode and coupled to the capacitive sensor array, a user-defined touch zone based on a path covered by the touch gesture; and receiving an audio track corresponding to the user-defined touch zone for storage to a memory coupled to the microprocessor.
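By way of example, and not limitation, the outline below sketches these method steps in Python. Each helper function is a hypothetical stand-in for the gesture detection, touch zone generation, and audio capture described above; none of the names reflect an actual implementation.

```python
# Illustrative outline of the method; every helper is a hypothetical stand-in.
def receive_touch_inputs():
    """Return the (row, col) sensors crossed by a single encircling touch gesture."""
    return [(1, 1), (1, 2), (1, 3), (2, 3), (3, 3), (3, 2), (3, 1), (2, 1)]

def generate_touch_zone(gesture_path):
    """Derive a user-defined touch zone (a set of sensors) from the gesture path."""
    return set(gesture_path)   # a fuller sketch would also add the sensors enclosed by the path

def receive_audio_track():
    """Placeholder for recording an audio track through the microphone."""
    return b"audio-track"

def record_and_associate(memory):
    path = receive_touch_inputs()                      # touches received through the photo
    zone = generate_touch_zone(path)                   # microprocessor in a recording mode
    memory[frozenset(zone)] = receive_audio_track()    # store the track keyed to the zone
    return memory

print(record_and_associate({}))
```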
In some aspects of the disclosure, a photo frame includes a frame border having a presentation face, the presentation face having a plurality of capacitive sensors operable to detect touch inputs, and a masking material covering the plurality of capacitive sensors, wherein the masking material presents watermarks in front of each of the capacitive sensors, and wherein each capacitive sensor is operable to detect touch inputs through the masking material; a microphone operable to receive audio; a memory having a plurality of partitions, each partition configured to store an audio track received from the microphone and having a corresponding capacitive sensor from the plurality of capacitive sensors; and a microprocessor configured to detect a touch input from one of the plurality of capacitive sensors and to activate the microphone for receiving the audio track for storage to the corresponding partition.
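As a further illustration, and again not as a limitation, the sketch below models this border-mounted variant, in which each capacitive sensor on the frame border has its own memory partition. The names PARTITION_COUNT and on_border_touch are assumptions made for this sketch.

```python
# Illustrative sketch: one memory partition per border-mounted capacitive sensor.
PARTITION_COUNT = 4                    # assumed number of sensors/watermarks on the border

partitions = [None] * PARTITION_COUNT  # partitions[i] holds the track for border sensor i

def on_border_touch(sensor_index, recorded_audio):
    """Touching border sensor i activates the microphone; the track lands in partition i."""
    partitions[sensor_index] = recorded_audio

on_border_touch(2, b"a short spoken message")
print(partitions)
```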
With reference now to the figures, methods and devices are described in accordance with embodiments of the invention. Various embodiments are described with respect to the figures, in which like elements are depicted with like reference numerals. Referring initially to the drawing figures, a photo frame assembly 10 is illustrated.
The photo frame assembly 10 also includes a capacitive sensor array 22, as shown in the figures.
Referring now to the figures, components of the frame assembly 10 are described in further detail.
The frame assembly 10 may include a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the frame assembly 10 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the frame assembly 10. Computer storage media does not comprise signals per se. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 40 includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. The frame assembly 10 includes one or more processors 42 that read data from various entities such as the memory 40 or I/O components (not shown). The memory may be operable to store computer-readable instructions for execution by the one or more processors. The memory may also be operable to store media (e.g., audio files, recordings, or audio “tracks”) as well as other data structures. In some embodiments, the data structures can relate to user-defined touch zones generated by the one or more processors and corresponding to particular audio files, as will be described herein.
In some embodiments, the capacitive sensor array 22 is operable to detect touch inputs on any one of the plurality of capacitive sensors 24 disposed thereon. Each capacitive sensor 24 is operable to detect a touch input (e.g., a finger touch), such that the capacitive sensor array 22 can detect, from a user, a plurality of touch inputs from a single touch gesture conducted across a plurality of capacitive sensors 24 (e.g., a finger swipe). The capacitive sensor array 22 is operable to detect a sequence of touch inputs from a single touch gesture across a plurality of capacitive sensors 24 and to communicate the location of each touch, corresponding to a capacitive sensor 24 and its position on the capacitive sensor array 22, to the one or more processors 42. In some embodiments, the capacitive sensor array 22 may be passive, such that the processor detects the touch inputs based on body capacitance sensed by the individual capacitive sensors 24 on the capacitive sensor array 22. The one or more processors 42 may be operable to receive, from the capacitive sensor array 22, a plurality of signals each corresponding to a touch input on a particular capacitive sensor 24 and a location thereof with respect to the capacitive sensor array 22.
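By way of example, and not limitation, the sketch below shows one assumed way that per-sensor capacitance readings could be thresholded into touch locations reported to the one or more processors 42; in practice, the sensing itself is an analog body-capacitance measurement performed by the capacitive sensor array 22, and the threshold value here is purely illustrative.

```python
# Illustrative sketch: thresholding per-sensor readings into reported touch locations.
TOUCH_THRESHOLD = 0.5   # assumed normalized capacitance threshold

def detect_touches(readings):
    """readings: 2-D list of normalized capacitance values, one per capacitive sensor.
    Returns the (row, col) locations whose reading exceeds the threshold."""
    touches = []
    for r, row in enumerate(readings):
        for c, value in enumerate(row):
            if value > TOUCH_THRESHOLD:
                touches.append((r, c))
    return touches

sample = [[0.1, 0.9, 0.1],
          [0.1, 0.8, 0.1],
          [0.1, 0.1, 0.1]]
print(detect_touches(sample))   # -> [(0, 1), (1, 1)]
```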
In other embodiments, the processor 42 may include executable instructions embedded thereon, or may read executable instructions stored in the memory 40. As such, the processor may execute instructions for generating user-defined touch zones based on the detection of one or more touch inputs encircling any subset of capacitive sensors. In some embodiments, the generation of user-defined touch zones is performed sequentially, based on the order in which the touch inputs were detected. For example, a first touch gesture conducted on the capacitive sensor array 22 encircling a first group of capacitive sensors can initiate the generation of a first user-defined touch zone. In this regard, the first group of capacitive sensors defined by the first touch gesture may include all capacitive sensors included in the gesture path as well as all capacitive sensors encircled thereby. As such, the first touch zone may include all capacitive sensors in the first group. In some embodiments, a second touch gesture conducted on the capacitive sensor array 22 may encircle a second group of capacitive sensors, initiating the generation of a second user-defined touch zone. As a result, the second touch zone may include all capacitive sensors in the second group. In some instances, the second touch gesture may overlap one or more capacitive sensors included in the first touch zone. In such an event, the second touch zone may take priority over the first touch zone, and each of the overlapped capacitive sensors may be reassociated with the second touch zone. In other instances, priority may be given to the first touch zone, whereby the overlapped capacitive sensors are not reassociated with the second touch zone.
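By way of illustration only, the Python sketch below shows one possible realization of the zone generation and priority handling just described: the gesture path plus every sensor it encloses forms the zone (found here by flood-filling inward from the array border), and, when a newer zone has priority, overlapped sensors are reassociated with it. The function names and the grid-based flood fill are assumptions made for this sketch, not the only way to implement the behavior.

```python
# Illustrative sketch: build a touch zone from an encircling gesture path and
# reassociate overlapped sensors when a newer zone takes priority.
from collections import deque

def zone_from_gesture(path, rows, cols):
    """Return the set of sensors on the gesture path plus all sensors it encloses."""
    path = set(path)
    outside = set()
    queue = deque((r, c) for r in range(rows) for c in range(cols)
                  if (r in (0, rows - 1) or c in (0, cols - 1)) and (r, c) not in path)
    outside.update(queue)
    while queue:                                  # flood fill from the array border
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in path \
                    and (nr, nc) not in outside:
                outside.add((nr, nc))
                queue.append((nr, nc))
    return {(r, c) for r in range(rows) for c in range(cols) if (r, c) not in outside}

def add_zone(zones, new_zone, new_zone_has_priority=True):
    """Add a zone; if it overlaps earlier zones, resolve ownership per the chosen priority."""
    if new_zone_has_priority:
        for zone_id in zones:                     # reassociate overlapped sensors
            zones[zone_id] -= new_zone
    else:
        for existing in zones.values():           # earlier zones keep their sensors
            new_zone -= existing
    zones[len(zones) + 1] = new_zone
    return zones

loop = [(1, 1), (1, 2), (1, 3), (2, 3), (3, 3), (3, 2), (3, 1), (2, 1)]
zones = add_zone({}, zone_from_gesture(loop, rows=5, cols=5))
print(zones)   # zone 1 includes the loop and the enclosed sensor (2, 2)
```

In this example, the loop of eight sensors encloses sensor (2, 2), so the generated zone contains nine sensors; flood-filling from the border inward is simply one assumed way of identifying the encircled sensors.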
In some embodiments, the processor's generation of a touch zone may initiate, immediately or shortly thereafter, an audio recording session by activating the microphone 38. In some instances, audible feedback (e.g., a beep or voice instruction) is also provided through the speaker 36 to confirm generation of the touch zone to a user and/or to instruct the user to provide an audio recording corresponding to the newly generated touch zone. The microphone 38 is operable to receive the user-provided audio and record it to the memory 40 in association with the most recently generated touch zone. In some embodiments, the memory 40 is partitioned to accept a maximum number of touch zones and/or corresponding audio recordings. In other embodiments, the memory partitions may limit each audio recording to a maximum recording duration. In embodiments, the audio recording can time out upon reaching the maximum recording duration and be stored in the memory. The recording may be stored with reference data (e.g., metadata) identifying the most recently generated touch zone. In other embodiments, the user may intentionally stop the recording by inputting a stop command, such as touching any one of the capacitive sensors encircled by the most recently generated touch zone. An audible confirmation (e.g., a playback of the audio recording or a beep) may be provided to the user upon storage of the audio track corresponding to the most recently generated touch zone.
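By way of example, and not limitation, the sketch below outlines the recording-session logic described above with assumed names and a simulated event stream: recording begins when a zone is created and ends on either a maximum duration or a stop touch inside the newly generated zone, after which an audible confirmation is given. The duration limit and event format are assumptions for this sketch.

```python
# Illustrative sketch of the recording session: start on zone creation,
# stop on timeout or on a touch inside the newly created zone.
MAX_RECORDING_SECONDS = 30          # assumed per-track duration limit

def beep(message):                  # stand-in for audible feedback through the speaker
    print("[speaker] " + message)

def run_recording_session(new_zone, events):
    """events: list of (timestamp_seconds, touched_sensor_or_None) while recording.
    Returns (recorded_duration, stop_reason) to store alongside the zone's metadata."""
    beep("touch zone created; recording")
    for t, touch in events:
        if t >= MAX_RECORDING_SECONDS:
            return (MAX_RECORDING_SECONDS, "timeout")          # recording times out
        if touch is not None and touch in new_zone:
            beep("recording stored")                           # audible confirmation
            return (t, "stop-touch")                           # user stop command
    return (events[-1][0] if events else 0, "input-ended")

zone_58 = {(2, 2), (2, 3)}
print(run_recording_session(zone_58, [(1, None), (5, None), (9, (2, 3))]))
```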
In one embodiment, the processor 42 may be able to detect a user's selection of a user-defined touch zone from a plurality of generated user-defined touch zones. For example, after the user has generated several touch zones and stored audio tracks corresponding thereto, the processor may detect the user's selection of one of the touch zones and initiate playback of the corresponding audio track of the selected touch zone. In some embodiments, the processor may need to be changed into a playback mode, so that touch inputs detected by the capacitive sensor array 22 and/or the processor 42 are not misinterpreted as touch-zone-defining inputs. In such embodiments, in order to enable user-defined touch zone generation and the storage of corresponding audio tracks, the processor may need to be toggled into a recording mode. In embodiments, toggling an external switch, such as the switch module 46 shown in the figures, switches the processor 42 between the recording mode and the playback mode.
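Finally, by way of illustration only, the sketch below models the mode handling and playback selection just described: an external toggle switches between a recording mode and a playback mode, and in the playback mode a touch is resolved to the stored touch zone that contains it so the corresponding audio track can be played. The class name FrameController and its fields are assumptions made for this sketch.

```python
# Illustrative sketch: toggling between recording and playback modes and
# resolving a touch to the user-defined zone whose audio track should play.
class FrameController:
    def __init__(self):
        self.mode = "playback"              # assumed default mode
        self.zones = {}                     # zone_id -> set of (row, col) sensors
        self.tracks = {}                    # zone_id -> stored audio reference

    def toggle_mode(self):                  # e.g., driven by an external switch module
        self.mode = "recording" if self.mode == "playback" else "playback"

    def on_touch(self, sensor):
        if self.mode != "playback":
            return None                     # in recording mode, touches define zones instead
        for zone_id, sensors in self.zones.items():
            if sensor in sensors:
                return self.tracks.get(zone_id)   # play back this track
        return None                         # touch fell outside every defined zone

ctrl = FrameController()
ctrl.zones[1] = {(0, 0), (0, 1)}
ctrl.tracks[1] = "story_for_zone_1.wav"
print(ctrl.on_touch((0, 1)))                # -> story_for_zone_1.wav
```

In this sketch, a playback-mode touch outside every stored zone simply returns no track.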
Turning now to the drawing figures, an example of user-defined touch zone creation is illustrated. The first loop 54 represents a single touch gesture traced over the photo, encircling a first subset of the capacitive sensors 24 of the capacitive sensor array 22. Upon detecting the touch inputs along the first loop 54, the one or more processors 42 generate a first user-defined touch zone 58 that includes the capacitive sensors 24 covered by the loop as well as those encircled thereby.
In some embodiments, upon generation of the first user-defined touch zone 58, the photo frame produces an audible feedback alert to notify the user that the first user-defined touch zone 58 has been generated. Upon generation of the first user-defined touch zone 58 and notification thereof, the photo frame 10 initiates an audio receiving mode to receive a first audio track to correspond with the generated first user-defined touch zone 58. In the audio receiving mode, a microphone is enabled for receiving the first audio track. The first audio track is received by the microphone 38 included in the photo frame 10. The receipt of the first audio track may be terminated by either a time-out or a manual stop command input by the user. In some embodiments, a single touch input detected by at least one of the capacitive sensors 24 in the first user-defined touch zone 58 may terminate the recording mode and disable the microphone 38. In response to the detection of the stop command, the first audio track or an audible feedback alert can be played back to the user as confirmation that the first audio track was properly received by the photo frame 10. In some embodiments, once the first audio track is received, it is stored to the memory 40 along with a reference to the corresponding first user-defined touch zone 58.
Moving forward through the drawing figures, additional user-defined touch zones and corresponding audio tracks may be generated, stored, and subsequently selected in the same manner.
It is within the scope of the invention to consider that zero, one, or more user-defined touch zones may be generated, each associated with a corresponding audio track. Each touch zone and corresponding audio track can be subsequently stored using, for instance, the memory 40 described above.
Turning now to further embodiments, a photo frame may include capacitive sensors disposed on the frame border itself. Similar to the frame assembly 10 described above, such a photo frame includes a microphone, a memory, and a microprocessor; however, the plurality of capacitive sensors is arranged on a presentation face of the frame border and covered by a masking material presenting watermarks, each capacitive sensor having a corresponding memory partition for storing an audio track, as summarized above.
Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the scope of the claims below. Embodiments of the technology have been described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to readers of this disclosure after and because of reading it. Alternative means of implementing the aforementioned can be completed without departing from the scope of the claims below. Certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims.