The following description is provided to assist the understanding of the reader. None of the information provided or references cited is admitted to be prior art. As family members grow older, some of the stories they know are lost. While oral traditions can be maintained, they may lose accuracy over time. Also, some people prefer to hear a story as told by those who witnessed the event, personally knew the story's subjects, etc.
An illustrative device includes a user interface configured to display information and receive user input, a microphone configured to detect sound, and a speaker configured to transmit sound. The device also includes a transceiver configured to communicate with a database and a first user device and a processor operatively coupled to the user interface, the microphone, the speaker, and the transceiver. The processor is configured to receive a first image from the database and receive from the first user device a first message. The first message includes a request for information related to the first image. The processor is also configured to record via the microphone an audio recording that includes information related to the first image, transmit the audio recording to the database, and transmit to the database a request for the first image. The processor is further configured to receive the first image with an identifier of the audio recording and cause the user interface to display the first image and simultaneously cause the speaker to play the audio recording.
An illustrative method includes receiving, by a processor of a first user device, a first image from a database and receiving, by the processor, a first message from a second user device. The first message includes a request for information related to the first image. The method also includes recording, by the processor and via a microphone of the first user device, an audio recording that includes information related to the first image, transmitting the audio recording to the database, and transmitting to the database a request for the first image. The method also includes receiving the first image with an identifier of the audio recording and simultaneously causing, by the processor, a user interface of the first user device to display the first image and causing a speaker of the first user device to play the audio recording.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the following drawings and the detailed description.
The foregoing and other features of the present disclosure will become apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and make part of this disclosure.
Families have been sharing stories orally for generations. It is one of the most common pastimes at family gatherings all over the world. Looking through old photo albums while telling stories provides an organic flow to the storytelling. After a story is shared, however, it typically is not saved beyond the memories of the people who heard it. Also, stories sound and feel different when retold by a secondary source. Great stories, and crucial details within stories, are frequently lost as time passes.
A computerized story capture system provides a digital service that makes it easy to create a high-fidelity digital archive of a family's stories for preservation for the next generation. In some embodiments, the computerized story capture system allows people to browse through their photos while recording audio of the stories as they are organically told. In some embodiments, the computerized story capture system permits the user to tell the story naturally, choosing any photos they wish, instead of recording audio over photos in a pre-ordered sequence such as a slideshow.
In some embodiments, the computerized story capture system enables users to record long-running audio with no time limits and link that audio to photos to add context to the stories being told. Users can play back this audio as recorded (linear playback) or mixed with audio recorded on a different date (non-linear playback).
By way of example, a user could listen to all the audio recorded while the people speaking were looking at a particular image. The playback for a particular photo would play audio from 1:12:00 of a first two-hour recording session, 0:45:00 of a second one-hour recording session, and 00:01:00 of a third three-hour session. In an example embodiment, the audio is stored in a networked storage system, such as “the cloud,” not locally to the playback device.
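By way of a non-limiting illustration, the following sketch shows how such a non-linear playlist could be assembled; the data structures, identifiers, and time values are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """A span of audio within one recorded session (times in seconds)."""
    session_id: str
    start: float       # offset into the session where this photo was on screen
    duration: float

# Hypothetical association table: photo id -> clips recorded while it was shown.
CLIPS_BY_PHOTO = {
    "photo-42": [
        Clip("session-1", start=1 * 3600 + 12 * 60, duration=90.0),  # 1:12:00
        Clip("session-2", start=45 * 60, duration=60.0),             # 0:45:00
        Clip("session-3", start=60.0, duration=120.0),               # 0:01:00
    ],
}

def build_playlist(photo_id: str) -> list:
    """Non-linear playback: stitch together every clip about one photo,
    regardless of which recording session it came from."""
    return CLIPS_BY_PHOTO.get(photo_id, [])

for clip in build_playlist("photo-42"):
    print(f"play {clip.session_id} from {clip.start:.0f}s for {clip.duration:.0f}s")
```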
Some embodiments of a computerized story capture system provide several advantageous features. For example, some embodiments allow a user to quickly download and seek to a specific point in each audio session without incurring the latency and bandwidth costs of downloading the whole clip. Some embodiments avoid holding open communication connections for streaming during recording and playback.
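One conventional way to seek into a stored clip without downloading it in full is an HTTP range request; the sketch below assumes the audio segments are served over HTTP by a server that honors the Range header, and the URL shown is hypothetical.

```python
import requests

# Hypothetical segment URL; assumes the storage server supports range requests.
SEGMENT_URL = "https://storage.example.com/audio/session-1/segment-072.m4a"

def fetch_byte_range(url: str, start: int, length: int) -> bytes:
    """Download only the requested slice of the file, not the whole clip."""
    headers = {"Range": f"bytes={start}-{start + length - 1}"}
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()  # a compliant server answers 206 Partial Content
    return response.content

# For example, fetch ~256 KB starting 1 MB into the segment to begin playback there.
chunk = fetch_byte_range(SEGMENT_URL, start=1_048_576, length=262_144)
```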
In an illustrative embodiment, user devices such as smartphones can be used to send an image to other user devices with a request for information regarding the image. For example, Amy can send an image of her grandfather to Steve, Amy's uncle. The image can be of Amy's grandfather holding a large fish in front of a lake. Steve can receive the image on his smartphone with a request from Amy asking Steve to explain the context of the image. In an illustrative embodiment, Steve can provide a response to Amy in the form of text, such as, “This photo was taken on one of our annual fishing trips to Canada when I was a kid.” In an alternative embodiment, Steve can record, via his smartphone, himself telling a story about the photo. For example, Steve can discuss the trip to Canada, how his dad struggled to get the fish into the boat, and how Steve was so excited that his hands were shaking when he took the photo of his dad, which explains why the photo is blurry. The explanation of the photo (e.g., whether in text format or audio format) can be stored in connection with the image. In an illustrative embodiment, Amy, her sisters, and other family members can access the photo and the explanation at a later time to reminisce, thereby preserving the memory.
As explained in greater detail below, various embodiments described herein provide functions and features that were not previously possible. For example, in some embodiments, a slideshow or photo album is presented to a user that includes a narration of one or more photos. The content of the slideshow or photo album can be accessed electronically virtually anywhere and at any time regardless of the availability of the narrator (e.g., whether the narrator is busy, ill, or deceased).
In some embodiments, a slideshow or photo album with associated audio recordings can provide advantages that were not previously available. For example, audio recordings can allow a person to explain the context and story surrounding a photo that would not be known by simply viewing the photo. Also, prompting a narrator for details about a photo or a story can allow the narrator to remember additional details, stories, or context that the narrator would not have otherwise provided. Recording such content preserves the stories and context in a manner that captures more of the emotion regarding the photo, story, or narrator than a simple photo or text-based explanation can. Additionally, various embodiments described herein make it more convenient and easier for people to record their stories or explanations of photos, thereby increasing, for example, the amount of familial history that is preserved. For example, very few individuals write memoirs of their lives for their family members to cherish, because writing a memoir can be difficult or the individuals are uninterested in doing so. However, various embodiments described herein make it easy for virtually everyone to record stories and their own history. Furthermore, many people enjoy telling stories but do not enjoy writing.
Thus, various embodiments can be used to capture and preserve memories by making replay of the memories more enjoyable. Many people find it easier and more compatible with the human sensory system to watch and listen (e.g., to watch a slideshow of family photos while listening to a family member describe them) than to read a memoir. For example, it can be more enjoyable to listen to a story accompanied by a slideshow of relevant pictures than to sit and read a memoir. Various embodiments make it easier for users to record their memories by simply telling a story related to associated photos.
The user device 105 and the user device 110 can be any suitable device that can communicate with each other, the network 115, the image storage device 120, and the audio storage device 125. For example, the user device 105 or the user device 110 can be a smartphone, a tablet, a personal computer, a laptop, a server, etc. In an illustrative embodiment, the user device 105 and/or the user device 110 include a camera configured to capture an image (e.g., a still image or a video). In an illustrative embodiment, the user device 105 and/or the user device 110 include a microphone configured to capture audio, such as one or more users speaking. The user device 105 and the user device 110 can include user interfaces. For example, the user interfaces can include a display for displaying images or text to the user. The user interfaces can receive user input from, for example, a touch screen, a keyboard, a mouse, etc.
The user device 105 and the user device 110 can communicate with each other and with the image storage device 120 and the audio storage device 125 via the network 115. The network 115 can include any suitable communication network such as a local-area network (LAN), a wide-area network (WAN), the Internet, wireless or wired communications infrastructure, servers, switches, data banks, etc.
The image storage device 120 stores images. In an illustrative embodiment, the image storage device 120 is a server connected to the Internet. In an alternative embodiment, the image storage device 120 is memory of the user device 105 and/or the user device 110. Although the block diagram of FIG. 1 shows a single image storage device 120, in alternative embodiments, multiple image storage devices can be used.
The audio storage device 125 stores audio recordings. In an illustrative embodiment, the audio storage device 125 is a server connected to the Internet. In an alternative embodiment, the audio storage device 125 is memory of the user device 105 and/or the user device 110. Although the block diagram of FIG. 1 shows a single audio storage device 125, in alternative embodiments, multiple audio storage devices can be used.
In some embodiments, image and audio data is stored on one or more servers and transmitted to a user device in segments, thereby reducing the amount of information transmitted to and stored on the user device. In an illustrative embodiment, audio recordings are associated with one or more images. Similarly, in such embodiments, an image can be associated with one or more audio recordings or portions of audio recordings. A database or record can be kept (e.g., on a server of the network 115, on the image storage device 120, on the audio storage device 125, etc.) that maintains such associations between images and audio recordings (or segments of audio recordings). In response to a user device requesting to download an image, a server of the network 115 can check such a database or record to determine associated audio recordings. The server can transmit to the user device the image and a listing of the associated audio recordings. Similarly, in response to a user requesting to play an audio recording, the server can transmit to the user device a listing of the images associated with the audio recording.
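As a minimal sketch of such an association record and lookup, assuming a relational table whose schema and column names are illustrative rather than part of the disclosure:

```python
import sqlite3

# Illustrative association table; the schema and column names are assumptions.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE image_audio (
    image_id  TEXT,
    audio_id  TEXT,
    start_sec REAL,  -- where in the recording this image was on screen
    end_sec   REAL
)""")
db.execute("INSERT INTO image_audio VALUES ('img-7', 'rec-1', 4320.0, 4410.0)")
db.execute("INSERT INTO image_audio VALUES ('img-7', 'rec-2', 2700.0, 2760.0)")

def recordings_for_image(image_id: str) -> list:
    """The listing a server might return alongside a requested image:
    associated audio recordings and the relevant spans within them."""
    cursor = db.execute(
        "SELECT audio_id, start_sec, end_sec FROM image_audio WHERE image_id = ?",
        (image_id,))
    return cursor.fetchall()

print(recordings_for_image("img-7"))  # [('rec-1', 4320.0, 4410.0), ('rec-2', ...)]
```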
In the embodiment illustrated in FIG. 2, a recording session 200 includes one or more audio files 205 and associated metadata 210. As shown in FIG. 2, the metadata 210 can link portions of the recorded audio to the images that were displayed while the audio was captured.
In an illustrative embodiment, metadata associated with the audio recording can include an indication of who is speaking. For example, an audio recording can include multiple people speaking about a photo. The metadata can be used to indicate who is speaking at any particular instance. A user can add or edit the metadata to include names of individuals and when individuals begin and/or stop speaking. In an illustrative embodiment, during the recording, a user can select one of a plurality of individuals to indicate who is speaking. The selection of the individuals can be stored as metadata of the audio recording. During replay of the audio recording, an indication of who is speaking (e.g., who was selected during the recording) can be displayed.
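The following sketch shows one way such speaker metadata could be stored and queried during replay; the mark format and names are assumptions, not the disclosure's data model.

```python
import bisect

# Hypothetical speaker-change metadata: (offset_seconds, speaker_name) pairs,
# appended each time a different speaker is selected during recording.
speaker_marks = [
    (0.0, "Grandpa"),
    (42.5, "Aunt JoAnn"),
    (118.0, "Grandpa"),
]

def speaker_at(t: float) -> str:
    """During replay, look up who was speaking at playback time t."""
    times = [offset for offset, _ in speaker_marks]
    index = bisect.bisect_right(times, t) - 1
    return speaker_marks[max(index, 0)][1]

assert speaker_at(60.0) == "Aunt JoAnn"  # displayed while the audio plays
```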
In an illustrative embodiment, metadata associated with screen touches can be stored with the audio recording. For example, while recording, the user device tracks where a user taps or gestures on the photo during the recording. The user device records the places where the user has tapped or interacted with a displayed image. During playback, the touches or interactions with the touch screen can be displayed. In some embodiments, recognized gestures such as shapes cause a function to be performed, such as displaying a graphic. Interactions with the image can include zooming in or out, circling faces, drawing lines, etc.
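A minimal sketch of such a timed touch-event log, with hypothetical class and field names, might look as follows:

```python
import json
import time

class TouchLog:
    """Hypothetical touch-event log captured while recording: each entry stores
    the offset into the recording and normalized screen coordinates."""

    def __init__(self, recording_start: float):
        self.t0 = recording_start
        self.events = []

    def on_touch(self, x: float, y: float, gesture: str = "tap") -> None:
        """Called by the UI layer when the user taps or gestures on the photo."""
        self.events.append(
            {"t": time.time() - self.t0, "x": x, "y": y, "gesture": gesture})

    def to_metadata(self) -> str:
        """Serialized alongside the audio so playback can redraw each touch
        at the same moment it originally occurred."""
        return json.dumps(self.events)

log = TouchLog(recording_start=time.time())
log.on_touch(0.31, 0.62)                  # tap near a face in the photo
log.on_touch(0.50, 0.50, gesture="zoom")  # zoom gesture on the center
print(log.to_metadata())
```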
In an illustrative embodiment, along with the audio recording, the user device can record a video of the user during the audio recording. The video can be played back during the playback of the audio recording. For example, a viewing window can be displayed for the video during playback while the image about which the subject is talking is simultaneously displayed. In an illustrative embodiment, the viewing window is displayed on the screen while the audio and video are recording. The user can move the viewing window around the screen during recording (e.g., to view a portion of the image that is obstructed by the viewing window). The location of the viewing window during the audio recording can be recorded and played back during the audio playback. Thus, the viewer of the playback can see the same screen that was displayed during the recording.
In an illustrative embodiment, the user device can detect that during a recording, speaking has stopped. After a predetermined threshold of not detecting speech (e.g., ten seconds, twenty seconds, thirty seconds, one minute, ten minutes, etc.), the application can prompt the user to end the recording session (or continue the session). In an alternative embodiment, after a predetermined threshold of not detecting speech, a suggested question can be displayed to the user to facilitate explanation or storytelling. For example, a selected image during a recording session can be tagged with Grandpa and Aunt JoAnn. After a predetermined threshold of silence, a pop-up display can ask, “What was Grandpa doing in this picture?” or “How old was Aunt JoAnn in this picture?” The questions can be selected based on the tags of an image, dates of when the image was captured, etc.
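The silence-detection logic described above could be sketched as follows, assuming frames of PCM samples from the microphone; the thresholds and the frame format are illustrative assumptions.

```python
SILENCE_RMS = 500.0      # energy below which a frame counts as silence (assumed)
SILENCE_SECONDS = 20.0   # prompt the user after this much continuous silence
FRAME_SECONDS = 0.1      # duration of each microphone frame (assumed)

def rms(samples: list) -> float:
    """Root-mean-square energy of one frame of PCM samples."""
    if not samples:
        return 0.0
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def prompts(frames, tags=("Grandpa", "Aunt JoAnn")):
    """Yield a suggested question, built from the image's tags, once speech
    has stopped for the configured threshold."""
    quiet = 0.0
    for frame in frames:
        quiet = quiet + FRAME_SECONDS if rms(frame) < SILENCE_RMS else 0.0
        if quiet >= SILENCE_SECONDS:
            yield f"What was {tags[0]} doing in this picture?"
            quiet = 0.0
```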
In an illustrative embodiment, a user device records the session 200 and breaks it into multiple portions, namely the audio files 205 and associated metadata 210, as shown in FIG. 2.
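As a sketch of this segmentation, assuming raw PCM audio and an arbitrary five-minute segment length (both assumptions, not requirements of the disclosure):

```python
SEGMENT_SECONDS = 300      # five-minute audio files (an arbitrary choice)
BYTES_PER_SECOND = 32_000  # e.g., 16-bit mono PCM at 16 kHz

def split_session(pcm: bytes) -> list:
    """Break one long recording session into fixed-length segment files."""
    step = SEGMENT_SECONDS * BYTES_PER_SECOND
    return [pcm[i:i + step] for i in range(0, len(pcm), step)]

segments = split_session(b"\x00" * (BYTES_PER_SECOND * 700))  # an 11:40 session
print(len(segments), "segments")  # -> 3 (two full segments plus a remainder)
```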
In the embodiment illustrated in FIG. 2, the session 200 can be played back by retrieving the audio files 205 in sequence, along with the associated metadata 210.
In some embodiments, playback of the session 200 is not a full playback of the recordings from beginning to end. For example, a user can select a midway point at which to begin playback. For instance, the user can select an image corresponding to particular metadata 210, or the user can select a point along a playback timeline.
In an illustrative embodiment, a user interface display is provided on a user device to allow the user to navigate audio stories without leaving the context of the photos themselves. For example, the computerized story capture system includes a playback screen that puts linear progression horizontally on the page and uses vertical space to represent other stories that are available within the current context.
In the display shown in FIG. 3, the linear progression of a recording runs horizontally across the screen, while other stories associated with the currently displayed photo are stacked vertically. In the embodiment illustrated in FIG. 3, the user can move horizontally to progress through a recording and vertically to move between the stories available for the displayed photo.
In an illustrative embodiment, the various content (e.g., images, videos, audio recordings) can be organized in multiple ways to allow a user to navigate through the content. For example, the content can be found by selecting the person who uploaded the image or an album that the content is associated with. For example, the display of FIG. 4 shows content organized by the person who uploaded the content and by album. The display of FIG. 5 shows the images of a selected album.
In an illustrative embodiment, the various images in an album can be displayed using keywords that a user associates with images, locations of where the images were taken, people tagged in the images, dates of when the images were taken, etc. For example, images can be organized based on date ranges, such as decades (e.g., 1960s, 1970s, 1980s, etc.). In an alternative embodiment, the various images are organized by a popularity rating (e.g., based upon the number of times each image is viewed or downloaded). In an illustrative embodiment, images that have associated recordings can be marked as such. For example, a speech bubble can be displayed in the corner of the thumbnail of an image in an album.
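The grouping of images by decade described above could be sketched as follows; the image records and date field are hypothetical.

```python
from collections import defaultdict

# Hypothetical image records; only the capture year is used here.
images = [
    {"id": "img-1", "year": 1968},
    {"id": "img-2", "year": 1974},
    {"id": "img-3", "year": 1979},
]

by_decade = defaultdict(list)
for image in images:
    by_decade[f"{image['year'] // 10 * 10}s"].append(image["id"])

print(dict(by_decade))  # {'1960s': ['img-1'], '1970s': ['img-2', 'img-3']}
```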
As explained above with regard to FIG. 2, a recording session can be broken into multiple audio files with associated metadata that link the audio to images.
The user device 605 is any suitable user device, such as the user device 105 or the user device 110. The server 610 can be any suitable computing device, such as a computing device or server associated with the network 115. The storage device 615 can be any suitable storage device, such as the image storage device 120 and/or the audio storage device 125.
The sequence diagram of FIG. 6 illustrates communications among the user device 605, the server 610, and the storage device 615 for recording audio and storing it remotely.
In an operation 630, the user device 605 records audio. In an operation 635, the recorded audio is encoded. In an illustrative embodiment, encoding the audio includes breaking the recorded session (e.g., the session 200) into segments (e.g., the audio files 205). In an illustrative embodiment, encoding the audio includes formatting an audio file and/or encrypting the audio file. In an operation 640, the user device 605 transmits to the server 610 the audio file(s) and any associated metadata (e.g., the metadata 210).
In an operation 645, the server 610 stores the received audio in a database. In an operation 650, a unique identifier is created for the received audio. In an illustrative embodiment, the unique identifier for the received audio is stored in the database with an indication of associated images or metadata. In an illustrative embodiment, the unique identifier identifies the received audio file among other received audio files.
In an operation 655, the recorded audio is transmitted to the storage device 615 for storage. In an illustrative embodiment, the recorded audio is stored in the storage device 615 with the unique identifier such that the server 610 or the user device 605 can use the unique identifier to request the recorded audio from the storage device 615. In an operation 660, the server 610 transmits a response to the user device 605 that includes a reference to the storage location of the recorded audio. In an illustrative embodiment, the reference includes the unique identifier.
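The server-side flow of operations 645 through 660 could be sketched as follows; the in-memory dictionaries stand in for the server 610's database and the storage device 615, and all names are illustrative assumptions.

```python
import hashlib
import uuid

DATABASE = {}  # stands in for the server 610's database
STORAGE = {}   # stands in for the storage device 615

def handle_upload(audio_bytes: bytes, metadata: dict) -> dict:
    """Store uploaded audio, mint a unique identifier, and return a reference."""
    audio_id = str(uuid.uuid4())                          # operation 650
    DATABASE[audio_id] = {                                # operation 645
        "metadata": metadata,
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
    }
    STORAGE[audio_id] = audio_bytes                       # operation 655
    return {"audio_id": audio_id,                         # operation 660
            "location": f"/audio/{audio_id}"}

response = handle_upload(b"...encoded audio...", {"image_id": "img-7"})
print(response["location"])
```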
In an illustrative embodiment, the user device 605 does not store the recorded audio after the audio is stored on the storage device 615. Thus, the recorded audio does not require long-term storage space in memory of the user device 605. Further, other user devices 605 (e.g., user devices 605 of friends or family) can access the recorded audio from the storage device 615. In an illustrative embodiment, the recorded audio can be converted into a text file. For example, speech recognition can be used to convert the recorded audio to text. The text associated with the recorded audio can be stored in the storage device 615. In an illustrative embodiment, the text of the recorded audio can be searchable by a user of the user device 605 to locate specific audio clips. In an alternative embodiment, the text can be displayed via the user device 605, such as in lieu of or along with a playback of the recorded audio.
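As one possible approach to producing the searchable transcript described above, the open-source SpeechRecognition package could be used; the disclosure does not name a particular speech-recognition engine, so this choice is an assumption.

```python
# pip install SpeechRecognition
import speech_recognition as sr

def transcribe(wav_path: str) -> str:
    """Convert a recorded audio file to text for storage and search."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)      # read the whole file
    return recognizer.recognize_google(audio)  # send to a recognition service

# text = transcribe("recording.wav")  # stored alongside the audio for search
```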
In an illustrative embodiment, a user can request that another user input an annotation to an image. The annotation can be in the form of a short text answer, a long text answer, an audio recording, a video recording, etc. The annotation can be stored along with the image to be recalled later by either user or another user.
In an illustrative embodiment, a user can select a photo and transmit the photo to another user's user device for comment and/or annotation.
In an illustrative embodiment, after the user selects a question to ask related to the photo 900, the user can be prompted to select another user to send the selected question to.
In an illustrative embodiment, multiple users can contribute to the creation of a story. For example, one user can record a story about a photo, and other users can add their own recordings, comments, or details to the same photo.
Screen shots of additional illustrative displays, including an illustrative user profile, are shown in the accompanying figures.
In an illustrative embodiment, one or more photos can be memorialized in a physical medium while maintaining access to associated recordings.
For example, at a wedding, an album can contain pictures of the people of the wedding party as children. Attendees to the wedding can access the album and provide recorded content (or textual messages) for one or more of the pictures. Attendees can capture their own pictures and add them to the wedding album, for example, with audio recordings or textual messages. A printed wedding album can contain some or all of the pictures of the digital album with QR codes associated with pictures for which audio was recorded or messages were submitted.
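Generating a printable QR code that links an album page to the recordings for a picture could be sketched as follows; the URL scheme and the choice of the qrcode package are assumptions.

```python
# pip install qrcode[pil]
import qrcode

def qr_for_picture(picture_id: str, out_path: str) -> None:
    """Produce a printable QR image that resolves to a picture's recordings."""
    url = f"https://albums.example.com/pictures/{picture_id}/recordings"
    image = qrcode.make(url)  # returns a PIL image
    image.save(out_path)

qr_for_picture("wedding-017", "wedding-017-qr.png")
```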
An illustrative embodiment can be used to capture stories by non-associated people, such as non-family members, nurses, staff, etc. For example, a woman in a nursing home can have one or more conditions that affect the woman's memory. However, the woman may have lucid moments in which she can remember events from her past. In an illustrative embodiment, a nurse or staff member of the nursing home can use an embodiment of the present disclosure to record a story told by the woman (e.g., during a lucid moment). In an illustrative embodiment, the nurse or staff member can use a user device such as a smartphone with an application installed that records the woman's story. In such an embodiment, the application can allow the nurse or staff member to record a story, but not allow the nurse or staff member to replay, delete, and/or edit the recording. For example, in some instances, family members may wish to have control over the recordings, not the nurse or staff member.
One or more of the embodiments described herein can contain an administrator mode that allows users such as nurses to record and store content to multiple accounts. For example, a nurse may be responsible for twenty patients. The nurse may have access to accounts associated with each of the twenty patients. The access of the nurse can be limited based on the preferences of each patient (or their family member). For example, the nurse may have the ability to record content and store the content, but not have the ability to delete content.
In an illustrative embodiment, replaying stories can be used as a therapy tool. For example, patients with one or more memory conditions (e.g., dementia or Alzheimer's disease) can be routinely upset or distressed because they are confused (e.g., caused by the memory condition such as short-term memory loss). For some patients, retelling of certain stories can be used to calm the patients. For example, telling a particular patient a story related to a fond memory of the patient may distract the patient from his or her concern (e.g., caused by short-term memory loss) to focus on the story, which the patient still remembers. Such an embodiment can be used by nursing or staff members or by family members (e.g., to remind the patient of who the person is).
Such embodiments can be used in any suitable context. For example, a parent can record a story such that another caretaker (e.g., a nurse while the child is in the hospital, a staff member of a daycare, another parent while the child is at a sleep-over, etc.) can replay the story to calm the child down (e.g., if the child is homesick or is missing his or her parents). In other examples, the replaying of stories can be used for any other therapeutic or clinical purpose. In such an embodiment, the nursing or staff members may have access to replay or view content, but may not have access to add or delete content. In alternative embodiments, the nurse or staff member can have any suitable amount or degree of control or privileges over the account.
In an illustrative embodiment, the memory 2405 is an electronic holding place or storage for information so that the information can be accessed by the processor 2410. The memory 2405 can include, but is not limited to, any type of random access memory (RAM), any type of read only memory (ROM), any type of flash memory, etc., as well as magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips, etc.), optical disks (e.g., compact disks (CDs), digital versatile disks (DVDs), etc.), smart cards, etc. The computing device 2400 may have one or more computer-readable media that use the same or a different memory media technology. The computing device 2400 may have one or more drives that support the loading of a memory medium such as a CD, a DVD, a flash memory card, etc.
In an illustrative embodiment, the processor 2410 executes instructions. The instructions may be carried out by a special purpose computer, logic circuits, or hardware circuits. The processor 2410 may be implemented in hardware, firmware, software, or any combination thereof. The term “execution” refers, for example, to the process of running an application or carrying out the operation called for by an instruction. The instructions may be written using one or more programming languages, scripting languages, assembly languages, etc. The processor 2410 executes an instruction, meaning that it performs the operations called for by that instruction. The processor 2410 operably couples with the user interface 2420, the transceiver 2415, the memory 2405, etc. to receive, to send, and to process information and to control the operations of the computing device 2400. The processor 2410 may retrieve a set of instructions from a permanent memory device such as a ROM device and copy the instructions in an executable form to a temporary memory device that is generally some form of RAM. An illustrative computing device 2400 may include a plurality of processors that use the same or a different processing technology. In an illustrative embodiment, the instructions may be stored in the memory 2405.
In an illustrative embodiment, the transceiver 2415 is configured to receive and/or transmit information. In some embodiments, the transceiver 2415 communicates information via a wired connection, such as an Ethernet connection, one or more twisted pair wires, coaxial cables, fiber optic cables, etc. In some embodiments, the transceiver 2415 communicates information via a wireless connection using microwaves, infrared waves, radio waves, spread spectrum technologies, satellites, etc. The transceiver 2415 can be configured to communicate with another device using cellular networks, local area networks, wide area networks, the Internet, etc. In some embodiments, one or more of the elements of the computing device 2400 communicate via wired or wireless communications. In some embodiments, the transceiver 2415 provides an interface for presenting information from the computing device 2400 to external systems, users, or memory. For example, the transceiver 2415 may include an interface to a display, a printer, a speaker, etc. In an illustrative embodiment, the transceiver 2415 may also include alarm/indicator lights, a network interface, a disk drive, a computer memory device, etc. In an illustrative embodiment, the transceiver 2415 can receive information from external systems, users, memory, etc.
In an illustrative embodiment, the user interface 2420 is configured to receive and/or provide information from/to a user. The user interface 2420 can be any suitable user interface. The user interface 2420 can be an interface for receiving user input and/or machine instructions for entry into the computing device 2400. The user interface 2420 may use various input technologies including, but not limited to, a keyboard, a stylus and/or touch screen, a mouse, a track ball, a keypad, a microphone, voice recognition, motion recognition, disk drives, remote controllers, input ports, one or more buttons, dials, joysticks, etc. to allow an external source, such as a user, to enter information into the computing device 2400. The user interface 2420 can be used to navigate menus, adjust options, adjust settings, adjust display, etc.
The user interface 2420 can be configured to provide an interface for presenting information from the computing device 2400 to external systems, users, memory, etc. For example, the user interface 2420 can include an interface for a display, a printer, a speaker, alarm/indicator lights, a network interface, a disk drive, a computer memory device, etc. The user interface 2420 can include a color display, a cathode-ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, etc.
In an illustrative embodiment, the power source 2425 is configured to provide electrical power to one or more elements of the computing device 2400. In some embodiments, the power source 2425 includes an alternating power source, such as available line voltage (e.g., 120 Volts alternating current at 60 Hertz in the United States). The power source 2425 can include one or more transformers, rectifiers, etc. to convert electrical power into power useable by the one or more elements of the computing device 2400, such as 1.5 Volts, 8 Volts, 12 Volts, 24 Volts, etc. The power source 2425 can include one or more batteries.
In an illustrative embodiment, the computing device 2400 includes a sensor 2430. In an illustrative embodiment, the sensor 2430 can include an image capture device. In some embodiments, the sensor 2430 can capture two-dimensional images. In other embodiments, the sensor 2430 can capture three-dimensional images. The sensor 2430 can be a still-image camera, a video camera, etc. The sensor 2430 can be configured to capture color images, black-and-white images, filtered images (e.g., a sepia filter, a color filter, a blurring filter, etc.), images captured through one or more lenses (e.g., a magnification lens, a wide angle lens, etc.), etc. In some embodiments, sensor 2430 (and/or processor 2410) can modify one or more image settings or features, such as color, contrast, brightness, white scale, saturation, sharpness, etc. In another example, the sensor 2430 is a device attachable to a smartphone, tablet, etc. In yet another example, the sensor 2430 is a device integrated into a smartphone, tablet, etc. In an illustrative embodiment, the sensor 2430 can include a microphone. The microphone can be used to record audio, such as one or more people speaking.
In an illustrative embodiment, any of the operations described herein can be implemented at least in part as computer-readable instructions stored on a computer-readable memory. Upon execution of the computer-readable instructions by a processor, the computer-readable instructions can cause a node to perform the operations.
The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable,” to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” Further, unless otherwise noted, the use of the words “approximate,” “about,” “around,” “substantially,” etc., mean plus or minus ten percent.
The foregoing description of illustrative embodiments has been presented for purposes of illustration and of description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
The present application claims priority to U.S. Provisional Application No. 62/132,401, filed Mar. 12, 2015, which is incorporated herein by reference in its entirety.
The present application is the national stage of International Application No. PCT/US16/22198, filed Mar. 11, 2016.