1. Field of the Invention
The embodiments of the present invention relate generally to the area of adaptive devices designed to aid individuals having one or more impairments, such as, for example dyslexia or low vision. More specifically, embodiments of the invention relate to systems and devices that are capable of transforming printed text obtained from a variety of sources or media formats into more user-accessible forms.
2. Background Information
The conveyance of information through text-based media is a ubiquitous phenomenon in society. For some, however, acquiring the information contained in text-based media can be a daunting if not impossible task. Such individuals include, for example, those having learning difficulties, blindness, and visual impairments such as those arising from diabetic retinopathy, cataracts, age-related macular degeneration (AMD), and glaucoma.
Recent studies indicate that at least one in twenty individuals has dyslexia, a reading disability and the most common type of recognized learning disability (LD), and at least one in ten is affected with other forms of LD that limit the ability to read or write symbols. Reading-related LDs are genetic neurophysiological differences that affect a person's ability to perform linguistic tasks such as reading and spelling. A disability can vary across a population, exhibiting varying degrees of severity and amenability to remediation. The precise cause or pathophysiology of LDs such as dyslexia remains a matter of contention. Current efforts to remediate reading difficulties, such as dyslexia, fall short of remedying the difficulty in a large proportion of affected individuals. Further, the lack of systematic testing for the disability leaves the condition undetected in many adults and children.
In addition to the LD population, there is a large and growing population of people with poor or no vision. As the populations in many countries age, the low/no vision population is increasing. Difficulties in reading can interfere with performance of simple tasks and activities, and deprive affected individuals of access to important text-based information. Thus a need exists to provide individuals who are unable to read well or at all with the ability to garner information from text-based sources.
Embodiments of the invention provide methods, systems, and apparatuses, including graphical user interfaces and output capabilities, for capturing and presenting text to a user. Embodiments of the invention facilitate the capture of text from a source, such as for example, a magazine, a book, a restaurant menu, a train schedule, a posted sign or advertisement, an invoice or bill, a package of food or clothing, or a textbook and the presentation of the textual content in a user-accessible format, such as for example, an auditory format, a Braille format, or a magnified textual format, in the original language or a different language. As described more fully herein, embodiments of the invention may also optionally provide additional features, such as for example, those found in a personal assistant, such as calendaring, email, and note recording functionalities, connectivity to the internet or a cellular network, library and archiving functionalities, direct connectivity with a computer, speaker system, Braille output or input system, or external camera.
A display or a user interface screen may comprise any suitable display unit for displaying information appropriate for a mobile computing device. A display may be used by the invention to display, for example, file trees and text, and to assist with input into the text capture device. Input devices include, for example, typical device controls, such as volume controls, recording and playback controls, and navigation buttons. Examples of input/output (I/O) devices include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, a speaker, Braille input pads or boards, cameras, and voice recognition devices and software. Information entered by way of a microphone may be digitized by a voice recognition device. Navigation buttons can comprise an upward navigation button, a downward navigation button, a leftward navigation button, and a rightward navigation button. Navigation buttons also may comprise a select button to execute a particular function.
Embodiments of the invention provide mobile or portable computing devices. A mobile computing device may refer to any device having a processing system, a memory, and a mobile power source or supply, such as one or more batteries or solar cells, for example. Although a number of functionalities are discussed herein, embodiments of the present invention are not so limited and additional functionalities are possible, such as those commonly available with a laptop computer, ultra-laptop computer, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, smart phone, pager, one-way pager, two-way pager, messaging device, data communication device, and/or a music player (such as an MP3 player). Such functionalities include, for example, an ability to store, catalogue, and play music, to send and/or receive text messages, to send and receive email, to store and access lists of contacts and important personal information, and an ability to calendar important dates and appointments. Additional functionalities include the ability to connect to the internet with either a hard connection or a wireless connection. Access to the internet allows the device, for example, to search the internet to determine if the text being translated has already been decoded into an audio format.
In general, transformation, deciphering, and decoding mean capturing text, language, and/or forms of communication in one format or medium and storing or outputting it in a different medium or format. For example, a device according to the present invention may capture text from a printed source, such as, for example, a book, a magazine, a food package, a letter, a bill, a form of monetary currency, a class syllabus, a sign, a notice, a durable good package, or an instruction sheet or pamphlet, and translate it into a digital file stored in memory, an auditory signal, such as a voice that reads the text contained in the printed source to a user, an output on a computer screen, or a Braille output device. Additional output devices include, for example, an internal speaker, an amplifier, or a jack or connection for headphones or for external speakers. Optionally, the decoding and transformation function may be further augmented by software that, for example, is capable of recognizing a particular type of document that is frequently referred to, such as, for example, a bus schedule, a restaurant menu, a check or bank draft, or monetary forms (currency). In the case of currency, the ability to recognize the object as currency enables the device to quickly inform the user about the most relevant information: the currency's denomination (e.g., a one dollar bill or a twenty dollar bill) or the value on a bank draft. The object recognition may occur as a result of a user input or as a result of a software-driven object recognition process. In general, the ability to recognize an object allows the text transformation device to tailor its output to immediately include the information that is most relevant to the user. For example, in the case of currency, the user may desire to quickly know the denomination rather than waiting for the device to capture all the text and output the text found on the currency (object) in a linear fashion.
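The object-aware prioritization described above can be sketched as a small dispatch step. This is a minimal illustration, not the patented method; the function names and the heuristic of treating the largest numeral on recognized currency as its denomination are assumptions for the example.

```python
# Hypothetical sketch of object-aware output ordering: when the captured
# object is recognized as currency, announce the denomination first
# instead of reading all recognized text linearly.
def prioritized_output(object_type, ocr_words):
    if object_type == "currency":
        # Assumption for illustration: the denomination is the largest
        # bare numeral found among the OCR'd words.
        numerals = [w for w in ocr_words if w.isdigit()]
        if numerals:
            return f"{max(numerals, key=int)} dollar bill"
    # Default behavior: linear read-out of the recognized text.
    return " ".join(ocr_words)
```

In a full system the same dispatch point could route bank drafts, menus, or schedules to their own tailored read-out order.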
Image recognition capabilities are available through commercial software solutions, for example. Further, the device may be equipped with capabilities that allow the user to scan larger print or bold type in a text to search for articles or information of interest, or to quickly scan a label to look for certain nutritional information. For example, scanning for larger, differentially colored, italicized, or bold-faced type allows the user to quickly jump to sections of a menu, newspaper, or magazine that the user might be interested in, rather than having the device output text content in a linear fashion.
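One way such section-jumping could work is to treat OCR'd words whose glyph height is well above the page median as headings. This is a sketch under assumed inputs (each word paired with its measured height in pixels), not the actual recognition pipeline.

```python
# Hypothetical heading detector: words whose height exceeds the page
# median by a given ratio are treated as section headings the user can
# jump between, instead of hearing the page read linearly.
def section_headings(words, ratio=1.5):
    # words: list of (text, glyph_height_px) pairs from OCR layout data.
    heights = sorted(h for _, h in words)
    median = heights[len(heights) // 2]
    return [t for t, h in words if h >= ratio * median]
```

The same test could key on boldness or color instead of height, matching the differentially colored and bold-faced cues mentioned above.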
Embodiments of the invention comprise an imaging system that is configured to capture an image digitally for subsequent OCR processing of text contained within the image. As used herein, the term capture or capturing refers to capturing a video stream or photographing an image and is to be distinguished from scanning across the surface of an object to capture an image of the object. For example, scanning can involve placing the printed material to be recorded flat against a glass surface and having a scanner behind the glass scan the object, or drawing a scanning device across the surface of a page. Advantages associated with capturing a text-based image via digital photography, as opposed to scanning, include greater ease of use and adaptability. Unlike with a scanner, the imaging device need not be placed flush against the surface to be imaged, thereby allowing the user the freedom and mobility to hold the imaging device at a distance from the surface, e.g., at a distance that is greater than a foot from the page of a book. Thus, such an imaging device is adaptable enough for imaging uneven surfaces such as a pill bottle or an unfolded restaurant menu, as well as substantially planar surfaces such as a street sign. Accordingly, some embodiments of the invention can capture images from both planar and non-planar objects. Capturing the image in such a manner allows for rapid acquisition of the digital images and allows for automated or semi-automated page turning.
Optionally, the imaging system includes a power source (such as, for example, one or more batteries, alternating or direct current acceptance capability, and solar cells), a plurality of lenses, a level detection mechanism, a zoom mechanism, a mechanism for varying focal length, an auto focus mechanism, a mechanism for varying aperture, a video capture unit, such as those employed in closed-circuit television cameras, and a shutter. An optional color sensor within the text capture device allows for the sensing of color present in an image that is captured. The sensing of color allows additional information to be obtained; for example, if the title and headings or certain features of a textual display are present in different colors, the text can be more rapidly sorted and desired information presented to the user. Further possible components include, for example, an LED strobe, to facilitate image capture at distances less than 12 inches from the device, a xenon lamp, to facilitate image capture at distances greater than 12 inches from the device, headphone jacks and headphones, stereo speakers, connectivity options, such as USB OTG (universal serial bus on the go), and docking, remote control connections, targeting and aiming lights (that, for example, indicate with light the image capture area so that the user can align the device to capture the desired text), microphones, and power cables.
To optimize the quality of the captured image, embodiments optionally include a level detection mechanism that determines whether the imaging device is level to the surface being imaged. Any level detection mechanisms known in the art may be used for this purpose. The level detection mechanism communicates with an indicator that signals to the user when the device is placed at the appropriate angle (or conversely, at an inappropriate angle) relative to the surface being imaged. The signals employed by the indicator may be visual, audio, or tactile. Some embodiments include at least one automatically adjustable lens and or mirror that can tilt at different angles within the device so as to be level with the surface being imaged and compensate for user error.
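The indicator logic described above can be sketched as a simple mapping from a tilt reading to a user signal. The sensor source, tolerance value, and signal names here are illustrative assumptions, not specifics from the disclosure.

```python
# Hypothetical level-indicator logic: a tilt reading (degrees away from
# parallel to the surface being imaged) is mapped to a signal that can
# be rendered as a visual, audio, or tactile cue.
def level_signal(tilt_degrees, tolerance=5.0):
    if abs(tilt_degrees) <= tolerance:
        return "steady"        # device is level: e.g., one confirmation tone
    elif tilt_degrees > 0:
        return "tilt-down"     # prompt the user to lower the top edge
    else:
        return "tilt-up"       # prompt the user to raise the top edge
```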
To avoid image distortion at close range, embodiments may optionally include a plurality of lenses, one of which is a MACRO lens, as well as a zoom mechanism, such as digital and/or optical zoom. In certain embodiments, the device includes a lens operating in Bragg geometry, such as a Bragg lens. Embodiments can include a mechanism for varying the focal length and a mechanism for varying the aperture within predetermined ranges to create various depths of field. The image system is designed to achieve broad focal depth for capturing text-based images at varying distances from the imaging device. Thus, the device is adaptable for capturing objects ranging from a street sign to a page in a book. The minimum focal depth of the imaging device corresponds to an f-stop of 5.6, according to certain embodiments. In some embodiments, the imaging device has a focal depth of f-stop of 10 or greater.
In certain embodiments, the imaging device provides a shutter that is either electrical or mechanical, and further provides a mechanism for adjusting the shutter speed within a predetermined range. In some embodiments, the imaging device has a minimum shutter speed of 1/60th of a second. In other embodiments, the imaging device has a minimum shutter speed of 1/125th of a second. Certain embodiments include a mechanism for varying the ISO speed of the imaging device for capturing text-based images under various lighting conditions. In some embodiments, the imaging device includes an image stabilization mechanism to compensate for a user's unsteady positioning of the imaging device.
In addition to the one-time photographic capture model, some embodiments further include a video unit for continuous video capture. For example, a short clip of the image can be recorded using the video capture unit and processed to generate one master image from the composite of the video stream. Thus, an uneven surface, e.g., an unfolded newspaper which is not lying flat, can be recorded in multiple digital video images and accurately captured by slowly moving the device over the surface to be imaged. A software component of the imaging system can then build a final integrated composite image from the video stream for subsequent OCR processing to achieve enhanced accuracy. Similarly, a streaming video input to the imaging system can be processed for subsequent OCR processing. Software that performs the above-described function is known in the art. Accordingly, both planar and non-planar objects can be imaged with a video unit employing continuous video capture.
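A first step in building such a composite is typically to rank the clip's frames by focus quality and keep only the sharpest candidates. The sketch below illustrates one common focus measure (variance of a Laplacian response); it is an assumed building block, not the compositing software referenced above.

```python
import numpy as np

def sharpness(frame):
    # Variance of a simple 4-neighbor Laplacian response; blurry frames
    # have weaker edges and therefore score lower.
    lap = (-4 * frame
           + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
    return float(lap.var())

def best_frames(stream, keep=3):
    # Rank the frames of a captured clip by focus score and keep the
    # indices of the sharpest ones as candidates for compositing
    # before OCR.
    scored = sorted(enumerate(stream), key=lambda f: sharpness(f[1]),
                    reverse=True)
    return [i for i, _ in scored[:keep]]
```

The selected frames would then be registered and merged into the master image by stitching software of the kind noted as known in the art.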
Additionally, some embodiments include one or more light sources for enhancing the quality of the image captured by the device. Light sources known in the art can be employed for such a purpose. For example, the light source may be a FLASH unit, an incandescent light, or an LED light. In some embodiments, the light source employed optimizes contrast and reduces the level of glare. In one embodiment, the light source is specially designed to direct light at an angle that is not perpendicular to the surface being imaged for reducing glare.
In some embodiments, the image capturing system further includes a processor and software-implemented image detectors and filters that function to optimize certain visual parameters of the image for subsequent OCR processing. To optimize the image, especially images that include colored text, for subsequent OCR processing, some embodiments further include a color differential detection mechanism as well as a mechanism for adjusting the color differential of the captured image.
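One simple form of such conditioning is to collapse a color capture to the single channel with the greatest ink/background separation and stretch its contrast. This is a minimal sketch of the idea; the channel-selection heuristic is an assumption, not the disclosed filter design.

```python
import numpy as np

def condition_for_ocr(rgb):
    # Hypothetical pre-OCR conditioning: pick the RGB channel whose
    # pixel values vary the most (largest color differential between
    # text and background), then stretch its contrast to [0, 1].
    channel = max(range(3), key=lambda c: rgb[..., c].std())
    gray = rgb[..., channel].astype(float)
    lo, hi = gray.min(), gray.max()
    if hi == lo:
        return np.zeros_like(gray)   # featureless capture; nothing to stretch
    return (gray - lo) / (hi - lo)
```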
In some embodiments, the imaging system further includes CMOS image sensor cells. To facilitate users with unsteady hands and avoid image distortion, handheld embodiments further include an image stabilization mechanism, known by those of ordinary skill in the art.
The system can include a user interface comprising a number of components such as volume control, speakers, headphone/headset jack, microphone, and display. The display may be a monochromatic or color display. In some embodiments, an LCD display having a minimum of 640×480 resolution is employed. The LCD display may also be a touch screen display. According to certain embodiments, the user interface includes a voice command interface by which the user can input simple system commands to the system. In alternative embodiments, the system includes a Braille display to accommodate visually impaired users. In still other embodiments, the Braille display is a peripheral device in the system.
OCR systems and text reader systems are well-known and available in the art. Examples of OCR systems include, without limitation, FineReader (ABBYY), OmniPage (Scansoft), Envision (Adlibsoftware), Cuneiform, PageGenie, Recognita, Presto, TextBridge, amongst many others. Examples of text reader systems include, without limitation, Kurzweil 1000 and 3000, Microsoft Word, JAWS, eReader, WriteOutloud, ZoomText, Proloquo, WYNN, Window-Eyes, and Hal. In some embodiments, the text reader system employed conforms to the DAISY (Digital Accessible Information System) standard.
In some embodiments, the handheld device includes at least one gigabyte of FLASH memory storage and an embedded computing power of 650 megahertz (MHz) or more to accommodate storage of various software components described herein, e.g., plane detection mechanism, image conditioners or filters to improve image quality, contrast, and color, etc. The device may further include in its memory a dictionary of words, one or more translation programs and their associated databases of words and commands, a spellchecker, and a thesaurus. Similarly, the handheld device may employ expanded vocabulary lists to increase the accuracy of OCR with technical language from a specific field, e.g., Latin phrases for the practice of law or medicine or technical vocabularies for engineering or scientific work. The augmentation of the OCR function in such a manner to recognize esoteric or industry-specific words and phrases and to account for the context of specialized documents increases the accuracy of the OCR operation.
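A lexicon-assisted correction pass of the kind described could be sketched with the standard library's fuzzy matcher. The vocabulary list and cutoff value are illustrative assumptions; a deployed system would use the field-specific word lists noted above.

```python
import difflib

# Hypothetical post-OCR correction pass: snap a recognized word to the
# closest entry in a domain-specific vocabulary (here, a tiny sample of
# legal Latin), leaving words without a close match untouched.
LEGAL_TERMS = ["habeas corpus", "res judicata", "certiorari", "estoppel"]

def correct_with_lexicon(word, lexicon=LEGAL_TERMS, cutoff=0.8):
    match = difflib.get_close_matches(word, lexicon, n=1, cutoff=cutoff)
    return match[0] if match else word
```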
In still other embodiments, the handheld device includes a software component that displays the digital text on an LCD display and highlights the words in the text as they are read aloud. For example, U.S. Pat. No. 6,324,511, the disclosure of which is incorporated by reference herein, describes the rendering of synthesized speech signals audible with the synchronous display of the highlighted text.
The handheld device may further comprise a software component that signals to the user when the end of a page is near or signals the approximate location on the page as the text is being read. Such signals may be visual, audio, or tactile. For example, audio cues can be provided to the user in the form of a series of beeps or the sounding of different notes on a scale. Signaling to the user that the end of the page is near provides a prompt for the user to turn the page, if manual page-turning is employed.
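The notes-on-a-scale cue could be realized by mapping reading progress to ascending pitches, with a flag raised on the final step. The particular scale frequencies and the word-count measure of progress are assumptions made for this sketch.

```python
# Hypothetical page-position cue: map reading progress to ascending
# notes of a C-major octave so the user hears how close the end of the
# page is; the second return value flags the end of the page.
SCALE_HZ = [262, 294, 330, 349, 392, 440, 494, 523]

def page_cue(words_read, words_on_page):
    progress = min(words_read / words_on_page, 1.0)
    step = min(int(progress * len(SCALE_HZ)), len(SCALE_HZ) - 1)
    return SCALE_HZ[step], progress >= 1.0
```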
The handheld device may further include a digital/video magnifier. Examples of digital magnifiers available in the art include Opal, Adobe, Quicklook, and Amigo. In certain embodiments, the digital/video magnifier transfers the enlarged image of the text as supplementary inputs to the OCR system along with the image(s) obtained from the image capturing system. In other embodiments, the magnifier functions as a separate unit from the rest of the device and serves only to display the enlarged text to the user.
In various embodiments, the devices of the present invention may be implemented as wireless systems, wired systems, or a combination of both. When implemented as a wireless system, transformation devices may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. Examples of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum and so forth. Examples of wireless communication methods include Bluetooth, ZigBee, wireless local area networks (WLAN), Wi-Fi (WLAN based on IEEE 802.11 specifications), Wi-Pro, WiMax, and GPS (global positioning systems). When implemented as a wired system, a transformation device may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and so forth. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth.
Certain embodiments further include a data port for data transfer, such as transfer of images, from the system to a computing station. For example, the data port can be a USB 2.0 slot for wired communication with devices, and/or communication may be wirelessly enabled with 802.11a/b/g/n (Wi-Fi) standards, and/or using an infrared (IR) port for transferring image data to a computing station. Still another embodiment includes a separate USB cradle that functions as a battery charging mechanism and/or a data transfer mechanism. Still other embodiments employ Bluetooth radio frequency or a derivative of Ultra Wide Band for data transfer.
An optional automatic page turner and page holder are coupled, respectively, to the housing and to the image capturing system, positioned opposite the slot where the book is to be placed. Automatic page turners are well known and available in the art. See U.S. 20050145097, U.S. 20050120601, SureTurn™ Advanced Page Turning Technology (Kirtas Technologies), the disclosures of which are incorporated herein by reference in their entirety. Pages may be turned in response to an automated signal from the transformation device or from a signal from a user. In addition, the device can be employed without an automated page turner, instead relying on the user to turn the pages of a book.
Some of the figures accompanying the description of the present invention may include a logic flow. Although such figures presented herein may include a particular logic flow, it can be appreciated that the logic flow merely provides an example of how the general functionality as described herein can be implemented. Further, the given logic flow does not necessarily have to be executed in the order presented unless otherwise indicated. In addition, the given logic flow may be implemented by a hardware element, a software element executed by a processor, or any combination thereof.
Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.
Some embodiments or aspects of embodiments may be implemented, for example, using a machine-readable or computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
Unless specifically stated otherwise, it may be appreciated that terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. The embodiments are not limited in this context.
The present application claims the benefit of the earlier filing date of U.S. Provisional Application No. 60/913,486, filed Apr. 23, 2007. The present application is related to U.S. patent application Ser. No. 11/729,662, entitled “System for Capturing and Presenting Text Using Video Image Capture for Optical Character Recognition,” filed Mar. 28, 2007, now pending, U.S. patent application Ser. No. 11/729,664, entitled “Method for Capturing and Presenting Text Using Video Image Capture for Optical Character Recognition,” filed Mar. 28, 2007, now pending, U.S. patent application Ser. No. 11/729,665, entitled “Method for Capturing and Presenting Text While Maintaining Material Context During Optical Character Recognition,” filed Mar. 28, 2007, now pending, an application (Application No. unknown) entitled “System for Capturing and Presenting Text While Maintaining Material Context During Optical Character Recognition,” filed Mar. 28, 2007, now pending, and PCT application No. PCT/US07/65528, entitled “Capturing and Presenting Text Using Auditory Signals,” filed Mar. 28, 2007, which claim the benefit of provisional application Nos. 60/811,316, filed Jun. 5, 2006 and 60/788,365, filed Mar. 30, 2006, the disclosures of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
3627991 | Beall | Dec 1971 | A |
4514063 | Wang et al. | Apr 1985 | A |
4563771 | Gorgone et al. | Jan 1986 | A |
5596656 | Goldberg | Jan 1997 | A |
5634084 | Malsheen et al. | May 1997 | A |
5771071 | Bradley et al. | Jun 1998 | A |
5875428 | Kurzweil et al. | Feb 1999 | A |
5942970 | Norman | Aug 1999 | A |
5982911 | Matsumoto et al. | Nov 1999 | A |
6014464 | Kurzweil et al. | Jan 2000 | A |
6033224 | Kurzweil et al. | Mar 2000 | A |
6035061 | Katsuyama et al. | Mar 2000 | A |
6052663 | Kurzweil et al. | Apr 2000 | A |
6111976 | Rylander | Aug 2000 | A |
6115482 | Sears et al. | Sep 2000 | A |
6173264 | Kurzweil et al. | Jan 2001 | B1 |
6199042 | Kurzweil et al. | Mar 2001 | B1 |
6246791 | Kurzweil et al. | Jun 2001 | B1 |
6278441 | Gouzman et al. | Aug 2001 | B1 |
6289121 | Abe et al. | Sep 2001 | B1 |
6295084 | Nishizawa et al. | Sep 2001 | B1 |
6320982 | Kurzweil et al. | Nov 2001 | B1 |
6324511 | Kiraly et al. | Nov 2001 | B1 |
6542623 | Kahn | Apr 2003 | B1 |
6553138 | Rozin | Apr 2003 | B2 |
6587583 | Kurzweil et al. | Jul 2003 | B1 |
6762749 | Gouzman et al. | Jul 2004 | B1 |
6948937 | Tretiakoff et al. | Sep 2005 | B2 |
7006665 | Olson et al. | Feb 2006 | B2 |
7015969 | Brown et al. | Mar 2006 | B2 |
7123292 | Seeger et al. | Oct 2006 | B1 |
7167604 | Allen et al. | Jan 2007 | B2 |
7230538 | Lai et al. | Jun 2007 | B2 |
7308114 | Takehara et al. | Dec 2007 | B2 |
7325735 | Kurzweil et al. | Feb 2008 | B2 |
7366339 | Douglas et al. | Apr 2008 | B2 |
7403306 | Nakano et al. | Jul 2008 | B2 |
7751092 | Sambongi et al. | Jul 2010 | B2 |
7805307 | Levin et al. | Sep 2010 | B2 |
8139894 | Nestares | Mar 2012 | B2 |
8208729 | Foss | Jun 2012 | B2 |
8218020 | Tenchio et al. | Jul 2012 | B2 |
8423076 | Kim et al. | Apr 2013 | B2 |
20010026263 | Kanamori et al. | Oct 2001 | A1 |
20020011558 | Neukermans et al. | Jan 2002 | A1 |
20020067520 | Brown et al. | Jun 2002 | A1 |
20020106620 | Barnum | Aug 2002 | A1 |
20020131636 | Hou | Sep 2002 | A1 |
20020196330 | Park et al. | Dec 2002 | A1 |
20030035050 | Mizusawa et al. | Feb 2003 | A1 |
20030048928 | Yavitz | Mar 2003 | A1 |
20030063335 | Mandel et al. | Apr 2003 | A1 |
20030091244 | Schnee et al. | May 2003 | A1 |
20030109232 | Park et al. | Jun 2003 | A1 |
20030138129 | Olson et al. | Jul 2003 | A1 |
20030195749 | Schuller | Oct 2003 | A1 |
20040136570 | Ullman et al. | Jul 2004 | A1 |
20040202352 | Jones et al. | Oct 2004 | A1 |
20040205623 | Weil et al. | Oct 2004 | A1 |
20050031171 | Krukowski et al. | Feb 2005 | A1 |
20050052558 | Yamazaki et al. | Mar 2005 | A1 |
20050120601 | Sadegh et al. | Jun 2005 | A1 |
20050129284 | Campbell | Jun 2005 | A1 |
20050140778 | Kim et al. | Jun 2005 | A1 |
20050145097 | Wolberg et al. | Jul 2005 | A1 |
20050157955 | Cervantes et al. | Jul 2005 | A1 |
20050286743 | Kurzweil et al. | Dec 2005 | A1 |
20050288932 | Kurzweil et al. | Dec 2005 | A1 |
20060008122 | Kurzweil et al. | Jan 2006 | A1 |
20060011718 | Kurzweil et al. | Jan 2006 | A1 |
20060013444 | Kurzweil et al. | Jan 2006 | A1 |
20060013483 | Kurzweil et al. | Jan 2006 | A1 |
20060045389 | Butterworth | Mar 2006 | A1 |
20060054782 | Olsen et al. | Mar 2006 | A1 |
20060232844 | Nakajima | Oct 2006 | A1 |
20060261248 | Hwang | Nov 2006 | A1 |
20060280338 | Rabb | Dec 2006 | A1 |
20070109417 | Hyttfors et al. | May 2007 | A1 |
20070230748 | Foss | Oct 2007 | A1 |
20070230749 | Foss | Oct 2007 | A1 |
20070230786 | Foss | Oct 2007 | A1 |
20070280534 | Foss | Dec 2007 | A1 |
20080102888 | Sellgren et al. | May 2008 | A1 |
20090052807 | Kotlarsky et al. | Feb 2009 | A1 |
20090167875 | Ho | Jul 2009 | A1 |
20090169061 | Anderson et al. | Jul 2009 | A1 |
20090169131 | Nestares et al. | Jul 2009 | A1 |
20090172513 | Anderson et al. | Jul 2009 | A1 |
20090197635 | Kim et al. | Aug 2009 | A1 |
20090245695 | Foss et al. | Oct 2009 | A1 |
20090268030 | Markham | Oct 2009 | A1 |
20100063621 | Pelletier | Mar 2010 | A1 |
20100111361 | Tan et al. | May 2010 | A1 |
20110267264 | McCarthy et al. | Nov 2011 | A1 |
Number | Date | Country |
---|---|---|
10-2000-0063969 | Nov 2000 | KR |
10-2006-0116114 | Nov 2006 | KR |
PCT/FI2006/050048 | Jan 2005 | WO |
2009006015 | Jan 2009 | WO |
Entry |
---|
Ball et al. “Co-Design in Synthetic Biology: A System Level Analysis of the Development of an Environmental Sensing Device” (2010) pp. 385-396. |
Anderson et al. “Why Consumers Don't Adopt Smart Wearable Electronics” IEEE CS Wearable Computing (2008) pp. 1-3. |
Anderson et al. “Let's Get Physical” Interactions Sep.-Oct. 2008 pp. 1-5. |
International Search Report/Written Opinion for Patent Application No. PCT/US2008/067237, mailed Oct. 29, 2008, 10 Pages. |
Number | Date | Country | |
---|---|---|---|
20080260210 A1 | Oct 2008 | US |
Number | Date | Country | |
---|---|---|---|
60913486 | Apr 2007 | US |