Social network system

Information

  • Patent Grant
  • Patent Number: 10,042,821
  • Date Filed: Wednesday, March 23, 2016
  • Date Issued: Tuesday, August 7, 2018
Abstract
The present invention includes systems and methods for sending social media messages without the need for keyboard inputs. A microphone captures live audio speech data and transmits the audio data to a processing unit. The processing unit converts the audio data to text speech data. The processing unit also removes censored words, emphasizes key words, and edits the data to include product and promotional messages where appropriate. The processing unit then uses code words contained in the speech data to send the speech data to the appropriate social media outlets for output.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present invention is generally related to web publishing. More specifically, the present invention is related to modifying received audio speech data for automatic text publication on social media.


Description of the Related Art

Players, teams, and businesses currently use social media to increase their reach and communicate with fans to promote themselves, their views, products, and brands. Social media messages are commonly integrated into television broadcasts through commentary or displayed alongside live broadcasts in a portion of the display.


It is difficult, however, for athletes to send messages through social media during the course of a game because athletes do not have free use of their hands. An athlete cannot send, for example, a live comment regarding an event during the game because the athlete cannot leave the game to send a message through a phone. This limitation makes it difficult for players, teams, and businesses to fully leverage social media.


There is a need in the art for improved systems and methods for delivering real-time game commentary from players through social media.


SUMMARY OF THE PRESENTLY CLAIMED INVENTION

One exemplary method for sending social media messages describes receiving audio speech data through one or more microphones. The method also describes processing the audio speech data at a processing unit. The processing unit converts the audio speech data to text speech data. The method also describes comparing the text speech data to one or more databases. The one or more databases include one or more code words. The method also describes sending the processed speech data for output through social media. The processing unit routes text speech data for output through social media according to code words included in the text speech data.


One exemplary system for sending social media messages provides one or more microphones, a processing unit, a memory, and a processor. The one or more microphones receive audio speech data. The processing unit processes the audio speech data and compares the text speech data to one or more databases. The processing unit converts the audio speech data to text speech data. The one or more databases include one or more code words. Execution of instructions stored in the memory by the processor performs a set of operations. The operations include sending the processed speech data for output through a social media interface. The processing unit routes text speech data for output through the social media interface according to code words included in the text speech data.


One exemplary non-transitory computer-readable storage medium is also described, the non-transitory computer-readable storage medium having embodied thereon a program executable by a processor to perform an exemplary method for sending social media messages. The exemplary program method describes receiving audio speech data. The program method also describes processing the audio speech data. The program method also describes converting the audio speech data to text speech data. The program method also describes comparing the text speech data to one or more databases. The one or more databases include one or more code words. The program method also describes sending the processed speech data for output through social media. The processing unit routes text speech data for output through social media according to code words included in the text speech data.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a system for sending social media messages.



FIG. 2 illustrates database tables in a player data database, a catchword database, and a product insert database.



FIG. 3 illustrates a method for processing speech data.



FIG. 4 illustrates a method for processing text data.





DETAILED DESCRIPTION

The present invention includes systems and methods for sending social media messages without the need for keyboard inputs. A microphone captures live audio speech data and transmits the audio data to a processing unit. The processing unit converts the audio data to text speech data. The processing unit also removes censored words, emphasizes key words, and edits the data to include product and promotional messages where appropriate. The processing unit then uses code words contained in the speech data to send the speech data to the appropriate social media outlets for output.


Social messages can be sent from entertainment or cultural events that are presented at a theatre, gymnasium, stadium, or other facility to a group of people. Such events include a wide variety of sporting events such as football (American and global), baseball, basketball, soccer, ice hockey, lacrosse, rugby, cricket, tennis, track and field, golf, cycling, motor sports such as automobile or motorcycle racing, horse racing, the Olympic games, and the like; cultural events such as concerts, music festivals, plays, the opera, and the like; religious events; and more permanent exhibitions such as museums, historic homes, and the like.



FIG. 1 illustrates a system 100 for sending social media messages. The system 100 includes a wearable item 105, a processing unit 130, the Internet 195, an internet server 190, and three databases 175, 180, and 185. As illustrated in FIG. 1, a microphone 120, a radio transmitter 115, and an antenna 110 are connected to the wearable item 105, depicted as a helmet. The processing unit 130 includes a radio receiver 135 and a text processing application 155. The radio receiver 135 includes an analog-to-digital converter 140 and a means for receiving one or more channels 145. The text processing application 155 includes a speech recognition unit 160, a player identification detection unit 165, and a catchword detection unit 170. The three databases 175, 180, and 185 include code words (not shown).


The microphone 120 can be an acoustic-to-electric transducer for converting audio data into an electrical signal. The microphone 120 can be used with a wireless transmitter and can be wearable. The radio transmitter 115 is in communication with the microphone 120. The wearable item 105 can be sporting equipment used in the course of playing a sport, including protective or non-protective equipment. The wearable item 105 can be a helmet, protective padding, a uniform, a jersey, footwear, eyewear (e.g., glasses, face shields), or a ball (e.g., football, baseball, soccer ball).


The processing unit 130 is in communication with the radio transmitter 115 through the antenna 110, wherein the radio transmitter 115 produces a radio transmission 125 for delivery to the processing unit 130. The processing unit 130 can be a personal computer, a desktop computer, or a server. The radio transmission 125 is a radio frequency signal carrying audio data. The radio transmitter 115 converts an electrical signal from the microphone 120 into a radio signal for transmission to the antenna 110. The radio transmitter 115 can be a one-way radio transmitter and can include a power source, a radio oscillator, a signal modulator, and a radio frequency amplifier. The radio transmitter 115 can be wireless or wearable. The antenna 110 can convert an electrical signal into radio waves for transmitting a radio-frequency audio signal. The processing unit 130 is in communication with the server 190 through the Internet 195. The system 100 can automatically publish digital speech data to a website through the Internet 195. The server 190 is connected to the Internet 195 and hosts one or more remotely accessible web pages. The server 190 can publish content received via the Internet 195 to social media websites such as Twitter or Facebook. The digital speech data can be representative of verbal commentary during a sporting event.


The radio receiver 135 is a radio frequency receiver for receiving the radio transmission 125 from the radio transmitter 115. The radio receiver 135 receives radio transmissions 125 through one or more channels 145, and the analog-to-digital converter 140 converts the analog radio signal into a digital audio signal. The radio receiver 135 sends the digital audio signal 150 to the text processing application 155. The one or more channels 145 are data parameters defining the channel through which the radio receiver 135 receives the radio transmission 125. The data parameters control or change the frequency monitored by the radio receiver 135. Each of the one or more channels 145 is associated with a speaker, such as an athlete.
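The channel-to-speaker association can be sketched as follows. This is an illustrative example only; the channel numbers, frequencies, and player names are assumptions rather than values from the disclosure.

```python
# Minimal sketch of channel-to-speaker routing (hypothetical data, not from the patent).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Channel:
    channel_id: int
    frequency_mhz: float   # frequency monitored by the radio receiver 135
    player_name: str       # speaker assigned to this channel

# Each channel's data parameters control the frequency the receiver monitors
# and identify the athlete speaking on it.
CHANNELS = [
    Channel(1, 462.5625, "Player A"),
    Channel(2, 462.5875, "Player B"),
]

def identify_player(frequency_mhz: float) -> Optional[str]:
    """Return the speaker associated with the channel carrying this transmission."""
    for ch in CHANNELS:
        if abs(ch.frequency_mhz - frequency_mhz) < 0.001:
            return ch.player_name
    return None

print(identify_player(462.5875))  # "Player B"
```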


The speech recognition unit 160 includes a software program for translating spoken words to text. The speech recognition unit 160 may be an automatic speech recognition program that converts the digital audio signal into text. The player identification detection unit 165 is a software program for determining the identity of a sports player based on the channel associated with each of the one or more athletes. The catchword detection unit 170 is a software program for recognizing code words in the speech recognition unit output, wherein code words include catchwords and product words. The catchword detection unit 170 modifies the speech recognition unit output.
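As an illustrative sketch only, the speech recognition unit 160 could be prototyped with an off-the-shelf engine; the disclosure does not name one, so the SpeechRecognition package and the audio file name below are assumptions.

```python
# Sketch of a speech-to-text step using the open-source SpeechRecognition
# package (an assumption; the patent does not specify a particular engine).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("digital_audio_signal.wav") as source:  # hypothetical capture file
    audio = recognizer.record(source)

try:
    text_data = recognizer.recognize_google(audio)  # spoken words -> text data
except sr.UnknownValueError:
    text_data = ""  # nothing intelligible in the clip
print(text_data)
```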


The player data database 175, the catchword database 180, and the product insert database 185 may be relational databases, such as Microsoft Access or Microsoft SQL Server, or flat files, such as comma-separated value text files compatible with applications such as Microsoft Office. The player data database 175 is a database of player speech data produced by the text processing application 155. The player data database 175 is a relational database with one or more data tables, each of which contains the speech recognition unit output and metadata associated with that output. The catchword database 180 is a database of catchwords provided to the catchword detection unit 170. Each of the one or more catchword database data tables contains catchwords used to modify the text data. The product insert database 185 is a database of product words and sponsored words, wherein the sponsored words are associated with product words and are used to replace the associated product words in the text data.



FIG. 2 illustrates database tables 200 in the player data database 175, the catchword database 180, and the product insert database 185. The processing unit 130 uses the player data database data table 210 to organize text speech data. The player data database table 210 organizes text data 225 according to time 215 and player identification 220. The timestamp for each text data record corresponds to when the system 100 created the text data record. Player identification 220 provides the identity of the speaker associated with the text data record. The player identification 220 can be the name of the sports player, the jersey number of the sports player, the channel identification associated with the speaker, or the frequency associated with the speaker. The speech recognition unit 160 outputs the text data 225.
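A minimal sketch of the player data database table 210, assuming the three columns shown in FIG. 2 (time 215, player identification 220, and text data 225); the SQLite backend, column names, and sample row are illustrative assumptions.

```python
# Illustrative SQLite version of the player data database table 210.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE player_data (
        time       REAL,   -- timestamp when the text data record was created
        player_id  TEXT,   -- name, jersey number, or channel of the speaker
        text_data  TEXT    -- speech recognition unit output
    )
""")
conn.execute(
    "INSERT INTO player_data VALUES (?, ?, ?)",
    (time.time(), "Player A", "what a finish by the offense"),
)
for row in conn.execute("SELECT * FROM player_data ORDER BY time"):
    print(row)
```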


The processing unit uses the catchword database 180 to modify text speech data. The catchword database table 230 includes product words 235, censor words 240, key words 245, and code words 250. The processing unit modifies the text speech data to remove censor words 240 listed in the database. Censor words 240 include obscene language and content prohibited by government agencies (such as the Federal Communications Commission). The processing unit modifies the text speech data to replace product words 235 with corresponding sponsored words 265 listed in the product insert table 255. Product words 235 include specific products, words associated with specific brands, or words associated with specific products. The processing unit further modifies the text speech data to emphasize key words 245 listed in the database. Key words include interjections and words that convey excitement. The processing unit routes modified speech data for output through social media according to code words 250 listed in the database. Code words 250 include words associated with posting messages to particular social media forums, as well as words indicating the beginning and end of messages.
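For illustration, the four word categories of the catchword database table 230 could be represented as follows; every entry shown is a hypothetical example, not a value from the disclosure.

```python
# Illustrative contents of the catchword database table 230 (all entries hypothetical).
CATCHWORDS = {
    "product_words": ["soda", "cleats"],          # replaced with sponsored words 265
    "censor_words":  ["darn", "heck"],            # removed or redacted
    "key_words":     ["wow", "unbelievable"],     # emphasized in the output
    "code_words":    {"tweetthis": "twitter",     # route the message to an outlet
                      "facebookthis": "facebook"},
}
```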


The processing unit uses the product insert database 185 to modify text speech data. The product insert database table 255 includes product words 235 and sponsored words 265. The processing unit modifies the text speech data to replace product words 235 listed in the database with the sponsored words 265 listed in the database. Product words 235 include specific products, words associated with specific brands, or words associated with specific products. Sponsored words 265 include words associated with advertising, endorsements, or promotional deals, as well as words for specific brands or marketing campaigns.



FIG. 3 illustrates a method 300 for processing speech data. The method begins at block 305, where the radio receiver 135 receives the radio transmission 125. The radio receiver 135 may receive radio transmissions 125 through multiple channels, and the channels may be predefined and changed. At block 315, the radio receiver processes the radio transmission 125 using the analog-to-digital converter 140 to convert the radio transmission 125 into a digital audio signal. At block 320, the text processing application 155 uses the speech recognition unit 160 to convert the digital audio signal to text data. The text processing application 155 may use a standard input/output stream. At block 310, the text processing application 155 uses the digital audio signal 150 and channel 145 information to identify the player. The player identification unit 165 then associates the text data with a player based on player information associated with the channel. The player identification unit 165 can compare the frequency of the digital audio signal with information regarding each player and the channel associated with each player. The text processing application 155 stores the text data produced by the speech recognition unit 160, the player identity data produced by the player identification detection unit 165, and the current time in the player data database 175. At block 325, the text processing application 155 uses the catchword detection unit 170 to examine the text data for words stored in the catchword database 180 and the product insert database 185 and to process the text data according to the detected words. The method returns to block 305 if the text data does not include code words used to route the text data for output through social media. If the text data includes one or more code words used to route the text data for output through social media, the method moves to block 330. The text processing application 155 can use a loop construct to compare each word of the text data to the code words 250. At block 330, the text processing application 155 extracts the text data for output. The text processing application 155 can select a series of words for extraction based on the code word used and the location of the code word. The text processing application 155 can select a series of words or characters starting with a code word 250 and ending with a code word 250, or a series of words or characters between a first occurrence of a code word 250 and a second occurrence of a code word 250 in the text data. At block 335, the text processing application outputs the text data together with the player identification for publication through social media.
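A minimal sketch of the block 325 scan and block 330 extraction, assuming hypothetical code words that delimit the message and route it to an outlet.

```python
# Sketch of the code word scan (block 325) and extraction (block 330).
# "begin", "end", and the routing words are hypothetical code words 250.
ROUTING_WORDS = {"tweetthis": "twitter", "facebookthis": "facebook"}

def extract_message(text_data: str):
    words = text_data.lower().split()
    outlet = next((ROUTING_WORDS[w] for w in words if w in ROUTING_WORDS), None)
    if outlet is None or "begin" not in words:
        return None  # no routable message; return to block 305 and keep listening
    start = words.index("begin") + 1
    try:
        stop = words.index("end", start)
    except ValueError:
        return None  # no closing code word after the opening one
    # Keep only the words between the first and second delimiting code words.
    return {"outlet": outlet, "message": " ".join(words[start:stop])}

print(extract_message("tweetthis begin what a finish by the offense end"))
# {'outlet': 'twitter', 'message': 'what a finish by the offense'}
```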



FIG. 4 illustrates a method 400 for processing text data. The method begins at block 410, where the catchword detection unit 170 examines a record from the player database table 210.


At block 415, the catchword detection unit determines whether the text data contains censored words 240 listed in the catchword database table 230. The catchword detection unit can compare each word in the record with each censored word 240 listed in the catchword database table 230.


If the text data does not contain censored words 240, the method continues to block 425. If the text data contains censored words 240 listed in the catchword database table 230, the method continues to block 420. At block 420, censored words contained in the text data are replaced with redacted text or a placeholder. The method then continues to block 425.
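A minimal sketch of the block 420 replacement, assuming a hypothetical censored word list and placeholder text.

```python
# Sketch of block 420: replace censored words with a placeholder (word list assumed).
import re

CENSOR_WORDS = {"darn", "heck"}  # hypothetical entries from the catchword table 230

def redact(text_data: str, placeholder: str = "[redacted]") -> str:
    return " ".join(
        placeholder if re.sub(r"\W", "", w.lower()) in CENSOR_WORDS else w
        for w in text_data.split()
    )

print(redact("That darn call cost us the game"))
# That [redacted] call cost us the game
```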


At block 425, the catchword detection unit determines whether the text data contains product words 235 listed in the catchword database table 230. The catchword detection unit can compare each word in the record with each product word 235 listed in the catchword database table 230.


If the text data does not contain product words 235, the method continues to block 435. If the text data contains product words 235 listed in the catchword database table 230, the method continues to block 430. At block 430, product words contained in the text data are replaced with sponsored words listed in the product insert table 255. The method then continues to block 435.
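A minimal sketch of the block 430 substitution, assuming hypothetical product word and sponsored word pairs.

```python
# Sketch of block 430: swap product words 235 for sponsored words 265 from the
# product insert table 255 (the word pairs are hypothetical).
PRODUCT_TO_SPONSOR = {"soda": "AcmeCola", "cleats": "AcmeCleats Pro"}

def insert_sponsors(text_data: str) -> str:
    return " ".join(PRODUCT_TO_SPONSOR.get(w.lower(), w) for w in text_data.split())

print(insert_sponsors("breaking in my new cleats tonight"))
# breaking in my new AcmeCleats Pro tonight
```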


At block 435, the catchword detection unit determines whether the text data contains key words 245 listed in the catchword database table 230. The catchword detection unit can compare each word in the record with each key word 245 listed in the catchword database table 230.


If the text data does not contain key words 245, the method continues to block 445. If the text data contains key words 245 listed in the catchword database table 230, the method continues to block 440. At block 440, key words contained in the text data are emphasized in the text data. The text processing application may insert markup language formatting commands before and after each key word to emphasize the key word. The method then continues to block 445.
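A minimal sketch of the block 440 emphasis step, assuming hypothetical key words and HTML-style markup formatting commands.

```python
# Sketch of block 440: wrap key words 245 in markup formatting commands to
# emphasize them (the key word list and the <b> tags are assumptions).
KEY_WORDS = {"wow", "unbelievable", "touchdown"}

def emphasize(text_data: str) -> str:
    return " ".join(
        f"<b>{w}</b>" if w.lower() in KEY_WORDS else w for w in text_data.split()
    )

print(emphasize("unbelievable catch for the touchdown"))
# <b>unbelievable</b> catch for the <b>touchdown</b>
```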


At block 445, the processing unit 130 uploads the text data to the internet server 190 via the Internet 195. The processing unit 130 can upload the text data using a content submission application programming interface (API) provided by an operator of the internet server 190 to allow for direct publishing to a social media website. At block 450, the catchword detection unit increments to the next record in the player database table 210 and repeats the method, beginning again at block 410.
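A minimal sketch of the block 445 upload, assuming a hypothetical content submission endpoint and access token rather than any actual social media API.

```python
# Sketch of block 445: upload the processed text via a content submission API.
# The endpoint, token, and payload fields are placeholders, not a real API.
import requests

def publish(message: str, player_id: str, outlet: str) -> bool:
    response = requests.post(
        f"https://social.example.com/api/{outlet}/posts",    # hypothetical endpoint
        headers={"Authorization": "Bearer <access-token>"},  # credential placeholder
        json={"author": player_id, "text": message},
        timeout=10,
    )
    return response.status_code == 201  # created

publish("what a finish by the offense", "Player A", "twitter")
```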


While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The present descriptions are not intended to limit the scope of the presently claimed invention or to limit the scope of embodiments of the presently claimed invention. The present descriptions are intended to cover alternatives, modifications, and equivalents consistent with the spirit and scope of the disclosure.

Claims
  • 1. A method for publishing audio speech as text social media messages using code words, the method comprising: receiving audio signals from a plurality of wearable devices worn by a plurality of users each speaking a plurality of code words, wherein the wearable device includes: a microphone that captures the code words spoken by the user as audio signals, and a radio transmitter that transmits the audio signals to a processing unit that includes a radio receiver, wherein the transmitted audio signals are assigned to different radio transmission channels each having data parameters that control a frequency monitored by the radio receiver, and wherein each channel is assigned to a different user; executing instructions stored in memory by a processor of the processing unit, the instructions executed to: identify the user associated with the received audio signal based on the radio transmission channel used to transmit the received audio signal, convert the received audio signal into a digital record comprising corresponding text data, wherein the text data record is assigned a unique timestamp, and identify that the digital text data record assigned the unique timestamp includes the plurality of code words based on matching to text stored within a database associated with the identified user, wherein the plurality of code words identified within the text data record assigned the unique timestamp includes: a first matching code word that marks a beginning of a social media message to be published within the converted text of the text data record, a second matching code word that marks an end of the social media message to be published within the converted text of the text data record, and a third matching code word that authorizes publication of a portion of the text data record defined by the first and the second matching code word as a social media message via a corresponding social media server tied to the third matching code word; and generating a social media message to be published on the corresponding social media server based on the identification of the third code word within the converted text data record, wherein the message includes the portion of the converted text data record associated with the first and the second code word, and wherein the generated social media message is associated with the identified user.
  • 2. The method of claim 1, further comprising: comparing the portion of the digital text data record associated with the first and the second code word with one or more databases that includes special terms; and modifying the portion of the digital text data record based on the comparison.
  • 3. The method of claim 2, wherein the special terms include specific words to be censored from the social media message.
  • 4. The method of claim 3, wherein modification of the converted text data record that includes one or more censored words is performed by redacting the converted text data record.
  • 5. The method of claim 3, wherein modification of the converted text data record that includes one or more censored words is performed by replacing the censored words with a placeholder.
  • 6. The method of claim 2, wherein the special terms include specific key words to be emphasized in the social media message.
  • 7. The method of claim 6, wherein the emphasis of the specific key words in the social media message is provided via markup language formatting commands.
  • 8. The method of claim 2, wherein the special terms include sponsored terms associated with a particular product or promotional message to be included in the social media message.
  • 9. The method of claim 1, wherein identification of the user associated with the received audio includes comparing the received audio against a database that includes information about a plurality of different users.
  • 10. A system for publishing audio speech as text social media messages using code words, the system comprising: a wearable device worn by a user that includes: a microphone that captures audio signals based on the user speaking a plurality of code words, and a radio transmitter that transmits the received audio signals to a processing unit that includes a radio receiver, wherein the transmitted audio signals are assigned to different radio transmission channels each having data parameters that control a frequency monitored by the radio receiver, and wherein each channel is assigned to a different user; a processing unit that includes instructions stored in memory, the instructions executed by the processor to: identify the user associated with the received audio signal based on the radio transmission channel used to transmit the received audio signal, convert the received audio signal into a digital record comprising corresponding text data, wherein the text data record is assigned a unique timestamp, and identify that the digital text data record assigned the unique timestamp includes the plurality of code words based on matching to text stored within a database associated with the identified user, wherein the plurality of code words identified within the text data record assigned the unique timestamp includes: a first matching code word that marks a beginning of a social media message to be published within the converted text of the text data record, a second matching code word that marks an end of the social media message to be published within the converted text of the text data record, and a third matching code word that authorizes publication of a portion of the text data record defined by the first and the second matching code word as a social media message via a corresponding social media server tied to the third matching code word; and a communication interface that generates a social media message to be published on the corresponding social media server based on the identification of the third code word within the converted text data record, wherein the message includes the portion of the converted text data record associated with the first and the second code word, and wherein the generated social media message is associated with the identified user.
  • 11. The system of claim 10, wherein the processor further: compares the portion of the digital text data record associated with the first and the second code word with one or more databases that includes special terms; and modifies the portion of the digital text data record based on the comparison.
  • 12. The system of claim 11, wherein the special terms include specific words to be censored from the social media message.
  • 13. The system of claim 12, wherein modification of the converted text data record that includes one or more censored words is performed by redacting the converted text data record.
  • 14. The system of claim 12, wherein modification of the converted text data record that includes one or more censored words is performed by replacing the censored words with a placeholder.
  • 15. The system of claim 11, wherein the special terms include specific key words to be emphasized in the social media message.
  • 16. The system of claim 15, wherein the emphasis of the specific key words in the social media message is provided via markup language formatting commands.
  • 17. The system of claim 11, wherein the special terms include sponsored terms associated with a particular product or promotional message to be included in the social media message.
  • 18. The system of claim 10, wherein identification of the user associated with the received audio includes comparing the received audio against a database that includes information about a plurality of different users.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation and claims the priority benefit of U.S. patent application Ser. No. 14/788,754 filed Jun. 30, 2015, which claims the priority benefit of U.S. provisional application No. 62/023,355, filed on Jul. 11, 2014, the disclosures of which are incorporated herein by reference.

US Referenced Citations (167)
Number Name Date Kind
6253179 Beigi Jun 2001 B1
6487534 Thelen et al. Nov 2002 B1
6622084 Cardno et al. Sep 2003 B2
6633852 Heckerman et al. Oct 2003 B1
6980966 Sobrado et al. Dec 2005 B1
7082427 Seibel et al. Jul 2006 B1
7715723 Kagawa et al. May 2010 B2
7800646 Martin Sep 2010 B2
7818176 Freeman et al. Oct 2010 B2
7881702 Heyworth et al. Feb 2011 B2
7970608 Madhavapeddi et al. Jun 2011 B2
8090707 Orrtung et al. Jan 2012 B1
8183997 Wong et al. May 2012 B1
8253586 Matak Aug 2012 B1
8254535 Madhavapeddi et al. Aug 2012 B1
8265612 Athsani et al. Sep 2012 B2
8290925 Anandan et al. Oct 2012 B1
8355912 Keesey et al. Jan 2013 B1
8472988 Metcalf et al. Jun 2013 B2
8502717 Lin et al. Aug 2013 B2
8502718 Chiu et al. Aug 2013 B2
8543404 Moore et al. Sep 2013 B2
8560323 Madhavapeddi et al. Oct 2013 B2
8577685 Morrison Nov 2013 B2
8589667 Mujtaba et al. Nov 2013 B2
8611930 Louboutin et al. Dec 2013 B2
8620344 Huang et al. Dec 2013 B2
8626465 Moore et al. Jan 2014 B2
8630216 Deivasigamani et al. Jan 2014 B2
8660501 Sanguinetti Feb 2014 B2
8665118 Woodard et al. Mar 2014 B1
8696113 Lewis Apr 2014 B2
8706044 Chang et al. Apr 2014 B2
8724723 Panicker et al. May 2014 B2
8750207 Jeong et al. Jun 2014 B2
8793094 Tam et al. Jul 2014 B2
8816868 Tan et al. Aug 2014 B2
8831529 Toh et al. Sep 2014 B2
8831655 Burchill et al. Sep 2014 B2
8836851 Brunner Sep 2014 B2
8843158 Nagaraj Sep 2014 B2
8849308 Marti et al. Sep 2014 B2
8862060 Mayor Oct 2014 B2
8873418 Robinson et al. Oct 2014 B2
8874090 Abuan et al. Oct 2014 B2
8917632 Zhou et al. Dec 2014 B2
8934921 Marti et al. Jan 2015 B2
9343066 Cronin May 2016 B1
9384734 Wiseman Jul 2016 B1
9711146 Cronin Jul 2017 B1
20020099574 Cahill et al. Jul 2002 A1
20040117528 Beacher et al. Jun 2004 A1
20050160270 Goldberg et al. Jul 2005 A1
20050207596 Beretta et al. Sep 2005 A1
20060025214 Smith Feb 2006 A1
20060095329 Kim May 2006 A1
20070032945 Kaufman Feb 2007 A1
20070136128 Janacek et al. Jun 2007 A1
20070282621 Altman et al. Dec 2007 A1
20070290888 Reif et al. Dec 2007 A1
20080114633 Wolf et al. May 2008 A1
20080134282 Fridman et al. Jun 2008 A1
20080317263 Villarreal Dec 2008 A1
20090005040 Bourne Jan 2009 A1
20090198778 Priebe Aug 2009 A1
20100057743 Pierce Mar 2010 A1
20100070312 Hunt Mar 2010 A1
20100086107 Tzruya Apr 2010 A1
20100201362 Crossan Aug 2010 A1
20100208082 Buchner et al. Aug 2010 A1
20110029894 Eckstein Feb 2011 A1
20110035220 Opaluch Feb 2011 A1
20110211524 Holmes et al. Sep 2011 A1
20110282860 Baarman et al. Nov 2011 A1
20120022875 Cross et al. Jan 2012 A1
20120023390 Howes Jan 2012 A1
20120078667 Denker et al. Mar 2012 A1
20120092190 Stefik et al. Apr 2012 A1
20120201362 Crossan et al. Aug 2012 A1
20120262305 Woodard et al. Oct 2012 A1
20120303390 Brook et al. Nov 2012 A1
20120303753 Hansen Nov 2012 A1
20120331058 Huston et al. Dec 2012 A1
20130018810 VonAllmen Jan 2013 A1
20130054375 Sy et al. Feb 2013 A1
20130122936 Hudson et al. May 2013 A1
20130124234 Nilsson et al. May 2013 A1
20130126713 Haas et al. May 2013 A1
20130141555 Ganick et al. Jun 2013 A1
20130165086 Doulton Jun 2013 A1
20130167290 Ben Jul 2013 A1
20130185102 Grossi Jul 2013 A1
20130227011 Sharma et al. Aug 2013 A1
20130238370 Wiseman et al. Sep 2013 A1
20130254234 Pierce Sep 2013 A1
20130265174 Scofield et al. Oct 2013 A1
20130279917 Son et al. Oct 2013 A1
20130303192 Louboutin Nov 2013 A1
20130304691 Pinckney et al. Nov 2013 A1
20130317835 Mathew Nov 2013 A1
20130324274 Stites Dec 2013 A1
20130328917 Zhou Dec 2013 A1
20130331087 Shoemaker Dec 2013 A1
20130331118 Chhabra Dec 2013 A1
20130331137 Burchill Dec 2013 A1
20130332108 Patel Dec 2013 A1
20130332156 Tackin Dec 2013 A1
20130336662 Murayama et al. Dec 2013 A1
20130343762 Murayama et al. Dec 2013 A1
20140012918 Chin et al. Jan 2014 A1
20140019172 Oxenham et al. Jan 2014 A1
20140025235 Levien et al. Jan 2014 A1
20140032250 Oxenham et al. Jan 2014 A1
20140032377 Oxenham et al. Jan 2014 A1
20140036088 Gabriel Feb 2014 A1
20140046802 Hosein et al. Feb 2014 A1
20140062773 MacGougan Mar 2014 A1
20140065962 Le Mar 2014 A1
20140071221 Dave Mar 2014 A1
20140081882 Govindaraman Mar 2014 A1
20140095219 Zises Apr 2014 A1
20140095337 Pigeon et al. Apr 2014 A1
20140105084 Chhabra Apr 2014 A1
20140129629 Savir et al. May 2014 A1
20140129962 Lineberger et al. May 2014 A1
20140136196 Wu May 2014 A1
20140139380 Ouyang May 2014 A1
20140141803 Marti May 2014 A1
20140162628 Bevelacqua Jun 2014 A1
20140167794 Nath Jun 2014 A1
20140168170 Lazarescu Jun 2014 A1
20140171114 Marti Jun 2014 A1
20140176348 Acker, Jr. et al. Jun 2014 A1
20140180820 Louboutin Jun 2014 A1
20140189937 Pietrzak et al. Jul 2014 A1
20140191979 Tsudik Jul 2014 A1
20140200053 Balasubramanian Jul 2014 A1
20140222335 Piemonte Aug 2014 A1
20140222531 Jacobs et al. Aug 2014 A1
20140232633 Shultz Aug 2014 A1
20140232634 Piemonte Aug 2014 A1
20140241730 Jovicic et al. Aug 2014 A1
20140247279 Nicholas Sep 2014 A1
20140247280 Nicholas Sep 2014 A1
20140266804 Asadpour Sep 2014 A1
20140269562 Burchill Sep 2014 A1
20140274150 Marti Sep 2014 A1
20140283135 Shepherd Sep 2014 A1
20140293959 Singh Oct 2014 A1
20140358545 Robichaud et al. Dec 2014 A1
20140363168 Walker Dec 2014 A1
20140364089 Lienhart Dec 2014 A1
20140364148 Block Dec 2014 A1
20140365120 Vulcano Dec 2014 A1
20140375217 Feri et al. Dec 2014 A1
20150006648 Cao Jan 2015 A1
20150011242 Nagaraj Jan 2015 A1
20150026623 Horne Jan 2015 A1
20150031397 Jouaux Jan 2015 A1
20150105035 de Oliveira Apr 2015 A1
20150106085 Lindahl Apr 2015 A1
20150154513 Kennedy et al. Jun 2015 A1
20150170099 Beach-Drummond Jun 2015 A1
20150220940 Tuteja Aug 2015 A1
20150242889 Zamer et al. Aug 2015 A1
20150347928 Boulanger et al. Dec 2015 A1
20150379478 Klemm et al. Dec 2015 A1
Foreign Referenced Citations (6)
Number Date Country
102843186 Dec 2012 CN
1 096 715 Aug 2006 EP
WO 0051259 Aug 2000 WO
WO 2009104921 Aug 2009 WO
WO 2013051009 Apr 2013 WO
WO 2013089236 Jun 2013 WO
Non-Patent Literature Citations (76)
Entry
US 9,679,565, 06/2017, Cronin (withdrawn)
U.S. Appl. No. 14/840,840 Office Action dated Jul. 1, 2016.
U.S. Appl. No. 14/798,201, John Cronin, Information Map Placement, filed Feb. 13, 2015.
U.S. Appl. No. 14/731,384, John Cronin, Wireless System for Social Media Management, filed Jun. 4, 2015.
U.S. Appl. No. 14/798,339, John Cronin, Social Media Connection for Venue Interactions, filed Jul. 13, 2015.
U.S. Appl. No. 14/840,840, John Cronin, Event Tailgating Community Management, filed Aug. 31, 2015.
U.S. Appl. No. 14/840,855, John Cronin, Event Tailgating Parking Management, filed Aug. 31, 2015.
U.S. Appl. No. 14/840,840 Final Office Action dated Dec. 29, 2016.
U.S. Appl. No. 14/798,201 Office Action dated Nov. 1, 2016.
U.S. Appl. No. 14/731,384 Final Office Action dated Nov. 25, 2016.
Chan, Casey; "NFL Helmets Are Finally Using Technology To Make Things Not Suck", Gizmodo, Aug. 22, 2012. http://gizmodo.com/5937115/nfl-helmets-are-finally-using-technology-to-make-things-not-suck.
“Cisco Stadiumvision Mobile Solution”, Cisco, Aug. 1, 2013.
“Create Innovative Services with Play Apps”, Date of Download: Jan. 16, 2014 http://www.oledcomm.com/LIFI.html, Oledcomm—France LiFi.
Danakis, C et al.; “Using a CMOS Camera Sensor for Visible Light Communication”; 3rd IEEE Workshop on Optical Wireless Communications; [online], Dec. 3-7, 2012 [retrieved Aug. 14, 2015]. Retrieved from the Internet: <URL: https://195.134.65.236/IEEE Globecom 2012/papers/p1244-danakis.pdf> pp. 1244-1248.
Dawson, Keith; “LiFi in the Real World” All LED Lighting—Illuminating the LED Community, Jul. 31, 2013.
Gonzalez, Antonio; “NFL's helmet radios back on air”, The Associated Press, telegram.com, Published Aug. 15, 2012.
Gorman, Michael; “Outstanding Technology brings visible light communication to phones and tablets via dongle and LEDs”, Edgadget International Editions, Jul. 16, 2012.
Grebe, Helmut; “Coming soon: the “Twitter Helmet” (/2014/coming-soon-the-twitter-helmet)”, All Twitter Blogs, Apr. 1, 2014.
Haas, Harald; “Delivering safe and secure wireless communications”, pureLiFi. Date of download: Jan. 16, 2014 http://purelifi.co.uk/.
“How It Works”, Ticketfly.com (http://start.ticketfly.com/platform/how-it-works/) Jan. 1, 2010.
“iPhone and Android Parking App”, by ParkWhiz, Aug. 8, 2014.
Interactive Seat Map FAQs. Official Ticketmaster site. May 2, 2014. http://www.ticketmaster.com/interactiveseatmap/faq.html.
Khan, Mehwish; “Mobilink Introduces Mobilink Voiler, a Voice-Based Social Networking Service”, Propakistani Telecom and IT News, Dec. 20, 2013.
Kim, Torrey; “5 Free Apps That Help You Find Parking Discounts”, Mobile Coupons & Deals Expert, About.com, Date of download: Aug. 1, 2014.
“KLM Meet & Seat”, KLM.com, May 2, 2014. http://www.klm.com/travel/us en/prepare for trvel/on board/Your seat on board/meet and seat.htm.
Kumar, Navin; “Visible Light Communications Systems Conception and VIDAS”, IETE Technical Review, vol. 25, Issue 6, Nov.-Dec. 2008. Date of download: Nov. 19, 2009 http://www.tr.ietejournals.org.
Levi's Stadium Mobile App, Aug. 21, 2014.
LiFi Overview—Green wireless mobile communication—LiFi Technology. Date of download: Jan. 16, 2014.
Li, Yang et al., “VICO: A Framework for Configuring Indoor Visible Light Communication Networks” Aug. 11, 2012, Mobile Adhoc and Sensor Systems (MASS), 2012 IEEE 9th International Conference, Las Vegas, NV.
McConky et al., Katie T.; “Automating Battlefield Event Reporting Using Conceptual Spaces and Fuzzy Logic for Passive Speech Interpretation”, Military Communications Conference, 2009, MILCOM 2009. IEEE, Oct. 18-21, 2009.
“Minnesota Theater Offers ‘Tweet Seats’ to Smartphone Addicts”, Huffington Post, Dec. 28, 2012.
Montero, Eric, “Design and Implementation of Color-Shift Keying for Visible Light Communications”, Sep. 2013, McMaster University.
“New Tailgate Parking Available for 2014 O'Reilly Auto Parts Route 66 NHRA Nationals”, Chicagoland Speedway, Apr. 14, 2014.
Nguyen et al., “A Novel like switching scheme using pre-scanning and RSS prediction in visible light communication networks”, EURASIP Journal on Wireless Communications and Networking, 2013.
Ogasawara, Todd; “StartTalking: Free Android App for Handsfree Twitter, Facebook, & Text Messaging”, SocialTimes, Sep. 30, 2010.
Ogawa; “Article about VLC Guidance developed”, Visible Light Communications Consortium (VLCC), Aug. 31, 2012.
Ogawa; “iPhone app from CASIO”, Visible Light Communications Consortium (VLCC), Apr. 26, 2012.
Ostrow, Adam; “Update Twitter and Your Facebook Status Using Voice”, Mashable.com, Oct. 29, 2008.
Parekh, Rupal; “Is Voice-Based Bubbly the New Twitter?”, Adage.com—Global News, Mar. 11, 2010.
“Pay-By-Phone Parking Meter App Expanding Citywide This Summer”, CBS Chicago Local news, May 6, 2014.
Povey, Gordon, “VLC for Location, positioning and navigation”, Jul. 27, 2011, http://visiblelightcomm.com/vlc-for-location-positioning-and-n . . . .
Rambabu et al., K.; “An Optimal Driving System by Using Wireless Helmet”, International Journal of Science, Engineering and Technologies Research (IJSETR) vol. 2, Iss. 9, Sep. 2013. ISSN: 2278-7798.
Rosenthal, Gregg; “Report: Owners planning to have players miked-up”, Around the League, NFL.com, Published Jul. 4, 2012.
Salter, Chuck; “TicketMaster Teams With Facebook So You Can Sit Next to Your Friends”, Fast Company, Aug. 24, 2011.
“Seating chart software made with you in mind”, Table Plan Software 1 Social Tables. Date of Download: May 2, 2014 https://socialtables.com/seating-chart-software.
“Social Seating and Booking Platform”, SeatID. Date of Download: May 2, 2014 http://www.seatid.com/product/.
Sorgi, Jay; “NFL considers in-stadium audio with miked-up players, coaches”, Todays TMJ4, Aug. 28, 2013.
“Speech-to-text server replace with product name advertising twitter tweet facebook social”, Google Search Oct. 28, 2013.
“Sports Communications System”, Telex Intercom, Feb. 22, 2010.
Stadium App 1 Levi's Stadium, Aug. 6, 2014.
Tailgate Scout Home page <http://tailgatescout.com/site> Date of download: Oct. 15, 2015.
Tailgate Scout Features page <http://tailgatescout.com/site/features/> Date of download: Oct. 15, 2015.
Tailgate Scout About page <http://tailgatescout.com/site/about/> Date of download: Oct. 15, 2015.
Thanigavel, M.; “Li-Fi Technology in Wireless Communication”, International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, vol. 2 Issue 10, Oct. 2013.
Wang et al., Hongwei; “A Reservation-based Smart Parking System”, The First International Workshop on Cyber-Physical Networking Systems, 2011.
Williams, George; “5 Easy Speech-to-Text Solutions”, The Chronicle of Higher Education, ProfHacker, Teaching, Tech, and Productivity. Mar. 3, 2010.
Won, Eun Tae; “Visible Light Communication: Tutorial”, Project: IEEE P802.15 Working Group for Wireless Personal Area Networks (WPANs), Mar. 9, 2008.
YouTube, “Twitter Helmet to Let User Tweet With Their Heads?”, Anonymex, published on Apr. 17, 2014.
PCT Application No. PCT/US2015/033613 International Search Report and Written Opinion dated Sep. 1, 2015.
U.S. Appl. No. 14/788,754 Office Action dated Aug. 20, 2015.
U.S. Appl. No. 14/798,201 Final Office Action dated Jun. 1, 2016.
U.S. Appl. No. 14/798,201 Office Action dated Oct. 8, 2015.
U.S. Appl. No. 14/731,384 Office Action dated Apr. 29, 2016.
U.S. Appl. No. 14/798,339 Final Office Action dated Mar. 24, 2016.
U.S. Appl. No. 14/798,339 Office Action dated Sep. 4, 2015.
U.S. Appl. No. 14/840,840 Office Action dated Mar. 15, 2016.
U.S. Appl. No. 14/840,840 Office Action dated Oct. 30, 2015.
U.S. Appl. No. 14/840,855 Final Office Action dated Apr. 14, 2016.
U.S. Appl. No. 14/840,855 Office Action dated Oct. 27, 2015.
U.S. Appl. No. 14/798,201 Final Office Action dated Jun. 2, 2017.
U.S. Appl. No. 14/798,339 Office Action dated May 10, 2017.
U.S. Appl. No. 14/840,840 Office Action dated Jun. 23, 2017.
U.S. Appl. No. 14/840,855 Office Action dated Jul. 26, 2017.
U.S. Appl. No. 14/798,339 Final Office Action dated Oct. 13, 2017.
U.S. Appl. No. 14/840,840 Final Office Action dated Oct. 26, 2017.
U.S. Appl. No. 14/798,201 Office Action dated Jan. 8, 2018.
Provisional Applications (1)
Number Date Country
62023355 Jul 2014 US
Continuations (1)
Number Date Country
Parent 14788754 Jun 2015 US
Child 15078778 US