The disclosed embodiments relate generally to the playback of audio, video, or other data streams, and more specifically, to various techniques for allowing a recipient of a broadcast stream to resume a temporal experience that has been interrupted.
Listening to a broadcast of a radio or television program can be a deeply engaging experience for a user. Such experiences are sometimes interrupted, such as when a baby needs immediate attention, when arriving at a destination in the middle of enjoying the experience in transit, or when the user must walk away from a radio or television. In some cases, it is possible to resume the program later from recorded data, such as a podcast or online video, but it may be difficult or inconvenient to find the data, or the position at which the program was interrupted. In some cases, it is possible to continue the program by tuning to another receiver of the broadcast, but that is inconvenient as well.
In another situation, a listener discovers an engaging broadcast, but has missed the beginning of it. Even though it is technically possible to replay the broadcast from a podcast or online video, it may be difficult or inconvenient for the listener to locate the stored data.
In these situations, the temporal experience is less than optimal. For example, John drives home from work while listening to a broadcast of a fascinating interview by Terry Gross on the radio program “Fresh Air.” He is only 25 minutes into a one-hour broadcast when he gets home. John could stay in the car until the end of the hour, which would be awkward and inconvenient. He could leave the car and wait until the episode of Fresh Air becomes available as an Internet podcast, but that might not occur for a long time, and he would find it difficult to pick up at the program position where he left off. To do so, he would have to note the position within the program and the date. If the broadcast is a rerun then the important date is not the current date, but that of the original broadcast. The date of the original broadcast was mentioned at the beginning of the program, but John did not write it down while driving.
Such problems are not limited to radio programs. Television programs, movies, and other temporal experiences likewise depend heavily on their progression through time.
A broadcast recognition system according to U.S. patent application Ser. No. 13/401,728 can identify broadcast sources from a few seconds of audio, and determine the time position of the segment of audio within the broadcast stream. We propose several solutions to the problem of resuming the experience of a broadcast after an interruption. Some of the solutions offer additional functionality, such as playback control options.
The present invention is directed to systems and methods for resuming identifiable broadcast streams. These are media streams that the user cannot pause. Some examples are radio broadcasts, television broadcasts, webcasts, and Internet radio streams. The invention can be fully embodied in each of servers, clients, and the interactions of any combination of servers, clients, and users.
According to an aspect of the invention, a user operates a client device that comprises a microphone. In some embodiments, the client is a smartphone with an application program (app) installed. According to another aspect of the invention, one or more servers monitor a number of broadcast sources. According to some embodiments, broadcast signals come from radio stations, television stations, Internet stations, or any source of media content that a user has no control to pause, reposition or resume. A server (or a plurality of servers) maintains a database that stores station data, including static metadata about the station, and fingerprints for live broadcast audio signals. The client captures audio segments from a microphone and sends a corresponding query to the server. Matching audio fingerprints between client audio segments and monitored station audio signals can be used to identify the broadcast station that originated the signal. Based on the station's metadata, this may lead to one or more ways to support the continuation of the user's listening experience. Multiple alternative scenarios will be described.
The information sent by the client to the server for purposes of identifying a station is known as a query. In various embodiments, a query comprises one or more of: a sampled audio segment, a compressed audio segment, or a fingerprint sequence that the client computes from the sampled audio segment. Note that the terminology for fingerprints can be confusing: the fingerprint of a segment of audio may be a fingerprint sequence, with one fingerprint element per time frame. In this disclosure, the terms “fingerprint” and “fingerprint sequence” are used interchangeably.
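By way of illustration only (the following is a sketch, not part of the original disclosure, and all field names are hypothetical), the alternative query contents described above might be represented as a simple data structure in which exactly one of the audio representations is populated, alongside optional client context metadata:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class BroadcastQuery:
    # Exactly one of the three audio representations is populated per query.
    sampled_audio: Optional[bytes] = None      # raw sampled audio segment
    compressed_audio: Optional[bytes] = None   # codec-compressed audio segment
    fingerprints: Optional[List[int]] = None   # one fingerprint element per time frame
    # Optional client context metadata.
    timestamp: Optional[float] = None          # capture time, epoch seconds
    location: Optional[Tuple[float, float]] = None  # (latitude, longitude)
    user_profile: dict = field(default_factory=dict)

    def payload_kind(self) -> str:
        """Report which audio representation this query carries."""
        if self.fingerprints is not None:
            return "fingerprints"
        if self.compressed_audio is not None:
            return "compressed"
        return "sampled"
```

A server receiving such a query would inspect `payload_kind()` to decide whether decompression or fingerprint computation is still required on its side.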
In various embodiments, the query metadata may include client context information such as a timestamp, the client's location, a user profile or user preference data, or input from a sensor on the client. A query elicits a response from the server. The server receives the query, decompresses the audio segment, if necessary, and computes an audio fingerprint if necessary. The server runs a broadcast stream recognition system. The broadcast stream recognition system uses a fingerprint database, and looks for a match between the client's fingerprint sequence and a fingerprint sequence among the fingerprint sequences of the monitored broadcasts. If a match has been achieved, the response from the server may include one or more of: an identification of the broadcast station; an identification of a radio or TV program; an identification of a music title or album; and other information indicating possible ways for continuing to experience the content from the client device. In some embodiments, the user commands a portable client to switch to a substitute content source on the fly.
According to some embodiments, the client comprises a programmable tuner. In response to a user's request via a client app, the recognition server identifies a station, and then instructs the client to set the frequency of the programmable tuner to that of the identified station. This enables the user to leave the car, and continue listening to the broadcast through the speaker of the mobile client device. The user's listening experience then continues without a hitch. In some embodiments the user makes a request to a client operating system.
In some embodiments the client identifies a need to program and enable its tuner without a user request. The client is always listening, and enables the tuner when the broadcast audio becomes faint. When the client is playing broadcast audio, and hears the same broadcast from another source through its microphone, then the client disables its tuner. This is useful if, for example, a user listening to a broadcast on a portable client walks into a room or turns on a car radio playing the same broadcast. In such a case, the portable client, by turning off its own tuner, can conserve its battery energy. One method of distinguishing externally sourced broadcast audio from the client's own speaker output is for the client to add a small delay to its speaker audio output.
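The delay-based disambiguation just described can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes the client keeps a reference fingerprint sequence of its own output and finds the alignment lag at which the microphone capture best matches that reference. A best lag equal to the deliberately added speaker delay indicates the client is hearing its own speaker.

```python
def best_alignment_lag(reference, captured, max_lag):
    """Return the lag (in frames) at which the captured sequence best
    matches the reference, scored by counting equal fingerprint elements."""
    best_lag, best_score = 0, -1
    for lag in range(max_lag + 1):
        score = sum(1 for r, c in zip(reference, captured[lag:]) if r == c)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def heard_own_speaker(reference, captured, speaker_delay, max_lag=20):
    """If the microphone capture aligns with the client's own output at the
    deliberately added speaker delay, the client is hearing itself and may
    disable its tuner to conserve battery energy."""
    return best_alignment_lag(reference, captured, max_lag) == speaker_delay
```

An external source playing the same broadcast would align near lag zero rather than at the known speaker delay, so the two cases are separable.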
In some embodiments, the server provides the client with information that identifies the source of a live Internet broadcast stream for the identified station, if such a broadcast stream exists. In some embodiments, when a broadcast stream is identified, the client accesses an on-demand broadcast stream for the broadcast content.
In some embodiments, a server stores stream sources for this purpose. In some embodiments, the server sources media streaming content from a third party. In some embodiments, the server provides playback controls such as pause, rewind, and fast-forward to the user, through the client. In some embodiments, the client downloads a media file, either from the server or from a third party, stores it in a local non-transitory medium, and plays the media file on demand.
The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be used without departing from the principles of the present invention.
U.S. patent application Ser. No. 13/401,728 describes systems and methods to detect and identify a broadcast station (or stream) that a client hears. Some such systems are able to timestamp the point at which the user has captured the stream for recognition. Using additional data if needed, some embodiments of the present invention are able to give users options for “putting on hold” and later resuming a program after its interruption. For example, when a user leaves her car, so that her car radio becomes unavailable as an audio play source, some embodiments of the invention can save sufficient information to continue the program uninterrupted from a client device such as a mobile phone. Some embodiments save sufficient information to resume the program later, at the same position. Some embodiments provide for a user to indicate an amount of rewinding from the position, which can help re-establish the context of the program. Some embodiments use resources such as available alternative stream sources. Some embodiments assume that the position of the last captured audio received from the client and successfully identified marks the position of the interruption. Such embodiments use that position as a reference timestamp for the beginning of a new listening session. Some embodiments use a default, but user settable, amount of rewinding before starting to play the program again.
For purposes of illustration, we use audio as the exemplary medium for identifying broadcast sources; the audio that a user listens to may come from a radio station, a TV station, or another stream source. Note that the present invention may be implemented for generalized data streams, including audio, video and other data, such as subcarrier metadata. The corresponding fingerprint sequences may be generated from these generalized stream signals and subsequently matched, in much the way the disclosure handles audio. A person skilled in the art will readily see how to transpose the techniques presented to media other than audio.
Digital audio transmission systems may compress audio signals using an audio codec. However, for the purpose of comparing “similar-sounding” audio segments, systems may pre-process audio segments to generate a fingerprint sequence (also called “signature” or “robust hash”). The fingerprints of two audio segments can be compared to determine how similar the two audio segments are to each other. Fingerprinting is closely related to perceptually-based compression. Both rely on compact representations of audio signals. However, whereas codecs seek to maximize the quality of signal reconstruction, fingerprints seek to optimize precision and recall during recognition. When matching a query of sufficient length, the precision of recognition is very high. That is true for both audio and video signals. The audio and image components of video can be fingerprinted separately, or the fingerprints combined. Audio fingerprints give information about broadcast content in a compact form that allows accurate identification.
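For illustration, one well-known family of robust hashes (the disclosure does not commit to a particular fingerprint algorithm, so this is an assumed example in the style of band-energy-difference fingerprints) derives one fingerprint element per time frame from per-band spectral energies: each bit records whether the energy difference between two adjacent bands increased relative to the previous frame.

```python
def frame_fingerprint(prev_energies, cur_energies):
    """Pack one bit per adjacent-band comparison: bit b is 1 when the energy
    difference between bands b and b+1 increased since the previous frame."""
    bits = 0
    for b in range(len(cur_energies) - 1):
        delta_cur = cur_energies[b] - cur_energies[b + 1]
        delta_prev = prev_energies[b] - prev_energies[b + 1]
        if delta_cur - delta_prev > 0:
            bits |= 1 << b
    return bits

def fingerprint_sequence(energy_frames):
    """One fingerprint element per time frame, as the terminology note above
    describes; input is a list of per-frame spectral band energies."""
    return [frame_fingerprint(p, c)
            for p, c in zip(energy_frames, energy_frames[1:])]
```

Because the bits encode only signs of energy changes, the representation discards reconstruction quality while remaining stable under noise and codec distortion, which is the precision/recall trade-off noted above.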
A broadcast monitoring system receives audio from multiple broadcast sources, segments the audio received from an audio source into blocks, computes fingerprints for each block, and indexes the fingerprints by broadcast source and a universal timestamp. Small block sizes allow a lower detection latency, but greater overhead in storage and processing requirements. A block size on the order of one second is reasonable.
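The segmentation and indexing loop can be sketched as below. This is an illustrative assumption of one possible structure (a flat dictionary keyed by station and timestamp stands in for the broadcast database), not the disclosed implementation:

```python
def index_broadcast_audio(samples, sample_rate, station_id, start_time,
                          block_seconds=1.0, fingerprint_fn=hash):
    """Segment a monitored station's audio into blocks of about one second,
    fingerprint each block, and index it by (station, universal timestamp)."""
    block = int(sample_rate * block_seconds)
    index = {}
    for i in range(0, len(samples) - block + 1, block):
        ts = start_time + (i / sample_rate)  # universal timestamp of the block
        index[(station_id, ts)] = fingerprint_fn(tuple(samples[i:i + block]))
    return index
```

Shrinking `block_seconds` lowers detection latency at the cost of more index entries, matching the trade-off stated above.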
In some embodiments the system aggregates the fingerprints into fingerprint buffers. A fingerprint buffer contains the fingerprints of a single broadcast channel over a particular length of time, such as one hour or one day. Some embodiments comprise a signal buffer that stores digital signal data corresponding to a particular length of time; in an example embodiment, 30 seconds of radio fingerprints are stored; in another, 6 hours of TV audio fingerprints are stored. With the current availability of cheap storage, it is quite practical to store weeks of fingerprinted material. The system then processes the data in the signal buffer and stores it in a fingerprint buffer, routinely discarding the oldest fingerprint data in the fingerprint buffer.
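The rolling discard behavior of a per-channel fingerprint buffer maps naturally onto a bounded circular buffer; the sketch below (illustrative only, with assumed parameter names) uses a fixed-capacity deque so that appending beyond capacity silently evicts the oldest block:

```python
from collections import deque

class FingerprintBuffer:
    """Per-channel rolling store of (timestamp, fingerprint) pairs. Once
    capacity is reached, the oldest fingerprint data is routinely
    discarded, as in the circular-buffer embodiments described above."""

    def __init__(self, block_seconds=1.0, retention_seconds=3600.0):
        capacity = int(retention_seconds / block_seconds)
        self.blocks = deque(maxlen=capacity)

    def append(self, timestamp, fingerprint):
        # deque with maxlen evicts the oldest entry automatically.
        self.blocks.append((timestamp, fingerprint))

    def oldest_timestamp(self):
        return self.blocks[0][0] if self.blocks else None
```

Setting `retention_seconds` to an hour or a day reproduces the retention windows mentioned above.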
The “real-time” timing reference for live content is the point in time of broadcasting the stream signal from a broadcast station, through radio waves or via Internet. There may be a delay between the real-time signal and the time at which broadcast station fingerprints are available to the server, due to latency in the broadcast monitoring system for signal capture and fingerprint generation. These delays cannot be eliminated, but they can be accurately tracked with timestamping. The same is true on a client. Delay occurs between the real-time reference and the reception of fingerprints on the server that performs the matching of fingerprints for station identification; the delays are due to signal capture, fingerprint generation and data transmission. In embodiments that require near continuity during a hand-off, it is important to minimize both types of delay (server-side and client-side). The presence of a gap could cause the loss of an important understanding or appreciation of the program.
Broadcast monitoring system 120 captures a set of broadcast signals 102, including the signal 104 that tuner 106 is tuned to. It extracts the audio from each signal, and uses the audio to create fingerprints that will help identify broadcast stations 100 by their audio content. Broadcast monitoring system 120 associates these fingerprints and related data for the monitored stations, and stores them in a broadcast database 130. Database 130 provides data that support matching of audio signals captured by a client 110 with audio signals from any of the monitored stations, in order to identify which station is responsible for broadcasting signal 104.
Client 110 captures the data with one or more sensors, such as microphones for audio. Client 110 has the ability to (1) capture audio signal 108 received from tuner 106, (2) convert the audio signal to audio data, and (3) send a query to detection system 140 through network connection 142. In various embodiments, the audio data is sampled audio, compressed audio, or audio fingerprints. In various embodiments, client 110 performs steps (1-3) only when the user issues a command, automatically at certain intervals, or continuously.
Detection system 140 identifies which broadcast station 100 is the source of the audio signal 108 received by client 110. The audio data received through network connection 142 is converted (if necessary) to audio fingerprints. Detection system 140 searches the broadcast database 130 in an attempt to match client audio fingerprints with live content fingerprints. Detection system 140 sends match information to client 110 through network connection 144. Various embodiments enable and perform different flows for exchanging data between client 110 and detection system 140.
In some embodiments, broadcast signals 102 and 104 include subcarrier data. In an embodiment for FM radio, examples are Radio Data System (RDS) data, or other systems that encode the name of a program, the name and call sign of a broadcast station, and the name of a song. In an embodiment for TV, some examples of subcarrier data are captions, datacasting, and MPEG-2 transport stream data encapsulation. Broadcast monitoring system 120 stores subcarrier data in the broadcast database. Though subcarrier data may be undetectable in audio signal 108, detection system 140 can transfer such data through network connection 144 to client 110.
Audio signal 108 comprises environmental noise mixed with the audio output from tuner 106, and the signal is also possibly affected by distortion. In an embodiment, client 110 performs preprocessing of the signal, such as noise filtering on the audio signal. In an embodiment, the client uploads sampled audio over network connection 142. In another embodiment, the client uploads compressed audio data; in yet another embodiment, the client computes and uploads audio fingerprints derived from the captured audio. In some embodiments, the client uploads other contextual information, such as location, user demographic information, user preferences, etc., to detection system 140 along with the audio data.
In various embodiments, broadcast database 130 is stored on one server, multiple servers, or a data center, and detection system 140 may use a single server or be distributed. In various embodiments, broadcast monitoring system 120 may use a server, or be distributed across multiple servers, as appropriate for the physical locations of broadcast stations 100 and the size of the broadcast database 130, and detection system 140 may use the same servers as broadcast monitoring system 120, or different servers.
In one embodiment, broadcast stations 100 are radio stations. The sensors used by the broadcast monitoring system 120 comprise, for example, an array of programmable radio tuners that capture audio from selected broadcast stations 100. In another embodiment, the broadcast stations 100 may be television stations, and the broadcast monitoring system 120 uses an array of programmable TV tuners, configured to record (at least) audio in a suitable format. In another embodiment, an HD radio tuner also captures, in addition to the signal content, useful metadata such as a program name, or the title of content such as a song or interview. In another embodiment, radio or TV is captured via Internet streams. In every embodiment, broadcast monitoring uses appropriate sensors to capture signals. Much of the station metadata is not broadcast with the signal content and changes rarely, if ever: the station name, broadcast frequency, program guide, and URLs for retrieving the program guide or the recent playlists may be statically stored in the broadcast database, along with appropriate protocols to acquire additional metadata when available, perhaps through other channels, such as a station's website, or datacasting.
In an embodiment, fingerprints 302 have associated timestamps 304. These timestamps are optional because they are somewhat redundant. Since they predictably mimic the passage of time, timestamps can be calculated by tracking the current position in the fingerprint stream from a single initial timestamp. Whether they are derived from stored timestamps 304 or recalculated as just described, timestamps allow the determination of a temporal position with sufficient accuracy that it is feasible to resume an interrupted listening experience precisely from the point of interruption, within acceptable limits such as a fraction of a second. Note that the program offset, if needed, can be computed from the timestamp and station metadata such as the schedule of programs 314.
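The program-offset computation mentioned above can be sketched in a few lines. This is an illustrative assumption about the schedule representation (a time-ordered list of start times and program names), not a disclosed format:

```python
def program_offset(match_timestamp, schedule):
    """Given the universal timestamp of the matched fingerprint and the
    station's program schedule, a list of (start_time, program_name) pairs
    in increasing start order, return the program name and the offset in
    seconds from that program's start."""
    current = None
    for start, name in schedule:
        if start <= match_timestamp:
            current = (start, name)
        else:
            break
    if current is None:
        return None  # timestamp predates the known schedule
    start, name = current
    return name, match_timestamp - start
```

In the "Fresh Air" scenario above, this offset is exactly what lets a resumed stream start 25 minutes into the one-hour program.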
In some embodiments, a broadcast signal includes subcarrier data that encode metadata such as a music title (or song name), an artist name, or the name of the program. When such data is present, it may be decoded and stored as live subcarrier metadata 306. In some embodiments, additional live data 308 may also be stored as part of the live content data 300.
In contrast with live content data, station metadata 310 comprises static parts, and other parts that are updated only infrequently (e.g., a few times per day or per hour). Station metadata 310 includes an identity of the station channel 312, specified at least by name (e.g., KQED) and by frequency (e.g., FM 88.5). In an embodiment, station metadata 310 includes a schedule 314 for the station's programs; a website 316 for the station; the broadcasting range 318 of the station, describing the geographical locations served by the broadcast station; and more, to be described soon. In some embodiments, the broadcasting range is used by a station detection system 140 to restrict its search to local stations (stations that match the user location information provided by the client), or at least to favor local stations over remote stations that might be received via the Internet.
The station metadata 310 for a station may also include links 322 that give access to third party (alternative) sources 222 for the broadcast content, as well as Internet live streaming URLs 324, playlists 326 for music programs, and possibly other data 330 that are not described in this exemplary version of the broadcast station data.
Broadcast station data 250 only stores a range of the most recent data collected from live content, limited by storage availability or, more often, dictated by the needs of an application. In some embodiments, broadcast station data 250 may allocate a fixed amount of storage for each broadcast station 100. One implementation uses circular buffer storage areas, where old data is discarded after a certain amount of time, such as a few minutes, or one day, and the freed space is reused thereafter. The appropriate duration of data retention varies with the system and the application.
When subcarrier information exists in the broadcast signals 102, the broadcast monitoring system 120 is able to extract and decode live subcarrier metadata 306 from the signal. Matching subcarrier metadata 306 between a monitored broadcast signal and a signal captured by a client 110, when both exist, provides a fast way to detect mismatches, and time-approximate matches. Some embodiments do not extract such metadata from the subcarrier data in broadcast signals. Instead, stations may give access to roughly equivalent metadata, such as song titles, via URLs that can be used to retrieve on-demand metadata such as (timed) playlists. Broadcast monitoring system 120 generates live content data 300 that includes fingerprints 402 and optional data such as timestamps 304, subcarrier metadata 306 and additional live data 308. The live content data 300 is sent (presumably streamed) to broadcast database 130.
Regarding station metadata 310, an embodiment of the station metadata 310 has static components, such as station channel data 312 (channel name and frequency), broadcasting range 318, station website 316 and access URLs (322, 324); this data may be fixed, assigned at system setup, and occasionally edited by a system administrator. The station metadata 310 also has components (such as a program schedule 314 and playlists 326) that can be manually edited, or automatically generated. An example of automatically generated metadata (part of the other data 330) is data that tracks the times at which pre-recorded ads are broadcast. These examples illustrate the richness of the station metadata 310. In some embodiments, further details are required, e.g., for full access to third party broadcast content 322. Thus, the contributions of the broadcast monitoring system to the station metadata portion of the broadcast database 130 are discrete, infrequent, and of a relatively small size. This is in sharp contrast with the processes that generate live content data 300. As a result of creating and maintaining both live content data 300 and station metadata 310 using the processes just described, the broadcast database 130 is ready for use in broadcast source matching applications, by a detection system 140.
As a result of matching, scoring and selection, fingerprint matching module 504 determines a best match (or in some embodiments more than one strong match) and forwards the resulting matches to response generation module 506. In some embodiments, ambiguous matches are first disambiguated using context variables such as location, as explained below. In some embodiments, response generation module 506 receives metadata from external information source 508. Following selection, response generation module 506 formats a response based on the match result, and including the metadata, as appropriate for client 110, and sends the response to the client over network connection 144.
According to different embodiments, fingerprint matching module 504 performs its search in various ways. In some embodiments the search proceeds through sets of live content fingerprints 402 in order, then through fingerprints within the set in order over a reasonable time range. The order of fingerprints may be simply chronological in a forward or reverse direction. Alternatively, shorter fingerprint segments may be ordered for search according to various criteria. Some embodiments search fingerprints for common jingles or theme songs first. In some embodiments, sets of live content fingerprints 402 are searched sequentially in order, and in some embodiments live content fingerprints are searched simultaneously on different processors.
In some embodiments, response generation module 506 associates live content fingerprints 302 or parts of station metadata 310 with popularity and user preference statistics. Other embodiments, instead, associate demographic data, derived from contextual or other information. For example, if detection system 140 is aware of a user's age, the fingerprint matching module 504 may give a higher priority to searches in the monitored broadcast fingerprint database 130 to stations known to be popular among that demographic.
The association weights their detection priority, which makes earlier detection more likely, and boosts the performance of detection system 140. According to some embodiments, broadcast database 130 makes popularity and preference statistics accessible via the other data 330 component of the broadcast station data 250, and provides the data to detection system 140 along with fingerprint data. Preference statistics can be gathered from user profile information or curated by a database owner. Popularity statistics can be derived from the number of searches that hit each broadcast station; other statistics are available from third parties. Such data allows fingerprint matching module 504 to select a search order that minimizes computation. Some embodiments accumulate popularity statistics by counting the number of query results for each station. Some embodiments access such data from other sources, such as Nielsen ratings.
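The popularity-weighted search ordering described above reduces expected matching work because queries most often hit popular stations. A minimal sketch (illustrative only; `hit_counts` is an assumed per-station tally of prior query hits):

```python
def search_order(stations, hit_counts):
    """Order candidate stations so that the most frequently matched (most
    popular) stations are searched first, which makes earlier detection
    more likely and reduces expected computation."""
    return sorted(stations, key=lambda s: hit_counts.get(s, 0), reverse=True)
```

Stations with no recorded hits fall to the end of the search order but are still searched, so recognition remains exhaustive.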
Detection system 140 receives user data, and in some embodiments contextual information, from network connection 142. Some examples of contextual information are GPS location, native language, primary spoken language, age, gender, user name or account name, and user preferences regarding broadcasts. In one embodiment, the detection system may rely on user profile information included with the contextual information and a history of the user's activity to prioritize searches of broadcasts associated with features of the user's profile and behavior. Some such behavior is a history of query results from a particular device. Other such behavior is identifiable from one or more online or social media profiles connected with the device. Profiles can include data such as online message posting, email content, or the content of conversations.
In various embodiments, contextual information is helpful for restricting or prioritizing the set of possible broadcast station data 250 to search in the monitored broadcast database 130. For example, the client's GPS location is useful for detection system 140 to focus its search on broadcast stations that are available within certain geographical areas. Filtering broadcast stations by location and other contextual information thereby improves both the speed and accuracy of broadcast station recognition.
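The geographic filter can be sketched with the broadcasting range 318 modeled, as an illustrative assumption, by a center coordinate and a coverage radius; a station is retained when the client's GPS location falls within that radius (great-circle distance via the haversine formula):

```python
from math import radians, sin, cos, asin, sqrt

def _distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, haversine formula."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # Earth radius ~6371 km

def local_stations(client_lat, client_lon, stations):
    """Keep only stations whose broadcasting range covers the client's GPS
    location; `stations` maps a station id to (lat, lon, radius_km)."""
    return [sid for sid, (lat, lon, radius) in stations.items()
            if _distance_km(client_lat, client_lon, lat, lon) <= radius]
```

In a prioritizing (rather than restricting) embodiment, the same distance could instead be used as a sort key that favors nearby stations without excluding remote Internet-receivable ones.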
In some embodiments, client 110 performs fingerprinting. In some embodiments, detection system 140 performs fingerprinting. The detection system 140 may succeed or fail to produce a match. It encodes that information with the response that it sends to the client through network connection 144. According to some embodiments, a failure response includes information about the reason for the failure. When fingerprint matching module 504 succeeds in finding a match, the response generation module 506 provides relevant information to client 110 over network connection 144. What information is relevant varies across embodiments. Some examples of relevant information are the identity of the sampled content, the identity of the broadcast stream, and metadata relevant to the identified content, such as a link to the archived content or a link to a streaming program.
Detection system 140 receives the query, computes a fingerprint from the segment of audio data, and invokes fingerprint matching module 504, which compares the computed fingerprint to those in a broadcast database. The fingerprint matching produces a match result and sends it to response generation module 506. Response generation module 506 reads metadata from information source 508 and formats a response as appropriate for the client, which displays a corresponding response on a user interface. Responses to successful matches comprise a message with metadata from the broadcast database regarding the identity of the match. The identity of the match may include an ID of a broadcast station. It may also include the name of the program running on the broadcast station. It may also include the name of a song playing on the broadcast station. The information sent with the identity of the match can come from stored program schedule information or from information detected by the detection system. Responses to an unsuccessful fingerprint match indicate the failure. An unsuccessful match response, according to some embodiments, is, “Could not find a match”. An example of a successful match response for one particular use case is:
“Show Host: Terry Gross
Show Name: Fresh Air
Broadcasting Channel: WFPK 91.9 Radio Louisville
Show Time: 7-9 pm EST Wednesdays”
Various embodiments and various use cases produce different match results and metadata. For example, a radio station may offer special opportunities for concert tickets, as well as various promotions, ads and incentive programs, along with the station identification data. Other stations might have links to fund-raising opportunities and other URLs.
If audio signal 108 matches more than one database fingerprint, such as an HD version and an analog version of a radio station, the match result contains multiple station channels. Client 110 provides for the user to select one. In some embodiments, since some radio stations broadcast the same content on different frequencies from different towers with some overlap between their broadcasting ranges, client 110 may automatically select the frequency with the strongest signal.
In some embodiments, the data source is on the same server as the detection system. In some embodiments, the data source resides on a server closely associated with the detection system. In some embodiments the data source is not closely associated with the detection system.
Note that a certain amount of time is required for broadcast monitoring system 120 to capture broadcast signals 102, and generate their fingerprints, for broadcast database 130 to receive and store the fingerprints, and for detection system 140 to retrieve the fingerprints from broadcast database 130. Therefore, for live broadcasts, it is necessary for client 110 or detection system 140 to allow a delay between the fingerprints of the client-side audio and the server-side audio fingerprints when fingerprint matching module 504 performs its comparison. A delay of 15 seconds is more than enough for most embodiments.
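The delay tolerance can be sketched as a time window applied during matching; in this illustrative sketch (assumed structure, not the disclosed implementation) the server-side fingerprints are indexed by station and universal timestamp, and candidates are accepted only within the window around the client's capture time:

```python
def match_with_delay_window(client_ts, client_fp, station_index,
                            max_delay_seconds=15.0):
    """Search server-side fingerprints within a delay window around the
    client's capture time; as noted above, a tolerance of about 15 seconds
    is more than enough for most embodiments."""
    candidates = []
    for (station, ts), fp in station_index.items():
        if abs(client_ts - ts) <= max_delay_seconds and fp == client_fp:
            candidates.append((station, ts))
    return candidates
```

A production system would use an inverted index on fingerprint values rather than a linear scan; the window logic, however, would be the same.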
It should be noted that the process steps and instructions can be embodied in software, firmware or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
The operations herein may also be performed by an apparatus. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.
While the invention has been particularly shown and described with reference to a preferred embodiment and several alternate embodiments, it will be understood by persons skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the invention.
Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the claims that follow.
This application claims priority from U.S. Provisional Application No. 62/153,335, filed on Apr. 27, 2015, entitled, “SYSTEM AND METHOD FOR CONTINUING AN INTERRUPTED BROADCAST STREAM,” (Attorney Docket No MELD 1029-1), naming inventors Kathleen McMahon, Victor Leitman, Bernard Mont-Reynaud, and Regina Collecchia. This application is also related to U.S. application Ser. No. 13/401,728 filed on Feb. 21, 2012, entitled “SYSTEM AND METHOD FOR MATCHING A QUERY AGAINST A STREAM”, naming inventors Keyvan Mohajer, Bernard Mont-Reynaud, and Joe Aung. Both applications mentioned above are hereby incorporated by reference.