Portions of the documentation in this patent document contain material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office file or records, but otherwise reserves all copyright rights whatsoever.
Traffic information providers have struggled to convey traffic information in a usable and compelling way to television audiences. Typically, a traffic information provider shows a video image from a camera, or a similarly simple graphic with an arrow pointing to a general area, to convey the current state of traffic conditions.
Graphics used to display traffic flow conditions or other traffic information to a broadcast television or cable network audience are well known in the art. Generally, static graphic images are manually placed on a static background map or web page, which is then rendered into an NTSC or similar signal for delivery to viewers.
Two-dimensional (“2D”) displays limited to a static presentation of colors and objects, with no motion to depict current road conditions, are also known in the art. Because a static web image must first be created from traffic flow data to generate the 2D display, the status colors do not change to reflect real-time, actual road conditions. Instead, a user manually places selected icons on the display map when it is desirable to add a visual traffic feature or item.
A known traffic display system designed and utilized by Mobility Technologies, Inc., converts real-time traffic flow data from a proprietary system into an animated, color-coded 2D video format, such as NTSC, SDI, AVI, DPS or MPEG. This product is described in co-pending U.S. patent application Ser. No. 10/447,530, filed May 29, 2003, entitled “Method of Displaying Traffic Flow Data Representing Traffic Conditions,” the disclosure of which is incorporated herein by reference.
U.S. patent application Ser. No. 10/611,494, filed Jun. 30, 2003, entitled “Method of Creating a Virtual Traffic Network,” the disclosure of which is incorporated herein by reference, describes a Virtual Geo-Spatial Traffic Network (“VGSTN”) which provides spatially oriented traffic data and information, including flow data, incidents, events, congestion and news all synaptically integrated in a unified virtual network. The contents of the VGSTN are represented in several ways, including a 2D animated traffic flow display representation that is output in various video formats. Such representations are limited by a 2D view of the traffic conditions as well as the inability to cover a large road system or portion thereof without overwhelming a viewer.
Although integrated traffic systems have been added to show animated, real-time traffic flow conditions, none of the existing methods of displaying traffic conditions has the ability to accurately display the traffic conditions of an entire region or road system in a detailed, multi-dimensional fashion. Traffic information providers have struggled with a method to convey a significant amount of information that allows the viewer to quickly ascertain the traffic conditions of a road system relative to various landmarks, and with the viewer's perspective relative to the direction of travel.
Additionally, although three-dimensional (“3D”), “fly-through” displays have become more popular in “produced” television segments (such as weather reporting), these 3D displays have not been leveraged by traffic information providers. 3D technology allows for a more compelling traffic product, and also gives the traffic information provider the ability to convey several pieces of traffic information at once, including traffic flow data and incident data, across an individual roadway or entire road system within a single television broadcast segment.
It would be desirable to convey traffic conditions in an animated or non-animated 3D view, to display traffic conditions in a geo-spatially correct virtual road network, and to display traffic conditions based on proximity of that data to the current view of the display.
The present invention allows for proximity settings of traffic data, thereby managing the traffic information shown to the viewer in an easily viewed way. The present invention also allows animated and non-animated billboards for advertisements, along with other signage, to be presented within a 3D graphic display. Additionally, using 3D technology allows the television viewer to enjoy a portion of the news broadcast that is typically considered bland and lifeless.
Briefly stated, according to a first aspect of the present invention, a computer-implemented method of displaying traffic conditions on a road system includes creating a 3D graphical map of the road system that includes one or more segments. The status of the segments on the 3D graphical map is determined. The status of the segments corresponds to traffic data associated with each segment. A 3D model of the road system is created by combining the 3D graphical map and the status of the segments.
According to a second aspect of the present invention, a 3D model representing traffic conditions on a road system includes a 3D graphical map of the road system that includes one or more segments of the road system. Traffic flow on the 3D graphical map is associated with the segments. The traffic flow corresponds to traffic data associated with the segments.
According to a third aspect of the present invention, a computer-implemented method of displaying traffic conditions on a road system includes creating a graphical 3D map of the road system that includes one or more 3D point locations of the road system. Traffic data associated with the 3D point locations on the graphical map is determined. A 3D model of the road system is created by combining the graphical 3D map and the traffic data associated with the 3D point locations.
According to a fourth aspect of the present invention, a 3D model representing traffic conditions on a road system includes a graphical 3D map of the road system that includes one or more 3D point locations of the road system. Traffic data on the 3D graphical map is associated with the 3D point locations. The traffic data is combined with the 3D graphical map.
According to a fifth aspect of the present invention, a computer-implemented method of displaying traffic conditions on a road system includes creating a 3D graphical map of the road system that includes one or more segments and one or more 3D point locations of the road system. The status of the segments on the 3D graphical map is determined, and corresponds to traffic data associated with each segment. Traffic data associated with the 3D point locations on the graphical map is determined. A 3D model of the road system is created by combining the 3D graphical map, the status of the segments and the traffic data associated with the 3D point locations.
The foregoing summary, as well as the following detailed description of the invention, will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there are shown in the drawings embodiments which are presently preferred. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
In the drawings:
The 3D traffic display system (“TV3D”) according to the present invention provides an integrated, single 3D dynamic view of traffic flow conditions on a road system. The TV3D utilizes a 3D modeling tool and includes the ability to define camera/route paths through the model for production of a traffic “scene” in a known video format (e.g., NTSC, SDI, MPEG, DPS, AVI, etc.). Preferably, the TV3D obtains integrated traffic data reflecting incidents, congestion, and flow data from a VGSTN to generate the 3D display of integrated traffic flow conditions. Preferably, the TV3D is installed at a client location so that the display, or scene, is rendered for traffic reports. The scenes may include landmarks, which include, but are not limited to, roadways, structures, places, and terrain, as well as animated and/or non-animated billboards for advertising. The landmarks are preferably modeled and rendered as part of each scene. The scenes can be rendered in real-time and/or non-real-time, and may contain animated objects therein.
According to a preferred embodiment of the present invention, the TV3D generates a 3D traffic display, generally designated 10, as shown in
Referring to
When the user opens the 3D Studio Max software, a Traffic Pulse 3D user interface 210 is displayed (see
The 3D Studio Max software includes user interface controls that allow the producer to perform the following capabilities:
The fly-through scenes utilized with the TV3D are designed to correspond to a specific metropolitan area. Desired fly-through routes are selected based on their importance for traffic conditions (typically high usage, significant traffic issues, etc.) and for the availability of notable landmarks. The scene characteristics include the specific altitude, angle, speed and direction of the view of the fly-through. Once the route is planned and the landmarks along the route have been specified, the route and landmarks are plotted using mapping software having planned routes 220 (see
To produce a scene, the major roads of the selected metro area are imported into the Studio Max software to create a digitally plotted route system 230. For example, the digitally plotted route system 230 shown in
As shown in
Downloading Traffic Data
Referring to
As shown in
Referring to
In the preferred embodiment, the TV3D requires access to a VGSTN's XML interfaces over a communication link such as standard TCP/HTTP Internet connection. The VGSTN's XML feed is accessed by the TV3D via various scripting languages and integrated into a 3D modeling tool, such as 3D Studio Max. For example, the 3D Studio Max application has an embedded scripting language that integrates the traffic data 205 from the VGSTN into the TV3D.
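The scripted access described above can be illustrated with a minimal sketch. The feed URL and the XML element layout below are hypothetical placeholders, since the actual VGSTN feed schema is not specified here; the sketch shows only the general pattern of fetching an XML feed over a standard HTTP connection and extracting per-link values from it.

```python
# Illustrative sketch only: the XML schema (a <feed> of <link> elements
# with "id" and "congestion" attributes) is a hypothetical stand-in for
# the actual VGSTN feed format.
import urllib.request
import xml.etree.ElementTree as ET

def parse_link_congestion(xml_text):
    """Return {link_id: congestion_level} parsed from feed XML text."""
    levels = {}
    for link in ET.fromstring(xml_text).iter("link"):
        levels[link.get("id")] = link.get("congestion")
    return levels

def fetch_link_congestion(feed_url):
    """Download the feed over a standard TCP/HTTP connection and parse it."""
    with urllib.request.urlopen(feed_url) as resp:
        return parse_link_congestion(resp.read())
```

Separating the parsing step from the HTTP fetch lets the same logic be reused against a cached or downloaded copy of the feed.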
It should be noted, however, that the present invention does not require access to a VGSTN to produce a 3D display to generate fly-through scenes of traffic conditions on a road system. Accessing a VGSTN and integrating the traffic data therefrom simply increases the sophistication of the traffic flow data shown in the 3D display since the VGSTN includes synaptically integrated traffic data. However, any traffic data file (e.g., real-time, non-real-time, not-integrated traffic data) may be used to create a 3D display showing traffic flow conditions according to the present invention.
Illustrating Congested Traffic Flow
A specific 3D fly-through illustrates traffic conditions on a specific route, such that congestion along the route is shown as the camera or view travels or “flies” along the route. As discussed above, the route path was initially specified when the scene was created in the 3D Studio Max software. Cars shown on the route are preferably animated such that they “travel” along the route path, changing colors to indicate the congestion at any given point on the route. Animated cars or other objects may move at speeds indicative of their respective congestion level. Thus, the TV3D system must determine at what points along the route path the cars should change color to indicate congestion on a specified portion, or link, in the route. Cars turn a specific color (for example, red for a jammed link) at the start of a congested link and turn back to green at the end of the congested link.
Where a link begins and ends is determined by the amount of the full route path it occupies and its position in the path. For example, referring to
The 3DTG script calculates the path percentage for the beginning and end of each link by using the length of each link (available in the data file 268 in miles) and the overall link ordering (e.g., the listing order of the links in the data file corresponding to the geographic ordering of those links). As shown in
Accordingly, the start point path percent and the end point path percent are calculated and stored for each link in the route path 310.
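The path-percentage computation described above can be sketched as follows. Representing the route file as a simple list of link lengths in geographic order is an illustrative assumption; the actual data file 268 carries additional fields.

```python
# Illustrative sketch: compute each link's start and end point as a
# fraction of the full route path, using the link lengths (in miles)
# taken in their geographic listing order.
def link_path_percents(link_lengths_miles):
    """Return a (start_pct, end_pct) pair for each link in route order."""
    total = sum(link_lengths_miles)
    percents = []
    covered = 0.0
    for length in link_lengths_miles:
        start = covered / total        # fraction of the path already covered
        covered += length              # advance along the route
        percents.append((start, covered / total))
    return percents
```

For a four-mile route made of one-mile, one-mile, and two-mile links, the links occupy 0-25%, 25-50%, and 50-100% of the path, respectively.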
The color of the cars is changed as they enter and leave each congested or partially congested link, based on the previously calculated link start and stop path percentages. Changing a characteristic (such as color) of an object (such as a car) is accomplished by using a key frame. Since a scene for the route path is made up of individual frames, the color of an object at a specific frame number may be changed using key frames, and the color can then be changed again at a later point. To make the desired color changes (entering and exiting the congested link), a key frame number for each change is calculated.
The cars within a scene are assigned to follow the route path and cover the complete path distance over the entire scene length. Therefore, a percentage of the complete route path is directly correlated to a percentage of the entire amount of scene frames for that particular scene. Continuing the previous example, the entire scene can be any length. However, a common choice is to use a ten second animated scene to illustrate the traffic conditions. Thus, at thirty frames per second, 300 individual frames make up the entire scene. Referring to
To force an abrupt change in color from red back to green using the key frame concept, an additional key frame must be set on the 38th frame to indicate that the car is red. This is due to the fact that the key frames simply specify a color value at a particular frame. If a red key were specified at the first frame and a green key were specified at the 39th frame, the car would start out red and then would slowly transform to green by the 39th frame. By setting a red color key on the 38th frame in addition to the other key frames, the car starts out red and remains completely red until the 38th frame. On the 39th frame, the car becomes green.
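For an isolated congested link, the key-frame placement works out as sketched below. The frame counts and color names follow the example in the text (a 300-frame scene with a link ending at the 39th frame); the function itself is an illustrative sketch rather than actual 3DTG script code. A "hold" key one frame before each change prevents the gradual blend that key-frame interpolation would otherwise produce.

```python
# Illustrative sketch: key frames for one congested link on a car that
# covers the full route path over the scene. Hold keys one frame before
# each change force an abrupt color switch instead of a slow blend.
def congestion_key_frames(start_pct, end_pct, total_frames,
                          base_color="green", jam_color="red"):
    """Return (frame_number, color) key frames for one congested link."""
    start_frame = round(start_pct * total_frames)
    end_frame = round(end_pct * total_frames)
    return [
        (start_frame - 1, base_color),  # hold base color until the link starts
        (start_frame, jam_color),       # snap to the congestion color
        (end_frame - 1, jam_color),     # hold congestion color to the link end
        (end_frame, base_color),        # snap back at the end of the link
    ]
```

With a ten-second scene at thirty frames per second (300 frames) and a link ending 13% of the way along the path, the final pair of keys lands on frames 38 and 39, matching the red-to-green example above.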
However, as shown in
For example, as shown in
When adding key frames to the scene, the congestion status of the previous link in the route path must be verified. If the previous link has the same congestion type as the current link, the car color should stay the same over both links. Thus, at the end of the previous link, there is no need for a key frame to turn the car back to green. Since the car remains the congestion color, there is also no need for a key frame at the beginning of the next link to turn it to the corresponding congestion color. Similarly, when the previous link is a different congestion type, there is no need to turn back to green at the end of the previous link. However, the cars are nonetheless turned a different color at the start of the next link. Thus, the same checking is completed when considering the congestion type of the next link. Additionally, when the starting path percentage of the car is in the middle of a congested link, a special key frame must be added to turn the car the color of the congestion at frame zero. Furthermore, since the cars travel the full complete path over the scene length independent of the starting location, wrapping considerations are taken into account when looking at the “previous” and “next” link.
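The boundary checking described above reduces to comparing each link's display color with that of the (circularly) previous link: a key frame is needed only where the color actually changes. The sketch below is illustrative and not the actual 3DTG script; it handles the wrapping consideration by treating the last link as the predecessor of the first.

```python
# Illustrative sketch: find the link boundaries where a car's color must
# change. The route is treated as circular, so the "previous" link of
# the first link is the last link (the wrapping case noted above).
def boundary_changes(link_colors):
    """Return indices of links whose color differs from the previous link's."""
    changes = []
    n = len(link_colors)
    for i in range(n):
        if link_colors[i] != link_colors[i - 1]:  # i - 1 wraps to n - 1 at i = 0
            changes.append(i)
    return changes
```

For a route colored green, red, red, green, only the boundaries entering link 1 and entering link 3 need key frames; the red-to-red boundary inside the congested stretch needs none.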
The foregoing discussion and examples assume that it is known which links are congested. In practice, the 3DTG script reads the dynamic congestion data from the downloaded link congestion information file 285; links that contain moderate or heavy congestion are listed in this file. As each link is read, the link ID of the congested link is searched for in the configuration information previously read from the route information data file 265. When a match occurs, the congested link has been found, and the processing described previously is performed to add key frames for each car on the path to indicate this congested link. This is repeated for each congested link.
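The matching step amounts to a simple lookup of each congested link ID against the route configuration. The dictionary layout below is an illustrative assumption about how the route information might be held in memory, not the actual 3DTG data structures.

```python
# Illustrative sketch: intersect the congested link IDs from the
# congestion data file with the route configuration read earlier,
# yielding only the congested links that lie on this route.
def find_congested_route_links(route_links, congested_ids):
    """route_links: {link_id: link_config} from the route information file.
    congested_ids: link IDs listed in the link congestion file.
    Returns the configurations of congested links on this route."""
    return [route_links[link_id] for link_id in congested_ids
            if link_id in route_links]
```

Congested links that do not appear in the route configuration are simply skipped, since they lie outside the route being rendered.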
Additionally, those skilled in the art will recognize that the objects (i.e., cars) shown in the scene which represent different congestion levels need not be animated as discussed above. Rather, the cars may be static, having different colors within each different link without departing from the spirit and scope of the present invention.
Illustrating Traffic Incidents
The 3DTG script also places incident markers in the scene based on dynamic data in the downloaded traffic incident data file 295. The 3D Studio Max scene file 206 is structured such that the coordinates of the scene correspond to the latitude and longitude of the actual, physical area being shown. For example, scenes in North America are designed such that all of the objects in the scene exist in the quadrant with positive “y” and negative “x” coordinates, corresponding to the negative longitude and the positive latitude in North America. Template incident icons stored outside the camera view of the scene are copied and placed at the correct location based on the latitude and longitude of the incident. The specific icon that is copied depends on the type of incident. For example, if the incident type is “construction,” a construction barricade icon is copied. Other incident icons include car accident, truck accident, disabled vehicle, fire location and event. The icon border is changed based on the criticality of the incident. The criticality is supplied in the data file 295.
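Under the coordinate convention described above, positioning an incident reduces to using its longitude and latitude directly as scene x and y. The sketch below is illustrative; copying the appropriate template icon into place would be handled by the modeling tool's scripting interface.

```python
# Illustrative sketch: map a North American incident's geographic
# position to scene coordinates under the convention that scene x
# carries the (negative) longitude and scene y the (positive) latitude.
def incident_scene_position(latitude, longitude):
    """Return the (x, y) scene coordinates for an incident marker."""
    x = longitude  # negative in North America -> negative-x half of the quadrant
    y = latitude   # positive in North America -> positive-y half of the quadrant
    return (x, y)
```

An incident in Philadelphia (latitude 39.95, longitude -75.16) therefore lands in the positive-y, negative-x quadrant of the scene.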
The 3DTG script further includes an interactive user interface 340 that cycles through each incident that has been placed in a particular scene (see
Illustrating Point Traffic Speeds
In some metropolitan areas, the flow data from roadside sensors is available. The 3D Studio Max scenes are created with sensor speed display points 12, 102 in the scenes, as shown in
The present invention provides numerous advantages including:
Additionally, according to the present invention, the viewer has the ability to virtually “fly” through the display to tell the story of current traffic conditions. Such fly-throughs provide a method for conveying traffic information, and any single fly-through may contain multiple directions and angles. Although known traffic condition display methods depict a single view of an area, leading to illogical jumping around the traffic map as the traffic conditions of a route are presented, the fly-through of the present invention shows the viewer the traffic conditions from the beginning to the end of a desired route. This significantly improves a viewer's ability to visualize and understand what is occurring along a daily commute.
Additionally, the TV3D according to the present invention has the ability to utilize the integrated traffic data within an existing VGSTN and the related data it maintains, including:
Prior to installing the system at a client location, the TV3D system is preferably set up in the following manner:
The present invention may be implemented with any combination of hardware and software. If implemented as a computer-implemented apparatus, the present invention is implemented using means for performing all of the steps and functions described above.
The present invention can be included in an article of manufacture (e.g., one or more computer program products) having, for instance, computer useable media. The media has embodied therein, for instance, computer readable program code means for providing and facilitating the mechanisms of the present invention. The article of manufacture can be included as part of a computer system or sold separately.
It will be appreciated by those skilled in the art that changes could be made to the embodiments described above without departing from the broad inventive concept thereof. It is understood, therefore, that this invention is not limited to the particular embodiments disclosed, but it is intended to cover modifications within the spirit and scope of the present invention as defined by the appended claims.
This application claims the benefit of U.S. Provisional Patent Application No. 60/500,857 filed Sep. 5, 2003 and entitled “Method of Displaying Traffic Flow Conditions Using a 3-D System.”
Number | Name | Date | Kind |
---|---|---|---
5402117 | Zijderhand | Mar 1995 | A |
5539645 | Mandhyan et al. | Jul 1996 | A |
5583494 | Mizutani et al. | Dec 1996 | A |
5594432 | Oliva et al. | Jan 1997 | A |
5673039 | Pietzsch et al. | Sep 1997 | A |
5774827 | Smith, Jr. et al. | Jun 1998 | A |
5812069 | Albrecht et al. | Sep 1998 | A |
5845227 | Peterson | Dec 1998 | A |
5850352 | Moezzi et al. | Dec 1998 | A |
5889477 | Fastenrath | Mar 1999 | A |
5926113 | Jones et al. | Jul 1999 | A |
5959577 | Fan et al. | Sep 1999 | A |
5982298 | Lappenbusch et al. | Nov 1999 | A |
5987374 | Akutsu et al. | Nov 1999 | A |
5987377 | Westerlage et al. | Nov 1999 | A |
6107940 | Grimm | Aug 2000 | A |
6150961 | Alewine et al. | Nov 2000 | A |
6151550 | Nakatani | Nov 2000 | A |
6161092 | Latshaw et al. | Dec 2000 | A |
6209026 | Ran et al. | Mar 2001 | B1 |
6295066 | Tanizaki et al. | Sep 2001 | B1 |
6401027 | Xu et al. | Jun 2002 | B1 |
6452544 | Hakala et al. | Sep 2002 | B1 |
6594576 | Fan et al. | Jul 2003 | B2 |
6728628 | Peterson | Apr 2004 | B2 |
6785606 | DeKock et al. | Aug 2004 | B2 |
6845316 | Yates | Jan 2005 | B2 |
6862524 | Nagda et al. | Mar 2005 | B1 |
6911918 | Chen | Jun 2005 | B2 |
6989765 | Gueziec | Jan 2006 | B2 |
7010424 | Zhao et al. | Mar 2006 | B2 |
7116326 | Soulchin et al. | Oct 2006 | B2 |
7161497 | Gueziec | Jan 2007 | B2 |
7221287 | Gueziec et al. | May 2007 | B2 |
7274311 | MacLeod | Sep 2007 | B1 |
20020158922 | Clark et al. | Oct 2002 | A1 |
20030171870 | Gueziec | Sep 2003 | A1 |
20040083037 | Yamane et al. | Apr 2004 | A1 |
20040243533 | Dempster et al. | Dec 2004 | A1 |
20050033506 | Peterson | Feb 2005 | A1 |
20050052462 | Sakamoto et al. | Mar 2005 | A1 |
20050099321 | Pearce | May 2005 | A1 |
20050099322 | Wainfan et al. | May 2005 | A1 |
20060253245 | Cera et al. | Nov 2006 | A1 |
Number | Date | Country |
---|---|---|
11014382 | Jan 1999 | JP |
2002074404 | Mar 2002 | JP |
2005265641 | Sep 2005 | JP |
2006242888 | Sep 2006 | JP |
3841924 | Nov 2006 | JP |
2007071749 | Mar 2007 | JP |
2007102238 | Apr 2007 | JP |
2007161198 | Jun 2007 | JP |
2008183221 | Aug 2008 | JP |
Number | Date | Country
---|---|---
20050143902 A1 | Jun 2005 | US |
Number | Date | Country
---|---|---
60500857 | Sep 2003 | US |