NAVAIR’s Information Fusion Center
What happens when advances in modern electronics mean that sensors like imaging-class radars, advanced day/night cameras, and even more exotic items like hyperspectral sensors and laser radars are no longer very expensive items mounted on dedicated platforms? What happens when a wide array of video cameras, surveillance turrets, ubiquitous radar capabilities, and other systems built into vehicles, aircraft, ships, and unmanned vehicles provides an explosion of sensor data, just as a range of databases covering human patterns and physical infrastructure also appears on the scene, in numbers?
In part, it is similar to what happened when the Internet went from an academic platform to a global phenomenon. The good news was, so much more information became available. The bad news was, finding the things we were looking for started to involve a lot more work.
The military has this same problem with sensors, only worse. Most of the time, they’re not necessarily looking for discrete answers, but for an overall picture of what’s going on. That becomes hard as sensors move from a small number deployed on dedicated platforms, to hundreds or thousands of them employed in platforms of every shape and size. For some applications, like domestic security or protecting certain key areas, it gets even harder. The need to include physical surveillance, communications surveillance, information about human activities, and improved geo-awareness all combine to produce a maddeningly complex task.
Moore’s Law of doubling computing power, and Metcalfe’s Law of growing network value, created this data explosion. Several cycles later, the military is hoping those same advances can begin to offer assistance, by turning massive arrays of data into coherent systems that help humans respond at the speed of events. The first step was data fusion. The next step was sensor fusion. The third step is information fusion… and the US Navy has just set up a center to work on it.
- Situation Awareness vs. Information Awareness
- Information Fusion: A Scenario Illustration
- The NAWCWD’s Information Fusion Center
- IF Center: Contracts and Related Events
Situation Awareness vs. Information Awareness
Information processing capabilities need to grow, in order to handle the complexity these twin explosions have created. As the amount of available data grows, that becomes harder and harder.
Militaries have always had this problem, even when all the “sensors” were human foot soldiers. Unsurprisingly, all branches of the US military, and even domestic security agencies like the Coast Guard, have data and sensor fusion projects under development and in operation.
A wide array of projects qualify, because there are various levels of fusion, depending on the specific tasks and goals involved. The Link 16 standard embedded in the MIDS-LVT terminals carried by fighters is one simple form of data fusion: a target seen and identified by any fighter jet in a formation, or by any linked ground station or ship, is seen and identified for all. As a step up from Link 16, the USA’s F-22 and forthcoming F-35 have sensors of various kinds embedded in the aircraft’s base airframe, collecting information about aircraft, radars, and enemy forces. How do you display all of that information so the pilot can just fly the aircraft according to one coherent picture? One that makes the range and intensity of risks around the aircraft clear, rather than forcing the pilot to figure it all out while flying the plane, constantly punching a keypad to switch from one read-out to another?
Even higher up the scale, the SSDS combat system on board some new US Navy ships aims to use sensor fusion plus limited automation, in order to shorten ships’ response time to supersonic cruise missiles.
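Conceptually, the shared tactical picture behind systems like Link 16 boils down to correlating reports of the same target from different sensors into one track everyone sees. A minimal Python sketch, with invented platform names and a crude distance gate standing in for real correlation logic:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Track:
    source: str   # reporting platform (hypothetical IDs)
    x: float      # position, km
    y: float
    kind: str     # classification, e.g. "unknown_air"

def fuse_tracks(tracks, gate_km=1.0):
    """Merge reports that fall within gate_km of an existing fused track,
    so every platform sees one shared picture instead of duplicates."""
    fused = []
    for t in tracks:
        for f in fused:
            if hypot(t.x - f.x, t.y - f.y) <= gate_km:
                f.source += "+" + t.source   # remember contributing sensors
                break
        else:
            fused.append(Track(t.source, t.x, t.y, t.kind))
    return fused

reports = [
    Track("F-18_1", 10.0, 5.0, "unknown_air"),
    Track("DDG_2",  10.2, 5.1, "unknown_air"),  # same target, second sensor
    Track("E-2_3",  40.0, 7.0, "unknown_air"),
]
picture = fuse_tracks(reports)
print(len(picture))  # 3 raw reports collapse into 2 fused tracks
```

Real systems use far more sophisticated gating, kinematics, and identity logic; the point is simply that many raw reports collapse into a smaller shared picture.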
As difficult as those tasks may seem, they pale in comparison to the challenge of performing similar functions away from open seas, or conventional battlefields.
The problem is this: What do traffic patterns and sensor images really mean?
An F-35B fighter, or its carrying LHD ship, can apply standard “videogame logic”: Try to keep any and all potential threats at a distance, with special attention paid to a known array of serious threats.
On the other hand, if you’re commanding a key American base in a country racked by civil war, that logic doesn’t apply in the same way. Walled-off disengagement will be fatal to the mission of stabilizing the country, so mere situational awareness won’t suffice. Which is why current counter-insurgency doctrine stresses information awareness that includes local human networks, motivations, and patterns of activity.
Available data for our imaginary base commander may include extensive use of biometrics, constant surveillance and associated archives from RAID/GBOSS sensors near the base, UAV data from the nearby area, mapping of firefight locations in the surrounding area over the last several months, human network analysis carried out by military intelligence teams, “human terrain” reports from embedded social scientists, and more.
Like a ship commander, our base commander can go from one data source to another, sequentially, and rely on a human brain to synthesize it and make sense of it all. That will always be happening. As situations and environments become more complex, however, and data sources explode, the humans start to need help.
Unfortunately, mere data fusion or sensor fusion is going to be of rather limited help to our base commander. It might tell the commander that a larger than usual number of locals building an outer section of the base are absent from the check-in biometric scans today. It won’t say that they all hail from the same tribe, or note that this tribe was involved in a number of serious firefights with American forces 3 months ago, before a truce was worked out, or that local gun prices have been rising due to scarcity. Hopefully, the system will add that tomorrow is the commonly-celebrated anniversary of an important event in tribal history, and that firing guns in the air is often part of local celebrations.
In the end, a human being still has several decisions to make. The difference is, the human is alerted to the possibility that there are important decisions to be made.
Or, on the domestic front, ask what kinds of systems and sensors you would have needed at your fingertips, in order to sort the pattern from the clutter and flag the ship carrying the perpetrators of the November 2008 Mumbai massacres as a potential threat, before the attack happened. Or to see the next threat coming.
Would sensors alone help? Not unless they also included mapped overlays showing where key infrastructure was located, a daunting task for any advanced society. While useful, even that wouldn’t be enough. Could sensors, plus mapping with key data overlaid, plus a system that can understand relationships and certain characteristics, plus useful a priori rule sets, help you sort the chaff from the rice?
That can never be done with 100% certainty. At present, however, even 20% certainty, or earlier alerts of potential trouble, would represent a huge improvement.
Information Fusion: A Scenario Illustration
To get there, you’d need 4 types of capability. They will be presented within a fictional domestic security scenario that’s designed to illustrate the concepts.
Situational awareness that can see a potential threat.
Sensors help, and multiple types of correlated sensors can be very useful if they work to broaden the scope of awareness, make evasion harder, or answer different sets of potential questions.
Simple scenario: A passenger aircraft has veered off of its filed flight plan slightly, and reports minor engine trouble.
Begin to set context: determine what it’s doing, or in some cases not doing.
This step, and the next step, are where “semantic ontologies” that help add context to what’s happening are useful. Sensors can tell you that someone is smiling and making a signal with their thumb and forefinger, and that you’re in Rio de Janeiro. A good semantic ontology can tell you that this signal may mean “OK” in Boston, but it means something very unflattering in Brazil. It can also tell you that the area around a local bar is a higher risk area than normal, hence worthy of increased weight put on possible signs of imminent violence, and that this signal is one of them.
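A “semantic ontology” in this sense can be as simple as a lookup that attaches meaning and a risk weight to the same raw observation, depending on context. A toy Python sketch, with every entry and weight invented purely for illustration:

```python
# Toy "semantic ontology": the same raw observation means different things
# in different contexts. All entries and weights are illustrative only.
GESTURE_MEANINGS = {
    ("thumb_forefinger_circle", "US"):     ("OK", 0.0),
    ("thumb_forefinger_circle", "Brazil"): ("insult", 0.4),
}
LOCATION_RISK = {
    "bar_district": 0.3,   # assumed baseline bump for a higher-risk area
    "residential":  0.0,
}

def interpret(gesture, country, area):
    """Resolve a raw sensor observation into a meaning plus a risk weight,
    combining gesture context with location context."""
    meaning, base = GESTURE_MEANINGS.get((gesture, country), ("unknown", 0.0))
    risk = base + LOCATION_RISK.get(area, 0.0)
    return meaning, risk

meaning, risk = interpret("thumb_forefinger_circle", "Brazil", "bar_district")
# identical sensor data, but context changes the meaning and raises the risk
```

The sensors supply the raw facts; the ontology supplies what those facts mean here and now.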
Back in our scenario, the plane hasn’t lost much altitude. Maintenance records are requested by computer, and indicate that the aircraft recently underwent a scheduled engine overhaul. Based on its transmissions and identification, the plane’s circuit today has taken it from Kuala Lumpur, Malaysia to Dacca, Bangladesh, and then on a long hop to Istanbul, Turkey, overflying India and several other countries.
Determine what threats could be raised by an activity, a pattern, or the related associations you pull up.
In our fictional scenario, the system’s ontologies “know” that passenger airplanes can be used as weapons. Its rule sets also “know” that engine failure shortly after a major maintenance overhaul is more unusual than normal, and that Bangladesh and Malaysia are both considered moderate to high risk for terrorist connections. These pieces of information increase the internal threat potential rating it assigns to the aircraft. As does the next correlation.
The system notices that the plane’s extrapolated new course brings it closer to the nuclear reactors at Narora. Not on a direct collision course that would trigger obvious alarm, but into close enough proximity to raise the internal threat calculator again. The system adds that a large chemical manufacturing plant is even more directly along its new path. That set of correlations is enough to break several thresholds, triggering notifications to its operators, along with a request for further effort. Consent is given.
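The rule-set logic running through this step can be sketched as a simple weighted scorer: each rule inspects the evolving situation and contributes a weight, and crossing a threshold triggers the operator notification described above. The rules, weights, and threshold below are invented for illustration only:

```python
# Scenario facts as the system might hold them (values invented).
situation = {
    "off_flight_plan": True,
    "recent_overhaul": True,          # engine trouble soon after maintenance
    "origin_risk": "moderate_high",   # Malaysia / Bangladesh in the scenario
    "km_to_critical_site": 35,        # extrapolated course vs. reactor/plant
}

# Each rule is a predicate over the situation plus a contribution weight.
RULES = [
    (lambda s: s["off_flight_plan"],           0.2),
    (lambda s: s["recent_overhaul"],           0.2),
    (lambda s: s["origin_risk"] != "low",      0.2),
    (lambda s: s["km_to_critical_site"] < 50,  0.3),
]
ALERT_THRESHOLD = 0.7   # crossing it triggers operator notification

score = sum(weight for rule, weight in RULES if rule(situation))
alert = score >= ALERT_THRESHOLD   # True -> notify operators, request consent
```

Each new correlation simply adds another rule firing, nudging the internal threat rating toward the threshold.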
Now data mining begins, via connections to various systems and databases that might exist at other sites. Passenger manifests are fed in, and the system looks for things like who might be continuing through all stops from Malaysia, which tickets might be one-way, and biometric scans and pictures of passengers (if you have them) that can be matched against various databases for matches and links. The imaginary system also has access to network mapping of recent volumes and patterns of communications intercepts from Lashkar-e-Taiba within Pakistan, checks on next-of-kin notification numbers against key numbers flagged by phone intercepts and classified security lists, etc.
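At its core, this data-mining pass amounts to cross-referencing identifiers across independent sources and surfacing the passengers flagged most often. A toy Python sketch, with all identifiers and source names invented:

```python
from collections import Counter

# Hypothetical data-mining pass: each external source contributes flags
# against passenger identifiers; passengers with the most independent
# flags float to the top for human review. All data is invented.
sources = {
    "through_from_KUL":  {"P102", "P417", "P533"},   # aboard since Malaysia
    "one_way_ticket":    {"P417", "P219"},
    "biometric_match":   {"P417"},
    "phone_intercepts":  {"P533", "P417"},
}

flags = Counter()
for hits in sources.values():
    flags.update(hits)

ranked = flags.most_common()   # [('P417', 4), ...] - top lead first
```

No single source is conclusive; it’s the convergence of independent flags on the same identifier that makes a lead worth a human analyst’s time.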
Classify the potential threat, and recommend based on resources.
Maybe the above scenario is a threat. Maybe it isn’t. The key is, can the dots be found and connected in time to do something about it, if the situation is in fact a threat?
Situations evolve, so the system must continue to go through these steps, until a situation or correlations rise to the level of something demanding action. At which point, a decision maker needs to have a narrowed potential threat list, a coherent overview of why the situation is seen as risky, and an understanding of what resources are available. In the above scenario, those resources may include everything from alerting local air bases and/or missile batteries, to advance warnings issued to local authorities and first responders as the risk mounts.
Especially if those local authorities and responders had been part of simulated exercises involving the system, and their core recommendations re: what they would need, and need to do, were included in some of the system’s rule sets.
Right now, air traffic control systems already provide situational awareness. Obviously, that falls far short of what’s required in the above scenario.
The NAWCWD’s Information Fusion Center

At first glance, US Naval Air Systems Command’s Naval Air Warfare Center Weapons Division may seem like an odd place to put an information fusion center. On the other hand, the US Navy has more experience with threat assessment and partial decision automation than the other services. It also has indirect responsibility for the critical infrastructure issue of monitoring international shipping, and an interest in land-based force protection due to its responsibility for the US Marines and for a growing cadre of “expeditionary personnel” deployed on land.
The NAWCWD Information Fusion Center is not focused on any particular platform, but on advancing the general science and application of information fusion systems. Its initial priorities are clear, and will involve systems designed to help with “critical infrastructure protection and force protection”. Robert Redditt, NAWCWD’s Director of Information Fusion, puts it this way:
“Information Fusion is the science behind Critical Infrastructure Protection, Homeland Security, ForceNet and Maritime Domain Awareness. Currently there are hundreds of rooms with hundreds of individuals all tracking 10’s of thousands of aircraft, maritime vessels, ground vehicles and individuals with everyone looking for the needle in the haystack. Information Fusion reduces the rooms and individuals and finds the needle, pulls it out of the hay and puts it where it won’t hurt anybody.”
While trials have been held involving shipborne systems and even some systems on aircraft, the system’s requirements will leave it anchored firmly on land. This is driven by the bandwidth required for the high-speed connections to large, secure databases, the need to remove increased project risks associated with weight and miniaturization issues, and the physical requirements of extreme data mining capacity and artificial intelligence computing. Data from ground, aerial, and naval sensors can be sent back as needed, and processing will take place on land before the result is beamed back to relevant personnel.
Naval ships would be the next logical places to deploy information fusion applications, if the concept works out. Moore’s Law of increased computing power in smaller sizes, and a growing array of high-bandwidth secure communication options including AESA radars and expanded satellite networks, could make such naval deployments feasible in a decade or so. To reduce backhaul bandwidth, such a system might help naval task forces work within themselves on more limited tasks, becoming a sort of “super-combat system” adjunct.
That step is still many years away, however. In the meantime, the biggest effect of successful information fusion applications is likely to involve improved alignment between front-line tools and the emerging Nation-Tribe approach, in areas where the traditional Nation-State diplomacy and security orientation is clearly failing. In the materiel world, their biggest influence is likely to be felt by and through military robots.
Robots’ relatively cheap cost and varied sensors make them an information fusion system’s key inputs, while their ability to affect matters on the ground offers rapid response. Right now, human feats of information fusion, aided by technological sensors and data retrieval, are used to cue UAVs to known terrorists. How much more valuable do robotic numbers and their corresponding coverage become to the counterinsurgency fight, if improving video coverage allows feats like facial recognition via partial image processing on board a future UAV, followed by fast offboard correlation with databases of individuals known to be hostile, human network analysis, and other high-powered tools? Followed by alerts to appropriate personnel in a local Task Force ODIN, for decisions and action.
This will not be foolproof, of course. Nothing ever is. Just as your computer becomes far more powerful when plugged into the Internet, however, information fusion centers may help ensure that “the network is the robot,” adding a different kind of exponential power to military robotics.
IF Center: Contracts and Related Events

The US Navy’s Naval Air Warfare Center Weapons Division (NAVAIR’s NAWCWD) has an Information Fusion Center to help develop these kinds of systems.
Jan 15/09: A pair of firms receive multiple-award, indefinite-delivery/indefinite-quantity 5-year contracts to research and develop Information Fusion at NAVAIR’s IF Center. Efforts will include research, development, integration and testing; operation and ongoing improvement of the IF Center; training for newly developed software, hardware and other IF products; and independent verification and validation of sensors and systems. Since it’s a multiple award contract, both companies will have the opportunity to bid on each individual task order issued by NAVAIR’s IF Center.
General Dynamics Advanced Information Systems in Santa Clara, CA receives a maximum $95.3 million contract (N68936-09-D-0005), and will perform work in Santa Clara, CA (70%) and China Lake, CA (30%).
The firm will leverage its past work on the Quarterback and Story Maker systems; its team includes Raytheon Network Centric Systems in St. Petersburg, FL; Northrop Grumman I.T. in Chantilly, VA; Whitney, Bradley and Brown (WBB) in Reston, VA; New Directions Technologies Inc. (NDTI) in Ridgecrest, CA; International Association of Virtual Organizations, Research and Scientific (IAVO) in Durham, NC; and Advanced Fusion Technologies (AFT) in Springfield, VA. GD-AIS release.
Lockheed Martin in San Diego, CA receives a maximum $103.8 million contract, and will perform work in Santa Clara, CA (70%) and China Lake, CA (30%).
These contracts are expected to be complete in January 2013, and were solicited under a multiple award electronic request for proposals. Two offers were received by the Naval Air Warfare Center Weapons Division at China Lake, CA.
Additional Readings and Sources
- DID appreciates the assistance of Robert Redditt, US NAWCWD’s Director of Information Fusion, in explaining the effort’s broad goals and concepts. Note, however, that the scenario and inferences above, and any mistakes or omissions therein, are entirely DID’s, and do not reflect NAWCWD views or policy.
- Lifeboat Foundation – Minding the Planet: The Meaning and Future of the Semantic Web. Which is also referred to in tech circles as “Web 3.0”. Many of its early commercial developments have their origin in past military projects, and their growth over time will parallel and complement some of the goals of the Information Fusion Center. Warning: long article, heavy speculation, but still useful once that’s filtered.
- 7th International Semantic Web Conference, 2008 – Introduction to the Semantic Web Tutorial. See also the main conference site.