18 February 2017

Automated Eyes In The Sky

by Jules Hurst 

An unmarked IL-76 cargo aircraft lands at an international airport just before dawn, and a team of workers swarms the plane. As cranes and forklifts quickly unload the telephone-pole-sized missiles of an S-400 battery, a Skysat-7 microsatellite passes 500 km overhead, recording high-resolution video of the whole event. In near real time, the satellite beams the imagery down to a ground station filled with cluster computers, and linked CPUs begin running analytic algorithms against stills from the video data, screening the terrain for vehicular objects, measuring them by pixel length, and ultimately comparing suspected objects to equipment databases. After vehicles are identified, the cluster computers compare the possible matches to a list of weapon systems prohibited by a United Nations arms embargo before sending an automated message to a private-sector imagery analyst monitoring for violations: Possible S-400 Battery Components, REF IMAGE. In a matter of minutes, the analyst receives the video and confirms the algorithms’ findings, and that validation feeds back into the code through machine learning, improving its reliability. With a quick phone call, the analyst notifies United Nations inspectors on the ground, who rapidly move to intercept the missiles.

In contrast to the narrative above, today’s geospatial intelligence professionals examine almost every image by hand. Machines facilitate analysis; they do not conduct it. The integration of object recognition software into the processing and exploitation of national and private satellite imagery architectures will dramatically increase the speed with which imagery analysts conduct first-phase analysis, the geographic areas they can cover, and the complexity of the trends they can observe. A growing constellation of commercial satellites and increasing data transmission rates will expand the amount of imagery available to analyze.

Advances in object recognition and image classification will break images down into components and make the data in images analyzable in entirely new ways. Software can automatically assign every object in a photograph a class and treat it according to rules set by users. What your computer once interpreted as a set of pixels will become a series of objects laid across a landscape that you can query through words. It’s the equivalent of optical character recognition for images, the capability that lets you search a PDF. Imagery analysts will be able to type in a query such as “tanks,” set geographic and temporal restrictions, and recall a series of images that contain tanks in that time and place. Each object in an image has the potential to become a new data point with time-based and geospatial metadata. And every pool of images is another source of big data.
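To make that idea concrete, here is a minimal sketch of how detected objects might be stored and searched as time-stamped, geotagged records. The record layout, class names, and sample values are hypothetical illustrations, not drawn from any existing imagery system.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class DetectedObject:
        label: str              # class assigned by the recognition software, e.g. "tank"
        lat: float              # latitude of the detection
        lon: float              # longitude of the detection
        observed_at: datetime   # acquisition time of the source image
        image_id: str           # pointer back to the original frame

    def query(catalog, label, bbox, start, end):
        """Return detections of one class inside a lat/lon box and time window."""
        min_lat, min_lon, max_lat, max_lon = bbox
        return [
            o for o in catalog
            if o.label == label
            and min_lat <= o.lat <= max_lat
            and min_lon <= o.lon <= max_lon
            and start <= o.observed_at <= end
        ]

    # Example: "tanks inside a box of interest during a two-week window."
    catalog = [
        DetectedObject("tank", 33.35, 44.42, datetime(2017, 2, 10, 6, 30), "img-001"),
        DetectedObject("truck", 33.36, 44.40, datetime(2017, 2, 10, 6, 30), "img-001"),
    ]
    hits = query(catalog, label="tank", bbox=(33.0, 44.0, 34.0, 45.0),
                 start=datetime(2017, 2, 4), end=datetime(2017, 2, 18))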

Celerity of Identification and Cueing

As object recognition and image classification software takes hold in the field, imagery analysts will be able to script notifications that change the tempo and method of their analysis. Analysts could create watch lists that warn them of the presence of specific equipment, environmental changes, construction, or a host of other analyst-defined indicators, and the software will be able to restrict those alerts geographically or temporally. A United States Customs and Border Protection agent might script an alert that notifies him when an algorithm identifies one or more possible vehicles crossing the U.S.-Mexican border outside of authorized checkpoints during hours of darkness. More complicated scripts could notify analysts when a confluence of objects moves within a specified distance of one another as a warning of specific events, like the convergence of multiple refueling trucks and fighter aircraft indicating an upcoming flight operation. These processes will increase the speed with which events are detected to the point where satellite images prompt the cueing of earth-based signals or human intelligence collection assets within minutes, permanently changing the pace of strategic information gathering.
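A compound rule like the refueling-truck example could be expressed as a short script run against each new image’s detections. The sketch below assumes detection records shaped like those in the earlier example; the class names, distance math, and threshold are simplified placeholders rather than any fielded system’s logic.

    import math

    def ground_distance_m(a, b):
        """Approximate distance in meters between two (lat, lon) points."""
        lat_scale = 111_320                       # meters per degree of latitude
        lon_scale = 111_320 * math.cos(math.radians((a[0] + b[0]) / 2))
        return math.hypot((a[0] - b[0]) * lat_scale, (a[1] - b[1]) * lon_scale)

    def flight_ops_alert(detections, max_separation_m=200):
        """Flag pairs of refuelers and fighters detected close together in one image."""
        trucks = [d for d in detections if d.label == "refueling_truck"]
        fighters = [d for d in detections if d.label == "fighter_aircraft"]
        return [(t, f) for t in trucks for f in fighters
                if ground_distance_m((t.lat, t.lon), (f.lat, f.lon)) <= max_separation_m]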

Macrotrends and Historical Queries

Over time, as these algorithms identify objects and record their locations in time and space, organizations will develop large data holdings that allow for complicated retroactive queries such as “find all locations where three or more T-72 tanks have been collocated within 100 meters during the past 90 days” or “count all objects identified as helicopters in Bulgaria within the past 48 hours.” Analysts will be able to automate the comparison of object locations over years as they examine recurring events like training exercises, mobilizations, and routine logistical schedules, drawing conclusions built on years of evidence. The power of these queries will rise in concert with the spatial resolution of overhead imagery, the consistency of its coverage, and the speed at which historical databases of images can be analyzed.
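As one illustration, the first of those queries could run against a historical catalog of detection records (the same hypothetical shape used above) with a rough spatial grouping; snapping detections to grid cells of roughly 100 meters of latitude stands in for a true distance calculation.

    from collections import defaultdict
    from datetime import timedelta

    def collocated_tank_sites(catalog, now, days=90, min_count=3, cell_deg=0.001):
        """Grid cells (~100 m of latitude) holding 3+ T-72 detections in the window."""
        cutoff = now - timedelta(days=days)
        cells = defaultdict(list)
        for d in catalog:
            if d.label == "T-72" and d.observed_at >= cutoff:
                cells[(round(d.lat / cell_deg), round(d.lon / cell_deg))].append(d)
        return {cell: hits for cell, hits in cells.items() if len(hits) >= min_count}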

Tanks assigned to the Iraqi Army’s 9th Mechanized Division drive through a checkpoint near Forward Operating Base Camp Taji, Iraq. (Michael Larson/U.S. Navy Photo)

A World of Global Surveillance

These advances will not be limited to satellite collection. Any overhead imaging platform, private, public, or military, will be able to have imagery that meets minimum resolution requirements processed for object recognition in real time or after the fact. The uses will astound us. Law enforcement officers may be able to corroborate alibis by using drone footage to confirm the location of a suspect’s car at the time of a crime. Conservationists might estimate elephant populations by running object-recognition programs against satellite imagery collected over wildlife preserves. Whatever the use, growth in the civilian remote sensing field will grant private organizations capabilities that were previously the domain of a handful of national governments, and artificial intelligence will reduce the manpower they need to be effective. If the proliferation of cellphones with high-resolution cameras made us feel like we were under surveillance, the revolution in object recognition and its application to remote sensing may remove all doubt.

Jules Jay Hurst is an Army Reserve Officer and a mediocre imagery analyst who would like the robots to help him. The views expressed here are the author’s alone and do not reflect the views of the U.S. Army, the Department of Defense, or the U.S. Government.
