When I think about AI and machine learning in Earth observation, I see four distinct phases for leveraging the technology in geospatial applications. The first phase is descriptive: you look at an image and describe what’s in it, as in object detection. The second is the diagnostic phase, in which you identify patterns observed in the data across geography and time. Third is the predictive phase; an example would be analyzing a port and forecasting what will happen next. And the fourth is the prescriptive phase, where AI tells you what action to take. I would say we’re not even close to that final phase, because trust must be established before we can rely on AI to tell us what to do. But this framework helps clarify where the industry is today and where we’re heading.

To understand where GeoAI stands in our industry today, it helps to look at how far we’ve come. In recent years, we have seen the successful use of AI for image interpretation, including object and feature detection at scale from electro-optical (EO) and SAR imagery. AI has been used to identify objects such as ships, planes, and cars, as well as features such as building footprints, agricultural field boundaries, and roads. GeoAI has been successfully employed for other missions as well, but broader adoption still requires trustworthy AI/ML models.

Further, the industry has begun incorporating generative AI technologies, such as LLMs, into geospatial semantic workflows. Large language models such as ChatGPT and Gemini are not yet location-aware, but that is changing fast with recent developments in vision language models and GeoAI. GeoAI is also embracing agentic AI: self-learning models that focus on one or more tasks and are designed to excel at them.

Earth Foundation models from organizations such as Google, IBM, OpenAI, NASA, ESA, and others are now combining multi-source data for global-scale analytics. The embedding-based approach of foundation models enables efficient global-scale analysis and edge deployment, where local ground truth can be incorporated to improve model accuracy. Earth Foundation models can leverage multi-source data from across the planet at varying spectral, spatial, and temporal resolutions. These models will allow us to understand what’s happening at any geographic location worldwide, with historical context spanning 20 to 40 years of data collected by satellites such as Landsat, Sentinel, and others.
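
To make the embedding idea concrete, here is a minimal sketch in Python, using synthetic vectors in place of real foundation-model output (the array names and sizes are illustrative, not any vendor’s API): once every tile of the planet is reduced to an embedding, a global “find more like this” query becomes simple vector math.

```python
import numpy as np

# Synthetic stand-in for foundation-model output: one 64-dimensional
# embedding per Earth tile (sizes and names are illustrative).
rng = np.random.default_rng(0)
tile_embeddings = rng.normal(size=(100_000, 64))
query = tile_embeddings[42]  # embedding of a tile of interest

# Normalize rows so a dot product equals cosine similarity.
unit = tile_embeddings / np.linalg.norm(tile_embeddings, axis=1, keepdims=True)
similarity = unit @ (query / np.linalg.norm(query))

# Highest-scoring tiles are the places that "look like" the query tile.
top_matches = np.argsort(similarity)[::-1][:10]
print("Most similar tiles:", top_matches)
```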

Meanwhile, virtually every square inch of the Earth has already been imaged, whether by electro-optical or Synthetic Aperture Radar (SAR) sensors in space or by other sensors in the air, land, and sea domains. This convergence is particularly powerful for SAR, whose data is inherently structured for AI/ML analysis: its frequency, wavelength, amplitude, phase, and polarization data can be interpreted directly by machines rather than requiring human visualization. SAR’s ability to collect imagery 24/7, in all weather conditions, and across constellations enables continuous monitoring of our planet.
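
To illustrate why SAR is machine-readable by construction: a single-look complex (SLC) SAR pixel is a complex number from which amplitude and phase fall out directly. A minimal sketch with synthetic data (the SLC array here is random noise standing in for a real acquisition):

```python
import numpy as np

# Synthetic stand-in for a single-look complex (SLC) SAR image:
# each pixel is a complex number encoding backscatter amplitude and phase.
rng = np.random.default_rng(1)
slc = rng.normal(size=(512, 512)) + 1j * rng.normal(size=(512, 512))

amplitude = np.abs(slc)  # backscatter strength (what humans usually view)
phase = np.angle(slc)    # radar phase in radians (machine-interpretable)

# ML features can be built straight from these arrays; no rendering needed.
features = np.stack([amplitude, phase], axis=-1)
print(features.shape)  # (512, 512, 2)
```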

Where We Are and Where We’re Going

Right now, we’re still in the descriptive phase of the AI value proposition: understanding what is happening in a given image. By leveraging time series data, we are now entering the diagnostic phase: understanding patterns of life, physical as well as human, and identifying and diagnosing anomalies in the data with AI models. Humans are creatures of habit, and as we understand these patterns of life, we can take steps toward the predictive phase. For example, at a port facility you would typically see consistent patterns of vessel activity. When those patterns deviate, AI can flag the anomaly for investigation, combing through data to identify what changed and why, drawing on other sources such as open-source intelligence (OSINT).
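
As a toy version of that port example, assuming nothing more than a synthetic daily vessel-count series (the numbers below are invented for illustration), even a rolling z-score is enough to surface the kind of deviation that an analyst, or an OSINT-driven workflow, would then investigate:

```python
import numpy as np

# Hypothetical daily vessel counts at a port: a weekly rhythm plus noise,
# with an injected anomaly (an unexplained surge on day 90).
rng = np.random.default_rng(7)
days = 120
counts = 20 + 5 * np.sin(2 * np.pi * np.arange(days) / 7) + rng.normal(0, 1.5, days)
counts[90] += 25  # the deviation we want flagged

# Flag any day more than 3 standard deviations from its trailing 28-day window.
window = 28
flags = []
for t in range(window, days):
    hist = counts[t - window:t]
    z = (counts[t] - hist.mean()) / hist.std()
    if abs(z) > 3:
        flags.append((t, round(z, 1)))

print("Days flagged for investigation:", flags)  # expect day 90
```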

The remaining challenge is bringing all these datasets together for consistent analysis. Data comes in different shapes, quality levels, velocities, and volumes. The industry has not yet fully solved this integration problem. The predictive phase is within reach, but we’re not there today.

Going forward, I see Earth Foundation models providing the global baseline, while location- and region-based analytics will increasingly rely on smaller models that can run on satellites or at edge locations, closer to where data is collected. These specialized agentic AI models can leverage foundational knowledge but refine it for specific use cases. Multi-source data will be the norm, and validating AI models at scale will enable humans to trust them for predictive and, eventually, prescriptive actions.

SAR’s Unique Position in an AI-Driven World

SAR data is inherently both 3D and 4D. On top of its 24/7 all-weather capability, collecting 3D data is a real differentiator for SAR. SAR interferometry, which compares two or more images of the same area collected at different times, allows the computation of precise elevation changes. In an AI-driven world, a SAR satellite constellation offers a unique advantage for continuous, near-real-time observation of the planet, providing timely, reliable insights to support missions ranging from disaster response to infrastructure and maritime monitoring, object custody, and more.
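
To give a feel for the interferometric computation, here is a minimal sketch, assuming two co-registered synthetic acquisitions and an illustrative X-band wavelength of about 3.1 cm; the phase difference between passes converts directly to line-of-sight displacement:

```python
import numpy as np

WAVELENGTH_M = 0.031  # illustrative X-band radar wavelength (~3.1 cm)

# Two co-registered synthetic SLC acquisitions of the same area. Between
# passes the ground subsides 5 mm along the line of sight, which appears
# as a phase shift of 4*pi*d / wavelength (two-way path difference).
rng = np.random.default_rng(3)
base_phase = rng.uniform(-np.pi, np.pi, size=(256, 256))
pass1 = np.exp(1j * base_phase)
subsidence_m = 0.005
pass2 = np.exp(1j * (base_phase + 4 * np.pi * subsidence_m / WAVELENGTH_M))

# Interferogram: multiply one image by the conjugate of the other, then
# convert the residual phase back to displacement.
interferogram = pass2 * np.conj(pass1)
displacement_m = np.angle(interferogram) * WAVELENGTH_M / (4 * np.pi)
print(f"Recovered displacement: {displacement_m.mean() * 1000:.2f} mm")
```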

———-
Kumar leads Synspective USA and has 30+ years in the geospatial industry. He holds a PhD from Purdue University and serves on the UN Group of Experts on Global Geospatial Information Management.


