<script src="https://revealjs.com/dist/reveal.js"></script> <style> /* Autofit text */ .reveal .slides { display: flex; align-items: center; justify-content: center; } .reveal .slides section { font-size: 2vw; /* Adjust the font size as needed */ } /* Autofit images */ .reveal .slides img { max-width: 70%; max-height: 70%; object-fit: contain; pointer-events: none; /* Disable pointer events on images to allow zooming */ } /* Zoom button */ .zoom-button { position: fixed; top: 20px; right: 20px; z-index: 999; } </style>

Satellite Image Coverage Classification Using a ResNet Convolutional Neural Network


Credits

Presented by:

Emily Calvert Headshot Sophie Ollivier Salgado Headshot

Emily Calvert Email: calvertemily15@gmail.com LinkedIn: Emily Calvert

Sophie Ollivier Salgado Email: sollivier5@gmail.com LinkedIn: Sophie Ollivier Salgado

Contact us with any questions or to connect!


Advantages and Applications of GeoSpatial Analytics and AI using Satellite Imagery

Geospatial analytics combined with Artificial Intelligence (AI) has become a game-changer in many industries. This combination empowers us to derive critical insights and patterns from vast volumes of satellite imagery, which can be incredibly beneficial for a variety of applications. It is essentially reshaping how we understand and interact with our world, helping us solve complex problems by illuminating new insights.

Satellite

General

Satellite imagery can cover virtually every corner of the globe, allowing for large-scale analysis. This can lead to rapid insights across vast geographies.

  • As satellites pass over the same locations multiple times, they create a historical archive of images. We can analyze these images to identify changes and trends over time, providing powerful insights into patterns of growth, decline, or transformation.
Illustration

Examples of Industry Applications


Agriculture

By analyzing satellite imagery, farmers can identify areas of stress in crops long before they might be visible to the naked eye. This can lead to early intervention, potentially saving vast swathes of crops from disease or pest infestation.

Agriculture Illustration

Environmental Science

Environmental Illustration

This technology is being used to monitor deforestation, track wildlife populations, and assess the impact of natural disasters. We can analyze satellite data to help direct emergency services to the most affected areas.


Urban Planning and Infrastructure

Planners can analyze satellite images and gain insights into population growth, land use changes, and infrastructure development. This can help inform decisions about where to build new roads, schools, and other public infrastructure.


Conclusion

Geospatial analytics and AI using satellite imagery offer powerful tools for gaining insights and solving complex problems across a wide range of industries. These technologies allow us to understand our world in greater detail and make more informed decisions about how to manage our resources and plan for the future.


Problem Statement

Manual classification of satellite imagery is both expensive and time-consuming, yet it is a vital task for identifying cloud coverage and determining the quality of satellite data.


The Importance of Classification

Without proper classification, we are left with single snapshots of imagery for our analysis. This leads to unrepeatable results and highly localized analyses.


Challenges in Manual Classification


Volume of Data

Satellite imagery is incredibly dense and memory intensive. With 50 years of accessible satellite data, and more being collected every hour, the potential scale for geospatial analysis is vast. However, computing and processing power presents a significant constraint.

Volume of Data Illustration

Time Consumption

Given the scale and pixel density, manually classifying features in each raster image can take hours.

Volume of Data Illustration

Other obstacles and considerations include:

  • Consistency: Human judgment varies, leading to inconsistent classification methodologies and unreproducible results.
  • Cost: Given the manpower and computing power required, manual preprocessing of satellite imagery is a significant expense for your organization.

Our Solution

We propose a model for the classification of coverage type in satellite imagery. This automated pre-processing tool will determine the quality of imagery by identifying cloud coverage and other relevant features, depending on your organization’s needs.


Applications

The possibilities for applying this technology are endless. This slide highlights some examples of how automated classification of imagery will contribute to your organization.


Scalability

Our model is designed to meet the needs of any project. With the right data management, tuning of hyperparameters, and data augmentation, you can process more data and build more robust models. Our solution enables the use of more detailed imagery and larger datasets.

Engaging Image

Efficiency

The model can quickly evaluate the quality and relevance of imagery before transferring dense data or making large API calls. This process increases efficiency by focusing resources only on pertinent data sets.

Engaging Image

Consistency

Our model ensures consistency by employing the same processes of classification with each iteration. This leads to reproducible results.

Engaging Image

Conclusion

Our innovative solution leads to both time and cost savings. Additionally, it opens up possibilities for real-time analysis in various industries, such as disaster management, national security, traffic management, logistics, and wildfire surveillance.


Model


Data Selection

Kaggle Header

Class Imbalance

  • We observed a class imbalance with 24.6% more data for classes other than 'desert'
  • We will initially run the model without addressing this imbalance, but may revisit this decision based on performance metrics
  • The data split was as follows: Training (60%), Validation (20%), Test (20%)
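The 60/20/20 stratified split described above can be sketched with scikit-learn; the `paths` and `labels` names here are placeholders for illustration, not the project's actual variables:

```python
from sklearn.model_selection import train_test_split

def split_dataset(paths, labels, seed=42):
    """Split into 60% train / 20% validation / 20% test, stratified by class
    so each split keeps the same class proportions as the full dataset."""
    # First carve off 40% of the data to become validation + test
    train_p, rest_p, train_y, rest_y = train_test_split(
        paths, labels, test_size=0.4, stratify=labels, random_state=seed)
    # Split the remaining 40% in half: 20% validation, 20% test
    val_p, test_p, val_y, test_y = train_test_split(
        rest_p, rest_y, test_size=0.5, stratify=rest_y, random_state=seed)
    return (train_p, train_y), (val_p, val_y), (test_p, test_y)
```

Stratifying both splits keeps the 'desert' imbalance from being amplified in any one subset.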

Transforming the Data

  • Random resized crop to 224x224 pixels for training data, varying scale and aspect ratio to augment the data
  • Random horizontal flip to augment the data and reduce overfitting
  • Normalization of pixel values using per-channel means and standard deviations pre-computed on the ImageNet dataset
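A minimal NumPy sketch of this transform pipeline follows. The project presumably used torchvision transforms; the simple random crop here stands in for a scale-and-ratio-varying resized crop, and the mean/std values are the standard pre-computed ImageNet statistics:

```python
import numpy as np

# Per-channel mean and std pre-computed on the ImageNet dataset (standard values)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def augment(img, rng):
    """img: HxWx3 float array in [0, 1]. Returns a normalized 224x224x3 crop."""
    h, w, _ = img.shape
    # Random 224x224 crop (a stand-in for scale/aspect-ratio augmentation)
    top = rng.integers(0, h - 224 + 1)
    left = rng.integers(0, w - 224 + 1)
    crop = img[top:top + 224, left:left + 224]
    # Random horizontal flip with probability 0.5
    if rng.random() < 0.5:
        crop = crop[:, ::-1]
    # Normalize each channel with the ImageNet statistics
    return (crop - IMAGENET_MEAN) / IMAGENET_STD
```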

Factors Influencing Model Selection

  • Amount and Type of Data: A large amount of image data demands a model capable of learning complex patterns.
  • Distribution Pattern: Pixel intensity distribution varies between classes; some classes have a wide range of pixel intensities, indicating more variability.
  • Pixel Density and Color Channel Distributions: Understanding these can help set appropriate thresholds for image segmentation and processing, and can indicate which models might be more effective.
  • Goal: The primary goal is accurate multi-class classification of satellite images.
Grid of Distribution Plots
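Per-channel statistics like those plotted above can be computed in a few lines; this is a generic sketch, not the project's actual analysis code:

```python
import numpy as np

def channel_stats(img):
    """Per-channel mean and standard deviation for an HxWx3 image in [0, 1].
    A wide per-channel std suggests high variability within a class; comparing
    these statistics across classes helps set segmentation thresholds."""
    return img.mean(axis=(0, 1)), img.std(axis=(0, 1))
```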

ResNet Convolutional Neural Network

  • ResNet, short for Residual Network, is a type of Convolutional Neural Network (CNN) for image recognition.
  • ResNet uses skip connections to tackle the "vanishing gradient problem" in deep neural networks.
  • ResNet can handle a large amount of high-dimensional data, such as images, efficiently.
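The skip-connection idea can be illustrated in a few lines of NumPy. Real ResNet blocks use convolutions and batch normalization; the plain matrix multiplies below are a simplified stand-in:

```python
import numpy as np

def residual_block(x, w1, w2):
    """Computes ReLU(F(x) + x), where two linear layers stand in for the
    block's convolutional layers. Adding the input back via the skip
    connection lets gradients flow through the identity path, which is how
    ResNet mitigates the vanishing gradient problem in deep stacks."""
    fx = np.maximum(0, x @ w1)    # first layer + ReLU
    fx = fx @ w2                  # second layer (activation applied after the add)
    return np.maximum(0, fx + x)  # skip connection, then activation
```

Note that if the layers learn nothing (weights near zero), the block falls back to passing its input through, so extra depth cannot hurt the network's representational floor.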

How ResNet Addresses Key Factors

  • ResNet uses convolutional layers, which are effective with image data, and can handle a large amount of data without overfitting.
  • ResNet can handle variability in pixel intensities and color distributions due to its deep structure and skip connections.
  • ResNet's convolutional layers can detect subtle differences in color channel distributions, helping to distinguish between different classes.
Illustration

Validation Testing

Model Training

Model Building Decision Processes

The model was trained with an early stopping condition to improve training efficiency. The ResNet model performed exceptionally well, eliminating the need for further fine-tuning.

Epoch Image Early Stop Image
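The early stopping condition can be sketched as the loop below; the `train_epoch` and `validate` callables, `patience`, and `max_epochs` values are illustrative assumptions, not the project's exact settings:

```python
def train_with_early_stopping(train_epoch, validate, max_epochs=50, patience=3):
    """Stop training when validation loss fails to improve for `patience`
    consecutive epochs. `train_epoch()` runs one pass over the training data;
    `validate()` returns the current validation loss."""
    best_loss, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        train_epoch()
        val_loss = validate()
        if val_loss < best_loss:
            best_loss, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # early stop: no improvement for `patience` epochs
    return best_loss, epoch + 1
```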

Performance



Confusion Matrix

  • The diagonal elements represent the instances where the predicted label is equal to the true label, i.e., correct predictions. Off-diagonal elements are those mislabeled by the classifier.
  • 'Cloudy' and 'Desert' scenes are perfectly classified with no mislabels.
  • 'Green Area' has some misclassifications, with 49 instances incorrectly predicted as 'Water'.
  • 'Water' scenes also have minor misclassifications, with one instance each mislabeled as 'Cloudy' and 'Green Area'.
  • Overall, the model shows a high degree of accuracy with the majority of instances correctly classified for each category.
Confusion Matrix Heatmap
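A confusion matrix like the one above can be produced with scikit-learn. The tiny label set below is hypothetical, for illustration only; the real matrix came from the test split, with the counts cited in the bullets:

```python
from sklearn.metrics import confusion_matrix

CLASSES = ["cloudy", "desert", "green_area", "water"]

# Hypothetical true and predicted labels, for illustration only
y_true = ["cloudy", "cloudy", "desert", "green_area", "green_area", "water", "water"]
y_pred = ["cloudy", "cloudy", "desert", "green_area", "water",      "water", "water"]

# Rows are true classes, columns are predicted classes;
# cm[i, j] counts true-class-i samples predicted as class j,
# so correct predictions sit on the diagonal.
cm = confusion_matrix(y_true, y_pred, labels=CLASSES)
```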

<style> .grid-container { display: grid; grid-template-columns: repeat(2, 1fr); } .grid-item { max-width: 100%; height: auto; } </style>

Classification Report

  • Precision: High precision scores for all classes.
  • Recall: High for 'Cloudy' and 'Desert', lower for 'Green Area' and 'Water'.
  • F1-Score: Close to 1 for 'Cloudy' and 'Desert', lower for 'Green Area' and 'Water'.
  • Support: Balanced for 'Cloudy', 'Green Area', and 'Water'. Lower for 'Desert'.
  • Accuracy: High score of 0.91.
Classification report plots: Precision, Recall, F1, Support
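Per-class precision, recall, F1, and support can be computed together with scikit-learn's classification report; the labels below reuse the small hypothetical example, not the project's test set:

```python
from sklearn.metrics import classification_report

# Hypothetical true and predicted labels, for illustration only
y_true = ["cloudy", "cloudy", "desert", "green_area", "green_area", "water", "water"]
y_pred = ["cloudy", "cloudy", "desert", "green_area", "water",      "water", "water"]

# output_dict=True returns a nested dict instead of a formatted string,
# e.g. report["water"]["precision"], report["green_area"]["recall"],
# report["cloudy"]["support"], and the overall report["accuracy"].
report = classification_report(y_true, y_pred, output_dict=True)
```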

Conclusion

  • The distribution patterns of pixel intensities and color channels impact model performance.
  • Model better trained to recognize brighter images ('Desert' and 'Cloudy').
  • Less variation in 'Desert' and 'Cloudy' images assists classification.
  • More variation in 'Green Area' and 'Water' images presents challenges for classification.
  • Classes with narrower ranges of pixel intensities might be easier to distinguish.
  • Imbalance in class instances can influence model performance.

Next Steps

  • Adjust the early-stopping criterion to trigger after fewer iterations.
  • Fine-tune hyperparameters for higher performance.
  • Address the class imbalance.
  • Synthesize or collect more data.
  • Identify stronger distribution patterns for the lower-performing classes.

Thank You!

We appreciate your attention

Thank You Image

</div>
<script> // Initialize the Reveal.js presentation Reveal.initialize(); // Zoom function function zoomInOut() { const slidesElement = document.querySelector('.reveal .slides'); if (slidesElement.style.transform === 'scale(1)') { slidesElement.style.transformOrigin = 'center center'; // Set the transform origin to the center slidesElement.style.transform = 'scale(1.5)'; // Adjust the zoom level as needed } else { slidesElement.style.transformOrigin = ''; // Reset the transform origin slidesElement.style.transform = 'scale(1)'; } } </script>