A web application for accessing and downloading historical precipitation data in Brazil.
🗺️ Interactive Map:
- Geospatial visualization of monitoring stations across Brazil using Folium.
- Intuitive navigation: click on a map point to view station details.
📊 Detailed Dashboard:
- Dynamic Filters: Filter by year, month, date range, and operational status.
- Interactive Charts: Time-series precipitation analysis.
- Metadata Display: Station code, coordinates, and status.
💧 Hydrological & Statistical Analysis:
- Distributions & Tests: PDF, CDF, and Kolmogorov-Smirnov test to find the best statistical fit (GEV, Gumbel, Normal, Weibull, etc.).
- IDF Curves & HMax: Intensity-Duration-Frequency curves and maximum precipitation by return period.
- SPI-1 Index: Standardized Precipitation Index (1-month scale) calculations characterizing drought and wet cycles.
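The best-fit selection above can be sketched with SciPy: fit each candidate distribution and keep the one with the smallest Kolmogorov-Smirnov distance. This is a minimal illustration, not the app's actual code; the sample below is synthetic stand-in data, not a real station series.

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum precipitation sample (mm), standing in for station data.
rng = np.random.default_rng(42)
annual_max = stats.gumbel_r.rvs(loc=80, scale=20, size=60, random_state=rng)

candidates = {
    "GEV": stats.genextreme,
    "Gumbel": stats.gumbel_r,
    "Normal": stats.norm,
    "Weibull": stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(annual_max)  # maximum-likelihood fit of the distribution
    ks_stat, p_value = stats.kstest(annual_max, dist.cdf, args=params)
    results[name] = (ks_stat, p_value)

# The distribution with the smallest KS distance is the best statistical fit.
best = min(results, key=lambda name: results[name][0])
print(best, results[best])
```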
🌾 Commodities vs SPI:
- Dual-axis time-series visualizations correlating the state-level average SPI-1 with local commodity prices (soybean, corn, coffee, sugarcane) using normalized scales.
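Putting SPI (a dimensionless index) and prices (in currency units) on comparable scales boils down to rescaling each series, for example with min-max normalization. The sketch below uses made-up monthly values, not the app's real data:

```python
import pandas as pd

# Hypothetical monthly series: state-average SPI-1 and a commodity price index.
df = pd.DataFrame({
    "spi1":  [-1.2, -0.5, 0.1, 0.8, 1.5, 0.3],
    "price": [95.0, 102.0, 110.0, 108.0, 98.0, 104.0],
})

def min_max(series: pd.Series) -> pd.Series:
    """Rescale a series to [0, 1] so two different units can share one visual scale."""
    return (series - series.min()) / (series.max() - series.min())

normalized = df.apply(min_max)
print(normalized.round(2))
```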
⚡ High Performance:
- Uses Parquet format for ultra-fast data loading.
- Optimized data pipeline.
🌍 Bilingual Support: English and Portuguese (PT-BR) localizations.
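A typical i18n setup for this kind of app is a per-language string catalog with an English fallback. The keys and strings below are illustrative assumptions, not the project's actual catalog:

```python
# Minimal i18n lookup sketch; keys and strings are illustrative only.
TRANSLATIONS = {
    "en":    {"title": "RainData", "map_hint": "Click a station to view details."},
    "pt-BR": {"title": "RainData", "map_hint": "Clique em uma estação para ver os detalhes."},
}

def t(key: str, lang: str = "en") -> str:
    """Return the localized string, falling back to English, then to the key itself."""
    return TRANSLATIONS.get(lang, {}).get(key) or TRANSLATIONS["en"].get(key, key)

print(t("map_hint", "pt-BR"))
```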
The meteorological data used in this project is extracted from BDMEP (Banco de Dados Meteorológicos para Ensino e Pesquisa), provided by INMET (Brazil's National Institute of Meteorology). Commodity data is drawn from historical price series for Brazilian states.
- Language: Python 3.12
- Framework: Streamlit
- Data Processing: Pandas, NumPy, SciPy (Scientific & Hydrological computing)
- Visualization: Plotly Express, Plotly Graph Objects, Matplotlib, Folium
```
raindata/
├── app.py                         # Application entry point (navigation)
├── src/
│   ├── functions/                 # Analytical functions (charts, data prep, hydrology, statistics)
│   └── utils/                     # Utilities (i18n, Streamlit wakeup script)
├── pages/
│   ├── home.py                    # Home page (Folium map)
│   ├── explorer_page.py           # Dataset explorer (station filters and raw metrics)
│   └── data_analysis_page.py      # Hydrological analysis & Commodities vs SPI
├── data/
│   └── metadata_estacoes.parquet  # Generated metadata file
└── requirements.txt               # Project dependencies
```
Clone the repository:
```bash
git clone https://github.com/your-username/raindata.git
cd raindata
```
Create a virtual environment:
```bash
python3 -m venv .venv
source .venv/bin/activate   # Linux/Mac
# .venv\Scripts\activate    # Windows
```
Install dependencies:
```bash
pip install -r requirements.txt
```
Prepare Data (ETL):
- Place your raw `.csv` files from BDMEP in the `rain_datasets` folder.
- Run the `convert.ipynb` notebook to generate `metadata_estacoes.parquet` and convert the data to Parquet.
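The core of that conversion is parsing BDMEP's semicolon-separated, comma-decimal exports with pandas. The sketch below is a minimal illustration; the header length, column names, and file paths are assumptions about the export format, not the notebook's actual code:

```python
import io
import pandas as pd

# Illustrative BDMEP-style export: a short metadata header, then ';'-separated
# data using ',' as the decimal mark. Real files may differ; adjust skiprows.
raw = """Nome: EXEMPLO
Codigo Estacao: A000
Latitude: -15,78
Data Medicao;PRECIPITACAO TOTAL, DIARIO (AUT)(mm);
2020-01-01;12,4;
2020-01-02;0,0;
"""

df = pd.read_csv(io.StringIO(raw), sep=";", decimal=",", skiprows=3)
df = df.dropna(axis=1, how="all")  # drop the empty column from the trailing ';'
print(df.dtypes)

# The notebook would then persist each frame as Parquet, e.g.:
# df.to_parquet("data/A000.parquet")
```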
Run the App:
```bash
streamlit run app.py
```
The application uses a custom dark theme with blue accents for better data visualization. Configuration is located in `.streamlit/config.toml`.