Updated various service configurations in docker-compose.yml, including port changes and the addition of new services.
Greptile Overview

Greptile Summary

This PR updates Docker service networking configurations and resolves port conflicts across the AgCloud platform. The main changes involve replacing […]. The sensorGuard service files are updated consistently to use […].

Important Files Changed
Confidence score: 2/5
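For the port-conflict fixes mentioned in the summary, the usual compose-level resolution is to remap only the host side of a `ports` mapping so two services no longer claim the same host port. A minimal sketch (the service name and port numbers are illustrative, not taken from this PR):

```yaml
services:
  db-api:              # hypothetical service name, for illustration only
    ports:
      - "8081:8080"    # host port remapped from 8080 to 8081 to avoid a clash;
                       # the container-side port (8080) stays unchanged, so other
                       # services on the compose network are unaffected
```

Services on the same compose network reach each other by service name and container port, so only external (host) consumers need to know about the remap.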
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Kafka as "Kafka Topic (sensors)"
    participant FlinkApp as "SensorGuard Flink App"
    participant Engine as "Engine"
    participant StateStore as "State Store"
    participant API as "DB API Service"
    participant Writers as "Writers (Kafka/Console)"
    participant SilenceSweep as "Silence Sweep Thread"

    User->>Kafka: "Publish sensor data"
    FlinkApp->>API: "Authenticate and get token"
    API-->>FlinkApp: "Return access token"
    FlinkApp->>API: "Fetch active sensors"
    API-->>FlinkApp: "Return sensor list"
    FlinkApp->>StateStore: "Initialize with active sensors"
    FlinkApp->>SilenceSweep: "Start background thread"

    loop Continuous Processing
        FlinkApp->>Kafka: "Consume sensor events"
        Kafka-->>FlinkApp: "Return sensor data JSON"
        FlinkApp->>FlinkApp: "Parse JSON to Event object"
        FlinkApp->>Engine: "process_event(Event)"
        Engine->>StateStore: "Check if device is known"
        StateStore-->>Engine: "Return device status"
        alt Device is known
            Engine->>StateStore: "Update device last_seen_ts"
            Engine->>API: "update_device_last_seen(device_id)"
            API-->>Engine: "Confirm update"
            Engine->>Engine: "Close keepalive alerts"
            Engine->>Engine: "Check for corrupted readings"
            Engine->>Engine: "Check for out-of-range values"
            Engine->>Engine: "Check for stuck sensor"
            alt Alert condition detected
                Engine->>StateStore: "Open new alert"
                Engine->>Writers: "write(Alert)"
                Writers->>Kafka: "Publish alert to alerts topic"
            else Alert condition resolved
                Engine->>StateStore: "Close existing alert"
                Engine->>Writers: "write(Alert with end_ts)"
                Writers->>Kafka: "Publish closed alert"
            end
        else Device unknown
            Engine->>Engine: "Skip processing"
        end
    end

    loop Periodic Silence Sweep
        SilenceSweep->>SilenceSweep: "Wait for interval"
        SilenceSweep->>API: "get_sensors_last_seen()"
        API-->>SilenceSweep: "Return sensor timestamps"
        SilenceSweep->>SilenceSweep: "Check for missing keepalive"
        alt Sensor silent too long
            SilenceSweep->>StateStore: "Create missing_keepalive alert"
            SilenceSweep->>Writers: "write(Alert)"
            Writers->>Kafka: "Publish silence alert"
        else Sensor back online
            SilenceSweep->>StateStore: "Close missing_keepalive alert"
            SilenceSweep->>Writers: "write(Alert with end_ts)"
            Writers->>Kafka: "Publish resolved alert"
        end
    end
```
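The alert open/close behavior in the Continuous Processing loop above can be sketched in Python. Only the names `Event`, `process_event`, the unknown-device skip, and the open/close-with-`end_ts` pattern come from the diagram; the field names, numeric range, and dict-based in-memory state store are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    device_id: str
    value: float
    ts: float

@dataclass
class StateStore:
    known: dict = field(default_factory=dict)        # device_id -> last_seen_ts
    open_alerts: dict = field(default_factory=dict)  # (device_id, kind) -> alert dict

VALID_RANGE = (-40.0, 85.0)  # assumed sensor range, e.g. degrees Celsius

def process_event(event: Event, store: StateStore, writers: list) -> None:
    """Sketch of one out-of-range check from the diagram's engine flow."""
    if event.device_id not in store.known:
        return  # unknown device: skip processing, as in the diagram
    store.known[event.device_id] = event.ts  # update last_seen_ts

    lo, hi = VALID_RANGE
    out_of_range = not (lo <= event.value <= hi)
    key = (event.device_id, "out_of_range")
    if out_of_range and key not in store.open_alerts:
        # Alert condition detected: open a new alert and write it out
        alert = {"device_id": event.device_id, "type": "out_of_range",
                 "start_ts": event.ts, "end_ts": None}
        store.open_alerts[key] = alert
        for write in writers:
            write(alert)  # e.g. publish to the alerts Kafka topic
    elif not out_of_range and key in store.open_alerts:
        # Alert condition resolved: close the alert with an end timestamp
        alert = store.open_alerts.pop(key)
        alert["end_ts"] = event.ts
        for write in writers:
            write(alert)
```

In a real Flink job the writers would be Kafka or console sinks and the state store would be Flink keyed state; plain callables and dicts keep the sketch self-contained.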
```diff
       - HTTP_INFER_URL=http://fruit-inference-http:8004/infer_json
     volumes:
-      - ./streaming/flink/jobs:/opt/flink/jobs:ro
+      - ./streaming/flink/jobs:/opt/flink/jobs
```
style: The jobs volume mount changed from read-only (`:ro`) to read-write, which could allow unintended modifications from inside the container. Is read-write access actually needed for the Flink jobs directory?
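If the job files only need to be read by Flink, the reviewer's concern can be addressed by keeping the `:ro` flag on the bind mount. A minimal compose fragment (the service name is assumed, not taken from the PR):

```yaml
services:
  flink-jobmanager:   # assumed name; use the actual sensorGuard Flink service
    volumes:
      # read-only mount prevents the container from modifying job sources;
      # drop ":ro" only if the job genuinely writes into this directory
      - ./streaming/flink/jobs:/opt/flink/jobs:ro
```

Read-write would only be warranted if the job writes artifacts (checkpoints, compiled jars) into the same directory, which is usually better served by a separate writable volume.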