A comprehensive AI-powered platform to automate the assessment of student submissions, including code, reports, videos, and viva questions. Designed to improve fairness, accuracy, and feedback quality in academic evaluations.
SAAT leverages cutting-edge AI technologies to transform traditional assessment methods. It supports diverse submission types from GitHub repositories to video presentations and provides tailored feedback to both students and educators through an intuitive, centralized interface.
**Code Analysis**
- GitHub repository integration
- File structure visualization with Monaco Editor
- Code review covering naming conventions, comment accuracy, and line-by-line feedback
- Team contribution evaluation via commit history
- Automated checks for compliance with the lecturer's requirements

**Report Analysis**
- AI-generated content and plagiarism detection
- Structured feedback and originality scoring

**Video Assessment**
- Audio transcription using Whisper + FFmpeg
- Visual keyframe extraction using OpenCV and Florence2-large
- Video segmentation and content summarization
- Timestamp-based feedback by teachers

**Viva Question Generation**
- Contextual viva questions drawn from report, code, and video submissions
- Powered by the Gemini 1.5 Flash API
- Adaptive and personalized through prompt-engineering techniques
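Team contribution evaluation from commit history can be sketched as a simple per-author tally. The `commits` sample below is hypothetical stand-in data for commit objects parsed from the GitHub API; this is an illustration of the idea, not SAAT's actual scoring logic.

```python
from collections import Counter

def contribution_shares(commits):
    """Return each author's share of total commits as a percentage.

    `commits` is a list of dicts with an "author" key, e.g. parsed
    from the GitHub API's list-commits endpoint.
    """
    counts = Counter(c["author"] for c in commits)
    total = sum(counts.values())
    return {author: round(100 * n / total, 1) for author, n in counts.items()}

# Hypothetical sample data for illustration
commits = [
    {"author": "alice"}, {"author": "alice"},
    {"author": "bob"}, {"author": "alice"},
]
print(contribution_shares(commits))  # {'alice': 75.0, 'bob': 25.0}
```

In practice a fairer signal would also weight lines changed or files touched, since raw commit counts are easy to game.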
The platform is divided into four key components:
- Code Analysis Module
- Report Analysis Module
- Video Assessment Module
- Viva Question Generation Module
Each module integrates seamlessly with the central web app and Firebase backend.
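The wiring between the central web app and the four modules can be sketched as a dispatch table keyed on submission type. The handler names and payload fields below are illustrative placeholders, not SAAT's real API.

```python
# Minimal dispatch sketch: routes each submission type to its module.
# Handler names and payload keys are illustrative, not SAAT's real API.

def analyze_code(submission):
    return {"module": "code", "repo": submission["repo_url"]}

def analyze_report(submission):
    return {"module": "report", "file": submission["file"]}

HANDLERS = {
    "code": analyze_code,
    "report": analyze_report,
    # "video" and "viva" handlers would follow the same pattern
}

def assess(submission):
    handler = HANDLERS.get(submission["type"])
    if handler is None:
        raise ValueError(f"unsupported submission type: {submission['type']}")
    return handler(submission)

print(assess({"type": "code", "repo_url": "https://github.com/org/repo"}))
```

In the actual system each handler would sit behind a Flask route and persist its results to Firebase; the dispatch pattern stays the same.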
| Layer | Technology |
|---|---|
| Frontend | React, Tailwind CSS |
| Backend | Python, Flask |
| Database | Firebase |
| AI/ML Models | Gemini 1.5 Flash, Whisper, RoBERTa, Florence2-large |
| Tools & APIs | GitHub API, OpenCV, FFmpeg, HuggingFace, Google Generative AI |
- Teacher Dashboard with assignment-based performance view
- Grading logic based on weighted score calculation
- Hidden marks until teacher approval
- Role-based access and authentication
- Hosted on: https://www.saat.42web.io
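The weighted-score grading logic listed above can be sketched as a weighted average of per-module scores. The component names and weights here are illustrative assumptions, not SAAT's actual rubric (which the teacher configures).

```python
def weighted_grade(scores, weights):
    """Combine per-module scores (0-100) using normalized weights."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same components")
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Illustrative rubric; real weights are set by the teacher.
weights = {"code": 0.4, "report": 0.3, "video": 0.2, "viva": 0.1}
scores = {"code": 80, "report": 90, "video": 70, "viva": 100}
print(round(weighted_grade(scores, weights), 2))  # 83.0
```

Normalizing by the weight sum means the weights need not add up to exactly 1.0, which keeps teacher-entered rubrics forgiving.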
- How can advanced technologies ensure fairness and consistency in assessments?
- How can programming and written submissions be evaluated using NLP and static code analysis?
- How can LLMs generate personalized viva questions?
| Use Case | Model / Tool |
|---|---|
| Code Naming Feedback | Custom rule-based engine |
| Report Analysis | OpenAI GPT-3.5, RoBERTa, Toxic-BERT |
| Video Transcription | Whisper + FFmpeg |
| Visual Analysis | Florence2-large, OpenCV, ResNet-18 |
| Viva Question Generation | Gemini 1.5 Flash, T5-small QG, BLIP-2 |
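Contextual viva-question generation with an LLM largely comes down to prompt assembly from the submission artifacts. The template below is a hypothetical sketch; the actual Gemini 1.5 Flash call (via the `google-generativeai` SDK) is shown only as a comment so the example stays runnable offline.

```python
def build_viva_prompt(report_summary, code_findings, n_questions=5):
    """Assemble a prompt for an LLM examiner from submission context.

    The template wording is illustrative, not SAAT's production prompt.
    """
    return (
        f"You are an examiner. Based on the context below, write "
        f"{n_questions} viva questions that probe the student's "
        f"understanding of their own work.\n\n"
        f"Report summary:\n{report_summary}\n\n"
        f"Code review findings:\n{code_findings}\n"
    )

prompt = build_viva_prompt("Built a chat app with Flask.", "Unused imports in app.py.")
# With google-generativeai installed and an API key configured,
# the call would look roughly like:
#   model = genai.GenerativeModel("gemini-1.5-flash")
#   response = model.generate_content(prompt)
print(prompt.splitlines()[0])
```

Keeping prompt construction in a plain function like this makes the prompt-engineering step easy to unit-test and iterate on, independently of the model backend.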
Key milestones completed:
- ✅ Functional MVP for each module
- ✅ User testing and teacher feedback integration
- ✅ Hosted version with real-time feedback
- ✅ Dashboard and grading logic implementation
- Jayathilaka A.G.K.D. (IT21252990) – Code Assessment Module
- Liyanage U.S.P. (IT21306754) – Report Analysis Module
- Gunasekara W.M.A.S. (IT21373916) – Video Analysis Module
- Rathnayake R.M.U.V. (IT21271182) – Viva Question Generation
MIT License – see the LICENSE file for details.