vervitK/Deep-Learning-Projects
Deep-Learning-Projects

Face Detection Script

This Python script detects faces in images using OpenCV and a pre-trained Haar Cascade classifier. It processes a directory of images, identifies faces, and records metadata about the detected faces.

Requirements

  • Python 3.x
  • OpenCV (cv2)
  • pandas
  • Google Colab (only if running on Colab)

Ensure you have Python installed; you can download it from python.org. You can install the required libraries using pip:

pip install opencv-python pandas
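The metadata-recording step described above might look like the following sketch. The function and column names are illustrative, not taken from the script; the `(x, y, w, h)` box format matches what `cv2.CascadeClassifier.detectMultiScale` returns.

```python
import pandas as pd

def record_detections(filename, detections):
    """Build metadata rows for faces detected in one image.

    `detections` is a list of (x, y, w, h) bounding boxes, in the
    format returned by cv2.CascadeClassifier.detectMultiScale.
    """
    return [
        {"file": filename, "x": x, "y": y, "width": w, "height": h}
        for (x, y, w, h) in detections
    ]

# Collect rows across images, then store them as a DataFrame (or CSV).
rows = record_detections("group_photo.jpg", [(10, 20, 50, 50), (80, 25, 48, 48)])
df = pd.DataFrame(rows)
```

Appending rows per image and building one DataFrame at the end avoids the cost of growing a DataFrame inside the loop.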

Face Mask Detection Model

This Python script implements a face mask detection model using TensorFlow/Keras with MobileNet as the base convolutional neural network.

Requirements

  • Python 3.x
  • TensorFlow 2.x
  • OpenCV (cv2)
  • NumPy
  • scikit-learn
  • matplotlib

You can install the required libraries using pip:

pip install tensorflow opencv-python numpy scikit-learn matplotlib

Usage

  1. Data Preparation:

    • Ensure you have the images.npy file containing image data and annotations (masks) in the specified format.
  2. Running the Script:

    • Open and run the face_mask_detection.py script in your Python environment.
  3. Training the Model:

    • The script will preprocess the data, split it into training and testing sets, and define the MobileNet-based model architecture.
    • Custom loss functions (Dice coefficient) and callbacks (ModelCheckpoint, EarlyStopping, ReduceLROnPlateau) are used during model training.
    • Adjust the parameters (e.g., epochs, batch size, learning rate) as needed.
  4. Viewing Results:

    • After training, the model's performance metrics (loss and Dice coefficient) will be displayed.
    • Optionally, you can visualize the predicted masks on sample images from the test set.
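The preprocessing and splitting in step 3 can be sketched with plain NumPy. The array shapes, the 80/20 ratio, and the synthetic data below are assumptions for illustration; the actual script loads `images.npy` and uses scikit-learn's `train_test_split`.

```python
import numpy as np

# Assume `images` is (N, H, W, 3) uint8 and `masks` is (N, H, W) binary.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(10, 64, 64, 3), dtype=np.uint8)
masks = rng.integers(0, 2, size=(10, 64, 64)).astype(np.float32)

# Normalize pixel values to [0, 1].
X = images.astype(np.float32) / 255.0

# Shuffle, then split 80/20 into train and test sets.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = idx[:split], idx[split:]
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = masks[train_idx], masks[test_idx]
```

Shuffling before the split matters when the `.npy` file stores images in any non-random order.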

Script Overview

  • Data Preprocessing:

    • Loads image data from images.npy and preprocesses it (resizing, normalization).
    • Extracts masks (annotations) for face regions.
  • Model Architecture:

    • Uses MobileNet as the base model for feature extraction.
    • Implements a custom upsampling and concatenation architecture for mask prediction.
  • Loss Function:

    • Defines a custom loss function incorporating binary cross-entropy and the Dice coefficient.
  • Training:

    • Compiles and trains the model using the Adam optimizer and the custom loss function.
    • Utilizes callbacks for model checkpointing, early stopping, and learning rate adjustment.
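The Dice coefficient mentioned above measures overlap between the predicted and ground-truth masks. A NumPy sketch of the math (the script itself would implement this with Keras backend ops so it is differentiable; the `smooth` term is a common stabilizer for empty masks):

```python
import numpy as np

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Dice = 2*|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    y_true_f = y_true.ravel()
    y_pred_f = y_pred.ravel()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    # A loss to minimize: a perfect prediction gives loss 0.
    return 1.0 - dice_coefficient(y_true, y_pred)

mask = np.array([[1.0, 1.0], [0.0, 0.0]])
print(dice_coefficient(mask, mask))  # perfect overlap -> 1.0
```

A combined loss would then add this to binary cross-entropy, trading pixel-wise accuracy against region overlap.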

Customization

  • Model Configuration:

    • Modify the MobileNet configuration (alpha, include_top, weights) in the model_new() function.
    • Adjust the upsampling and concatenation layers in the model architecture.
  • Training Parameters:

    • Experiment with different hyperparameters (epochs, batch size, learning rate) to optimize model performance.

File Structure

  • face_mask_detection.py: Main script containing data loading, model definition, training, and evaluation.
  • images.npy: Input data file containing images and annotations.
  • model-{loss:.2f}.h5: Saved model checkpoints based on the training loss.

Notes

  • Ensure that the images.npy file is correctly formatted and contains the necessary image data and annotations.
  • Adjust the script paths and file names based on your local environment.


VGG-Face Model for Face Recognition

This Python script implements a face recognition model using the VGG-Face architecture. The model is capable of generating embedding vectors for facial images, which can be used for face verification or identification tasks.

Requirements

  • Python 3.x
  • TensorFlow 2.x
  • OpenCV (cv2)
  • NumPy
  • matplotlib

You can install the required libraries using pip:

pip install tensorflow opencv-python numpy matplotlib

Usage

  1. Data Preparation:

    • Ensure you have a dataset organized in the PINS directory with subdirectories representing different individuals and containing images (*.jpg or *.jpeg format).
  2. Running the Script:

    • Open and run the face_recognition.py script in your Python environment.
  3. Model Overview:

    • The script loads the VGG-Face architecture and weights (vgg_face_weights.h5) to create a face recognition model.
    • It processes each image in the dataset, extracts features using the VGG-Face model, and generates embedding vectors.
  4. Generating Embeddings:

    • The generate_embeddings() function processes images from the dataset, computes embedding vectors using the VGG-Face model, and stores the embeddings in a list.
  5. Embedding Storage:

    • The resulting embeddings can be stored for use in face recognition applications, such as face verification or identification.

Script Components

  • IdentityMetadata Class:

    • Represents metadata for an image, including the base directory, individual's name, and image filename.
  • load_metadata() Function:

    • Loads metadata (image paths) from the dataset directory (PINS) into an array of IdentityMetadata objects.
  • load_image() Function:

    • Loads an image from the specified path using OpenCV, converting BGR to RGB color format.
  • vgg_face() Function:

    • Defines the VGG-Face architecture using TensorFlow/Keras sequential layers.
  • vgg_face_descriptor Model:

    • Instantiates the VGG-Face model up to the penultimate layer, serving as an embedding generator.
  • generate_embeddings() Function:

    • Processes each image in the dataset, computes embedding vectors using vgg_face_descriptor, and returns a list of embeddings.
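A minimal sketch of the metadata-loading components described above. The attribute and method names are guesses based on the description, not copied from the script:

```python
import os

class IdentityMetadata:
    """Metadata for one image: dataset root, person's name, image filename."""
    def __init__(self, base, name, file):
        self.base = base    # dataset root, e.g. 'PINS'
        self.name = name    # subdirectory name = the individual
        self.file = file    # image filename

    def image_path(self):
        return os.path.join(self.base, self.name, self.file)

def load_metadata(path):
    metadata = []
    for name in sorted(os.listdir(path)):
        subdir = os.path.join(path, name)
        if not os.path.isdir(subdir):
            continue
        for f in sorted(os.listdir(subdir)):
            # Keep only the image formats the dataset is expected to contain.
            if os.path.splitext(f)[1].lower() in (".jpg", ".jpeg"):
                metadata.append(IdentityMetadata(path, name, f))
    return metadata
```

Sorting the directory listings makes the metadata order reproducible across runs, which helps when pairing embeddings with labels later.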

Customization

  • Dataset Configuration:

    • Organize your dataset in the PINS directory with subdirectories representing individuals.
  • Model Fine-Tuning:

    • Modify the VGG-Face architecture (vgg_face() function) or experiment with different pre-trained models for face recognition.
  • Embedding Usage:

    • Extend the script to implement face verification or identification using the generated embeddings.
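Face verification from the generated embeddings typically reduces to thresholding a distance between two vectors. A sketch of that extension (the threshold value below is illustrative and must be tuned on your own data):

```python
import numpy as np

def euclidean_distance(a, b):
    return float(np.linalg.norm(a - b))

def is_same_person(emb1, emb2, threshold=0.6):
    """Verify identity by comparing embedding distance to a tuned threshold."""
    return euclidean_distance(emb1, emb2) < threshold

e1 = np.array([0.1, 0.9, 0.2])
e2 = np.array([5.0, -3.0, 2.0])
# Identical embeddings have distance 0, so verification succeeds;
# distant embeddings fail the threshold test.
```

Cosine distance is a common alternative, and L2-normalizing embeddings first makes the two measures nearly interchangeable.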

File Structure

  • face_recognition.py: Main script for loading data, defining the VGG-Face model, and generating embeddings.
  • vgg_face_weights.h5: Pre-trained weights for the VGG-Face model.
  • PINS/: Dataset directory containing images organized by individuals.

Notes

  • Ensure that the PINS directory structure matches the expected format for the script to load metadata and images correctly.
  • Customize parameters, such as image resizing dimensions (224x224), based on your dataset and model requirements.



Neural Network for SVHN Dataset Classification

This Python script implements a neural network model for classifying the Street View House Numbers (SVHN) dataset using TensorFlow/Keras. The model is designed to recognize digits (0-9) in cropped street-view images of house numbers.

Requirements

  • Python 3.x
  • TensorFlow 2.x
  • NumPy
  • matplotlib

You can install the required libraries using pip:

pip install tensorflow numpy matplotlib

Usage

  1. Dataset Preparation:

    • Ensure you have the SVHN dataset stored in an HDF5 file (Autonomous_Vehicles_SVHN_single_grey1.h5).
  2. Running the Script:

    • Open and run the svhn_classification.py script in your Python environment.
  3. Model Overview:

    • The script loads the SVHN dataset from the HDF5 file and splits it into training, validation, and test sets.
    • It defines and trains a neural network model using TensorFlow/Keras to classify the digit images.
  4. Model Architecture:

    • Two variations of the model architecture are provided (model and model_with_batchnorm), demonstrating the use of batch normalization.
  5. Training:

    • The model is trained using the SGD optimizer and categorical cross-entropy loss.
    • Model weights are saved via a ModelCheckpoint callback, and the learning rate is adjusted with ReduceLROnPlateau during training.

Script Components

  • Loading Dataset:

    • Reads the SVHN dataset from the HDF5 file and splits it into training, validation, and test sets.
  • Visualizing Data:

    • Displays the first 10 images from the training set with their corresponding labels.
  • Model Design:

    • Defines neural network architectures (model and model_with_batchnorm) using Sequential API from TensorFlow/Keras.
  • Model Training:

    • Compiles and trains the model using the SGD optimizer and categorical cross-entropy loss.
    • Utilizes callbacks (ModelCheckpoint, ReduceLROnPlateau) for saving model weights and adjusting learning rate.
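The categorical cross-entropy loss used in training expects one-hot labels; a NumPy sketch of both pieces (Keras computes this internally when compiling with loss='categorical_crossentropy'):

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Convert integer digit labels (0-9) to one-hot vectors."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean over the batch of -sum(y_true * log(y_pred))."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

y_true = one_hot(np.array([3]))
y_pred = np.full((1, 10), 0.1)  # uniform prediction over 10 classes
# Loss for a uniform guess is -log(0.1), about 2.30 -- the usual
# starting loss you should see for an untrained 10-class model.
```

Seeing an initial training loss near 2.30 is therefore a quick sanity check that labels and outputs are wired up correctly.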

Customization

  • Dataset Configuration:

    • Adjust the HDF5 file path and dataset keys (X_train, y_train, X_val, y_val, X_test, y_test) based on your dataset structure.
  • Model Architecture:

    • Modify the neural network architecture (model or model_with_batchnorm) by adding or removing layers, changing activation functions, or adjusting layer sizes.
  • Training Parameters:

    • Experiment with different batch sizes, epochs, learning rates, and optimizer configurations to optimize model performance.

File Structure

  • svhn_classification.py: Main script for loading data, defining models, training, and evaluation.
  • Autonomous_Vehicles_SVHN_single_grey1.h5: HDF5 file containing the SVHN dataset.
  • model_weights.h5: Saved model weights after training.
