Conversation
TheColdIce
left a comment
I have gone through all of the code now @henrikfo. Nice work!
- I have made some comments that need to be fixed or clarified.
- I think I have flagged all code that is not used.
- I have not flagged ruff errors. I also noticed some inconsistency in the docstring format; I don't know if ruff will flag this.
- I think the wiki also needs to be updated with how to run the attack. There are some params, use_fp16 for example, that are not clarified in the audit.yaml. Alternatively, the audit file can be fleshed out.
If the comments are resolved, the ruff checks are fixed, and there is some clarification regarding the config params, I think we can merge with main.
I believe that all of them are now resolved or responded to!
Those files/lines have been removed!
All ruff checks have passed, and the inconsistencies in the docstrings should be resolved as well!
Yes, the use_fp16 bug is fixed, and I will create a new issue about creating a comprehensive wiki for the attack!
Great!
Description
Summary of changes
DiffMI Attack Implementation:
- A CelebA_InputHandler is added in a file called celebA_diffmi_handler.py. For the diffusion model training, a specific file called train_utils.py is created in attacks/utils/diffmi_utils/ with training code specific to the DiffMI attack, the reason being that the training procedure and all its functions are very complex. In later updates to the pull request, custom loss functions might be supported.
Configuration Updates for DiffMI:
- Updated audit.yaml to include a new diffmi attack section with parameters for fine-tuning, preprocessing, pretraining, and attack-specific settings.
Evaluation Pipeline:
WiP
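Since the review notes that params such as use_fp16 are not yet clarified in audit.yaml, here is a hypothetical sketch of what the new diffmi section could look like. Only use_fp16 is confirmed by this PR; every other key and value below is an assumed example, not the actual config:

```yaml
# Hypothetical sketch of the diffmi attack section in audit.yaml.
# Only use_fp16 is mentioned in this PR; all other keys are assumed examples.
diffmi:
  use_fp16: true        # run the diffusion model in half precision (bug fixed in this PR)
  pretraining:          # assumed: settings for pretraining the diffusion model
    epochs: 50
    batch_size: 32
  fine_tuning:          # assumed: settings for fine-tuning
    epochs: 10
    batch_size: 1       # the largest size that fits on a 2080 Ti
    learning_rate: 1.0e-4
  preprocessing:        # assumed: input preprocessing options
    image_size: 64
```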
How Has This Been Tested?
The attack has been tested with and without minibatching on an H100 and a 2080 Ti, respectively. Fine-tuning on a 2080 Ti is not recommended, since only a batch_size of 1 is possible.
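If only a batch_size of 1 fits in memory, one possible workaround (a sketch, not part of this PR; assumes PyTorch and a plain SGD step) is gradient accumulation, which reproduces the gradients of a larger batch from micro-batches of size 1:

```python
import torch
from torch import nn

torch.manual_seed(0)

def accumulated_step(model, data, targets, micro_batch=1, lr=0.1):
    """One optimizer step built from gradients accumulated over micro-batches."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    accum_steps = data.shape[0] // micro_batch
    opt.zero_grad()
    for i in range(accum_steps):
        x = data[i * micro_batch:(i + 1) * micro_batch]
        y = targets[i * micro_batch:(i + 1) * micro_batch]
        # Scale each micro-batch loss so the summed gradients equal the
        # gradient of the full-batch mean loss.
        loss = loss_fn(model(x), y) / accum_steps
        loss.backward()
    opt.step()

def full_step(model, data, targets, lr=0.1):
    """One ordinary full-batch step, for comparison."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    opt.zero_grad()
    nn.MSELoss()(model(data), targets).backward()
    opt.step()

data, targets = torch.randn(8, 4), torch.randn(8, 1)
m_accum = nn.Linear(4, 1)
m_full = nn.Linear(4, 1)
m_full.load_state_dict(m_accum.state_dict())  # identical starting weights
accumulated_step(m_accum, data, targets, micro_batch=1)
full_step(m_full, data, targets)
```

With the per-micro-batch loss scaled by the number of accumulation steps, both models end up with (numerically) identical weights, so memory-limited fine-tuning matches the larger-batch result at the cost of more forward/backward passes.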