Images in motion?: a first look into video leakage in federated learning

dc.contributor.author: Rasul, Md Fazle, author
dc.contributor.author: Ray, Indrakshi, advisor
dc.contributor.author: Jayasumana, Anura P., committee member
dc.contributor.author: Bezawada, Bruhadeshwar, committee member
dc.contributor.author: Simske, Steve, committee member
dc.date.accessioned: 2025-09-01T10:42:14Z
dc.date.available: 2026-08-25
dc.date.issued: 2025
dc.description.abstract: Federated learning (FL) allows multiple entities to train a shared model collaboratively. Its core privacy-preserving principle is that participants exchange only model updates, such as gradients, and never their raw, sensitive data. This approach is fundamental for applications in domains where privacy and confidentiality are important. However, the security of this very mechanism is threatened by gradient inversion attacks, which can reverse-engineer private training data directly from the shared gradients, defeating the purpose of FL. While the impact of these attacks is known for image, text, and tabular data, their effect on video data remains an unexamined area of research. This paper presents the first analysis of video data leakage in FL via gradient inversion attacks. We evaluate two common video classification approaches: one employing pre-trained feature extractors and another that processes raw video frames with simple transformations. Our results indicate that the use of feature extractors offers greater resilience against gradient inversion attacks. We also demonstrate that image super-resolution techniques can enhance the frames extracted through gradient inversion attacks, enabling attackers to reconstruct higher-quality videos. Our experiments validate this across scenarios where the attacker has access to zero, one, or more reference frames from the target environment. We find that although feature extractors make attacks more challenging, leakage is still possible if the classifier lacks sufficient complexity. We therefore conclude that video data leakage in FL is a viable threat, and that the conditions under which it occurs warrant further investigation.
dc.format.medium: born digital
dc.format.medium: masters theses
dc.identifier: Rasul_colostate_0053N_19169.pdf
dc.identifier.uri: https://hdl.handle.net/10217/241805
dc.identifier.uri: https://doi.org/10.25675/3.02125
dc.language: English
dc.language.iso: eng
dc.publisher: Colorado State University. Libraries
dc.relation.ispartof: 2020-
dc.rights: Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
dc.rights.access: Embargo expires: 08/25/2026.
dc.subject: federated learning
dc.subject: machine learning
dc.subject: deep leakage
dc.subject: video data
dc.subject: gradient inversion attack
dc.title: Images in motion?: a first look into video leakage in federated learning
dc.type: Text
dcterms.embargo.expires: 2026-08-25
dcterms.embargo.terms: 2026-08-25
dcterms.rights.dpla: This Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Colorado State University
thesis.degree.level: Masters
thesis.degree.name: Master of Science (M.S.)
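The abstract describes gradient inversion: recovering a client's private training input from the gradients it shares. As an illustrative aside (not taken from the thesis, with hypothetical toy shapes), a single fully connected classification layer leaks its input in closed form, because the weight gradient is the outer product of the output-error vector and the input; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "client": a softmax classifier W x + b trained on one private example.
n_in, n_cls = 16, 4
W = rng.normal(size=(n_cls, n_in))
b = rng.normal(size=n_cls)
x_true = rng.normal(size=n_in)   # the private input (e.g., a flattened frame)
y_true = 2                       # its label

# Forward pass and cross-entropy gradients -- what an FL client would share.
logits = W @ x_true + b
p = np.exp(logits - logits.max())
p /= p.sum()
err = p.copy()
err[y_true] -= 1.0               # dL/dlogits for cross-entropy loss
grad_W = np.outer(err, x_true)   # dL/dW = err x^T
grad_b = err                     # dL/db = err

# Attacker's view: each row of grad_W is the input scaled by the matching
# entry of grad_b, so dividing any nonzero row recovers the input exactly.
k = np.argmax(np.abs(grad_b))    # pick the best-conditioned row
x_recovered = grad_W[k] / grad_b[k]

print(np.allclose(x_recovered, x_true))
```

Deeper models need iterative gradient matching rather than this closed-form trick, which is why the thesis's finding that feature extractors and more complex classifiers raise the attack's difficulty is consistent with this picture.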

Files

Original bundle

Name: Rasul_colostate_0053N_19169.pdf
Size: 682.68 KB
Format: Adobe Portable Document Format