Independent Study Presentation

Title: Inner Knowledge Loss for Robust Neural Network Compression

Presenter:  Cody Blakeney

Advisor:  Dr. Ziliang Zong

Date/Time:  Tuesday, May 4th @10:00 a.m. (CDT)

Zoom Link:   https://txstate.zoom.us/j/96018947267


Abstract:

Deep neural networks have exploded in popularity over the past decade. Their use cases span many domains, and their presence in our daily lives is ubiquitous. However, these models are incredibly compute- and memory-intensive. As a result, as the prevalence of neural networks has risen, so too has the research field of neural network compression and acceleration. These techniques, which significantly reduce the number of parameters and FLOPs a model requires, seemed to be almost a free lunch, showing very little drop in top-line metrics. Only very recently has work begun assessing their effects on other important qualities of models beyond top-line metrics. Recent studies show that popular compression techniques like pruning and quantization often damage a small subset of classes in order to preserve overall accuracy on classification tasks. Pruning has also been shown to reduce robustness to adversarial examples. In this work we show how knowledge distillation, and specifically the use of hidden-layer features, can reduce the damaging effects of model compression on bias and class-subset cannibalization.
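As background for the talk, the general shape of a feature-level distillation objective can be sketched as follows. This is a minimal illustration, not the presenter's actual method: it combines a temperature-softened KL term on the logits with a mean-squared-error term on paired hidden-layer features, with illustrative weights `alpha` and `T` chosen arbitrarily.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits,
                      student_feats, teacher_feats,
                      T=4.0, alpha=0.5):
    """Illustrative combined loss: soft-label KL + hidden-feature MSE.

    student_feats / teacher_feats are lists of matched intermediate
    activations; alpha and T are hypothetical hyperparameters.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student) on temperature-softened class probabilities
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                axis=-1).mean()
    # Mean-squared error between paired hidden-layer features
    feat_mse = np.mean([np.mean((np.asarray(fs) - np.asarray(ft)) ** 2)
                        for fs, ft in zip(student_feats, teacher_feats)])
    return alpha * (T ** 2) * kl + (1 - alpha) * feat_mse
```

When the compressed student matches the teacher exactly, both terms vanish; any divergence in either the class distribution or the intermediate representations increases the loss, which is what lets the feature term penalize damage that top-line accuracy alone would not reveal.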


Deadline: May 5, 2021, midnight