Independent Research Presentation

Title: Performance Modeling Across Heterogeneous Domains


Student: Arunavo Dey


Advisor: Dr. Tanzima Islam


Date: December 1, 2022


Time: 12 PM


Zoom Meeting Link:



To reduce the data-collection overhead in High-Performance Computing (HPC) for each new application or architecture, scientists aim to leverage existing performance models. However, the number and meanings of the parameters describing a new architecture or application may differ from those the existing model saw during training, which makes the training and testing domains heterogeneous. This work proposes a method to transfer knowledge across such heterogeneous domains by addressing differences in the meanings or data distributions of features between domains. Our proposed methodology is to: (1) train a source model on existing labeled performance data; (2) adapt the source model with a few labeled samples from the target domain; and (3) generalize the intermediate model to samples with different numbers and types of features. We build a pipeline of three nonlinear neural networks (NNs), where the first NN learns the source model, the second adapts it using n-shot input from the target domain, and the third boosts the performance of the second. We also compare our approach with two other transfer-learning techniques: (a) fine-tuning (FT), which adapts all layers of the source model, and (b) linear probing (LP), which adapts the model while freezing the input and a hidden layer.
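The three-stage idea can be sketched in miniature. The following NumPy toy is purely illustrative and is not the presented pipeline: a fixed nonlinear feature map stands in for the frozen layers of a source network, the source model is fit on plentiful source-domain data, and a "boost" stage fits the residual left by the source model on a few labeled target samples. All names, data, and shapes here are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(X, W):
    # Nonlinear basis plus a bias column; plays the role of hidden layers
    # shared between source and target (an assumption of this sketch).
    H = np.tanh(X @ W)
    return np.hstack([H, np.ones((X.shape[0], 1))])

def ridge_fit(Phi, y, lam=0.5):
    # Closed-form ridge regression for the output layer.
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def target_fn(X, shift=0.0):
    # Synthetic "runtime": same shape in both domains, shifted in the target.
    return np.sin(X @ np.array([1.0, -0.5, 0.25])) + shift

W = rng.normal(size=(3, 16))

# (1) Source model: plentiful labeled source-domain data.
Xs = rng.normal(size=(500, 3))
w_src = ridge_fit(features(Xs, W), target_fn(Xs))

# (2) n-shot adaptation: 20 labeled samples from the shifted target domain.
Xt = rng.normal(size=(20, 3))
resid = target_fn(Xt, shift=0.5) - features(Xt, W) @ w_src

# (3) Boost: fit the source model's residual, predict source + correction.
w_boost = ridge_fit(features(Xt, W), resid)

def predict(X):
    Phi = features(X, W)
    return Phi @ w_src + Phi @ w_boost

# Target-domain evaluation: the boosted model should beat the raw source model.
Xq = rng.normal(size=(200, 3))
yq = target_fn(Xq, shift=0.5)
err_src = float(np.mean((features(Xq, W) @ w_src - yq) ** 2))
err_adapted = float(np.mean((predict(Xq) - yq) ** 2))
print(err_src, err_adapted)
```

The closed-form ridge fit replaces gradient training only to keep the sketch short; the structural point is that the source model is kept and a small corrector is learned from n-shot target data.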


Deadline: Dec. 14, 2022, midnight