{ "id": "1911.09659", "version": "v1", "published": "2019-11-21T18:39:01.000Z", "updated": "2019-11-21T18:39:01.000Z", "title": "AdaFilter: Adaptive Filter Fine-tuning for Deep Transfer Learning", "authors": [ "Yunhui Guo", "Yandong Li", "Liqiang Wang", "Tajana Rosing" ], "categories": [ "cs.CV" ], "abstract": "There is an increasing number of pre-trained deep neural network models. However, it is still unclear how to effectively use these models for a new task. Transfer learning, which aims to transfer knowledge from source tasks to a target task, is an effective solution to this problem. Fine-tuning is a popular transfer learning technique for deep neural networks, in which a few rounds of training are applied to the parameters of a pre-trained model to adapt them to a new task. Despite its popularity, in this paper we show that fine-tuning suffers from several drawbacks. We propose an adaptive fine-tuning approach, called AdaFilter, which selects only a part of the convolutional filters in the pre-trained model to optimize on a per-example basis. We use a recurrent gated network to selectively fine-tune convolutional filters based on the activations of the previous layer. We experiment on 7 public image classification datasets, and the results show that AdaFilter reduces the average classification error of standard fine-tuning by 2.54%.", "revisions": [ { "version": "v1", "updated": "2019-11-21T18:39:01.000Z" } ], "analyses": { "keywords": [ "deep transfer learning", "adaptive filter fine-tuning", "pre-trained deep neural network models", "convolutional filters", "public image classification datasets" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }