During the planning phase of a land-cover-classification project, a machine-learning engineer proposes re-using a ResNet-50 model that was originally trained on ImageNet (natural RGB photographs) as the starting point for a new classifier.
The new task involves hyperspectral satellite images containing 128 spectral bands whose visual characteristics differ greatly from natural photographs. Only about 1,000 labeled satellite images are available, GPU time is limited, and the team intends to freeze the early convolutional layers and fine-tune the remaining layers.
Which single factor in this scenario most strongly suggests that transfer learning from the ImageNet model is likely to harm rather than help model performance?
The large spectral and visual mismatch between the ImageNet source data and the hyperspectral satellite imagery.
The restricted GPU compute budget.
The limited number of labeled satellite images (about 1,000).
The plan to freeze the early convolutional layers and fine-tune only the later layers.
Transfer learning helps when the source and target domains share related feature spaces and data distributions. A large mismatch between ImageNet RGB photos and hyperspectral satellite imagery means the low-level and high-level features learned during pre-training are unlikely to be useful for the target problem. Fine-tuning may not be able to override those unsuitable features, so the transferred weights can actually increase the target error, an effect known as negative transfer.
By contrast, a small labeled dataset, limited compute budget, and freezing early layers are common reasons to apply transfer learning, not red flags against it. They do not, by themselves, imply that the transferred knowledge will be detrimental.