You are building an automated defect-detection system for a manufacturing line. Each new product introduces previously unseen defect classes, but you usually receive only three to five labeled images per class at launch. To avoid retraining from scratch every time, you decide to apply Model-Agnostic Meta-Learning (MAML) as a few-shot learning strategy.
Which statement best explains how MAML supports rapid adaptation to the new defect classes?
It trains a GAN to synthesize additional labeled samples, converting the task into a standard large-data classification problem.
It freezes the pretrained feature extractor and retrains only the final classification layer on the new classes using logistic regression.
It meta-trains a task-agnostic initialization so that a handful of gradient updates on the small support set quickly yield high accuracy on the new defect classes.
It stores every support image in an external memory and performs nearest-neighbor lookup at inference, requiring no further gradient updates.
MAML meta-trains across many related tasks to learn model parameters that are intentionally easy to fine-tune. When deployed on a new task, the model starts from this learned initialization and needs only a few gradient-descent updates on the small support set to generalize well to the new defect classes. The other options describe real techniques, but none of them is how MAML works:
Storing support images in an external memory for nearest-neighbor lookup is characteristic of memory-augmented or metric-based networks, not MAML.
Freezing most of a network and retraining only the final layer is a standard transfer learning approach, not the meta-learning strategy employed by MAML.
Generating synthetic data with a GAN is a data-augmentation strategy, not a meta-learning algorithm designed for rapid adaptation from a few examples.
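The inner/outer loop described above can be sketched on a toy problem. The code below is a minimal, hypothetical illustration using first-order MAML (FOMAML, which drops MAML's second-order term for simplicity): each "task" is fitting y = a·x with a single scalar weight, the outer loop meta-learns an initialization w0, and at deployment a few gradient steps on a 5-example support set adapt it to an unseen task. The task distribution, learning rates, and model are invented for the sketch, not taken from the question.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.1, 0.01   # inner-loop and outer-loop learning rates

def grad(w, x, y):
    # d/dw of the MSE loss mean((w*x - y)^2)
    return 2.0 * np.mean((w * x - y) * x)

# Meta-training: sample many related tasks (here, target slopes a in [0.5, 2.0])
w0 = 0.0                  # the meta-learned, task-agnostic initialization
for _ in range(2000):
    a = rng.uniform(0.5, 2.0)
    x_s, x_q = rng.normal(size=5), rng.normal(size=5)   # support / query sets
    y_s, y_q = a * x_s, a * x_q
    w_adapted = w0 - alpha * grad(w0, x_s, y_s)         # inner update on support
    w0 -= beta * grad(w_adapted, x_q, y_q)              # outer update on query

# Deployment on an unseen task: a handful of gradient steps from w0
a_new = 1.5
x_s = rng.normal(size=5)
y_s = a_new * x_s
w = w0
for _ in range(3):        # few-shot adaptation, analogous to 3-5 labeled images
    w -= alpha * grad(w, x_s, y_s)
```

In the defect-detection scenario the model would be a deep network rather than a scalar, but the structure is the same: the meta-training loop shapes the initialization so that the short adaptation loop at the end suffices for each newly introduced class.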