
Data Preparation and Processing Flashcards

AWS Certified AI Practitioner AIF-C01 Flashcards

Front | Back
What is data augmentation? | Creating additional training samples by modifying existing data, often used in image or text datasets.
What is data cleaning? | The process of detecting and correcting errors or inconsistencies in a dataset to improve its quality.
What is data deduplication? | The process of removing duplicate records to maintain data integrity and avoid redundancy.
What is data labeling? | Assigning meaningful tags or categories to data samples to make them usable for machine learning models.
What is data transformation? | The process of changing data into a format suitable for analysis, such as normalization or encoding.
What is feature selection? | The process of choosing relevant features to improve model performance and reduce computational complexity.
What is imputation? | The process of replacing missing values in a dataset with substituted values such as the mean, median, or mode.
What is normalization? | Rescaling numeric data to a fixed range, typically 0 to 1, so that features contribute fairly to a model.
What is one-hot encoding? | Converting categorical data into binary vectors in which each category gets its own 0/1 indicator position.
What is outlier detection? | Identifying data points that differ significantly from the rest of the data, often due to errors or unusual conditions.
What is PCA (Principal Component Analysis)? | A technique that reduces dimensionality by projecting data onto the principal components that explain most of the variance.
What is SMOTE? | Synthetic Minority Oversampling Technique, used to balance datasets by generating synthetic samples for the minority class.
What is standardization? | Transforming data to have a mean of 0 and a standard deviation of 1 for consistent scaling.
What is the difference between structured and unstructured data? | Structured data is organized into rows and columns, while unstructured data lacks a predefined organization.
What is the difference between train-test split and cross-validation? | A train-test split divides the data once, whereas cross-validation repeatedly splits the data for a more reliable performance estimate.
What is the role of data integration? | Combining data from multiple sources to ensure consistency and enable meaningful analysis.
Why is data preprocessing important? | Because raw data may contain noise, errors, or irrelevant information that can hinder model learning.
Why is data splitting important? | To divide data into training, validation, and test sets for unbiased evaluation of machine learning models.
Why is feature scaling necessary? | To ensure features contribute equally to a machine learning model, avoiding dominance by features with larger values.
Why is handling missing data important? | Because missing values can degrade model performance and lead to biased results.
This deck focuses on the steps involved in preparing and processing data for machine learning models, including data cleaning, labeling, and transformation techniques. Brief code sketches illustrating several of the techniques above follow.
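Data augmentation: a minimal sketch using NumPy. The array shape, seed, and the particular transforms (a horizontal flip and small Gaussian noise) are illustrative assumptions, not part of the deck; the point is that cheap, label-preserving edits to existing samples yield extra training data.

```python
import numpy as np

# Assume `image` is a (height, width, channels) array; values here are random
# placeholders standing in for a real training image.
rng = np.random.default_rng(seed=0)
image = rng.random((32, 32, 3))

# Horizontal flip: a common, label-preserving augmentation for vision tasks.
flipped = np.flip(image, axis=1)

# Small Gaussian noise: another cheap way to derive a new training sample.
noisy = np.clip(image + rng.normal(0.0, 0.05, image.shape), 0.0, 1.0)

augmented_batch = np.stack([image, flipped, noisy])
print(augmented_batch.shape)  # (3, 32, 32, 3)
```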
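Data cleaning and deduplication: a small sketch assuming pandas, with a made-up table (the column names and defects are hypothetical) showing the kinds of problems cleaning targets, including duplicate rows, impossible values, and inconsistent casing.

```python
import numpy as np
import pandas as pd

# A tiny made-up table with deliberate defects.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "age": [34, -5, -5, np.nan, 41],        # -5 is an impossible value
    "city": ["Seattle", "austin", "austin", "Boston", "Boston"],
})

df = df.drop_duplicates()                    # deduplication: drop repeated records
df["age"] = df["age"].where(df["age"] >= 0)  # treat impossible ages as missing
df["city"] = df["city"].str.title()          # fix inconsistent casing
print(df)
```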
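One-hot encoding: a minimal sketch using pandas `get_dummies` (scikit-learn's `OneHotEncoder` is a common alternative when the encoding must be reused inside a pipeline). The `color` column is a hypothetical example.

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Each category becomes its own 0/1 indicator column.
encoded = pd.get_dummies(df, columns=["color"])
print(encoded)
```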
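Feature selection: a sketch assuming scikit-learn, using a univariate filter (`SelectKBest` with an ANOVA F-score) on the built-in iris dataset. This is one of several selection strategies; wrapper and embedded methods are alternatives.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Keep the 2 features with the strongest ANOVA F-score against the label.
selector = SelectKBest(score_func=f_classif, k=2)
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)  # (150, 4) -> (150, 2)
```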
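Imputation: a minimal sketch assuming scikit-learn's `SimpleImputer`, replacing missing values with the column mean as the deck describes; the tiny matrix is illustrative.

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan]])

# Replace each missing value with its column mean; "median" and
# "most_frequent" are other common strategies.
imputer = SimpleImputer(strategy="mean")
print(imputer.fit_transform(X))
```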
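Normalization versus standardization (and feature scaling generally): a sketch assuming scikit-learn, contrasting min-max rescaling to [0, 1] with rescaling to mean 0 and standard deviation 1 on the same illustrative column.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0], [5.0], [10.0]])

# Normalization: rescale to the [0, 1] range.
print(MinMaxScaler().fit_transform(X).ravel())   # [0.    0.444...    1.]

# Standardization: rescale to mean 0, standard deviation 1.
print(StandardScaler().fit_transform(X).ravel())
```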
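Outlier detection: a minimal sketch using NumPy and Tukey's IQR fences, one simple rule among many (z-scores and model-based detectors are common alternatives). The data and the planted outlier are made up.

```python
import numpy as np

values = np.array([10.0, 11.0, 9.5, 10.2, 85.0])  # 85.0 is the planted outlier

# Tukey's fences: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
outliers = values[(values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)]
print(outliers)  # [85.]
```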
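PCA: a sketch assuming scikit-learn, projecting the 4-dimensional iris features onto the 2 components that capture the most variance, matching the deck's definition.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

# Project 4-dimensional data onto the 2 highest-variance components.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print(X_2d.shape)                     # (150, 2)
print(pca.explained_variance_ratio_)  # share of variance each component keeps
```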
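SMOTE: a sketch assuming the third-party imbalanced-learn package (`imblearn`) alongside scikit-learn; the toy imbalanced dataset is generated for illustration.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# A deliberately imbalanced toy dataset: roughly 90% class 0, 10% class 1.
X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

# SMOTE synthesizes new minority-class points by interpolating between a
# minority sample and its nearest minority-class neighbors.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after:", Counter(y_res))
```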
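Train-test split versus cross-validation: a sketch assuming scikit-learn, contrasting one fixed holdout split with a 5-fold cross-validated estimate on the same model; the model choice and split sizes are illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Train-test split: one fixed 80/20 division of the data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("holdout accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

# 5-fold cross-validation: five different splits, averaged for reliability.
print("cv accuracy:", cross_val_score(model, X, y, cv=5).mean())
```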