Designing for Performance and Scalability (GCP PCA) Flashcards
GCP Professional Cloud Architect Flashcards
| Front | Back |
| --- | --- |
| Design considerations for globally distributed systems | Ensure consistent performance with low latency and regional autonomy using GCP’s global infrastructure |
| Design for scalability in GCP | Prioritize horizontal scaling using managed GCP services like Compute Engine and Kubernetes Engine |
| Difference between regional and zonal services in GCP? | Regional services span multiple zones for high availability, while zonal services are confined to a single zone |
| How can Global Load Balancing improve system scalability? | Uses a single anycast IP to route users to the nearest healthy backend, improving performance and capacity globally |
| How can partitioning help large datasets? | Improves query performance by dividing data into manageable sections based on criteria |
| How do service quotas affect scalability? | They cap resource usage to prevent overuse and may need to be raised for high-demand workloads |
| How does caching improve performance? | Reduces latency and database load by storing frequently accessed data closer to the client |
| How does Cloud Pub/Sub support scalability? | By decoupling components to handle unpredictable and high-throughput workloads |
| How does Google Bigtable handle scalability? | Scales horizontally to handle petabytes of data for low-latency, high-throughput workloads |
| How does Google Kubernetes Engine ensure high availability? | By distributing workloads across multiple nodes and zones automatically |
| How does Google’s global network assist scalability? | By providing low-latency connections between regions and auto-scaling support |
| How does horizontal partitioning differ from vertical partitioning? | Horizontal partitioning divides data by rows, while vertical partitioning splits it by columns |
| How do managed instance groups assist in scalability? | They automatically add or remove compute instances based on load conditions |
| How does sharding improve scalability? | By splitting a database into smaller, manageable pieces distributed across different resources |
| How to achieve eventual consistency? | Use asynchronous replication and reconciliation mechanisms |
| Importance of fault tolerance in systems design | Ensures continuity of service in the event of infrastructure or component failure |
| What is a cold failover strategy? | A failover method where resources are provisioned only after a disaster occurs |
| What is a disaster recovery strategy in GCP? | A plan to recover services and data using backups, regions, and zones after a failure |
| What is a key principle for designing high-availability systems? | Eliminate single points of failure using managed services and redundancy |
| What is auto-scaling in GCP? | Auto-scaling dynamically adjusts compute resources based on traffic or workload |
| What is blue-green deployment? | A release strategy that ensures zero downtime by switching between two environments during updates |
| What is the benefit of preemptible VMs in scaling? | They offer cost-effective compute capacity for fault-tolerant and batch processing workloads |
| What is the CAP theorem? | States that a distributed system can only guarantee two of three: Consistency, Availability, and Partition tolerance |
| What is the importance of latency monitoring? | Identifies delays in system or network response to optimize user experience |
| What is the importance of SLAs in performance design? | Guarantees uptime and performance levels for GCP services |
| What is the purpose of Cloud Spanner? | To provide a fully managed, horizontally scalable, globally consistent relational database |
| What is the purpose of rate limiting in application design? | Prevents system overloading and ensures fair resource distribution to clients |
| What is the purpose of using a circuit breaker in cloud design? | To prevent cascading failures by stopping request forwarding during system overloads |
| What is the role of Cloud Interconnect in hybrid scaling? | Provides high-speed, secure connections between on-premises and Google Cloud resources |
| What is the role of Cloud Monitoring in performance tuning? | It helps track, visualize, and optimize metrics like latency, throughput, and errors |
| What is the role of VPC in scalability? | Allows you to design scalable and secure network architectures across regions |
| What is the significance of Backup and DR with Cloud Storage? | Ensures secure, scalable, and durable data storage for recovery in case of failures |
| When to use Cloud Functions over Compute Engine? | For event-driven tasks and lightweight compute needs without managing servers |
| Why choose multi-regional storage? | For high availability and data redundancy across multiple regions |
| Why implement a hybrid cloud architecture? | Combines on-premises and GCP resources to balance cost, performance, and scalability requirements |
| Why use Cloud CDN? | To cache content at the edge for lower latency and improved performance |
| Why use Cloud Run for scalable applications? | Provides serverless compute that automatically scales with incoming requests |
| Why use Cloud SQL for transactional workloads? | Provides managed relational databases with strong consistency and replication for reliable performance |
| Why use Google Cloud Load Balancer? | To distribute traffic globally and ensure high availability and scalability |
| Why use Read Replicas for databases? | To offload read-heavy workloads and improve database performance and scaling |
This deck emphasizes designing scalable and reliable systems, factoring in performance, availability, and disaster recovery strategies with GCP tools. Hedged code sketches for the caching, Pub/Sub, sharding, rate-limiting, and circuit-breaker cards follow below.
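
For the caching card, a minimal in-process sketch of the idea: serve repeated reads from a local store with a time-to-live instead of hitting the database on every request. The `fetch_user_from_db` function and the TTL values are hypothetical placeholders; in GCP this role is typically played by Memorystore or Cloud CDN rather than an in-process dictionary.

```python
import time

# Minimal TTL cache sketch: frequently read values are served from memory
# instead of re-querying the backing database on every request.
class TTLCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]          # cache hit: no database round trip
        return None                  # miss or expired

    def put(self, key, value):
        self._store[key] = (value, time.time() + self.ttl)

def fetch_user_from_db(user_id):
    # Hypothetical stand-in for a slow database query.
    return {"id": user_id, "name": f"user-{user_id}"}

cache = TTLCache(ttl_seconds=30)

def get_user(user_id):
    user = cache.get(user_id)
    if user is None:                 # only hit the database on a miss
        user = fetch_user_from_db(user_id)
        cache.put(user_id, user)
    return user
```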
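The Cloud Pub/Sub card describes decoupling producers from consumers so each side can scale independently. A minimal publisher sketch using the `google-cloud-pubsub` Python client, assuming a project `my-project` and a topic `work-items` already exist (both names are placeholders):

```python
from google.cloud import pubsub_v1

# Producer side: publish work items to a topic instead of calling the
# consumer service directly, so producers and consumers scale independently.
project_id = "my-project"      # placeholder project ID
topic_id = "work-items"        # placeholder topic name

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)

def enqueue(payload: bytes):
    # publish() returns a future; result() blocks until the message is
    # accepted by Pub/Sub, which buffers it for subscribers to pull.
    future = publisher.publish(topic_path, data=payload)
    return future.result()

if __name__ == "__main__":
    message_id = enqueue(b"resize-image:12345")
    print(f"published message {message_id}")
```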
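For the sharding and horizontal-partitioning cards, the core idea is a deterministic key-to-shard mapping so a given key always lands on the same backend. A hash-based routing sketch; the shard hostnames are hypothetical stand-ins for separate Cloud SQL instances or similar backends:

```python
import hashlib

# Hypothetical shard backends; in practice these would be separate
# database instances or partitions.
SHARDS = [
    "db-shard-0.internal",
    "db-shard-1.internal",
    "db-shard-2.internal",
]

def shard_for(key: str) -> str:
    # Stable hash so the same key always routes to the same shard,
    # spreading rows (horizontal partitions) across backends.
    digest = hashlib.sha256(key.encode()).hexdigest()
    index = int(digest, 16) % len(SHARDS)
    return SHARDS[index]

print(shard_for("user-42"))   # deterministic: one of the three shards
print(shard_for("user-42"))   # same key -> same shard every time
```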
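The rate-limiting card can be illustrated with a token-bucket sketch: each client gets a refillable budget of requests, and calls beyond the budget are rejected instead of being allowed to overload the system. The rates and handler below are illustrative; in GCP, managed options such as Cloud Armor rate-based rules serve the same purpose.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1     # spend one token for this request
            return True
        return False             # over budget: reject instead of overloading

# One bucket per client; here 5 requests/second with bursts up to 10.
limiter = TokenBucket(rate=5, capacity=10)

def handle_request(payload):
    if not limiter.allow():
        return {"status": 429, "error": "rate limit exceeded"}
    return {"status": 200, "result": f"processed {payload}"}
```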
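Finally, the circuit-breaker card: after a run of failures the breaker "opens" and fails fast instead of forwarding more requests to an overloaded dependency, then allows a trial request after a cooldown. A minimal sketch; the thresholds and the downstream call are placeholders.

```python
import time

class CircuitBreaker:
    """Opens after `failure_threshold` consecutive errors; retries after `reset_timeout` seconds."""

    def __init__(self, failure_threshold=5, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Open state: fail fast, do not forward the request downstream.
                raise RuntimeError("circuit open: downstream unavailable")
            self.opened_at = None        # half-open: allow one trial request
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0                # success resets the failure count
        return result

# Usage sketch with a hypothetical downstream call:
breaker = CircuitBreaker(failure_threshold=3, reset_timeout=10.0)

def call_inventory_service(item_id):
    # Placeholder for an RPC/HTTP call that may fail under overload.
    return {"item": item_id, "stock": 7}

print(breaker.call(call_inventory_service, "sku-123"))
```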