GCP Professional Cloud Architect Practice Test
Use the form below to configure your GCP Professional Cloud Architect Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

GCP Professional Cloud Architect Information
What the GCP Professional Cloud Architect Exam Measures
Google’s Professional Cloud Architect certification is designed to validate that an individual can design, develop and manage robust, secure, scalable, highly available, and dynamic solutions on Google Cloud Platform (GCP). Candidates are expected to understand cloud architecture best practices, the GCP product portfolio, and how to translate business requirements into technical designs. In addition to broad conceptual knowledge—network design, security, compliance, cost optimization—the exam emphasizes real-world decision-making: choosing the right storage option for a given workload, planning a secure multi-tier network, or architecting a resilient data-processing pipeline.
Format, Difficulty, and Prerequisites
The test lasts two hours, is proctored (either onsite or online), and consists of 50–60 multiple-choice and multiple-select questions. Questions are scenario-based; many describe a fictitious company’s requirements and ask which architecture or operational change best meets cost, performance, or security needs. Although there are no formal prerequisites, Google recommends at least three years of industry experience, including a year of hands-on work with GCP. Because the exam’s scope is wide, candidates often discover that depth in one or two products (e.g., BigQuery or Cloud Spanner) is insufficient; a successful architect must be equally comfortable with compute, networking, IAM, data analytics, and DevOps considerations.
GCP Professional Cloud Architect Exam Practice Exams for Readiness
Taking full-length practice exams is one of the most effective ways to gauge exam readiness. Timed mock tests recreate the stress of the real assessment, forcing you to manage the clock and make decisions under pressure. Detailed answer explanations expose gaps in knowledge—particularly around edge-case IAM policies, VPC peering limits, or cost-optimization trade-offs—that casual study can miss. Many candidates report that after scoring consistently above 80 % on high-quality practice tests, their real-exam performance feels familiar rather than daunting. Equally important, reviewing why a distractor option is wrong teaches nuanced differences between seemingly similar GCP services (for example, Cloud Load Balancing tiers or Pub/Sub vs. Cloud Tasks), sharpening the judgment skills the exam prizes.
Building a Personal Study Plan
Begin with Google’s official exam guide and skill-marker documents, mapping each bullet to hands-on demos in Cloud Shell or a free-tier project. Allocate weekly blocks: architecture design sessions, product-specific deep dives, and at least three full practice exams spaced over several weeks. Complement those with whitepapers (e.g., the Site Reliability Workbook), case studies, and the latest architecture frameworks. Finally, revisit weak domains using Qwiklabs or self-built mini-projects—such as deploying a canary release pipeline or designing a multi-region Cloud Spanner instance—to transform theoretical understanding into muscle memory. By combining structured study, real-world experimentation, and targeted practice exams, candidates enter test day with both confidence and the architect’s holistic mindset Google is looking for.

Free GCP Professional Cloud Architect Practice Test
- 20 Questions
- Unlimited time
- Designing and planning a cloud solution architecture
- Managing and provisioning a solution infrastructure
- Designing for security and compliance
- Analyzing and optimizing technical and business processes
- Managing implementation
- Ensuring solution and operations excellence
Free Preview
This test is a free preview, no account required.
Your organization hosts dev, test, and production workloads in distinct Google Cloud projects under one Cloud Billing account. Finance requires that when aggregated monthly spend hits 90 % of a US$50 000 budget, the FinOps mailing list is notified. At 100 % spend, every Compute Engine VM in the dev and test projects must shut down automatically while production stays running. You want a low-overhead solution using only managed Google Cloud services. What should you do?
Configure a Cloud Monitoring alert on the billing/gcp_cost metric at 90 % and 100 %; have the alert trigger a Cloud Run job that shuts down all non-production VMs.
Set organization-level Compute Engine quotas for dev and test projects to zero once overall spend reaches 100 % using an Organization Policy constraint.
Create a single Cloud Billing Budget with 90 % email and 100 % Pub/Sub thresholds; trigger a Cloud Function that calls the Cloud Billing and Compute Engine APIs to disable billing and stop VMs in dev and test projects.
Purchase a US$50 000 monthly Committed Use Discount for dev and test workloads and export billing data to BigQuery for manual review of overruns.
Answer Description
Create a single Cloud Billing Budget that covers all projects and sets two threshold rules: one at 90 % with email notifications to the FinOps list, and one at 100 % that publishes a message to a Cloud Pub/Sub topic. Deploy a Cloud Function subscribed to that topic. When triggered, the function calls the Cloud Billing API to disable billing on the dev and test projects and uses the Compute Engine API to stop their VM instances. This relies entirely on managed services (Budgets, Pub/Sub, Cloud Functions, and the Cloud APIs), fulfilling both the alerting and automated remediation requirements without introducing additional infrastructure.
Other options fall short:
- Relying solely on Cloud Monitoring alerts cannot enforce hard spend stops and does not integrate directly with budgets.
- Purchasing CUDs and exporting data to BigQuery offers no enforcement or timely alerting.
- Manipulating quotas or policy constraints does not correlate with real-time spend and cannot guarantee shut-down exactly at 100 % of the budget.
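As a rough illustration of the correct option, a Pub/Sub-triggered Cloud Function along the following lines could enforce the 100 % threshold. The project IDs, zone list, and entry-point name are hypothetical; the calls follow the documented Cloud Billing and Compute Engine APIs, and the function's service account would need billing-management and compute permissions on the dev and test projects.

```python
import base64
import json

from googleapiclient import discovery

# Hypothetical values for illustration.
NON_PROD_PROJECTS = ["acme-dev", "acme-test"]
ZONES = ["us-central1-a", "us-central1-b"]


def handle_budget_alert(event, context):
    """Pub/Sub-triggered Cloud Function for Cloud Billing budget notifications."""
    notification = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    cost = notification["costAmount"]
    budget = notification["budgetAmount"]

    # Below the 100 % enforcement point; the 90 % email threshold is handled by Budgets itself.
    if cost < budget:
        return

    billing = discovery.build("cloudbilling", "v1", cache_discovery=False)
    compute = discovery.build("compute", "v1", cache_discovery=False)

    for project in NON_PROD_PROJECTS:
        # Detach the billing account so no further spend accrues in dev/test.
        billing.projects().updateBillingInfo(
            name=f"projects/{project}",
            body={"billingAccountName": ""},
        ).execute()

        # Stop every Compute Engine instance in the project.
        for zone in ZONES:
            instances = compute.instances().list(project=project, zone=zone).execute()
            for instance in instances.get("items", []):
                compute.instances().stop(
                    project=project, zone=zone, instance=instance["name"]
                ).execute()
```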
Your organization's finance department wants a weekly dashboard that compares actual Google Cloud spend against predefined limits for each cost center. They also want an automated notification sent to a Pub/Sub topic when 80 % of any limit is reached. As the cloud architect, which approach should you recommend?
Create separate Cloud Billing budgets for each cost center with an 80 % threshold that publishes to Pub/Sub, export billing data to BigQuery, and build a Looker Studio dashboard on that dataset.
Purchase committed use discounts per cost center, schedule weekly PDF Billing Reports emails, and configure Eventarc triggers to relay any budgetExceeded events to Pub/Sub.
Enable Cloud Monitoring billing metrics, set an alerting policy at 80 % of monthly cost, and visualize spending with Metrics Explorer charts.
Turn on the Recommender API for cost insights, forward recommendation notifications to Pub/Sub, and rely on the console's Cost Table report for weekly reviews.
Answer Description
Cloud Billing Budgets can be created per cost center (using billing sub-accounts, projects or labels) with threshold rules that publish budgetExceeded messages to Pub/Sub. Detailed cost data can be exported continuously to BigQuery, where Looker Studio (formerly Data Studio) can build a weekly dashboard that slices spend by cost center. The other options miss one or both requirements: Cloud Monitoring billing metrics do not provide fine-grained cost-center breakdowns; Recommender APIs and console cost tables cannot automate Pub/Sub alerts; committed use discounts and PDF reports do not satisfy the near-real-time alerting or interactive dashboard needs.
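As a sketch of the dashboard's data layer, the query below aggregates net spend per cost center from the standard billing export in BigQuery. The project, dataset, export-table suffix, and the cost_center label key are assumptions; Looker Studio would then be pointed at the same dataset or at a scheduled-query results table.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical billing-export table and label key.
QUERY = """
SELECT
  (SELECT value FROM UNNEST(labels) WHERE key = 'cost_center') AS cost_center,
  SUM(cost)
    + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) AS c), 0)) AS net_cost
FROM `finops-project.billing.gcp_billing_export_v1_XXXXXX`
WHERE invoice.month = '202406'
GROUP BY cost_center
ORDER BY net_cost DESC
"""

for row in client.query(QUERY).result():
    print(row.cost_center, round(row.net_cost, 2))
```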
MedDeviceCo streams 5 TB of patient telemetry data daily from thousands of devices. The solution must (a) ingest the high-throughput stream reliably, (b) surface aggregated metrics to clinicians within five seconds, (c) archive all raw events unchanged for seven years at the lowest feasible cost while keeping data encrypted in transit and at rest, and (d) support a weekly machine-learning training job on the full historical data set with minimal operational effort. Which Google Cloud design best satisfies these functional and non-functional requirements?
Publish messages to Cloud Pub/Sub, use Dataflow streaming to write aggregates to BigQuery while archiving raw data to a Cloud Storage Archive bucket, and run weekly BigQuery ML training jobs on the stored data.
Invoke Cloud Functions from devices to insert records into Cloud SQL; export daily SQL dumps to Filestore for seven-year retention and train models in Vertex AI using imported CSV files.
Ingest data through Cloud IoT Core directly into Cloud Bigtable replicated across regions; query it from BigQuery using federated connectors and train models in AI Platform on exported snapshots.
Send device data through an external HTTP(S) Load Balancer to GKE pods that persist events in Firestore; export collections nightly to Cloud Storage Standard and run weekly Dataproc Spark jobs for model training.
Answer Description
Cloud Pub/Sub provides globally distributed, highly durable ingestion that scales to millions of messages per second without manual sharding. A streaming Dataflow pipeline can branch the stream, writing near-real-time aggregates to BigQuery and archiving the unchanged raw events to a Cloud Storage Archive bucket, the lowest-cost tier for long-term retention. Dashboards can query the streamed aggregates in BigQuery within seconds, and BigQuery ML can train models directly on the data without export, satisfying the weekly retraining requirement. All chosen services encrypt data in transit and at rest and are HIPAA-eligible. The alternative designs either cannot meet the ingest scale and latency requirements (Cloud SQL, Firestore), rely on a deprecated service (IoT Core), or use a more expensive storage class, so they fail one or more stated constraints.
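As one possible shape for the weekly retraining step, the sketch below issues a BigQuery ML CREATE MODEL statement over the stored telemetry. The dataset, table, model type, and column names are placeholders; the job could be kicked off weekly by BigQuery scheduled queries or Cloud Scheduler.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder dataset, table, and columns for illustration.
TRAIN_MODEL_SQL = """
CREATE OR REPLACE MODEL `telemetry.anomaly_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['is_anomalous']) AS
SELECT
  device_type,
  avg_heart_rate,
  avg_oxygen_saturation,
  is_anomalous
FROM `telemetry.daily_aggregates`
WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 YEAR)
"""

# Runs entirely inside BigQuery; no data export is required.
client.query(TRAIN_MODEL_SQL).result()
```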
You are modernizing a food delivery platform currently running as a single virtual machine on Compute Engine. The business wants to split it into three loosely coupled services (Ordering, Rider Allocation, Notification). Requirements: each service must scale independently, communicate asynchronously to absorb surges, continue operating during downstream outages, and minimize infrastructure management with an event-driven model. Which design best meets these needs?
Deploy each service as Cloud Functions subscribed to its own Cloud Pub/Sub topic for inter-service messaging.
Package the services into Cloud Run and use Cloud Tasks queues for communication between them.
Migrate the services to separate App Engine flexible applications and rely on Cloud SQL tables for coordination.
Run the three services in a GKE cluster using StatefulSets and RabbitMQ for internal queues you operate.
Answer Description
Cloud Functions scale horizontally based on incoming events with no servers to manage. When each service publishes events to a dedicated Cloud Pub/Sub topic, producers and consumers are fully decoupled, messages are durably stored and delivered at-least-once, and back-pressure is automatically absorbed. This satisfies independent scaling, asynchronous communication, and tolerance of downstream failures. Operating a RabbitMQ cluster on GKE requires additional administration, Cloud Tasks is intended for point-to-point task dispatch rather than broadcast event streams, and using Cloud SQL tables with App Engine is synchronous and tightly couples services.
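A minimal sketch of one such function follows, assuming hypothetical project, topic, and payload field names: the Ordering function consumes an order event from its topic and publishes a follow-up event for Rider Allocation.

```python
import base64
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical project and topic names.
RIDER_TOPIC = publisher.topic_path("delivery-platform", "rider-allocation-events")


def handle_order_event(event, context):
    """Triggered by messages published to the ordering topic."""
    order = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    # ... persist the order, validate payment, etc. ...

    # Publish an event for the Rider Allocation service. Delivery is decoupled
    # and at-least-once, so a downstream outage does not block this function.
    payload = json.dumps({"order_id": order["id"], "pickup": order["restaurant"]})
    publisher.publish(RIDER_TOPIC, data=payload.encode("utf-8"))
```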
Your team is migrating 50 TB of transactional data from an on-premises MySQL database to Cloud Spanner by using a Dataflow pipeline. Before cut-over, auditors require evidence that every source table's row count and column-level checksums are identical in the target system. You need an automated, scalable solution that can read from both databases and output a summary report highlighting any mismatches. Which Google-supported approach should you recommend?
Create Cloud Monitoring uptime checks with a custom metric that queries each table's row count and triggers an alert on mismatch.
Enable Datastream schema drift detection and export the alert logs to Cloud Logging for post-migration analysis.
Use Cloud SQL Insights to collect query statistics from both environments and manually compare the results.
Run the open-source Data Validation Tool as a Dataflow job to compare row counts and column aggregates between MySQL and Cloud Spanner.
Answer Description
The open-source Data Validation Tool (DVT) is designed specifically to verify large-scale data migrations. It runs Apache Beam pipelines (locally or on Dataflow) that connect to two heterogeneous data stores, compute row counts and configurable column aggregates (such as checksums), and write a summary of any discrepancies. Datastream's drift alerts focus on schema changes, not data accuracy. Cloud SQL Insights surfaces query performance, not data parity. Cloud Monitoring uptime checks report service availability, not detailed table-level comparisons.
Your company uses a customer-managed symmetric encryption key stored in Cloud KMS to protect objects in a production Cloud Storage bucket. A Compute Engine service account must upload and download objects that the bucket automatically encrypts with this key. Compliance mandates that only the central Security team can rotate, disable, or otherwise administer the key. Which single IAM role should you grant to the service account on the specific CryptoKey to satisfy these requirements?
Grant roles/cloudkms.admin on the CryptoKey.
Grant roles/storage.objectAdmin on the Cloud Storage bucket.
Grant roles/owner on the project that contains the key ring.
Grant roles/cloudkms.cryptoKeyEncrypterDecrypter on the CryptoKey.
Answer Description
Granting the Cloud KMS CryptoKey Encrypter/Decrypter role (roles/cloudkms.cryptoKeyEncrypterDecrypter) on the individual CryptoKey lets the service account call Encrypt and Decrypt, exactly what it needs to read and write data protected by the key. The role does not include administrative permissions such as update, disable, destroy, or setIamPolicy, so key management remains exclusively with the Security team. Granting roles/cloudkms.admin or project-level Owner would violate least privilege by allowing key administration. Granting storage-specific roles does not give the service account the cryptographic permissions required to use the CMEK.
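For illustration, the binding could be added on the CryptoKey with the Cloud KMS client library as sketched below; the project, location, key ring, key, and service-account names are placeholders.

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Hypothetical resource and member names.
key_name = client.crypto_key_path(
    "prod-project", "us-central1", "prod-keyring", "storage-cmek"
)
member = "serviceAccount:app-vm@prod-project.iam.gserviceaccount.com"

# Read-modify-write the key's IAM policy, adding only the encrypt/decrypt role.
policy = client.get_iam_policy(request={"resource": key_name})
policy.bindings.add(
    role="roles/cloudkms.cryptoKeyEncrypterDecrypter",
    members=[member],
)
client.set_iam_policy(request={"resource": key_name, "policy": policy})
```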
Your company, a global fashion retailer, is launching a new Golang-based microservice for its e-commerce storefront. Business stakeholders require worldwide user latency below 150 ms, the ability to absorb flash-sale traffic spikes 20× above the daily average, zero-downtime releases, and minimal infrastructure operations. For the next six months the service must continue to use the existing on-premises PostgreSQL database while a full migration is planned. Which Google Cloud architecture best satisfies these requirements?
Run the microservice in an App Engine flexible environment in one region, synchronize inventory changes to a Cloud Bigtable instance via Datastream, and rely on App Engine automatic scaling and Cloud Armor for DDoS protection.
Package the service into a container and deploy it to Cloud Run in multiple regions behind a global external HTTPS load balancer with Cloud CDN; connect to the on-premises PostgreSQL database over a Dedicated Interconnect protected by HA VPN; manage zero-downtime releases with Cloud Deploy canary rollouts.
Use a regional managed instance group of preemptible Compute Engine VMs behind a regional external HTTPS load balancer, scale on CPU, connect to the on-premises PostgreSQL database over Cloud VPN, and perform blue/green deployments by swapping instance templates.
Create a single-region GKE Standard cluster with node auto-provisioning, expose the service through an internal load balancer and Cloud NAT, and replicate the on-premises PostgreSQL database to Cloud SQL using Database Migration Service before cut-over.
Answer Description
Running the workload on Cloud Run delivers automatic scaling from zero to thousands of container instances without manual VM management, aligning with the minimal-ops requirement and handling 20× traffic spikes. Deploying Cloud Run services in multiple regions behind a global external HTTPS load balancer and Cloud CDN keeps user latency low worldwide. A Dedicated Interconnect supplemented with HA VPN provides high-bandwidth, encrypted connectivity to the on-premises PostgreSQL instance until it is migrated. Cloud Deploy supports progressive rollouts (canary or blue/green) for Cloud Run, enabling zero-downtime releases. The other options either rely on regional or preemptible VM infrastructure that increases operational burden, lack a global front-end, or move the database prematurely, so they fail to meet one or more stated business requirements.
An online travel platform runs about 50 microservices on GKE Autopilot in three regions. Architects must:
- page operators when the 95th-percentile latency from the external HTTP(S) load balancer to the booking API exceeds 300 ms for 5 minutes (burn-rate alert on an SLO).
- let engineers view end-to-end request traces, including backend database calls, without modifying application code.
Which approach meets both goals with the least operational effort?
Configure a Cloud Monitoring uptime check against the booking endpoint with an alerting policy on availability, and deploy the Cloud Trace agent as a DaemonSet in each cluster to capture traces.
Enable Anthos Service Mesh so Envoy sidecars automatically export request-latency distribution metrics to Cloud Monitoring; define an SLO with burn-rate alerting on the 95th-percentile latency metric; rely on ASM's built-in Cloud Trace integration for distributed tracing.
Install Prometheus and Jaeger in each cluster to scrape service metrics and collect traces, then create Prometheus-based alert rules for high latency.
Create a logs-based metric for request latency, attach a burn-rate alert to it, and instrument all services with OpenTelemetry libraries to emit Cloud Trace spans.
Answer Description
Anthos Service Mesh (ASM) automatically injects Envoy sidecars that export server request-latency distribution metrics to Cloud Monitoring and send distributed traces to Cloud Trace without requiring any application-level instrumentation. Those latency histograms can be used to define an SLO on the 95th-percentile latency of the booking service, and Cloud Monitoring supports burn-rate alerting on that SLO, enabling paging when the 300 ms objective is violated. The other options either rely on simpler uptime checks that cannot express a percentile-based SLO, require developers to add OpenTelemetry libraries, or add and operate separate Prometheus/Jaeger stacks, all of which increase operational burden.
Your company's customer-facing web app runs in a regional managed instance group (MIG) in us-central1 behind a global external HTTP(S) load balancer. It stores transactions in a regional Cloud SQL for MySQL instance with HA in the same region. The BCP now demands recovery within 5 minutes and ≤15 minutes data loss if the entire us-central1 region goes down. Budgets forbid major re-platforming. Which architecture most cost-effectively meets these RTO/RPO targets?
Store transaction data in a multi-region Cloud Storage bucket served through Cloud CDN, keep the Compute Engine MIG in us-central1, and use Cloud Functions to recreate instances if the region fails.
Replace Cloud SQL with a multi-region Cloud Spanner instance, move the application to Cloud Run with multi-region deployment, and configure Serverless NEGs behind the existing load balancer.
Provision an additional multi-zonal MIG in us-east1, add it to the existing global HTTP(S) load balancer, create a cross-region Cloud SQL read replica in us-east1, and automate promotion of the replica and database connection failover when us-central1 becomes unreachable.
Convert the current MIG to a regional MIG, enable Cloud SQL automatic failover zones, and rely on the global load balancer's health checks to shift traffic among zones within us-central1.
Answer Description
Creating a second multi-zonal MIG in another region and adding it to the existing global HTTP(S) load balancer provides compute capacity even if the original region is lost. A cross-region Cloud SQL read replica keeps data nearly up to date (seconds to minutes replication lag), satisfying a 15-minute RPO. Automating promotion of the replica and updating the application's connection string can bring the service back within the 5-minute RTO without changing platforms. Migrating to Cloud Spanner or Cloud Run would meet the objectives but introduces significant cost and re-engineering. Restricting resources to a single region, whether by expanding to more zones or changing the data store, does not address a regional outage and therefore fails the new requirements.
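As a sketch of the failover automation (project and instance names are hypothetical), a Cloud Function or runbook script could promote the us-east1 replica through the Cloud SQL Admin API and then repoint the application's database connection:

```python
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1", cache_discovery=False)

# Hypothetical project and replica instance names.
PROJECT = "retail-prod"
REPLICA = "orders-mysql-us-east1-replica"

# Promote the cross-region read replica to a standalone primary instance.
operation = sqladmin.instances().promoteReplica(
    project=PROJECT, instance=REPLICA
).execute()
print("Promotion started:", operation["name"])

# After promotion completes, update the application's connection string or
# secret (for example, in Secret Manager) to point at the promoted instance.
```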
An e-commerce company has containerized an image-processing service that uses ImageMagick to create product thumbnails whenever a new image file is written to a Cloud Storage bucket. Upload traffic is unpredictable: some days there are no images, while major sales events trigger thousands of uploads per minute. The team wants to trigger the processing automatically from Cloud Storage events, reuse the existing container without code changes, pay only while requests are handled, and have the service scale from zero to meet sudden spikes. Which Google Cloud managed compute option best meets these requirements while minimizing operational overhead?
Host the container on a GKE Autopilot cluster and configure a Horizontal Pod Autoscaler driven by Pub/Sub messages.
Create a regional managed instance group of pre-emptible Compute Engine VMs that poll the bucket for new objects.
Refactor the code into Cloud Functions (1st generation) with a Cloud Storage trigger.
Deploy the container on Cloud Run and use Eventarc to forward Cloud Storage object-create events to the service.
Answer Description
Cloud Run is designed to run stateless container images without infrastructure management. It supports any containerized application that listens for HTTP requests, so the existing ImageMagick-based container can be used unchanged. Eventarc can deliver Cloud Storage object-create events to Cloud Run, satisfying the event-driven trigger requirement. Cloud Run automatically scales instances up rapidly in response to traffic and back down to zero when idle, so the company pays only for actual request processing time.
Cloud Functions (1st gen) would require rewriting the service into a supported language runtime and does not allow bringing an arbitrary container image. GKE Autopilot removes some cluster management tasks but still incurs per-pod charges even when idle and does not scale to zero by default. A managed instance group on Compute Engine needs VM administration and keeps instances running, leading to higher idle cost.
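A hedged sketch of the Cloud Run request handler is shown below. It assumes Eventarc is configured with a direct Cloud Storage trigger (object finalized), so the event body carries the object's bucket and name, and it assumes a hypothetical thumbnail bucket; the existing ImageMagick binary is reused via subprocess.

```python
import subprocess
import tempfile

from flask import Flask, request
from google.cloud import storage

app = Flask(__name__)
storage_client = storage.Client()

# Hypothetical output bucket for generated thumbnails.
THUMBNAIL_BUCKET = "acme-product-thumbnails"


@app.route("/", methods=["POST"])
def handle_gcs_event():
    # Eventarc delivers the Cloud Storage object payload as the request body.
    payload = request.get_json()
    bucket_name, object_name = payload["bucket"], payload["name"]

    with tempfile.NamedTemporaryFile(suffix="-" + object_name.split("/")[-1]) as src, \
            tempfile.NamedTemporaryFile(suffix=".png") as thumb:
        storage_client.bucket(bucket_name).blob(object_name).download_to_filename(src.name)

        # Reuse the existing ImageMagick tooling unchanged.
        subprocess.run(["convert", src.name, "-thumbnail", "200x200", thumb.name], check=True)

        storage_client.bucket(THUMBNAIL_BUCKET).blob(
            f"thumbs/{object_name}"
        ).upload_from_filename(thumb.name)

    return ("", 204)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```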
An EU-based ticketing company will migrate its containerized web application from on-premises to Google Cloud. The service must: handle 10× traffic spikes without operator action, survive loss of an entire region, keep customer data in the EU, achieve 15-minute RPO and 1-hour RTO, and cost less than adopting Cloud Spanner. Which high-level architecture best meets these business requirements?
Migrate to App Engine standard environment in the EU multiregion and store all transactional data in a multi-region Cloud Spanner instance with automatic leader rebalancing.
Deploy the workload to a regional GKE Autopilot cluster with nodes in multiple zones of europe-west1; use a single-region Cloud SQL for MySQL HA instance in the same region; front-end with Cloud CDN and a global external HTTP(S) Load Balancer.
Use managed instance groups in two zones of europe-west1 and host MySQL on Compute Engine VMs; copy nightly persistent-disk snapshots to a dual-region Cloud Storage bucket; serve traffic through a regional external HTTP(S) Load Balancer.
Run the containers on Cloud Run in europe-west1 and europe-west4 behind a global external HTTP(S) Load Balancer; use a Cloud SQL for MySQL HA primary in europe-west1 with a cross-region read replica in europe-west4 that is promoted if the primary region fails; schedule 5-minute automated backups.
Answer Description
Running the stateless application on Cloud Run in two European regions allows automatic scale-out from zero to many instances without manual intervention and can be fronted by a global external HTTP(S) Load Balancer that provides cross-regional fail-over. Storing data in Cloud SQL for MySQL with a highly available primary instance in europe-west1 and an asynchronous cross-region read replica in europe-west4 keeps data within the EU, delivers an RPO of seconds to minutes (well under 15 minutes), and, after scripted promotion, meets the 1-hour RTO target. Because Cloud SQL is priced per instance and per GB rather than per node like Cloud Spanner, it offers lower total cost for typical e-commerce database sizes.
The GKE-only and single-region designs do not meet the regional fail-over requirement, while the App Engine plus Cloud Spanner option satisfies the objectives but violates the cost-optimization mandate. Nightly snapshot-based replication cannot achieve the required 15-minute RPO.
An online ticketing company runs its payment API on Compute Engine managed instance groups (MIGs) in the us-central1 region behind an external HTTP(S) load balancer. Orders are stored in a regional Cloud SQL for MySQL instance that uses standard (single-primary) configuration. Management mandates that if the entire us-central1 region becomes unavailable, payment processing must resume within 30 minutes and lose at most 5 minutes of data, while incurring the lowest additional cost that still meets these objectives. Which disaster-recovery design should you recommend?
Create a cross-region read replica of Cloud SQL in us-east1, automate replica promotion on failure, and deploy an equivalent MIG in us-east1 as a failover backend of the same global HTTP(S) load balancer.
Enable Cloud SQL high-availability (regional) configuration and add a second zonal MIG in another zone of us-central1 behind an internal TCP/UDP load balancer.
Migrate the database to a multi-region Cloud Spanner instance for zero RPO and keep the application deployed only in us-central1 to minimize compute costs.
Schedule nightly Cloud SQL exports to Cloud Storage, replicate the export bucket to us-east1, and prepare Deployment Manager templates to recreate the database and VMs on demand after an outage.
Answer Description
Creating a cross-region read replica of the Cloud SQL primary in another region keeps the replica asynchronously updated, and typical replication lag is seconds, well inside the 5-minute RPO. Promoting the replica to primary during a regional outage is a documented procedure that can be automated to finish in minutes, satisfying the 30-minute RTO. Deploying a second MIG in the same target region and adding it as a failover backend to the global external HTTP(S) load balancer lets traffic switch automatically when the original region is unreachable, with minimal incremental cost (only the replica instance and additional autoscaled VMs).
Enabling only regional HA for Cloud SQL or placing all resources in one region cannot survive a full-region failure. Relying on nightly exports and manual recreation would exceed both the 5-minute RPO and 30-minute RTO. Migrating to Cloud Spanner multi-region would meet the objectives but at a significantly higher ongoing cost than a single Cloud SQL primary plus one read replica, so it is not the lowest-cost solution that satisfies the requirements.
Your organization must perform a quarterly penetration test against its production workloads hosted in three Google Cloud projects. The security team plans to run credential-brute-forcing, SQL-injection, and low-volume denial-of-service (DoS) checks from an external test network. They want to know what permissions or notifications are required before they start. What should you tell them?
Proceed without notifying Google, provided the tests are limited to your own projects and exclude any traffic-flooding or resource-exhaustion scenarios that would violate the Acceptable Use Policy.
Request an authorization token through Security Command Center and include the DoS checks because Google permits low-volume DoS tests once the token is issued.
File a notification only if the test targets IAM policies; other attack vectors such as SQL injection or DoS do not require any communication with Google.
Open a support case at least two weeks in advance and wait for written approval from Google before performing any penetration activity.
Answer Description
Google no longer requires customers to request or obtain explicit approval before running penetration tests as long as the activity is limited to the customer's own GCP resources and complies with the Acceptable Use Policy (AUP). The AUP forbids tests that could adversely affect Google services or other tenants, including any form of DoS or resource-exhaustion attack. Therefore the team may proceed without filing a request only if they omit the DoS portion of the plan; otherwise the activity would violate the AUP.
The other options are incorrect because:
- Google does not offer an approval queue or authorization token for routine penetration tests, and filing a special request is unnecessary.
- Cloud Security Command Center does not issue "tokens" for destructive testing, nor does it override the AUP.
- IAM-specific probes are treated the same as any other penetration test; no separate notification channel exists.
Your team operates a regional GKE cluster that serves a real-time bidding API. Each request causes short CPU spikes, and traffic varies widely: during peaks pods hit 90 % CPU and new replicas stay Pending because the nodes are full; at night overall load falls to 5 %. You must absorb traffic bursts automatically yet keep VM spending low during quiet periods with no manual actions. Which configuration best meets these goals?
Use a Horizontal Pod Autoscaler targeting 70 % CPU and also enable Cluster Autoscaler on the node pool with a low minimum and a high maximum node count.
Migrate the Deployment to Cloud Run and set concurrency to allow scaling from zero to many instances.
Create a Horizontal Pod Autoscaler targeting 70 % CPU but keep the node pool at a fixed size.
Enable Cluster Autoscaler on the node pool with min 0 and max 20 nodes but remove any Horizontal Pod Autoscaler.
Answer Description
Horizontal Pod Autoscaler (HPA) adds or removes pod replicas in response to metrics such as CPU load, letting the service grow when demand rises and shrink when it falls. However, replicas can only start if the cluster has spare capacity. Cluster Autoscaler (CA) watches for Pending pods and, when none of the current nodes can fit them, increases the node pool size; it also scales nodes back down when they sit unused. Using HPA without CA could still leave replicas Pending, while CA alone would not increase pod count when CPU rises. Combining HPA with CA lets pods and nodes grow together during peaks and both contract during off-hours, minimizing cost. Moving the workload to Cloud Run would abandon the existing GKE deployment rather than scale it, so it does not satisfy the stated requirements.
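As an illustration of the pod-level half of that combination (the node-level half is Cluster Autoscaler enabled on the node pool with a low minimum and high maximum), the sketch below creates a CPU-based HPA with the Kubernetes Python client's autoscaling/v2 API. The Deployment name, namespace, and replica bounds are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster

# Hypothetical Deployment name, namespace, and bounds.
hpa = client.V2HorizontalPodAutoscaler(
    api_version="autoscaling/v2",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="bidding-api-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="bidding-api"
        ),
        min_replicas=2,
        max_replicas=50,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(type="Utilization", average_utilization=70),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```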
Your e-commerce team runs a stateless checkout service on Cloud Run deployed to us-central1 and europe-west1. You must release version 2 without customer downtime, observe error rates for a sample of real traffic, and instantly revert if problems appear, all while avoiding the cost of duplicating the entire stack in a separate environment. Which deployment strategy best meets these requirements?
Rely on Cloud Run automatic rollbacks and perform an in-place rolling update that immediately shifts 100 % of traffic to version 2.
Deploy version 2 as a separate Cloud Run service and switch the external HTTP(S) Load Balancer to it after final tests (blue-green).
Migrate the service to Google Kubernetes Engine and perform a rolling update with maxSurge 1 and maxUnavailable 0 to prevent downtime.
Use Cloud Run's traffic-splitting feature to implement a canary release, starting with a small percentage of traffic on the new revision and increasing it after monitoring.
Answer Description
Cloud Run allows multiple revisions of the same service to be active at once and supports fine-grained traffic splitting between them. By directing a small percentage of production traffic to the new revision first, the team can monitor live metrics and gradually increase the percentage when healthy. Reverting is instantaneous: simply shift all traffic back to the previous revision. Creating an entirely separate service behind a load balancer resembles blue-green deployment but duplicates resources. An in-place rolling update without traffic control offers no safe observation window, and moving the workload to GKE introduces unnecessary complexity for this requirement.
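A sketch of the canary traffic shift using the Cloud Run Admin API client library is shown below; the project, region, service, and revision names are placeholders, and the same split can equally be driven from gcloud or a delivery pipeline.

```python
from google.cloud import run_v2

client = run_v2.ServicesClient()

# Hypothetical project, region, service, and revision names.
name = "projects/acme-shop/locations/us-central1/services/checkout"
service = client.get_service(name=name)

# Send 10 % of traffic to the new revision and keep 90 % on the current one.
service.traffic = [
    run_v2.TrafficTarget(
        type_=run_v2.TrafficTargetAllocationType.TRAFFIC_TARGET_ALLOCATION_TYPE_REVISION,
        revision="checkout-00042-v2",
        percent=10,
    ),
    run_v2.TrafficTarget(
        type_=run_v2.TrafficTargetAllocationType.TRAFFIC_TARGET_ALLOCATION_TYPE_REVISION,
        revision="checkout-00041-v1",
        percent=90,
    ),
]

# update_service returns a long-running operation; wait for it to finish.
client.update_service(service=service).result()
```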
A retail company needs to extend its on-premises network to Google Cloud while a larger migration is planned. Each site already has dual 1-Gb internet circuits and can tolerate occasional extra latency but requires encrypted traffic and at least 99.99 % availability. They want a solution deployable within days and without installing new physical links. Which connectivity option best satisfies the interim requirements?
Create an HA Cloud VPN gateway and establish two redundant BGP tunnels to Cloud Router for dynamic routing.
Provision a 10-Gbps Dedicated Interconnect at a nearby colocation facility and attach it to a Cloud Router.
Order 200-Mbps Partner Interconnect VLAN attachments through a service provider and enable BGP routing.
Configure Classic Cloud VPN tunnels with static routes on a single VPN gateway.
Answer Description
Highly available (HA) Cloud VPN creates redundant IPsec tunnels over the public internet and exchanges routes with Cloud Router. Because it uses existing ISP links, it can be provisioned in software within minutes, provides encryption in transit, and carries a 99.99 % SLA when redundant tunnels are configured across both interfaces of the HA VPN gateway. Dedicated and Partner Interconnects meet or exceed the availability goal but require ordering and provisioning physical circuits that can take weeks and incur additional cost. Classic VPN is quick to deploy but its 99.9 % SLA does not meet the stated availability requirement. Therefore, HA Cloud VPN is the best fit for a fast, temporary, highly available connection that relies on existing internet bandwidth.
Your company runs a fleet of Compute Engine VMs and several Cloud Run microservices behind an external HTTP(S) load balancer. Site-reliability engineers need a single dashboard that shows CPU and memory for every workload, an alert when log entries containing the text "payment_failed" exceed 50 per minute, and end-to-end distributed traces that reveal where most request latency is spent. Which approach best satisfies these requirements while keeping operational overhead low?
Export every log entry to BigQuery, build Data Studio dashboards for CPU and memory from scheduled queries, run periodic SQL jobs to count "payment_failed" lines, and inspect query execution plans to diagnose request latency.
Enable Cloud Operations on all resources: deploy the Ops Agent to each VM, rely on Cloud Run's built-in telemetry, create a logs-based metric filtered on "payment_failed" with an alerting policy, and enable the Cloud Trace API with language agents to emit distributed spans.
Deploy Prometheus sidecars in every service for metrics, forward logs to an Elasticsearch cluster with Fluentd, use Jaeger for tracing, and present data through a custom Grafana portal.
Install only the Cloud Monitoring metric agent on VMs, configure uptime checks for CPU and memory, set an alert on profiler-reported exceptions exceeding 50 per minute, and rely on Cloud Profiler flame graphs to locate latency sources.
Answer Description
Installing the Ops Agent on each Compute Engine VM streams host metrics and logs directly to Cloud Monitoring and Cloud Logging. Cloud Run already exports service-level metrics and container stdout/stderr to the same back-ends, so a unified dashboard can be built without additional collectors. In Cloud Logging, a logs-based metric filtered on textPayload:"payment_failed" can count matching entries; attaching an alerting policy to this metric meets the notification requirement. Finally, enabling the Cloud Trace API and adding the language-specific Trace agent or OpenTelemetry exporter to the services allows spans to be sent to Cloud Trace, producing end-to-end request traces that highlight the slowest downstream call. The other options either rely on self-managed stacks, batch processing, or tools (uptime checks, Cloud Profiler) that do not meet all three observability goals.
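For the log-based part of the requirement, a sketch with the Cloud Logging client library might create the counter metric as follows; the metric name and exact filter are assumptions. The 50-per-minute alerting policy is then attached to this metric in Cloud Monitoring.

```python
from google.cloud import logging

client = logging.Client()

# Counter metric for log entries that contain the failure marker.
metric = client.metric(
    "payment_failed_count",
    filter_='textPayload:"payment_failed"',
    description="Count of log entries reporting a failed payment",
)

if not metric.exists():
    metric.create()
```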
Your team is preparing a detailed runbook for migrating a mission-critical 10 TB on-premises PostgreSQL database to Cloud SQL for PostgreSQL using Database Migration Service (DMS). The runbook is divided into discovery/assessment, planning, and execution sections. Which of the following tasks belongs in the planning section rather than in discovery/assessment or execution?
Execute user-acceptance and performance tests against the Cloud SQL instance to confirm it meets the agreed service level objective.
Inventory all database schemas and collect baseline CPU, I/O, and query-latency statistics from the source instance.
Start a DMS continuous-data-replication job and monitor lag until scheduled cutover time.
Document a rollback plan and the precise order in which client applications will be cut over to Cloud SQL.
Answer Description
During planning you translate the findings from discovery into a concrete, risk-mitigated migration design. Defining an explicit rollback strategy and cutover sequencing is part of that design work: it uses the inventory and performance data gathered in discovery and is required before you launch DMS jobs or validation tests in execution. Cataloging schemas and measuring utilization are discovery tasks, while starting DMS continuous replication and running post-migration performance tests are execution-phase activities.
Your VPC hosts several Compute Engine instances; the application servers are tagged "app-tier." Compliance requires:
- Only the on-prem bastion subnet 192.168.10.0/24 (via Cloud VPN) may SSH to app-tier VMs.
- App-tier VMs may send traffic only to 10.16.0.0/16; every other egress destination must be blocked.
- Connectivity for all other VMs must remain unchanged.
With the fewest additional VPC firewall rules, which configuration meets these requirements?
For target tag app-tier add exactly four rules:
- Egress allow (all protocols) to 10.16.0.0/16, priority 100
- Egress deny (all protocols) to 0.0.0.0/0, priority 200
- Ingress allow tcp:22 from 192.168.10.0/24, priority 1000
- Ingress deny tcp:22 from 0.0.0.0/0, priority 1100
Keep all default VPC rules.
Delete the default "allow egress 0.0.0.0/0" rule for the VPC, then create an egress allow 10.16.0.0/16 rule and an ingress allow tcp:22 from 192.168.10.0/24 targeted at app-tier instances.
Create an organization-level egress deny 0.0.0.0/0 rule (priority 1000) and a project-level egress allow 10.16.0.0/16 rule; add a single ingress allow tcp:22 from 192.168.10.0/24 for tag app-tier.
Add three rules for tag app-tier: egress allow 10.16.0.0/16 (priority 100), egress deny 0.0.0.0/0 (priority 200), and ingress allow tcp:22 from 192.168.10.0/24 (priority 1000); rely on default rules for other traffic.
Answer Description
Google Cloud evaluates firewall rules from the lowest numeric priority to the highest and stops at the first match. Because the default ingress rule "default-allow-ssh" (priority 65534) would still permit SSH from anywhere, a higher-priority deny is needed after a specific allow from the bastion subnet. Likewise, two egress rules are required: an early allow to 10.16.0.0/16 followed by a broader deny to 0.0.0.0/0. Targeting the rules at the "app-tier" tag prevents any effect on other VMs, so four tag-scoped rules (allow + deny for both directions) are the minimal compliant set; deleting or changing default rules would impact other workloads, and omitting the deny-SSH rule would leave the servers exposed.
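A sketch of creating the four tag-scoped rules through the Compute Engine API is shown below; the project and network names are placeholders and the rule names are illustrative.

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1", cache_discovery=False)

# Hypothetical project and network; the four tag-scoped rules from the answer.
PROJECT, NETWORK = "acme-prod", "global/networks/prod-vpc"

rules = [
    {"name": "app-tier-allow-egress-internal", "direction": "EGRESS", "priority": 100,
     "allowed": [{"IPProtocol": "all"}], "destinationRanges": ["10.16.0.0/16"],
     "targetTags": ["app-tier"], "network": NETWORK},
    {"name": "app-tier-deny-egress-all", "direction": "EGRESS", "priority": 200,
     "denied": [{"IPProtocol": "all"}], "destinationRanges": ["0.0.0.0/0"],
     "targetTags": ["app-tier"], "network": NETWORK},
    {"name": "app-tier-allow-ssh-bastion", "direction": "INGRESS", "priority": 1000,
     "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}],
     "sourceRanges": ["192.168.10.0/24"], "targetTags": ["app-tier"], "network": NETWORK},
    {"name": "app-tier-deny-ssh-other", "direction": "INGRESS", "priority": 1100,
     "denied": [{"IPProtocol": "tcp", "ports": ["22"]}],
     "sourceRanges": ["0.0.0.0/0"], "targetTags": ["app-tier"], "network": NETWORK},
]

for rule in rules:
    compute.firewalls().insert(project=PROJECT, body=rule).execute()
```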
An online retailer runs its primary PostgreSQL workload on a single-region Cloud SQL instance. The compliance team requires a disaster-recovery posture that guarantees a recovery point objective (RPO) of at most 5 minutes and a recovery time objective (RTO) under 1 hour if the region hosting the primary instance becomes unavailable. Operations wants the simplest managed approach that keeps additional cost low. Which solution best meets the business continuity requirements?
Schedule automated Cloud SQL exports every 5 minutes to Cloud Storage and restore the latest export to a new instance in another region when needed.
Migrate the database to a multi-region configuration of Cloud Spanner to obtain automatic synchronous replication across regions.
Move PostgreSQL to self-managed Compute Engine VMs with regional persistent disks and replicate nightly snapshots to a different region using Storage Transfer Service.
Create a cross-region Cloud SQL read replica in a second region and document a failover runbook that promotes it to primary during a regional outage.
Answer Description
Creating a cross-region read replica for the existing Cloud SQL instance keeps the database fully managed while streaming changes asynchronously to another region with typical replication lag in seconds, well within the 5-minute RPO. In a regional outage, administrators can promote the replica to primary and redirect application traffic, a process that is normally completed in minutes, satisfying the 1-hour RTO. The alternative options either rely on time-consuming backup restores, introduce significantly higher operational complexity, or require migrating to a more expensive service such as Cloud Spanner, so they do not fit the stated cost and simplicity constraints.