
GCP Professional Cloud Architect Practice Test

Use the form below to configure your GCP Professional Cloud Architect Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

  • Questions: the number of questions in the practice test (free users are limited to 20 questions; upgrade for unlimited).
  • Seconds Per Question: determines how long you have to finish the practice test.
  • Exam Objectives: which exam objectives should be included in the practice test.

GCP Professional Cloud Architect Information

What the GCP Professional Cloud Architect Exam Measures

Google’s Professional Cloud Architect certification is designed to validate that an individual can design, develop, and manage robust, secure, scalable, highly available, and dynamic solutions on Google Cloud Platform (GCP). Candidates are expected to understand cloud architecture best practices, the GCP product portfolio, and how to translate business requirements into technical designs. In addition to broad conceptual knowledge—network design, security, compliance, cost optimization—the exam emphasizes real-world decision-making: choosing the right storage option for a given workload, planning a secure multi-tier network, or architecting a resilient data-processing pipeline.

Format, Difficulty, and Prerequisites

The test lasts two hours, is proctored (either onsite or online), and consists of 50–60 multiple-choice and multiple-select questions. Questions are scenario-based; many describe a fictitious company’s requirements and ask which architecture or operational change best meets cost, performance, or security needs. Although there are no formal prerequisites, Google recommends at least three years of industry experience—including a year of hands-on work with GCP. Because the exam’s scope is wide, candidates often discover that depth in one or two products (e.g., BigQuery or Cloud Spanner) is insufficient; a successful architect must be equally comfortable with compute, networking, IAM, data analytics, and DevOps considerations.

GCP Professional Cloud Architect Practice Exams for Readiness

Taking full-length practice exams is one of the most effective ways to gauge exam readiness. Timed mock tests recreate the stress of the real assessment, forcing you to manage the clock and make decisions under pressure. Detailed answer explanations expose gaps in knowledge—particularly around edge-case IAM policies, VPC peering limits, or cost-optimization trade-offs—that casual study can miss. Many candidates report that after scoring consistently above 80 % on high-quality practice tests, their real-exam performance feels familiar rather than daunting. Equally important, reviewing why a distractor option is wrong teaches nuanced differences between seemingly similar GCP services (for example, Cloud Load Balancing tiers or Pub/Sub vs. Cloud Tasks), sharpening the judgment skills the exam prizes.

Building a Personal Study Plan

Begin with Google’s official exam guide and skill-marker documents, mapping each bullet to hands-on demos in Cloud Shell or a free-tier project. Allocate weekly blocks: architecture design sessions, product-specific deep dives, and at least three full practice exams spaced over several weeks. Complement those with whitepapers (e.g., the Site Reliability Workbook), case studies, and the latest architecture frameworks. Finally, revisit weak domains using Qwiklabs or self-built mini-projects—such as deploying a canary release pipeline or designing a multi-region Cloud Spanner instance—to transform theoretical understanding into muscle memory. By combining structured study, real-world experimentation, and targeted practice exams, candidates enter test day with both confidence and the architect’s holistic mindset Google is looking for.

  • Free GCP Professional Cloud Architect Practice Test

  • 20 Questions
  • Unlimited time
  • Designing and planning a cloud solution architecture
  • Managing and provisioning a solution infrastructure
  • Designing for security and compliance
  • Analyzing and optimizing technical and business processes
  • Managing implementation
  • Ensuring solution and operations excellence

Free Preview

This test is a free preview; no account required.

Question 1 of 20

Your organization hosts dev, test, and production workloads in distinct Google Cloud projects under one Cloud Billing account. Finance requires that when aggregated monthly spend hits 90 % of a US$50 000 budget, the FinOps mailing list is notified. At 100 % spend, every Compute Engine VM in the dev and test projects must shut down automatically while production stays running. You want a low-overhead solution using only managed Google Cloud services. What should you do?

  • Configure a Cloud Monitoring alert on the billing/gcp_cost metric at 90 % and 100 %; have the alert trigger a Cloud Run job that shuts down all non-production VMs.

  • Set organization-level Compute Engine quotas for dev and test projects to zero once overall spend reaches 100 % using an Organization Policy constraint.

  • Create a single Cloud Billing Budget with 90 % email and 100 % Pub/Sub thresholds; trigger a Cloud Function that calls the Cloud Billing and Compute Engine APIs to disable billing and stop VMs in dev and test projects.

  • Purchase a US$50 000 monthly Committed Use Discount for dev and test workloads and export billing data to BigQuery for manual review of overruns.
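
For orientation, the budget-notification pattern referenced in this scenario can be sketched as a small Pub/Sub-triggered Cloud Function. The snippet below is a minimal illustration rather than a production implementation: the project IDs are placeholders, it assumes the google-cloud-compute client library, and it uses the 1st-gen background-function signature.

```python
# Sketch: stop non-production VMs when a Cloud Billing budget notification
# shows spend has reached the budget amount. Project IDs are placeholders.
import base64
import json

from google.cloud import compute_v1

NON_PROD_PROJECTS = ["dev-project-id", "test-project-id"]  # hypothetical IDs


def stop_non_prod_vms(event, context):
    """Triggered by the budget's Pub/Sub topic (1st-gen background function)."""
    notification = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    if notification["costAmount"] < notification["budgetAmount"]:
        return  # below 100% of budget; the 90% email threshold needs no action here

    instances_client = compute_v1.InstancesClient()
    for project in NON_PROD_PROJECTS:
        # aggregated_list yields (zone, InstancesScopedList) pairs across all zones.
        for zone, scoped in instances_client.aggregated_list(project=project):
            for instance in scoped.instances or []:
                if instance.status == "RUNNING":
                    instances_client.stop(
                        project=project,
                        zone=zone.rsplit("/", 1)[-1],
                        instance=instance.name,
                    )
```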

Question 2 of 20

Your organization's finance department wants a weekly dashboard that compares actual Google Cloud spend against predefined limits for each cost center. They also want an automated notification sent to a Pub/Sub topic when 80 % of any limit is reached. As the cloud architect, which approach should you recommend?

  • Create separate Cloud Billing budgets for each cost center with an 80 % threshold that publishes to Pub/Sub, export billing data to BigQuery, and build a Looker Studio dashboard on that dataset.

  • Purchase committed use discounts per cost center, schedule weekly PDF Billing Reports emails, and configure Eventarc triggers to relay any budgetExceeded events to Pub/Sub.

  • Enable Cloud Monitoring billing metrics, set an alerting policy at 80 % of monthly cost, and visualize spending with Metrics Explorer charts.

  • Turn on the Recommender API for cost insights, forward recommendation notifications to Pub/Sub, and rely on the console's Cost Table report for weekly reviews.
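
As a rough illustration of per-cost-center budgets with an 80 % Pub/Sub threshold, the sketch below uses the google-cloud-billing-budgets client library. The billing account ID, topic, and the mapping of cost centers to projects are placeholders; check exact field names against the library version you have installed.

```python
# Sketch: one Cloud Billing budget per cost center, publishing to Pub/Sub at 80%.
from google.cloud.billing import budgets_v1
from google.type import money_pb2

BILLING_ACCOUNT = "billingAccounts/000000-AAAAAA-BBBBBB"          # placeholder
COST_CENTERS = {                                                   # placeholder mapping
    "marketing": (["projects/marketing-prod"], 20000),
    "engineering": (["projects/eng-prod", "projects/eng-dev"], 50000),
}

client = budgets_v1.BudgetServiceClient()

for name, (projects, limit_usd) in COST_CENTERS.items():
    budget = budgets_v1.Budget(
        display_name=f"{name}-monthly-budget",
        budget_filter=budgets_v1.Filter(projects=projects),
        amount=budgets_v1.BudgetAmount(
            specified_amount=money_pb2.Money(currency_code="USD", units=limit_usd)
        ),
        threshold_rules=[budgets_v1.ThresholdRule(threshold_percent=0.8)],
        notifications_rule=budgets_v1.NotificationsRule(
            pubsub_topic="projects/finops-project/topics/budget-alerts",  # placeholder
            schema_version="1.0",
        ),
    )
    client.create_budget(parent=BILLING_ACCOUNT, budget=budget)
```

The weekly dashboard itself would come from the standard billing export to BigQuery, visualized in Looker Studio.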

Question 3 of 20

MedDeviceCo streams 5 TB of patient telemetry data daily from thousands of devices. The solution must (a) ingest the high-throughput stream reliably, (b) surface aggregated metrics to clinicians within five seconds, (c) archive all raw events unchanged for seven years at the lowest feasible cost while keeping data encrypted in transit and at rest, and (d) support a weekly machine-learning training job on the full historical data set with minimal operational effort. Which Google Cloud design best satisfies these functional and non-functional requirements?

  • Publish messages to Cloud Pub/Sub, use Dataflow streaming to write aggregates to BigQuery while archiving raw data to a Cloud Storage Archive bucket, and run weekly BigQuery ML training jobs on the stored data.

  • Invoke Cloud Functions from devices to insert records into Cloud SQL; export daily SQL dumps to Filestore for seven-year retention and train models in Vertex AI using imported CSV files.

  • Ingest data through Cloud IoT Core directly into Cloud Bigtable replicated across regions; query it from BigQuery using federated connectors and train models in AI Platform on exported snapshots.

  • Send device data through an external HTTP(S) Load Balancer to GKE pods that persist events in Firestore; export collections nightly to Cloud Storage Standard and run weekly Dataproc Spark jobs for model training.
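
To make the streaming topology concrete, here is a minimal Apache Beam (Python) sketch of a Pub/Sub-to-BigQuery aggregation pipeline. Topic, table, schema, and field names are hypothetical, and the Cloud Storage archival branch and weekly BigQuery ML job are only noted in comments.

```python
# Sketch: Dataflow streaming pipeline that aggregates telemetry into BigQuery.
# Topic, table, and field names are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import combiners, window

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadTelemetry" >> beam.io.ReadFromPubSub(
            topic="projects/meddevice/topics/telemetry")
        | "Decode" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
        | "Window5s" >> beam.WindowInto(window.FixedWindows(5))
        | "KeyByDevice" >> beam.Map(lambda e: (e["device_id"], e["heart_rate"]))
        | "MeanPerDevice" >> combiners.Mean.PerKey()
        | "ToRow" >> beam.Map(lambda kv: {"device_id": kv[0], "avg_heart_rate": kv[1]})
        | "WriteAggregates" >> beam.io.WriteToBigQuery(
            "meddevice:telemetry.aggregates",
            schema="device_id:STRING,avg_heart_rate:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
    # A second branch (omitted here) would write the raw messages unchanged to a
    # Cloud Storage bucket with the Archive storage class; the weekly BigQuery ML
    # training job then runs directly over the stored data.
```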

Question 4 of 20

You are modernizing a food delivery platform currently running as a single virtual machine on Compute Engine. The business wants to split it into three loosely coupled services (Ordering, Rider Allocation, Notification). Requirements: each service must scale independently, communicate asynchronously to absorb surges, continue operating during downstream outages, and minimize infrastructure management with an event-driven model. Which design best meets these needs?

  • Deploy each service as Cloud Functions subscribed to its own Cloud Pub/Sub topic for inter-service messaging.

  • Package the services into Cloud Run and use Cloud Tasks queues for communication between them.

  • Migrate the services to separate App Engine flexible applications and rely on Cloud SQL tables for coordination.

  • Run the three services in a GKE cluster using StatefulSets and RabbitMQ for internal queues you operate.
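
The asynchronous, event-driven coupling described here boils down to publishing and consuming Pub/Sub messages. Below is a minimal sketch, assuming the google-cloud-pubsub library and placeholder project, topic, and payload names.

```python
# Sketch: the Ordering service publishes an event; Rider Allocation consumes it
# as a Pub/Sub-triggered Cloud Function. Names are placeholders.
import base64
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("delivery-platform", "order-created")


def publish_order_created(order_id: str, restaurant_id: str) -> None:
    """Called by the Ordering service once an order is accepted."""
    payload = json.dumps({"order_id": order_id, "restaurant_id": restaurant_id})
    publisher.publish(topic_path, payload.encode("utf-8")).result()


def allocate_rider(event, context):
    """Cloud Function (1st gen) subscribed to the order-created topic."""
    order = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    # Allocation logic goes here; undelivered messages are retried by Pub/Sub,
    # so the Ordering service keeps accepting orders during downstream outages.
    print(f"Allocating rider for order {order['order_id']}")
```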

Question 5 of 20

Your team is migrating 50 TB of transactional data from an on-premises MySQL database to Cloud Spanner by using a Dataflow pipeline. Before cut-over, auditors require evidence that every source table's row count and column-level checksums are identical in the target system. You need an automated, scalable solution that can read from both databases and output a summary report highlighting any mismatches. Which Google-supported approach should you recommend?

  • Create Cloud Monitoring uptime checks with a custom metric that queries each table's row count and triggers an alert on mismatch.

  • Enable Datastream schema drift detection and export the alert logs to Cloud Logging for post-migration analysis.

  • Use Cloud SQL Insights to collect query statistics from both environments and manually compare the results.

  • Run the open-source Data Validation Tool as a Dataflow job to compare row counts and column aggregates between MySQL and Cloud Spanner.

Question 6 of 20

Your company uses a customer-managed symmetric encryption key stored in Cloud KMS to protect objects in a production Cloud Storage bucket. A Compute Engine service account must upload and download objects that the bucket automatically encrypts with this key. Compliance mandates that only the central Security team can rotate, disable, or otherwise administer the key. Which single IAM role should you grant to the service account on the specific CryptoKey to satisfy these requirements?

  • Grant roles/cloudkms.admin on the CryptoKey.

  • Grant roles/storage.objectAdmin on the Cloud Storage bucket.

  • Grant roles/owner on the project that contains the key ring.

  • Grant roles/cloudkms.cryptoKeyEncrypterDecrypter on the CryptoKey.

Question 7 of 20

Your company, a global fashion retailer, is launching a new Golang-based microservice for its e-commerce storefront. Business stakeholders require worldwide user latency below 150 ms, the ability to absorb flash-sale traffic spikes 20× above the daily average, zero-downtime releases, and minimal infrastructure operations. For the next six months the service must continue to use the existing on-premises PostgreSQL database while a full migration is planned. Which Google Cloud architecture best satisfies these requirements?

  • Run the microservice in an App Engine flexible environment in one region, synchronize inventory changes to a Cloud Bigtable instance via Datastream, and rely on App Engine automatic scaling and Cloud Armor for DDoS protection.

  • Package the service into a container and deploy it to Cloud Run in multiple regions behind a global external HTTPS load balancer with Cloud CDN; connect to the on-premises PostgreSQL database over a Dedicated Interconnect protected by HA VPN; manage zero-downtime releases with Cloud Deploy canary rollouts.

  • Use a regional managed instance group of preemptible Compute Engine VMs behind a regional external HTTPS load balancer, scale on CPU, connect to the on-premises PostgreSQL database over Cloud VPN, and perform blue/green deployments by swapping instance templates.

  • Create a single-region GKE Standard cluster with node auto-provisioning, expose the service through an internal load balancer and Cloud NAT, and replicate the on-premises PostgreSQL database to Cloud SQL using Database Migration Service before cut-over.

Question 8 of 20

An online travel platform runs about 50 microservices on GKE Autopilot in three regions. Architects must:

  1. page operators when the 95th-percentile latency from the external HTTP(S) load balancer to the booking API exceeds 300 ms for 5 minutes (burn-rate alert on an SLO).
  2. let engineers view end-to-end request traces, including backend database calls, without modifying application code.

Which approach meets both goals with the least operational effort?

  • Configure a Cloud Monitoring uptime check against the booking endpoint with an alerting policy on availability, and deploy the Cloud Trace agent as a DaemonSet in each cluster to capture traces.

  • Enable Anthos Service Mesh so Envoy sidecars automatically export request-latency distribution metrics to Cloud Monitoring; define an SLO with burn-rate alerting on the 95th-percentile latency metric; rely on ASM's built-in Cloud Trace integration for distributed tracing.

  • Install Prometheus and Jaeger in each cluster to scrape service metrics and collect traces, then create Prometheus-based alert rules for high latency.

  • Create a logs-based metric for request latency, attach a burn-rate alert to it, and instrument all services with OpenTelemetry libraries to emit Cloud Trace spans.

Question 9 of 20

Your company's customer-facing web app runs in a regional managed instance group (MIG) in us-central1 behind a global external HTTP(S) load balancer. It stores transactions in a regional Cloud SQL for MySQL instance with HA in the same region. The BCP now demands recovery within 5 minutes and ≤15 minutes data loss if the entire us-central1 region goes down. Budgets forbid major re-platforming. Which architecture most cost-effectively meets these RTO/RPO targets?

  • Store transaction data in a multi-region Cloud Storage bucket served through Cloud CDN, keep the Compute Engine MIG in us-central1, and use Cloud Functions to recreate instances if the region fails.

  • Replace Cloud SQL with a multi-region Cloud Spanner instance, move the application to Cloud Run with multi-region deployment, and configure Serverless NEGs behind the existing load balancer.

  • Provision an additional multi-zonal MIG in us-east1, add it to the existing global HTTP(S) load balancer, create a cross-region Cloud SQL read replica in us-east1, and automate promotion of the replica and database connection failover when us-central1 becomes unreachable.

  • Convert the current MIG to a regional MIG, enable Cloud SQL automatic failover zones, and rely on the global load balancer's health checks to shift traffic among zones within us-central1.

Question 10 of 20

An e-commerce company has containerized an image-processing service that uses ImageMagick to create product thumbnails whenever a new image file is written to a Cloud Storage bucket. Upload traffic is unpredictable: some days there are no images, while major sales events trigger thousands of uploads per minute. The team wants to trigger the processing automatically from Cloud Storage events, reuse the existing container without code changes, pay only while requests are handled, and have the service scale from zero to meet sudden spikes. Which Google Cloud managed compute option best meets these requirements while minimizing operational overhead?

  • Host the container on a GKE Autopilot cluster and configure a Horizontal Pod Autoscaler driven by Pub/Sub messages.

  • Create a regional managed instance group of preemptible Compute Engine VMs that poll the bucket for new objects.

  • Refactor the code into Cloud Functions (1st generation) with a Cloud Storage trigger.

  • Deploy the container on Cloud Run and use Eventarc to forward Cloud Storage object-create events to the service.
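
For reference, a Cloud Run service that receives Cloud Storage events through Eventarc is just an HTTP endpoint. The sketch below is illustrative only; it assumes Eventarc posts the Storage object payload as JSON, so the bucket and object names arrive in the request body.

```python
# Sketch: Cloud Run handler for Eventarc-delivered Cloud Storage "object finalized"
# events. The thumbnail work itself is represented by a placeholder print.
from flask import Flask, request

app = Flask(__name__)


@app.route("/", methods=["POST"])
def handle_object_finalized():
    storage_object = request.get_json()
    bucket = storage_object["bucket"]
    name = storage_object["name"]

    # The existing ImageMagick container logic would run here: download
    # gs://{bucket}/{name}, render the thumbnail, upload the result.
    print(f"Generating thumbnail for gs://{bucket}/{name}")
    return ("", 204)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```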

Question 11 of 20

An EU-based ticketing company will migrate its containerized web application from on-premises to Google Cloud. The service must: handle 10× traffic spikes without operator action, survive loss of an entire region, keep customer data in the EU, achieve 15-minute RPO and 1-hour RTO, and cost less than adopting Cloud Spanner. Which high-level architecture best meets these business requirements?

  • Migrate to App Engine standard environment in the EU multiregion and store all transactional data in a multi-region Cloud Spanner instance with automatic leader rebalancing.

  • Deploy the workload to a regional GKE Autopilot cluster with nodes in multiple zones of europe-west1; use a single-region Cloud SQL for MySQL HA instance in the same region; front-end with Cloud CDN and a global external HTTP(S) Load Balancer.

  • Use managed instance groups in two zones of europe-west1 and host MySQL on Compute Engine VMs; copy nightly persistent-disk snapshots to a dual-region Cloud Storage bucket; serve traffic through a regional external HTTP(S) Load Balancer.

  • Run the containers on Cloud Run in europe-west1 and europe-west4 behind a global external HTTP(S) Load Balancer; use a Cloud SQL for MySQL HA primary in europe-west1 with a cross-region read replica in europe-west4 that is promoted if the primary region fails; schedule 5-minute automated backups.

Question 12 of 20

An online ticketing company runs its payment API on Compute Engine managed instance groups (MIGs) in the us-central1 region behind an external HTTP(S) load balancer. Orders are stored in a regional Cloud SQL for MySQL instance that uses standard (single-primary) configuration. Management mandates that if the entire us-central1 region becomes unavailable, payment processing must resume within 30 minutes and lose at most 5 minutes of data, while incurring the lowest additional cost that still meets these objectives. Which disaster-recovery design should you recommend?

  • Create a cross-region read replica of Cloud SQL in us-east1, automate replica promotion on failure, and deploy an equivalent MIG in us-east1 as a failover backend of the same global HTTP(S) load balancer.

  • Enable Cloud SQL high-availability (regional) configuration and add a second zonal MIG in another zone of us-central1 behind an internal TCP/UDP load balancer.

  • Migrate the database to a multi-region Cloud Spanner instance for zero RPO and keep the application deployed only in us-central1 to minimize compute costs.

  • Schedule nightly Cloud SQL exports to Cloud Storage, replicate the export bucket to us-east1, and prepare Deployment Manager templates to recreate the database and VMs on demand after an outage.

Question 13 of 20

Your organization must perform a quarterly penetration test against its production workloads hosted in three Google Cloud projects. The security team plans to run credential-brute-forcing, SQL-injection, and low-volume denial-of-service (DoS) checks from an external test network. They want to know what permissions or notifications are required before they start. What should you tell them?

  • Proceed without notifying Google, provided the tests are limited to your own projects and exclude any traffic-flooding or resource-exhaustion scenarios that would violate the Acceptable Use Policy.

  • Request an authorization token through Security Command Center and include the DoS checks because Google permits low-volume DoS tests once the token is issued.

  • File a notification only if the test targets IAM policies; other attack vectors such as SQL injection or DoS do not require any communication with Google.

  • Open a support case at least two weeks in advance and wait for written approval from Google before performing any penetration activity.

Question 14 of 20

Your team operates a regional GKE cluster that serves a real-time bidding API. Each request causes short CPU spikes, and traffic varies widely: during peaks pods hit 90 % CPU and new replicas stay Pending because the nodes are full; at night overall load falls to 5 %. You must absorb traffic bursts automatically yet keep VM spending low during quiet periods with no manual actions. Which configuration best meets these goals?

  • Use a Horizontal Pod Autoscaler targeting 70 % CPU and also enable Cluster Autoscaler on the node pool with a low minimum and a high maximum node count.

  • Migrate the Deployment to Cloud Run and set concurrency to allow scaling from zero to many instances.

  • Create a Horizontal Pod Autoscaler targeting 70 % CPU but keep the node pool at a fixed size.

  • Enable Cluster Autoscaler on the node pool with min 0 and max 20 nodes but remove any Horizontal Pod Autoscaler.

Question 15 of 20

Your e-commerce team runs a stateless checkout service on Cloud Run deployed to us-central1 and europe-west1. You must release version 2 without customer downtime, observe error rates for a sample of real traffic, and instantly revert if problems appear, all while avoiding the cost of duplicating the entire stack in a separate environment. Which deployment strategy best meets these requirements?

  • Rely on Cloud Run automatic rollbacks and perform an in-place rolling update that immediately shifts 100 % of traffic to version 2.

  • Deploy version 2 as a separate Cloud Run service and switch the external HTTP(S) Load Balancer to it after final tests (blue-green).

  • Migrate the service to Google Kubernetes Engine and perform a rolling update with maxSurge 1 and maxUnavailable 0 to prevent downtime.

  • Use Cloud Run's traffic-splitting feature to implement a canary release, starting with a small percentage of traffic on the new revision and increasing it after monitoring.

Question 16 of 20

A retail company needs to extend its on-premises network to Google Cloud while a larger migration is planned. Each site already has dual 1-Gb internet circuits and can tolerate occasional extra latency but requires encrypted traffic and at least 99.99 % availability. They want a solution deployable within days and without installing new physical links. Which connectivity option best satisfies the interim requirements?

  • Create an HA Cloud VPN gateway and establish two redundant BGP tunnels to Cloud Router for dynamic routing.

  • Provision a 10-Gbps Dedicated Interconnect at a nearby colocation facility and attach it to a Cloud Router.

  • Order 200-Mbps Partner Interconnect VLAN attachments through a service provider and enable BGP routing.

  • Configure Classic Cloud VPN tunnels with static routes on a single VPN gateway.

Question 17 of 20

Your company runs a fleet of Compute Engine VMs and several Cloud Run microservices behind an external HTTP(S) load balancer. Site-reliability engineers need a single dashboard that shows CPU and memory for every workload, an alert when log entries containing the text "payment_failed" exceed 50 per minute, and end-to-end distributed traces that reveal where most request latency is spent. Which approach best satisfies these requirements while keeping operational overhead low?

  • Export every log entry to BigQuery, build Data Studio dashboards for CPU and memory from scheduled queries, run periodic SQL jobs to count "payment_failed" lines, and inspect query execution plans to diagnose request latency.

  • Enable Cloud Operations on all resources: deploy the Ops Agent to each VM, rely on Cloud Run's built-in telemetry, create a logs-based metric filtered on "payment_failed" with an alerting policy, and enable the Cloud Trace API with language agents to emit distributed spans.

  • Deploy Prometheus sidecars in every service for metrics, forward logs to an Elasticsearch cluster with Fluentd, use Jaeger for tracing, and present data through a custom Grafana portal.

  • Install only the Cloud Monitoring metric agent on VMs, configure uptime checks for CPU and memory, set an alert on profiler-reported exceptions exceeding 50 per minute, and rely on Cloud Profiler flame graphs to locate latency sources.
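
As one small, concrete piece of the Cloud Operations approach, a logs-based metric for the "payment_failed" pattern can be created programmatically. The sketch below assumes the google-cloud-logging client library; the 50-per-minute alerting policy would then be attached to the resulting metric in Cloud Monitoring.

```python
# Sketch: create a logs-based counter metric for "payment_failed" entries.
from google.cloud import logging

client = logging.Client()

metric = client.metric(
    "payment_failed_count",
    filter_='textPayload:"payment_failed"',
    description="Counts log entries that contain the string payment_failed",
)
metric.create()
# The metric is then available to alerting policies as
# logging.googleapis.com/user/payment_failed_count.
```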

Question 18 of 20

Your team is preparing a detailed runbook for migrating a mission-critical 10 TB on-premises PostgreSQL database to Cloud SQL for PostgreSQL using Database Migration Service (DMS). The runbook is divided into discovery/assessment, planning, and execution sections. Which of the following tasks belongs in the planning section rather than in discovery/assessment or execution?

  • Execute user-acceptance and performance tests against the Cloud SQL instance to confirm it meets the agreed service level objective.

  • Inventory all database schemas and collect baseline CPU, I/O, and query-latency statistics from the source instance.

  • Start a DMS continuous-data-replication job and monitor lag until scheduled cutover time.

  • Document a rollback plan and the precise order in which client applications will be cut over to Cloud SQL.

Question 19 of 20

Your VPC hosts several Compute Engine instances; the application servers are tagged "app-tier." Compliance requires:

  • Only the on-prem bastion subnet 192.168.10.0/24 (via Cloud VPN) may SSH to app-tier VMs.
  • App-tier VMs may send traffic only to 10.16.0.0/16; every other egress destination must be blocked.
  • Connectivity for all other VMs must remain unchanged.

With the fewest additional VPC firewall rules, which configuration meets these requirements?

  • For target tag app-tier, add exactly four rules and keep all default VPC rules:

    • Egress allow (all protocols) to 10.16.0.0/16, priority 100
    • Egress deny (all protocols) to 0.0.0.0/0, priority 200
    • Ingress allow tcp:22 from 192.168.10.0/24, priority 1000
    • Ingress deny tcp:22 from 0.0.0.0/0, priority 1100

  • Delete the default "allow egress 0.0.0.0/0" rule for the VPC, then create an egress allow 10.16.0.0/16 rule and an ingress allow tcp:22 from 192.168.10.0/24 targeted at app-tier instances.

  • Create an organization-level egress deny 0.0.0.0/0 rule (priority 1000) and a project-level egress allow 10.16.0.0/16 rule; add a single ingress allow tcp:22 from 192.168.10.0/24 for tag app-tier.

  • Add three rules for tag app-tier: egress allow 10.16.0.0/16 (priority 100), egress deny 0.0.0.0/0 (priority 200), and ingress allow tcp:22 from 192.168.10.0/24 (priority 1000); rely on default rules for other traffic.
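
To show how tag-scoped rules like those in the four-rule option are expressed in code, here is a sketch using the google-cloud-compute client; the project and network values are placeholders.

```python
# Sketch: four firewall rules targeting the app-tier network tag.
from google.cloud import compute_v1

PROJECT = "my-project"                # placeholder
NETWORK = "global/networks/default"   # placeholder

rules = [
    compute_v1.Firewall(
        name="app-tier-allow-egress-internal", network=NETWORK,
        direction="EGRESS", priority=100, target_tags=["app-tier"],
        destination_ranges=["10.16.0.0/16"],
        allowed=[compute_v1.Allowed(I_p_protocol="all")],
    ),
    compute_v1.Firewall(
        name="app-tier-deny-egress-all", network=NETWORK,
        direction="EGRESS", priority=200, target_tags=["app-tier"],
        destination_ranges=["0.0.0.0/0"],
        denied=[compute_v1.Denied(I_p_protocol="all")],
    ),
    compute_v1.Firewall(
        name="app-tier-allow-ssh-bastion", network=NETWORK,
        direction="INGRESS", priority=1000, target_tags=["app-tier"],
        source_ranges=["192.168.10.0/24"],
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["22"])],
    ),
    compute_v1.Firewall(
        name="app-tier-deny-ssh-other", network=NETWORK,
        direction="INGRESS", priority=1100, target_tags=["app-tier"],
        source_ranges=["0.0.0.0/0"],
        denied=[compute_v1.Denied(I_p_protocol="tcp", ports=["22"])],
    ),
]

firewalls = compute_v1.FirewallsClient()
for rule in rules:
    firewalls.insert(project=PROJECT, firewall_resource=rule).result()
```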

Question 20 of 20

An online retailer runs its primary PostgreSQL workload on a single-region Cloud SQL instance. The compliance team requires a disaster-recovery posture that guarantees a recovery point objective (RPO) of at most 5 minutes and a recovery time objective (RTO) under 1 hour if the region hosting the primary instance becomes unavailable. Operations wants the simplest managed approach that keeps additional cost low. Which solution best meets the business continuity requirements?

  • Schedule automated Cloud SQL exports every 5 minutes to Cloud Storage and restore the latest export to a new instance in another region when needed.

  • Migrate the database to a multi-region configuration of Cloud Spanner to obtain automatic synchronous replication across regions.

  • Move PostgreSQL to self-managed Compute Engine VMs with regional persistent disks and replicate nightly snapshots to a different region using Storage Transfer Service.

  • Create a cross-region Cloud SQL read replica in a second region and document a failover runbook that promotes it to primary during a regional outage.
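
Where a failover runbook calls for promoting a cross-region read replica, that step can be automated against the Cloud SQL Admin API. The sketch below assumes the google-api-python-client library, Application Default Credentials, and placeholder project and instance names.

```python
# Sketch: promote a cross-region Cloud SQL read replica to a standalone primary.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")

operation = (
    sqladmin.instances()
    .promoteReplica(project="retail-prod", instance="orders-replica-useast1")
    .execute()
)
print(f"Promotion operation started: {operation['name']}")
# After promotion, application connection strings are switched to the new
# primary as part of the documented runbook.
```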