GCP Professional Cloud Architect Practice Test

Use the form below to configure your GCP Professional Cloud Architect Practice Test. The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

  • Questions: number of questions in the practice test (5-100; free users are limited to 20)
  • Seconds Per Question: determines how long you have to finish the practice test
  • Exam Objectives: which exam objectives should be included in the practice test

GCP Professional Cloud Architect Information

What the GCP Professional Cloud Architect Exam Measures

Google’s Professional Cloud Architect certification is designed to validate that an individual can design, develop, and manage robust, secure, scalable, highly available, and dynamic solutions on Google Cloud Platform (GCP). Candidates are expected to understand cloud architecture best practices, the GCP product portfolio, and how to translate business requirements into technical designs. In addition to broad conceptual knowledge (network design, security, compliance, cost optimization), the exam emphasizes real-world decision-making: choosing the right storage option for a given workload, planning a secure multi-tier network, or architecting a resilient data-processing pipeline.

Format, Difficulty, and Prerequisites

The test lasts two hours, is proctored (either onsite or online), and consists of 50–60 multiple-choice and multiple-select questions. Questions are scenario-based; many describe a fictitious company’s requirements and ask which architecture or operational change best meets cost, performance, or security needs. Although there are no formal prerequisites, Google recommends at least three years of industry experience, including a year of hands-on work with GCP. Because the exam’s scope is wide, candidates often discover that depth in one or two products (e.g., BigQuery or Cloud Spanner) is insufficient; a successful architect must be equally comfortable with compute, networking, IAM, data analytics, and DevOps considerations.

GCP Professional Cloud Architect Practice Exams for Readiness

Taking full-length practice exams is one of the most effective ways to gauge exam readiness. Timed mock tests recreate the stress of the real assessment, forcing you to manage the clock and make decisions under pressure. Detailed answer explanations expose gaps in knowledge—particularly around edge-case IAM policies, VPC peering limits, or cost-optimization trade-offs—that casual study can miss. Many candidates report that after scoring consistently above 80 % on high-quality practice tests, their real-exam performance feels familiar rather than daunting. Equally important, reviewing why a distractor option is wrong teaches nuanced differences between seemingly similar GCP services (for example, Cloud Load Balancing tiers or Pub/Sub vs. Cloud Tasks), sharpening the judgment skills the exam prizes.

Building a Personal Study Plan

Begin with Google’s official exam guide and skill-marker documents, mapping each bullet to hands-on demos in Cloud Shell or a free-tier project. Allocate weekly blocks: architecture design sessions, product-specific deep dives, and at least three full practice exams spaced over several weeks. Complement those with whitepapers (e.g., the Site Reliability Workbook), case studies, and the latest architecture frameworks. Finally, revisit weak domains using Qwiklabs or self-built mini-projects—such as deploying a canary release pipeline or designing a multi-region Cloud Spanner instance—to transform theoretical understanding into muscle memory. By combining structured study, real-world experimentation, and targeted practice exams, candidates enter test day with both confidence and the architect’s holistic mindset Google is looking for.

  • Free GCP Professional Cloud Architect Practice Test

  • 20 Questions
  • Unlimited time
  • Designing and planning a cloud solution architecture
  • Managing and provisioning a solution infrastructure
  • Designing for security and compliance
  • Analyzing and optimizing technical and business processes
  • Managing implementation
  • Ensuring solution and operations excellence
Question 1 of 20

Your team is containerizing a scientific simulation platform on Google Kubernetes Engine. During a run, thousands of pods concurrently read and write millions of small checkpoint files (4-16 KB each) into a shared directory. The data must be visible to all pods immediately, and the simulation controller requires standard POSIX file semantics (open, append, rename). After each run, the entire dataset is deleted. Which Google Cloud storage option best satisfies these performance and access requirements while minimizing operational overhead?

  • Cloud Storage Standard class accessed through the gcsfuse driver

  • Filestore High Scale SSD tier

  • Regional Persistent Disks on each node with an rsync sidecar for synchronization

  • A Cloud Bigtable cluster with one column family per checkpoint and HFile export after completion
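
Whichever option you lean toward here, it helps to recognize what a shared POSIX volume request looks like on GKE. The sketch below uses the Python kubernetes client to ask for a ReadWriteMany PersistentVolumeClaim through a Filestore-backed CSI storage class; the class name ("premium-rwx"), namespace, and size are assumptions for illustration, so check which storage classes your cluster actually provides.

```python
# Minimal sketch: a ReadWriteMany claim that many pods can mount simultaneously.
# Storage class, namespace, and size below are assumptions, not graded answers.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pvc = client.V1PersistentVolumeClaim(
    api_version="v1",
    kind="PersistentVolumeClaim",
    metadata=client.V1ObjectMeta(name="simulation-checkpoints"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],        # shared read/write across all pods
        storage_class_name="premium-rwx",      # assumed Filestore CSI storage class
        resources=client.V1ResourceRequirements(requests={"storage": "10Ti"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="simulation", body=pvc
)
```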

Question 2 of 20

A retailer runs a REST-based order-management application on-premises. A logistics partner must call this API from the public internet, but the security team requires that the backend remain reachable only over a private network. The company also needs per-partner request quotas, OAuth 2.0 enforcement, and detailed usage analytics, all without modifying the legacy application. You already operate workloads on Google Cloud and want to minimize ongoing operational effort. Which approach best meets these requirements?

  • Expose the on-prem API through an external TCP load balancer with Cloud NAT; enforce quotas and OAuth in application code.

  • Establish VPC Network Peering between the on-prem network and Google Cloud and share the private service address directly with the partner.

  • Re-engineer the API as Cloud Functions behind Cloud Endpoints and retire the on-prem system.

  • Deploy Apigee X in Google Cloud, connect its runtime to the on-prem API over Cloud VPN, and expose the Apigee-managed HTTPS endpoint to the partner.

Question 3 of 20

An enterprise runs nightly Spark-based extract-transform-load (ETL) jobs on a regional managed instance group (MIG) of standard Compute Engine VMs. Each run processes 7 TB of data stored in Cloud Storage and writes checkpoints back to the same bucket every 10 minutes, allowing the job to resume after a failure. Management wants to reduce compute cost while preserving the current four-hour completion window and keeping operational effort low. Additional constraints are:

  • Instances must not have public IP addresses.
  • Engineers want to keep using gcsfuse to mount the Cloud Storage bucket.
  • Any interruptions should be handled automatically so that jobs finish within the window without manual intervention.

Which deployment approach best meets all requirements?

  • Replace the MIG with a regional MIG that uses Spot VMs without external IP addresses; configure instance startup scripts to relaunch the Spark job after each preemption.

  • Deploy the ETL pipeline as a Cloud Run job that mounts the Cloud Storage bucket with gcsfuse and relies on Cloud Scheduler to trigger nightly executions.

  • Provision a GKE Standard cluster with a node pool consisting of Spot VMs that have no public IPs and are behind Cloud NAT; run the Spark workload as Kubernetes CronJobs that use gcsfuse mounts and let Kubernetes reschedule pods when nodes are preempted.

  • Create a GKE Autopilot cluster and deploy the ETL code as Kubernetes CronJobs; Autopilot will automatically place the pods on Google-managed nodes and restart them after preemption events.
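
For context on the Spot-based options, the sketch below shows a nightly Kubernetes CronJob that schedules only onto GKE Spot nodes and lets the control plane reschedule the pod after a preemption. The image, namespace, and schedule are placeholders, and the toleration only matters if the Spot pool is tainted (as on Autopilot).

```python
# Illustrative CronJob pinned to Spot capacity via the cloud.google.com/gke-spot label.
# Image, namespace, and schedule are placeholders.
from kubernetes import client, config

config.load_kube_config()

cron = client.V1CronJob(
    api_version="batch/v1",
    kind="CronJob",
    metadata=client.V1ObjectMeta(name="nightly-etl"),
    spec=client.V1CronJobSpec(
        schedule="0 1 * * *",                       # nightly run
        job_template=client.V1JobTemplateSpec(
            spec=client.V1JobSpec(
                backoff_limit=6,                    # retry after preemptions
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="OnFailure",
                        node_selector={"cloud.google.com/gke-spot": "true"},
                        tolerations=[client.V1Toleration(
                            key="cloud.google.com/gke-spot", operator="Equal",
                            value="true", effect="NoSchedule",
                        )],
                        containers=[client.V1Container(
                            name="spark-etl",
                            image="us-docker.pkg.dev/example/etl:latest",  # placeholder
                        )],
                    )
                ),
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_cron_job(namespace="etl", body=cron)
```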

Question 4 of 20

A financial-services firm is in the discovery phase of migrating 500 on-premises virtual machines to Google Cloud. Executives want a dependency map before grouping workloads into migration waves. The data center uses dynamic service routing, and application traffic patterns vary during month-end processing. Using Google Cloud Migration Center, which pre-migration action best ensures that application dependencies are captured accurately while keeping operational risk low?

  • Deploy Google Application Migration Service mobility agents only and combine agent metrics with current firewall rules to infer dependencies.

  • Install the Migration Center discovery client on all relevant VMs and enable network profiling for a full business cycle to record live traffic between services.

  • Perform a one-time SNMP and port scan from a Migration Center appliance to catalog open ports on each host.

  • Import the existing configuration-management database (CMDB) into Migration Center and generate the dependency map without collecting live network data.

Question 5 of 20

A managed instance group of web servers runs in the prod-vpc network. Every VM is tagged web-frontend and is reached through an external HTTPS load balancer. The network currently has these firewall rules:

  • default-allow-internal (priority 65534, allow all protocols from 10.128.0.0/9, 172.16.0.0/12, 192.168.0.0/16)
  • default-deny-ingress (priority 65535, deny all)
  • allow-https-web (priority 1000, allow tcp:443 from 0.0.0.0/0 to targets tagged web-frontend)

A new policy states that the web servers must:

  • accept HTTPS only from 35.191.0.0/16 and 130.211.0.0/22 (load-balancer ranges)
  • allow SSH only from the on-premises subnet 10.10.0.0/24
  • block all other sources without affecting other prod-vpc workloads

Which approach satisfies these requirements with the fewest firewall changes?

  • Modify allow-https-web to permit tcp:443 only from 35.191.0.0/16 and 130.211.0.0/22, add an ingress allow rule (priority 1000) for tcp:22 from 10.10.0.0/24 to targets tagged web-frontend, then create an ingress deny all rule (priority 2000) that targets the web-frontend tag with source 0.0.0.0/0. Leave the default rules unchanged.

  • Delete default-allow-internal and allow-https-web. Create two new ingress allow rules that target web-frontend: one for tcp:443 from 35.191.0.0/16 and 130.211.0.0/22, and one for tcp:22 from 10.10.0.0/24. Rely on default-deny-ingress to block everything else.

  • Attach a Cloud Armor security policy to the load balancer that allows requests from 35.191.0.0/16, 130.211.0.0/22, and 10.10.0.0/24 and blocks all other sources. No firewall rule changes are needed.

  • Add an ingress deny rule (priority 900) that targets web-frontend and denies tcp:443 from 0.0.0.0/0 except 35.191.0.0/16 and 130.211.0.0/22. Add no other rules.
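
To make the options concrete, this is roughly what a tag-scoped ingress allow rule looks like when created with the Python compute client. The project path, network name, and rule names are assumptions; an existing rule such as allow-https-web would be changed with update() rather than insert().

```python
# Illustrative only: tag-scoped ingress allow rules for the prod-vpc network.
# Project, network path, and rule names are assumptions for the sketch.
from google.cloud import compute_v1

PROJECT = "my-project"
NETWORK = f"projects/{PROJECT}/global/networks/prod-vpc"

def allow_rule(name, ports, source_ranges, priority=1000):
    rule = compute_v1.Firewall()
    rule.name = name
    rule.network = NETWORK
    rule.direction = "INGRESS"
    rule.priority = priority
    rule.target_tags = ["web-frontend"]        # only affects the tagged web servers
    rule.source_ranges = source_ranges
    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"
    allowed.ports = ports
    rule.allowed = [allowed]
    return rule

client = compute_v1.FirewallsClient()
# HTTPS restricted to the load-balancer ranges
client.insert(project=PROJECT, firewall_resource=allow_rule(
    "allow-https-web-lb", ["443"], ["35.191.0.0/16", "130.211.0.0/22"])).result()
# SSH only from the on-premises subnet
client.insert(project=PROJECT, firewall_resource=allow_rule(
    "allow-ssh-onprem", ["22"], ["10.10.0.0/24"])).result()
```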

Question 6 of 20

Your organization is reviewing several reliability testing proposals for its microservices platform on Google Kubernetes Engine. To qualify an activity as chaos engineering, which characteristic must the experiment explicitly include?

  • A deliberate injection of a controlled failure while observing that the system maintains its defined steady-state behavior.

  • Automated scaling tests that double resource limits to estimate future capacity requirements.

  • Deployment of a new application version to canary GKE pods prior to full rollout.

  • Continuous replay of peak production traffic to measure throughput under sustained load.
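
A minimal sketch of what "controlled failure plus steady-state verification" can look like in practice, using the Python kubernetes client against an assumed namespace and label selector.

```python
# Chaos-style probe: measure the steady state, inject one controlled failure,
# then verify the steady state is restored. Namespace and label are assumptions.
import random
import time

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
NAMESPACE, LABEL = "checkout", "app=checkout"

def ready_pods():
    pods = core.list_namespaced_pod(NAMESPACE, label_selector=LABEL).items
    return [p for p in pods
            if any(c.type == "Ready" and c.status == "True"
                   for c in (p.status.conditions or []))]

steady_state = len(ready_pods())                 # 1. define/measure the steady state
victim = random.choice(ready_pods())             # 2. inject a controlled failure
core.delete_namespaced_pod(victim.metadata.name, NAMESPACE)

time.sleep(60)                                   # give the ReplicaSet time to recover
assert len(ready_pods()) >= steady_state, "steady state not restored"   # 3. verify
```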

Question 7 of 20

Your company is expanding its Google Cloud deployment to several regions. Each product team will keep its own project, but leadership wants to enforce a single RFC 1918 address space that allows private IP communication between virtual machines in any region, with centralized control of all firewall rules and routes. You must also avoid approaching the current hard cap on the number of VPC Network Peering links per network. Which design best meets these requirements?

  • Provision one custom-mode VPC per team project and connect them to a central hub VPC through Dedicated Cloud Interconnect attachments.

  • Give every product team its own auto-mode VPC and connect the VPCs with VPC Network Peering so that all internal subnets are reachable.

  • Create a separate VPC in each region inside a single project and interconnect them with Cloud VPN tunnels configured for dynamic routing.

  • Create one custom-mode VPC in a dedicated host project, add regional subnets for every needed region, and attach each product team's project as a service project using Shared VPC.
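
For a sense of scale, a Shared VPC design starts with one custom-mode network in a host project and a subnet per region; the sketch below shows that part with the Python compute client (project ID and CIDR ranges are assumptions). Enabling Shared VPC and attaching service projects are separate administrative steps.

```python
# Sketch: one custom-mode VPC with regional subnets in an assumed host project.
from google.cloud import compute_v1

HOST_PROJECT = "network-host-prod"   # assumed Shared VPC host project

net = compute_v1.Network(name="shared-vpc", auto_create_subnetworks=False)
compute_v1.NetworksClient().insert(
    project=HOST_PROJECT, network_resource=net).result()

subnets = {"us-central1": "10.10.0.0/20", "europe-west1": "10.10.16.0/20"}
subnet_client = compute_v1.SubnetworksClient()
for region, cidr in subnets.items():
    subnet = compute_v1.Subnetwork(
        name=f"shared-{region}",
        ip_cidr_range=cidr,
        network=f"projects/{HOST_PROJECT}/global/networks/shared-vpc",
        region=region,
    )
    subnet_client.insert(project=HOST_PROJECT, region=region,
                         subnetwork_resource=subnet).result()
```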

Question 8 of 20

During a Google Cloud Well-Architected review, you discover that a business unit runs several hundred Compute Engine instances across multiple projects and routinely provisions them at peak capacity. The finance team demands a concrete plan to lower unpredictable spending while keeping current service levels intact. Which recommendation most closely aligns with the Cost Optimization pillar of the Google Cloud Well-Architected Framework for this scenario?

  • Implement VPC Service Controls for all projects and forward all network logs to Cloud Logging to reduce data-exfiltration risk and simplify security audits.

  • Migrate workloads to larger custom machine types now to accommodate expected traffic growth over the next 12 months and avoid future performance issues.

  • Export Compute Engine rightsizing and idle-VM recommendations to BigQuery, trigger a Cloud Function to automatically resize or shut down inefficient instances, and monitor savings with Cloud Billing reports.

  • Configure autoscaling policies to add additional VM instances when CPU utilization exceeds 70 %, ensuring services remain highly available during traffic spikes.
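
As background for the rightsizing option, this is roughly how recommendations are read programmatically with the Recommender API; the project and zone are placeholders, and any automated resize or shutdown would be a separate, reviewed step.

```python
# Sketch: list Compute Engine machine-type (rightsizing) recommendations.
# Project and zone are placeholders.
from google.cloud import recommender_v1

client = recommender_v1.RecommenderClient()
parent = ("projects/my-project/locations/us-central1-a/"
          "recommenders/google.compute.instance.MachineTypeRecommender")

for rec in client.list_recommendations(parent=parent):
    projected = rec.primary_impact.cost_projection.cost   # negative values mean savings
    print(rec.name, rec.description, projected)
```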

Question 9 of 20

A global ticketing startup will run flash-sale campaigns that drive up to one million HTTPS requests per second from users on every continent for a few minutes at a time. Business leadership requires that the system prevent overselling by keeping ticket inventory strongly consistent worldwide, deliver sub-100 ms response times, remain available even if an entire Google Cloud region fails, and impose minimal day-to-day operations on a five-person engineering team. Which high-level Google Cloud architecture best meets these objectives?

  • Use App Engine standard environment in one region with automatic scaling, serve static assets via Cloud CDN, and keep inventory in Firestore in Datastore mode with eventual consistency for queries.

  • Run the workload on a single-region GKE cluster with horizontal pod autoscaling behind a regional external HTTP(S) load balancer, store inventory in a Cloud SQL instance configured for high availability, and protect the service with Cloud Armor.

  • Create large pre-provisioned Compute Engine managed instance groups with preemptible VMs behind a regional TCP load balancer; cache inventory counts in Memorystore and persist them in Cloud Bigtable replicated across two zones.

  • Deploy Cloud Run services in at least two distant regions, expose them through a global external HTTP(S) load balancer with Cloud Armor and optional Cloud CDN, and store ticket inventory in a multi-region Cloud Spanner instance.
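
The "no overselling" requirement hinges on strongly consistent transactional writes. As a hedged illustration of why Cloud Spanner is attractive for that, the snippet below decrements inventory inside a read-write transaction; the instance, database, table, and column names are assumptions.

```python
# Sketch: atomic inventory decrement in a Spanner read-write transaction.
# Instance, database, table, and column names are assumptions.
from google.cloud import spanner

db = spanner.Client().instance("ticketing").database("inventory")

def reserve_ticket(transaction, event_id):
    row = transaction.execute_sql(
        "SELECT remaining FROM Inventory WHERE event_id = @id",
        params={"id": event_id},
        param_types={"id": spanner.param_types.STRING},
    ).one()
    if row[0] <= 0:
        raise ValueError("sold out")   # raising aborts the transaction; nothing commits
    transaction.execute_update(
        "UPDATE Inventory SET remaining = remaining - 1 WHERE event_id = @id",
        params={"id": event_id},
        param_types={"id": spanner.param_types.STRING},
    )

db.run_in_transaction(reserve_ticket, "event-123")
```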

Question 10 of 20

Your company runs a rapidly growing global SaaS billing platform. Eighty percent of traffic originates from North America and Europe. The application must execute thousands of financial transactions per second with ACID guarantees, and every committed transaction must survive a complete regional outage (RPO 0, RTO < 15 minutes). Operations wants a fully managed service that can scale horizontally without manual sharding and lets them apply schema changes with no downtime. Which Google Cloud database solution and deployment option best satisfies these requirements?

  • Migrate the database to Cloud SQL for PostgreSQL in us-central1 with a cross-region read replica in europe-west1.

  • Run a multi-master MySQL cluster on Compute Engine managed instance groups spread across us-central1 and europe-west1 with asynchronous replication.

  • Provision a Cloud Spanner instance using the nam-eur3 multi-region configuration to serve reads and writes from both continents.

  • Use Cloud Bigtable with two clusters, one in us-central1 and one in europe-west4, leveraging multi-cluster routing.
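
For reference, provisioning a Spanner instance with a multi-region configuration looks roughly like the sketch below. The configuration name is the one quoted in the scenario; in a real project, pick one returned by list_instance_configs(), and treat the project ID, instance name, and node count as assumptions.

```python
# Sketch: create a Spanner instance in a multi-region configuration.
# Project ID, instance ID, and node count are assumptions; the config name is
# the one quoted in the scenario.
from google.cloud import spanner

client = spanner.Client(project="billing-prod")
config_name = f"{client.project_name}/instanceConfigs/nam-eur3"

instance = client.instance(
    "billing-spanner",
    configuration_name=config_name,
    display_name="Global billing database",
    node_count=3,
)
instance.create().result(timeout=300)   # wait for the long-running operation
```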

Question 11 of 20

Your team of 50 developers maintains dozens of microservices deployed on GKE. Engineering leadership wants to shorten feedback loops and improve code quality by introducing Gemini Code Assist at both development time and in continuous integration (CI). All proprietary source code and model prompts must stay inside the company's Google Cloud project, and no workload may rely on external SaaS providers or be run in non-compliant regions. Which approach best satisfies these requirements?

  • Run Cloud Workstations with the Cloud Code plugin enabled for Gemini Code Assist, and add a scripted Gemini-powered step to Cloud Build that executes under the project's default Cloud Build service account.

  • Deploy a custom fine-tuned LLM on Compute Engine instances located in a lower-cost region that does not meet the organization's compliance standards, replacing existing static analysis.

  • Export the full repository each night to an external SaaS that uses a GPT-4 engine for automated code reviews, then manually merge the suggested patches.

  • Insert a Cloud Function in the pipeline that sends container images to an external generative-AI API for summarization before pushing them to Artifact Registry.

Question 12 of 20

Your organization is designing a multi-environment Apigee X deployment. Strict security policy requires the following:

  1. Back-end microservices run in private GKE clusters inside two separate VPC networks: prod-svc-vpc and nonprod-svc-vpc. These VPCs must not be exposed to the public internet.
  2. Only Apigee should receive client traffic; clients must never connect directly to the clusters.
  3. Operational teams want clean separation of IAM, quotas, and billing between production and non-production while keeping administration effort low.

Which network architecture best satisfies the requirements?

  • Create separate VPC networks for prod and non-prod in the same Google Cloud project, deploy all Apigee instances there, and use Cloud NAT so the runtime nodes call backend services through their public IP addresses.

  • Create one Apigee organization with two instances that share the same runtime VPC; use firewall rules instead of VPC peering to reach the private GKE clusters over the public internet.

  • Create two Google Cloud projects, one per environment. In each project create an Apigee organization with a single Apigee instance whose runtime uses its own VPC network. Peer apigee-prod-vpc only with prod-svc-vpc and apigee-nonprod-vpc only with nonprod-svc-vpc.

  • Create one Apigee organization in a shared project and configure two environments (prod and non-prod) on a single Apigee instance that is attached to a shared VPC network peered to both service VPCs.

Question 13 of 20

Your company is migrating loosely coupled microservices to Google Kubernetes Engine (GKE). Each service already passes unit tests and performance benchmarks run in Cloud Build. After several releases, run-time failures occur because of mismatched request and response formats once workloads reach the shared staging cluster. You must extend the CI/CD pipeline so these issues are caught earlier without markedly increasing build time. Which type of test should you add?

  • Add an automated end-to-end integration test stage that deploys all related microservices into a temporary GKE namespace and runs API contract scenarios across service boundaries.

  • Introduce stress tests that generate peak-load traffic against each microservice to validate autoscaling policies prior to staging deployment.

  • Expand the existing unit test suite with additional component-level functional tests that mock external dependencies for each microservice in isolation.

  • Insert a static application security testing (SAST) step in Cloud Build to scan source code and container images for known vulnerabilities before the build is promoted.
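
Whatever stage you add, a cross-service check boils down to exercising one service's API the way its consumers do and asserting on the response shape. A hedged pytest-style example, with a hypothetical service URL and response fields, might look like this:

```python
# Hypothetical contract check run after deploying services into a temporary namespace.
# The URL and response fields are placeholders for whatever the services expose.
import requests

BASE = "http://orders.temp-ci.svc.cluster.local"   # service in the throwaway namespace

def test_order_service_matches_consumer_contract():
    resp = requests.post(
        f"{BASE}/v1/orders",
        json={"sku": "ABC-123", "quantity": 2},
        timeout=10,
    )
    assert resp.status_code == 201
    body = resp.json()
    # Fields the downstream billing service depends on:
    assert {"order_id", "total_cents", "currency"} <= body.keys()
```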

Question 14 of 20

ExampleSoft must give an external penetration tester, Alice, temporary read-only access to Cloud Logging data in the production project. She is outside your Google Workspace, and you are not permitted to create service accounts or export logs. Which identity type should receive the Logs Viewer role (roles/logging.viewer) to uphold least-privilege principles and maintain good credential hygiene?

  • Add Alice to a new Google Group in ExampleSoft's domain and assign the role to that group.

  • Grant the role to Alice's personal Google Account (for example, a personal gmail.com address).

  • Configure workload identity federation so Alice receives temporary credentials mapped to an external principal.

  • Create a dedicated service account, generate a JSON key, and give the key file to Alice.
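
Mechanically, granting a single external Google Account a project-level role is a one-binding change. The sketch below uses the Resource Manager client; the project ID and email address are hypothetical placeholders.

```python
# Sketch: add one IAM binding for an external Google Account.
# Project ID and email are hypothetical placeholders.
from google.cloud import resourcemanager_v3

client = resourcemanager_v3.ProjectsClient()
resource = "projects/prod-project"   # assumed project ID

policy = client.get_iam_policy(request={"resource": resource})
policy.bindings.add(
    role="roles/logging.viewer",
    members=["user:alice@example.com"],   # hypothetical external account
)
client.set_iam_policy(request={"resource": resource, "policy": policy})
```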

Question 15 of 20

Your company ingests financial transaction records from multiple regions into a dual-region Cloud Storage bucket. Compliance regulations require that every object remain intact and undeleted for at least seven years. After the seven-year period, objects should be removed automatically to avoid unnecessary storage costs. Platform administrators must not be able to override the retention requirement. What should you do to meet these needs with minimal ongoing operational effort?

  • Enable Object Versioning on the bucket and configure a lifecycle rule to delete live and noncurrent object versions after 2,555 days.

  • Turn on default event-based holds for the bucket and require uploaders to release the hold after seven years so objects can be deleted.

  • Change the bucket's storage class to Archive and instruct administrators to delete objects manually once they reach seven years of age.

  • Set a seven-year bucket retention policy, lock the policy, and add a lifecycle rule that deletes objects older than 2,555 days.
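
For orientation, a locked bucket retention policy combined with a lifecycle delete rule can be set with a few calls to the google-cloud-storage client; the bucket name is a placeholder, and locking is irreversible, which is exactly what prevents administrators from overriding it.

```python
# Sketch: seven-year locked retention plus automatic deletion afterwards.
# The bucket name is a placeholder.
from google.cloud import storage

bucket = storage.Client().bucket("txn-records-dualregion")
bucket.reload()                                  # load current metadata and lifecycle rules

bucket.retention_period = 7 * 365 * 24 * 3600    # retention is expressed in seconds
bucket.add_lifecycle_delete_rule(age=2555)       # delete once objects pass ~7 years
bucket.patch()

bucket.lock_retention_policy()                   # irreversible: policy cannot be shortened
```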

Question 16 of 20

A retail enterprise is formalizing its non-functional requirements before rolling out a new set of microservices across multiple regions. One proposal defines reliability as "the fraction of time the service is reachable from at least one region." Several architects object that this definition is incomplete. Based on guidance from the Google Cloud Well-Architected Framework, which alternative wording most accurately expresses the core concept of reliability so it can be used as the foundation for service-level objectives (SLOs)?

  • The percentage of time the service returns successful responses within 200 milliseconds.

  • The capacity of a system to scale out automatically whenever utilization exceeds 70 percent.

  • The ability of a system to perform its required functions under stated conditions for a specified period.

  • The guarantee that data will never be lost even if multiple zones fail.

Question 17 of 20

A financial-services company runs a real-time risk engine on Google Cloud and must stream 8 Gbps of data from its on-premises data center. The data center already has an MPLS edge router but no presence in a Google colocation facility, and the team cannot deploy new hardware. They need a private connection that delivers at least 10 Gbps aggregate bandwidth, offers a 99.9 % SLA per VLAN attachment, and can be provisioned in under two weeks. Which Google Cloud connectivity option best meets these requirements?

  • Use Private Service Connect to expose internal Google Cloud endpoints to the on-premises network.

  • Order two 5 Gbps VLAN attachments through Partner Interconnect in separate partner locations to create a redundant 10 Gbps private link.

  • Provision a redundant 10 Gbps Dedicated Interconnect at a Google colocation facility and extend the data center network to that site.

  • Configure HA VPN with two IPSec tunnels over different ISPs to Google Cloud.

Question 18 of 20

Your security team mandates that BigQuery data in the analytics-prod project must only be queried from Google-managed laptops that comply with company endpoint policies. In addition, the data must never be copied to projects outside analytics-prod, even if an IAM administrator accidentally grants BigQuery roles to another project's service account. Which security control design best meets both requirements?

  • Configure an organization-level hierarchical firewall policy that blocks all egress except to the corporate VPN and turn on BigQuery Data Access audit logs in analytics-prod.

  • Create a VPC Service Controls perimeter around analytics-prod and add an Access Context Manager access level that allows requests only from corporate-managed devices, denying all other egress.

  • Apply an organization policy that disables cross-project data export and enforces CMEK for BigQuery, while routing all traffic through Cloud NAT private IP ranges.

  • Enable Cloud Identity-Aware Proxy for BigQuery, create a context-aware access policy requiring compliant devices, and export BigQuery audit logs to Cloud Storage for additional monitoring.

Question 19 of 20

Your company runs a Spring Boot REST API on a single-zone managed instance group behind an external HTTP(S) load balancer in us-central1. After a recent zone-wide outage, management now requires at least 99.95 % availability and wants to eliminate all virtual-machine patching tasks. The traffic pattern is bursty but remains low most of the day, so reducing steady-state compute cost is important. Which architecture change best satisfies the new objectives while keeping ongoing costs as low as possible?

  • Migrate the application to GKE Autopilot with clusters in two regions, enable multi-cluster ingress, and run the workload on spot VMs for cost savings.

  • Containerize the service and deploy it to Cloud Run (fully managed) in two nearby regions behind a global external HTTP(S) load balancer, configuring one minimum instance per region.

  • Convert the instance template to preemptible VMs and move to a regional managed instance group spanning two zones behind the existing external HTTP(S) load balancer.

  • Deploy the API to App Engine flexible environment with manual scaling of two instances per region and use Cloud DNS latency-based routing across regions.
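
To ground the serverless option, the sketch below creates the same Cloud Run service in two regions with one minimum instance each; the project, regions, image, and service name are assumptions, and the global external HTTP(S) load balancer in front is configured separately.

```python
# Sketch: deploy one Cloud Run service per region with a warm minimum instance.
# Project, regions, image, and service ID are assumptions.
from google.cloud import run_v2

client = run_v2.ServicesClient()
for region in ("us-central1", "us-east1"):
    service = run_v2.Service(
        template=run_v2.RevisionTemplate(
            containers=[run_v2.Container(
                image="us-docker.pkg.dev/example/api:latest")],
            scaling=run_v2.RevisionScaling(min_instance_count=1),  # avoid cold starts
        )
    )
    request = run_v2.CreateServiceRequest(
        parent=f"projects/example-project/locations/{region}",
        service=service,
        service_id="orders-api",
    )
    client.create_service(request=request).result()   # wait for each deployment
```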

Question 20 of 20

During a phased migration of several hundred microservices to Google Kubernetes Engine, roughly 15-20 percent of the Cloud Build jobs fail because of missing Maven dependencies or integration-test mismatches. The operations lead asks how an AI-assisted migration tool such as CogniPort can be used to shorten recovery time without replacing the existing CI/CD tooling. What should you recommend?

  • Add a CogniPort post-build step in the existing Cloud Build pipeline so it analyzes compilation and test logs on every run and opens pull requests with suggested fixes.

  • Replace Cloud Build completely with CogniPort's native build service so that all compilation and tests run inside CogniPort.

  • Run CogniPort only once at the start of the migration to create an application inventory and then disable it to avoid runtime overhead.

  • Invoke CogniPort during the kubectl apply command so it can rewrite Kubernetes manifests before they are deployed to the cluster.