GCP Professional Cloud Architect Practice Test
Use the form below to configure your GCP Professional Cloud Architect Practice Test. The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

GCP Professional Cloud Architect Information
What the GCP Professional Cloud Architect Exam Measures
Google’s Professional Cloud Architect certification is designed to validate that an individual can design, develop and manage robust, secure, scalable, highly available, and dynamic solutions on Google Cloud Platform (GCP). Candidates are expected to understand cloud architecture best practices, the GCP product portfolio, and how to translate business requirements into technical designs. In addition to broad conceptual knowledge—network design, security, compliance, cost optimization—the exam emphasizes real-world decision-making: choosing the right storage option for a given workload, planning a secure multi-tier network, or architecting a resilient data-processing pipeline.
Format, Difficulty, and Prerequisites
The test lasts two hours, is proctored (either onsite or online), and consists of 50–60 multiple-choice and multiple-select questions. Questions are scenario based; many describe a fictitious company’s requirements and ask which architecture or operational change best meets cost, performance, or security needs. Although there are no formal prerequisites, Google recommends at least three years of industry experience—including a year of hands-on work with GCP. Because the exam’s scope is wide, candidates often discover that depth in one or two products (e.g., BigQuery or Cloud Spanner) is insufficient; a successful architect must be equally comfortable with compute, networking, IAM, data analytics, and DevOps considerations.
GCP Professional Cloud Architect Exam Practice Exams for Readiness
Taking full-length practice exams is one of the most effective ways to gauge exam readiness. Timed mock tests recreate the stress of the real assessment, forcing you to manage the clock and make decisions under pressure. Detailed answer explanations expose gaps in knowledge—particularly around edge-case IAM policies, VPC peering limits, or cost-optimization trade-offs—that casual study can miss. Many candidates report that after scoring consistently above 80 % on high-quality practice tests, their real-exam performance feels familiar rather than daunting. Equally important, reviewing why a distractor option is wrong teaches nuanced differences between seemingly similar GCP services (for example, Cloud Load Balancing tiers or Pub/Sub vs. Cloud Tasks), sharpening the judgment skills the exam prizes.
Building a Personal Study Plan
Begin with Google’s official exam guide and skill-marker documents, mapping each bullet to hands-on demos in Cloud Shell or a free-tier project. Allocate weekly blocks: architecture design sessions, product-specific deep dives, and at least three full practice exams spaced over several weeks. Complement those with whitepapers (e.g., the Site Reliability Workbook), case studies, and the latest architecture frameworks. Finally, revisit weak domains using Qwiklabs or self-built mini-projects—such as deploying a canary release pipeline or designing a multi-region Cloud Spanner instance—to transform theoretical understanding into muscle memory. By combining structured study, real-world experimentation, and targeted practice exams, candidates enter test day with both confidence and the architect’s holistic mindset Google is looking for.

Free GCP Professional Cloud Architect Practice Test
- 20 Questions
- Unlimited time
- Designing and planning a cloud solution architecture
- Managing and provisioning a solution infrastructure
- Designing for security and compliance
- Analyzing and optimizing technical and business processes
- Managing implementation
- Ensuring solution and operations excellence
Your team is containerizing a scientific simulation platform on Google Kubernetes Engine. During a run, thousands of pods concurrently read and write millions of small checkpoint files (4-16 KB each) into a shared directory. The data must be visible to all pods immediately, and the simulation controller requires standard POSIX file semantics (open, append, rename). After each run, the entire dataset is deleted. Which Google Cloud storage option best satisfies these performance and access requirements while minimizing operational overhead?
Cloud Storage Standard class accessed through the gcsfuse driver
Filestore High Scale SSD tier
Regional Persistent Disks on each node with an rsync sidecar for synchronization
A Cloud Bigtable cluster with one column family per checkpoint and HFile export after completion
Answer Description
The workload needs a fully managed, POSIX-compliant file system that can be mounted simultaneously by thousands of containers and deliver very high IOPS for small-file access. Filestore High Scale SSD is purpose-built for exactly this pattern, offering tens of GB/s throughput and hundreds of thousands of IOPS with immediate consistency. Cloud Storage is an object store that lacks shared POSIX semantics even when mounted through gcsfuse, Bigtable is a wide-column database rather than a file system, and per-node persistent disks with rsync synchronization would add complexity and likely bottleneck on I/O.
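To make the POSIX requirement concrete, here is a minimal sketch (directory and file names are assumptions) of the append-and-rename checkpoint pattern the simulation controller relies on. It works on a Filestore share mounted into every pod; an object store mounted through gcsfuse generally cannot guarantee the same atomic-rename behavior.

```python
import os

# Hypothetical mount point where the Filestore share is mounted into each pod.
CHECKPOINT_DIR = "/mnt/filestore/checkpoints"


def write_checkpoint(step: int, payload: bytes) -> None:
    """Append-and-rename pattern that relies on POSIX semantics.

    Data is written to a temporary file, then an atomic rename publishes the
    checkpoint so every other pod immediately sees a complete file.
    """
    tmp_path = os.path.join(CHECKPOINT_DIR, f".ckpt-{step}.tmp")
    final_path = os.path.join(CHECKPOINT_DIR, f"ckpt-{step}.bin")
    with open(tmp_path, "ab") as f:      # POSIX open + append
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    os.rename(tmp_path, final_path)      # POSIX atomic rename


if __name__ == "__main__":
    write_checkpoint(1, b"simulation state bytes")
```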
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is Filestore High Scale SSD tier the best option for this workload?
What is POSIX compliance, and why is it important for this simulation workload?
Why are the other options unsuitable for this scenario?
What is Filestore High Scale SSD tier?
What does POSIX compliance mean for file systems?
Why aren't Cloud Storage or Cloud Bigtable viable options here?
A retailer runs a REST-based order-management application on-premises. A logistics partner must call this API from the public internet, but the security team requires that the backend remain reachable only over a private network. The company also needs per-partner request quotas, OAuth 2.0 enforcement, and detailed usage analytics, all without modifying the legacy application. You already operate workloads on Google Cloud and want to minimize ongoing operational effort. Which approach best meets these requirements?
Expose the on-prem API through an external TCP load balancer with Cloud NAT; enforce quotas and OAuth in application code.
Establish VPC Network Peering between the on-prem network and Google Cloud and share the private service address directly with the partner.
Re-engineer the API as Cloud Functions behind Cloud Endpoints and retire the on-prem system.
Deploy Apigee X in Google Cloud, connect its runtime to the on-prem API over Cloud VPN, and expose the Apigee-managed HTTPS endpoint to the partner.
Answer Description
Using Apigee X addresses every stated need. You can deploy Apigee's runtime in a Google-managed project and connect it privately to the on-premises API through Cloud VPN or Cloud Interconnect, ensuring the legacy service is never directly exposed to the internet. External partners call an Apigee-managed HTTPS endpoint, while Apigee policies provide OAuth 2.0 enforcement, partner-specific quota management, and rich usage analytics without any code changes. Re-implementing the API on Cloud Functions would require redevelopment effort and a full migration. Forwarding traffic with an external TCP load balancer plus Cloud NAT would still leave OAuth, quota enforcement, and analytics to be implemented in the application stack, increasing maintenance. VPC Network Peering cannot make a private address reachable to an external partner and offers no API management features. Therefore, the Apigee-based approach is the only solution that satisfies all security and governance requirements while keeping operational overhead low.
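As an illustration of what the partner's client sees, the sketch below (hypothetical hostnames, paths, and credentials) obtains an OAuth 2.0 access token from the Apigee-managed endpoint and then calls the proxied order API. Apigee validates the token, applies the partner's quota, and records analytics before forwarding the request over the private connection to the on-premises backend.

```python
import requests

# Hypothetical values for illustration; real hosts, paths, and credentials
# come from the Apigee proxy and the partner's developer-app configuration.
APIGEE_HOST = "https://api.example-retailer.com"
TOKEN_PATH = "/oauth/token"
ORDERS_PATH = "/v1/orders"
CLIENT_ID = "partner-app-client-id"
CLIENT_SECRET = "partner-app-client-secret"


def get_access_token() -> str:
    # OAuth 2.0 client-credentials grant handled by an Apigee OAuth policy.
    resp = requests.post(
        APIGEE_HOST + TOKEN_PATH,
        data={"grant_type": "client_credentials"},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def list_orders() -> dict:
    # Apigee enforces the bearer token and per-partner quota, then forwards
    # the call privately to the legacy backend; no application code changes.
    token = get_access_token()
    resp = requests.get(
        APIGEE_HOST + ORDERS_PATH,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(list_orders())
```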
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Apigee X and why is it suitable for API management?
How does Cloud VPN ensure secure communication between Google Cloud and on-premises systems?
What are the benefits of using OAuth 2.0 for API security?
What is Apigee X and how does it work?
How does Cloud VPN help connect on-premises to Google Cloud?
What is OAuth 2.0 and why is it enforced in API management?
An enterprise runs nightly Spark-based extract-transform-load (ETL) jobs on a regional managed instance group (MIG) of standard Compute Engine VMs. Each run processes 7 TB of data stored in Cloud Storage and writes checkpoints back to the same bucket every 10 minutes, allowing the job to resume after a failure. Management wants to reduce compute cost while preserving the current four-hour completion window and keeping operational effort low. Additional constraints are:
- Instances must not have public IP addresses.
- Engineers want to keep using gcsfuse to mount the Cloud Storage bucket.
- Any interruptions should be handled automatically so that jobs finish within the window without manual intervention.
Which deployment approach best meets all requirements?
Replace the MIG with a regional MIG that uses Spot VMs without external IP addresses; configure instance startup scripts to relaunch the Spark job after each preemption.
Deploy the ETL pipeline as a Cloud Run job that mounts the Cloud Storage bucket with gcsfuse and relies on Cloud Scheduler to trigger nightly executions.
Provision a GKE Standard cluster with a node pool consisting of Spot VMs that have no public IPs and are behind Cloud NAT; run the Spark workload as Kubernetes CronJobs that use gcsfuse mounts and let Kubernetes reschedule pods when nodes are preempted.
Create a GKE Autopilot cluster and deploy the ETL code as Kubernetes CronJobs; Autopilot will automatically place the pods on Google-managed nodes and restart them after preemption events.
Answer Description
Running the workload as Kubernetes CronJobs on a GKE Standard cluster whose sole node pool uses Compute Engine Spot VMs delivers the largest discount while still satisfying operational and technical constraints:
- GKE Standard supports creating node pools backed entirely by Spot VMs, which are up to 91% cheaper than regular on-demand VMs. When a Spot VM is preempted, the node pool's autoscaler automatically provisions a replacement, and the Kubernetes control plane reschedules the interrupted pods. Because the Spark job checkpoints to Cloud Storage every 10 minutes, the job can restart and still meet the four-hour SLA (see the checkpoint-resume sketch after this list).
- Nodes in the pool can be created without external IP addresses and reach the internet (if needed) through Cloud NAT, satisfying the no-public-IP requirement.
- The Cloud Storage FUSE CSI driver (or gcsfuse in a container image) lets pods mount the bucket exactly as on the current VMs, so no code changes are required.
The other options fail to meet one or more constraints:
- A GKE Autopilot cluster does not allow you to specify Spot or preemptible capacity; although Google may run Autopilot on discounted infrastructure, pods are not exposed to preemption and the pricing discount is smaller, so cost-saving potential is lower.
- Migrating to Cloud Run jobs removes the ability to use gcsfuse mounts and may exceed CPU-second quotas, risking the four-hour SLA.
- Converting the existing MIG to Spot VMs keeps costs down, but you must build custom logic for instance replacement, health checks, and job restarts, increasing operational overhead compared with Kubernetes' built-in rescheduling.
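A minimal checkpoint-resume sketch, assuming a hypothetical bucket and object layout, shows why 10-minute checkpoints make Spot preemptions tolerable: a rescheduled pod resumes from the last committed batch instead of reprocessing the full 7 TB run.

```python
from google.cloud import storage

# Hypothetical bucket and object names used only for illustration.
BUCKET = "etl-checkpoints"
CHECKPOINT_OBJECT = "nightly-run/latest-checkpoint.txt"

client = storage.Client()
bucket = client.bucket(BUCKET)


def save_checkpoint(batch_id: int) -> None:
    # Called roughly every 10 minutes, so a preempted Spot VM loses at most
    # one checkpoint interval of work.
    bucket.blob(CHECKPOINT_OBJECT).upload_from_string(str(batch_id))


def load_checkpoint() -> int:
    # On pod restart, resume from the last committed batch instead of batch 0.
    blob = bucket.blob(CHECKPOINT_OBJECT)
    if not blob.exists():
        return 0
    return int(blob.download_as_text())


def process_batch(batch_id: int) -> None:
    pass  # real work happens in the Spark stage; this is only a skeleton


def run_etl() -> None:
    start = load_checkpoint()
    for batch_id in range(start, 1000):
        process_batch(batch_id)
        if batch_id % 10 == 0:
            save_checkpoint(batch_id)
```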
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Spot VMs in GCP?
How does gcsfuse work with Cloud Storage?
What is a Kubernetes CronJob?
What are Spot VMs and why are they cost-effective?
What is gcsfuse and why is it used in this deployment?
How does Kubernetes handle interruptions on Spot VMs?
A financial-services firm is in the discovery phase of migrating 500 on-premises virtual machines to Google Cloud. Executives want a dependency map before grouping workloads into migration waves. The data center uses dynamic service routing, and application traffic patterns vary during month-end processing. Using Google Cloud Migration Center, which pre-migration action best ensures that application dependencies are captured accurately while keeping operational risk low?
Deploy Google Application Migration Service mobility agents only and combine agent metrics with current firewall rules to infer dependencies.
Install the Migration Center discovery client on all relevant VMs and enable network profiling for a full business cycle to record live traffic between services.
Perform a one-time SNMP and port scan from a Migration Center appliance to catalog open ports on each host.
Import the existing configuration-management database (CMDB) into Migration Center and generate the dependency map without collecting live network data.
Answer Description
Migration Center's discovery client can be installed on each VM to collect hardware, software, and process information. When network profiling is enabled, the agent records inbound and outbound connections over time, producing a traffic graph that identifies service dependencies. Running the profiling for at least one representative business cycle (often one to two weeks) is recommended to capture periodic spikes such as month-end jobs. Importing a static CMDB, relying only on firewall logs, performing one-time port sweeps, or using a migration mobility agent alone will not reliably reveal transient or east-west dependencies, risking missed links and poorly sequenced migration waves.
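Conceptually, the dependency map is a graph built from observed (source, destination, port) flows collected over the profiling window. The toy sketch below (made-up hostnames and records, not Migration Center's actual output format) shows how such records translate into migration-wave groupings.

```python
from collections import defaultdict

# Simplified connection records of the kind network profiling collects over a
# business cycle: (source_vm, destination_vm, destination_port).
connections = [
    ("web-01", "orders-api-01", 8443),
    ("orders-api-01", "db-primary", 5432),
    ("batch-07", "db-primary", 5432),   # appears only during month-end runs
    ("orders-api-01", "cache-01", 6379),
]


def build_dependency_map(records):
    """Group observed flows into a per-VM dependency map."""
    deps = defaultdict(set)
    for src, dst, port in records:
        deps[src].add((dst, port))
    return deps


def migration_wave(deps, roots):
    """Everything a set of VMs talks to should move in the same or an earlier wave."""
    wave = set(roots)
    for vm in roots:
        wave.update(dst for dst, _ in deps.get(vm, ()))
    return wave


deps = build_dependency_map(connections)
print(migration_wave(deps, {"web-01", "orders-api-01"}))
```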
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Migration Center discovery client?
Why is network profiling over a full business cycle important?
How does live traffic profiling differ from using a static CMDB for migration planning?
Why is network profiling recommended for a full business cycle when migrating VMs?
What is the role of the Migration Center discovery client in VM migration?
What are east-west dependencies, and why are they important for migration planning?
A managed instance group of web servers runs in the prod-vpc network. Every VM is tagged web-frontend and is reached through an external HTTPS load balancer. The network currently has these firewall rules:
- default-allow-internal (priority 65534, allow all protocols from 10.128.0.0/9, 172.16.0.0/12, 192.168.0.0/16)
- default-deny-ingress (priority 65535, deny all)
- allow-https-web (priority 1000, allow tcp:443 from 0.0.0.0/0 to targets tagged web-frontend)
A new policy states that the web servers must:
- accept HTTPS only from 35.191.0.0/16 and 130.211.0.0/22 (load-balancer ranges)
- allow SSH only from the on-premises subnet 10.10.0.0/24
- block all other sources without affecting other prod-vpc workloads
Which approach satisfies these requirements with the fewest firewall changes?
Modify allow-https-web to permit tcp:443 only from 35.191.0.0/16 and 130.211.0.0/22, add an ingress allow rule (priority 1000) for tcp:22 from 10.10.0.0/24 to targets tagged web-frontend, then create an ingress deny all rule (priority 2000) that targets the web-frontend tag with source 0.0.0.0/0. Leave the default rules unchanged.
Delete default-allow-internal and allow-https-web. Create two new ingress allow rules that target web-frontend: one for tcp:443 from 35.191.0.0/16 and 130.211.0.0/22, and one for tcp:22 from 10.10.0.0/24. Rely on default-deny-ingress to block everything else.
Attach a Cloud Armor security policy to the load balancer that allows requests from 35.191.0.0/16, 130.211.0.0/22, and 10.10.0.0/24 and blocks all other sources. No firewall rule changes are needed.
Add an ingress deny rule (priority 900) that targets web-frontend and denies tcp:443 from 0.0.0.0/0 except 35.191.0.0/16 and 130.211.0.0/22. Add no other rules.
Answer Description
Editing the existing HTTPS rule so that it no longer allows all sources removes direct internet access while keeping the rule count low. Adding a separate SSH allow rule lets the operations subnet connect. Because the default-allow-internal rule (priority 65534) would still permit other RFC 1918 ranges, creating a targeted deny-all ingress rule with a lower priority number than 65534 ensures every packet not matched by the two precise allow rules is dropped only for VMs that carry the web-frontend tag. Other prod-vpc workloads keep using default-allow-internal because the new rules match only the tagged web servers. Deleting default-allow-internal would risk breaking other internal traffic, a Cloud Armor policy filters only requests that arrive through the load balancer (so it cannot restrict SSH or direct connections to the VMs), and VPC firewall rules have no "except" clause, so a single deny rule cannot carve out the load-balancer ranges.
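A small, self-contained model of first-match-by-priority evaluation (the rule set mirrors the correct answer; it is an illustration, not the actual VPC firewall implementation) shows why the priority-2000 deny catches other internal sources before default-allow-internal at priority 65534 can admit them.

```python
import ipaddress

# Toy model of VPC firewall evaluation for VMs tagged "web-frontend":
# the matching rule with the lowest priority number wins.
rules = [
    {"name": "allow-https-web", "priority": 1000, "action": "allow",
     "port": 443, "sources": ["35.191.0.0/16", "130.211.0.0/22"]},
    {"name": "allow-ssh-onprem", "priority": 1000, "action": "allow",
     "port": 22, "sources": ["10.10.0.0/24"]},
    {"name": "deny-web-frontend-all", "priority": 2000, "action": "deny",
     "port": None, "sources": ["0.0.0.0/0"]},
    {"name": "default-allow-internal", "priority": 65534, "action": "allow",
     "port": None, "sources": ["10.128.0.0/9", "172.16.0.0/12", "192.168.0.0/16"]},
    {"name": "default-deny-ingress", "priority": 65535, "action": "deny",
     "port": None, "sources": ["0.0.0.0/0"]},
]


def evaluate(src_ip: str, port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for rule in sorted(rules, key=lambda r: r["priority"]):
        in_range = any(src in ipaddress.ip_network(c) for c in rule["sources"])
        port_match = rule["port"] in (None, port)
        if in_range and port_match:
            return f"{rule['action']} ({rule['name']})"
    return "deny (implied)"


print(evaluate("130.211.1.9", 443))  # allow - load-balancer range
print(evaluate("10.10.0.25", 22))    # allow - on-premises SSH subnet
print(evaluate("10.200.5.4", 443))   # deny  - internal range stopped at priority 2000
```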
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are firewall rule priorities in GCP?
What is the purpose of tags like 'web-frontend' in firewall rules?
What are RFC 1918 ranges, and why do they matter in firewall rules?
Why is priority significant in configuring firewall rules?
What are the IP ranges 35.191.0.0/16 and 130.211.0.0/22 used for?
Why is the default-allow-internal rule left unchanged in the correct solution?
Your organization is reviewing several reliability testing proposals for its microservices platform on Google Kubernetes Engine. To qualify an activity as chaos engineering, which characteristic must the experiment explicitly include?
A deliberate injection of a controlled failure while observing that the system maintains its defined steady-state behavior.
Automated scaling tests that double resource limits to estimate future capacity requirements.
Deployment of a new application version to canary GKE pods prior to full rollout.
Continuous replay of peak production traffic to measure throughput under sustained load.
Answer Description
Chaos engineering is distinguished by the intentional and well-controlled introduction of failures or abnormal conditions while the team observes whether the application continues to meet its defined "steady-state" behavior. The purpose is to surface hidden weaknesses before they cause outages in production. Merely replaying peak traffic, performing canary releases, or running capacity-planning exercises can all improve reliability, but none of those activities purposefully inject faults to verify resilience, so they are not considered chaos engineering.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is chaos engineering?
What is 'steady-state behavior' in chaos engineering?
How are controlled failures injected during chaos experiments?
Your company is expanding its Google Cloud deployment to several regions. Each product team will keep its own project, but leadership wants to enforce a single RFC 1918 address space that allows private IP communication between virtual machines in any region, with centralized control of all firewall rules and routes. You must also avoid approaching the current hard cap on the number of VPC Network Peering links per network. Which design best meets these requirements?
Provision one custom-mode VPC per team project and connect them to a central hub VPC through Dedicated Cloud Interconnect attachments.
Give every product team its own auto-mode VPC and connect the VPCs with VPC Network Peering so that all internal subnets are reachable.
Create a separate VPC in each region inside a single project and interconnect them with Cloud VPN tunnels configured for dynamic routing.
Create one custom-mode VPC in a dedicated host project, add regional subnets for every needed region, and attach each product team's project as a service project using Shared VPC.
Answer Description
A single custom-mode Shared VPC hosted in a central project meets all stated goals. Because a VPC network is a global resource, subnets created in multiple regions share one private address space, and instances in those subnets can reach each other over Google's private backbone using internal IPs. Attaching the product teams' projects as service projects places their resources in the shared network while letting them keep separate billing and IAM boundaries. Central admins manage routes and firewall rules in the host project. This architecture uses no VPC peering links, so it cannot hit the peering-link quota.
The other options create independent VPCs that must be interconnected with VPC Network Peering, Cloud VPN, or Cloud Interconnect, adding operational overhead and consuming peering-link or tunnel quota, and none of them provides a single centrally managed network.
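A quick sketch with Python's ipaddress module (region names and CIDR choices are assumptions) illustrates how one RFC 1918 block in the Shared VPC host project can be carved into non-overlapping regional subnets that all share the same private address space.

```python
import ipaddress

# Illustrative plan: one RFC 1918 block for the Shared VPC host project,
# split into regional subnets that product teams' service projects attach to.
vpc_block = ipaddress.ip_network("10.0.0.0/16")
regions = ["us-central1", "europe-west1", "asia-east1", "australia-southeast1"]

# Split the block into /20 subnets and assign one per region; all of them
# belong to the same global VPC, so no peering links are consumed.
subnets = list(vpc_block.subnets(new_prefix=20))
plan = dict(zip(regions, subnets))

for region, subnet in plan.items():
    print(f"{region}: {subnet} ({subnet.num_addresses} addresses)")
```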
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Shared VPC in Google Cloud?
How does Google’s private backbone support regional communication in a Shared VPC?
What is the advantage of avoiding VPC Network Peering links?
What is a Shared VPC?
How does RFC 1918 address space work in a Google Cloud VPC?
What happens when you hit the VPC peering quota in Google Cloud?
During a Google Cloud Well-Architected review, you discover that a business unit runs several hundred Compute Engine instances across multiple projects and routinely provisions them at peak capacity. The finance team demands a concrete plan to lower unpredictable spending while keeping current service levels intact. Which recommendation most closely aligns with the Cost Optimization pillar of the Google Cloud Well-Architected Framework for this scenario?
Implement VPC Service Controls for all projects and forward all network logs to Cloud Logging to reduce data-exfiltration risk and simplify security audits.
Migrate workloads to larger custom machine types now to accommodate expected traffic growth over the next 12 months and avoid future performance issues.
Export Compute Engine rightsizing and idle-VM recommendations to BigQuery, trigger a Cloud Function to automatically resize or shut down inefficient instances, and monitor savings with Cloud Billing reports.
Configure autoscaling policies to add additional VM instances when CPU utilization exceeds 70 %, ensuring services remain highly available during traffic spikes.
Answer Description
The Cost Optimization pillar focuses on continuously measuring, monitoring, and right-sizing resource usage so you pay only for what you actually need. Exporting Recommender data lets you programmatically surface idle or over-provisioned resources, and an automated workflow can apply rightsizing or turn off unused instances. Billing reports then verify the realized savings. The other options map to different Well-Architected pillars: autoscaling emphasizes reliability, over-provisioning for future growth targets performance optimization, and VPC Service Controls address security and compliance rather than direct cost reduction.
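A hedged sketch of the first step, assuming the google-cloud-recommender client library and illustrative project and zone names: list idle-VM and machine-type recommendations so a downstream export job or Cloud Function can act on them.

```python
from google.cloud import recommender_v1

# Assumed project and zone; the recommenders below are the idle-VM and
# rightsizing recommenders referenced in the explanation above.
PROJECT = "my-business-unit-project"
ZONE = "us-central1-a"
RECOMMENDERS = [
    "google.compute.instance.IdleResourceRecommender",
    "google.compute.instance.MachineTypeRecommender",
]

client = recommender_v1.RecommenderClient()

for recommender_id in RECOMMENDERS:
    parent = f"projects/{PROJECT}/locations/{ZONE}/recommenders/{recommender_id}"
    for rec in client.list_recommendations(parent=parent):
        # Each recommendation carries a description and an estimated cost
        # impact; an automated workflow could resize or stop the instance
        # and then mark the recommendation as claimed/succeeded.
        print(rec.name)
        print(rec.description)
```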
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Cost Optimization pillar of the Google Cloud Well-Architected Framework?
How does Google Cloud Recommender help with reducing costs?
What role does BigQuery play in automating cost optimization?
What is the Google Cloud Well-Architected Framework?
What is Compute Engine rightsizing?
How does exporting Recommender data to BigQuery help with cost optimization?
A global ticketing startup will run flash-sale campaigns that drive up to one million HTTPS requests per second from users on every continent for a few minutes at a time. Business leadership requires that the system prevent overselling by keeping ticket inventory strongly consistent worldwide, deliver sub-100 ms response times, remain available even if an entire Google Cloud region fails, and impose minimal day-to-day operations on a five-person engineering team. Which high-level Google Cloud architecture best meets these objectives?
Use App Engine standard environment in one region with automatic scaling, serve static assets via Cloud CDN, and keep inventory in Firestore in Datastore mode with eventual consistency for queries.
Run the workload on a single-region GKE cluster with horizontal pod autoscaling behind a regional external HTTP(S) load balancer, store inventory in a Cloud SQL instance configured for high availability, and protect the service with Cloud Armor.
Create large pre-provisioned Compute Engine managed instance groups with preemptible VMs behind a regional TCP load balancer; cache inventory counts in Memorystore and persist them in Cloud Bigtable replicated across two zones.
Deploy Cloud Run services in at least two distant regions, expose them through a global external HTTP(S) load balancer with Cloud Armor and optional Cloud CDN, and store ticket inventory in a multi-region Cloud Spanner instance.
Answer Description
To satisfy the requirements:
- Global low-latency access and automatic cross-regional failover are provided by a global external HTTP(S) load balancer that can route traffic to multiple back-end regions.
- Flash-sale spikes are handled by Cloud Run, whose fully managed, serverless containers scale automatically from zero to thousands of instances with no capacity pre-provisioning.
- Google Cloud's global load-balancing edge mitigates large-scale L3/L4 DDoS attacks, while Cloud Armor security policies supply additional Layer 7 filtering for application-level threats.
- A multi-region Cloud Spanner instance supplies externally consistent, strongly consistent reads and writes across regions and continues serving during a regional outage.
- Both Cloud Run and Cloud Spanner are managed services, minimizing operational overhead.
The remaining designs fail key requirements: a single-region GKE/Cloud SQL deployment cannot survive a regional outage; Compute Engine managed instance groups with Bigtable do not provide global strong consistency and require significant capacity management; App Engine with Firestore in Datastore mode offers only eventual consistency. Therefore, deploying Cloud Run in multiple regions behind a global external HTTP(S) load balancer with Cloud Armor, backed by a multi-region Cloud Spanner database, is the only option that meets all objectives.
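To illustrate how Spanner's strong consistency prevents overselling, the sketch below (instance, database, and table names are assumptions) decrements inventory inside a read-write transaction; two concurrent buyers on different continents cannot both observe and consume the last ticket.

```python
from google.cloud import spanner
from google.cloud.spanner_v1 import param_types

# Assumed instance, database, and schema: an events table with event_id and
# available_tickets columns. Names are illustrative only.
client = spanner.Client()
database = client.instance("ticketing-instance").database("ticketing-db")


def buy_tickets(event_id: str, quantity: int) -> bool:
    """Atomically decrement inventory; returns False if the sale would oversell."""

    def txn_work(transaction):
        row = transaction.execute_sql(
            "SELECT available_tickets FROM events WHERE event_id = @event_id",
            params={"event_id": event_id},
            param_types={"event_id": param_types.STRING},
        ).one()
        available = row[0]
        if available < quantity:
            return False
        transaction.execute_update(
            "UPDATE events SET available_tickets = @remaining "
            "WHERE event_id = @event_id",
            params={"remaining": available - quantity, "event_id": event_id},
            param_types={
                "remaining": param_types.INT64,
                "event_id": param_types.STRING,
            },
        )
        return True

    return database.run_in_transaction(txn_work)


print(buy_tickets("flash-sale-001", 2))
```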
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Cloud Spanner and why is it suitable for strong consistency in global systems?
How does Cloud Run handle flash-sale traffic spikes compared to Compute Engine or GKE?
What role does a global external HTTP(S) load balancer and Cloud Armor play in this architecture?
What is Cloud Spanner, and why is it suitable for global ticket inventory management?
How does a global external HTTP(S) load balancer work, and why is it important?
Why is Cloud Run preferred for handling flash-sale spikes in this architecture?
Your company runs a rapidly growing global SaaS billing platform. Eighty percent of traffic originates from North America and Europe. The application must execute thousands of financial transactions per second with ACID guarantees, and every committed transaction must survive a complete regional outage (RPO 0, RTO < 15 minutes). Operations wants a fully managed service that can scale horizontally without manual sharding and lets them apply schema changes with no downtime. Which Google Cloud database solution and deployment option best satisfies these requirements?
Migrate the database to Cloud SQL for PostgreSQL in us-central1 with a cross-region read replica in europe-west1.
Run a multi-master MySQL cluster on Compute Engine managed instance groups spread across us-central1 and europe-west1 with asynchronous replication.
Provision a Cloud Spanner instance using the nam-eur3 multi-region configuration to serve reads and writes from both continents.
Use Cloud Bigtable with two clusters, one in us-central1 and one in europe-west4, leveraging multi-cluster routing.
Answer Description
A multi-region Cloud Spanner instance (for example, the nam-eur3 configuration) automatically replicates data across multiple regions and uses synchronous Paxos commits to provide global strong consistency with an effective recovery point objective of zero and a 99.999 % availability SLA, meeting the RPO 0 / low-RTO requirement even during a regional outage. Spanner's horizontal scaling, automatic sharding, and online schema changes minimize operational overhead.
Cloud SQL with cross-region replicas is limited to a single primary region, offers only asynchronous replication (non-zero RPO) and requires manual sharding to scale. Cloud Bigtable provides petabyte scale and multi-cluster replication but is not relational and only guarantees eventual consistency for cross-region writes, so it cannot meet ACID requirements. Self-managed MySQL on Compute Engine imposes significant operational burden and uses asynchronous replication, resulting in data loss risk during regional failures. Therefore, a multi-region Cloud Spanner deployment is the only option that fulfills all stated requirements.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Can you explain what ACID guarantees mean and why they are important for this use case?
What is the major difference between Cloud Spanner's multi-region configuration and Cloud SQL's cross-region read replicas?
Why is the Paxos consensus algorithm critical in Cloud Spanner's functioning?
What is Cloud Spanner and how does it ensure global strong consistency?
What is Paxos commit, and how does it contribute to Cloud Spanner's high availability?
How does Cloud Spanner handle schema changes with no downtime?
Your team of 50 developers maintains dozens of microservices deployed on GKE. Engineering leadership wants to shorten feedback loops and improve code quality by introducing Gemini Code Assist at both development time and in continuous integration (CI). All proprietary source code and model prompts must stay inside the company's Google Cloud project, and no workload may rely on external SaaS providers or be run in non-compliant regions. Which approach best satisfies these requirements?
Run Cloud Workstations with the Cloud Code plugin enabled for Gemini Code Assist, and add a scripted Gemini-powered step to Cloud Build that executes under the project's default Cloud Build service account.
Deploy a custom fine-tuned LLM on Compute Engine instances located in a lower-cost region that does not meet the organization's compliance standards, replacing existing static analysis.
Export the full repository each night to an external SaaS that uses a GPT-4 engine for automated code reviews, then manually merge the suggested patches.
Insert a Cloud Function in the pipeline that sends container images to an external generative-AI API for summarization before pushing them to Artifact Registry.
Answer Description
Running Cloud Workstations with the Cloud Code plugin lets developers access Gemini Code Assist for real-time completions and inline security recommendations while editing, without moving code outside the project. Adding a dedicated Cloud Build step that programmatically invokes the same Gemini model under the default Cloud Build service account lets the team prompt the model (for example, to scaffold unit tests or suggest refactors) during CI; all artifacts remain inside the project, and no third-party SaaS or out-of-region service is used. The other approaches either export code to external providers, run in regions that violate compliance, or transmit artifacts outside the project boundary, contradicting the stated constraints.
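As a sketch of what the scripted CI step might do (project, region, model name, and file paths are assumptions, and the real Gemini Code Assist integration may differ), a Cloud Build step could call a Gemini model through the Vertex AI SDK under the build service account and write the suggestion back into the workspace, keeping everything inside the project.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Assumptions for illustration: project, region, and model name would come
# from the Cloud Build environment; the prompt asks for unit-test scaffolding
# for a changed source file.
vertexai.init(project="my-ci-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

with open("src/main/java/com/example/InvoiceService.java") as f:
    source = f.read()

response = model.generate_content(
    "Suggest JUnit 5 unit tests for the following class. "
    "Return only Java code.\n\n" + source
)

# The suggestion is written to the workspace so a later pipeline step (or a
# reviewer) can turn it into a pull request; the call goes to Vertex AI in the
# project's own region, so no code leaves the project boundary.
with open("generated/InvoiceServiceTestSuggestion.java", "w") as f:
    f.write(response.text)
```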
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Gemini Code Assist in GCP?
How does Cloud Build integrate with Gemini Code Assist?
Why is it important to keep proprietary code within the Google Cloud project?
What is GKE and how does it contribute to deploying microservices?
What is Gemini Code Assist and how does it improve development workflows?
How does Cloud Workstations and Cloud Build ensure data remains compliant and secure?
Your organization is designing a multi-environment Apigee X deployment. Strict security policy requires the following:
- Back-end microservices run in private GKE clusters inside two separate VPC networks: prod-svc-vpc and nonprod-svc-vpc. These VPCs must not be exposed to the public internet.
- Only Apigee should receive client traffic; clients must never connect directly to the clusters.
- Operational teams want clean separation of IAM, quotas, and billing between production and non-production while keeping administration effort low.
Which network architecture best satisfies the requirements?
Create separate VPC networks for prod and non-prod in the same Google Cloud project, deploy all Apigee instances there, and use Cloud NAT so the runtime nodes call backend services through their public IP addresses.
Create one Apigee organization with two instances that share the same runtime VPC; use firewall rules instead of VPC peering to reach the private GKE clusters over the public internet.
Create two Google Cloud projects, one per environment. In each project create an Apigee organization with a single Apigee instance whose runtime uses its own VPC network. Peer apigee-prod-vpc only with prod-svc-vpc and apigee-nonprod-vpc only with nonprod-svc-vpc.
Create one Apigee organization in a shared project and configure two environments (prod and non-prod) on a single Apigee instance that is attached to a shared VPC network peered to both service VPCs.
Answer Description
Using two Apigee organizations, each in its own Google Cloud project, gives hard separation of IAM policies, runtime quotas, and billing. Each organization contains one Apigee instance whose runtime nodes live in a dedicated VPC network (apigee-prod-vpc and apigee-nonprod-vpc). Peering each runtime VPC only with the corresponding service VPC lets the instance reach the private GKE clusters without exposing them publicly and avoids transitive connectivity between environments. Because Apigee X automatically manages the runtime VPCs, the only operational task is creating the two peering connections, so administration remains simple. The alternative designs either mix prod and non-prod traffic in the same Apigee org (reducing quota/IAM isolation), rely on firewall rules alone without VPC peering (the backend cannot be reached from a private Apigee runtime), or attempt to reuse a single runtime VPC for both instances (violating hard separation and introducing overlapping-IP risk).
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why are two separate Google Cloud projects recommended for multi-environment Apigee X deployment?
What is VPC peering, and why is it necessary in this design?
How does Apigee X manage its runtime VPC networks automatically?
What is VPC peering in Google Cloud, and why is it important for Apigee X?
How does Apigee X manage runtime VPC networks, and why is it beneficial?
What is the advantage of using separate Google Cloud projects for prod and non-prod environments?
Your company is migrating loosely coupled microservices to Google Kubernetes Engine (GKE). Each service already passes unit tests and performance benchmarks run in Cloud Build. After several releases, run-time failures occur because of mismatched request and response formats once workloads reach the shared staging cluster. You must extend the CI/CD pipeline so these issues are caught earlier without markedly increasing build time. Which type of test should you add?
Add an automated end-to-end integration test stage that deploys all related microservices into a temporary GKE namespace and runs API contract scenarios across service boundaries.
Introduce stress tests that generate peak-load traffic against each microservice to validate autoscaling policies prior to staging deployment.
Expand the existing unit test suite with additional component-level functional tests that mock external dependencies for each microservice in isolation.
Insert a static application security testing (SAST) step in Cloud Build to scan source code and container images for known vulnerabilities before the build is promoted.
Answer Description
End-to-end integration testing validates how independently built services work together by exercising real network calls and data contracts between them in an environment that mimics production. This exposes schema or protocol mismatches that are invisible to unit or component-level functional tests. Static application security testing focuses on code vulnerabilities, not cross-service compatibility. Component-level functional tests already exist and test logic in isolation, so they will not reveal integration bugs. Stress or load tests measure performance and scaling behavior but do not ensure that services exchange data correctly. Therefore, introducing automated integration tests that spin up all dependent microservices in an ephemeral environment is the most effective way to catch the observed issues early.
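A minimal contract-style integration test, with assumed in-cluster service URLs and endpoints, shows the kind of cross-service assertion that catches request/response mismatches before workloads reach the shared staging cluster.

```python
import os

import requests

# Assumed service names and ports inside the ephemeral GKE namespace; in the
# pipeline these would be the in-cluster DNS names of the deployed services.
ORDERS_URL = os.environ.get("ORDERS_URL", "http://orders-svc:8080")
BILLING_URL = os.environ.get("BILLING_URL", "http://billing-svc:8080")


def test_order_to_billing_contract():
    """Exercise a real call chain across service boundaries and assert the
    response fields that downstream consumers depend on."""
    order = requests.post(
        f"{ORDERS_URL}/v1/orders",
        json={"sku": "ABC-123", "quantity": 2},
        timeout=5,
    )
    assert order.status_code == 201
    body = order.json()
    # Contract: the orders service must return an id that billing accepts.
    assert "order_id" in body

    invoice = requests.get(
        f"{BILLING_URL}/v1/invoices/{body['order_id']}", timeout=5
    )
    assert invoice.status_code == 200
    # Contract: billing must echo the order_id and expose a total field.
    assert invoice.json()["order_id"] == body["order_id"]
    assert "total" in invoice.json()
```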
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is end-to-end integration testing?
How do temporary GKE namespaces help in testing?
Why is API contract testing important in microservices?
What is an end-to-end integration test in GKE?
Why is static application security testing (SAST) unsuitable for integration issues?
What is a GKE namespace and why use it for temporary tests?
ExampleSoft must give an external penetration tester, Alice, temporary read-only access to Cloud Logging data in the production project. She is outside your Google Workspace, and you are not permitted to create service accounts or export logs. Which identity type should receive the Logs Viewer role (roles/logging.viewer) to uphold least-privilege principles and maintain good credential hygiene?
Add Alice to a new Google Group in ExampleSoft's domain and assign the role to that group.
Grant the role to Alice's personal Google Account (for example, [email protected]).
Configure workload identity federation so Alice receives temporary credentials mapped to an external principal.
Create a dedicated service account, generate a JSON key, and give the key file to Alice.
Answer Description
A personal Google Account represents an individual human user and can be granted IAM roles directly. Granting the Logs Viewer role to Alice's Gmail-based Google Account lets her authenticate interactively with her own credentials, leverage Google's security features such as 2-Step Verification, and avoids distributing long-lived shared secrets. Service accounts are intended for non-human workloads, and sharing their keys violates best practices. A Google Group is primarily for aggregating multiple principals, not for a single external tester, and would still require managing membership. Workload identity federation issues short-lived credentials for external workloads, not for interactive console sessions by a human tester, and adds unnecessary complexity for a short engagement.
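For illustration, granting the role directly to a human user can be scripted with the Resource Manager client library; the project ID and the tester's address below are hypothetical placeholders, and in practice you would also plan to remove the binding (or attach an expiry via an IAM condition) when the engagement ends.

```python
from google.cloud import resourcemanager_v3
from google.iam.v1 import iam_policy_pb2, policy_pb2

RESOURCE = "projects/example-prod-project"   # hypothetical project
MEMBER = "user:alice@gmail.com"              # hypothetical tester account

client = resourcemanager_v3.ProjectsClient()

# Read-modify-write of the project IAM policy: add a single binding that
# grants Logs Viewer to the external tester's personal Google Account.
policy = client.get_iam_policy(
    request=iam_policy_pb2.GetIamPolicyRequest(resource=RESOURCE)
)
policy.bindings.append(
    policy_pb2.Binding(role="roles/logging.viewer", members=[MEMBER])
)
client.set_iam_policy(
    request=iam_policy_pb2.SetIamPolicyRequest(resource=RESOURCE, policy=policy)
)
```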
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is IAM in GCP and why is it important?
What is a Google Group and how is it used in GCP IAM?
What is workload identity federation and when should it be used?
What is IAM in GCP?
Why is a personal Google Account considered secure for granting temporary access?
What are the drawbacks of using service accounts for external testers?
Your company ingests financial transaction records from multiple regions into a dual-region Cloud Storage bucket. Compliance regulations require that every object remain intact and undeleted for at least seven years. After the seven-year period, objects should be removed automatically to avoid unnecessary storage costs. Platform administrators must not be able to override the retention requirement. What should you do to meet these needs with minimal ongoing operational effort?
Enable Object Versioning on the bucket and configure a lifecycle rule to delete live and noncurrent object versions after 2,555 days.
Turn on default event-based holds for the bucket and require uploaders to release the hold after seven years so objects can be deleted.
Change the bucket's storage class to Archive and instruct administrators to delete objects manually once they reach seven years of age.
Set a seven-year bucket retention policy, lock the policy, and add a lifecycle rule that deletes objects older than 2,555 days.
Answer Description
A bucket-level retention policy set to seven years enforces an immutable hold on every object: no one, including project owners, can delete or overwrite data until the retention period expires. Locking the policy makes it permanent and prevents privileged users from shortening or disabling it, satisfying the "must not be able to override" constraint. When the retention window elapses, the objects become eligible for deletion; a complementary Object Lifecycle Management rule that deletes objects whose age exceeds 2,555 days (~7 years) will automatically clean them up, eliminating manual effort. Enabling only versioning plus a lifecycle rule does not stop an administrator from deleting all versions ahead of schedule. Simply switching to the Archive class offers lower cost but no enforced retention. Default event-based holds still require someone to release the holds manually, so they do not ensure automatic deletion and introduce operational overhead. Therefore, applying and locking a seven-year retention policy and adding an age-based lifecycle delete rule is the correct approach.
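A sketch of the bucket configuration using the Cloud Storage Python client (the bucket name is assumed); note that locking the retention policy is irreversible.

```python
from google.cloud import storage

# Assumed bucket name; seven years expressed in seconds for the retention
# policy and in days for the lifecycle rule.
BUCKET_NAME = "financial-transactions-dual-region"
SEVEN_YEARS_SECONDS = 7 * 365 * 24 * 3600
SEVEN_YEARS_DAYS = 2555

client = storage.Client()
bucket = client.get_bucket(BUCKET_NAME)

# 1. Retention policy: objects cannot be deleted or overwritten before expiry.
bucket.retention_period = SEVEN_YEARS_SECONDS
bucket.patch()

# 2. Lock the policy so no administrator can shorten or remove it (irreversible).
bucket.reload()  # refresh bucket metadata before locking
bucket.lock_retention_policy()

# 3. Lifecycle rule: delete objects automatically once they are old enough to
#    have cleared the retention window.
bucket.add_lifecycle_delete_rule(age=SEVEN_YEARS_DAYS)
bucket.patch()
```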
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Object Lifecycle Management in Google Cloud Storage?
How does a retention policy differ from Object Versioning in Google Cloud Storage?
What happens when you lock a bucket retention policy in Google Cloud Storage?
What is a bucket retention policy in Cloud Storage?
What happens when a retention policy is locked in Cloud Storage?
How does Object Lifecycle Management work in Cloud Storage?
A retail enterprise is formalizing its non-functional requirements before rolling out a new set of micro-services across multiple regions. One proposal defines reliability as "the fraction of time the service is reachable from at least one region." Several architects object that this definition is incomplete. Based on guidance from the Google Cloud Well-Architected Framework, which alternative wording most accurately expresses the core concept of reliability so it can be used as the foundation for service-level objectives (SLOs)?
The percentage of time the service returns successful responses within 200 milliseconds.
The capacity of a system to scale out automatically whenever utilization exceeds 70 percent.
The ability of a system to perform its required functions under stated conditions for a specified period.
The guarantee that data will never be lost even if multiple zones fail.
Answer Description
Within the Google Cloud Well-Architected Framework, reliability is defined as the ability of a system to perform its required functions under stated conditions for a specified period. This wording highlights three key aspects that an SLO must capture: (1) correct functioning (the service does what users expect), (2) within declared operating conditions (such as load, dependencies, or infrastructure health), and (3) over a defined time window. Merely stating that the service is reachable describes availability, not full reliability. Focusing on data survival describes durability, and focusing on scaling or latency describes performance or elasticity. Therefore, the only option that aligns with the framework's definition of reliability is the statement that emphasizes sustained ability to meet functional requirements under specified conditions for a period of time.
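Once reliability is defined this way, SLOs turn it into numbers. The illustrative arithmetic below (figures are examples, not from the question) converts an SLO target into an error budget for a 30-day window.

```python
# Rough arithmetic relating an SLO target to its error budget over 30 days.
WINDOW_DAYS = 30
MINUTES_PER_WINDOW = WINDOW_DAYS * 24 * 60


def error_budget_minutes(slo_percent: float) -> float:
    """Minutes per window during which the service may fail to perform its
    required function before the SLO is breached."""
    return MINUTES_PER_WINDOW * (1 - slo_percent / 100)


for slo in (99.0, 99.9, 99.95, 99.99):
    print(f"SLO {slo}%: {error_budget_minutes(slo):.1f} minutes of error budget")
```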
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between availability and reliability in cloud systems?
How does a Service-Level Objective (SLO) relate to system reliability?
What role does durability play compared to reliability?
Why is reliability defined as 'the ability of a system to perform its required functions under stated conditions for a specified period'?
How does availability differ from reliability in cloud architecture?
What role do SLOs (Service Level Objectives) play in defining reliability?
A financial-services company runs a real-time risk engine on Google Cloud and must stream 8 Gbps of data from its on-premises data center. The data center already has an MPLS edge router but no presence in a Google colocation facility, and the team cannot deploy new hardware. They need a private connection that delivers at least 10 Gbps aggregate bandwidth, offers a 99.9 % SLA per VLAN attachment, and can be provisioned in under two weeks. Which Google Cloud connectivity option best meets these requirements?
Use Private Service Connect to expose internal Google Cloud endpoints to the on-premises network.
Order two 5 Gbps VLAN attachments through Partner Interconnect in separate partner locations to create a redundant 10 Gbps private link.
Provision a redundant 10 Gbps Dedicated Interconnect at a Google colocation facility and extend the data center network to that site.
Configure HA VPN with two IPSec tunnels over different ISPs to Google Cloud.
Answer Description
Partner Interconnect lets a customer obtain private connectivity to Google through a service provider without installing equipment in a colocation site. When two VLAN attachments are placed in independent Partner Interconnect locations, Google provides a 99.9 % SLA for each attachment. Two 5 Gbps attachments satisfy the 10 Gbps throughput goal and can usually be provisioned quickly by the partner. Dedicated Interconnect requires physical cross-connects in a colocation facility that the company does not have and commonly takes longer to provision, although it offers a higher (99.99 %) SLA. HA VPN travels over the public internet and provides no bandwidth guarantees, while Private Service Connect exposes specific services rather than general network connectivity. Therefore, redundant Partner Interconnect VLAN attachments are the most suitable choice.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Partner Interconnect in Google Cloud?
How does Partner Interconnect achieve a 99.9% SLA for VLAN attachments?
What are the key differences between Partner Interconnect and Dedicated Interconnect?
What is Partner Interconnect in Google Cloud?
How does the SLA for Partner Interconnect compare to other options like Dedicated Interconnect?
What are the main differences between Partner Interconnect and HA VPN?
Your security team mandates that BigQuery data in the analytics-prod project must only be queried from Google-managed laptops that comply with company endpoint policies. In addition, the data must never be copied to projects outside analytics-prod, even if an IAM administrator accidentally grants BigQuery roles to another project's service account. Which security control design best meets both requirements?
Configure an organization-level hierarchical firewall policy that blocks all egress except to the corporate VPN and turn on BigQuery Data Access audit logs in analytics-prod.
Create a VPC Service Controls perimeter around analytics-prod and add an Access Context Manager access level that allows requests only from corporate-managed devices, denying all other egress.
Apply an organization policy that disables cross-project data export and enforces CMEK for BigQuery, while routing all traffic through Cloud NAT private IP ranges.
Enable Cloud Identity-Aware Proxy for BigQuery, create a context-aware access policy requiring compliant devices, and export BigQuery audit logs to Cloud Storage for additional monitoring.
Answer Description
A service perimeter created with VPC Service Controls prevents BigQuery data from being read by resources that are outside the perimeter, even when IAM permissions are misconfigured, thereby blocking cross-project exfiltration. When you attach an Access Context Manager access level that requires requests to originate from company-managed devices, BigQuery queries succeed only from compliant laptops. Identity-Aware Proxy cannot front BigQuery, organization policies cannot stop BigQuery cross-project export, and hierarchical firewall rules govern only network traffic to and from VM instances, not API calls to managed services such as BigQuery.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a VPC Service Controls perimeter?
What is Access Context Manager and how does it enforce device compliance?
How does IAM misconfiguration lead to security risks, and how does a VPC Service Controls perimeter mitigate them?
What is a VPC Service Control perimeter?
What is Access Context Manager access level and how does it work?
Why can't Identity-Aware Proxy protect BigQuery?
Your company runs a Spring Boot REST API on a single-zone managed instance group behind an external HTTP(S) load balancer in us-central1. After a recent zone-wide outage, management now requires at least 99.95 % availability and wants to eliminate all virtual-machine patching tasks. The traffic pattern is bursty but remains low most of the day, so reducing steady-state compute cost is important. Which architecture change best satisfies the new objectives while keeping ongoing costs as low as possible?
Migrate the application to GKE Autopilot with clusters in two regions, enable multi-cluster ingress, and run the workload on spot VMs for cost savings.
Containerize the service and deploy it to Cloud Run (fully managed) in two nearby regions behind a global external HTTP(S) load balancer, configuring one minimum instance per region.
Convert the instance template to preemptible VMs and move to a regional managed instance group spanning two zones behind the existing external HTTP(S) load balancer.
Deploy the API to App Engine flexible environment with manual scaling of two instances per region and use Cloud DNS latency-based routing across regions.
Answer Description
Cloud Run (fully managed) provides a 99.95 % monthly availability SLA per region and completely abstracts VM management, so there is no operating-system patching burden. Deploying the service as a container to Cloud Run in more than one region and fronting it with the platform's global external HTTP(S) load balancer delivers cross-regional failover that meets the 99.95 % availability target. Setting a single minimum instance in each region keeps idle-time charges very low because additional instances are started only when traffic increases.
Using preemptible or spot VMs in regional managed instance groups reduces cost but offers no availability SLA and still requires image maintenance. GKE Autopilot reduces some operational work but incurs higher baseline costs than Cloud Run and still needs cluster management. App Engine flexible requires per-instance billing even when idle, so its steady-state cost is higher than Cloud Run with minimal instances. Therefore, the Cloud Run approach offers the required availability and operational simplicity at the lowest steady-state cost.
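A back-of-the-envelope calculation (assuming independent regional failures; this is an illustration, not an SLA statement) shows why two Cloud Run regions behind a global load balancer comfortably exceed the 99.95 % target.

```python
# Illustrative composite-availability estimate for two independent regions
# behind a global load balancer that fails traffic over to the healthy region.
per_region = 0.9995                   # assumed per-region availability
both_down = (1 - per_region) ** 2     # assumes independent failures
composite = 1 - both_down

print(f"Single region availability: {per_region:.4%}")
print(f"Two regions with failover:  {composite:.6%}")
# Single region: 99.9500%; two regions: ~99.999975% under these assumptions.
```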
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Cloud Run, and how does it ensure high availability?
How does a global HTTP(S) load balancer support cross-regional failover?
Why is Cloud Run more cost-effective for bursty traffic compared to other options?
What is Cloud Run and why is it suitable for achieving 99.95% availability?
How does the global HTTP(S) load balancer ensure cross-regional failover?
What makes Cloud Run more cost-effective for bursty traffic patterns compared to other services?
During a phased migration of several hundred microservices to Google Kubernetes Engine, roughly 15-20 percent of the Cloud Build jobs fail because of missing Maven dependencies or integration-test mismatches. The operations lead asks how an AI-assisted migration tool such as CogniPort can be used to shorten recovery time without replacing the existing CI/CD tooling. What should you recommend?
Add a CogniPort post-build step in the existing Cloud Build pipeline so it analyzes compilation and test logs on every run and opens pull requests with suggested fixes.
Replace Cloud Build completely with CogniPort's native build service so that all compilation and tests run inside CogniPort.
Run CogniPort only once at the start of the migration to create an application inventory and then disable it to avoid runtime overhead.
Invoke CogniPort during the kubectl apply command so it can rewrite Kubernetes manifests before they are deployed to the cluster.
Answer Description
CogniPort is designed to parse build and test error output and generate automated remediation suggestions, such as updated dependency declarations or corrected test stubs, based on each run's logs. Adding a post-build CogniPort step inside the current Cloud Build pipeline lets it analyze every compilation and test log, open pull requests with proposed fixes, and keep the normal code-review workflow intact, thereby accelerating recovery from failures without changing the team's build system. Although CogniPort's native build service could also speed recovery, that approach would replace Cloud Build, contradicting the stated requirement. Running CogniPort only once or invoking it during kubectl apply would not provide continuous, code-level remediation.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Cloud Build?
What does CogniPort do in a CI/CD pipeline?
How does Maven dependency management work?
What is a post-build step in CI/CD pipelines?
How does CogniPort analyze build and test logs?
What are the advantages of integrating AI tools into CI/CD workflows?