
GCP Associate Cloud Engineer Practice Test

Use the form below to configure your GCP Associate Cloud Engineer Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited questions.
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

GCP Associate Cloud Engineer Information

GCP Associate Cloud Engineer Exam

The Google Cloud Certified Associate Cloud Engineer (ACE) exam serves as a crucial validation of your skills in deploying, monitoring, and maintaining projects on the Google Cloud Platform. This certification is designed for individuals who can use both the Google Cloud Console and the command-line interface to manage enterprise solutions. The exam assesses your ability to set up a cloud solution environment, plan and configure a cloud solution, deploy and implement it, ensure its successful operation, and configure access and security. It is a solid starting point for those new to the cloud and can act as a stepping stone to professional-level certifications. Google recommends at least six months of hands-on experience with Google Cloud products and solutions before attempting it. The exam itself is a two-hour, multiple-choice and multiple-select test that costs $125.

The ACE exam covers a broad range of Google Cloud services and concepts. Key areas of focus include understanding and managing core services like Compute Engine, Google Kubernetes Engine (GKE), App Engine, and Cloud Storage. You should be proficient in launching virtual machine instances, configuring autoscaling, deploying applications, and knowing the different storage classes and their use cases. Additionally, a strong grasp of Identity and Access Management (IAM) is critical, including managing users, groups, roles, and service accounts according to best practices. The exam also delves into networking aspects like creating VPCs and subnets, and operational tasks such as monitoring with Cloud Monitoring, logging with Cloud Logging, and managing billing accounts. Familiarity with command-line tools like gcloud, bq, and gsutil is also essential.
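
For orientation, each of the three CLIs maps to a different service family. A few read-only commands illustrate the split (the project ID and bucket name here are placeholders):

    gcloud config set project my-project    # my-project is a hypothetical project ID
    gcloud compute instances list           # Compute Engine VMs in the current project
    bq ls                                   # BigQuery datasets in the default project
    gsutil ls gs://my-bucket                # objects in a hypothetical Cloud Storage bucket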

Practice Exams for Preparation

A vital component of a successful preparation strategy is taking practice exams. These simulations are the best way to get a feel for the tone, style, and potential trickiness of the actual exam questions. By taking practice exams, you can quickly identify your strengths and pinpoint the specific exam domains that require further study. Many who have passed the exam attest that a significant portion of the questions on the actual test were very similar to those found in quality practice exams. These practice tests often provide detailed explanations for each answer, offering a deeper learning opportunity by explaining why a particular answer is correct and the others are not. This helps in not just memorizing answers, but truly understanding the underlying concepts. Fortunately, Google provides a set of sample questions to help you get familiar with the exam format, and numerous other platforms offer extensive practice tests. Consistent practice with these resources can significantly boost your confidence and increase your chances of passing the exam.

  • Free GCP Associate Cloud Engineer Practice Test

  • 20 Questions
  • Unlimited time
  • Setting up a cloud solution environment
  • Planning and implementing a cloud solution
  • Ensuring successful operation of a cloud solution
  • Configuring access and security
Question 1 of 20

Your company uses on-premises Microsoft Active Directory for authentication. The IT team wants every new employee account and its group memberships to appear in Cloud Identity automatically, without administrators signing in to the Google Cloud console each day. Passwords must still be verified against Active Directory. Which approach best meets these requirements while minimizing manual effort?

  • Export Active Directory accounts to a CSV file each week and use the Admin console's bulk upload feature to import them into Cloud Identity.

  • Deploy Google Cloud Directory Sync and schedule it to synchronize users and groups from Active Directory to Cloud Identity.

  • Enable workforce identity federation so Google Cloud automatically reads user and group data directly from Active Directory at sign-in.

  • Create a Cloud Function triggered by Pub/Sub onboarding events that calls the Cloud Identity API to add users and groups.

Question 2 of 20

Your security team must ensure that only the Compute Engine bastion host, which runs under the service account [email protected], can initiate SSH sessions to virtual machines that are part of the front-end tier in your custom VPC. All front-end VMs already have the network tag web. Which Cloud Next Generation Firewall rule definition satisfies the requirement while following least-privilege best practices?

  • Ingress rule - action allow; source: 0.0.0.0/0; targets: service account [email protected]; protocol/port: tcp:22

  • Egress rule - action allow; destination: service account [email protected]; targets: network tag web; protocol/port: tcp:22

  • Ingress rule - action allow; source: service account [email protected]; targets: network tag web; protocol/port: tcp:22

  • Ingress rule - action deny; source: bastion host external IPv4 address; targets: network tag web; protocol/port: tcp:22
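
For context, Cloud NGFW rules of this shape are created with gcloud compute firewall-rules create. A minimal sketch (the VPC name prod-vpc and the full service-account address are assumptions, since the address above is redacted):

    gcloud compute firewall-rules create allow-bastion-ssh \
        --network=prod-vpc \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:22 \
        --source-service-accounts=bastion@my-project.iam.gserviceaccount.com \
        --target-tags=web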

Question 3 of 20

Your security team asks you to generate a report that shows how the IAM policy on a particular Cloud Storage bucket has evolved during the last two weeks. No manual snapshots were taken, so you need to query Google Cloud for point-in-time metadata about the bucket's configuration on specific past dates. Which Google Cloud service should you use to retrieve this historical asset information?

  • Cloud Audit Logs

  • Cloud Asset Inventory

  • Cloud Logging

  • Cloud Monitoring
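
As background, point-in-time asset metadata can be queried from the command line. A sketch assuming a hypothetical project ID, bucket name, and date window:

    gcloud asset get-history \
        --project=my-project \
        --asset-names="//storage.googleapis.com/projects/_/buckets/my-bucket" \
        --content-type=iam-policy \
        --start-time=2024-05-01T00:00:00Z \
        --end-time=2024-05-14T00:00:00Z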

Question 4 of 20

You are diagnosing an issue in a GKE cluster where a new sidecar container should have been added to every Pod. To verify which namespaces actually contain the updated Pods, you want to display a list of all Pods running across every namespace from your workstation, where you are already authenticated to the cluster. Which single command accomplishes this goal most efficiently?

  • kubectl get pods --all

  • kubectl get all -n kube-system

  • kubectl get pods --namespace="*"

  • kubectl get pods -A

Question 5 of 20

Your company runs a containerized e-commerce API on GKE in us-central1. To reduce latency during flash-sale traffic, you need an external cache for 500-byte session objects with 15-minute TTLs. The cache must support Redis commands, provide automatic failover across zones, and require minimal operational effort. Which Google Cloud service best meets these needs?

  • Provision Memorystore for Redis in Standard Tier with automatic failover across two zones.

  • Run a Redis StatefulSet on GKE and attach a regional persistent disk for high availability.

  • Deploy Cloud Bigtable and enable its in-memory cache feature for session data.

  • Create a Cloud SQL for PostgreSQL instance using a memory-optimized machine type to store session rows.
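
For reference, a managed Redis cache with cross-zone failover is provisioned in a single command. A sketch with a hypothetical instance name and region:

    gcloud redis instances create session-cache \
        --region=us-central1 \
        --size=1 \
        --tier=standard    # Standard Tier adds a replica in another zone with automatic failover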

Question 6 of 20

Your company keeps quarterly financial reports in a Cloud Storage bucket named gs://corp-reports. Compliance now requires you to delete every object that was uploaded before 1 July 2022 while leaving newer files untouched. You decide to add a lifecycle rule that uses the createdBefore condition. Which value should you supply for createdBefore so the rule is accepted and functions as intended?

  • 1656633600 (Unix epoch seconds)

  • 07/01/2022

  • 2022-07-01T00:00:00Z

  • 2022-07-01
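
As background, lifecycle rules are supplied as JSON and attached to the bucket; the createdBefore condition takes a date in YYYY-MM-DD form. A minimal sketch for the scenario's bucket:

    cat > lifecycle.json <<'EOF'
    {
      "rule": [
        {
          "action": { "type": "Delete" },
          "condition": { "createdBefore": "2022-07-01" }
        }
      ]
    }
    EOF
    gsutil lifecycle set lifecycle.json gs://corp-reports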

Question 7 of 20

Your compliance team mandates that any object stored in the logs-prod Cloud Storage bucket be moved from the Standard storage class to Coldline exactly 180 days after it is created. Application code must not be changed, and the solution should require minimal ongoing operations. Which configuration best meets these requirements?

  • Run a periodic gsutil -m mv job from a Compute Engine instance to move objects older than 180 days to Coldline storage.

  • Deploy a Cloud Function triggered by Cloud Scheduler every day that copies objects older than 180 days to a new Coldline bucket and deletes the originals.

  • Add a bucket lifecycle rule whose action is SetStorageClass: Coldline with a condition age: 180.

  • Enable Object Versioning and set a bucket retention policy of 180 days so older versions are automatically stored in Coldline.
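
For context, a storage-class transition rule follows the same JSON shape as the delete rule sketched above. A minimal sketch for the scenario's bucket:

    cat > lifecycle.json <<'EOF'
    {
      "rule": [
        {
          "action": { "type": "SetStorageClass", "storageClass": "COLDLINE" },
          "condition": { "age": 180 }
        }
      ]
    }
    EOF
    gsutil lifecycle set lifecycle.json gs://logs-prod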

Question 8 of 20

Your team runs a stateless image-processing service in a managed instance group (MIG) that pulls jobs from a Cloud Pub/Sub subscription. CPU utilization is typically low, but the subscription backlog can grow quickly. You must configure the MIG (minimum 2, maximum 10 VMs) to add or remove instances so that the number of undelivered Pub/Sub messages stays close to 500 per VM. Which autoscaling configuration should you implement?

  • Configure autoscaling on average CPU utilization with a 60% target, assuming higher backlog will increase CPU and trigger scaling.

  • Attach an internal HTTP(S) load balancer in front of the MIG and set the autoscaler to maintain 80% load-balancing serving capacity.

  • Create an autoscaling policy that uses the Cloud Monitoring metric "pubsub.googleapis.com/subscription/num_undelivered_messages" with singleInstanceAssignment set to 500.

  • Replace Pub/Sub with an App Engine task queue and use a scheduled autoscaler to add VMs only during business hours.
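
As background, queue-based scaling on a per-group metric is configured through the autoscaler's custom-metric flags. A sketch assuming a MIG named worker-mig in us-central1-a and a subscription named jobs-sub:

    gcloud compute instance-groups managed set-autoscaling worker-mig \
        --zone=us-central1-a \
        --min-num-replicas=2 \
        --max-num-replicas=10 \
        --update-stackdriver-metric=pubsub.googleapis.com/subscription/num_undelivered_messages \
        --stackdriver-metric-filter='resource.type = "pubsub_subscription" AND resource.labels.subscription_id = "jobs-sub"' \
        --stackdriver-metric-single-instance-assignment=500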

Question 9 of 20

Your company's microservices run on GKE and automatically export tracing data to Cloud Trace. Several times a day the frontend latency jumps from the normal 300 ms to more than 8 s. In an example slow trace you notice that almost the entire delay is in the child span named "cloudspanner.googleapis.com/BeginTransaction". You want to confirm quickly, in the Cloud Trace UI, whether this Spanner RPC is the dominant source of latency across all recent slow requests, without exporting data to another tool. What should you do?

  • Define a log-based metric on the Cloud Spanner query logs and build a dashboard that charts the metric over time.

  • Create a new uptime check targeting the Spanner instance and monitor its response time in Cloud Monitoring.

  • Filter the trace list by the span name "cloudspanner.googleapis.com/BeginTransaction" and sort the matching traces by latency to inspect their distribution.

  • Export recent trace data to BigQuery and run a SQL query that computes the average duration of all Spanner spans.

Question 10 of 20

Your team regularly spins up new Compute Engine VMs for short-lived batch jobs. To standardize their configuration, you need a reusable custom image created directly from an existing snapshot named prod-app-snap. The image must belong to the image family appserver and store its data in the europe-west1 image storage location. Which single gcloud command meets these requirements?

  • gcloud compute images create appserver-v1 --source-disk prod-app-snap --source-disk-zone europe-west1 --family appserver

  • gcloud compute images create --name=appserver-v1 --snapshot prod-app-snap --region=europe-west1 --family appserver

  • gcloud compute images create appserver-v1 --source-snapshot prod-app-snap --family appserver --storage-location europe-west1

  • gcloud compute snapshots create appserver-v1 --source-snapshot prod-app-snap --family appserver --storage-location europe-west1

Question 11 of 20

Your security team wants to let the [email protected] Google Group create and manage Compute Engine VM instances in the "blue" project, but the group must not be able to delete the project, change billing, or administer other Google Cloud services. Which IAM configuration best satisfies the requirement while following the principle of least privilege?

  • Grant the group the roles/compute.admin role on the organization node that contains the project.

  • Grant the group the roles/owner role on the blue project.

  • Grant the group the roles/compute.instanceAdmin.v1 role on the blue project.

  • Grant the group the roles/editor role on the blue project.
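
For reference, a project-level grant of a predefined role takes one command. A sketch (the group address is hypothetical, since the original is redacted above):

    gcloud projects add-iam-policy-binding blue \
        --member=group:vm-operators@example.com \
        --role=roles/compute.instanceAdmin.v1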

Question 12 of 20

Your team operates a Java web service in a GKE cluster. Users report brief latency spikes during traffic peaks, and you suspect a CPU-intensive code path is responsible. Monitoring and Logging are already enabled on the cluster. Which action will let you begin collecting production CPU profiles in Cloud Profiler with the least disruption to the running Pods?

  • Add the Cloud Profiler Java agent to the application's startup command and grant the Pod's service account the Cloud Profiler Agent IAM role.

  • Convert the cluster to Autopilot mode and set an --enable-cloud-profiler flag on the control plane so profiles are gathered for all workloads.

  • Deploy a DaemonSet that periodically SSHes into each node, captures stack traces, and pushes them to Cloud Storage for later analysis.

  • Enable the Cloud Profiler API in the project and rely on the Ops Agent already running on each node to upload CPU samples automatically.
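
As background, the Cloud Profiler Java agent attaches at JVM startup via -agentpath. A sketch assuming the agent .so is baked into the container image at /opt/cprof and the service is called web-api (both assumptions):

    java \
      -agentpath:/opt/cprof/profiler_java_agent.so=-cprof_service=web-api,-cprof_service_version=1.0.0 \
      -jar app.jar

    # Grant the Pod's (hypothetical) service account permission to upload profiles:
    gcloud projects add-iam-policy-binding my-project \
        --member=serviceAccount:web-api-sa@my-project.iam.gserviceaccount.com \
        --role=roles/cloudprofiler.agent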

Question 13 of 20

Your data science team runs nightly Monte Carlo simulations that take about two hours and write checkpoints to Cloud Storage every minute. The workload is stateless and can resume from the last checkpoint if a VM disappears. The team currently uses regular N2-highcpu-16 VMs in a zonal managed instance group and wants to reduce compute costs as much as possible without adding operational complexity. What should you do?

  • Build a custom machine type with 8 vCPUs and 8 GB RAM to reduce per-VM cost and continue running the group with standard VMs.

  • Create a new instance template that uses Spot VMs and replace the managed instance group's template with it, accepting that the instances may be preempted during the run.

  • Set the instances' availability policy to live migrate and automatic restart so they resume if host maintenance occurs, reducing wasted work.

  • Keep the current instance template but purchase one-year committed use discounts (CUDs) for the N2-highcpu-16 shape to lower the bill.
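
For context, switching a MIG to Spot capacity is a template swap. A sketch with hypothetical template, group, and zone names:

    gcloud compute instance-templates create sim-spot \
        --machine-type=n2-highcpu-16 \
        --provisioning-model=SPOT \
        --instance-termination-action=DELETE

    gcloud compute instance-groups managed set-instance-template sim-mig \
        --template=sim-spot \
        --zone=us-central1-a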

Question 14 of 20

Your development team is migrating a stateless Node.js API to Google Kubernetes Engine. They want to define a resource that will: (1) maintain a desired replica count, (2) support rolling updates with automated rollback on failure, and (3) allow declarative version history management. Which Kubernetes object best meets these requirements?

  • Deployment

  • Pod

  • Service

  • StatefulSet
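
For reference, a minimal Deployment manifest (names and image are placeholders) captures the replica count and rollout behavior the question describes; version history is then available via kubectl rollout undo:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: node-api
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: node-api
      template:
        metadata:
          labels:
            app: node-api
        spec:
          containers:
          - name: api
            image: gcr.io/my-project/node-api:1.0.0
    EOF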

Question 15 of 20

You manage hundreds of Compute Engine VMs that were created from various custom images and run in several GCP projects. Security policy now requires that all Linux VMs receive critical OS security patches every Wednesday at 02:00 in the VM's local time zone and that patch compliance reports are kept for auditors. You want an automated solution that minimizes operational overhead and does not require rebuilding the VMs. What should you do?

  • Attach a custom startup script to each instance template that runs apt-get update and apt-get upgrade at system boot.

  • Replace all custom images with the latest Container-Optimized OS image and rely on automatic image updates.

  • Install a third-party configuration-management agent (e.g., Chef or Puppet) on every VM and schedule a weekly patch job from that tool's control plane.

  • Enable the OS Config API in each project, verify that the OS Config agent is running on every VM, and create a weekly patch deployment in VM Manager for Wednesdays at 02:00.
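
As background, VM Manager patch deployments are defined as a resource and created from a file. A rough sketch, under the assumption that the fields below match the OS Config v1 PatchDeployment resource (time zone and filter values are placeholders):

    gcloud services enable osconfig.googleapis.com

    # patch-deployment.json (sketch):
    # {
    #   "instanceFilter": { "all": true },
    #   "recurringSchedule": {
    #     "timeZone": { "id": "Etc/UTC" },
    #     "frequency": "WEEKLY",
    #     "weekly": { "dayOfWeek": "WEDNESDAY" },
    #     "timeOfDay": { "hours": 2 }
    #   }
    # }
    gcloud compute os-config patch-deployments create weekly-critical-patches \
        --file=patch-deployment.json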

Question 16 of 20

A healthcare company stores sensitive invoices in a Cloud Storage bucket that has both Uniform bucket-level access and Public access prevention enforced. An external auditor needs read-only access to a single CSV object for the next 7 days. The company does not want to create or manage an IAM identity for the auditor, and the bucket's security settings must remain unchanged. Which approach should you take?

  • Add the allUsers principal to the bucket IAM policy with the Storage Object Viewer role, then remove the binding after 7 days.

  • Change the bucket's Public access prevention setting to "inherited" and rely on the obscurity of the object's name for security.

  • Generate a Cloud Storage V4 signed URL for the CSV object that expires in 7 days using a service account that has storage.objects.get permission, and send the URL to the auditor.

  • Temporarily disable Uniform bucket-level access, add an object-level READ ACL for the auditor's email address, and re-enable Uniform bucket-level access after 7 days.
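
For reference, a V4 signed URL can be minted with a service account's key file; 7 days is the V4 maximum lifetime. A sketch with hypothetical key-file, bucket, and object names:

    gsutil signurl -d 7d auditor-key.json gs://invoices-bucket/invoice-q1.csv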

Question 17 of 20

You created a custom-mode VPC network named prod-net for a new application. Two regional subnets exist: 10.10.0.0/24 in us-central1 and 10.20.0.0/24 in us-east1. Instances in each subnet have outbound internet connectivity but cannot ping each other over their internal IPs. You must enable all internal traffic between the subnets while keeping any incoming traffic from the internet blocked. Which single firewall rule should you create?

  • Create an egress firewall rule in prod-net that allows all protocols to 0.0.0.0/0 with priority 1000.

  • Create an ingress firewall rule in prod-net that allows all protocols from 0.0.0.0/0 with priority 1000.

  • Create an egress firewall rule in prod-net that allows all protocols to 10.0.0.0/8 with priority 65534.

  • Create an ingress firewall rule in prod-net that allows all protocols from 10.0.0.0/8 with priority 65534.
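
As background, such a rule is a single command. A sketch mirroring the scenario's network name:

    gcloud compute firewall-rules create prod-net-allow-internal \
        --network=prod-net \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=all \
        --source-ranges=10.0.0.0/8 \
        --priority=65534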

Question 18 of 20

You run a nightly genomics pipeline that needs about 16 vCPUs for three hours to finish before the next business day. The workflow engine tolerates worker failures and can restart individual tasks on any healthy VM. The job currently runs on eight on-demand e2-standard-8 instances and uses no autoscaling. Finance has asked you to cut the compute bill for this workload by at least 70% without lengthening the runtime. What should you do?

  • Containerize the pipeline and deploy it to Cloud Run with a maximum of eight instances.

  • Replace the existing VMs with eight e2-standard-8 Spot VMs and keep autoscaling disabled.

  • Migrate the workload to eight on-demand c2-standard-8 instances to finish faster and receive committed-use discounts.

  • Create a managed instance group that uses Spot VMs with a custom machine type of 4 vCPUs and 8 GB RAM, and configure the autoscaler to maintain 16 vCPUs at all times.

Question 19 of 20

Your team is configuring a Compute Engine VM that will only read objects from a single Cloud Storage bucket. The VM runs under a new user-managed service account. To follow the principle of least privilege, which IAM grant should you create?

  • Grant the service account the Editor role on the project.

  • Grant the service account the Storage Object Viewer role on the bucket.

  • Grant the service account the Storage Admin role on the project.

  • Grant the project's default Compute Engine service account the Owner role on the project.
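
For reference, bucket-scoped grants target the bucket resource rather than the project. A sketch with hypothetical bucket and service-account names:

    gcloud storage buckets add-iam-policy-binding gs://reports-bucket \
        --member=serviceAccount:vm-reader@my-project.iam.gserviceaccount.com \
        --role=roles/storage.objectViewer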

Question 20 of 20

Your security team must track which users read or modify objects in Cloud Storage for compliance. Admin Activity audit logs are already collected, but Data Access audit logs are still missing. What is the most appropriate way to start collecting both DATA_READ and DATA_WRITE logs for the Cloud Storage service at the project level while minimizing operational overhead?

  • Create a log sink to route existing _Default bucket entries to BigQuery for long-term storage.

  • Enable the Cloud Audit Logs API and grant all users the Cloud Audit Logs Viewer role.

  • Update the project IAM policy to include an auditConfig for service "storage.googleapis.com" with log types DATA_READ and DATA_WRITE.

  • Enable uniform bucket-level access on the bucket and turn on Object Viewer logging in the bucket's permissions tab.
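
As background, auditConfigs live in the project IAM policy and are edited via a get/set round-trip. A sketch with a hypothetical project ID:

    gcloud projects get-iam-policy my-project --format=json > policy.json
    # Add this stanza to policy.json, then write the policy back:
    # "auditConfigs": [
    #   {
    #     "service": "storage.googleapis.com",
    #     "auditLogConfigs": [
    #       { "logType": "DATA_READ" },
    #       { "logType": "DATA_WRITE" }
    #     ]
    #   }
    # ]
    gcloud projects set-iam-policy my-project policy.json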