GCP Associate Cloud Engineer Practice Test
Use the form below to configure your GCP Associate Cloud Engineer Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

GCP Associate Cloud Engineer Information
GCP Associate Cloud Engineer Exam
The Google Cloud Certified Associate Cloud Engineer (ACE) exam validates your skills in deploying, monitoring, and maintaining projects on Google Cloud Platform. The certification is designed for individuals who can use both the Google Cloud Console and the command-line interface to manage enterprise solutions. The exam assesses your ability to set up a cloud solution environment, plan and configure a cloud solution, deploy and implement it, ensure its successful operation, and configure access and security. It is a solid starting point for those new to the cloud and can act as a stepping stone to the professional-level certifications. Google recommends at least six months of hands-on experience with Google Cloud products and solutions before attempting it. The exam itself is a two-hour, multiple-choice and multiple-select test that costs $125.
The ACE exam covers a broad range of Google Cloud services and concepts. Key areas of focus include understanding and managing core services like Compute Engine, Google Kubernetes Engine (GKE), App Engine, and Cloud Storage. You should be proficient in launching virtual machine instances, configuring autoscaling, deploying applications, and knowing the different storage classes and their use cases. Additionally, a strong grasp of Identity and Access Management (IAM) is critical, including managing users, groups, roles, and service accounts according to best practices. The exam also delves into networking aspects like creating VPCs and subnets, and operational tasks such as monitoring with Cloud Monitoring, logging with Cloud Logging, and managing billing accounts. Familiarity with command-line tools like gcloud, bq, and gsutil is also essential.
Practice Exams for Preparation
A vital component of a successful preparation strategy is taking practice exams. These simulations are the best way to get a feel for the tone, style, and potential trickiness of the actual exam questions. By taking practice exams, you can quickly identify your strengths and pinpoint the specific exam domains that require further study. Many who have passed the exam attest that a significant portion of the questions on the actual test were very similar to those found in quality practice exams. These practice tests often provide detailed explanations for each answer, offering a deeper learning opportunity by explaining why a particular answer is correct and the others are not. This helps in not just memorizing answers, but truly understanding the underlying concepts. Fortunately, Google provides a set of sample questions to help you get familiar with the exam format, and numerous other platforms offer extensive practice tests. Consistent practice with these resources can significantly boost your confidence and increase your chances of passing the exam.

Free GCP Associate Cloud Engineer Practice Test
- 20 Questions
- Unlimited time
- Setting up a cloud solution environment
- Planning and implementing a cloud solution
- Ensuring successful operation of a cloud solution
- Configuring access and security
Your company uses on-premises Microsoft Active Directory for authentication. The IT team wants every new employee account and its group memberships to appear in Cloud Identity automatically, without administrators signing in to the Google Cloud console each day. Passwords must still be verified against Active Directory. Which approach best meets these requirements while minimizing manual effort?
Export Active Directory accounts to a CSV file each week and use the Admin console's bulk upload feature to import them into Cloud Identity.
Deploy Google Cloud Directory Sync and schedule it to synchronize users and groups from Active Directory to Cloud Identity.
Enable workforce identity federation so Google Cloud automatically reads user and group data directly from Active Directory at sign-in.
Create a Cloud Function triggered by Pub/Sub onboarding events that calls the Cloud Identity API to add users and groups.
Answer Description
Google Cloud Directory Sync (GCDS) is designed to automate provisioning for Cloud Identity or Google Workspace. It regularly reads user and group objects from an LDAP source such as Active Directory and creates, updates, or deletes the corresponding accounts in Cloud Identity. Because GCDS copies only metadata, not passwords, authentication continues to occur against Active Directory when SAML single sign-on or Google Credential Provider for Windows is configured. Identity federation alone does not provision users, bulk CSV uploads require ongoing manual work, and a custom Cloud Functions solution would duplicate the fully supported GCDS capabilities and still require maintenance.
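For teams that want to see what the automation looks like in practice, GCDS includes a command-line sync utility that can be scheduled from the server it runs on. The sketch below is illustrative only: the install path, config file, and flags are assumptions and should be verified against the documentation for your installed GCDS version.

# Hypothetical cron entry on the GCDS host: run the sync nightly at 02:00
# (-c points at the Configuration Manager XML file; -a applies changes instead of simulating them)
0 2 * * * /opt/gcds/sync-cmd -a -c /opt/gcds/ad-sync-config.xml >> /var/log/gcds-sync.log 2>&1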
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Google Cloud Directory Sync (GCDS) and how does it work?
What is the difference between Cloud Identity and Active Directory?
Why is workforce identity federation insufficient in this scenario?
What is Google Cloud Directory Sync (GCDS)?
Why does identity federation alone not meet provisioning requirements?
What is the role of SAML in authentication when using Active Directory?
Your security team must ensure that only the Compute Engine bastion host, which runs under the service account [email protected], can initiate SSH sessions to virtual machines that are part of the front-end tier in your custom VPC. All front-end VMs already have the network tag web. Which Cloud Next Generation Firewall rule definition satisfies the requirement while following least-privilege best practices?
Ingress rule - action allow; source: 0.0.0.0/0; targets: service account [email protected]; protocol/port: tcp:22
Egress rule - action allow; destination: service account [email protected]; targets: network tag web; protocol/port: tcp:22
Ingress rule - action allow; source: service account [email protected]; targets: network tag web; protocol/port: tcp:22
Ingress rule - action deny; source: bastion host external IPv4 address; targets: network tag web; protocol/port: tcp:22
Answer Description
To restrict SSH so that only the bastion host can reach the front-end VMs, you need an ingress rule that allows traffic originating from the bastion host's service account and destined for the web-tagged instances. A service account can be used as the source in an ingress rule, while network tags identify the target instances. The action must be allow, and the protocol/port must be limited to tcp:22 for SSH. An egress rule would control traffic leaving the VMs, not traffic entering them. Using 0.0.0.0/0 or an external IP as the source is overly permissive, and a deny rule would block, not permit, the required access.
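As a concrete illustration of the correct rule, a gcloud command along these lines creates it; the rule and network names are placeholders, and BASTION_SA_EMAIL stands in for the redacted bastion service-account address from the question.

gcloud compute firewall-rules create allow-ssh-from-bastion \
  --network=prod-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22 \
  --source-service-accounts=BASTION_SA_EMAIL \
  --target-tags=web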
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a service account in GCP?
What is the purpose of network tags in GCP?
What is an ingress firewall rule in GCP?
What is a bastion host in GCP?
What is the role of a service account in GCP firewall rules?
What are network tags in GCP and how are they used in firewall rules?
Your security team asks you to generate a report that shows how the IAM policy on a particular Cloud Storage bucket has evolved during the last two weeks. No manual snapshots were taken, so you need to query Google Cloud for point-in-time metadata about the bucket's configuration on specific past dates. Which Google Cloud service should you use to retrieve this historical asset information?
Cloud Audit Logs
Cloud Asset Inventory
Cloud Logging
Cloud Monitoring
Answer Description
Cloud Asset Inventory keeps a time-series database of Google Cloud resources and their IAM policies. It lets you query or export the state of an asset at a past timestamp, making it ideal for reconstructing historical configurations such as earlier IAM policies on a Cloud Storage bucket. Cloud Audit Logs and Cloud Logging store event and log data, not full asset state. Cloud Monitoring tracks metrics, not resource configurations.
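For reference, point-in-time IAM policy history can be pulled with the Asset Inventory get-history call sketched below. The project, bucket, and time window are placeholders, and the asset-name format shown is an assumption; confirm it against the Cloud Asset Inventory resource-name reference.

gcloud asset get-history \
  --project=my-project \
  --asset-names="//storage.googleapis.com/my-bucket" \
  --content-type=iam-policy \
  --start-time=2024-05-01T00:00:00Z \
  --end-time=2024-05-14T00:00:00Z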
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Cloud Asset Inventory in GCP?
How does Cloud Asset Inventory differ from Cloud Audit Logs?
Can Cloud Asset Inventory be used to export resource configurations?
What is Cloud Asset Inventory used for?
Can Cloud Asset Inventory help with compliance auditing?
You are diagnosing an issue in a GKE cluster where a new sidecar container should have been added to every Pod. To verify which namespaces actually contain the updated Pods, you want to display a list of all Pods running across every namespace from your workstation, where you are already authenticated to the cluster. Which single command accomplishes this goal most efficiently?
kubectl get pods --all
kubectl get all -n kube-system
kubectl get pods --namespace="*"
kubectl get pods -A
Answer Description
Using kubectl, the flag --all-namespaces (short form -A) tells the client to query every namespace rather than only the default namespace. The command "kubectl get pods -A" therefore prints a consolidated table of all Pods in the cluster regardless of namespace, letting you quickly see where the sidecar is present. The other commands are incorrect: --namespace="*" is not valid because wildcards are not accepted as namespace names; "kubectl get all -n kube-system" only shows resources in the kube-system namespace; and "kubectl get pods --all" is not a supported flag, so it would return an error.
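As an optional follow-up once the Pods are listed, container names can be printed per Pod so the sidecar's presence is visible at a glance; this goes beyond the single command the question asks for.

# List every Pod in every namespace
kubectl get pods -A
# Also show each Pod's container names to confirm where the sidecar was injected
kubectl get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CONTAINERS:.spec.containers[*].name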
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of namespaces in Kubernetes?
What is a sidecar container in Kubernetes?
Why use the '-A' flag in kubectl commands?
What is a namespace in Kubernetes?
What is a sidecar container?
How does the `kubectl get pods -A` command work?
Your company runs a containerized e-commerce API on GKE in us-central1. To reduce latency during flash-sale traffic, you need an external cache for 500-byte session objects with 15-minute TTLs. The cache must support Redis commands, provide automatic failover across zones, and require minimal operational effort. Which Google Cloud service best meets these needs?
Provision Memorystore for Redis in Standard Tier with automatic failover across two zones.
Run a Redis StatefulSet on GKE and attach a regional persistent disk for high availability.
Deploy Cloud Bigtable and enable its in-memory cache feature for session data.
Create a Cloud SQL for PostgreSQL instance using a memory-optimized machine type to store session rows.
Answer Description
Memorystore for Redis is a fully managed, in-memory data store that speaks the native Redis protocol. When you create an instance in Standard Tier, the service automatically provisions a replica in a different zone within the same region, replicates data asynchronously to that replica, and performs automatic failover if the primary node becomes unavailable. Because Google operates the service, there is no need for customers to handle cluster installation, patching, or failover scripting, satisfying the minimal-operations requirement.
Cloud SQL and Cloud Bigtable are durable databases, not low-latency in-memory caches, and they do not support Redis commands. Running a self-managed Redis StatefulSet on GKE could meet the protocol requirement but leaves failover, monitoring, and maintenance tasks to your team, conflicting with the low-operations mandate.
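A minimal provisioning sketch with gcloud is shown below. The instance name, region, and size are placeholders, and the exact value the --tier flag accepts for the Standard (HA) tier can vary by SDK version, so confirm it with gcloud redis instances create --help.

gcloud redis instances create session-cache \
  --region=us-central1 \
  --size=5 \
  --redis-version=redis_6_x \
  --tier=standard   # Standard Tier provisions a cross-zone replica with automatic failover (confirm flag value via --help)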
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Memorystore for Redis?
How does automatic failover work in Memorystore for Redis?
Why isn’t Cloud SQL or Cloud Bigtable suitable for this use case?
What does TTL mean in caching?
What is the difference between Memorystore Standard Tier and Basic Tier?
Your company keeps quarterly financial reports in a Cloud Storage bucket named gs://corp-reports. Compliance now requires you to delete every object that was uploaded before 1 July 2022 while leaving newer files untouched. You decide to add a lifecycle rule that uses the createdBefore condition. Which value should you supply for createdBefore so the rule is accepted and functions as intended?
1656633600 (Unix epoch seconds)
07/01/2022
2022-07-01T00:00:00Z
2022-07-01
Answer Description
The createdBefore condition expects a calendar date formatted as YYYY-MM-DD and interprets that date at 00:00:00 UTC. Any object whose creation time is strictly earlier than that instant is selected by the rule. Therefore, to target objects uploaded before 1 July 2022 you must specify 2022-07-01. Supplying a timestamp, a locale-specific date such as 07/01/2022, or an ISO date-time string will cause the configuration to be rejected because they do not match the required syntax.
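To make the required format concrete, a lifecycle configuration using the correct value could be written and applied as follows; the rule deletes matching objects, and the bucket name comes from the question.

cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"createdBefore": "2022-07-01"}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://corp-reports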
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the createdBefore condition in Cloud Storage lifecycle rules?
Why does the createdBefore condition require the YYYY-MM-DD format?
How can I confirm which objects will be affected by a lifecycle rule in Cloud Storage?
How does the createdBefore condition work in Cloud Storage lifecycle rules?
Why can't timestamps or locale-specific dates be used with createdBefore?
What happens to objects created exactly on the createdBefore date?
Your compliance team mandates that any object stored in the logs-prod Cloud Storage bucket be moved from the Standard storage class to Coldline exactly 180 days after it is created. Application code must not be changed, and the solution should require minimal ongoing operations. Which configuration best meets these requirements?
Run a periodic gsutil -m mv job from a Compute Engine instance to move objects older than 180 days to Coldline storage.
Deploy a Cloud Function triggered by Cloud Scheduler every day that copies objects older than 180 days to a new Coldline bucket and deletes the originals.
Add a bucket lifecycle rule whose action is SetStorageClass: Coldline with a condition age: 180.
Enable Object Versioning and set a bucket retention policy of 180 days so older versions are automatically stored in Coldline.
Answer Description
A lifecycle management rule is the simplest, fully managed way to automate storage-class transitions without touching application code. A rule with the action SetStorageClass set to Coldline and the condition age = 180 automatically rewrites each eligible object's metadata when it turns 180 days old, changing its class in place. Object retention policies or versioning do not switch storage classes; they only restrict deletions or keep older versions. Scheduled jobs or manual gsutil commands add operational overhead and are unnecessary when lifecycle rules natively support this transition.
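For illustration, the rule described above could be expressed in the standard lifecycle JSON format and applied to the bucket named in the question as follows.

cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 180}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://logs-prod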
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a bucket lifecycle rule in Google Cloud Storage?
How does the SetStorageClass action work in a lifecycle rule?
Why is a lifecycle rule better than manual scripts or scheduled jobs for storage management?
What is the Coldline storage class in Google Cloud?
Why is using lifecycle rules better than a manual gsutil command for storage class transitions?
Your team runs a stateless image-processing service in a managed instance group (MIG) that pulls jobs from a Cloud Pub/Sub subscription. CPU utilization is typically low, but the subscription backlog can grow quickly. You must configure the MIG (minimum 2, maximum 10 VMs) to add or remove instances so that the number of undelivered Pub/Sub messages stays close to 500 per VM. Which autoscaling configuration should you implement?
Configure autoscaling on average CPU utilization with a 60% target, assuming higher backlog will increase CPU and trigger scaling.
Attach an internal HTTP(S) load balancer in front of the MIG and set the autoscaler to maintain 80% load-balancing serving capacity.
Create an autoscaling policy that uses the Cloud Monitoring metric "pubsub.googleapis.com/subscription/num_undelivered_messages" with utilizationTargetType set to "GAUGE_PER_INSTANCE" and target set to 500.
Replace Pub/Sub with an App Engine task queue and use a scheduled autoscaler to add VMs only during business hours.
Answer Description
The Compute Engine autoscaler can scale a MIG using any Cloud Monitoring metric when you add a custom-metric utilization rule. The Pub/Sub metric "pubsub.googleapis.com/subscription/num_undelivered_messages" is a gauge that represents the current number of unacknowledged messages. By setting utilizationTargetType to GAUGE_PER_INSTANCE, the autoscaler keeps the metric value per VM near the specified target, in this case 500 messages per instance, causing the group to scale out when backlog per VM exceeds 500 and scale in when it drops below that threshold. CPU-based, load-balancer-based, or schedule-based policies would not react to message backlog spikes.
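A hedged sketch of the equivalent gcloud call is below. The MIG, zone, and subscription names are placeholders; gcloud spells the gauge target type as "gauge", and for per-group metrics like this one the --stackdriver-metric-single-instance-assignment flag is a documented alternative to a utilization target.

gcloud compute instance-groups managed set-autoscaling image-workers \
  --zone=us-central1-a \
  --min-num-replicas=2 \
  --max-num-replicas=10 \
  --update-stackdriver-metric=pubsub.googleapis.com/subscription/num_undelivered_messages \
  --stackdriver-metric-filter='resource.type = "pubsub_subscription" AND resource.labels.subscription_id = "image-jobs"' \
  --stackdriver-metric-utilization-target=500 \
  --stackdriver-metric-utilization-target-type=gauge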
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Managed Instance Group (MIG)?
What is the 'GAUGE_PER_INSTANCE' utilizationTargetType?
How does the Pub/Sub metric 'num_undelivered_messages' affect autoscaling?
Your company's microservices run on GKE and automatically export tracing data to Cloud Trace. Several times a day the frontend latency jumps from the normal 300 ms to more than 8 s. In an example slow trace you notice that almost the entire delay is in the child span named "cloudspanner.googleapis.com/BeginTransaction". You want to confirm quickly, in the Cloud Trace UI, whether this Spanner RPC is the dominant source of latency across all recent slow requests, without exporting data to another tool. What should you do?
Define a log-based metric on the Cloud Spanner query logs and build a dashboard that charts the metric over time.
Create a new uptime check targeting the Spanner instance and monitor its response time in Cloud Monitoring.
Filter the trace list by the span name "cloudspanner.googleapis.com/BeginTransaction" and sort the matching traces by latency to inspect their distribution.
Export recent trace data to BigQuery and run a SQL query that computes the average duration of all Spanner spans.
Answer Description
Cloud Trace lets you build an on-the-fly analysis by filtering the trace list with a span name expression such as span:"cloudspanner.googleapis.com/BeginTransaction". When you apply this filter and sort the results by latency, the list shows only the traces that contain that specific Spanner span, ordered from slowest to fastest. By scanning the latency column (or switching to the histogram view) you can immediately see whether traces that include this span are consistently slower than normal traffic. Creating external dashboards, uptime checks, or log-based metrics could add insight but would require additional configuration and would not directly correlate span-level latency across existing traces.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a span in Cloud Trace?
How do filters work in Cloud Trace?
What is the histogram view in Cloud Trace?
How does filtering by span name in Cloud Trace help identify latency issues?
What is the benefit of using the histogram view in Cloud Trace?
Your team regularly spins up new Compute Engine VMs for short-lived batch jobs. To standardize their configuration, you need a reusable custom image created directly from an existing snapshot named prod-app-snap. The image must belong to the image family appserver and store its data in the europe-west1 image storage location. Which single gcloud command meets these requirements?
gcloud compute images create appserver-v1 --source-disk prod-app-snap --source-disk-zone europe-west1 --family appserver
gcloud compute images create --name=appserver-v1 --snapshot prod-app-snap --region=europe-west1 --family appserver
gcloud compute images create appserver-v1 --source-snapshot prod-app-snap --family appserver --storage-location europe-west1
gcloud compute snapshots create appserver-v1 --source-snapshot prod-app-snap --family appserver --storage-location europe-west1
Answer Description
The gcloud compute images create command is used to build custom images. When the source is a snapshot you must supply the --source-snapshot flag, not --source-disk. The --family flag assigns the image to an image family, and --storage-location defines the geographic location where the image file is kept. Therefore the command that uses gcloud compute images create with --source-snapshot prod-app-snap, --family appserver, and --storage-location europe-west1 is correct. The other options either use the wrong gcloud resource (snapshots instead of images), specify an invalid flag such as --region or --source-disk, or omit the required --storage-location flag.
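For context, the correct command and a follow-up that consumes the image family are sketched below; the instance name, zone, and project are placeholders.

gcloud compute images create appserver-v1 \
  --source-snapshot=prod-app-snap \
  --family=appserver \
  --storage-location=europe-west1
# New VMs can then reference the family and always get its newest image (placeholder names)
gcloud compute instances create batch-worker-1 \
  --zone=europe-west1-b \
  --image-family=appserver \
  --image-project=my-project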
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a snapshot in GCP Compute Engine?
What is the purpose of an image family in GCP?
What is the benefit of specifying a storage location in the gcloud command?
What is an image family in GCP?
What is the difference between a snapshot and an image in GCP?
Why is the --storage-location flag important when creating images?
Your security team wants to let the [email protected] Google Group create and manage Compute Engine VM instances in the "blue" project, but the group must not be able to delete the project, change billing, or administer other Google Cloud services. Which IAM configuration best satisfies the requirement while following the principle of least privilege?
Grant the group the roles/compute.admin role on the organization node that contains the project.
Grant the group the roles/owner role on the blue project.
Grant the group the roles/compute.instanceAdmin.v1 role on the blue project.
Grant the group the roles/editor role on the blue project.
Answer Description
Granting the predefined role roles/compute.instanceAdmin.v1 on the project lets members of the [email protected] group create, start, stop, and modify Compute Engine instances and attached disks. The role does not include broad permissions to administer other services, change billing, or delete the project, making it the least-privileged fit. Granting Editor or Owner at any level would give far more permissions than required, while assigning the Compute Admin role at the organization level would exceed the scope and violate least privilege.
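The binding itself is a one-liner; the project ID and group address below are placeholders standing in for the redacted values in the question.

gcloud projects add-iam-policy-binding blue-project-id \
  --member="group:GROUP_EMAIL" \
  --role="roles/compute.instanceAdmin.v1"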
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the roles/compute.instanceAdmin.v1 role used for in Google Cloud?
What does the principle of least privilege mean in IAM configuration?
Why is granting roles/editor or roles/owner not appropriate in this scenario?
What is the principle of least privilege in Google Cloud IAM?
What permissions does the roles/compute.instanceAdmin.v1 role include?
Why is roles/editor or roles/owner considered excessive for this requirement?
Your team operates a Java web service in a GKE cluster. Users report brief latency spikes during traffic peaks, and you suspect a CPU-intensive code path is responsible. Monitoring and Logging are already enabled on the cluster. Which action will let you begin collecting production CPU profiles in Cloud Profiler with the least disruption to the running Pods?
Add the Cloud Profiler Java agent to the application's startup command and grant the Pod's service account the Cloud Profiler Agent IAM role.
Convert the cluster to Autopilot mode and set an --enable-cloud-profiler flag on the control plane so profiles are gathered for all workloads.
Deploy a DaemonSet that periodically SSHes into each node, captures stack traces, and pushes them to Cloud Storage for later analysis.
Enable the Cloud Profiler API in the project and rely on the Ops Agent already running on each node to upload CPU samples automatically.
Answer Description
Cloud Profiler does not gather data automatically from GKE workloads. The application process itself must load the Cloud Profiler language agent so that it can sample stack traces and send them to the Profiler backend. In GKE, the Java agent is typically added by loading the Profiler agent library through the JVM's startup options (or equivalent environment variables) in the container image or Pod spec. In addition, the service account used by the Pods must have the roles/cloudprofiler.agent IAM role so the agent can write profile data. Simply enabling the API or relying on the Ops Agent running on the nodes does not activate profiling for in-process code, and there is no cluster-level flag or external trace-collection script that will populate Cloud Profiler.
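As a rough illustration of the two pieces, the container's startup command loads the native Java agent and the profiling role is granted to the service account the Pods use. Service name, version, project, and service-account address are placeholders, and the agent path follows the pattern shown in the Cloud Profiler Java documentation.

# Container startup command: load the Profiler Java agent (path and option names per the Profiler docs; values are placeholders)
java -agentpath:/opt/cprof/profiler_java_agent.so=-cprof_service=web-api,-cprof_service_version=1.0.0 \
  -jar /app/server.jar

# Grant the service account used by the Pods permission to write profile data (placeholders)
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/cloudprofiler.agent"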
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Cloud Profiler and why is it used?
How do I integrate the Cloud Profiler Java agent with my application in GKE?
Why can't the Ops Agent or a DaemonSet handle profiling in GKE?
What is the Cloud Profiler Java agent and how does it work?
Why does the Pod's service account need the Cloud Profiler Agent IAM role?
Why can't the Ops Agent or a DaemonSet collect CPU profiles for GKE workloads?
Your data science team runs nightly Monte-Carlo simulations that take about two hours and write checkpoints to Cloud Storage every minute. The workload is stateless and can resume from the last checkpoint if a VM disappears. The team currently uses regular N2-highcpu-16 VMs in a zonal managed instance group and wants to reduce compute costs as much as possible without adding operational complexity. What should you do?
Build a custom machine type with 8 vCPUs and 8 GB RAM to reduce per-VM cost and continue running the group with standard VMs.
Create a new instance template that uses Spot VMs and replace the managed instance group's template with it, accepting that the instances may be preempted during the run.
Set the instances' availability policy to live migrate and automatic restart so they resume if host maintenance occurs, reducing wasted work.
Keep the current instance template but purchase one-year committed use discounts (CUDs) for the N2-highcpu-16 shape to lower the bill.
Answer Description
Because the simulations are short-lived, fault-tolerant, and restartable from checkpoints, they match the use case that Spot VMs (previously called preemptible VMs) are designed for. Re-creating the instance template to specify Spot VMs and using that template in the existing managed instance group immediately provides the steep Spot pricing discount, often 60 to 91 percent compared with on-demand instances. Spot VMs cannot be live-migrated or automatically restarted, but that is acceptable for this workload, and the MIG will attempt to re-provision instances if capacity is available. The other options either keep full-price VMs, rely on committed use contracts, or change machine characteristics without gaining the larger cost savings that Spot pricing offers.
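A minimal sketch of that change, with placeholder names and zone:

# New template that requests Spot capacity
gcloud compute instance-templates create sim-spot-template \
  --machine-type=n2-highcpu-16 \
  --provisioning-model=SPOT \
  --instance-termination-action=DELETE
# Point the existing MIG at the new template; existing VMs adopt it when they are recreated
gcloud compute instance-groups managed set-instance-template sim-mig \
  --template=sim-spot-template \
  --zone=us-central1-a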
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Spot VMs in Google Cloud?
How does a managed instance group benefit from Spot VMs for stateless workloads?
What are Monte Carlo simulations and why are they suitable for Spot VMs?
What are Spot VMs and how do they differ from regular VMs?
What is a managed instance group (MIG) in Google Cloud?
What are the cost savings of using Spot VMs compared to on-demand VMs?
Your development team is migrating a stateless Node.js API to Google Kubernetes Engine. They want to define a resource that will: (1) maintain a desired replica count, (2) support rolling updates with automated rollback on failure, and (3) allow declarative version history management. Which Kubernetes object best meets these requirements?
Deployment
Pod
Service
StatefulSet
Answer Description
A Deployment controller manages ReplicaSets to keep the desired number of identical Pods running. It offers built-in support for rolling updates, can automatically roll back if the new Pods fail readiness checks, and records revision history to enable easy rollbacks. A single Pod lacks orchestration capabilities, a Service only provides stable networking, and a StatefulSet focuses on ordered, unique Pods with stable identities; none of these directly handles stateless rolling updates and replica management.
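To tie the three requirements to everyday commands, a typical Deployment rollout workflow looks roughly like this; the deployment and image names are placeholders.

kubectl create deployment node-api --image=us-docker.pkg.dev/my-project/api/node-api:v1 --replicas=3
kubectl set image deployment/node-api node-api=us-docker.pkg.dev/my-project/api/node-api:v2   # rolling update
kubectl rollout status deployment/node-api    # watch the update progress
kubectl rollout history deployment/node-api   # view recorded revisions
kubectl rollout undo deployment/node-api      # roll back if the new version misbehaves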
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between a Pod and a Deployment in Kubernetes?
How does a rolling update work in Kubernetes?
What is the purpose of revision history in Kubernetes Deployments?
Why is a Kubernetes Deployment used for rolling updates?
What is the difference between a Deployment and a StatefulSet?
What happens if a rolling update fails in Kubernetes?
You manage hundreds of Compute Engine VMs that were created from various custom images and run in several GCP projects. Security policy now requires that all Linux VMs receive critical OS security patches every Wednesday at 02:00 in the VM's local time zone and that patch compliance reports are kept for auditors. You want an automated solution that minimizes operational overhead and does not require rebuilding the VMs. What should you do?
Attach a custom startup script to each instance template that runs apt-get update and apt-get upgrade at system boot.
Replace all custom images with the latest Container-Optimized OS image and rely on automatic image updates.
Install a third-party configuration-management agent (e.g., Chef or Puppet) on every VM and schedule a weekly patch job from that tool's control plane.
Enable the OS Config API in each project, verify that the OS Config agent is running on every VM, and create a weekly patch deployment in VM Manager for Wednesdays at 02:00.
Answer Description
VM Manager's patch management feature lets you schedule recurring patch deployments without recreating the VMs. Enabling the OS Config API in each project and ensuring the OS Config agent is present on every VM allows you to create a weekly patch deployment resource that automatically executes every Wednesday at 02:00. VM Manager stores execution history and compliance data that can be exported for audit purposes. Installing a third-party tool or rebuilding instances is unnecessary, and startup scripts run only at boot time, so they cannot guarantee a weekly patch schedule.
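The per-project prerequisites can be scripted roughly as below (the project ID is a placeholder); the recurring Wednesday 02:00 deployment itself is then defined on the VM Manager patch page of the console or with the gcloud compute os-config patch-deployments commands.

# Enable the OS Config API for the project
gcloud services enable osconfig.googleapis.com --project=my-project
# Turn on the OS Config agent fleet-wide via project metadata
gcloud compute project-info add-metadata \
  --project=my-project \
  --metadata=enable-osconfig=TRUE,enable-guest-attributes=TRUE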
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the OS Config API and its role in patch management?
Can you explain what VM Manager is and how it helps in automating patch management?
How does the OS Config agent work and why is it necessary on VMs?
What is the OS Config API in GCP?
How does VM Manager help with patch management in GCP?
Why are startup scripts insufficient for weekly patching in GCP?
A healthcare company stores sensitive invoices in a Cloud Storage bucket that has both Uniform bucket-level access and Public access prevention enforced. An external auditor needs read-only access to a single CSV object for the next 7 days. The company does not want to create or manage an IAM identity for the auditor, and the bucket's security settings must remain unchanged. Which approach should you take?
Add the allUsers principal to the bucket IAM policy with the Storage Object Viewer role, then remove the binding after 7 days.
Change the bucket's Public access prevention setting to "inherited" and rely on the obscurity of the object's name for security.
Generate a Cloud Storage V4 signed URL for the CSV object that expires in 7 days using a service account that has storage.objects.get permission, and send the URL to the auditor.
Temporarily disable Uniform bucket-level access, add an object-level READ ACL for the auditor's email address, and re-enable Uniform bucket-level access after 7 days.
Answer Description
Generating a V4 signed URL meets every requirement. A signed URL embeds a cryptographic signature created by a service account that already has storage.objects.get permission. Anyone who possesses the URL can download only the specified object, and the URL automatically expires after the duration you set (up to 7 days with V4). Signed URLs work even when Uniform bucket-level access is enabled because they do not rely on ACLs, and they are allowed under Public access prevention because access is authenticated by the signature. Granting allUsers IAM access would violate Public access prevention, object ACLs cannot be used while Uniform bucket-level access is on, and making the bucket public is disallowed and insecure.
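A hedged example of generating the URL with gsutil is below; the key file, bucket, and object names are placeholders, and the signing service account only needs storage.objects.get.

# Create a V4 signed URL that expires in 7 days (the V4 maximum)
gsutil signurl -d 7d /path/to/sa-key.json gs://invoices-bucket/2024-q1-invoices.csv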
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a V4 signed URL?
How does Uniform bucket-level access affect ACLs?
What is Public access prevention in Google Cloud Storage?
What is a V4 signed URL in Cloud Storage?
How does Public access prevention work in Cloud Storage?
What are the benefits of Uniform bucket-level access in Cloud Storage?
You created a custom-mode VPC network named prod-net for a new application. Two regional subnets exist: 10.10.0.0/24 in us-central1 and 10.20.0.0/24 in us-east1. Instances in each subnet have outbound internet connectivity but cannot ping each other over their internal IPs. You must enable all internal traffic between the subnets while keeping any incoming traffic from the internet blocked. Which single firewall rule should you create?
Create an egress firewall rule in prod-net that allows all protocols to 0.0.0.0/0 with priority 1000.
Create an ingress firewall rule in prod-net that allows all protocols from 0.0.0.0/0 with priority 1000.
Create an egress firewall rule in prod-net that allows all protocols to 10.0.0.0/8 with priority 65534.
Create an ingress firewall rule in prod-net that allows all protocols from 10.0.0.0/8 with priority 65534.
Answer Description
A custom-mode VPC starts with only the two implied rules: allow-egress (priority 65535) and deny-ingress (priority 65535). Because the deny rule applies to all ingress traffic, even packets whose source is another subnet in the same network are blocked. To enable internal communication you need an ingress rule that explicitly allows traffic whose source is the network's private address space (for example 10.0.0.0/8, which includes both subnets). Any priority number lower than 65535 overrides the implied deny rule; 65534 is sufficient. An egress rule is unnecessary because egress is already allowed, and using 0.0.0.0/0 as the source would open the network to the public internet.
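Expressed as a gcloud command, the correct rule looks like the sketch below; only the rule name is a placeholder.

gcloud compute firewall-rules create prod-net-allow-internal \
  --network=prod-net \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=all \
  --source-ranges=10.0.0.0/8 \
  --priority=65534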
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a custom-mode VPC network in GCP?
Why is an ingress rule required for internal communication within subnets?
What does priority mean in a firewall rule, and why is it important?
What is the difference between ingress and egress in VPC firewall rules?
Why is allowing internal traffic between subnets restricted by default in a custom-mode VPC?
What does the 10.0.0.0/8 address range represent in Google Cloud VPCs?
You run a nightly genomics pipeline that needs about 16 vCPUs for three hours to finish before the next business day. The workflow engine tolerates worker failures and can restart individual tasks on any healthy VM. The job currently runs on eight on-demand e2-standard-8 instances and uses no autoscaling. Finance has asked you to cut the compute bill for this workload by at least 70 % without lengthening the runtime. What should you do?
Containerize the pipeline and deploy it to Cloud Run with a maximum of eight instances.
Replace the existing VMs with eight e2-standard-8 Spot VMs and keep autoscaling disabled.
Migrate the workload to eight on-demand c2-standard-8 instances to finish faster and receive committed-use discounts.
Create a managed instance group that uses Spot VMs with a custom machine type of 4 vCPUs and 8 GB RAM, and configure the autoscaler to maintain 16 vCPUs at all times.
Answer Description
Spot VMs are billed at discounts of up to 91% compared with on-demand pricing, but they can be preempted at any time. Using a managed instance group of Spot VMs lets Compute Engine automatically recreate any VM that is preempted, so capacity is quickly restored. By defining a custom machine type with 4 vCPUs and 8 GB RAM and an autoscaler that maintains an aggregate 16 vCPUs, you preserve the required total compute while roughly halving per-VM cost on top of the Spot discount, easily exceeding the 70% savings target. Keeping the original e2-standard-8 size on Spot VMs would save money but risks throughput loss during preemption because capacity is not automatically recovered. Cloud Run's maximum request timeout is 60 minutes, so it cannot handle a three-hour HPC batch job, and switching to on-demand C2 instances would increase costs instead of reducing them.
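One way to sketch that setup is below; names and zone are placeholders, and a fixed group size of four VMs (4 x 4 vCPUs = 16 vCPUs) relies on the MIG to recreate any preempted instance, which matches the intent of pinning the autoscaler to 16 vCPUs.

gcloud compute instance-templates create genomics-spot-template \
  --machine-type=e2-custom-4-8192 \
  --provisioning-model=SPOT \
  --instance-termination-action=DELETE
gcloud compute instance-groups managed create genomics-workers \
  --zone=us-central1-a \
  --template=genomics-spot-template \
  --size=4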
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Spot VMs in GCP?
What is a Managed Instance Group in GCP?
How does autoscaling work in a Managed Instance Group?
What are Spot VMs in GCP and how do they differ from on-demand VMs?
What is a managed instance group in GCP?
How does autoscaling work in a managed instance group in GCP?
Your team is configuring a Compute Engine VM that will only read objects from a single Cloud Storage bucket. The VM runs under a new user-managed service account. To follow the principle of least privilege, which IAM grant should you create?
Grant the service account the Editor role on the project.
Grant the service account the Storage Object Viewer role on the bucket.
Grant the service account the Storage Admin role on the project.
Grant the project's default Compute Engine service account the Owner role on the project.
Answer Description
The Storage Object Viewer role (roles/storage.objectViewer) contains just the permissions needed to list and read objects, and nothing more. Granting it on the specific bucket limits the scope of access to exactly the required resource. The Editor and Storage Admin roles include many additional permissions across the entire project, violating least-privilege guidance. Giving the project's default Compute Engine service account Owner access is both overly broad and applied to the wrong principal.
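A hedged example of the bucket-scoped grant; the bucket and service-account address are placeholders.

gsutil iam ch serviceAccount:reader-sa@my-project.iam.gserviceaccount.com:roles/storage.objectViewer gs://my-data-bucket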
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the principle of least privilege and why is it important for IAM roles?
What is a user-managed service account and how does it differ from default service accounts?
What permissions does the Storage Object Viewer role grant?
What is the principle of least privilege in IAM?
What does the Storage Object Viewer role do in Google Cloud?
Why is it recommended to use user-managed service accounts for Compute Engine instead of default service accounts?
Your security team must track which users read or modify objects in Cloud Storage for compliance. Admin Activity audit logs are already collected, but Data Access audit logs are still missing. What is the most appropriate way to start collecting both DATA_READ and DATA_WRITE logs for the Cloud Storage service at the project level while minimizing operational overhead?
Create a log sink to route existing _Default bucket entries to BigQuery for long-term storage.
Enable the Cloud Audit Logs API and grant all users the Cloud Audit Logs Viewer role.
Update the project IAM policy to include an auditConfig for service "storage.googleapis.com" with log types DATA_READ and DATA_WRITE.
Enable uniform bucket-level access on the bucket and turn on Object Viewer logging in the bucket's permissions tab.
Answer Description
Data Access audit logs are disabled by default. To enable them, you add an auditConfig block to the project's IAM policy (or use the IAM & Admin > Audit Logs page) that specifies the service name and the log types. Setting the auditConfig for the service storage.googleapis.com with logType values DATA_READ and DATA_WRITE instructs Cloud Logging to record who reads or writes Cloud Storage objects. The other options do not activate Data Access logging: merely enabling an API, creating a sink, or changing bucket-level settings will not turn on these audit logs.
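The edit can be made by downloading, modifying, and re-applying the project policy; the project ID is a placeholder and the commented YAML shows only the section being added (existing bindings and the etag must be preserved).

gcloud projects get-iam-policy my-project --format=yaml > policy.yaml
# Merge the following into policy.yaml:
#   auditConfigs:
#   - service: storage.googleapis.com
#     auditLogConfigs:
#     - logType: DATA_READ
#     - logType: DATA_WRITE
gcloud projects set-iam-policy my-project policy.yaml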
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an auditConfig in GCP IAM policy?
What is the difference between Admin Activity audit logs and Data Access audit logs?
Does enabling Cloud Audit Logs API automatically enable Data Access logs?
What are Admin Activity audit logs and Data Access audit logs in GCP?
What is an auditConfig block in GCP IAM policies?
How do you minimize operational overhead when enabling Data Access audit logs in GCP?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.