GCP Associate Cloud Engineer Practice Test
Use the form below to configure your GCP Associate Cloud Engineer Practice Test. The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

GCP Associate Cloud Engineer Information
GCP Associate Cloud Engineer Exam
The Google Cloud Certified Associate Cloud Engineer (ACE) exam serves as a crucial validation of your skills in deploying, monitoring, and maintaining projects on the Google Cloud Platform. This certification is designed for individuals who can use both the Google Cloud Console and the command-line interface to manage enterprise solutions. The exam assesses your ability to set up a cloud solution environment, plan and configure a cloud solution, deploy and implement it, ensure its successful operation, and configure access and security. It is a solid starting point for those new to the cloud and can act as a stepping stone to professional-level certifications. Google recommends at least six months of hands-on experience with Google Cloud products and solutions before attempting it. The exam itself is a two-hour, multiple-choice and multiple-select test that costs $125.
The ACE exam covers a broad range of Google Cloud services and concepts. Key areas of focus include understanding and managing core services like Compute Engine, Google Kubernetes Engine (GKE), App Engine, and Cloud Storage. You should be proficient in launching virtual machine instances, configuring autoscaling, deploying applications, and knowing the different storage classes and their use cases. Additionally, a strong grasp of Identity and Access Management (IAM) is critical, including managing users, groups, roles, and service accounts according to best practices. The exam also delves into networking aspects like creating VPCs and subnets, and operational tasks such as monitoring with Cloud Monitoring, logging with Cloud Logging, and managing billing accounts. Familiarity with command-line tools like gcloud, bq, and gsutil is also essential.
Practice Exams for Preparation
A vital component of a successful preparation strategy is taking practice exams. These simulations are the best way to get a feel for the tone, style, and potential trickiness of the actual exam questions. By taking practice exams, you can quickly identify your strengths and pinpoint the specific exam domains that require further study. Many who have passed the exam attest that a significant portion of the questions on the actual test were very similar to those found in quality practice exams. These practice tests often provide detailed explanations for each answer, offering a deeper learning opportunity by explaining why a particular answer is correct and the others are not. This helps in not just memorizing answers, but truly understanding the underlying concepts. Fortunately, Google provides a set of sample questions to help you get familiar with the exam format, and numerous other platforms offer extensive practice tests. Consistent practice with these resources can significantly boost your confidence and increase your chances of passing the exam.

Free GCP Associate Cloud Engineer Practice Test
- 20 Questions
- Unlimited time
- Setting up a cloud solution environment
- Planning and implementing a cloud solution
- Ensuring successful operation of a cloud solution
- Configuring access and security
Free Preview
This test is a free preview, no account required.
Subscribe to unlock all content, keep track of your scores, and access AI features!
Your organization is creating an internal tool that lists every Identity and Access Management (IAM) policy binding configured in a Google Cloud project so that it can be rebuilt in another environment. To capture the minimum information required to reproduce each binding, which combination of elements must the tool record for every binding it discovers?
The resource, the IAM role, and the principal (member) that receives the role.
The IAM role, the principal, and any condition attached to the binding.
The principal, the resource, and the service perimeter that protects the resource.
The resource, the IAM role, and the audit log sink associated with the project.
Answer Description
An IAM policy is always attached to a specific Google Cloud resource such as a project, folder, or organization. Inside the policy, each binding pairs one role with one or more members (principals). Therefore, to recreate a binding you must know: 1) the resource that the policy is attached to, 2) the role being granted, and 3) the member (principal) receiving that role. Other items sometimes found in policies, such as conditions, audit log sinks, or service perimeters, are optional features that may be absent from many bindings, so they are not required in every case. Capturing the resource, role, and principal ensures the binding can always be reconstructed.
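To see these three elements concretely, you can dump a project's policy and inspect its bindings; in this sketch the project ID and group address are placeholders:

  gcloud projects get-iam-policy my-project --format=json

Each binding in the returned policy pairs a role with its members, roughly like the fragment below; the resource itself is implicit, because it is the project whose policy you fetched:

  {
    "bindings": [
      {
        "role": "roles/storage.objectViewer",
        "members": ["group:data-team@example.com"]
      }
    ]
  }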
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an IAM policy binding in Google Cloud?
What are optional features of IAM policies, such as conditions or audit log sinks?
How is an IAM role different from a principal in Google Cloud?
What is an IAM policy in Google Cloud?
What are IAM roles in Google Cloud and why are they important?
What is the principal or member in an IAM policy?
You are the organization administrator for ExampleCorp's Google Cloud environment. Security mandates that no new Compute Engine VM in any project should obtain an external IPv4 address, except for the network-engineering team that works only in the vpc-test project. Which configuration best meets this requirement while preserving least-privilege and minimizing repetitive work?
Create an IAM Deny policy at the Organization level that blocks the compute.instances.create permission for all users, then add an allow rule in the vpc-test project.
Delete the default VPC network from every project and create custom VPCs without Internet gateways; leave the default network intact in the vpc-test project.
Grant the network-engineering team the Compute Instance Admin role in the vpc-test project and remove that role from all other projects.
Apply the compute.vmExternalIpAccess constraint at the Organization level with "enforce" set to true (deny all), then add a project-level policy on vpc-test that allows only the network-engineering service account to use external IP addresses.
Answer Description
The compute.vmExternalIpAccess organization-policy constraint controls whether VMs can be created with external IPv4 addresses. By setting a policy at the Organization node that denies all principals, every folder and project automatically inherits the restriction. Because policies are inherited but can be overridden lower in the hierarchy, you can add a second policy only on the vpc-test project that specifies the network-engineering service account in the allowed list (or simply clears the enforcement flag). This keeps the default deny posture everywhere, avoids per-project repetition, and follows the principle of least privilege. The other options either rely on IAM roles (which do not block external IP assignment), network topologies that do not stop users from requesting external IPs, or IAM Deny rules that do not offer the fine-grained exception handling provided by the organization-policy constraint.
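As a sketch of how this could look with the legacy org-policy commands (the organization ID and file names are placeholders, and a real project-level exception could enumerate specific allowed values instead of allowing all), first write a deny-external-ip.yaml file for the organization node:

  constraint: constraints/compute.vmExternalIpAccess
  listPolicy:
    allValues: DENY

Then write an allow-external-ip.yaml override with allValues: ALLOW instead, and apply both:

  gcloud resource-manager org-policies set-policy deny-external-ip.yaml --organization=123456789012
  gcloud resource-manager org-policies set-policy allow-external-ip.yaml --project=vpc-test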
Ask Bash
What is the compute.vmExternalIpAccess constraint in Google Cloud?
How does policy inheritance work in Google Cloud Organization policies?
What is the principle of least privilege, and why is it important in cloud security?
What is an organization-policy constraint in Google Cloud?
How does inheritance work for policies in Google Cloud?
Your team is reviewing the release notes for a new Google Cloud service. The notes state that the service is currently offered in "us-central1-a", "northamerica-northeast1", "us", and "global". To plan high availability, they ask you which of these locations is a zone as defined by Google Cloud's resource hierarchy. Which location do you identify?
northamerica-northeast1
us
us-central1-a
global
Answer Description
A zone is the most granular Google Cloud deployment area and is identified by a region name followed by a lowercase letter, such as "-a" or "-b". "us-central1-a" fits this pattern, meaning it is a single zone inside the "us-central1" region. "northamerica-northeast1" lacks the final letter and therefore represents an entire region. "us" is a multi-regional location, and "global" indicates a service that is not tied to any specific geography. Only "us-central1-a" is a zonal identifier.
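You can confirm which locations are zones from the CLI; for example, listing the zones that belong to a region:

  gcloud compute zones list --filter="region:us-central1"

The output shows zone names such as us-central1-a and us-central1-b, each tied to the region named in its prefix.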
Ask Bash
What is the difference between a zone, region, and global location in Google Cloud?
Why does a zone include a lowercase letter in its name?
How does high availability work across zones and regions in Google Cloud?
What is the difference between a zone and a region in Google Cloud?
What is the significance of 'global' in Google Cloud's resource hierarchy?
Why is high availability important in Google Cloud, and how can it be achieved across zones?
During an onboarding exercise you launch a script that tries to provision 150 vCPUs in the us-central1 region. The command fails with the error Quota 'CPUS' exceeded. You already have Owner permissions in the project. The CTO wants to understand why Google Cloud sets such default quotas in every project. Which explanation best describes the main reason these quotas exist?
They enforce each customer's committed-use discounts so that spending cannot exceed budget forecasts.
They protect the overall Google Cloud user community by limiting unexpected spikes in consumption from any one project.
They satisfy regional data-protection regulations by capping how many resources a single project may deploy in one location.
They reserve unused capacity for redundancy, ensuring every project can fail over to another zone during maintenance events.
Answer Description
Google Cloud uses quota limits to prevent any single customer, intentionally or accidentally, from consuming a disproportionate share of shared infrastructure. By throttling sudden, large spikes in resource usage, quotas help maintain service reliability and fair access for all customers. Quotas can incidentally help control costs, but they are unrelated to regulatory compliance and do not function as high-availability reservations.
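Although the question focuses on the rationale, you can view the limits behind such an error directly; for example, this prints every quota metric for the region (including CPUS) with its limit and current usage:

  gcloud compute regions describe us-central1 --format="yaml(quotas)"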
Ask Bash
Why does Google Cloud enforce quotas?
How can I check and modify quotas in Google Cloud?
What happens if I exceed my project's quota limits?
What are quotas in Google Cloud?
How can you increase quotas for a project in Google Cloud?
What is the impact of quotas on cloud reliability?
You are working in Cloud Shell in project analytics-prod. Security asks you to create a new service account called etl-runner and set its display name to "ETL Batch SA" before any roles are granted. Which single gcloud command will accomplish this task?
gcloud iam service-accounts add-iam-policy-binding etl-runner@analytics-prod.iam.gserviceaccount.com --role="roles/iam.serviceAccountUser"
gcloud iam service-accounts update etl-runner@analytics-prod.iam.gserviceaccount.com --display-name="ETL Batch SA" --project=analytics-prod
gcloud iam service-accounts create etl-runner --display-name="ETL Batch SA" --project=analytics-prod
gcloud services enable iam.googleapis.com --project=analytics-prod && gcloud iam service-accounts add-key etl-runner@analytics-prod.iam.gserviceaccount.com
Answer Description
To create a new service account you use gcloud iam service-accounts create. The command requires only the account ID; optional flags like --display-name (and --description) let you set metadata at creation time. Supplying the --project flag ensures the account is created in the intended project. The other commands either try to bind roles (which presumes the account already exists), add a key, or update an existing account; none of these creates a new service account.
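Putting the correct command together, with a follow-up check that the account exists and carries the intended display name:

  gcloud iam service-accounts create etl-runner \
      --display-name="ETL Batch SA" \
      --project=analytics-prod

  gcloud iam service-accounts describe \
      etl-runner@analytics-prod.iam.gserviceaccount.com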
Ask Bash
What is the purpose of a service account in GCP?
What does the `--display-name` flag do in the `gcloud iam service-accounts create` command?
Why is the `--project` flag required when creating a service account?
What is a service account in GCP?
What does the gcloud 'iam service-accounts create' command do?
Why use the '--display-name' flag when creating a service account?
Your company just created a new Google Cloud project. A Google Group of developers must be able to create, update, and delete most resources in the project, such as Compute Engine instances and Cloud Storage buckets. However, the security team requires that the group must not be able to modify IAM policies, link or unlink billing accounts, or delete the project. To satisfy these constraints with a single primitive IAM role and follow least-privilege principles, which role should you grant to the group?
Owner
Viewer
Editor
No primitive role satisfies these requirements; you must create a custom role
Answer Description
The Editor primitive role grants broad read-write access to nearly all resources in a project, so the developers can create, update, and delete Compute Engine instances, Cloud Storage buckets, and other resources. Editor does not include permissions such as resourcemanager.projects.setIamPolicy (modify IAM policies) or resourcemanager.projects.delete (delete the project), nor the billing-link permissions granted by the Project Billing Manager role (roles/billing.projectManager). Therefore, it meets the security team's constraints better than the Owner role, which is overly permissive. Viewer is read-only and would not let developers modify resources, and a custom role is unnecessary because the Editor role already satisfies the requirements.
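Granting the role to the developers' Google Group could look like this (the project ID and group address are placeholders):

  gcloud projects add-iam-policy-binding my-project \
      --member="group:developers@example.com" \
      --role="roles/editor"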
Ask Bash
What are primitive IAM roles in Google Cloud?
How does the Editor role differ from the Owner role in Google Cloud IAM?
What does least-privilege access mean in Google Cloud IAM?
What is the Editor role in GCP?
How does the Editor role follow least-privilege principles?
Why is a custom IAM role not necessary in this case?
Your team's deployment pipeline suddenly fails when trying to create several n2-standard-16 VM instances in the europe-west1 region. The Compute Engine error message is: "Quota 'CPUS (europe-west1)' exceeded. Limit: 96. Requested: 128." You need to restore the pipeline as quickly as possible and avoid the same problem in the future. Which action should you take first?
Modify the Terraform code to deploy smaller n2-standard-8 instances so that total vCPU usage stays under the existing 96-vCPU limit.
Create a new Google Cloud project, link it to the same billing account, and rerun the pipeline there to obtain fresh default quotas.
Redeploy the workload in another region where remaining CPUS quota is available, then file a quota request after deployment completes.
Submit a quota increase request for the CPUS quota in europe-west1 using the Cloud Console Quotas page.
Answer Description
The immediate blocker is that the regional vCPU quota has been exhausted. The fastest way to allow the pipeline to proceed is to request a quota increase for the CPUS resource in europe-west1. This is done in the Quotas page of the Google Cloud console (or with gcloud/serviceusage API) and, once approved, raises the regional limit so future deployments will succeed without code changes.
Choosing a different machine type or region might unblock the pipeline today, but it does not ensure deployments will work the next time the quota is hit, and it forces changes to infrastructure code. Creating a new project gives a fresh quota allotment but requires configuring networking, IAM bindings, and billing links, which is slower than requesting additional quota and is not necessary when a single regional quota is the only constraint.
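Before filing the request, you can confirm the regional CPUS usage against its limit; a sketch:

  gcloud compute regions describe europe-west1 \
      --format="yaml(quotas)" | grep -B1 -A1 "metric: CPUS$"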
Ask Bash
What is a quota in Google Cloud?
How do you submit a quota increase request in Google Cloud?
What is the role of regional quotas in Google Cloud?
How can you request a quota increase in Google Cloud?
What is the difference between regional and global quotas in Google Cloud?
Your organization is moving a critical Linux-based application to a Compute Engine VM. Operations wants CPU and memory metrics to appear in Cloud Monitoring dashboards and needs the application's log files to be searchable in Cloud Logging. They prefer to deploy and maintain as few agents on the VM as possible. Which action will best meet these requirements?
Install the legacy Stackdriver Logging agent together with the legacy Stackdriver Monitoring agent to capture the required data.
Install the Google Cloud Ops Agent on the virtual machine to send both logs and metrics to Cloud Logging and Cloud Monitoring.
Install only the legacy Stackdriver Monitoring agent, which gathers both logs and metrics for Cloud Monitoring.
Simply enable the Cloud Logging and Cloud Monitoring APIs; the VM will export all logs and metrics without any agent.
Answer Description
The Google Cloud Ops Agent is a single agent that collects both system metrics (CPU, memory, disk, network, etc.) for Cloud Monitoring and application and system logs for Cloud Logging. Installing it on the VM satisfies the need for comprehensive observability while keeping operational overhead low because only one agent must be deployed and managed.
Enabling the Cloud Logging and Cloud Monitoring APIs alone is insufficient; without an agent, most guest-level metrics and many application logs are not captured. The legacy Stackdriver Monitoring agent collects metrics only, and the legacy Logging agent collects logs only; using both would require two separate agents, which conflicts with the requirement to minimize agent management.
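On a Linux VM, installation follows Google's documented repository script:

  curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
  sudo bash add-google-cloud-ops-agent-repo.sh --also-install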
Ask Bash
What is the Google Cloud Ops Agent?
How do I install and configure the Google Cloud Ops Agent?
How does the Ops Agent compare to legacy Stackdriver agents?
Why can’t enabling Cloud Logging and Monitoring APIs alone meet the requirements?
What are the differences between the Google Cloud Ops Agent and legacy Stackdriver agents?
You are asked to link an existing Google Cloud project called finance-prod to your company's centralized billing account. When you attempt this in the Cloud Console, the Link project button is disabled. Your identity currently has the Billing Account Viewer role on the billing account and the Viewer role on the project. Which combination of additional IAM roles will give you the minimum permissions required to complete the link without granting unnecessary broader access?
Assign Billing Account Administrator on the billing account and Viewer on the finance-prod project.
Assign Owner on the finance-prod project and Billing Account Viewer on the billing account.
Assign Billing Account User on the billing account and Project Billing Manager on the finance-prod project.
Assign Editor on the finance-prod project; no additional role is needed on the billing account.
Answer Description
Linking a project to a billing account requires permissions on two separate resources:
- Billing account - you need permission to attach projects to the account (billing.resourceAssociations.create), which is included in the Billing Account User role (roles/billing.user).
- Project - you need the resourcemanager.projects.createBillingAssignment permission, provided by the Project Billing Manager role (roles/billing.projectManager).
Granting Billing Account User on the billing account plus Project Billing Manager on the project satisfies these requirements and follows the principle of least privilege. Billing Account Administrator or Owner roles would also work but grant unnecessary additional permissions. Similarly, granting Editor on the project is broader than needed.
Therefore, selecting Billing Account User for the billing account together with Project Billing Manager for the project is the correct and most restrictive solution.
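With both roles granted, the link itself can also be performed from the CLI; a sketch, where the billing account ID is a placeholder (older SDK releases expose this command under gcloud beta billing):

  gcloud billing projects link finance-prod \
      --billing-account=0X0X0X-0X0X0X-0X0X0X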
Ask Bash
What is the principle of least privilege in IAM?
What is the difference between Billing Account User and Billing Account Administrator?
What does the Project Billing Manager role allow you to do?
What are the responsibilities of the Billing Account User role in Google Cloud?
What permissions does the Project Billing Manager role provide in Google Cloud?
Why is it important to follow the principle of least privilege when assigning IAM roles in Google Cloud?
Your company just got a Cloud Identity account and now has an Organization node. You must migrate 30 standalone projects owned by different teams. Each project belongs to either the Finance or Engineering department. Teams need autonomy to manage their projects, and org administrators must apply future policy constraints (e.g., disabling external VM IPs) to Engineering only without affecting Finance. Which resource-hierarchy design meets these requirements with the least effort?
Leave the projects as standalone (not under any Organization) and use Shared VPC to centralize network administration instead of changing the hierarchy.
Create two top-level folders named Finance and Engineering under the Organization, move each project into its folder, and grant department leads IAM roles on their folder.
Set up a separate Organization for each department and transfer the projects to the corresponding Organization.
Move all projects directly under the Organization and tag them with labels for Finance or Engineering; grant IAM roles individually on every project.
Answer Description
Folders are the recommended way to group projects that share common administrators or policy requirements. By creating one folder for Finance and another for Engineering, you can:
- Move each team's projects into the appropriate folder once, then rely on inheritance for both IAM and Organization Policy.
- Grant department leads roles on their folder so they automatically manage all current and future projects inside it.
- Let central administrators attach constraints (such as blocking external VM IPs) to just the Engineering folder without touching Finance.
Labels do not influence IAM or Organization Policy inheritance, so using only labels would require per-project administration. Maintaining separate Organizations would add unnecessary complexity and prevent shared billing or networking. Keeping projects outside any Organization would remove the ability to apply Organization Policies entirely. Therefore, structuring the hierarchy with dedicated top-level folders is the most efficient and scalable solution.
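A sketch of the corresponding commands (the organization ID, folder ID, and project ID are placeholders):

  gcloud resource-manager folders create \
      --display-name="Engineering" \
      --organization=123456789012

  gcloud beta projects move my-eng-project --folder=987654321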
Ask Bash
What is the purpose of an Organization node in GCP?
How do IAM and Organization Policies inherit in a GCP hierarchy?
What are the benefits of using folders in GCP's resource hierarchy?
What is an Organization node in GCP?
How do folders help in organizing projects in GCP?
What is the difference between labels and folders in GCP?
Your company runs dozens of Compute Engine VMs that host internal web applications. SREs want to forward OS-level metrics (CPU, memory, disk, network) and application logs to Cloud Monitoring and Cloud Logging without installing and maintaining two different agents. They also need a single YAML file on each VM to enable collection of NGINX access logs in addition to the default system telemetry. Which approach best meets these requirements with the least operational overhead?
Deploy a Prometheus sidecar on every VM for metrics and use a custom script to send log files to a Cloud Storage bucket.
Install the Google Cloud Ops Agent on each VM and add an nginx_access logging receiver to the agent's unified config.yaml file.
Enable Cloud Audit Logs at the project level and export them to Cloud Monitoring; no agent installation is required.
Install the legacy Monitoring agent for metrics and the legacy Logging agent with a Fluentd NGINX plugin to collect logs.
Answer Description
The Google Cloud Ops Agent is a single package that replaces the older standalone Monitoring and Logging agents. It ships system metrics and logs out of the box and can be extended by creating /etc/google-cloud-ops-agent/config.yaml, where you add receivers such as the built-in type: nginx_access to capture NGINX access logs. Installing both legacy agents or exporting logs in other ways would require two components or extra infrastructure, contrary to the requirement for one agent and one configuration file.
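A minimal /etc/google-cloud-ops-agent/config.yaml for this scenario could look roughly like the following; the receiver and pipeline IDs are arbitrary names, while type: nginx_access is the built-in receiver the question refers to:

  logging:
    receivers:
      nginx_access:
        type: nginx_access
    service:
      pipelines:
        nginx:
          receivers: [nginx_access]

After editing the file, restart the agent (sudo service google-cloud-ops-agent restart) so the new receiver takes effect.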
Ask Bash
What is the Google Cloud Ops Agent?
How do you configure the Google Cloud Ops Agent for NGINX logs?
Why are legacy agents not recommended for this use case?
What does the unified config.yaml file do in Ops Agent?
How does the Ops Agent differ from legacy Monitoring and Logging agents?
Your company needs a new VPC named corp-net that must contain exactly two subnets: dev-us (10.10.0.0/16) in us-central1 and dev-eu (10.20.0.0/16) in europe-west1. No additional subnets should ever be created automatically. Which approach meets this requirement with the least manual effort?
Run gcloud compute networks create corp-net --subnet-mode=custom, then create the dev-us and dev-eu subnets with gcloud compute networks subnets create, specifying --network=corp-net, the correct --region, and the desired --range CIDR blocks.
Run gcloud compute networks create corp-net --subnet-mode=auto, then use gcloud compute networks subnets expand-ip-range to adjust the IP ranges for dev-us and dev-eu.
Create the default VPC, rename two of its existing subnets to dev-us and dev-eu, and change their IP ranges to 10.10.0.0/16 and 10.20.0.0/16.
Create an auto-mode VPC named corp-net, delete every automatically created subnet except dev-us and dev-eu, and rely on this configuration going forward.
Answer Description
Auto mode VPC networks always create one subnet per region at creation time and automatically add new subnets whenever Google Cloud launches a new region, so they cannot guarantee that only the two required subnets will exist. The most straightforward way to meet the requirement is to create the VPC in custom mode, which starts with no subnets, and then explicitly create just the two needed subnets, each with its region and IP CIDR range. Deleting subnets from an auto-mode network or trying to rename/resize default subnets is error-prone and still would not stop future automatic subnet creation. Therefore, the sequence that builds a custom-mode network and then adds the two subnets is the only correct solution.
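The correct sequence, written out in full:

  gcloud compute networks create corp-net --subnet-mode=custom

  gcloud compute networks subnets create dev-us \
      --network=corp-net --region=us-central1 --range=10.10.0.0/16

  gcloud compute networks subnets create dev-eu \
      --network=corp-net --region=europe-west1 --range=10.20.0.0/16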
Ask Bash
What is a VPC in Google Cloud?
What is the difference between auto-mode and custom-mode VPCs?
What is an IP CIDR range and why is it important for subnets?
What is a VPC in GCP?
What is the difference between auto-mode and custom-mode VPCs in GCP?
What is CIDR and why is it used in configuring subnets?
Your company has a Google Cloud organization with separate folders for "prod" and "dev" projects. Security mandates that no new Compute Engine VM in any project under the prod folder may receive an external IPv4 address, but development teams must remain free to create such VMs in their own folder. Which approach best meets these requirements with the least administrative overhead?
Apply the constraint constraints/compute.vmCanIpForward in Deny mode on the organization node to block external IPs for every VM.
Remove the roles/compute.networkUser IAM role from all service accounts in prod projects to prevent them from getting external IP addresses.
Delete the default VPC network in each prod project and require teams to create only custom subnets without any organization policy.
Apply the organization policy constraint constraints/compute.vmExternalIpAccess in Deny mode on the prod folder so it is inherited by all production projects.
Answer Description
Google Cloud Organization Policy lets you set constraints that are inherited by all descendants in the resource hierarchy unless an ancestor overrides them. The constraint constraints/compute.vmExternalIpAccess controls whether new VM instances can obtain external IPv4 addresses. By setting this constraint to Deny at the prod folder level, every current and future project inside that folder will automatically block external IP assignment, while projects in the dev folder remain unaffected because they inherit policies from their own (less-restricted) ancestors. Applying the policy at the organization level would also affect dev projects, and using other constraints or IAM changes would not reliably block external IP creation. Deleting default VPC networks does not prevent users from adding external addresses to new VMs in custom networks.
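Reusing the policy-file approach sketched in the earlier external-IP question (a YAML file setting constraint: constraints/compute.vmExternalIpAccess with listPolicy allValues: DENY), the constraint can be attached at the folder level; the folder ID below is a placeholder:

  gcloud resource-manager org-policies set-policy deny-external-ip.yaml --folder=123456789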
Ask Bash
What is an organization policy in Google Cloud?
Can you explain `constraints/compute.vmExternalIpAccess` in detail?
What is the difference between constraints and IAM roles in Google Cloud?
What is the organization policy constraint `constraints/compute.vmExternalIpAccess`?
What is the resource hierarchy in Google Cloud?
How does `constraints/compute.vmExternalIpAccess` differ from `constraints/compute.vmCanIpForward`?
A financial services company is creating a Cloud SQL instance that must satisfy German data-residency rules: all data must remain in a single geographic area, yet the database should continue operating if one zone in that area becomes unavailable. In the Google Cloud console, which location type best meets these requirements?
Global
Regional (europe-west3)
Multi-regional (europe)
Zonal (europe-west3-c)
Answer Description
A regional location (for example, europe-west3) keeps resources inside a single Google Cloud region, ensuring that data does not leave that geographic area. At the same time, the service can deploy synchronously replicated resources across multiple zones within that region, providing resilience against a single-zone outage. A zonal location provides no cross-zone redundancy, while a multi-regional or global location would place data in more than one region, violating the residency constraint.
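In Cloud SQL this corresponds to choosing the REGIONAL availability type at creation time; a sketch, where the instance name, engine version, and tier are placeholders:

  gcloud sql instances create payroll-db \
      --database-version=POSTGRES_15 \
      --tier=db-custom-2-8192 \
      --region=europe-west3 \
      --availability-type=REGIONAL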
Ask Bash
What is the difference between a zonal and regional location in Google Cloud?
What are synchronous replicated resources in Google Cloud?
Why can't multi-regional or global location types satisfy data-residency requirements?
What is a Google Cloud region and zone?
How does regional redundancy in Cloud SQL work?
Why can't multi-regional or global locations meet German data-residency rules?
Your organization runs several Cloud Storage buckets in a single project. Only one analyst, alice@example.com, must be able to list and download objects from the bucket gs://analytics-data. She must not access any other GCS resources or change data. As the project owner, which single IAM change best satisfies the requirement while following least-privilege?
Grant alice@example.com the Storage Object Viewer role on the entire project.
Grant alice@example.com the Storage Admin role on the bucket gs://analytics-data.
Grant alice@example.com the Viewer basic role on the project.
Grant alice@example.com the Storage Object Viewer role on the bucket gs://analytics-data.
Answer Description
Granting the Storage Object Viewer role (roles/storage.objectViewer) on the specific bucket gives Alice permission to list objects and read their data in that bucket only. Because the binding is applied at the bucket level, it does not cascade to other buckets or project resources, preserving least-privilege. Granting the same role or any broader role (such as Viewer or Storage Admin) at the project level would allow Alice to read objects in every bucket in the project. Granting Storage Admin on the bucket would let her create, overwrite, or delete objects, violating the read-only requirement.
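The bucket-scoped binding can be created with a single command, for example:

  gcloud storage buckets add-iam-policy-binding gs://analytics-data \
      --member="user:alice@example.com" \
      --role="roles/storage.objectViewer"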
Ask Bash
What does the Storage Object Viewer role allow in GCP?
What is the principle of least privilege in IAM?
How does bucket-level IAM differ from project-level IAM in GCP?
What is the Storage Object Viewer role in Google Cloud IAM?
How does applying an IAM role at the bucket level differ from applying it at the project level?
Why is least-privilege access important in IAM?
Your company currently uses only individual Gmail accounts and has a single Google Cloud project that appears in the console with No organization. Management now wants to apply organization-wide IAM policies and centralize future project creation under an Organization resource, but they do not plan to purchase Google Workspace licenses. What is the most appropriate first step to obtain an Organization resource for the company?
Register the company's domain with Cloud Identity, verify domain ownership, and then sign in to Google Cloud from an account in that domain.
Convert one founder's Gmail account to a service account and assign it the Organization Administrator role.
Create a new self-serve billing account and link it to a placeholder project; the Organization resource is created automatically during billing setup.
Open a support ticket with Google Cloud and request that an Organization resource be manually provisioned for the existing project.
Answer Description
An Organization resource is created automatically for a domain after someone in that domain signs up for Google Cloud and the domain is managed by either Google Workspace or Cloud Identity. Because the company does not want Google Workspace, the correct approach is to enroll the company's domain in Cloud Identity (free or premium) and complete domain verification. When a user from that Cloud Identity domain next accesses Google Cloud, the Organization node is created and projects can be migrated under it. Creating billing accounts, requesting support, or manipulating service accounts do not create an Organization resource.
Ask Bash
What is Cloud Identity, and how does it relate to creating an Organization resource?
How is domain verification performed in Cloud Identity?
Why can't a billing account or service account be used to create an Organization resource?
What is Cloud Identity and why is it used?
How does domain verification work in Cloud Identity?
What benefits does an Organization resource provide in Google Cloud?
During a disaster-recovery review, you are asked which existing resource would remain fully manageable if an entire Google Cloud region became unavailable. The project currently includes: a custom-mode VPC network, three subnetworks (us-east1, us-central1, europe-west1), a regional Cloud NAT gateway in us-east1, and a zonal Compute Engine VM in us-east1-b. Which component is classified as a global resource?
The regional Cloud NAT gateway
The zonal Compute Engine VM
The subnetworks
The custom-mode VPC network
Answer Description
The VPC network itself is a global resource. Its control plane spans Google's backbone and is not tied to any particular region or zone, so it is still manageable even if a single region goes down. Subnetworks inherit the regional location of the IP ranges they are created in, Cloud NAT gateways are regional resources that depend on regional subnets, and Compute Engine virtual machines are zonal resources that exist in a specific zone within a region.
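The difference is visible in each resource's self-link URI: networks sit under a global path segment, while subnets carry a region (the resource names below are placeholders):

  gcloud compute networks describe corp-vpc --format="value(selfLink)"
  # .../projects/PROJECT/global/networks/corp-vpc

  gcloud compute networks subnets describe app-subnet \
      --region=us-east1 --format="value(selfLink)"
  # .../projects/PROJECT/regions/us-east1/subnetworks/app-subnet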
Ask Bash
What is a global resource in Google Cloud?
Why is a custom-mode VPC network classified as a global resource?
How does a global resource differ from regional and zonal resources?
What does it mean for a VPC network to be a global resource?
What is the difference between a regional resource and a zonal resource in Google Cloud?
Why are subnetworks considered regional while a VPC network is global?
Your finance team needs a daily feed of every Google Cloud SKU charge, tagged with project, service, and resource labels, so they can build long-term cost-allocation dashboards and join the data with internal tables using SQL. They ask you, the Cloud Engineer, to recommend the simplest native solution that delivers this granularity without manual file handling. What should you do?
Configure Cloud Billing export to Cloud Storage in CSV format and have the finance team import the files into BigQuery when needed.
Enable Cloud Billing detailed usage cost export to BigQuery and let the finance team query the dataset for their dashboards.
Create a budget for the billing account with Pub/Sub notifications and stream the messages to BigQuery for analysis.
Use the Billing Reports page in the Google Cloud console and schedule weekly PDF exports of the cost charts.
Answer Description
Exporting Cloud Billing data to BigQuery at the detailed usage cost level writes every usage record, including project, service, SKU, and label information, into a managed BigQuery dataset each day. Finance analysts can run SQL against the tables, join them with other datasets, and retain the data for as long as needed. Exporting to Cloud Storage creates files that must be ingested separately, the Billing Reports page is a UI without raw records, and budget-based Pub/Sub notifications only emit threshold events, not detailed cost lines.
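Once the export is flowing, the finance team can query it with the bq tool; a sketch, where the dataset name and the account-ID suffix of the detailed export table (gcp_billing_export_resource_v1_<ID>) are placeholders:

  bq query --use_legacy_sql=false '
  SELECT project.id AS project_id,
         service.description AS service,
         SUM(cost) AS total_cost
  FROM `billing_ds.gcp_billing_export_resource_v1_XXXXXX`
  WHERE usage_start_time >= TIMESTAMP("2024-01-01")
  GROUP BY 1, 2
  ORDER BY total_cost DESC'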
Ask Bash
What is Cloud Billing detailed usage cost export?
Why use BigQuery for cost analysis instead of Cloud Storage?
What benefits does Cloud Billing export provide for SQL-based querying?
What is Cloud Billing detailed usage cost export to BigQuery?
How does exporting to BigQuery differ from exporting to Cloud Storage?
What kind of labels and details can be captured in Cloud Billing export to BigQuery?
Your security team has prohibited granting the storage.objects.getIamPolicy permission in the payroll project. A group of analysts must be able to upload new objects and delete outdated objects in a sensitive Cloud Storage bucket, but they must not view or change IAM policies. The available predefined Storage roles all include the forbidden permission. How should you grant the required access while respecting the security constraint?
Use object ACLs to give the analysts OWNER access on all objects in the bucket while leaving IAM unchanged.
Enable Uniform bucket-level access and grant the analysts the Storage Admin role on the bucket so they inherit all necessary permissions automatically.
Create an organization- or project-level custom IAM role that includes only storage.objects.create and storage.objects.delete, then grant that role on the bucket to the analysts' Google Group.
Grant the analysts the predefined Storage Object Admin role on the bucket and add an IAM deny policy for storage.objects.getIamPolicy.
Answer Description
Predefined roles such as Storage Object Admin or Storage Admin bundle all storage.objects.* permissions, including storage.objects.getIamPolicy, which violates the security team's restriction. A custom role lets you pick only the necessary permissions (for example, storage.objects.create and storage.objects.delete) and omit the disallowed storage.objects.getIamPolicy. You can then bind that custom role to the analysts' group at the bucket level. Combining IAM conditions with broader roles would still include the prohibited permission, and using ACLs or Uniform bucket-level access does not remove permissions embedded in the predefined roles. Therefore, defining and assigning a custom IAM role is the correct and least-privilege solution.
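A sketch of the two steps (the role ID, bucket name, and group address are placeholders):

  gcloud iam roles create objectWriteDeleteOnly --project=payroll \
      --title="Object Create/Delete Only" \
      --permissions=storage.objects.create,storage.objects.delete

  gcloud storage buckets add-iam-policy-binding gs://payroll-sensitive \
      --member="group:analysts@example.com" \
      --role="projects/payroll/roles/objectWriteDeleteOnly"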
Ask Bash
What is a custom IAM role in Google Cloud?
How does the storage.objects.getIamPolicy permission work?
What is Uniform bucket-level access in Cloud Storage?
What is a custom IAM role?
What is the difference between an IAM condition and a custom IAM role?
How does enabling Uniform bucket-level access impact permissions?
Your organization is setting up its first Google Cloud project. Finance is willing to let Google automatically charge a corporate credit card or bank account whenever spending reaches Google-defined thresholds, and they do not need a formal monthly invoice or purchase-order approval workflow. Which type of Cloud Billing account best satisfies these requirements while keeping administration simple?
Request an invoiced (offline) Cloud Billing account so charges appear on a monthly invoice with net-30 terms.
Have a Google Cloud reseller manage the project under the reseller's billing account.
Create a self-serve (online) Cloud Billing account and attach a corporate credit card or bank account.
Operate the project without any billing account and rely solely on the always-free usage limits.
Answer Description
A self-serve (online) Cloud Billing account is designed for automatic payments. You register a credit card or bank account, and Google automatically bills the payment method when the accrued charges hit the threshold (or at the end of the billing cycle if the threshold is not reached). This model does not generate paper invoices or require purchase-order approval. In contrast, invoiced (offline) accounts are available only to customers that meet Google's credit requirements, generate a monthly invoice with net-30 terms, and typically involve manual payment by wire or check. Operating without a billing account would block paid usage, and using a reseller would delegate billing control to the partner. Therefore, a self-serve billing account is the appropriate choice.
Ask Bash
What is a Google Cloud self-serve billing account?
What is the threshold for charges in a self-serve billing account?
Can a self-serve billing account be used with a purchase order system?
What is a self-serve Cloud Billing account?
How does Google define billing thresholds in self-serve accounts?
Can an organization switch from self-serve to an invoiced billing account later?
Woo!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.