GCP Professional Cloud Security Engineer Practice Test
Use the form below to configure your GCP Professional Cloud Security Engineer Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

GCP Professional Cloud Security Engineer Information
Overview
The Google Cloud Professional Cloud Security Engineer (PCSE) certification is designed for security professionals who architect and implement secure workloads on Google Cloud Platform (GCP). Earning the credential signals that you can design robust access controls, manage data protection, configure network security, and ensure regulatory compliance in cloud environments. Because Google frequently updates its security services—such as Cloud Armor, BeyondCorp Enterprise, Chronicle, and Confidential Computing—the PCSE exam expects you to demonstrate both conceptual depth and hands-on familiarity with the latest GCP features.
Exam Format and Content Domains
The exam is a two-hour, multiple-choice and multiple-select test delivered either in person at a testing center or through online proctoring. Questions span five core domains:
- Configuring access within GCP (IAM, service accounts, organization policies)
- Configuring network security (VPC service controls, Cloud Load Balancing, Private Service Connect)
- Ensuring data protection (Cloud KMS, CMEK, DLP, Secret Manager)
- Managing operational security (logging/monitoring with Cloud Audit Logs, Cloud Monitoring, Chronicle)
- Ensuring compliance (risk management frameworks, shared-responsibility model, incident response)
Expect scenario-based questions that require selecting the “best” choice among many viable solutions, so practice with real-world architectures is critical.
Why Practice Exams Matter
Taking high-quality practice exams is one of the most efficient ways to close knowledge gaps and build test-taking stamina. First, sample questions expose you to Google’s preferred terminology—e.g., distinguishing between “Cloud Armor edge policies” and “regional security policies”—so you aren’t surprised by phrasing on test day. Second, timed drills simulate the exam’s pacing, helping you learn to allocate roughly two minutes per question and flag tougher items for later review. Finally, detailed explanations turn each incorrect answer into a mini-lesson; over multiple iterations, you’ll identify patterns (for instance, Google almost always recommends using service accounts over user credentials in automated workflows). Aim to score consistently above 85 percent on reputable practice sets before scheduling the real exam.
Final Preparation Tips
Combine practice exams with hands-on labs in Qwiklabs or Cloud Skills Boost to reinforce muscle memory—creating VPC service perimeter policies once in the console and once via gcloud is more memorable than reading about it. Review the official exam guide and sample case studies, paying special attention to Google’s security best-practice documents and whitepapers. In the final week, focus on weak areas flagged by your practice-exam analytics and skim release notes for any major security service updates. With a balanced regimen of study, labs, and realistic mock tests, you’ll walk into the PCSE exam with confidence and a solid grasp of how to secure production workloads on Google Cloud.

Free GCP Professional Cloud Security Engineer Practice Test
- 20 Questions
- Unlimited time
- Configuring Access
- Securing communications and establishing boundary protection
- Ensuring data protection
- Managing operations
- Supporting compliance requirements
Free Preview
This test is a free preview, no account required.
Subscribe to unlock all content, keep track of your scores, and access AI features!
Your company uses Microsoft Active Directory as the authoritative directory. Google Cloud Directory Sync (GCDS) currently provisions users and groups into Cloud Identity, so employees authenticate with passwords stored in Google. Security now requires that:
- Google must stop storing or validating user passwords,
- Password changes in Active Directory must take effect immediately when users access Google Workspace,
- Existing group synchronization must continue.
Which approach best satisfies all requirements while introducing the fewest changes to the existing Google identities?
Replace GCDS with Workforce Identity Federation so Google Workspace relies on short-lived tokens issued by Active Directory and stop synchronizing directory objects.
Export users from Active Directory to a CSV file, import them into Cloud Identity, disable GCDS, and have users reset their Google passwords.
Enable Google Cloud Secure LDAP for authentication and disable SAML single sign-on while leaving GCDS in place for groups.
Retain GCDS for user and group provisioning but configure Google Workspace for SAML single sign-on that redirects authentication to an AD FS identity provider.
Answer Description
Keeping GCDS maintains the existing, automated provisioning and de-provisioning of users and groups. Adding a SAML-based single sign-on configuration with AD FS (or another IdP backed by Active Directory) delegates authentication to the on-premises IdP. Because Google no longer validates passwords, it no longer stores them, and any password change in Active Directory is immediately honored the next time a user authenticates. The other options either break group synchronization (Workforce Identity Federation), fail to remove Google-stored passwords (Secure LDAP or CSV import), or add significant operational overhead.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is GCDS and why is it used in this solution?
How does SAML single sign-on (SSO) work with AD FS?
Why does Workforce Identity Federation not meet the requirements?
A security team wants to tighten access controls in a large GCP organization where IAM roles are currently bound to dozens of individual user principals. Their goals are to 1) simplify future permission reviews, 2) delegate day-to-day onboarding and off-boarding of developers to team leads, and 3) ensure that no users accidentally retain permissions after leaving a group. Which approach best meets ALL three goals?
Assign broad organization-wide roles (such as roles/viewer) directly to every user and rely on audit logs to detect misuse.
Keep existing individual IAM bindings but place all projects inside a VPC Service Control perimeter to prevent lateral movement and data exfiltration.
Require every project owner to manage IAM bindings for their own project resources instead of centralizing permissions in groups.
Create least-privilege Google Groups for each functional role, grant all required IAM roles to those groups, and delegate group-membership administration to team leads while synchronizing group membership with the corporate directory.
Answer Description
Granting IAM roles to purpose-specific Google Groups (for example, "data-engineers@example.com") centralizes policy bindings, so an auditor can review access by looking at the single group instead of hundreds of users. Delegating group-membership management to team leads with the Groups Admin role lets them add or remove members without changing IAM policies, satisfying the operational requirement. Enforcing auto-sync of group membership with the company's authoritative HR data source (via the Cloud Identity Groups API or Google Cloud Directory Sync) guarantees that when an employee leaves or changes roles, their account is promptly removed from the group and their inherited permissions automatically disappear. Simply granting organization-level roles to all users or requiring project owners to manage IAM directly would either violate least privilege or fail to reduce administrative overhead. Relying only on VPC Service Controls secures data exfiltration paths but does not address user-level permission management.
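A minimal gcloud sketch of this pattern, assuming hypothetical names (project my-project, group data-engineers@example.com, member new.dev@example.com): the role is bound to the group once, and membership is managed separately.

# Bind the IAM role to the group rather than to individual users (all names are placeholders)
gcloud projects add-iam-policy-binding my-project \
  --member="group:data-engineers@example.com" \
  --role="roles/bigquery.dataEditor"

# Membership changes (handled by team leads or directory sync) never touch the IAM policy
gcloud identity groups memberships add \
  --group-email="data-engineers@example.com" \
  --member-email="new.dev@example.com"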
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Google Group and how is it used in IAM roles?
How does synchronizing Google Groups with a corporate directory enhance security?
What is least privilege access, and why is it important?
Your security team must scan several terabytes of log files stored in Cloud Storage with Sensitive Data Protection (SDP). The files may contain U.S. Social Security numbers (formatted as "123-45-6789") and an internal customer identifier that always starts with "CUST-" followed by exactly 10 digits (for example, "CUST-0123456789"). The team wants to minimize configuration effort while keeping false positives low. Which detection strategy best meets these requirements?
Use the built-in US_SOCIAL_SECURITY_NUMBER infoType and create a custom regular-expression infoType named ACME_CUSTOMER_ID that matches the pattern "CUST-\d{10}" (optionally adding a hotword rule that looks for the string "CUST-").
Create custom regular-expression infoTypes for both SSNs and the customer ID so you can fully control pattern matching.
Use the built-in CREDIT_CARD_NUMBER infoType for SSNs and create a custom dictionary detector that lists every known customer ID.
Rely on the built-in PERSON_NAME infoType for SSNs and the built-in PHONE_NUMBER infoType for the customer ID because both contain digits and delimiters.
Answer Description
SDP already provides a built-in infoType, US_SOCIAL_SECURITY_NUMBER, that reliably detects the standard SSN pattern, so no additional configuration is needed for that element. The proprietary customer identifier has a unique format that is not covered by any built-in infoType, so the recommended approach is to define a custom regex detector (optionally combined with a hotword rule such as the literal text "CUST-") to target exactly the required pattern and reduce false positives. Creating custom detectors for SSNs would be unnecessary work, and choosing unrelated built-in infoTypes (e.g., CREDIT_CARD_NUMBER, PERSON_NAME, PHONE_NUMBER) would fail to detect the required data.
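As a rough illustration only, the same detection strategy can be expressed in a Sensitive Data Protection content:inspect request; the project ID, sample text, and the ACME_CUSTOMER_ID name below are placeholders.

# Hypothetical example: combine the built-in SSN detector with a custom regex infoType
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/my-project/content:inspect" \
  -d '{
    "item": {"value": "SSN 123-45-6789, customer CUST-0123456789"},
    "inspectConfig": {
      "infoTypes": [{"name": "US_SOCIAL_SECURITY_NUMBER"}],
      "customInfoTypes": [{
        "infoType": {"name": "ACME_CUSTOMER_ID"},
        "regex": {"pattern": "CUST-\\d{10}"}
      }]
    }
  }'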
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Sensitive Data Protection (SDP) in GCP?
What is an infoType in Google Cloud's Sensitive Data Protection?
How do hotword rules reduce false positives in SDP scanning?
A multinational enterprise maintains an on-premises middleware service that must authenticate to Google Cloud Storage by using a JSON key for a Google Cloud service account. Compliance now mandates quarterly key rotation with zero downtime for the application. Which practice best satisfies Google-recommended guidance for rotating this unavoidable user-managed key while minimizing service disruption?
Delete the current key, immediately create a replacement with the same name, and restart the application to force it to pick up the new credential.
Periodically re-encrypt the existing key with a new Cloud KMS key version to satisfy rotation requirements without generating additional service account keys.
Extend the key's expiration date to 90 days and enable OS-level credential caching so the application keeps working during the renewal window.
Create a second key for the service account, update the application to use the new key, verify access, and then delete the original key, ensuring no more than two active keys exist at any time.
Answer Description
Google recommends having no more than two active user-managed keys per service account at any time. To rotate a key without downtime, you first create a second key, securely deploy the new key to the workload, verify successful authentication, and then delete the older key. This preserves continuous access because the application can switch to the new credential before the old one is removed. Simply deleting the existing key before deploying a replacement, or disabling the service account, would cause an outage. Re-encrypting or extending the key does not meet rotation requirements, because the underlying key material remains unchanged.
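A sketch of that rotation sequence with gcloud, assuming a hypothetical service account middleware-sa@my-project.iam.gserviceaccount.com:

# 1. Create the second (new) key and deploy it to the middleware
gcloud iam service-accounts keys create new-key.json \
  --iam-account=middleware-sa@my-project.iam.gserviceaccount.com

# 2. After verifying the application authenticates with the new key,
#    list the keys and delete the old one by its KEY_ID
gcloud iam service-accounts keys list \
  --iam-account=middleware-sa@my-project.iam.gserviceaccount.com
gcloud iam service-accounts keys delete OLD_KEY_ID \
  --iam-account=middleware-sa@my-project.iam.gserviceaccount.com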
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a JSON key for a Google Cloud service account?
Why does Google recommend having no more than two active keys for a service account?
How does one securely rotate a service account key in Google Cloud?
You are investigating a potential data leak and must list only Cloud Audit Log entries for the last 24 hours that show a principal enumerating objects in any Cloud Storage bucket within project finance-prod. The investigator will run gcloud logging read from a workstation that already has application-default credentials for the project. Which advanced log filter should they supply to return only the relevant Data Access log entries and exclude every other service or log type?
logName="projects/finance-prod/logs/cloudaudit.googleapis.com%2Factivity" AND resource.type="gcs_bucket" AND protoPayload.methodName="storage.objects.list"
logName="projects/finance-prod/logs/cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_object" AND protoPayload.serviceName="storage.googleapis.com" AND protoPayload.methodName="storage.objects.get"
logName="projects/finance-prod/logs/cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket" AND protoPayload.serviceName="storage.googleapis.com" AND protoPayload.methodName="storage.objects.list"
resource.type="gcs_bucket" AND protoPayload.serviceName="storage.googleapis.com" AND protoPayload.methodName="storage.buckets.list"
Answer Description
Cloud Storage object-listing operations are logged as Data Access entries with protoPayload.methodName="storage.objects.list" and protoPayload.serviceName="storage.googleapis.com". Restricting resource.type to gcs_bucket scopes the query to bucket-level operations, and specifying logName="projects/finance-prod/logs/cloudaudit.googleapis.com%2Fdata_access" guarantees that only the Data Access audit log is searched. The other options are incorrect because they either query the Admin Activity log, target the wrong resource type or method, or omit the logName filter and therefore risk returning entries from other log categories.
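For reference, the investigator could run the winning filter roughly as follows; the --freshness flag limits results to the last day.

gcloud logging read \
  'logName="projects/finance-prod/logs/cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket" AND protoPayload.serviceName="storage.googleapis.com" AND protoPayload.methodName="storage.objects.list"' \
  --project=finance-prod --freshness=1d --format=json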
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of Data Access audit logs in Google Cloud?
Why is the `resource.type` field important in advanced log filters?
What is the difference between the `data_access` and `activity` audit logs?
An EU-based healthcare provider is migrating a 100 TB PACS image archive to Google Cloud. GDPR and a national regulation require that
- all patient data remain within the EU at rest and during processing, and
- the encryption keys protecting that data must also stay in the same EU jurisdiction and be fully controlled by the customer.
The analytics team occasionally launches GPU-accelerated jobs against the archive but wants to minimize operational overhead. Which Google Cloud configuration best satisfies these compliance constraints?
Create a dual-region Cloud Storage bucket "us-east4-northamerica-northeast1" protected by an external key manager located in Frankfurt, and process data on GPUs in "europe-west3".
Create a Cloud Storage regional bucket in "europe-west3" (Frankfurt) protected by a customer-managed Cloud KMS key in the same region, and run GPU-enabled Compute Engine instances in "europe-west3" when analytics is required.
Create a Cloud Storage multi-region bucket in "EU" using Google-managed encryption keys, and run GPU workloads on Compute Engine instances in "europe-west1" (Belgium).
Provision a Filestore instance in "us-central1" encrypted with a CMEK key stored in "europe-west3", and launch GPU jobs in "europe-west1".
Answer Description
Using Cloud Storage in the "europe-west3" (Frankfurt) region guarantees that stored objects never leave the EU, whereas a multi-region such as "EU" still meets data-residency needs but would scatter replicas across several EU countries and provide no single-region locality for tightly coupled GPU workloads. Enabling CMEK with a Cloud KMS key ring that is also in "europe-west3" keeps the cryptographic keys inside the same jurisdiction and under the customer's control. Compute Engine VMs with attached NVIDIA GPUs launched in the same region ensure that temporary processing stays inside the EU and avoids cross-region egress. Filestore in the US or any configuration in a non-EU region would violate data-sovereignty rules, and relying on Google-managed encryption keys does not allow the customer to guarantee where keys are stored or who can access them.
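A minimal sketch of the winning configuration, using placeholder names (my-project, pacs-keys, pacs-key, and the bucket); note that the Cloud Storage service agent must also be granted the CryptoKey Encrypter/Decrypter role on the key before the bucket can use it.

# Create the key ring and key in europe-west3, then a regional bucket that uses the key by default
gcloud kms keyrings create pacs-keys --location=europe-west3
gcloud kms keys create pacs-key --keyring=pacs-keys --location=europe-west3 --purpose=encryption
gcloud storage buckets create gs://pacs-archive-example \
  --location=europe-west3 \
  --default-encryption-key=projects/my-project/locations/europe-west3/keyRings/pacs-keys/cryptoKeys/pacs-key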
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a CMEK in Google Cloud?
Why is a regional bucket better for compliance in this scenario compared to a multi-region bucket?
How do NVIDIA GPUs on Google Cloud benefit healthcare analytics workloads?
A multinational retailer stores raw event logs in several Google Cloud projects. Some logs reside in regional Cloud Storage buckets, while others are streamed into BigQuery datasets by a Dataflow pipeline. The security team must continuously discover any newly ingested credit-card or government-ID data, calculate risk scores, and view results in one central Sensitive Data Protection dashboard. They want to avoid scheduling a separate inspection job for every individual bucket or dataset, but are willing to perform a one-time setup in each project. Which approach best meets these requirements?
Use Cloud Asset Inventory to export metadata for all storage resources and query it in BigQuery to locate sensitive fields.
Create individual Cloud DLP inspection jobs for each Cloud Storage bucket and BigQuery dataset, then aggregate the findings in Cloud Logging.
Enable organization-level Sensitive Data Protection discovery for Cloud Storage, and in every project configure a BigQuery data profile scan so all datasets are automatically profiled and results appear in the Sensitive Data Protection dashboard.
Attach Data Catalog policy tags to BigQuery tables and rely on policy insights to detect new credit-card or government-ID data automatically.
Answer Description
Enabling organization-level Sensitive Data Protection discovery automatically profiles Cloud Storage buckets across all projects. To cover BigQuery data, each project that contains BigQuery resources needs a single data profile scan configuration, which automatically profiles every existing and newly created dataset in that project. Together, these settings provide continuous discovery, built-in risk scoring, and centralized visibility in the Sensitive Data Protection dashboard without requiring per-bucket or per-table jobs. Relying solely on manual inspection jobs, Cloud Asset Inventory, or Data Catalog policy tags would either miss continuous discovery, lack risk scoring, or require granular job scheduling.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Google Cloud Sensitive Data Protection?
How does a BigQuery data profile scan work?
What is the benefit of organization-level Sensitive Data Protection discovery?
Your organization migrated to Cloud Identity and currently has four staff members who perform daily administration using Super Administrator privileges. A recent internal risk assessment highlights that this practice violates least-privilege principles and exposes the company if any of those credentials are phished. Security wants to (1) restrict routine use of Super Administrator power, (2) guarantee emergency recovery if the primary IdP or MFA service is unavailable, and (3) keep an auditable trail with minimal day-to-day overhead. Which strategy best satisfies all three goals?
Create two dedicated break-glass Super Administrator accounts that are excluded from SSO and 2-Step Verification, secured with long random passwords stored in an offline safe; assign the four staff members delegated admin roles matching their job duties and monitor any logins to the break-glass accounts.
Keep one existing Super Administrator account for everyday work and enforce FIDO2 security-key MFA on it; demote the other three to Help Desk Admin and rely on Access Context Manager to restrict their logins to corporate IP ranges.
Rotate the passwords of all four Super Administrator accounts monthly, require phone-based 2-Step Verification, and configure an automated rule that unlocks a fifth Super Administrator account if no admin logs in for 48 hours.
Enable Privileged Access Manager so the four staff members request time-bound elevation to the Super Administrator role whenever needed, and disable all standing Super Administrator accounts.
Answer Description
Limiting exposure means removing standing Super Administrator privileges from day-to-day identities and granting them only the narrow administrative roles required for routine tasks. Google recommends maintaining at least one (preferably two) emergency or "break-glass" Super Administrator accounts that are not subject to SSO or MFA enforcement, use strong randomly generated passwords stored offline, and are monitored for any sign-in activity. This arrangement ensures recovery even if the primary IdP or MFA infrastructure is unavailable, satisfies least-privilege by removing broad rights from everyday accounts, and keeps audit logs for the seldom-used break-glass logins. The other options either keep routine Super Administrator use, depend on the availability of MFA/IdP for recovery, or rely on features (like automatic unlocking) that do not provide controlled, auditable emergency access.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is 'least-privilege' and why is it important in cloud environments?
What is a break-glass account and why would it bypass MFA and SSO?
What is the role of audit logs in monitoring break-glass account usage?
Your security team mandates that every Compute Engine VM start from a CIS-hardened custom image that is automatically rebuilt when either (1) Google posts a new debian-11 base image or (2) approved hardening scripts change in Cloud Source Repositories. The pipeline must apply the scripts, install the latest patches, halt on any high-severity CVEs, and keep only the three newest compliant images. Which design delivers this with the least manual effort?
Deploy VMs with Deployment Manager that reference the publicly available debian-11-csi-hardened image family, attach Cloud Armor policies, and enable Shielded VM integrity monitoring to detect vulnerabilities. Allow teams to select any version within that family.
Enable OS patch management in VM Manager to run a weekly patch job and store the hardening scripts in a Cloud Storage bucket. Have each VM execute the scripts from startup-script metadata and rely on rolling updates in managed instance groups to phase in patched VMs.
Create two Cloud Build triggers: a Cloud Source Repositories trigger for the hardening branch and a Cloud Scheduler-initiated Pub/Sub trigger that runs daily. Both invoke a Cloud Build YAML file that runs Packer to build a shielded image from the latest debian-11 family, applies the hardening scripts, updates all packages, executes an in-pipeline vulnerability scanner that fails the build on any high or critical CVE, publishes the image to a custom family, and then deletes images in that family beyond the three newest.
When Google releases a new debian-11 image, manually create a local VM, run the hardening scripts, export the disk to Cloud Storage, and import it as a custom image. Mark the image as deprecated after three newer images exist.
Answer Description
Using Cloud Build triggers backed by Cloud Source Repositories and Cloud Scheduler meets the automation requirement. The pipeline invokes Packer to create a new image from the latest debian-11 base, applies the hardening scripts, updates packages, and runs a vulnerability-scanner container that exits non-zero if any high or critical CVEs are detected, stopping the build. On success, the build publishes the image to a custom family and a final step deletes older images so that only the three most recent remain. Approaches that rely on VM-side patch jobs, manual procedures, or simply referencing a public hardened family fail to satisfy one or more stated constraints (such as creating a gated golden image or enforcing retention).
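A partial gcloud sketch of the two triggers; repository, topic, schedule, and file names are placeholders, and a Pub/Sub-based build trigger subscribed to the same topic would invoke the identical cloudbuild.yaml.

# Trigger 1: rebuild when the hardening scripts change in Cloud Source Repositories
gcloud builds triggers create cloud-source-repositories \
  --name=hardening-change \
  --repo=hardening-scripts \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml

# Trigger 2: a daily rebuild driven by Cloud Scheduler publishing to Pub/Sub
gcloud pubsub topics create image-rebuild
gcloud scheduler jobs create pubsub nightly-image-rebuild \
  --schedule='0 3 * * *' \
  --topic=image-rebuild \
  --message-body='rebuild' \
  --location=us-central1
# (A Pub/Sub build trigger listening on the image-rebuild topic runs the same pipeline.)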
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Packer, and why is it used in this design?
How do Cloud Build triggers work in this solution?
What is the role of the vulnerability scanner in the pipeline?
A financial services firm must replicate transaction data in real time from its New Jersey data center to a Google Cloud deployment in us-east1. The replication peaks at 18 Gbps and cannot traverse the public internet to satisfy regulatory controls. Latency should be minimized, and the data is already encrypted at the application layer, so link-level encryption is unnecessary. Which connectivity option meets these requirements while avoiding unnecessary cost and complexity?
Purchase two 10-Gbps Partner Interconnect VLAN attachments from different service providers and protect traffic with HA VPN over the links.
Order a single 100-Gbps Dedicated Interconnect circuit and enable MACsec encryption on the link.
Set up an HA VPN with four VPN tunnels over dual internet service providers to achieve 18 Gbps of encrypted bandwidth.
Provision two 10-Gbps Dedicated Interconnect connections in separate metropolitan zones and use them without additional VPN or MACsec encryption.
Answer Description
The firm requires (1) at least 18 Gbps of throughput, (2) a private path that stays off the public internet, and (3) minimal latency without additional encryption overhead. Two physically diverse 10-Gbps Dedicated Interconnect circuits deliver an aggregate 20 Gbps, meet the 99.9% availability SLA, and avoid the extra equipment and operational overhead of VPN or MACsec. HA VPN alone traverses the internet, violating the requirement. A 100-Gbps Dedicated Interconnect with MACsec far exceeds bandwidth needs and adds cost and configuration effort. Partner Interconnect adds a service-provider middle-mile that can introduce additional latency and operational dependency, and layering HA VPN adds complexity not needed for already-encrypted data. Therefore, redundant 10-Gbps Dedicated Interconnect circuits are the most appropriate and cost-effective choice.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Dedicated Interconnect in Google Cloud?
How does Dedicated Interconnect ensure low latency?
What is the SLA for Dedicated Interconnect in Google Cloud?
Your organization is starting a three-month project with an external research institute. The researchers authenticate with their own Azure Active Directory tenant, but they need temporary access to invoke Cloud Run services and read specific Cloud Storage buckets in your Google Cloud project. Company policy forbids creating Google accounts for them and bans distributing any long-lived credentials. Which approach best satisfies all requirements while following least-privilege practices?
Provision temporary Google Workspace accounts for the researchers, place them in a group with the necessary IAM roles, and enforce two-step verification on those accounts.
Create a workforce identity pool that trusts the institute's Azure AD as an OIDC provider, map researcher groups to narrowly scoped IAM roles on the project, and let researchers obtain short-lived Google credentials on demand.
Use Google Cloud Directory Sync to import the institute's Azure AD users into Cloud Identity and enable SAML-based single sign-on for them.
Generate user-managed keys for a dedicated service account that has the required IAM roles and distribute the keys to the institute's researchers for the duration of the project.
Answer Description
Workforce Identity Federation lets administrators create a workforce identity pool that trusts an external IdP such as Azure AD (via OIDC or SAML). Researchers sign in to Azure AD, then exchange the resulting security token for short-lived Google Cloud credentials that map to carefully scoped IAM roles on the target project. Because no Google accounts or user-managed service-account keys are created, the solution aligns with the mandate to avoid provisioning accounts and long-lived secrets. Google Cloud Directory Sync and Google Workspace accounts would still create identities in Cloud Identity, while handing out service-account keys would violate the ban on persistent credentials.
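A rough gcloud sketch of that federation setup; the organization ID, Azure AD tenant ID, client ID, pool name, and group ID are placeholders, and additional provider flags (for example, for browser-based sign-in) may be required in practice.

# Create a workforce identity pool that trusts the institute's Azure AD tenant via OIDC
gcloud iam workforce-pools create research-pool \
  --organization=123456789012 --location=global \
  --display-name="External researchers"
gcloud iam workforce-pools providers create-oidc azure-ad \
  --workforce-pool=research-pool --location=global \
  --issuer-uri="https://login.microsoftonline.com/TENANT_ID/v2.0" \
  --client-id=AZURE_APP_CLIENT_ID \
  --attribute-mapping="google.subject=assertion.sub,google.groups=assertion.groups"

# Grant narrowly scoped roles to the mapped researcher group
gcloud projects add-iam-policy-binding my-project \
  --role=roles/run.invoker \
  --member="principalSet://iam.googleapis.com/locations/global/workforcePools/research-pool/group/RESEARCHERS_GROUP_ID"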
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Workforce Identity Federation?
How does OIDC integrate with Google Cloud?
What are IAM roles and how do they ensure least-privilege access?
A retailer's nightly Beam pipeline launches from a Cloud Composer environment and runs as a dedicated service account on Dataflow workers. The workers must read CSV files from an input bucket, load the transformed records into an existing BigQuery dataset, and write job logs to Cloud Logging. The service account currently holds the Editor role on both involved projects, which violates least-privilege policy. Which replacement IAM grant set meets the functional needs while eliminating overly permissive roles?
Grant roles/dataflow.worker on the Dataflow project, roles/storage.objectViewer on the input bucket, roles/bigquery.dataEditor on the target dataset, and roles/logging.logWriter on the project.
Give the service account roles/dataflow.admin on the project, roles/storage.legacyBucketReader on the bucket, and roles/bigquery.user on the project.
Assign roles/storage.admin and roles/bigquery.admin at the project level so the pipeline can manage all storage and BigQuery resources without further changes.
Replace Editor with roles/owner on the Dataflow project to cover all required permissions and future growth.
Answer Description
Granting Dataflow Worker on the project lets the service account start and manage Dataflow jobs. Granting Storage Object Viewer on only the input bucket is sufficient for reading objects without allowing writes or bucket administration. Granting BigQuery Data Editor on the specific dataset lets the pipeline create and append tables but not administer the entire project. Granting Logging Log Writer on the project allows log export without additional privileges. The other options continue to use broad Owner, Editor, Admin, or Dataflow Admin roles, all of which provide permissions far beyond what the pipeline requires.
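The correct grant set could be applied roughly as follows, assuming a hypothetical service account pipeline-sa@etl-project.iam.gserviceaccount.com and bucket gs://retail-input; the dataset-level BigQuery Data Editor grant is applied on the dataset itself (for example, through the console or the bq tool) rather than at the project level.

SA="serviceAccount:pipeline-sa@etl-project.iam.gserviceaccount.com"

# Project-level roles needed to run Dataflow workers and write logs
gcloud projects add-iam-policy-binding etl-project --member="$SA" --role="roles/dataflow.worker"
gcloud projects add-iam-policy-binding etl-project --member="$SA" --role="roles/logging.logWriter"

# Read-only access limited to the input bucket
gcloud storage buckets add-iam-policy-binding gs://retail-input \
  --member="$SA" --role="roles/storage.objectViewer"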
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the least-privilege policy, and why is it important in IAM roles?
What does the roles/dataflow.worker IAM role allow in Dataflow?
What is the roles/storage.objectViewer IAM role used for in this pipeline?
Your security team must allow the external vendor support group (the vendor-support@example.com Google Group) to query a sensitive BigQuery dataset in the prod-analytics project, but only when requests come from the vendor's on-premises public CIDR range 203.0.113.0/24 and only on weekdays between 09:00 and 17:00 (America/New_York). The organization wants to avoid adding new proxy or networking components and must follow the principle of least privilege. Which approach best meets these requirements?
Configure an Access Context Manager service perimeter that specifies the vendor's IP range and business-hours access level, then grant the vendor-support@example.com group the BigQuery Data Viewer role at the project level without additional conditions.
Create a VPC Service Controls perimeter around the prod-analytics project and allow ingress only from 203.0.113.0/24 during business hours.
Add an IAM policy binding on the dataset that grants the BigQuery Data Viewer role to the vendor-support@example.com group with a condition limiting access to requests from 203.0.113.0/24 and to times between 09:00 and 17:00 on weekdays.
Define a custom BigQuery Viewer role, assign it to the vendor-support@example.com group, and require users to access the dataset through Cloud Identity-Aware Proxy restricted to the vendor's IP range and schedule.
Answer Description
Google Cloud IAM Conditions let you attach an attribute-based Boolean expression to an individual role binding. The condition language supports both request.ip and request.time attributes, so you can restrict the BigQuery Data Viewer role to apply only when the caller's source IP is within 203.0.113.0/24 and the access time falls between 09:00 and 17:00 on weekdays. This enforces the required constraints while granting the minimal BigQuery Data Viewer permissions to the vendor group and does not require any additional infrastructure.
VPC Service Controls service perimeters restrict data egress but cannot enforce time-of-day constraints. Identity-Aware Proxy governs web access to applications, not direct BigQuery API calls, and a custom role without conditions would lack the necessary contextual controls. Access Context Manager service perimeters also cannot grant BigQuery IAM roles; they only define network and device restrictions for requests that are already authorized by IAM, so they would still need an IAM binding with the proper condition. Therefore, an IAM conditional role binding is the correct and most straightforward solution.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are IAM Conditions in Google Cloud?
What is the principle of least privilege?
How does request.ip and request.time work in IAM Conditions?
Your organization's GitHub Actions pipeline builds container images and pushes them to Artifact Registry in a Google Cloud project. The workflow currently authenticates with a JSON key for a user-managed service account, but new policy mandates that no long-lived Google-issued credential may exist outside Google Cloud. Short-lived OAuth 2.0 access tokens (≤1 hour) must be generated just-in-time from the workflow without human interaction. Which solution best meets these requirements while respecting least privilege?
Store the existing JSON service-account key in Secret Manager and configure the workflow to fetch the key at runtime, rotating the key every seven days with Cloud Scheduler.
Place Artifact Registry into a VPC Service Controls perimeter and add the GitHub runners' IP range to an access level, removing the need for service account credentials during image pushes.
Create a workload identity pool with a GitHub OIDC provider and allow the pool to impersonate a minimally scoped service account, so the workflow exchanges its GitHub OIDC token for a short-lived Google Cloud access token at runtime.
Run gcloud auth application-default login locally, commit the generated Application Default Credentials file that contains a refresh token, and let the workflow exchange the refresh token for one-hour access tokens when needed.
Answer Description
Workload Identity Federation lets external workloads (including GitHub Actions) exchange an external OIDC token for a short-lived Google Cloud access token, eliminating the need to store a service-account key. When you create a workload identity pool and a GitHub provider, GitHub issues an OIDC token at build time that Google's Security Token Service exchanges for an access token valid for up to one hour. Granting the pool permission to impersonate a narrowly scoped service account maintains least privilege. Using a gcloud-generated refresh token would leave a long-lived credential in the repository, violating policy. VPC Service Controls protect against data exfiltration but do not provide authentication. Storing and rotating the JSON key in Secret Manager still relies on a long-lived key and does not meet the short-lived-credential requirement.
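A condensed sketch of that federation setup; the project number, repository, pool, and service account names are placeholders, and the workflow itself would typically use the google-github-actions/auth action to perform the token exchange.

# Pool and OIDC provider that trusts GitHub's token issuer, restricted to one repository
gcloud iam workload-identity-pools create github-pool \
  --location=global --display-name="GitHub Actions"
gcloud iam workload-identity-pools providers create-oidc github-provider \
  --location=global --workload-identity-pool=github-pool \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository" \
  --attribute-condition="assertion.repository=='my-org/my-repo'"

# Allow identities from that repository to impersonate a minimally scoped service account
gcloud iam service-accounts add-iam-policy-binding image-pusher@my-project.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="principalSet://iam.googleapis.com/projects/123456789012/locations/global/workloadIdentityPools/github-pool/attribute.repository/my-org/my-repo"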
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Workload Identity Federation in Google Cloud?
How do OIDC tokens work in GitHub Actions pipelines?
What is the Security Token Service (STS) in Google Cloud?
Your organization has migrated to Google Workspace and must hard-enforce 2-Step Verification (2SV) for every user in the Finance organizational unit (OU) within 30 days, while leaving 2SV optional for all other OUs that include a break-glass super-administrator account. Which Admin console configuration best meets the requirement with the least operational effort?
Add all Finance users to a Google Group, create a Context-Aware Access level for that group, and configure the level to require 2SV before allowing access to Google services.
Enable the Advanced Protection Program for the Finance OU while simultaneously disabling 2SV for the top-level organization.
Navigate to Security > Authentication > 2-Step Verification, set Enforcement to "On" for the Finance OU only, keep Enforcement "Off" at the top-level organization, and ensure 2SV remains allowed for all users.
Generate and distribute backup verification codes to Finance users and keep 2SV Enforcement "Off"; instruct them to use the codes to sign in.
Answer Description
2-Step Verification enforcement can be scoped to any child organizational unit or Google Group. By turning on enforcement only on the Finance OU, its users are forced to enroll after the grace period you define, while OUs higher in the hierarchy (including the one that contains the break-glass super-administrator) remain unaffected, because child OU settings override inherited settings. Advanced Protection, context-aware access, or relying on backup codes do not themselves impose mandatory 2SV enrollment for a specific OU.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is 2-Step Verification (2SV)?
What is the significance of Organizational Units (OUs) in Google Workspace?
Why is a break-glass super-admin account needed?
Your company operates over 150 Google Cloud projects in a single organization. The security operations team must centrally activate Security Command Center (SCC) so they can manage detectors, create mute rules, and view findings across all projects. Individual application teams should have read-only visibility into findings limited to their own projects. What is the most efficient way to configure SCC and IAM to meet these requirements?
Enable the SCC Standard tier in every project. Grant the security operations team the Logging Admin role at the organization level and grant application teams the Logging Viewer role on their projects.
Enable SCC Premium at the organization level. Grant the security operations team the Security Center Admin role at the organization level, and grant each application team the Security Center Findings Viewer role on only their respective projects.
Enable SCC Premium at the organization level. Grant the security operations team the Project Owner role on all projects and grant application teams the Security Center Source Admin role at the organization level.
Enable SCC Premium separately in each project using automation. Grant the security operations team the Security Center Admin role on every project and let application teams inherit Viewer permissions from the organization.
Answer Description
Activating Security Command Center at the organization level provides a single control plane and automatically onboards every current and future project, eliminating the need to manage per-project activations. Granting the security operations team the Security Center Admin role at the organization level allows them to configure services, create mute rules, and view all organization-wide findings. Giving each application team the Security Center Findings Viewer role only on its own projects limits access to read-only visibility for just those resources. The alternative options either activate SCC separately in every project, which introduces unnecessary operational overhead, use the Standard tier that lacks advanced detectors, or assign overly broad or incorrect IAM roles that fail to enforce least privilege.
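The IAM portion of the correct option might look like this, with placeholder organization ID, project ID, and group addresses:

# Organization-wide administration for the security operations team
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="group:secops@example.com" \
  --role="roles/securitycenter.admin"

# Read-only findings visibility for an application team, scoped to its own project
gcloud projects add-iam-policy-binding team-a-prod \
  --member="group:team-a@example.com" \
  --role="roles/securitycenter.findingsViewer"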
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Security Command Center (SCC) in Google Cloud?
What is the difference between SCC Standard and SCC Premium?
What does the Security Center Admin role allow in SCC?
A software team is building a cross-platform mobile app that lets Google Workspace users view and update objects in their own Cloud Storage buckets. Security has mandated the following:
- The app must never embed or distribute long-lived Google credentials.
- Each user must grant only the minimum necessary permissions.
- Users must be able to withdraw the app's access at any time without changing their passwords.
Which approach best satisfies all requirements?
Embed an API key restricted to Cloud Storage in the application code and rotate the key monthly.
Use Workload Identity Federation with a public identity pool that maps each device ID to a Storage service account.
Implement the OAuth 2.0 authorization-code flow and request only the Cloud Storage read/write scope, storing the refresh token securely on the backend.
Package a dedicated service account key with the mobile app and grant it the Storage Object Admin IAM role.
Answer Description
OAuth 2.0's three-legged (authorization-code) flow lets the user authenticate with Google and approve a specific set of scopes (for example, devstorage.read_write); the flow then returns a short-lived (≈1-hour) access token plus an optional refresh token. No static credentials are shipped in the binary, and the user can later revoke the app's consent from their Google Account, instantly invalidating the refresh token. Service account keys or API keys are long-lived and hard to revoke per user, while Workload Identity Federation addresses non-human workloads, not per-user delegation in a consumer mobile app.
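As an illustration only: the authorization request carries the narrow Cloud Storage scope, and the backend later exchanges the returned code for tokens. The client ID, secret, and redirect URI below are placeholders; a mobile client would normally use PKCE instead of a client secret.

# Authorization URL requesting only the read/write Cloud Storage scope:
# https://accounts.google.com/o/oauth2/v2/auth?client_id=CLIENT_ID
#   &redirect_uri=com.example.app:/oauth2redirect&response_type=code
#   &scope=https://www.googleapis.com/auth/devstorage.read_write&access_type=offline

# Backend exchange of the authorization code for a short-lived access token (plus refresh token)
curl -s https://oauth2.googleapis.com/token \
  -d grant_type=authorization_code \
  -d code="$AUTH_CODE" \
  -d client_id="$CLIENT_ID" \
  -d client_secret="$CLIENT_SECRET" \
  -d redirect_uri="com.example.app:/oauth2redirect"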
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is OAuth 2.0's authorization-code flow?
Why is embedding long-lived credentials in the app a security risk?
What is the difference between OAuth 2.0 and Workload Identity Federation?
You manage a Cloud Storage bucket that receives daily transaction CSV files in the Standard storage class. The finance team requires two automated controls: (1) minimize storage costs by moving files to a lower-cost class 30 days after upload, and (2) permanently remove the files exactly one year after they have been moved. Which lifecycle rule configuration satisfies both requirements while following Google-recommended lifecycle actions?
Add a SetStorageClass action that changes objects to Nearline when Age = 30 days, plus a Delete action that removes objects when Age = 395 days.
Add a single Delete action when Age = 365 days; no additional rules are needed.
Add a SetStorageClass action when Age = 30 days and another SetStorageClass action when Age = 365 days.
Add a Delete action when Age = 30 days, and a SetStorageClass action to Nearline when Age = 395 days.
Answer Description
Cloud Storage lifecycle management supports two actions that are relevant here: SetStorageClass (to transition objects to a different storage class) and Delete (to remove objects). To cut costs after 30 days, you configure a SetStorageClass action that transitions objects from Standard to Nearline when their Age condition reaches 30 days. To meet the 12-month retention that starts at that point, you add a Delete action with an Age condition of 395 days (30 + 365). This ensures objects live 30 days in Standard, 365 additional days in Nearline, and are then deleted. Using Delete first, using SetStorageClass to delete data, or using only one type of action would fail to meet either the cost-optimization or retention requirement.
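One way to express the winning rule set, using a placeholder bucket name:

cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"}, "condition": {"age": 30}},
    {"action": {"type": "Delete"}, "condition": {"age": 395}}
  ]
}
EOF
gcloud storage buckets update gs://finance-transactions-example --lifecycle-file=lifecycle.json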
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What lifecycle management actions are available in Cloud Storage?
What are the differences between Standard and Nearline storage classes?
Why is the Delete action set to 395 days and not 365 days?
Your organization hosts all finance workloads inside a dedicated Google Cloud folder. Compliance now requires that any request to Cloud Storage APIs for projects in this folder be permitted only when the caller is either (a) coming from one of your on-premises NAT IP address ranges or (b) using a company-managed, encrypted device that meets Google endpoint-verification standards. You must enforce this control centrally without changing individual bucket IAM policies and ensure that any future projects created in the finance folder automatically inherit the restriction. What should you do?
Attach a Cloud Armor security policy to every finance bucket's JSON API endpoint, restricting traffic to the trusted IP ranges and permitting requests only from devices presenting valid endpoint-verification headers.
Configure VPC firewall rules in each finance project that only allow egress from approved corporate IP ranges and require mutual TLS with client certificates from managed devices.
Create an organization-level Access Policy. Define a custom access level that allows either the trusted on-premises CIDR ranges or compliant, company-managed devices. Create a VPC Service Controls perimeter that includes the finance folder and add the access level to the perimeter's ingress rules.
Add an IAM conditional binding at the organization level that grants storage.objectViewer to all finance users only when request.ip and request.device attributes match the corporate policy.
Answer Description
Create an organization-wide access policy in Access Context Manager and define a custom access level that allows requests originating either from the trusted on-premises CIDR ranges or from devices that satisfy the required endpoint-verification attributes. Then create a VPC Service Controls service perimeter that protects Cloud Storage, add the finance folder to the perimeter so all present and future projects are included, and attach the access level to the perimeter's ingress rules. Any request coming from outside the perimeter must now meet the specified network or device attributes before it is allowed.
VPC firewall rules apply to traffic to or from VM instances and cannot validate device posture or protect Google-managed service APIs. Cloud Armor secures HTTP(S) traffic through external load balancers and cannot filter direct Cloud Storage JSON or XML API calls, nor can it evaluate device compliance. IAM conditional bindings do inherit to child projects, but IAM Conditions cannot reference device compliance attributes and would require role-specific bindings that do not universally cover all Cloud Storage access paths, making them insufficient for this requirement.
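A simplified sketch of that setup; the access policy ID, project number, CIDR range, and level spec below are placeholders, and in practice every finance project (current and future) must be included as a perimeter resource.

# Access level: trusted corporate CIDR OR a compliant, company-owned encrypted device
cat > finance-access.yaml <<'EOF'
- ipSubnetworks:
  - 198.51.100.0/24
- devicePolicy:
    requireCorpOwned: true
    allowedEncryptionStatuses:
    - ENCRYPTED
EOF
gcloud access-context-manager levels create finance_trusted \
  --policy=123456789 --title="Finance trusted access" \
  --basic-level-spec=finance-access.yaml --combine-function=OR

# Perimeter protecting Cloud Storage for the finance projects
gcloud access-context-manager perimeters create finance_perimeter \
  --policy=123456789 --title="Finance perimeter" \
  --resources=projects/111111111111 \
  --restricted-services=storage.googleapis.com \
  --access-levels=finance_trusted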
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is VPC Service Controls?
What is Access Context Manager?
How does endpoint verification work in Google Cloud?
Your organization is migrating sensitive genomics data to Cloud Storage. A regional privacy law requires that the encryption keys must never leave the company-owned, on-premises HSM cluster, and security policy mandates that any dataset can be rendered unreadable at once by disabling the on-prem key. Developers do not want to modify application code beyond selecting an encryption option for the bucket. Which Google Cloud approach best satisfies these requirements?
Enable Customer-Managed Encryption Keys backed by Cloud HSM for the bucket.
Protect the bucket with a Cloud External Key Manager (EKM) key and enable CMEK using the external key reference.
Configure CMEK with a software-backed symmetric key stored in Cloud KMS and rotate it quarterly.
Rely on Google default encryption and enforce Bucket Lock to prevent key access by Google personnel.
Answer Description
Because the keys must remain in an on-premises HSM and administrators need the ability to make data inaccessible by disabling that external key, Cloud External Key Manager (EKM) is the correct choice. EKM allows Cloud Storage objects to be protected by a key that resides and is operated entirely outside Google's infrastructure; turning off or destroying the external key immediately renders the data unreadable (crypto-shredding).
Using CMEK with Cloud KMS software keys would store keys in Google-managed software, violating the "never leave" requirement. CMEK with Cloud HSM keeps keys inside Google-hosted HSMs, still outside the organization's premises. Relying on Google default encryption gives Google full control of the keys and provides no immediate crypto-shred capability to the customer.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Cloud External Key Manager (EKM)?
How does crypto-shredding work with EKM?
What is the difference between CMEK and Cloud EKM?
Wow!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.