GCP Professional Cloud Security Engineer Practice Test
Use the form below to configure your GCP Professional Cloud Security Engineer Practice Test. The practice test can be limited to specific exam domains and objectives. You can choose between 5 and 100 questions and set a time limit.

GCP Professional Cloud Security Engineer Information
Overview
The Google Cloud Professional Cloud Security Engineer (PCSE) certification is designed for security professionals who architect and implement secure workloads on Google Cloud Platform (GCP). Earning the credential signals that you can design robust access controls, manage data protection, configure network security, and ensure regulatory compliance in cloud environments. Because Google frequently updates its security services—such as Cloud Armor, BeyondCorp Enterprise, Chronicle, and Confidential Computing—the PCSE exam expects you to demonstrate both conceptual depth and hands-on familiarity with the latest GCP features.
Exam Format and Content Domains
The exam is a two-hour test of multiple-choice and multiple-select questions, delivered through Kryterion's Webassessor platform either at a testing center or with online proctoring. Questions span five core domains:
- Configuring access within GCP (IAM, service accounts, organization policies)
- Configuring network security (VPC service controls, Cloud Load Balancing, Private Service Connect)
- Ensuring data protection (Cloud KMS, CMEK, DLP, Secret Manager)
- Managing operational security (logging/monitoring with Cloud Audit Logs, Cloud Monitoring, Chronicle)
- Ensuring compliance (risk management frameworks, shared-responsibility model, incident response)
Expect scenario-based questions that require selecting the “best” choice among many viable solutions, so practice with real-world architectures is critical.
Why Practice Exams Matter
Taking high-quality practice exams is one of the most efficient ways to close knowledge gaps and build test-taking stamina. First, sample questions expose you to Google's preferred terminology (for example, distinguishing between Cloud Armor "edge security policies" and "backend security policies"), so you aren't surprised by phrasing on test day. Second, timed drills simulate the exam's pacing, helping you learn to allocate roughly two minutes per question and flag tougher items for later review. Finally, detailed explanations turn each incorrect answer into a mini-lesson; over multiple iterations, you'll identify patterns (for instance, Google almost always recommends using service accounts over user credentials in automated workflows). Aim to score consistently above 85 percent on reputable practice sets before scheduling the real exam.
Final Preparation Tips
Combine practice exams with hands-on labs in Google Cloud Skills Boost (formerly Qwiklabs) to reinforce muscle memory; creating a VPC Service Controls perimeter once in the console and once via gcloud is more memorable than reading about it. Review the official exam guide and sample case studies, paying special attention to Google's security best-practice documents and whitepapers. In the final week, focus on weak areas flagged by your practice-exam analytics and skim release notes for any major security service updates. With a balanced regimen of study, labs, and realistic mock tests, you'll walk into the PCSE exam with confidence and a solid grasp of how to secure production workloads on Google Cloud.

Free GCP Professional Cloud Security Engineer Practice Test
- 20 Questions
- Unlimited time
- Configuring Access
- Securing communications and establishing boundary protection
- Ensuring data protection
- Managing operations
- Supporting compliance requirements
Your organization runs hundreds of projects. Cloud IDS threat detection (fed by Packet Mirroring) and VPC Flow Logs are enabled in every project. The security operations team wants to correlate IDS threat events with flow-level network metadata using familiar SQL queries. They must keep the data for 18 months and want to minimize operational overhead by avoiding custom ETL jobs or separate BigQuery datasets. Which solution best meets these requirements?
Stream both Cloud IDS and VPC Flow Logs to Pub/Sub, process them with a Dataflow pipeline that writes to BigQuery, and schedule a job to delete partitions older than 550 days.
Forward Cloud IDS alerts to Chronicle and export VPC Flow Logs to Cloud Storage; query the combined data through Chronicle's YARA-L interface.
Enable the Cloud IDS BigQuery export feature and add a second sink that exports VPC Flow Logs to the same BigQuery dataset; configure table partition expiration for 550 days.
Create an organization-level aggregated log sink that routes Cloud IDS and VPC Flow Logs into a dedicated log bucket, enable Log Analytics on that bucket, set the bucket retention to 550 days, and grant analysts read-only Logging IAM roles.
Answer Description
Both Cloud IDS logs and VPC Flow Logs are ingested into Cloud Logging. By creating an organization-level aggregated sink that routes all relevant log entries to a centralized log bucket, you guarantee a single storage location across projects. Upgrading that bucket to Log Analytics activates the built-in BigQuery execution engine, letting analysts run standard SQL directly against the logs without exporting them. The bucket's retention can be configured to any value between 1 and 3650 days, so setting it to roughly 550 days satisfies the 18-month archive requirement. Granting read-only Logging roles on the bucket enforces least-privilege access. The other options require managing separate BigQuery datasets or additional ETL pipelines, or rely on products such as Chronicle that do not natively satisfy the stated constraints.
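For reference, a minimal gcloud sketch of this pattern; the bucket, project, organization ID, and log filter below are placeholders to adapt:

```bash
# Central log bucket with Log Analytics enabled and an 18-month retention
gcloud logging buckets create sec-analytics-bucket \
  --project=central-logging-project \
  --location=global \
  --enable-analytics \
  --retention-days=550

# Aggregated organization-level sink that routes Cloud IDS threat logs and
# VPC Flow Logs from every project into that bucket
gcloud logging sinks create ids-flowlogs-sink \
  logging.googleapis.com/projects/central-logging-project/locations/global/buckets/sec-analytics-bucket \
  --organization=ORG_ID \
  --include-children \
  --log-filter='log_id("ids.googleapis.com/threat") OR log_id("compute.googleapis.com/vpc_flows")'

# Remember to grant the sink's writer identity roles/logging.bucketWriter on the destination bucket.
```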
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Log Analytics in Cloud Logging?
What is an aggregated log sink in GCP?
How does Packet Mirroring support Cloud IDS?
What is Cloud IDS and how does it integrate with Packet Mirroring?
What is an aggregated log sink in Google Cloud?
How does Log Analytics with a centralized log bucket enable SQL querying?
Your company hosts the public DNS zone corp.example in Cloud DNS. After investigating recent cache-poisoning attempts, the security team asks you to implement a control that allows validating recursive resolvers on the internet to cryptographically verify that the answers they receive for corp.example are authentic and untampered. The operations team wants a solution that minimizes ongoing key-management overhead for them. What should you do?
Enforce DNS over TLS for all clients and block UDP/53 on the corporate firewall to prevent on-path tampering of DNS responses.
Deploy secondary authoritative DNS servers in another project and front them with Cloud CDN so cached DNS responses remain available during outages.
Enable DNSSEC for the Cloud DNS managed zone, rely on Cloud DNS to create and automatically rotate the ZSK, manually manage the KSK, and publish the generated DS record with the domain registrar.
Enable Cloud DNS query logging and create Cloud Logging alerts to detect suspicious NXDOMAIN or SERVFAIL spikes indicating cache-poisoning attempts.
Answer Description
Turning on DNSSEC for the Cloud DNS public zone instructs Cloud DNS to sign each resource-record set with RRSIG records that validating resolvers can check against the zone's DNSKEY records, protecting against spoofing and cache poisoning. When DNSSEC is enabled, Cloud DNS generates both the Zone-Signing Keys (ZSKs) and the Key-Signing Key (KSK) and rotates the ZSKs automatically; KSK rollover remains a manual process, and you must publish the associated Delegation Signer (DS) record at your domain registrar to complete the chain of trust. Solutions based solely on encrypted transport (DoT/DoH), logging, or caching do not provide the cryptographic data-integrity guarantees required. Therefore, enabling DNSSEC with Cloud-managed ZSKs and manually publishing the DS record provides strong authenticity with minimal ongoing effort.
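A minimal gcloud sketch of the first two steps (the zone name is a placeholder):

```bash
# Turn on DNSSEC for the managed public zone; Cloud DNS generates the keys
# and signs the zone automatically
gcloud dns managed-zones update corp-example-zone --dnssec-state=on

# Inspect the generated DNSKEYs; the key-signing key's DS record is what you
# publish at the domain registrar
gcloud dns dns-keys list --zone=corp-example-zone
```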
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is DNSSEC and why is it important?
What is the difference between ZSK and KSK in DNSSEC?
What is cache poisoning in DNS and how does DNSSEC prevent it?
What is DNSSEC and how does it work?
Why is DNS over TLS (DoT) or DNS over HTTPS (DoH) insufficient for cache poisoning protection?
Your organization stores employee records in a BigQuery table. All staff must be able to run existing queries on the table, but only members of the "hr-analysts" group should see the SSN and Salary columns. Other users must receive NULLs for those two columns without modifying any queries or creating additional views. Which approach meets the requirement while following Google-recommended practices for column-level security?
Apply a row-level security policy that filters out SSN and Salary for non-HR users.
Create a Data Catalog taxonomy, assign policy tags to the SSN and Salary columns, and grant roles/datacatalog.categoryFineGrainedReader on those policy tags to the hr-analysts group only.
Encrypt the SSN and Salary columns with a dedicated CMEK key and grant Cloud KMS access only to the hr-analysts group.
Build an authorized view that omits SSN and Salary, share that view with all users, and revoke access to the underlying table.
Answer Description
BigQuery column-level security relies on Data Catalog policy tags. By creating a taxonomy, tagging the SSN and Salary columns, and granting the hr-analysts group the Data Catalog fine-grained reader role (roles/datacatalog.categoryFineGrainedReader) on those tags, only that group can read the tagged columns. Everyone else retains their existing table access but receives NULLs for the restricted columns. Authorized views, row-level security, or CMEK key permissions do not provide transparent, policy-based column masking for this scenario.
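As a rough sketch of how the column tagging looks in practice, assuming a taxonomy and policy tag already exist (all resource names below are placeholders): the tag is attached through a schema update, and the roles/datacatalog.categoryFineGrainedReader grant on the policy tag itself is then made in the BigQuery console or through the Data Catalog PolicyTagManager API.

```bash
# schema.json must contain the full table schema; only the relevant columns are shown here
cat > schema.json <<'EOF'
[
  {"name": "employee_id", "type": "STRING"},
  {"name": "ssn", "type": "STRING",
   "policyTags": {"names": ["projects/my-proj/locations/us/taxonomies/1234/policyTags/5678"]}},
  {"name": "salary", "type": "NUMERIC",
   "policyTags": {"names": ["projects/my-proj/locations/us/taxonomies/1234/policyTags/5678"]}}
]
EOF

bq update my-proj:hr_dataset.employees schema.json
```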
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Data Catalog taxonomies and policy tags?
How does roles/datacatalog.categoryFineGrainedReader enable column-level security?
Why are authorized views or row-level security policies insufficient for this use case?
What are Data Catalog policy tags?
What is the role of roles/datacatalog.categoryFineGrainedReader?
Why is column-level security preferred over authorized views or encryption in this case?
Your organization is hardening access to Vertex AI.
- The data-science team must be able to open managed notebooks, launch custom training jobs, and register the resulting Model artifacts. They must not be able to deploy or delete models, update Endpoints, or change IAM policies.
- The MLOps team is responsible for production serving. They need to deploy models to Endpoints and manage traffic splits, but they must not create or modify Datasets.
Which assignment of predefined IAM roles best enforces the required least-privilege separation?
Grant both groups the role roles/aiplatform.admin and rely on Cloud Audit Logs for accountability.
Grant the data-science group the role roles/aiplatform.user, and grant the MLOps group the role roles/aiplatform.deploymentResourceAdmin.
Grant the data-science group roles/aiplatform.viewer, and grant the MLOps group roles/aiplatform.admin.
Grant the data-science group the project-level role roles/editor, and grant the MLOps group roles/aiplatform.user.
Answer Description
The Vertex AI User role lets a principal create and run training pipelines and register Model resources, but it does not grant permissions to deploy models, update Endpoints, or set IAM policies. This satisfies the data-science requirements. The Vertex AI Deployment Resource Admin role is limited to managing online serving resources (Endpoints and deployed models) and does not allow changes to Datasets or training resources, meeting the MLOps needs. Granting broader roles such as Editor or Vertex AI Admin would violate the least-privilege objective, while Viewer would prevent the data-science team from running experiments.
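A minimal sketch of the two bindings (project and group addresses are placeholders):

```bash
gcloud projects add-iam-policy-binding ml-prod-project \
  --member="group:data-science@example.com" \
  --role="roles/aiplatform.user"

gcloud projects add-iam-policy-binding ml-prod-project \
  --member="group:mlops@example.com" \
  --role="roles/aiplatform.deploymentResourceAdmin"
```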
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the roles/aiplatform.user IAM role?
What is the difference between roles/aiplatform.user and roles/aiplatform.deploymentResourceAdmin?
Why are predefined IAM roles preferred for Vertex AI access control?
What is the Vertex AI User role?
What is the Vertex AI Deployment Resource Admin role?
Why is the least-privilege principle important in IAM roles?
Your organization uses on-premises Microsoft Active Directory as the authoritative source for user identities. Google Cloud Directory Sync (GCDS) runs every night to keep Google Workspace in sync. A project manager asks whether the help-desk team can update an employee's phone number only in the Google Admin console and rely on the next GCDS cycle to push that change back into Active Directory. What accurately describes how GCDS will behave in this situation?
GCDS can write attribute changes back to Active Directory if write-back is enabled in the synchronization profile.
The help-desk can enable the Cloud Directory API in the Admin console to allow GCDS to propagate Google Workspace edits to Active Directory on the next sync.
The change will remain only in Google Workspace because GCDS synchronizes data in one direction, from Active Directory to Google, without updating the LDAP source.
GCDS supports bidirectional synchronization for groups but not for individual user attributes like phone numbers.
Answer Description
GCDS is designed for one-way synchronization: it reads objects from an LDAP directory such as Active Directory and creates, updates, or suspends matching objects in Google Cloud or Google Workspace. It never writes data back to the LDAP source, regardless of configuration. Therefore, any modification made directly in Google Workspace, such as changing a phone number, remains only in Google Workspace; the value in Active Directory is unchanged. Statements suggesting bidirectional sync, group-only write-back, or the need for additional APIs to enable LDAP updates are incorrect because GCDS has no capability to modify the source directory.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Google Cloud Directory Sync (GCDS)?
Can GCDS perform bidirectional synchronization?
What are the typical use cases for GCDS?
Why is GCDS only capable of one-way synchronization?
If GCDS cannot write back to Active Directory, how should updates be handled?
Your company hosts the public DNS zone "contoso.com" in Cloud DNS. Security requires DNSSEC to protect against cache-poisoning attacks. You change the zone's dnssec_state from "off" to "on" using Terraform and select the RSASHA256 key algorithm. The apply completes and a key-signing key now appears in the Cloud DNS console, yet public resolvers still mark the zone as "insecure." What action must you take to finish the DNSSEC rollout?
Enable DNSSEC validation on every internal and external recursive resolver that queries the zone.
Manually add DNSKEY and RRSIG records to the zone file so validators can see the signatures.
Create an asymmetric key in Cloud KMS and upload its public portion to Cloud DNS as an external KSK.
Submit the DS record provided by Cloud DNS to the domain registrar so the .com parent zone publishes it.
Answer Description
Cloud DNS automatically publishes DNSKEY and RRSIG records after you enable DNSSEC, but the chain of trust is not complete until the parent zone (.com) advertises that the child zone is signed. You do this by adding the DS (Delegation Signer) record that Cloud DNS generates to the domain's registrar. Without that DS record, validating resolvers have no way to verify signatures, so the zone remains insecure. Manually creating DNSKEY/RRSIG records is unnecessary because Cloud DNS manages them. Client-side resolvers do not need special configuration beyond normal DNSSEC validation, and Cloud DNS does not support importing an external Cloud KMS key as a KSK.
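Once DNSSEC is on, the DS record can be read straight from the key-signing key; a hedged example (zone name and key ID are placeholders, so confirm the KSK's ID with `gcloud dns dns-keys list` first):

```bash
gcloud dns dns-keys describe 0 \
  --zone=contoso-com-zone \
  --format="value(ds_record())"
# Paste the resulting DS value into the registrar's DNSSEC settings for contoso.com
```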
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is DNSSEC and why is it important?
What is a DS record and how does it complete the DNSSEC chain of trust?
Why can’t DNSSEC validation be completed by just enabling DNSSEC in Cloud DNS?
What is DNSSEC?
What is a DS record and why is it important for DNSSEC?
Why does the parent zone (.com) need to publish the DS record?
Your financial-services firm must inspect payment-card records that reside in an on-premises Oracle database before a nightly ETL job loads them into BigQuery. You decide to use Sensitive Data Protection (Cloud DLP) hybrid inspection so that discovery happens while the data is still on-premises. From the options below, choose the statement that correctly reflects a mandatory configuration or workflow requirement for a hybrid inspection job.
Hybrid inspection supports only automatic sampling and therefore does not allow you to specify custom infoTypes or inspection rules.
You stream the on-premises records to the DLP job by invoking the projects.dlpJobs.hybridInspect API and specifying the job's resource name in each request.
You must configure a Cloud Pub/Sub topic that automatically triggers the DLP service to pull data from the on-premises source.
The data must first be exported to a Cloud Storage bucket, because hybrid inspection jobs can only inspect objects stored in Google Cloud.
Answer Description
After you create a hybrid inspection job, you must push data from the external system to the job by calling the projects.dlpJobs.hybridInspect (or projects.jobTriggers.hybridInspect) method. Each call is made to a URL that contains the DLP job's resource name (for example, projects/my-proj/dlpJobs/123456) and includes a HybridContentItem with the data and required metadata. The DLP service does not automatically pull data, does not require Cloud Storage staging, and still allows custom infoTypes. Pub/Sub can be used for notifications but is not required to start the inspection.
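A hedged sketch of a single hybridInspect call; the project, job ID, and payload are illustrative, and the request assumes an existing hybrid job plus credentials available to gcloud:

```bash
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/my-proj/dlpJobs/i-1234567890:hybridInspect" \
  -d '{
        "hybridItem": {
          "item": { "value": "4111 1111 1111 1111, John Doe" },
          "findingDetails": {
            "containerDetails": { "fullPath": "oracle://payments/cards/row-42" },
            "labels": { "env": "onprem" }
          }
        }
      }'
```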
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Sensitive Data Protection with Cloud DLP?
What is the projects.dlpJobs.hybridInspect API used for?
How is metadata used in DLP hybrid inspection jobs?
What is Sensitive Data Protection (Cloud DLP)?
How does the projects.dlpJobs.hybridInspect API work?
Why isn’t Cloud Storage or Pub/Sub required for hybrid inspection jobs?
Your security architecture requires VM workloads in your production VPC to call a third-party fraud-detection service hosted in a separate Google Cloud project. Traffic must remain on Google's private backbone, the service cannot expose a public IP, and VPC Network Peering is impossible because the networks overlap. The provider also wants to avoid updating routes or firewall rules when new consumer projects onboard. Which design meets these needs?
Configure Cloud VPN tunnels from each consumer VPC to the provider VPC and advertise the service subnet with dynamic routing.
Establish VPC Network Peering between each consumer VPC and the provider VPC, then expose the service through an internal TCP load balancer.
Assign an external IP address to the provider's load balancer and have consumers reach the service over HTTPS through Cloud Armor-protected endpoints.
Create a Private Service Connect endpoint in every consumer VPC that points to the provider's service attachment published behind an internal load balancer.
Answer Description
Private Service Connect lets a service producer publish a service attachment behind an internal load balancer. Each consumer project creates its own PSC endpoint, allocates a regional internal IP, and privately reaches the service over Google's backbone, with no public IPs or peering required. Because PSC translates consumer traffic to addresses in the producer's PSC NAT subnet, the producer avoids per-consumer route or firewall updates. Peering, Cloud VPN, or an external load balancer would violate the overlapping-CIDR or private-backbone constraints.
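On the consumer side, the endpoint is simply a reserved internal address plus a forwarding rule that targets the provider's service attachment; a hedged sketch with placeholder names:

```bash
gcloud compute addresses create fraud-svc-ip \
  --region=us-central1 \
  --subnet=prod-subnet \
  --addresses=10.10.0.50

gcloud compute forwarding-rules create fraud-svc-endpoint \
  --region=us-central1 \
  --network=prod-vpc \
  --address=fraud-svc-ip \
  --target-service-attachment=projects/provider-proj/regions/us-central1/serviceAttachments/fraud-detection
```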
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Private Service Connect (PSC)?
Why is VPC Network Peering not suitable for overlapping CIDR ranges?
How does Private Service Connect avoid per-consumer route and firewall updates?
What is Private Service Connect (PSC) in Google Cloud?
Why is VPC Network Peering not suitable when networks overlap?
How does Private Service Connect simplify onboarding for multiple consumer projects?
Your security operations team runs Google Cloud Security Command Center (SCC) Premium across the entire organization. Event Threat Detection has generated a high-severity finding that suggests credential exfiltration in several production projects. Per your incident-response agreement, on-call Mandiant analysts must receive the related log data within minutes so they can start triage, but they must not gain broad access to your internal logs. You also need to keep an untampered, long-term copy of all incident-related log entries for later forensic analysis. Which approach best meets these requirements?
Provide Viewer access to the SCC dashboard at the organization level and instruct Mandiant to download any required logs directly from the console.
Enable BigQuery log export for the impacted projects, share the dataset with the Mandiant service account, and run a scheduled Dataflow job every six hours to copy the tables to an immutable bucket.
Grant the Mandiant service account the Logging Viewer role on each affected project and enable real-time streaming in Logs Explorer; rely on the default Cloud Audit Logs retention for forensic preservation.
Create two aggregated organization-level log sinks with identical filters: one streams matching entries to a Pub/Sub topic in an "ir-partner" project where the Mandiant service account has only the Pub/Sub Subscriber role; the other exports the same entries to a Cloud Storage bucket that has object versioning and a locked retention policy.
Answer Description
Creating two separate aggregated organization-level log sinks with identical filters meets all objectives. The first sink streams matching log entries in near real time to a Pub/Sub topic located in a dedicated "ir-partner" project; granting the Mandiant-supplied service account the Pub/Sub Subscriber IAM role lets them pull or stream only those entries, honoring least-privilege. The second sink exports the same filtered logs to a Cloud Storage bucket that has both object versioning enabled and a locked retention policy, ensuring that no log object can be altered or deleted during the retention period, thus providing an immutable archive. Alternatives either expose excessive log access (project-level Logging Viewer), introduce unacceptable latency (BigQuery export), or fail to preserve raw logs immutably (dashboard access only).
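A rough gcloud/gsutil sketch of the two sinks and the immutable archive; the organization ID, project IDs, topic, bucket, and filter are placeholders, and each sink's writer identity still needs access to its destination:

```bash
FILTER='severity>=WARNING AND resource.labels.project_id=("prod-a" OR "prod-b")'

# Near-real-time stream for the incident-response partner
gcloud logging sinks create ir-pubsub-sink \
  pubsub.googleapis.com/projects/ir-partner/topics/incident-logs \
  --organization=ORG_ID --include-children --log-filter="$FILTER"

# Second sink with the identical filter for the forensic archive
gcloud logging sinks create ir-archive-sink \
  storage.googleapis.com/ir-forensics-archive \
  --organization=ORG_ID --include-children --log-filter="$FILTER"

# Make the archive tamper-evident and immutable
gsutil versioning set on gs://ir-forensics-archive
gsutil retention set 18m gs://ir-forensics-archive
gsutil retention lock gs://ir-forensics-archive
```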
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Google Cloud Security Command Center (SCC)?
Why is Pub/Sub used for exporting logs to Mandiant analysts?
How does a Cloud Storage bucket with versioning and a locked retention policy ensure log immutability?
What is a log sink in Google Cloud?
What is the Pub/Sub Subscriber role in Google Cloud?
What is a locked retention policy in Cloud Storage?
Your company detects malware on a production Compute Engine VM that successfully retrieves a service-account access token from the instance metadata server and then tries to upload it to random public IP addresses. The VM must remain online until the next maintenance window and still needs to reach Google Cloud APIs over Private Google Access (199.36.153.8/30). Which action provides an immediate, least-disruptive mitigation using only VPC firewall rules?
Add an ingress deny rule on TCP port 80 for the VM to stop internet hosts from connecting.
Enable VPC Service Controls on the project to restrict data exfiltration for the VM.
Create an egress deny rule that blocks traffic to 169.254.169.254/32 from the VM.
Apply two high-priority egress rules to the VM's network tag: first allow traffic to 199.36.153.8/30, then deny all remaining egress to 0.0.0.0/0.
Answer Description
VPC firewall rules filter traffic that leaves or enters a VM's virtual NIC. They cannot block the link-local metadata address (169.254.169.254), which is answered directly by the host that runs the VM and is therefore never evaluated by VPC firewalls. The practical control point is egress to external networks: prevent any destination except the Private Google Access IP range that the workload legitimately needs. Creating a very high-priority egress rule that allows traffic to 199.36.153.8/30 for the affected VM, followed by another high-priority rule that denies all remaining egress (0.0.0.0/0), stops the stolen token from leaving the VM while keeping calls to Google APIs functional. The other options either target traffic paths the firewall cannot control (metadata server), address inbound rather than outbound flows, or require additional services instead of the requested firewall-only fix.
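A hedged sketch of the two rules, assuming the compromised VM carries a network tag such as quarantine:

```bash
# Evaluated first (lower number = higher precedence): allow Google API egress only
gcloud compute firewall-rules create allow-pga-egress \
  --network=prod-vpc --direction=EGRESS --action=ALLOW \
  --rules=tcp:443 --destination-ranges=199.36.153.8/30 \
  --priority=100 --target-tags=quarantine

# Then deny everything else leaving the tagged VM
gcloud compute firewall-rules create deny-other-egress \
  --network=prod-vpc --direction=EGRESS --action=DENY \
  --rules=all --destination-ranges=0.0.0.0/0 \
  --priority=200 --target-tags=quarantine
```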
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are VPC firewall rules in GCP?
What is Private Google Access, and why is it needed?
Why can't VPC firewalls block the metadata server (169.254.169.254)?
Why can't VPC firewall rules block traffic to the metadata server's IP address 169.254.169.254/32?
What is Private Google Access, and why is 199.36.153.8/30 important in this solution?
How do 'high-priority' firewall rules function in GCP VPC settings?
In your organization-level logging strategy, the security team mandates that every API call that deletes or modifies Cloud SQL instances must be logged centrally for incident investigations. Budget constraints forbid enabling any high-volume, chargeable logs. Which action ensures that the required events are captured and routed to the centralized log bucket without incurring additional logging fees?
Rely on the default Admin Activity audit logs and create an organization-level log sink filtering for Cloud SQL Admin Activity entries.
Enable Cloud SQL Data Access audit logs and create a project-level sink to export them.
Enable Cloud Asset Inventory feeds and configure real-time export to BigQuery.
Turn on Cloud SQL maintenance events and export them via Pub/Sub to the SIEM.
Answer Description
Admin Activity audit logs are written automatically for all Google Cloud services and record configuration-changing API calls such as the creation, update, or deletion of Cloud SQL instances. These logs are always on and their ingestion does not generate charges. By creating an organization-level log sink that filters for the Cloud SQL Admin Activity log entries, the events can be forwarded to the central log bucket without turning on any additional, billable log types. Enabling Data Access logs, Asset Inventory feeds, or maintenance events would either incur extra logging fees or fail to capture every configuration-changing API call.
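A sketch of such a sink; the destination bucket, project, and organization ID are placeholders:

```bash
gcloud logging sinks create cloudsql-admin-activity \
  logging.googleapis.com/projects/central-logging/locations/global/buckets/sec-central \
  --organization=ORG_ID --include-children \
  --log-filter='logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="sqladmin.googleapis.com"'
```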
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Admin Activity audit logs in GCP?
What is a log sink in GCP, and how does it work?
What is the difference between Admin Activity audit logs and Data Access audit logs in GCP?
What are Admin Activity audit logs in Google Cloud?
What is an organization-level log sink and why is it useful?
How do Admin Activity logs differ from Data Access audit logs?
A hospital system ingests millions of patient encounters each night into a BigQuery dataset. Epidemiology researchers need to join this data with other public health datasets and perform aggregate analytics, but HIPAA requires that direct identifiers such as patient name and Social Security number (SSN) never be exposed to them. Compliance officers also insist that the original, fully-identified tables remain available to a limited group of clinicians. Which solution most effectively meets these requirements while minimizing ongoing operational effort?
Configure a recurring Sensitive Data Protection inspection job on the landing dataset that applies a de-identification template to tokenize detected PHI and writes the transformed output to a separate BigQuery table used by the research team.
Grant the research team the BigQuery Data Viewer role on the original tables and rely on Cloud Audit Logs to demonstrate compliance with HIPAA requirements.
Nightly export the dataset to Cloud Storage, run a custom Dataflow pipeline that replaces patient names and SSNs with random strings, then re-import the sanitized files into BigQuery for researchers.
Apply Data Catalog policy tags to the name and SSN columns and deny access to those tags for researchers, allowing them to query the original tables with those columns returning NULL.
Answer Description
A recurring Sensitive Data Protection (formerly Cloud DLP) inspection job can automatically detect built-in infoTypes such as PERSON_NAME and US_SOCIAL_SECURITY_NUMBER in the landing tables. By attaching a de-identification template that uses tokenization or format-preserving encryption, the job can write the transformed results into a separate de-identified BigQuery table that maintains referential integrity for analytics but removes direct identifiers from the researchers' view. Because the job is scheduled, new nightly ingests are handled automatically. Simply granting Data Viewer on the raw tables violates HIPAA, exporting to Cloud Storage for custom scrubbing adds unnecessary complexity and operational overhead, and Data Catalog policy tags hide entire columns rather than transform their values, preventing researchers from performing joins that rely on the identifiers. Therefore, the automated SDP inspection and de-identification workflow is the best fit.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Sensitive Data Protection (SDP) in GCP?
How does tokenization work in Sensitive Data Protection?
What are infoTypes in Google Cloud's Sensitive Data Protection?
What is Sensitive Data Protection in GCP?
What is tokenization, and how does it help with compliance?
How do Data Catalog policy tags differ from data de-identification?
Your organization is moving its collaboration platform to Google Workspace (SaaS). The security team is mapping controls to the shared responsibility model before the migration. Which statement accurately reflects how responsibilities are divided between Google and the customer in this SaaS scenario?
Google manages the customer's internal IAM groups, and the customer is responsible for firmware updates on Google's server hardware.
The customer is accountable for Gmail service availability, whereas Google defines and enforces data-loss-prevention policies for all mailboxes.
Google supplies customer-managed encryption keys by default, and the customer must patch the operating systems that host Workspace services.
Google operates and patches the underlying infrastructure and Workspace applications, while the customer configures Drive sharing permissions and retention policies for its data.
Answer Description
In a Software-as-a-Service offering such as Google Workspace, Google is responsible for operating, securing, and patching the service stack (data centers, network, and the Workspace application code). The customer, however, retains control over how the service is used - for example, setting sharing permissions, retention rules, and other policy configurations that govern its own data and users. The other options invert or misallocate duties: Google does not manage a customer's IAM groups, supply customer-managed keys by default, or define the customer's DLP rules, and customers do not patch Google servers or guarantee Workspace availability.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the shared responsibility model in Google Workspace?
What are examples of customer-specific controls in SaaS services like Google Workspace?
Why doesn't the customer handle things like server patching in Google Workspace?
What does the shared responsibility model mean for SaaS offerings like Google Workspace?
Can customers use Google Workspace with customer-managed encryption keys?
How does Google secure its infrastructure in a SaaS model?
Your organization hosts an ERP stack on Compute Engine VMs inside the prod-vpc network. A new compliance mandate states that the Cloud SQL for PostgreSQL instance that backs the application must NEVER be reachable over the public internet, but it must stay accessible to
- application VMs in prod-vpc, and
- database administrators who connect from the corporate data-center through an existing Cloud VPN tunnel. What is the most operationally efficient configuration to meet this requirement?
Expose the Cloud SQL endpoint behind an Internal TCP/UDP Load Balancer whose backend is the database instance.
Maintain the public IP and require all access to go through a hardened bastion VM that forwards traffic to the database.
Keep the public IP, but restrict it to the office's external CIDR by adding that range to the Cloud SQL authorized networks list.
Create the instance with Private IP enabled and delete or disable its public IP so that it is reachable only through the VPC network and connected VPN.
Answer Description
Creating the Cloud SQL instance with only a Private IP address (and removing/disabling any Public IP) forces all traffic to stay within the VPC. Private-IP instances are reachable from resources in the same VPC and from on-prem environments that are connected by Cloud VPN or Cloud Interconnect, but they expose no routable address on the public internet. Restricting the public IP to the office's CIDR with authorized networks still leaves an internet-routable endpoint and therefore fails the mandate. Placing an internal load balancer in front of Cloud SQL is not supported, and a bastion host introduces additional maintenance without eliminating the public endpoint.
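Assuming private services access is already configured on prod-vpc, a minimal sketch for an existing instance (instance, project, and network names are placeholders):

```bash
gcloud sql instances patch erp-postgres \
  --network=projects/my-proj/global/networks/prod-vpc \
  --no-assign-ip
```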
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Private IP in GCP Cloud SQL?
What is Cloud VPN and how does it ensure secure connectivity?
Why is exposing Cloud SQL via an internal load balancer not supported?
What is a Private IP address in Cloud SQL?
How does Cloud VPN enable connectivity between on-premise systems and GCP?
Why is it operationally inefficient to use a hardened bastion host for Cloud SQL?
Your fintech company is migrating a PCI DSS-regulated platform also subject to GDPR. Cardholder data must stay only in the Frankfurt region (europe-west3). Policy requires Google staff access to projects only with explicit, time-bound security-team approval and full audit logs. You must stop cross-project data exfiltration from the PCI environment without managing many firewall rules. Which Google Cloud design meets all requirements with minimal operational overhead?
Create an EU Assured Workloads environment, apply the gcp.resourceLocations organization policy to allow only europe-west3, enable Access Approval, and place all PCI projects inside a VPC Service Controls perimeter.
Host databases on Cloud SQL encrypted with customer-supplied keys stored in us-central1, disable external IPs on all VMs via organization policy, and depend on Cloud Audit Logs alone to monitor provider access.
Store all cardholder data in a Cloud Storage Multi-Region EU bucket protected with CMEK, turn on Access Transparency, and rely on custom VPC firewall egress rules to limit data flows.
Tokenize card data with Cloud DLP, keep workloads in europe-west3 using default project settings, and require support engineers to connect through Identity-Aware Proxy for troubleshooting access.
Answer Description
Creating an Assured Workloads environment with the EU (PCI DSS) compliance regime imposes EU-based personnel controls. Adding the gcp.resourceLocations organization policy restricts resource creation strictly to europe-west3, ensuring data residency in Frankfurt. Enabling Access Approval forces just-in-time, time-bounded permission before any Google staff action and pairs with Access Transparency for auditing. Wrapping the PCI projects in a VPC Service Controls perimeter blocks API-level data exfiltration to other projects without maintaining individual firewall rules. The remaining options each miss at least one mandatory control: multi-region storage does not confine data to Frankfurt, Identity-Aware Proxy addresses customer but not provider access, and storing keys in us-central1 violates the data-residency mandate.
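For the residency piece, a hedged sketch of the organization policy applied to the Assured Workloads folder (the folder ID and file name are placeholders):

```bash
cat > residency-policy.yaml <<'EOF'
name: folders/123456789012/policies/gcp.resourceLocations
spec:
  rules:
  - values:
      allowedValues:
      - in:europe-west3-locations
EOF

gcloud org-policies set-policy residency-policy.yaml
```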
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Assured Workloads in Google Cloud?
What does the gcp.resourceLocations organization policy do?
How does VPC Service Controls prevent cross-project data exfiltration?
What are Assured Workloads in Google Cloud?
What is the role of VPC Service Controls in data security?
What is Access Approval and how does it work?
Your security team keeps a 256-bit AES key in an on-premises FIPS-validated HSM and wants to reuse that key as the customer-managed encryption key (CMEK) for a BigQuery dataset stored in the europe-west1 region. You must import the key into Cloud KMS while ensuring the key material is never sent to Google in plaintext. Which procedure satisfies Google Cloud's requirements and the security goal?
Create a key ring and symmetric key in europe-west1, generate a SOFTWARE protection-level import job, wrap the AES key offline with AES-KWP (RFC 5649) using the job's public key, then run gcloud kms keys versions import to upload the wrapped key.
Configure Cloud External Key Manager (EKM) to reference the on-premises HSM URI and assign that external key to the BigQuery dataset instead of importing the key into Cloud KMS.
Create a key ring in the global location and paste the Base64-encoded 32-byte key directly into the first key version by using the Cloud Console's Upload key material option.
Create a hardware-backed key in Cloud HSM and copy the on-premises key bytes into the first key version through the KMS REST API without wrapping.
Answer Description
The key ring and symmetric CryptoKey must reside in the same region (europe-west1) as the BigQuery dataset. Create an import job whose protection level matches the target CryptoKey version (SOFTWARE or HSM). Download the import job's public wrapping key and wrap the 256-bit AES key offline with a supported algorithm such as AES-KWP (RFC 5649) or RSA_OAEP_3072_SHA1_AES_256. Finally, use gcloud kms keys versions import (or the equivalent API method) to upload the wrapped key material. This process keeps the key encrypted during transit and complies with Cloud KMS BYOK requirements. Directly pasting raw Base64 bytes, copying key material without wrapping, or using Cloud EKM (which links to but does not import the key) do not meet the import workflow requirements.
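A hedged sketch of the flow that lets gcloud perform the wrapping locally against the import job's public key, so the raw key bytes never leave the on-premises host (resource names and the key file path are placeholders):

```bash
gcloud kms keyrings create byok-ring --location=europe-west1

gcloud kms keys create bq-cmek \
  --location=europe-west1 --keyring=byok-ring \
  --purpose=encryption --skip-initial-version-creation

gcloud kms import-jobs create byok-import \
  --location=europe-west1 --keyring=byok-ring \
  --import-method=rsa-oaep-3072-sha1-aes-256 \
  --protection-level=software

# gcloud wraps ./aes256.bin on this machine before upload
gcloud kms keys versions import \
  --location=europe-west1 --keyring=byok-ring --key=bq-cmek \
  --import-job=byok-import \
  --algorithm=google-symmetric-encryption \
  --target-key-file=./aes256.bin
```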
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AES-KWP (RFC 5649)?
What is a Cloud KMS import job and why is it needed?
Why should the BigQuery dataset and CryptoKey be in the same region?
What is a key ring in Google Cloud KMS?
What is AES-KWP (RFC 5649) and why is it used for key wrapping?
What is gcloud kms keys versions import and what does it do?
A security assessment of several public-facing Compute Engine VMs shows that the instances still allow access to the legacy metadata endpoints /computeMetadata/v0.1 and /computeMetadata/v1beta1. Firewalls already block all inbound traffic except TCP 443 to the web application. Why does keeping these legacy endpoints enabled remain a serious security risk?
The legacy endpoints store all imported SSH public keys in plaintext files that are world-readable on the boot disk, exposing administrator access.
They disable automatic rotation of customer-managed encryption keys for attached persistent disks, increasing the chance of cryptographic compromise.
They respond to requests from processes inside the VM without requiring the protective X-Google-Metadata-Request (Metadata-Flavor: Google) header, letting an attacker exploit an SSRF-vulnerable application to steal the VM's service-account access token.
Anyone on the internet can reach the metadata server directly if a public firewall rule allows HTTPS, so attackers can download the entire instance metadata.
Answer Description
The primary danger comes from server-side request forgery (SSRF) or remote-code-execution flaws in software running inside the VM. If attacker-supplied input can make the application issue HTTP requests, the attacker can direct the code to query the metadata server. The legacy endpoints reply without requiring the protective header (Metadata-Flavor: Google), so an attacker can retrieve the VM's OAuth access token for its attached service account and use that token to access other Google Cloud resources. Inbound firewall rules do not mitigate this because the requests originate inside the VM and the metadata server is reached at a link-local address that never traverses the external network path. The other options describe issues that are either incorrect (user-level SSH keys are not stored on disk in plaintext), already mitigated by the firewall, or unrelated to the metadata server (disk-encryption key rotation).
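The difference is easy to demonstrate from inside a VM, and the legacy endpoints can be shut off per instance; the instance name, zone, and exact error text below are illustrative:

```bash
# v1 refuses requests that lack the header
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
# -> 403, "Missing Metadata-Flavor:Google header"

# The legacy endpoint answers without it, which is what an SSRF payload abuses
curl "http://metadata.google.internal/computeMetadata/v1beta1/instance/service-accounts/default/token"

# Remediation: disable the legacy endpoints on the instance
gcloud compute instances add-metadata web-frontend \
  --zone=us-central1-a \
  --metadata=disable-legacy-endpoints=true
```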
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Server-Side Request Forgery (SSRF)?
What is the purpose of the X-Google-Metadata-Request header?
How do service-account access tokens work and why are they important?
Why are legacy metadata endpoints a security risk?
What is the purpose of the 'Metadata-Flavor: Google' header?
How do SSRF vulnerabilities exploit legacy metadata endpoints in VMs?
Your company uses Cloud Identity with mandatory SAML-based single sign-on (SSO) to an external identity provider (IdP). All existing Google Cloud "Super Administrator" accounts are federated through that IdP. Security leadership is concerned that a prolonged IdP outage would leave the company unable to administer Google Cloud. At the same time, they want to reduce the risk of account takeover for day-to-day Super Administrator logins. Which approach best satisfies both objectives while following Google-recommended practices?
Create two additional Cloud Identity-native Super Administrator accounts excluded from SSO, protect them with hardware security-key 2-Step Verification, and store their credentials in a secure offline location for emergency use only.
Disable SAML SSO for the entire domain so Super Administrators can always sign in with Google passwords protected only by CAPTCHA challenges.
Grant the Super Administrator role to a service account, download its private key, and distribute the key to on-call engineers for use if the IdP is unreachable.
Configure an IAM Deny policy that exempts principals holding the Super Administrator role from any authentication failures caused by IdP outages.
Answer Description
Google recommends having at least one "break-glass" or emergency Super Administrator account that does not depend on the external IdP. Creating one or two native (non-federated) Super Administrator accounts whose strong, randomly generated passwords and hardware-based 2-Step Verification factors are stored securely offline ensures administrative access even if the IdP is unavailable. Regular Super Administrators continue to authenticate with SSO, protected by hardware security keys. Granting a service account Super Administrator privileges and sharing a JSON key is risky because keys are long-lived, hard to revoke, and violate least-privilege principles. Disabling SSO for everyone weakens security and does not meet the "minimize risk" requirement. IAM Deny policies cannot guarantee access during IdP outages and cannot override authentication failures, so they do not address the stated problem.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is SAML-based SSO and how does it work with external IdPs?
Why are emergency 'break-glass' Super Administrator accounts important?
What are hardware security keys used for in 2-Step Verification?
What is SAML-based SSO and how does it work?
What are 'break-glass' accounts and why are they important?
Why is hardware-based 2-Step Verification recommended for Super Administrator accounts?
Your organization must prevent PHI that resides in a production Cloud Storage bucket from being copied to any Google Cloud resource outside a tightly controlled analytics environment, even if a valid credential is leaked. The analytics workload runs in a separate project. External analysts employed by a partner need to load reference data into a BigQuery dataset in the analytics project from a known static public IPv4 /29 block. Which architecture change most effectively enforces these compliance requirements while allowing the partner upload path to continue working?
Merge analytics and production workloads into a Shared VPC host project and apply hierarchical firewall egress rules that allow traffic only to BigQuery API endpoints.
Harden IAM by removing the Storage Object Admin role from all users outside the analytics project and set the compute.vmExternalIpAccess organization policy constraint to deny.
Place both projects in a single VPC Service Controls perimeter; add an ingress policy that allows BigQuery requests only when they originate from the partner's static IP range, and leave the perimeter's egress policy at its default deny setting.
Enable Private Service Connect for BigQuery in both projects, disable Cloud NAT, and rely on VPC firewall rules to restrict internet egress.
Answer Description
A single VPC Service Controls service perimeter around the production and analytics projects blocks all BigQuery and Cloud Storage calls that attempt to reach projects or services outside the perimeter, mitigating data-exfiltration risk even if credentials are stolen. Because service perimeters deny egress by default, PHI cannot be exported. An ingress policy can be added that references an access level matching the partner's static IP range, permitting just-in-time BigQuery ingestion traffic from that network into the analytics project. Private Service Connect, firewall rules, and Shared VPC egress rules do not enforce data movement controls on Google-managed APIs; IAM and org policy hardening alone cannot stop programmatic exports once a principal has data-access permissions.
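A hedged sketch of the perimeter and the partner access level (policy ID, project numbers, and the /29 are placeholders); the access level is then referenced from an ingress rule, for example via `gcloud access-context-manager perimeters update --set-ingress-policies=...`, scoped to bigquery.googleapis.com in the analytics project:

```bash
cat > partner-ips.yaml <<'EOF'
- ipSubnetworks:
  - 203.0.113.40/29
EOF

gcloud access-context-manager levels create partner_upload \
  --policy=POLICY_ID --title="Partner upload IPs" \
  --basic-level-spec=partner-ips.yaml

gcloud access-context-manager perimeters create phi_perimeter \
  --policy=POLICY_ID --title="PHI perimeter" \
  --resources=projects/111111111111,projects/222222222222 \
  --restricted-services=storage.googleapis.com,bigquery.googleapis.com
```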
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is VPC Service Controls in Google Cloud?
How does an ingress policy work in VPC Service Controls?
Why does Private Service Connect not enforce data movement controls effectively in this scenario?
What are VPC Service Controls?
Why can't IAM or organization policy constraints alone enforce data exfiltration controls for PHI?
In your production VPC, all VM instances now have external access blocked by default. Only the batch-processing group (instances tagged updater) should be able to fetch software from public repositories on the internet; every other instance must be prevented from initiating outbound connections. Which combination of Cloud VPC firewall rules satisfies this requirement while following principle of least privilege?
Create an egress allow rule to 0.0.0.0/0 with priority 50 that targets the updater tag, and an egress deny rule to 0.0.0.0/0 with priority 100 that targets all instances.
Create a single egress deny rule (priority 1000) that blocks 0.0.0.0/0 for all instances, and rely on Cloud NAT to let updater-tagged VMs connect.
Create an ingress deny rule (priority 100) for 0.0.0.0/0 that targets all instances, and an egress allow rule (priority 50) to 0.0.0.0/0 for the updater tag.
Create an egress allow rule to 0.0.0.0/0 with priority 2000 that targets the updater tag, and an egress deny rule to 0.0.0.0/0 with priority 1000 that targets all instances.
Answer Description
A lower priority number means the rule is evaluated first. By creating an egress allow rule with a priority of 50 that targets only instances tagged updater, those VMs are matched and traffic is permitted. The subsequent rule with priority 100 denies egress to 0.0.0.0/0 for every instance, so all traffic from non-tagged VMs is blocked. NAT does not bypass firewall rules, and an ingress rule would not control outbound connections, so the other options fail to meet the requirement.
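A sketch of the winning combination (the network name is a placeholder; the deny rule has no target tags, so it applies to every instance):

```bash
gcloud compute firewall-rules create allow-updater-egress \
  --network=prod-vpc --direction=EGRESS --action=ALLOW \
  --rules=all --destination-ranges=0.0.0.0/0 \
  --priority=50 --target-tags=updater

gcloud compute firewall-rules create deny-all-egress \
  --network=prod-vpc --direction=EGRESS --action=DENY \
  --rules=all --destination-ranges=0.0.0.0/0 \
  --priority=100
```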
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the principle of least privilege in cloud security?
How does priority work in Cloud VPC firewall rules?
Why doesn't Cloud NAT bypass egress firewall rules?
What is the principle of least privilege?
How does Cloud VPC firewall rule priority work?
What is the difference between egress and ingress firewall rules?
Gnarly!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.