
GCP Professional Cloud Security Engineer Practice Test

Use the form below to configure your GCP Professional Cloud Security Engineer Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

  • Questions: the number of questions in the practice test (free users are limited to 20 questions; upgrade for unlimited)
  • Seconds Per Question: determines how long you have to finish the practice test
  • Exam Objectives: which exam objectives should be included in the practice test

GCP Professional Cloud Security Engineer Information

Overview

The Google Cloud Professional Cloud Security Engineer (PCSE) certification is designed for security professionals who architect and implement secure workloads on Google Cloud Platform (GCP). Earning the credential signals that you can design robust access controls, manage data protection, configure network security, and ensure regulatory compliance in cloud environments. Because Google frequently updates its security services—such as Cloud Armor, BeyondCorp Enterprise, Chronicle, and Confidential Computing—the PCSE exam expects you to demonstrate both conceptual depth and hands-on familiarity with the latest GCP features.

Exam Format and Content Domains

The exam is a two-hour, multiple-choice and multiple-select test delivered at a test center or through online proctoring. Questions span five core domains:

  1. Configuring access within GCP (IAM, service accounts, organization policies)
  2. Configuring network security (VPC service controls, Cloud Load Balancing, Private Service Connect)
  3. Ensuring data protection (Cloud KMS, CMEK, DLP, Secret Manager)
  4. Managing operational security (logging/monitoring with Cloud Audit Logs, Cloud Monitoring, Chronicle)
  5. Ensuring compliance (risk management frameworks, shared-responsibility model, incident response)

Expect scenario-based questions that require selecting the “best” choice among many viable solutions, so practice with real-world architectures is critical.

Why Practice Exams Matter

Taking high-quality practice exams is one of the most efficient ways to close knowledge gaps and build test-taking stamina. First, sample questions expose you to Google’s preferred terminology—e.g., distinguishing between “Cloud Armor edge policies” and “regional security policies”—so you aren’t surprised by phrasing on test day. Second, timed drills simulate the exam’s pacing, helping you learn to allocate roughly 90 seconds per question and flag tougher items for later review. Finally, detailed explanations turn each incorrect answer into a mini-lesson; over multiple iterations, you’ll identify patterns (for instance, Google almost always recommends using service accounts over user credentials in automated workflows). Aim to score consistently above 85 percent on reputable practice sets before scheduling the real exam.

Final Preparation Tips

Combine practice exams with hands-on labs in Qwiklabs or Cloud Skills Boost to reinforce muscle memory—creating VPC service perimeter policies once in the console and once via gcloud is more memorable than reading about it. Review the official exam guide and sample case studies, paying special attention to Google’s security best-practice documents and whitepapers. In the final week, focus on weak areas flagged by your practice-exam analytics and skim release notes for any major security service updates. With a balanced regimen of study, labs, and realistic mock tests, you’ll walk into the PCSE exam with confidence and a solid grasp of how to secure production workloads on Google Cloud.
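
To make the gcloud half of that drill concrete, here is a minimal sketch of creating a VPC Service Controls service perimeter from the command line; the perimeter name, project number, access policy ID, and restricted services are placeholder assumptions rather than values from any real environment.

    # Minimal sketch (placeholder names and IDs): create a VPC Service Controls
    # perimeter that restricts Cloud Storage and BigQuery for one project.
    gcloud access-context-manager perimeters create demo_perimeter \
        --title="demo-perimeter" \
        --resources=projects/123456789012 \
        --restricted-services=storage.googleapis.com,bigquery.googleapis.com \
        --policy=987654321

Repeating the same change in the console and then with gcloud is a quick way to learn which fields a perimeter actually requires.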

  • Free GCP Professional Cloud Security Engineer Practice Test
  • 20 Questions
  • Unlimited time
  • Configuring Access
  • Securing communications and establishing boundary protection
  • Ensuring data protection
  • Managing operations
  • Supporting compliance requirements
Question 1 of 20

Your organization runs hundreds of projects. Cloud IDS threat detection (fed by Packet Mirroring) and VPC Flow Logs are enabled in every project. The security operations team wants to correlate IDS threat events with flow-level network metadata using familiar SQL queries. They must keep the data for 18 months and want to minimize operational overhead by avoiding custom ETL jobs or separate BigQuery datasets. Which solution best meets these requirements?

  • Stream both Cloud IDS and VPC Flow Logs to Pub/Sub, process them with a Dataflow pipeline that writes to BigQuery, and schedule a job to delete partitions older than 550 days.

  • Forward Cloud IDS alerts to Chronicle and export VPC Flow Logs to Cloud Storage; query the combined data through Chronicle's YARA-L interface.

  • Enable the Cloud IDS BigQuery export feature and add a second sink that exports VPC Flow Logs to the same BigQuery dataset; configure table partition expiration for 550 days.

  • Create an organization-level aggregated log sink that routes Cloud IDS and VPC Flow Logs into a dedicated log bucket, enable Log Analytics on that bucket, set the bucket retention to 550 days, and grant analysts read-only Logging IAM roles.
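
For hands-on context, the aggregated-sink-plus-Log-Analytics approach mentioned above can be sketched with gcloud roughly as follows; the project, bucket, sink name, organization ID, and log filter are placeholder assumptions, not exact values.

    # Sketch (placeholder names): central log bucket with Log Analytics enabled and
    # 550-day retention, plus an organization-level aggregated sink that feeds it.
    gcloud logging buckets create ids-flow-bucket \
        --project=central-logging-proj --location=global \
        --retention-days=550 --enable-analytics
    gcloud logging sinks create ids-flow-sink \
        logging.googleapis.com/projects/central-logging-proj/locations/global/buckets/ids-flow-bucket \
        --organization=123456789012 --include-children \
        --log-filter='logName:"ids.googleapis.com" OR logName:"compute.googleapis.com%2Fvpc_flows"'
    # Analysts can then run SQL against the bucket through the Log Analytics interface.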

Question 2 of 20

Your company hosts the public DNS zone corp.example in Cloud DNS. After investigating recent cache-poisoning attempts, the security team asks you to implement a control that allows validating recursive resolvers on the internet to cryptographically verify that the answers they receive for corp.example are authentic and untampered. The operations team wants a solution that minimizes ongoing key-management overhead for them. What should you do?

  • Enforce DNS over TLS for all clients and block UDP/53 on the corporate firewall to prevent on-path tampering of DNS responses.

  • Deploy secondary authoritative DNS servers in another project and front them with Cloud CDN so cached DNS responses remain available during outages.

  • Enable DNSSEC for the Cloud DNS managed zone, rely on Cloud DNS to create and automatically rotate the ZSK, manually manage the KSK, and publish the generated DS record with the domain registrar.

  • Enable Cloud DNS query logging and create Cloud Logging alerts to detect suspicious NXDOMAIN or SERVFAIL spikes indicating cache-poisoning attempts.
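
For reference, enabling DNSSEC on a Cloud DNS managed zone is a single zone-level setting; the zone name below is a placeholder.

    # Sketch (placeholder zone name): enable DNSSEC signing on a public managed zone.
    gcloud dns managed-zones update corp-example-zone --dnssec-state on

Cloud DNS then generates the signing keys and handles routine signing; the remaining step on your side is publishing the DS record at the registrar.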

Question 3 of 20

Your organization stores employee records in a BigQuery table. All staff must be able to run existing queries on the table, but only members of the "hr-analysts" group should see the SSN and Salary columns. Other users must receive NULLs for those two columns without modifying any queries or creating additional views. Which approach meets the requirement while following Google-recommended practices for column-level security?

  • Apply a row-level security policy that filters out SSN and Salary for non-HR users.

  • Create a Data Catalog taxonomy, assign policy tags to the SSN and Salary columns, and grant roles/datacatalog.categoryFineGrainedReader on those policy tags to the hr-analysts group only.

  • Encrypt the SSN and Salary columns with a dedicated CMEK key and grant Cloud KMS access only to the hr-analysts group.

  • Build an authorized view that omits SSN and Salary, share that view with all users, and revoke access to the underlying table.
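
As a rough illustration of column-level security, the fragment below shows the shape of a policy tag attached to a column and how it would be applied with bq; the project, dataset, taxonomy, and policy-tag IDs are placeholders, and a real schema update must include every column in the table.

    # Sketch (placeholder IDs): schema.json fragment attaching a policy tag to ssn.
    #   [
    #     {"name": "ssn", "type": "STRING",
    #      "policyTags": {"names": ["projects/my-proj/locations/us/taxonomies/111/policyTags/222"]}},
    #     {"name": "salary", "type": "NUMERIC"}
    #   ]
    bq update my-proj:hr.employees schema.json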

Question 4 of 20

Your organization is hardening access to Vertex AI.

  • The data-science team ([email protected]) must be able to open managed notebooks, launch custom training jobs, and register the resulting Model artifacts. They must not be able to deploy or delete models, update Endpoints, or change IAM policies.
  • The MLOps team ([email protected]) is responsible for production serving. They need to deploy models to Endpoints and manage traffic splits, but they must not create or modify Datasets.

Which assignment of predefined IAM roles best enforces the required least-privilege separation?
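
Whichever combination of roles is correct, each grant is a one-line binding per group; the sketch below uses a hypothetical project, group, and the Vertex AI User role purely to illustrate binding a predefined role, not as the answer.

    # Sketch (hypothetical project, group, and role): bind a predefined Vertex AI
    # role to a Google group at the project level.
    gcloud projects add-iam-policy-binding my-ml-project \
        --member="group:data-science-team@example.com" \
        --role="roles/aiplatform.user"
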
Question 5 of 20

Your organization uses on-premises Microsoft Active Directory as the authoritative source for user identities. Google Cloud Directory Sync (GCDS) runs every night to keep Google Workspace in sync. A project manager asks whether the help-desk team can update an employee's phone number only in the Google Admin console and rely on the next GCDS cycle to push that change back into Active Directory. What accurately describes how GCDS will behave in this situation?

  • GCDS can write attribute changes back to Active Directory if write-back is enabled in the synchronization profile.

  • The help-desk can enable the Cloud Directory API in the Admin console to allow GCDS to propagate Google Workspace edits to Active Directory on the next sync.

  • The change will remain only in Google Workspace because GCDS synchronizes data in one direction (from Active Directory to Google) without updating the LDAP source.

  • GCDS supports bidirectional synchronization for groups but not for individual user attributes like phone numbers.

Question 6 of 20

Your company hosts the public DNS zone "contoso.com" in Cloud DNS. Security requires DNSSEC to protect against cache-poisoning attacks. You change the zone's dnssec_state from "off" to "on" using Terraform and select the RSASHA256 key algorithm. The apply completes and a key-signing key now appears in the Cloud DNS console, yet public resolvers still mark the zone as "insecure." What action must you take to finish the DNSSEC rollout?

  • Enable DNSSEC validation on every internal and external recursive resolver that queries the zone.

  • Manually add DNSKEY and RRSIG records to the zone file so validators can see the signatures.

  • Create an asymmetric key in Cloud KMS and upload its public portion to Cloud DNS as an external KSK.

  • Submit the DS record provided by Cloud DNS to the domain registrar so the .com parent zone publishes it.
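
If you want to see what a signed zone's key material looks like, gcloud can list the DNSSEC keys and print a DS record; the zone name and key ID are placeholders, and the format helper shown may vary by SDK version.

    # Sketch (placeholder zone and key ID): list DNSSEC keys, then print the
    # key-signing key's DS record for submission to the registrar.
    gcloud dns dns-keys list --zone=contoso-com-zone
    gcloud dns dns-keys describe 0 --zone=contoso-com-zone --format="value(ds_record())"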

Question 7 of 20

Your financial-services firm must inspect payment-card records that reside in an on-premises Oracle database before a nightly ETL job loads them into BigQuery. You decide to use Sensitive Data Protection (Cloud DLP) hybrid inspection so that discovery happens while the data is still on-premises. From the options below, choose the statement that correctly reflects a mandatory configuration or workflow requirement for a hybrid inspection job.

  • Hybrid inspection supports only automatic sampling and therefore does not allow you to specify custom infoTypes or inspection rules.

  • You stream the on-premises records to the DLP job by invoking the projects.dlpJobs.hybridInspect API and specifying the job's resource name in each request.

  • You must configure a Cloud Pub/Sub topic that automatically triggers the DLP service to pull data from the on-premises source.

  • The data must first be exported to a Cloud Storage bucket, because hybrid inspection jobs can only inspect objects stored in Google Cloud.
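
For context, streaming records into a running hybrid job goes through the hybridInspect method named above; the curl sketch below uses a placeholder project, location, job ID, and payload, and shows only a minimal request body.

    # Sketch (placeholder names): send one record to an existing hybrid DLP job.
    curl -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        -d '{"hybridItem": {"item": {"value": "4111-1111-1111-1111"}}}' \
        "https://dlp.googleapis.com/v2/projects/my-proj/locations/us-central1/dlpJobs/i-1234567890:hybridInspect"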

Question 8 of 20

Your security architecture requires VM workloads in your production VPC to call a third-party fraud-detection service hosted in a separate Google Cloud project. Traffic must remain on Google's private backbone, the service cannot expose a public IP, and VPC Network Peering is impossible because the networks overlap. The provider also wants to avoid updating routes or firewall rules when new consumer projects onboard. Which design meets these needs?

  • Configure Cloud VPN tunnels from each consumer VPC to the provider VPC and advertise the service subnet with dynamic routing.

  • Establish VPC Network Peering between each consumer VPC and the provider VPC, then expose the service through an internal TCP load balancer.

  • Assign an external IP address to the provider's load balancer and have consumers reach the service over HTTPS through Cloud Armor-protected endpoints.

  • Create a Private Service Connect endpoint in every consumer VPC that points to the provider's service attachment published behind an internal load balancer.
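
A consumer-side Private Service Connect endpoint is essentially an internal address plus a forwarding rule that targets the producer's service attachment; every name, region, and resource path below is a placeholder.

    # Sketch (placeholder names): reserve an internal IP and create a PSC endpoint
    # that points at the provider's published service attachment.
    gcloud compute addresses create fraud-psc-ip \
        --region=us-central1 --subnet=consumer-subnet
    gcloud compute forwarding-rules create fraud-psc-endpoint \
        --region=us-central1 --network=consumer-vpc --address=fraud-psc-ip \
        --target-service-attachment=projects/provider-proj/regions/us-central1/serviceAttachments/fraud-svc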

Question 9 of 20

Your security operations team runs Google Cloud Security Command Center (SCC) Premium across the entire organization. Event Threat Detection has generated a high-severity finding that suggests credential exfiltration in several production projects. Per your incident-response agreement, on-call Mandiant analysts must receive the related log data within minutes so they can start triage, but they must not gain broad access to your internal logs. You also need to keep an untampered, long-term copy of all incident-related log entries for later forensic analysis. Which approach best meets these requirements?

  • Provide Viewer access to the SCC dashboard at the organization level and instruct Mandiant to download any required logs directly from the console.

  • Enable BigQuery log export for the impacted projects, share the dataset with the Mandiant service account, and run a scheduled Dataflow job every six hours to copy the tables to an immutable bucket.

  • Grant the Mandiant service account the Logging Viewer role on each affected project and enable real-time streaming in Logs Explorer; rely on the default Cloud Audit Logs retention for forensic preservation.

  • Create two aggregated organization-level log sinks with identical filters: one streams matching entries to a Pub/Sub topic in an "ir-partner" project where the Mandiant service account has only the Pub/Sub Subscriber role; the other exports the same entries to a Cloud Storage bucket that has object versioning and a locked retention policy.
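
The forensic half of a dual-sink design, an immutable Cloud Storage copy, can be sketched as follows; the bucket, sink, organization ID, retention period, and filter are placeholders, and locking a retention policy is irreversible, so treat the final step as illustrative.

    # Sketch (placeholder names): versioned bucket with a locked retention policy,
    # fed by an organization-level aggregated sink.
    gsutil mb -l europe-west1 gs://ir-forensic-logs
    gsutil versioning set on gs://ir-forensic-logs
    gsutil retention set 550d gs://ir-forensic-logs
    gsutil retention lock gs://ir-forensic-logs
    gcloud logging sinks create ir-forensic-sink storage.googleapis.com/ir-forensic-logs \
        --organization=123456789012 --include-children \
        --log-filter='INCIDENT_FILTER'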

Question 10 of 20

Your company detects malware on a production Compute Engine VM that successfully retrieves a service-account access token from the instance metadata server and then tries to upload it to random public IP addresses. The VM must remain online until the next maintenance window and still needs to reach Google Cloud APIs over Private Google Access (199.36.153.8/30). Which action provides an immediate, least-disruptive mitigation using only VPC firewall rules?

  • Add an ingress deny rule on TCP port 80 for the VM to stop internet hosts from connecting.

  • Enable VPC Service Controls on the project to restrict data exfiltration for the VM.

  • Create an egress deny rule that blocks traffic to 169.254.169.254/32 from the VM.

  • Apply two high-priority egress rules to the VM's network tag: first allow traffic to 199.36.153.8/30, then deny all remaining egress to 0.0.0.0/0.
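
Expressed as gcloud commands, a two-rule tag-scoped egress pattern looks roughly like this; the network name and target tag are placeholders, and the only hard requirement is that the allow rule's priority number is lower (evaluated first) than the deny rule's.

    # Sketch (placeholder tag and network): allow egress to the Private Google
    # Access range first, then deny all other egress for the tagged VM.
    gcloud compute firewall-rules create allow-pga-egress \
        --network=prod-vpc --direction=EGRESS --action=ALLOW --priority=900 \
        --destination-ranges=199.36.153.8/30 --rules=tcp:443 --target-tags=quarantine
    gcloud compute firewall-rules create deny-all-egress \
        --network=prod-vpc --direction=EGRESS --action=DENY --priority=1000 \
        --destination-ranges=0.0.0.0/0 --rules=all --target-tags=quarantine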

Question 11 of 20

In your organization-level logging strategy, the security team mandates that every API call that deletes or modifies Cloud SQL instances must be logged centrally for incident investigations. Budget constraints forbid enabling any high-volume, chargeable logs. Which action ensures that the required events are captured and routed to the centralized log bucket without incurring additional logging fees?

  • Rely on the default Admin Activity audit logs and create an organization-level log sink filtering for Cloud SQL Admin Activity entries.

  • Enable Cloud SQL Data Access audit logs and create a project-level sink to export them.

  • Enable Cloud Asset Inventory feeds and configure real-time export to BigQuery.

  • Turn on Cloud SQL maintenance events and export them via Pub/Sub to the SIEM.
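
Whichever routing option you choose, the key piece is a sink filter that isolates Cloud SQL Admin Activity entries; the sink, destination, organization ID, and the exact service name below are placeholder assumptions.

    # Sketch (placeholder names): org-level sink limited to Cloud SQL Admin Activity
    # audit entries, which are written by default without enabling extra log types.
    gcloud logging sinks create cloudsql-admin-sink \
        logging.googleapis.com/projects/central-logging-proj/locations/global/buckets/central-bucket \
        --organization=123456789012 --include-children \
        --log-filter='logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="cloudsql.googleapis.com"'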

Question 12 of 20

A hospital system ingests millions of patient encounters each night into a BigQuery dataset. Epidemiology researchers need to join this data with other public health datasets and perform aggregate analytics, but HIPAA requires that direct identifiers such as patient name and Social Security number (SSN) never be exposed to them. Compliance officers also insist that the original, fully-identified tables remain available to a limited group of clinicians. Which solution most effectively meets these requirements while minimizing ongoing operational effort?

  • Configure a recurring Sensitive Data Protection inspection job on the landing dataset that applies a de-identification template to tokenize detected PHI and writes the transformed output to a separate BigQuery table used by the research team.

  • Grant the research team the BigQuery Data Viewer role on the original tables and rely on Cloud Audit Logs to demonstrate compliance with HIPAA requirements.

  • Nightly export the dataset to Cloud Storage, run a custom Dataflow pipeline that replaces patient names and SSNs with random strings, then re-import the sanitized files into BigQuery for researchers.

  • Apply Data Catalog policy tags to the name and SSN columns and deny access to those tags for researchers, allowing them to query the original tables with those columns returning NULL.

Question 13 of 20

Your organization is moving its collaboration platform to Google Workspace (SaaS). The security team is mapping controls to the shared responsibility model before the migration. Which statement accurately reflects how responsibilities are divided between Google and the customer in this SaaS scenario?

  • Google manages the customer's internal IAM groups, and the customer is responsible for firmware updates on Google's server hardware.

  • The customer is accountable for Gmail service availability, whereas Google defines and enforces data-loss-prevention policies for all mailboxes.

  • Google supplies customer-managed encryption keys by default, and the customer must patch the operating systems that host Workspace services.

  • Google operates and patches the underlying infrastructure and Workspace applications, while the customer configures Drive sharing permissions and retention policies for its data.

Question 14 of 20

Your organization hosts an ERP stack on Compute Engine VMs inside the prod-vpc network. A new compliance mandate states that the Cloud SQL for PostgreSQL instance that backs the application must NEVER be reachable over the public internet, but it must stay accessible to

  1. application VMs in prod-vpc, and
  2. database administrators who connect from the corporate data center through an existing Cloud VPN tunnel.

What is the most operationally efficient configuration to meet this requirement?

  • Expose the Cloud SQL endpoint behind an Internal TCP/UDP Load Balancer whose backend is the database instance.

  • Maintain the public IP and require all access to go through a hardened bastion VM that forwards traffic to the database.

  • Keep the public IP, but restrict it to the office's external CIDR by adding that range to the Cloud SQL authorized networks list.

  • Create the instance with Private IP enabled and delete or disable its public IP so that it is reachable only through the VPC network and connected VPN.
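
On an existing instance, dropping the public interface while keeping a VPC-attached private IP is a single patch call; the instance, project, and network names are placeholders, and private services access is assumed to already be configured on the VPC.

    # Sketch (placeholder names): keep only the private IP on prod-vpc and remove
    # the Cloud SQL instance's public IP.
    gcloud sql instances patch erp-postgres \
        --network=projects/my-proj/global/networks/prod-vpc \
        --no-assign-ip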

Question 15 of 20

Your fintech company is migrating a PCI DSS-regulated platform that is also subject to GDPR. Cardholder data must stay only in the Frankfurt region (europe-west3). Policy requires that Google staff may access the projects only with explicit, time-bound approval from the security team, and that all such access is fully audit-logged. You must stop cross-project data exfiltration from the PCI environment without managing many firewall rules. Which Google Cloud design meets all requirements with minimal operational overhead?

  • Create an EU Assured Workloads environment, apply the gcp.resourceLocations organization policy to allow only europe-west3, enable Access Approval, and place all PCI projects inside a VPC Service Controls perimeter.

  • Host databases on Cloud SQL encrypted with customer-supplied keys stored in us-central1, disable external IPs on all VMs via organization policy, and depend on Cloud Audit Logs alone to monitor provider access.

  • Store all cardholder data in a Cloud Storage Multi-Region EU bucket protected with CMEK, turn on Access Transparency, and rely on custom VPC firewall egress rules to limit data flows.

  • Tokenize card data with Cloud DLP, keep workloads in europe-west3 using default project settings, and require support engineers to connect through Identity-Aware Proxy for troubleshooting access.
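
A resource-location restriction is expressed as an organization policy; the sketch below uses a placeholder organization ID and the value-group syntax commonly used with the gcp.resourceLocations constraint.

    # location-policy.yaml (contents; placeholder org ID):
    #   name: organizations/123456789012/policies/gcp.resourceLocations
    #   spec:
    #     rules:
    #       - values:
    #           allowedValues:
    #             - in:europe-west3-locations
    gcloud org-policies set-policy location-policy.yaml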

Question 16 of 20

Your security team keeps a 256-bit AES key in an on-premises FIPS-validated HSM and wants to reuse that key as the customer-managed encryption key (CMEK) for a BigQuery dataset stored in the europe-west1 region. You must import the key into Cloud KMS while ensuring the key material is never sent to Google in plaintext. Which procedure satisfies Google Cloud's requirements and the security goal?

  • Create a key ring and symmetric key in europe-west1, generate a SOFTWARE protection-level import job, wrap the AES key offline with AES-KWP (RFC 5649) using the job's public key, then run gcloud kms keys versions import to upload the wrapped key.

  • Configure Cloud External Key Manager (EKM) to reference the on-premises HSM URI and assign that external key to the BigQuery dataset instead of importing the key into Cloud KMS.

  • Create a key ring in the global location and paste the Base64-encoded 32-byte key directly into the first key version by using the Cloud Console's Upload key material option.

  • Create a hardware-backed key in Cloud HSM and copy the on-premises key bytes into the first key version through the KMS REST API without wrapping.
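
A Cloud KMS key-import workflow revolves around an import job; the sketch below uses placeholder resource names and lets gcloud wrap the key material locally via --target-key-file, which is one way (not the only one) to keep the plaintext key off the wire.

    # Sketch (placeholder names): create the key ring, an empty key, and an import
    # job, then import the AES-256 material as a new key version.
    gcloud kms keyrings create bq-cmek-ring --location=europe-west1
    gcloud kms keys create bq-cmek-key --location=europe-west1 --keyring=bq-cmek-ring \
        --purpose=encryption --skip-initial-version-creation
    gcloud kms import-jobs create bq-import-job --location=europe-west1 \
        --keyring=bq-cmek-ring --import-method=rsa-oaep-3072-sha256-aes-256 \
        --protection-level=software
    gcloud kms keys versions import --location=europe-west1 --keyring=bq-cmek-ring \
        --key=bq-cmek-key --import-job=bq-import-job \
        --algorithm=google-symmetric-encryption --target-key-file=./aes256.bin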

Question 17 of 20

A security assessment of several public-facing Compute Engine VMs shows that the instances still allow access to the legacy metadata endpoints /computeMetadata/v0.1 and /computeMetadata/v1beta1. Firewalls already block all inbound traffic except TCP 443 to the web application. Why does keeping these legacy endpoints enabled remain a serious security risk?

  • The legacy endpoints store all imported SSH public keys in plaintext files that are world-readable on the boot disk, exposing administrator access.

  • They disable automatic rotation of customer-managed encryption keys for attached persistent disks, increasing the chance of cryptographic compromise.

  • They respond to requests from processes inside the VM without requiring the protective X-Google-Metadata-Request (Metadata-Flavor: Google) header, letting an attacker exploit an SSRF-vulnerable application to steal the VM's service-account access token.

  • Anyone on the internet can reach the metadata server directly if a public firewall rule allows HTTPS, so attackers can download the entire instance metadata.
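
Whatever the root cause, it helps to know the remediation and the header involved; the instance name and zone below are placeholders.

    # Sketch (placeholder instance and zone): disable the legacy metadata endpoints
    # so only /computeMetadata/v1, which requires the header, is served.
    gcloud compute instances add-metadata web-frontend --zone=us-central1-a \
        --metadata=disable-legacy-endpoints=true
    # The v1 endpoint only answers requests that present this header:
    curl -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"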

Question 18 of 20

Your company uses Cloud Identity with mandatory SAML-based single sign-on (SSO) to an external identity provider (IdP). All existing Google Cloud "Super Administrator" accounts are federated through that IdP. Security leadership is concerned that a prolonged IdP outage would leave the company unable to administer Google Cloud. At the same time, they want to reduce the risk of account takeover for day-to-day Super Administrator logins. Which approach best satisfies both objectives while following Google-recommended practices?

  • Create two additional Cloud Identity-native Super Administrator accounts excluded from SSO, protect them with hardware security-key 2-Step Verification, and store their credentials in a secure offline location for emergency use only.

  • Disable SAML SSO for the entire domain so Super Administrators can always sign in with Google passwords protected only by CAPTCHA challenges.

  • Grant the Super Administrator role to a service account, download its private key, and distribute the key to on-call engineers for use if the IdP is unreachable.

  • Configure an IAM Deny policy that exempts principals holding the Super Administrator role from any authentication failures caused by IdP outages.

Question 19 of 20

Your organization must prevent PHI that resides in a production Cloud Storage bucket from being copied to any Google Cloud resource outside a tightly controlled analytics environment, even if a valid credential is leaked. The analytics workload runs in a separate project. External analysts employed by a partner need to load reference data into a BigQuery dataset in the analytics project from a known static public IPv4 /29 block. Which architecture change most effectively enforces these compliance requirements while allowing the partner upload path to continue working?

  • Merge analytics and production workloads into a Shared VPC host project and apply hierarchical firewall egress rules that allow traffic only to BigQuery API endpoints.

  • Harden IAM by removing the Storage Object Admin role from all users outside the analytics project and set the compute.vmExternalIpAccess organization policy constraint to deny.

  • Place both projects in a single VPC Service Controls perimeter; add an ingress policy that allows BigQuery requests only when they originate from the partner's static IP range, and leave the perimeter's egress policy at its default deny setting.

  • Enable Private Service Connect for BigQuery in both projects, disable Cloud NAT, and rely on VPC firewall rules to restrict internet egress.
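
An IP-based ingress exception to a VPC Service Controls perimeter is usually expressed as an access level that an ingress rule can then reference; the CIDR, level name, and access policy ID below are placeholders.

    # partner-ips.yaml (contents; placeholder CIDR):
    #   - ipSubnetworks:
    #       - 203.0.113.8/29
    gcloud access-context-manager levels create partner_ips \
        --title="partner-ips" --basic-level-spec=partner-ips.yaml --policy=987654321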

Question 20 of 20

In your production VPC, all VM instances now have external access blocked by default. Only the batch-processing group (instances tagged updater) should be able to fetch software from public repositories on the internet; every other instance must be prevented from initiating outbound connections. Which combination of Cloud VPC firewall rules satisfies this requirement while following the principle of least privilege?

  • Create an egress allow rule to 0.0.0.0/0 with priority 50 that targets the updater tag, and an egress deny rule to 0.0.0.0/0 with priority 100 that targets all instances.

  • Create a single egress deny rule (priority 1000) that blocks 0.0.0.0/0 for all instances, and rely on Cloud NAT to let updater-tagged VMs connect.

  • Create an ingress deny rule (priority 100) for 0.0.0.0/0 that targets all instances, and an egress allow rule (priority 50) to 0.0.0.0/0 for the updater tag.

  • Create an egress allow rule to 0.0.0.0/0 with priority 2000 that targets the updater tag, and an egress deny rule to 0.0.0.0/0 with priority 1000 that targets all instances.