GCP Professional Cloud Security Engineer Practice Test

Use the form below to configure your GCP Professional Cloud Security Engineer Practice Test. The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

  • Questions: the number of questions in the practice test (free users are limited to 20 questions; upgrade for unlimited)
  • Seconds Per Question: determines how long you have to finish the practice test
  • Exam Objectives: which exam objectives should be included in the practice test

GCP Professional Cloud Security Engineer Information

Overview

The Google Cloud Professional Cloud Security Engineer (PCSE) certification is designed for security professionals who architect and implement secure workloads on Google Cloud Platform (GCP). Earning the credential signals that you can design robust access controls, manage data protection, configure network security, and ensure regulatory compliance in cloud environments. Because Google frequently updates its security services—such as Cloud Armor, BeyondCorp Enterprise, Chronicle, and Confidential Computing—the PCSE exam expects you to demonstrate both conceptual depth and hands-on familiarity with the latest GCP features.

Exam Format and Content Domains

The exam is a two-hour, multiple-choice and multiple-select test delivered at a test center or through online proctoring. Questions span five core domains:

  1. Configuring access within GCP (IAM, service accounts, organization policies)
  2. Configuring network security (VPC service controls, Cloud Load Balancing, Private Service Connect)
  3. Ensuring data protection (Cloud KMS, CMEK, DLP, Secret Manager)
  4. Managing operational security (logging/monitoring with Cloud Audit Logs, Cloud Monitoring, Chronicle)
  5. Ensuring compliance (risk management frameworks, shared-responsibility model, incident response)

Expect scenario-based questions that require selecting the “best” choice among many viable solutions, so practice with real-world architectures is critical.

Why Practice Exams Matter

Taking high-quality practice exams is one of the most efficient ways to close knowledge gaps and build test-taking stamina. First, sample questions expose you to Google’s preferred terminology—e.g., distinguishing between “Cloud Armor edge policies” and “regional security policies”—so you aren’t surprised by phrasing on test day. Second, timed drills simulate the exam’s pacing, helping you learn to allocate roughly 90 seconds per question and flag tougher items for later review. Finally, detailed explanations turn each incorrect answer into a mini-lesson; over multiple iterations, you’ll identify patterns (for instance, Google almost always recommends using service accounts over user credentials in automated workflows). Aim to score consistently above 85 percent on reputable practice sets before scheduling the real exam.

Final Preparation Tips

Combine practice exams with hands-on labs in Qwiklabs or Cloud Skills Boost to reinforce muscle memory—creating VPC service perimeter policies once in the console and once via gcloud is more memorable than reading about it. Review the official exam guide and sample case studies, paying special attention to Google’s security best-practice documents and whitepapers. In the final week, focus on weak areas flagged by your practice-exam analytics and skim release notes for any major security service updates. With a balanced regimen of study, labs, and realistic mock tests, you’ll walk into the PCSE exam with confidence and a solid grasp of how to secure production workloads on Google Cloud.
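
For instance, a service perimeter can be created from the command line as well as in the console; the snippet below is a minimal sketch with placeholder values (the access policy ID, project number, and perimeter name are illustrative and assume an organization-level access policy already exists):

    # Create a perimeter around one project and restrict the Cloud Storage API inside it.
    gcloud access-context-manager perimeters create demo_perimeter \
        --title="demo-perimeter" \
        --resources=projects/123456789012 \
        --restricted-services=storage.googleapis.com \
        --policy=POLICY_ID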

  • Free GCP Professional Cloud Security Engineer Practice Test

  • 20 Questions
  • Unlimited time
  • Configuring Access
  • Securing communications and establishing boundary protection
  • Ensuring data protection
  • Managing operations
  • Supporting compliance requirements

Free Preview

This test is a free preview, no account required.

Question 1 of 20

Your company uses Microsoft Active Directory as the authoritative directory. Google Cloud Directory Sync (GCDS) currently provisions users and groups into Cloud Identity, so employees authenticate with passwords stored in Google. Security now requires that:

  1. Google must stop storing or validating user passwords,
  2. Password changes in Active Directory must take effect immediately when users access Google Workspace,
  3. Existing group synchronization must continue.

Which approach best satisfies all requirements while introducing the fewest changes to the existing Google identities?
  • Replace GCDS with Workforce Identity Federation so Google Workspace relies on short-lived tokens issued by Active Directory and stop synchronizing directory objects.

  • Export users from Active Directory to a CSV file, import them into Cloud Identity, disable GCDS, and have users reset their Google passwords.

  • Enable Google Cloud Secure LDAP for authentication and disable SAML single sign-on while leaving GCDS in place for groups.

  • Retain GCDS for user and group provisioning but configure Google Workspace for SAML single sign-on that redirects authentication to an AD FS identity provider.

Question 2 of 20

A security team wants to tighten access controls in a large GCP organization where IAM roles are currently bound to dozens of individual user principals. Their goals are to 1) simplify future permission reviews, 2) delegate day-to-day onboarding and off-boarding of developers to team leads, and 3) ensure that no users accidentally retain permissions after leaving a group. Which approach best meets ALL three goals?

  • Assign broad organization-wide roles (such as roles/viewer) directly to every user and rely on audit logs to detect misuse.

  • Keep existing individual IAM bindings but place all projects inside a VPC Service Control perimeter to prevent lateral movement and data exfiltration.

  • Require every project owner to manage IAM bindings for their own project resources instead of centralizing permissions in groups.

  • Create least-privilege Google Groups for each functional role, grant all required IAM roles to those groups, and delegate group-membership administration to team leads while synchronizing group membership with the corporate directory.
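
For background on group-based bindings in general, a role is granted to a Google Group the same way it is granted to a user; the example below is purely illustrative (project ID, group address, and role are placeholders):

    # Bind a role to a group at the project level instead of to individual users.
    gcloud projects add-iam-policy-binding my-project \
        --member="group:dev-team@example.com" \
        --role="roles/container.developer"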

Question 3 of 20

Your security team must scan several terabytes of log files stored in Cloud Storage with Sensitive Data Protection (SDP). The files may contain U.S. Social Security numbers (formatted as "123-45-6789") and an internal customer identifier that always starts with "CUST-" followed by exactly 10 digits (for example, "CUST-0123456789"). The team wants to minimize configuration effort while keeping false positives low. Which detection strategy best meets these requirements?

  • Use the built-in US_SOCIAL_SECURITY_NUMBER infoType and create a custom regular-expression infoType named ACME_CUSTOMER_ID that matches the pattern "CUST-\d{10}" (optionally adding a hotword rule that looks for the string "CUST-").

  • Create custom regular-expression infoTypes for both SSNs and the customer ID so you can fully control pattern matching.

  • Use the built-in CREDIT_CARD_NUMBER infoType for SSNs and create a custom dictionary detector that lists every known customer ID.

  • Rely on the built-in PERSON_NAME infoType for SSNs and the built-in PHONE_NUMBER infoType for the customer ID because both contain digits and delimiters.
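
For context, a single Sensitive Data Protection inspection configuration can combine built-in infoTypes with custom regular-expression infoTypes. The sketch below is illustrative only, inspects inline text rather than running a Cloud Storage job, and uses a placeholder project ID. Contents of inspect_request.json (illustrative):

    {
      "item": { "value": "SSN 123-45-6789, customer CUST-0123456789" },
      "inspectConfig": {
        "infoTypes": [ { "name": "US_SOCIAL_SECURITY_NUMBER" } ],
        "customInfoTypes": [
          {
            "infoType": { "name": "ACME_CUSTOMER_ID" },
            "regex": { "pattern": "CUST-\\d{10}" }
          }
        ]
      }
    }

    # Submit the request body to the DLP API content:inspect endpoint.
    curl -s -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      -d @inspect_request.json \
      "https://dlp.googleapis.com/v2/projects/MY_PROJECT/content:inspect"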

Question 4 of 20

A multinational enterprise maintains an on-premises middleware service that must authenticate to Google Cloud Storage by using a JSON key for a Google Cloud service account. Compliance now mandates quarterly key rotation with zero downtime for the application. Which practice best satisfies Google-recommended guidance for rotating this unavoidable user-managed key while minimizing service disruption?

  • Delete the current key, immediately create a replacement with the same name, and restart the application to force it to pick up the new credential.

  • Periodically re-encrypt the existing key with a new Cloud KMS key version to satisfy rotation requirements without generating additional service account keys.

  • Extend the key's expiration date to 90 days and enable OS-level credential caching so the application keeps working during the renewal window.

  • Create a second key for the service account, update the application to use the new key, verify access, and then delete the original key, ensuring no more than two active keys exist at any time.
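
For reference, user-managed service account keys are created and deleted with gcloud; a rotation generally issues a replacement key before retiring the old one. The commands below are an illustrative sketch (the service account address, file name, and key ID are placeholders):

    # Issue a second, replacement key for the service account.
    gcloud iam service-accounts keys create new-key.json \
        --iam-account=middleware-sa@my-project.iam.gserviceaccount.com
    # After the application is switched over and verified, retire the original key.
    gcloud iam service-accounts keys delete OLD_KEY_ID \
        --iam-account=middleware-sa@my-project.iam.gserviceaccount.com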

Question 5 of 20

You are investigating a potential data leak and must list only Cloud Audit Log entries for the last 24 hours that show a principal enumerating objects in any Cloud Storage bucket within project finance-prod. The investigator will run gcloud logging read from a workstation that already has application-default credentials for the project. Which advanced log filter should they supply to return only the relevant Data Access log entries and exclude every other service or log type?

  • logName="projects/finance-prod/logs/cloudaudit.googleapis.com%2Factivity" AND resource.type="gcs_bucket" AND protoPayload.methodName="storage.objects.list"

  • logName="projects/finance-prod/logs/cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_object" AND protoPayload.serviceName="storage.googleapis.com" AND protoPayload.methodName="storage.objects.get"

  • logName="projects/finance-prod/logs/cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket" AND protoPayload.serviceName="storage.googleapis.com" AND protoPayload.methodName="storage.objects.list"

  • resource.type="gcs_bucket" AND protoPayload.serviceName="storage.googleapis.com" AND protoPayload.methodName="storage.buckets.list"
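
Whichever filter is chosen, the investigator would pass it to gcloud logging read; a minimal invocation sketch (the filter placeholder and output format are illustrative):

    # Return matching entries from the last 24 hours in the finance-prod project.
    gcloud logging read 'FILTER_GOES_HERE' \
        --project=finance-prod \
        --freshness=24h \
        --format=json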

Question 6 of 20

An EU-based healthcare provider is migrating a 100 TB PACS image archive to Google Cloud. GDPR and a national regulation require that

  1. all patient data remain within the EU at rest and during processing, and
  2. the encryption keys protecting that data must also stay in the same EU jurisdiction and be fully controlled by the customer.

The analytics team occasionally launches GPU-accelerated jobs against the archive but wants to minimize operational overhead. Which Google Cloud configuration best satisfies these compliance constraints?
  • Create a dual-region Cloud Storage bucket "us-east4-northamerica-northeast1" protected by an external key manager located in Frankfurt, and process data on GPUs in "europe-west3".

  • Create a Cloud Storage regional bucket in "europe-west3" (Frankfurt) protected by a customer-managed Cloud KMS key in the same region, and run GPU-enabled Compute Engine instances in "europe-west3" when analytics is required.

  • Create a Cloud Storage multi-region bucket in "EU" using Google-managed encryption keys, and run GPU workloads on Compute Engine instances in "europe-west1" (Belgium).

  • Provision a Filestore instance in "us-central1" encrypted with a CMEK key stored in "europe-west3", and launch GPU jobs in "europe-west1".
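
As background on the mechanics involved, a bucket's location and default encryption key are set when the bucket is created; the sketch below uses placeholder names and assumes the KMS key already exists and that the Cloud Storage service agent has permission to use it:

    # Create a regional bucket in Frankfurt with a customer-managed default encryption key.
    gcloud storage buckets create gs://example-pacs-archive \
        --location=europe-west3 \
        --default-encryption-key=projects/my-project/locations/europe-west3/keyRings/my-ring/cryptoKeys/my-key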

Question 7 of 20

A multinational retailer stores raw event logs in several Google Cloud projects. Some logs reside in regional Cloud Storage buckets, while others are streamed into BigQuery datasets by a Dataflow pipeline. The security team must continuously discover any newly ingested credit-card or government-ID data, calculate risk scores, and view results in one central Sensitive Data Protection dashboard. They want to avoid scheduling a separate inspection job for every individual bucket or dataset, but are willing to perform a one-time setup in each project. Which approach best meets these requirements?

  • Use Cloud Asset Inventory to export metadata for all storage resources and query it in BigQuery to locate sensitive fields.

  • Create individual Cloud DLP inspection jobs for each Cloud Storage bucket and BigQuery dataset, then aggregate the findings in Cloud Logging.

  • Enable organization-level Sensitive Data Protection discovery for Cloud Storage, and in every project configure a BigQuery data profile scan so all datasets are automatically profiled and results appear in the Sensitive Data Protection dashboard.

  • Attach Data Catalog policy tags to BigQuery tables and rely on policy insights to detect new credit-card or government-ID data automatically.

Question 8 of 20

Your organization migrated to Cloud Identity and currently has four staff members who perform daily administration using Super Administrator privileges. A recent internal risk assessment highlights that this practice violates least-privilege principles and exposes the company if any of those credentials are phished. Security wants to (1) restrict routine use of Super Administrator power, (2) guarantee emergency recovery if the primary IdP or MFA service is unavailable, and (3) keep an auditable trail with minimal day-to-day overhead. Which strategy best satisfies all three goals?

  • Create two dedicated break-glass Super Administrator accounts that are excluded from SSO and 2-Step Verification, secured with long random passwords stored in an offline safe; assign the four staff members delegated admin roles matching their job duties and monitor any logins to the break-glass accounts.

  • Keep one existing Super Administrator account for everyday work and enforce FIDO2 security-key MFA on it; demote the other three to Help Desk Admin and rely on Access Context Manager to restrict their logins to corporate IP ranges.

  • Rotate the passwords of all four Super Administrator accounts monthly, require phone-based 2-Step Verification, and configure an automated rule that unlocks a fifth Super Administrator account if no admin logs in for 48 hours.

  • Enable Privileged Access Manager so the four staff members request time-bound elevation to the Super Administrator role whenever needed, and disable all standing Super Administrator accounts.

Question 9 of 20

Your security team mandates that every Compute Engine VM start from a CIS-hardened custom image that is automatically rebuilt when either (1) Google posts a new debian-11 base image or (2) approved hardening scripts change in Cloud Source Repositories. The pipeline must apply the scripts, install the latest patches, halt on any high-severity CVEs, and keep only the three newest compliant images. Which design delivers this with the least manual effort?

  • Deploy VMs with Deployment Manager that reference the publicly available debian-11-csi-hardened image family, attach Cloud Armor policies, and enable Shielded VM integrity monitoring to detect vulnerabilities. Allow teams to select any version within that family.

  • Enable OS patch management in VM Manager to run a weekly patch job and store the hardening scripts in a Cloud Storage bucket. Have each VM execute the scripts from startup-script metadata and rely on rolling updates in managed instance groups to phase in patched VMs.

  • Create two Cloud Build triggers: a Cloud Source Repositories trigger for the hardening branch and a Cloud Scheduler-initiated Pub/Sub trigger that runs daily. Both invoke a Cloud Build YAML file that runs Packer to build a shielded image from the latest debian-11 family, applies the hardening scripts, updates all packages, executes an in-pipeline vulnerability scanner that fails the build on any high or critical CVE, publishes the image to a custom family, and then deletes images in that family beyond the three newest.

  • When Google releases a new debian-11 image, manually create a local VM, run the hardening scripts, export the disk to Cloud Storage, and import it as a custom image. Mark the image as deprecated after three newer images exist.
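
One building block that appears in these options, a daily schedule that can start a build through Pub/Sub, can be sketched roughly as follows (topic, job name, and location are placeholders; the Cloud Build trigger that subscribes to the topic is configured separately):

    # Publish a message to a build topic every day at 03:00; a Pub/Sub-triggered
    # Cloud Build configuration can listen on this topic to rebuild the image.
    gcloud pubsub topics create image-rebuild
    gcloud scheduler jobs create pubsub nightly-image-rebuild \
        --schedule="0 3 * * *" \
        --topic=image-rebuild \
        --message-body="rebuild" \
        --location=us-central1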

Question 10 of 20

A financial services firm must replicate transaction data in real time from its New Jersey data center to a Google Cloud deployment in us-east1. The replication peaks at 18 Gbps and cannot traverse the public internet to satisfy regulatory controls. Latency should be minimized, and the data is already encrypted at the application layer, so link-level encryption is unnecessary. Which connectivity option meets these requirements while avoiding unnecessary cost and complexity?

  • Purchase two 10-Gbps Partner Interconnect VLAN attachments from different service providers and protect traffic with HA VPN over the links.

  • Order a single 100-Gbps Dedicated Interconnect circuit and enable MACsec encryption on the link.

  • Set up an HA VPN with four VPN tunnels over dual internet service providers to achieve 18 Gbps of encrypted bandwidth.

  • Provision two 10-Gbps Dedicated Interconnect connections in separate metropolitan zones and use them without additional VPN or MACsec encryption.

Question 11 of 20

Your organization is starting a three-month project with an external research institute. The researchers authenticate with their own Azure Active Directory tenant, but they need temporary access to invoke Cloud Run services and read specific Cloud Storage buckets in your Google Cloud project. Company policy forbids creating Google accounts for them and bans distributing any long-lived credentials. Which approach best satisfies all requirements while following least-privilege practices?

  • Provision temporary Google Workspace accounts for the researchers, place them in a group with the necessary IAM roles, and enforce two-step verification on those accounts.

  • Create a workforce identity pool that trusts the institute's Azure AD as an OIDC provider, map researcher groups to narrowly scoped IAM roles on the project, and let researchers obtain short-lived Google credentials on demand.

  • Use Google Cloud Directory Sync to import the institute's Azure AD users into Cloud Identity and enable SAML-based single sign-on for them.

  • Generate user-managed keys for a dedicated service account that has the required IAM roles and distribute the keys to the institute's researchers for the duration of the project.
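
For context on workforce federation in general, pools and providers are created at the organization level; the heavily simplified sketch below uses placeholder IDs and omits flags a real OIDC provider setup would need (such as web SSO settings and attribute conditions):

    # Create a workforce identity pool and register the partner's OIDC issuer as a provider.
    gcloud iam workforce-pools create research-pool \
        --organization=123456789012 \
        --location=global \
        --display-name="External research institute"
    gcloud iam workforce-pools providers create-oidc azure-ad \
        --workforce-pool=research-pool \
        --location=global \
        --issuer-uri="https://login.microsoftonline.com/TENANT_ID/v2.0" \
        --client-id=APP_CLIENT_ID \
        --attribute-mapping="google.subject=assertion.sub,google.groups=assertion.groups"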

Question 12 of 20

A retailer's nightly Beam pipeline launches from a Cloud Composer environment and runs as a dedicated service account on Dataflow workers. The workers must read CSV files from an input bucket, load the transformed records into an existing BigQuery dataset, and write job logs to Cloud Logging. The service account currently holds the Editor role on both involved projects, which violates least-privilege policy. Which replacement IAM grant set meets the functional needs while eliminating overly permissive roles?

  • Grant roles/dataflow.worker on the Dataflow project, roles/storage.objectViewer on the input bucket, roles/bigquery.dataEditor on the target dataset, and roles/logging.logWriter on the project.

  • Give the service account roles/dataflow.admin on the project, roles/storage.legacyBucketReader on the bucket, and roles/bigquery.user on the project.

  • Assign roles/storage.admin and roles/bigquery.admin at the project level so the pipeline can manage all storage and BigQuery resources without further changes.

  • Replace Editor with roles/owner on the Dataflow project to cover all required permissions and future growth.
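
As a point of reference, bindings like these are attached at different resource levels; the sketch below uses placeholder project, bucket, and service account names and omits the dataset-level BigQuery grant:

    # Project-level roles for running as a Dataflow worker and writing logs.
    gcloud projects add-iam-policy-binding my-dataflow-project \
        --member="serviceAccount:etl-runner@my-dataflow-project.iam.gserviceaccount.com" \
        --role="roles/dataflow.worker"
    gcloud projects add-iam-policy-binding my-dataflow-project \
        --member="serviceAccount:etl-runner@my-dataflow-project.iam.gserviceaccount.com" \
        --role="roles/logging.logWriter"
    # Bucket-level read access limited to the input bucket.
    gcloud storage buckets add-iam-policy-binding gs://example-input-bucket \
        --member="serviceAccount:etl-runner@my-dataflow-project.iam.gserviceaccount.com" \
        --role="roles/storage.objectViewer"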

Question 13 of 20

Your security team must allow the external vendor support group ([email protected] Google Group) to query a sensitive BigQuery dataset in the prod-analytics project, but only when requests come from the vendor's on-premises public CIDR range 203.0.113.0/24 and only on weekdays between 09:00 and 17:00 (America/New_York). The organization wants to avoid adding new proxy or networking components and must follow the principle of least privilege. Which approach best meets these requirements?

  • Configure an Access Context Manager service perimeter that specifies the vendor's IP range and business-hours access level, then grant the [email protected] group the BigQuery Data Viewer role at the project level without additional conditions.

  • Create a VPC Service Controls perimeter around the prod-analytics project and allow ingress only from 203.0.113.0/24 during business hours.

  • Add an IAM policy binding on the dataset that grants the BigQuery Data Viewer role to the [email protected] group with a condition limiting access to requests from 203.0.113.0/24 and to times between 09:00 and 17:00 on weekdays.

  • Define a custom BigQuery Viewer role, assign it to the [email protected] group, and require users to access the dataset through Cloud Identity-Aware Proxy restricted to the vendor's IP range and schedule.

Question 14 of 20

Your organization's GitHub Actions pipeline builds container images and pushes them to Artifact Registry in a Google Cloud project. The workflow currently authenticates with a JSON key for a user-managed service account, but new policy mandates that no long-lived Google-issued credential may exist outside Google Cloud. Short-lived OAuth 2.0 access tokens (≤1 hour) must be generated just-in-time from the workflow without human interaction. Which solution best meets these requirements while respecting least privilege?

  • Store the existing JSON service-account key in Secret Manager and configure the workflow to fetch the key at runtime, rotating the key every seven days with Cloud Scheduler.

  • Place Artifact Registry into a VPC Service Controls perimeter and add the GitHub runners' IP range to an access level, removing the need for service account credentials during image pushes.

  • Create a workload identity pool with a GitHub OIDC provider and allow the pool to impersonate a minimally scoped service account, so the workflow exchanges its GitHub OIDC token for a short-lived Google Cloud access token at runtime.

  • Run gcloud auth application-default login locally, commit the generated Application Default Credentials file that contains a refresh token, and let the workflow exchange the refresh token for one-hour access tokens when needed.
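
For background, workload identity pools and OIDC providers for an external issuer are created with gcloud; the simplified sketch below uses placeholder IDs and omits the attribute condition and the service account impersonation grant that a production setup requires:

    # Create a workload identity pool and register GitHub's OIDC issuer as a provider.
    gcloud iam workload-identity-pools create github-pool \
        --project=my-project \
        --location=global \
        --display-name="GitHub Actions"
    gcloud iam workload-identity-pools providers create-oidc github-provider \
        --project=my-project \
        --location=global \
        --workload-identity-pool=github-pool \
        --issuer-uri="https://token.actions.githubusercontent.com" \
        --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository"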

Question 15 of 20

Your organization has migrated to Google Workspace and must hard-enforce 2-Step Verification (2SV) for every user in the Finance organizational unit (OU) within 30 days, while leaving 2SV optional for all other OUs, one of which contains a break-glass super-administrator account. Which Admin console configuration best meets the requirement with the least operational effort?

  • Add all Finance users to a Google Group, create a Context-Aware Access level for that group, and configure the level to require 2SV before allowing access to Google services.

  • Enable the Advanced Protection Program for the Finance OU while simultaneously disabling 2SV for the top-level organization.

  • Navigate to Security > Authentication > 2-Step Verification, set Enforcement to "On" for the Finance OU only, keep Enforcement "Off" at the top-level organization, and ensure 2SV remains allowed for all users.

  • Generate and distribute backup verification codes to Finance users and keep 2SV Enforcement "Off"; instruct them to use the codes to sign in.

Question 16 of 20

Your company operates over 150 Google Cloud projects in a single organization. The security operations team must centrally activate Security Command Center (SCC) so they can manage detectors, create mute rules, and view findings across all projects. Individual application teams should have read-only visibility into findings limited to their own projects. What is the most efficient way to configure SCC and IAM to meet these requirements?

  • Enable the SCC Standard tier in every project. Grant the security operations team the Logging Admin role at the organization level and grant application teams the Logging Viewer role on their projects.

  • Enable SCC Premium at the organization level. Grant the security operations team the Security Center Admin role at the organization level, and grant each application team the Security Center Findings Viewer role on only their respective projects.

  • Enable SCC Premium at the organization level. Grant the security operations team the Project Owner role on all projects and grant application teams the Security Center Source Admin role at the organization level.

  • Enable SCC Premium separately in each project using automation. Grant the security operations team the Security Center Admin role on every project and let application teams inherit Viewer permissions from the organization.
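
For reference, Security Command Center roles can be granted at the organization level or on individual projects; the sketch below is illustrative only (organization ID, group addresses, and project ID are placeholders):

    # Organization-wide administration for a security operations team.
    gcloud organizations add-iam-policy-binding 123456789012 \
        --member="group:secops@example.com" \
        --role="roles/securitycenter.admin"
    # Read-only visibility into findings for one application team's project.
    gcloud projects add-iam-policy-binding app-team-project \
        --member="group:app-team@example.com" \
        --role="roles/securitycenter.findingsViewer"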

Question 17 of 20

A software team is building a cross-platform mobile app that lets Google Workspace users view and update objects in their own Cloud Storage buckets. Security has mandated the following:

  • The app must never embed or distribute long-lived Google credentials.
  • Each user must grant only the minimum necessary permissions.
  • Users must be able to withdraw the app's access at any time without changing their passwords.

Which approach best satisfies all requirements?
  • Embed an API key restricted to Cloud Storage in the application code and rotate the key monthly.

  • Use Workload Identity Federation with a public identity pool that maps each device ID to a Storage service account.

  • Implement the OAuth 2.0 authorization-code flow and request only the Cloud Storage read/write scope, storing the refresh token securely on the backend.

  • Package a dedicated service account key with the mobile app and grant it the Storage Object Admin IAM role.

Question 18 of 20

You manage a Cloud Storage bucket that receives daily transaction CSV files in the Standard storage class. The finance team requires two automated controls: (1) minimize storage costs by moving files to a lower-cost class 30 days after upload, and (2) permanently remove the files exactly one year after they have been moved. Which lifecycle rule configuration satisfies both requirements while following Google-recommended lifecycle actions?

  • Add a SetStorageClass action that changes objects to Nearline when Age = 30 days, plus a Delete action that removes objects when Age = 395 days.

  • Add a single Delete action when Age = 365 days; no additional rules are needed.

  • Add a SetStorageClass action when Age = 30 days and another SetStorageClass action when Age = 365 days.

  • Add a Delete action when Age = 30 days, and a SetStorageClass action to Nearline when Age = 395 days.
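
For reference, lifecycle management pairs an action with a condition and is applied to the bucket as a JSON policy. The sketch below uses arbitrary example day counts and a placeholder bucket name, so it is not an answer key. Contents of lifecycle.json (illustrative):

    {
      "rule": [
        {
          "action": { "type": "SetStorageClass", "storageClass": "NEARLINE" },
          "condition": { "age": 60 }
        },
        {
          "action": { "type": "Delete" },
          "condition": { "age": 425 }
        }
      ]
    }

    # Apply the lifecycle policy to the bucket.
    gcloud storage buckets update gs://example-transactions-bucket --lifecycle-file=lifecycle.json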

Question 19 of 20

Your organization hosts all finance workloads inside a dedicated Google Cloud folder. Compliance now requires that any request to Cloud Storage APIs for projects in this folder be permitted only when the caller is either (a) coming from one of your on-premises NAT IP address ranges or (b) using a company-managed, encrypted device that meets Google endpoint-verification standards. You must enforce this control centrally without changing individual bucket IAM policies and ensure that any future projects created in the finance folder automatically inherit the restriction. What should you do?

  • Attach a Cloud Armor security policy to every finance bucket's JSON API endpoint, restricting traffic to the trusted IP ranges and permitting requests only from devices presenting valid endpoint-verification headers.

  • Configure VPC firewall rules in each finance project that only allow egress from approved corporate IP ranges and require mutual TLS with client certificates from managed devices.

  • Create an organization-level Access Policy. Define a custom access level that allows either the trusted on-premises CIDR ranges or compliant, company-managed devices. Create a VPC Service Controls perimeter that includes the finance folder and add the access level to the perimeter's ingress rules.

  • Add an IAM conditional binding at the organization level that grants storage.objectViewer to all finance users only when request.ip and request.device attributes match the corporate policy.

Question 20 of 20

Your organization is migrating sensitive genomics data to Cloud Storage. A regional privacy law requires that the encryption keys must never leave the company-owned, on-premises HSM cluster, and security policy mandates that any dataset can be rendered unreadable at once by disabling the on-prem key. Developers do not want to modify application code beyond selecting an encryption option for the bucket. Which Google Cloud approach best satisfies these requirements?

  • Enable Customer-Managed Encryption Keys backed by Cloud HSM for the bucket.

  • Protect the bucket with a Cloud External Key Manager (EKM) key and enable CMEK using the external key reference.

  • Configure CMEK with a software-backed symmetric key stored in Cloud KMS and rotate it quarterly.

  • Rely on Google default encryption and enforce Bucket Lock to prevent key access by Google personnel.