GCP Professional Cloud Architect Practice Question

Your team has found a text-generation foundation model in Vertex AI Model Garden that must be consumed by a Cloud Run microservice. You need the fastest way to expose the model as a production-ready, auto-scaled API without building or managing any custom inference infrastructure. The solution must also let you roll out newer model versions later with minimal code changes in the consuming service. What should you do?

  • Create a BigQuery ML remote model that references the Model Garden model, then invoke predictions from Cloud Run by submitting SQL queries to BigQuery.

  • Use Model Garden's Deploy to Endpoint feature to create a managed Vertex AI online prediction endpoint and call it from Cloud Run through the Vertex AI REST API.

  • Package the model files inside the Cloud Run container image and load them at startup so the service performs in-process inference.

  • Export the model artifact to Cloud Storage, build a custom prediction server in a container, deploy it to Cloud Run, and have the microservice call that HTTPS endpoint.
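For context on the managed-endpoint option, the call pattern from Cloud Run looks roughly like the sketch below. It only builds the Vertex AI online prediction REST request; the project, region, endpoint ID, prompt schema, and token value are placeholders, not details taken from the question. In a real Cloud Run service the bearer token would be fetched from the metadata server for the service account.

```python
# Hedged sketch: calling a Vertex AI online prediction endpoint over REST.
# All identifiers below (project, region, endpoint ID, token) are placeholders.
import json
import urllib.request


def build_predict_url(project: str, region: str, endpoint_id: str) -> str:
    """Construct the Vertex AI online prediction REST URL for an endpoint."""
    return (
        f"https://{region}-aiplatform.googleapis.com/v1/"
        f"projects/{project}/locations/{region}/"
        f"endpoints/{endpoint_id}:predict"
    )


def build_predict_request(url: str, prompt: str, token: str) -> urllib.request.Request:
    """Build the POST request; the instance schema depends on the model,
    so {"prompt": ...} here is an assumption for a text-generation model."""
    body = json.dumps({"instances": [{"prompt": prompt}]}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


url = build_predict_url("my-project", "us-central1", "1234567890")
req = build_predict_request(url, "Write a haiku about autumn.", "placeholder-token")
```

Because the consuming service only knows the endpoint URL, a newer model version can later be deployed behind the same endpoint with no code change in Cloud Run, which is the "minimal code changes" property the question asks for.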

GCP Professional Cloud Architect
Managing and provisioning a solution infrastructure