A financial platform is receiving complaints about slow responses. The operations team needs to see how long each user request spends in the API gateway, authentication service, and core processing engine so they can locate the bottleneck. Which approach will provide the needed end-to-end timing information?
Create auto-scaling rules that add instances when average request latency rises
Propagate a unique trace ID with every request and record a span in each service
Collect CPU, memory, and network metrics for each container in an APM dashboard
Enable detailed debug logging in every service and aggregate the logs centrally
Propagating a unique trace ID with every incoming request and recording a span in each service implements distributed tracing. A collector stitches the spans together by trace ID, revealing how much latency each component adds to a given request. Debug-level logging, metrics dashboards, and auto-scaling rules cannot, on their own, correlate every step of an individual request or break down its end-to-end timing.
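The mechanism can be sketched in a few lines. This is a minimal illustration, not a real tracing client: in-process function calls stand in for the HTTP hops between the gateway, authentication service, and processing engine, spans are held in a local dictionary instead of being exported to a backend, and the service names and sleep durations are invented for the example.

```python
import time
import uuid

# Spans collected per trace ID; a real system would export these to a
# tracing backend (e.g. via OpenTelemetry) rather than keep them in memory.
spans = {}

def record_span(trace_id, service, work):
    """Run a unit of work for one service and record its duration as a span."""
    start = time.perf_counter()
    result = work()
    elapsed_ms = (time.perf_counter() - start) * 1000
    spans.setdefault(trace_id, []).append((service, elapsed_ms))
    return result

def handle_request():
    # The edge (API gateway) mints one unique trace ID per request; every
    # downstream service receives the same ID and tags its span with it.
    trace_id = str(uuid.uuid4())
    record_span(trace_id, "api-gateway", lambda: time.sleep(0.01))
    record_span(trace_id, "auth-service", lambda: time.sleep(0.02))
    record_span(trace_id, "core-engine", lambda: time.sleep(0.05))
    return trace_id

trace_id = handle_request()
# Stitch the spans back together by trace ID to see where the time went.
for service, ms in spans[trace_id]:
    print(f"{service}: {ms:.1f} ms")
```

Because every span carries the same trace ID, the per-service timings can be reassembled into one timeline, immediately showing which hop (here, the simulated core engine) dominates the latency.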