Monitoring
Performance optimization starts with identifying key metrics, usually related to latency and throughput. Adding monitoring to capture and track these metrics exposes weak points in the application. With metrics in place, you can optimize the application and verify that the changes actually improve those metrics.
Additionally, many monitoring tools let you set up alerts on your metrics, so that you are notified when a certain threshold is met. For example, you might set up an alert to notify you when the percentage of failed requests increases by more than x% over normal levels. Monitoring tools can help you establish what normal performance looks like and identify unusual spikes in latency, error counts, and other key metrics. The ability to monitor these metrics is especially important during business-critical timeframes, or after new code has been pushed to production.
Identify latency metrics
Keep your UI as responsive as you can, and note that users expect even higher standards from mobile apps. Latency should also be measured and tracked for backend services, since it can lead to throughput issues if left unchecked.
Suggested metrics to track include the following (see the sketch after this list):
- Request duration
- Request duration at subsystem granularity (such as API calls)
- Job duration
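For example, a lightweight way to capture request duration at both request and subsystem granularity is a timing helper wrapped around the code you want to measure. The following is a minimal sketch; the helper name, the in-memory sample store, and the simulated work are illustrative assumptions, and a real setup would export the samples to a monitoring backend.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Hypothetical in-process store; in practice, export these samples to your
# monitoring backend instead of keeping them in memory.
latency_samples_ms = defaultdict(list)

@contextmanager
def record_latency(name):
    """Records wall-clock duration (in ms) for a request or subsystem call."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        latency_samples_ms[name].append(elapsed_ms)

def handle_request():
    with record_latency("request"):            # overall request duration
        with record_latency("ads_api_call"):   # subsystem granularity
            time.sleep(0.05)                   # stand-in for an API call
        with record_latency("render"):
            time.sleep(0.01)                   # stand-in for rendering work

handle_request()
print({name: round(samples[0], 1) for name, samples in latency_samples_ms.items()})
```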
Identify throughput metrics
Throughput is a measure of the total number of requests served over a given period of time. Throughput can be affected by the latency of subsystems, so you might need to optimize for latency to improve throughput.
Here are some suggested metrics to track (a sample counter follows the list):
- Queries per second
- Size of data transferred per second
- Number of I/O operations per second
- Resource utilization, such as CPU or memory usage
- Size of the processing backlog, such as pub/sub backlog or number of threads
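As an illustration, queries per second can be derived from a simple counter that is incremented once per request and sampled periodically. The sketch below keeps everything in process for simplicity; the class name and usage are assumptions, and a real setup would export the sampled rate to a monitoring system.

```python
import threading
import time

class ThroughputCounter:
    """Counts events and reports a per-second rate when sampled."""

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0
        self._last_sample = time.monotonic()

    def increment(self, n=1):
        with self._lock:
            self._count += n

    def sample_qps(self):
        """Returns the rate since the last sample and resets the counter."""
        with self._lock:
            now = time.monotonic()
            elapsed = now - self._last_sample
            qps = self._count / elapsed if elapsed > 0 else 0.0
            self._count = 0
            self._last_sample = now
            return qps

requests_served = ThroughputCounter()
for _ in range(250):
    requests_served.increment()   # call once per request served
time.sleep(1)
print(f"~{requests_served.sample_qps():.0f} queries per second")
```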
Not just the mean
A common mistake when measuring performance is to look only at the mean (average) case. While this is useful, it provides no insight into the distribution of latency. A better approach is to track performance percentiles, for example the 50th/75th/90th/99th percentile of a metric.
Generally, optimizing can be done in two steps. First, optimize for 90th percentile latency. Then consider the 99th percentile, also known as tail latency: the small portion of requests that take much longer to complete. A worked example of computing these percentiles from raw samples follows.
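For example, given a list of raw latency samples, the percentiles can be computed directly with Python's standard library. The sample values below are made up to show how a long tail stays invisible in the mean.

```python
import statistics

# Hypothetical latency samples in milliseconds, e.g. collected by the
# record_latency helper sketched earlier.
samples_ms = [12, 15, 14, 13, 250, 16, 18, 17, 15, 900, 14, 13, 16, 15, 14]

# statistics.quantiles with n=100 returns 99 cut points; index i-1 is the
# i-th percentile.
percentiles = statistics.quantiles(samples_ms, n=100)
for p in (50, 75, 90, 99):
    print(f"p{p}: {percentiles[p - 1]:.1f} ms")

print(f"mean: {statistics.mean(samples_ms):.1f} ms")  # hides the tail
```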
Server-side monitoring for detailed results
Server-side profiling is generally preferred for tracking metrics. The server side is usually much easier to instrument, allows access to more granular data, and is less subject to perturbation from connectivity issues.
Browser monitoring for end-to-end visibility
Browser profiling can provide additional insight into the end-user experience. It can show which pages have slow requests, which you can then correlate with server-side monitoring for further analysis.
Google Analytics provides out-of-the-box monitoring for page load times in the page timings report. This provides several useful views for understanding the user experience on your site, in particular:
- Page load times
- Redirect load times
- Server response times
Monitoring in the cloud
There are many tools you can use to capture and monitor performance metrics for your application. For example, you can use Google Cloud Logging to log performance metrics to your Google Cloud project, then set up dashboards in Google Cloud Monitoring to monitor and segment the logged metrics.
See the Logging guide for an example of logging to Google Cloud Logging from a custom interceptor in the Python client library. With that data available in Google Cloud, you can build metrics on top of the logged data to gain visibility into your application through Google Cloud Monitoring. Follow the guide for user-defined log-based metrics to build metrics from the logs sent to Google Cloud Logging. A minimal sketch of writing such structured log entries follows.
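For illustration, the sketch below writes a structured performance log entry with the google-cloud-logging client. The logger name and the field names (`is_fault`, `method`, `elapsed_ms`) are assumptions for this example, not something the Google Ads API client library emits for you.

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
logger = client.logger("ads-api-performance")  # hypothetical logger name

# Write one structured entry per request so log-based metrics can be
# extracted from its fields later.
logger.log_struct(
    {
        "method": "GoogleAdsService.SearchStream",  # assumed label source
        "is_fault": False,                           # request succeeded
        "elapsed_ms": 137,                           # measured latency
    },
    severity="INFO",
)
```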
Alternatively, you could use the Monitoring client libraries to define metrics in your code and send them directly to Monitoring, separate from the logs. A sketch of writing one data point this way is shown below.
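For example, a single data point for a hypothetical custom latency metric could be written with the monitoring_v3 client as follows; the project ID and metric type are placeholder assumptions.

```python
import time
from google.cloud import monitoring_v3

project_name = "projects/my-project-id"  # hypothetical project

client = monitoring_v3.MetricServiceClient()

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/ads_api/request_latency"
series.resource.type = "global"

# Timestamp the point with the current time.
now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": seconds, "nanos": nanos}}
)
point = monitoring_v3.Point(
    {"interval": interval, "value": {"double_value": 137.0}}
)
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])
```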
Log-based metrics example
Suppose you want to monitor the `is_fault` value to better understand error rates in your application. You can extract the `is_fault` value from the logs into a new counter metric, `ErrorCount`. A sketch of defining such a metric from code follows.
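As an illustration, the metric could be defined with the google-cloud-logging client as sketched below; the same metric can also be created in the Cloud Console or with gcloud. The log filter assumes structured entries with an `is_fault` field in their jsonPayload, as in the earlier logging sketch.

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()

# Counter metric that increments for every log entry matching the filter.
error_count = client.metric(
    "ErrorCount",
    filter_="jsonPayload.is_fault=true",
    description="Number of Google Ads API requests that returned a fault.",
)

if not error_count.exists():
    error_count.create()
```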


In Cloud Logging, labels let you group your metrics into categories based on other data in the logs. You can configure a label for the `method` field sent to Cloud Logging to see how the error count breaks down by Google Ads API method.
With the `ErrorCount` metric and the `Method` label configured, you can create a new chart in a Monitoring dashboard to monitor `ErrorCount` grouped by `Method`.

Alerts
In Cloud Monitoring and other tools, you can configure alert policies that specify when and how your metrics should trigger alerts. For instructions on setting up Cloud Monitoring alerts, follow the alerts guide. A hedged sketch of creating such a policy programmatically is shown below.
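As an illustration, the sketch below creates a policy on the log-based `ErrorCount` metric with the monitoring_v3 client. The project ID, threshold, duration, and display names are assumptions, and the same policy can be configured in the Cloud Console instead.

```python
import datetime
from google.cloud import monitoring_v3

project_name = "projects/my-project-id"  # hypothetical project

client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Google Ads API error spike",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="ErrorCount rate above threshold",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                # Log-based user metrics surface under logging.googleapis.com/user/.
                filter=(
                    'metric.type="logging.googleapis.com/user/ErrorCount" '
                    'AND resource.type="global"'
                ),
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=1.0,  # assumed errors/second after alignment
                duration=datetime.timedelta(minutes=5),
                aggregations=[
                    monitoring_v3.Aggregation(
                        alignment_period=datetime.timedelta(minutes=5),
                        per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_RATE,
                    )
                ],
            ),
        )
    ],
)

created = client.create_alert_policy(name=project_name, alert_policy=policy)
print(f"Created alert policy: {created.name}")
```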