Post-launch monitoring
To ensure a good user experience, Google checks that the provided action links are operational and abide by the Place Action policies. To do so, Google uses a combination of human and automated review.
Manual checks
Google has a globally distributed team. It is recommended that action links not be blocked in these regions, to ensure timely onboarding and post-launch support.
Automated checks (crawlers)
The web crawler for Ordering Redirect periodically accesses your action links. If the crawler receives a 4xx or 5xx status code from an action link, the link is disabled and listed in the Place Actions Data Quality dashboard.
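Although Google runs these checks itself, you may want to catch failing links before the crawler does. Below is a minimal, hypothetical self-check sketch — not part of Google's tooling, and the URL is a placeholder for your real action links — that fetches each link and reports any that answer with a 4xx or 5xx status:

```python
# Hypothetical self-check: fetch your own action links and report any that
# answer with a 4xx/5xx status, before the Ordering Redirect crawler
# disables them. The URLs below are placeholders for your real action links.
import urllib.error
import urllib.request

ACTION_LINKS = [
    "https://example.com/order/location-123",  # placeholder action link
]

def check_action_links(urls):
    """Return (url, status) pairs for links that failed or returned >= 400."""
    failures = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                status = resp.status
        except urllib.error.HTTPError as err:
            status = err.code   # 4xx/5xx responses raise HTTPError
        except urllib.error.URLError:
            status = 0          # network-level failure
        if status == 0 or status >= 400:
            failures.append((url, status))
    return failures

if __name__ == "__main__":
    for url, status in check_action_links(ACTION_LINKS):
        print(f"Action link returned {status}: {url}")
```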
Web crawler detection
To ensure that the web crawler is not blocked (which would cause your action link to be disabled), make sure your system allows our crawler to query your page at all times. To identify our web crawler:
- The web crawler's User-Agent contains the string Google-Food.
  - Example: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko; Google-Food) Chrome/104.0.5112.101 Safari/537.36
- You can also use a reverse DNS lookup to check that the calls come from Google, as recommended in "Verifying Googlebot and other Google crawlers". In this specific case, the reverse DNS resolution follows the pattern `google-proxy-***-***-***-***.google.com` (a sketch combining both checks follows this list).
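The following is a minimal Python sketch combining the two checks above. The function name and its arguments are hypothetical; adapt it to however your server framework exposes the request's User-Agent header and client IP. As described in "Verifying Googlebot and other Google crawlers", a reverse lookup should be confirmed with a forward lookup.

```python
# Hypothetical helper: decide whether a request looks like the Google-Food
# crawler. How you obtain user_agent and remote_ip depends on your framework.
import socket

def is_google_food_crawler(user_agent, remote_ip):
    # Check 1: the crawler's User-Agent contains the string "Google-Food".
    if "Google-Food" not in user_agent:
        return False

    # Check 2: reverse DNS lookup must resolve to a *.google.com hostname,
    # and a forward lookup of that hostname must map back to the caller's IP.
    try:
        hostname, _, _ = socket.gethostbyaddr(remote_ip)
        if not hostname.endswith(".google.com"):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]
        return remote_ip in forward_ips
    except OSError:
        # Reverse or forward lookup failed; treat the caller as unverified.
        return False
```

Whatever check you use, keep the goal in mind: the crawler must be able to query your page at all times, so requests matching these criteria should not be caught by rate limiting, bot blocking, or regional restrictions.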
Caching
To reduce load on the partner website, our crawlers are generally configured to respect all standard HTTP caching headers present in the response. This means that, for correctly configured websites, we avoid repeatedly fetching content that rarely changes (for example, JavaScript libraries). For more details on how to implement caching, see this HTTP caching documentation.
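As an illustration, here is a minimal sketch of serving a rarely-changing asset with standard caching headers. It assumes Flask, which is not mandated anywhere on this page, and the route path is a placeholder; the point is simply that Cache-Control and ETag headers let a well-behaved crawler reuse or revalidate the response instead of re-downloading it.

```python
# Minimal caching sketch (Flask is an assumption; the path is a placeholder).
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/static/menu-widget.js")
def menu_widget():
    with open("static/menu-widget.js", "rb") as f:
        body = f.read()
    response = make_response(body)
    response.headers["Content-Type"] = "application/javascript"
    # Allow any cache (including the crawler's) to reuse this for a day.
    response.headers["Cache-Control"] = "public, max-age=86400"
    # Attach an ETag so unchanged content can be revalidated cheaply ...
    response.add_etag()
    # ... and answer 304 Not Modified when the client re-sends that ETag.
    return response.make_conditional(request)
```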
Last updated (UTC): 2025-08-28.