Post-launch Monitoring
To ensure a good user experience, Google checks that the provided action links are operational and comply with Place Action policies. To do so, Google uses a combination of human and automated review.
Manual checks
Google's review team is globally distributed. To ensure timely onboarding and post-launch support, it's recommended that action links not be geo-blocked in the regions where this team operates.
Automated checks (Crawlers)
The web crawler for Appointments Redirect will periodically access your action links. If the web crawler receives a 4xx or 5xx status code from an action link, the link will be disabled and listed in the Place Actions Data Quality dashboard.
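Because any 4xx or 5xx response causes a link to be disabled, it can help to probe your own action links with the same rule the crawler applies. A minimal sketch (the helper names are illustrative, not part of any Actions Center API):

```python
# Sketch: classify an HTTP status the way the Appointments Redirect
# crawler does. Any 4xx/5xx response will get the action link disabled,
# so statuses in those ranges should trigger your own alerting.

def link_would_be_disabled(status_code: int) -> bool:
    """Return True if this status would cause the crawler to disable the link."""
    return 400 <= status_code <= 599

def check_action_link(url: str, fetch) -> None:
    """Probe one action link; `fetch` is a placeholder for your HTTP client
    (e.g. a function returning the response status code)."""
    status = fetch(url)
    if link_would_be_disabled(status):
        raise RuntimeError(f"{url} returned {status}; link may be disabled")
```

Running such a check on a schedule lets you catch outages before they surface in the Place Actions Data Quality dashboard.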
Web crawler detection
To ensure that the web crawler is not banned (which would cause your action link to be disabled), make sure your system allows our web crawler to query your page at all times. To identify our web crawler:
- The web crawler User-Agent will contain the string Google-Appointments
- Example: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko; Google-Appointments) Chrome/104.0.5112.101 Safari/537.36
- You may also check whether the calls come from Google using reverse DNS, as recommended in "Verifying Googlebot and other Google crawlers". In our specific case, the reverse DNS resolution follows this pattern: google-proxy-***-***-***-***.google.com
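The two checks above can be sketched as follows. The User-Agent substring test is cheap but forgeable; the reverse DNS check (with a forward-confirming lookup, as the Googlebot verification guide recommends) is stronger. The function names and regex are assumptions for illustration:

```python
import re
import socket

UA_MARKER = "Google-Appointments"
# Matches hostnames of the form google-proxy-***-***-***-***.google.com
REVERSE_DNS_PATTERN = re.compile(r"^google-proxy-[\w-]+\.google\.com\.?$")

def is_appointments_crawler_ua(user_agent: str) -> bool:
    """Cheap first check: the crawler's User-Agent contains Google-Appointments."""
    return UA_MARKER in user_agent

def is_google_proxy_ip(ip: str) -> bool:
    """Stronger check: reverse DNS of the caller resolves to
    google-proxy-*.google.com, and the forward lookup of that hostname
    maps back to the same IP (forward-confirmed reverse DNS)."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)           # reverse lookup
        if not REVERSE_DNS_PATTERN.match(host):
            return False
        forward_ips = socket.gethostbyname_ex(host)[2]  # forward-confirm
        return ip in forward_ips
    except OSError:
        return False
```

A typical setup allowlists requests that pass both checks so that rate limiters and bot-protection layers never block the crawler.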
Caching
To reduce load on the partner website, our crawlers are generally configured to respect all standard HTTP caching headers present in the response. That means that for correctly configured websites, we avoid repeatedly fetching content that changes rarely (for example, JavaScript libraries). For more details on how to implement caching, read this HTTP caching documentation.
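In practice this means serving long-lived Cache-Control values for rarely changing assets and revalidation directives for dynamic pages. A sketch of one way to choose headers (the values and file extensions are illustrative, not prescribed by Google):

```python
# Sketch: caching headers that let the crawler (and browsers) reuse
# rarely-changing assets. The max-age values here are assumptions.

def caching_headers(path: str) -> dict:
    if path.endswith((".js", ".css", ".png", ".woff2")):
        # Static assets: cache for a week; pair with fingerprinted
        # filenames (e.g. app.3f9a.js) so updates bust the cache.
        return {"Cache-Control": "public, max-age=604800"}
    # Booking pages themselves: force revalidation on every request
    # so the crawler always sees current availability.
    return {"Cache-Control": "no-cache"}
```

With headers like these, repeat crawler fetches of static content become conditional requests that your server can answer with 304 Not Modified.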
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-27 UTC.