Last updated (UTC): 2025-08-04.

# Pagination Best Practices for Google | Google Search Central

Pagination, incremental page loading, and their impact on Google Search
=======================================================================

You can improve the experience of users on your site by displaying a subset of results to improve page performance, but you may need to take action to ensure that the Google crawler can find all of your site's content.

For example, you may display a subset of available products in response to a user using the search box on your ecommerce site; the full set of matches may be too large to display on a single web page, or take too long to retrieve.

Beyond search results, you may load partial results on your ecommerce site for:

- Category pages where all products in a category are displayed
- Blog posts or newsletter titles that a site has published over time
- User reviews on a product page
- Comments on a blog post

Having your site load content incrementally, in response to user actions, can benefit your users by:

- Improving the user experience, as the initial page load is faster than loading all results at once.
- Reducing network traffic, which is particularly important for mobile devices.
- Improving backend performance by reducing the volume of content retrieved from databases or similar systems.
- Improving reliability by avoiding excessively long lists that may hit resource limits, leading to errors in the browser and backend systems.

Selecting the best UX pattern for your site
-------------------------------------------

To display a subset of a larger list, you can choose between different UX patterns:

- **Pagination**: The user can use links such as "next", "previous", and page numbers to navigate between pages that display one page of results at a time.
- **Load more**: Buttons that the user can click to extend an initial set of displayed results.
- **Infinite scroll**: The user can scroll to the end of the page to cause more content to be loaded. (Learn more about [infinite scroll search-friendly recommendations](/search/docs/crawling-indexing/javascript/lazy-loading#paginated-infinite-scroll).)

Consider the following table when choosing the most suitable user experience for your site.

| UX pattern | Pros | Cons |
|------------|------|------|
| Pagination | Gives users insight into result size and current position | More complex controls for users to navigate through results; content is split across multiple pages rather than being a single continuous list; viewing more requires new page loads |
| Load more | Uses a single page for all content; can inform the user of the total result size (on or near the button) | Can't handle very large numbers of results, as all of the results are included on a single web page |
| Infinite scroll | Uses a single page for all content; intuitive, as the user just keeps scrolling to view more content | Can lead to "scrolling fatigue" because of unclear result size; can't handle very large numbers of results |

How Google indexes the different strategies
-------------------------------------------

Once you've selected the most appropriate UX strategy for your site and SEO, make sure the Google crawler can find all of your content.

For example, you can implement pagination using links to new pages on your ecommerce site, or using JavaScript to update the current page. "Load more" and infinite scroll are generally implemented using JavaScript. When crawling a site to find pages to index, Google generally crawls URLs found in the `href` attribute of `<a>` elements. Google's crawlers don't "click" buttons and generally don't trigger JavaScript functions that require user actions to update the current page contents.

If your site uses JavaScript, follow these [JavaScript SEO best practices](/search/docs/crawling-indexing/javascript/javascript-seo-basics). In addition to best practices, such as making sure links on your site are crawlable, consider using a [sitemap](/search/docs/crawling-indexing/sitemaps/build-sitemap) file or a [Google Merchant Center feed](https://support.google.com/merchants/answer/7439058) to help Google find all of the products on your site.

Best practices when implementing pagination
-------------------------------------------

To make sure Google can crawl and index your paginated content, follow these best practices:

- [Link pages sequentially](#sequentially)
- [Use URLs correctly](#use-urls-correctly)
- [Avoid indexing URLs with filters or alternative sort orders](#avoid-indexing-variations)

### Link pages sequentially

To make sure search engines understand the relationship between pages of paginated content, include links from each page to the following page using `<a href>` tags. This can help Googlebot (the Google web crawler) find subsequent pages.

In addition, consider linking from all individual pages in a collection back to the first page of the collection to emphasize the start of the collection to Google. This can give Google a hint that the first page of a collection might be a better landing page than the other pages in the collection.

> **Note:** Normally, we recommend that you give web pages distinct titles to help differentiate them. However, pages in a paginated sequence don't need to follow this recommendation: you can use the same titles and descriptions for all pages in the sequence. Google tries to recognize pages in a sequence and index them accordingly.

### Use URLs correctly

- **Give each page a unique URL**. For example, include a `?page=n` query parameter, as URLs in a paginated sequence are treated as separate pages by Google.
- **Don't use the first page of a paginated sequence as the canonical page**. Instead, give each page its own [canonical URL](/search/docs/crawling-indexing/consolidate-duplicate-urls).
- **Don't use URL fragment identifiers (the text after a `#` in a URL) for page numbers in a collection**. Google ignores fragment identifiers. If Googlebot sees a URL to the next page that differs only in the text after the `#`, it may not follow the link, thinking it has already retrieved the page.
- **Consider using [preload, preconnect, or prefetch](https://web.dev/learn/performance/resource-hints)** to optimize the performance for a user moving to the next page.

> **Note:** In the past, Google used `<link rel="next" href="...">` and `<link rel="prev" href="...">` to identify next page and previous page relationships. Google no longer uses these tags, although they may still be used by other search engines.

### Avoid indexing URLs with filters or alternative sort orders

You may choose to support filters or different sort orders for long lists of results on your site. For example, you may support `?order=price` on URLs to return the same list of results ordered by price.

To avoid indexing variations of the same list of results, block unwanted URLs from being indexed with the [`noindex` robots `meta` tag](/search/docs/crawling-indexing/block-indexing), or discourage crawling of particular [URL patterns with a robots.txt file](/search/docs/crawling-indexing/robots/robots_txt#url-matching-based-on-path-values).
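As an illustrative sketch of these URL and linking practices, page 2 of a paginated category might be marked up as follows. The `/category/shoes` path, the `example.com` domain, and the `?page=n` parameter name are hypothetical; substitute your own URL scheme.

```html
<!-- Hypothetical markup for page 2 of a paginated category list. -->
<html>
<head>
  <title>Shoes - Example Store</title>
  <!-- Each page in the sequence has its own self-referential canonical URL,
       not the URL of the first page. -->
  <link rel="canonical" href="https://example.com/category/shoes?page=2">
  <!-- Optional: hint the browser to fetch the likely next page early. -->
  <link rel="prefetch" href="/category/shoes?page=3">
</head>
<body>
  <!-- ...products 21-40... -->
  <nav>
    <!-- Plain <a href> links that Googlebot can crawl; the page number lives
         in a query parameter, not in a #fragment (which Google ignores). -->
    <a href="/category/shoes?page=1">First</a>
    <a href="/category/shoes?page=1">Previous</a>
    <a href="/category/shoes?page=3">Next</a>
  </nav>
</body>
</html>
```

Because the "First" and "Previous" links are ordinary `<a href>` elements, Googlebot can follow the sequence in either direction without executing any JavaScript.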
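For the filtered and sorted variants, a minimal robots.txt sketch might look like the following, assuming the `?order=price` parameter from the example above (the wildcard pattern is an illustration; adjust it to match your own parameter names):

```
# robots.txt: discourage crawling of sorted variants of list pages.
# The "order" parameter name follows the ?order=price example above.
User-agent: Googlebot
Disallow: /*?*order=
```

Alternatively, serve a `noindex` robots `meta` tag (`<meta name="robots" content="noindex">`) on the variant pages themselves. Use one mechanism or the other for a given URL: if robots.txt blocks the URL, Googlebot never fetches the page and so never sees its `noindex` tag.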