URL normalization

URL normalization (or URL canonicalization) is the process by which URLs are modified and standardized in a consistent manner. The goal of the normalization process is to transform a URL into a normalized or canonical URL so it is possible to determine if two syntactically different URLs are equivalent.

Search engines employ URL normalization in order to assign importance to web pages and to reduce indexing of duplicate pages. Web crawlers perform URL normalization in order to avoid crawling the same resource more than once. Web browsers may perform normalization to determine if a link has been visited or to determine if a page has been cached.

Normalization process

There are several types of normalization that may be performed:

  • Converting the scheme and host to lower case. The scheme and host components of the URL are case-insensitive. Most normalizers will convert them to lowercase. Example:
HTTP://www.Example.com/ → http://www.example.com/
  • Adding a trailing slash. Directories are indicated with a trailing slash, which should be included in URLs. Example:
http://www.example.com → http://www.example.com/
  • Removing directory index. Default directory indexes are generally not needed in URLs. Examples:
http://www.example.com/default.asp → http://www.example.com/
http://www.example.com/a/index.html → http://www.example.com/a/
  • Capitalizing letters in escape sequences. The hexadecimal digits within a percent-encoding triplet (e.g., "%3a" versus "%3A") are case-insensitive and should be normalized to uppercase. Example:
http://www.example.com/a%c2%b1b → http://www.example.com/a%C2%B1b
  • Removing the fragment. The fragment component of a URL is usually removed. Example:
http://www.example.com/bar.html#section1 → http://www.example.com/bar.html
  • Removing the default port. The default port (port 80 for the “http” scheme) may be removed from (or added to) a URL. Example:
http://www.example.com:80/bar.html → http://www.example.com/bar.html
  • Removing dot-segments. The segments “..” and “.” are usually removed from a URL according to the algorithm described in RFC 3986 (or a similar algorithm). Example:
http://www.example.com/../a/b/../c/./d.html → http://www.example.com/a/c/d.html
  • Removing “www” as the first domain label. Some websites operate under two domain names: one whose least significant label is “www” and another whose name omits this label. For example, http://example.com/ and http://www.example.com/ may access the same website. Many websites redirect from one form to the other, but some do not; a normalizer may perform extra processing to determine whether a non-www equivalent exists and then normalize all URLs to the non-www form. Example:
http://www.example.com/ → http://example.com/
  • Sorting the variables of active pages. Some active web pages have more than one variable in the URL. A normalizer can remove all the variables with their data, sort them into alphabetical order (by variable name), and reassemble the URL. Example:
http://www.example.com/display?lang=en&article=fred → http://www.example.com/display?article=fred&lang=en
  • Removing arbitrary querystring variables. An active page may expect certain variables to appear in the querystring; all unexpected variables should be removed. Example:
http://www.example.com/display?id=123&fakefoo=fakebar → http://www.example.com/display?id=123
  • Removing default querystring variables. A page renders identically whether a variable set to its default value is present in the querystring or absent, so such variables can be removed. Example:
http://www.example.com/display?id=&sort=ascending → http://www.example.com/display
  • Removing the "?" when the querystring is empty. When the querystring is empty, there is no need for the "?". Example:
http://www.example.com/display? → http://www.example.com/display
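
The following is a minimal sketch of several of the purely syntactic steps above, using only the Python standard library. The function name normalize_url, the DEFAULT_PORTS table, and the exact subset of steps applied are illustrative assumptions rather than a standard API; the site-specific steps (removing "www", directory indexes, and arbitrary or default querystring variables) require knowledge of the individual website and are omitted here.

    import re
    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    DEFAULT_PORTS = {"http": "80", "https": "443"}  # default port per scheme

    def _remove_dot_segments(path):
        # Resolve "." and ".." segments, loosely following RFC 3986, section 5.2.4.
        segments = path.split("/")
        output = []
        for seg in segments:
            if seg == ".":
                continue
            if seg == "..":
                if len(output) > 1:  # never pop the leading "" (the root)
                    output.pop()
                continue
            output.append(seg)
        # A trailing "." or ".." names a directory, so keep a trailing slash.
        if segments[-1] in (".", "..") and output and output[-1] != "":
            output.append("")
        return "/".join(output)

    def normalize_url(url):
        scheme, netloc, path, query, _fragment = urlsplit(url)

        # Lowercase the case-insensitive components: scheme and host.
        scheme = scheme.lower()
        host, _, port = netloc.lower().partition(":")

        # Drop the port when it is the scheme's default (e.g. 80 for http).
        if port == DEFAULT_PORTS.get(scheme):
            port = ""
        netloc = host + (":" + port if port else "")

        # Remove dot-segments; an empty path becomes the trailing "/".
        path = _remove_dot_segments(path) or "/"

        # Uppercase the hex digits of percent-encoded triplets (%c2 -> %C2).
        path = re.sub(r"%[0-9a-fA-F]{2}", lambda m: m.group(0).upper(), path)

        # Sort querystring variables by name; an empty query also drops the "?".
        # Note that parse_qsl incidentally drops blank values such as "id=".
        query = urlencode(sorted(parse_qsl(query)))

        # Reassemble without the fragment.
        return urlunsplit((scheme, netloc, path, query, ""))

    print(normalize_url("HTTP://www.Example.com:80/a/b/../c/./d.html?lang=en&article=fred#s1"))
    # -> http://www.example.com/a/c/d.html?article=fred&lang=en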

Normalization based on URL lists

Some normalization rules may be developed for specific websites by examining URL lists obtained from previous crawls or web server logs. For example, if the URL

http://foo.org/story?id=xyz

appears in a crawl log several times along with

http://foo.org/story_xyz

then we may assume that the two URLs are equivalent and normalize them to one of the two forms.
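
As a first pass over such a list, one could group URLs whose fetched content produced the same fingerprint and flag each multi-URL group as a candidate for a site-specific rewrite rule. The sketch below assumes a hypothetical log format of (url, fingerprint) pairs; the function name candidate_duplicates is likewise invented for illustration.

    from collections import defaultdict

    def candidate_duplicates(crawl_log):
        # Group URLs whose fetched content produced the same fingerprint.
        by_fingerprint = defaultdict(set)
        for url, fingerprint in crawl_log:
            by_fingerprint[fingerprint].add(url)
        # Groups with more than one distinct URL suggest a site-specific
        # normalization rule mapping the variants to one canonical form.
        return [sorted(urls) for urls in by_fingerprint.values() if len(urls) > 1]

    log = [
        ("http://foo.org/story?id=xyz", "a1b2"),
        ("http://foo.org/story_xyz", "a1b2"),
        ("http://foo.org/about", "c3d4"),
    ]
    print(candidate_duplicates(log))
    # -> [['http://foo.org/story?id=xyz', 'http://foo.org/story_xyz']]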

Schonfeld et al. (2006) present a heuristic called DustBuster for detecting DUST (Different URLs with Similar Text) rules that can be applied to URL lists. They showed that once the correct DUST rules were learned and applied with a canonicalization algorithm, up to 68% of the redundant URLs in a URL list could be detected.

References
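
  • Berners-Lee, T.; Fielding, R.; Masinter, L. (January 2005). Uniform Resource Identifier (URI): Generic Syntax. IETF. RFC 3986.
  • Bar-Yossef, Z.; Keidar, I.; Schonfeld, U. (2006). "Do not crawl in the DUST: different URLs with similar text". Proceedings of the 15th International Conference on World Wide Web (WWW 2006).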
