Link rot (or linkrot) is the process by which hyperlinks on individual websites or the Internet in general point to web pages, servers or other resources that have become permanently unavailable. The phrase also describes the effects of failing to update out-of-date web pages that clutter search engine results. Research [1] [2] shows that the half-life of a random webpage is two years.

Terminology

Link rot is also called “link death”, “link breaking” or “reference rot”. A link that no longer works is called a “broken link”, “dead link”, or “dangling link”. Formally, this is a form of dangling reference.

Causes

One of the most common causes of a broken link is that the target page no longer exists; in this case the web server typically responds with an HTTP 404 (“Not Found”) error. Another type of dead link occurs when the server hosting the target page stops working or the site relocates to a new domain name. The browser may return a DNS error or display a site unrelated to the content originally sought. The latter can occur when a domain name is reregistered by another party. Other reasons for broken links include:

Websites can be restructured or redesigned, or the underlying technology can be changed, altering or invalidating large numbers of inbound or internal links.

Many news sites keep articles freely accessible for only a short period and then move them behind a paywall. This causes a significant loss of supporting links on sites that cite news articles as references.

Links may expire.

Content may be intentionally removed by the owner.

Links may be removed as a result of legal action, such as a court order.

Links to content on social media sites such as Facebook and Tumblr are prone to link rot because of frequent changes in user privacy settings, the deletion of accounts, search results pointing to a dynamic page whose results now differ from the cached result, or the deletion of links or photos.

Links can contain ephemeral, user-specific information, such as session or login data. Because these are not universally valid, the result can be a broken link.

A link might be broken because of some form of blocking such as filters or firewalls .

A website may be closed or taken down, invalidating the links which are pointing to it.

A website might change its domain name. Links pointing to the old name might then become invalid.

Dead links can also occur on the authoring side, when website content is assembled or copied without verifying that the link targets are still valid.

Prevalence

The 404 “Not Found” response is familiar to even the occasional web user. A number of studies have examined the prevalence of link rot on the web, in academic literature, and in digital libraries. [3] In a 2003 experiment, Fetterly et al. discovered that about one link out of every 200 disappeared each week from the Internet. McCown et al. (2005) discovered that half of the URLs cited in D-Lib Magazine articles were no longer available, and other studies found similar decay in scholarly literature (Spinellis, 2003; Lawrence et al., 2001). Nelson and Allen (2002) examined link rot in digital libraries and found that about 3% of objects were no longer accessible after one year. In 2014, bookmarking site Pinboard owner Maciej Cegłowski reported a “pretty steady rate” of 5% link rot per year. [4]

A 2014 Harvard Law School study by Jonathan Zittrain, Kendra Albert and Lawrence Lessig determined that approximately 50% of the URLs in US Supreme Court opinions no longer link to the original information. [5] They also found that in a selection of legal journals published between 1999 and 2011, more than 70% of the links no longer functioned as intended. A 2013 study in BMC Bioinformatics analyzed nearly 15,000 links in abstracts from Thomson Reuters’ Web of Science citation index and found that the median lifespan of web pages was 9.3 years, and just 62% were archived. [6] In August 2015, Weblock analyzed more than 180,000 links from references in the full-text corpora of three major open-access publishers and found that 24.5% of the links cited were no longer available. [7]
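The “half-life of two years” cited above can be read as a simple exponential-decay model. As an illustration only (the studies above measure different corpora and rates, and the function below is not from any of them), the surviving fraction of links after a given number of years would be:

```python
def surviving_fraction(years, half_life=2.0):
    """Fraction of links still reachable after `years`, under a simple
    exponential-decay model with the cited two-year half-life.
    Illustrative model only, not a measured result."""
    return 0.5 ** (years / half_life)

# With a two-year half-life: 50% of links survive 2 years, 25% survive 4.
```

Under this model the Pinboard figure of roughly 5% rot per year would correspond to a considerably longer half-life, which is consistent with the studies disagreeing about the decay rate.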

Discovering

Discovering broken links may be done manually or automatically. Automated methods, including plugins for WordPress, Drupal and other content management systems, can be used to detect the presence of broken URLs. Another option is a dedicated broken-link checker such as Xenu’s Link Sleuth. However, even if a URL returns an HTTP 200 (OK) response, the page may be reachable while its contents have changed, so manual checking of links is still necessary. Some web servers also return a soft 404, reporting to computers that the link works even though it does not. Bar-Yossef et al. (2004) [8] developed a heuristic for automatically discovering soft 404s.
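An automated checker typically combines the HTTP status code with heuristics for soft 404s. A minimal sketch in Python follows; the function name and the redirect-to-homepage heuristic are illustrative assumptions, not taken from any particular tool or from Bar-Yossef et al.'s method:

```python
from urllib.parse import urlparse

def classify_link(status_code, requested_url, final_url):
    """Classify a checked hyperlink.

    status_code:   HTTP status after following redirects.
    requested_url: the URL that was checked.
    final_url:     the URL the server ultimately served.
    Returns "dead", "soft-404", or "ok".
    """
    if status_code >= 400:                      # 404, 410, 5xx: clearly broken
        return "dead"
    req, fin = urlparse(requested_url), urlparse(final_url)
    # Soft-404 heuristic: a deep link that silently redirects to the
    # site's front page usually means the original resource is gone.
    if req.netloc == fin.netloc and req.path not in ("", "/") and fin.path in ("", "/"):
        return "soft-404"
    return "ok"
```

In practice a checker would also compare the served content against a known-missing page, as the article notes that a 200 response alone proves little.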

Combating

There are many solutions for tackling broken links: some work to prevent them in the first place, while others try to resolve them after they have occurred. Many tools have been developed to help combat link rot.

Authoring

Carefully select and implement hyperlinks, and check them regularly after publication. Best practices include linking to primary sources. McCown et al. (2005) suggest avoiding URLs that point to resources on researchers’ personal pages.

Always look for the most compact and direct URL available, and ensure that it’s clean, with no further information after the core of the URL. [9] This process is often referred to as URL normalization or URL canonicalization.
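A minimal sketch of such cleanup using Python’s standard library follows. The specific rules shown (lowercasing the scheme and host, dropping the query string and fragment, trimming trailing slashes) are illustrative assumptions; real canonicalization policies vary by site:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url):
    """Reduce a URL to a compact canonical form.

    Illustrative rules only: lowercase the scheme and host, drop the
    fragment and query string, and strip trailing slashes from
    non-root paths."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"    # keep "/" for the site root
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path, "", ""))

# normalize_url("HTTP://Example.COM/Articles/?utm_source=feed#top")
# → "http://example.com/Articles"
```

Dropping the query string is safe for tracking parameters but would break links whose content depends on the query, so any such rule set needs to be tuned per site.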

Whenever possible, use persistent identifiers (URLs designed for durability) such as ARKs, DOIs, Handle System references, and PURLs.

Avoid linking to PDF documents if possible. Because PDFs are documents rather than web pages, their content cannot be translated into safe codes for URLs. Large PDFs may also download slowly and cause a timeout error. [9]

Server side

Never change URLs and never remove pages. If a page must be removed, such as when a news site redacts a story, replace it with a message explaining the removal.

When URLs change, use redirection mechanisms such as “301 Moved Permanently” to automatically refer browsers and crawlers to the new location.

Content management systems may offer built-in facilities for managing links, for example updating internal links automatically when content is moved or renamed.

WordPress guards against link rot by redirecting non-canonical URLs to their canonical versions. [10]

IBM’s Peridot attempts to automatically fix broken links.

Permalinking stops broken links by guaranteeing that the content will never move. Another form of permalinking is linking to a permalink that redirects to the actual content; the real content can then be moved or reorganized while links pointing to the permalink stay intact.

Design URLs (for example, semantic URLs) so that they will not need to change when a different technology is used to serve the document. [11]
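The redirect advice above can be sketched as a tiny WSGI application that answers retired paths with a 301 instead of a 404. The redirect table and paths here are hypothetical:

```python
# Hypothetical table mapping retired paths to their new homes.
REDIRECTS = {
    "/old/article": "/articles/2014/link-rot",
    "/about.html": "/about",
}

def app(environ, start_response):
    """Minimal WSGI app: issue "301 Moved Permanently" for moved paths
    so browsers and crawlers are sent to the new location."""
    path = environ.get("PATH_INFO", "/")
    if path in REDIRECTS:
        start_response("301 Moved Permanently",
                       [("Location", REDIRECTS[path])])
        return [b""]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not Found"]
```

In production the same mapping would usually live in the web server configuration (e.g. rewrite rules) rather than application code, but the status code and Location header are the essential parts.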

User side

The Linkgraph widget attempts to retrieve the correct page for a broken URL.

The Google 404 Widget attempts to “guess” the correct URL, and also provides the user with a search box to find the missing page.

When a user receives a 404 response, the Google Toolbar attempts to assist the user in finding the missing page. [12]

Web archiving

Main article: Web archiving

To fight link rot, web archivists are actively engaged in collecting the Web, or particular portions of the Web, and ensuring the collection is preserved in an archive, such as an archive site, for future researchers, historians, and the public. The goal of the Internet Archive is to maintain an archive of the entire Web, taking periodic snapshots of pages that can be accessed via the Wayback Machine. In January 2013 the organization announced that it had reached the milestone of 240 billion archived URLs. [13] National libraries, national archives and other organizations are also involved in archiving culturally important Web content.

Individuals may use a number of tools to preserve web resources that might otherwise disappear:

The “Wayback Machine”, at the Internet Archive, [14] is a free website that archives old web pages. It does not archive websites whose owners have stated they do not want their website archived.

Archive.is, an archive site which stores snapshots of web pages. Unlike WebCite, it includes Web 2.0 sites such as Google Maps and Twitter.

Perma, which is supported by the Harvard Law School together with a broad coalition of university libraries, takes a snapshot of a URL’s content and returns a permanent link. [5]

The Hiberlink project, a collaboration between the University of Edinburgh, the Los Alamos National Laboratory and others, is working to measure the extent of “reference rot” in online academic articles. [15] A related project, Memento, has established a technical standard for accessing online content as it existed in the past. [16]

Some social bookmarking websites allow users to make online clones of any web page on the Internet, creating a copy at an independent URL which remains online even if the original page goes down.

Amber, created by the Harvard Berkman Center, is a tool that fights link rot by archiving links on WordPress and Drupal websites, guarding against web censorship and bolstering content preservation. [17]

However, such preservation systems may themselves go offline, so that the archived URLs are intermittently unavailable. [18]
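Archived snapshots such as those in the Wayback Machine can also be located programmatically through the Internet Archive's public availability API (`https://archive.org/wayback/available`). A minimal sketch follows; to keep it self-contained, parsing is shown against a canned response rather than a live network request:

```python
import json
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a query URL for the availability API, which reports the
    closest archived snapshot of a page.  `timestamp` is an optional
    YYYYMMDDhhmmss prefix, e.g. "20140101"."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urlencode(params)

def closest_snapshot(response_text):
    """Extract the snapshot URL from the API's JSON response,
    or None if the page was never archived."""
    data = json.loads(response_text)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None
```

Fetching `availability_query(...)` with any HTTP client and passing the body to `closest_snapshot` yields either a `web.archive.org` URL to substitute for a dead link, or `None` when no snapshot exists.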
