The Deep Web, also called the Deep Net, Invisible Web, or Hidden Web, is the portion of content on the World Wide Web that is not indexed by standard search engines.

Examples of this are content behind logins (with or without a paywall), dynamically generated content, file formats that search engines cannot read, and URLs which cannot be found by search engine spiders because they are not linked anywhere.

However, the current example questions seem to be about a different definition of the term, which I have noticed being used in certain circles: websites served through anonymization systems like TOR, Freenet or I2P.

These two definitions contradict each other: the TOR network, for example, can be accessed by anyone who installs the TOR software. Also, there is no technical reason why it cannot be spidered by search engines like Google. The only reason you don't see .onion search results from normal search engines is that they believe those results are unlikely to be relevant for most users, because most users don't use TOR. In fact, there are search engines which spider TOR hidden services exclusively (which are usually hidden services themselves).
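To illustrate the point that nothing technically prevents spidering: a crawler only needs to recognize .onion hosts and route those requests through Tor's local SOCKS proxy. A minimal stdlib sketch, assuming Tor's default SOCKS port 9050 (the helper names and the example address are illustrative, not from the discussion above):

```python
from urllib.parse import urlsplit

# Tor's default local SOCKS port; the "socks5h" scheme tells a
# SOCKS-aware HTTP client to let the proxy resolve hostnames,
# which .onion names require (they don't exist in public DNS).
TOR_SOCKS_PROXY = "socks5h://127.0.0.1:9050"

def is_onion(url: str) -> bool:
    """True if the URL points at a Tor hidden service."""
    host = urlsplit(url).hostname or ""
    return host.endswith(".onion")

def proxies_for(url: str) -> dict:
    """Proxy settings a spider would hand to its HTTP client:
    route hidden services through Tor, fetch everything else directly."""
    if is_onion(url):
        return {"http": TOR_SOCKS_PROXY, "https": TOR_SOCKS_PROXY}
    return {}
```

A SOCKS-capable HTTP client (for example, `requests` with PySocks installed) could then fetch .onion pages with these proxy settings like any other URL, which is all a crawler needs to do.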

This was asked a couple of months ago. Still no answer. That's slightly disconcerting.
– Mast Nov 16 '15 at 20:20

I actually think the Wikipedia definition is incorrect because of its ambiguity. Deep web always refers to the commonly unreachable websites run on a separate grid of computers from the main Internet. So this question is purposeless in my opinion.
– Adam Dec 30 '15 at 23:06

Also, there is no technical reason why it cannot be spidered by search engines like Google

If this is hidden, would it not fall under the definition in the proposal? Regardless of Google's technical ability, if indexing this content is a service they as a search engine provider do not offer, then that content is unavailable through them, and its state will remain hidden until Google or other search engine providers choose to change their service offering.

You also stated:

In fact, there are search engines which spider TOR hidden services exclusively (which are usually hidden services themselves)

These services would indeed qualify as hidden: in order to use them, a user is required to install software that provides access. So those particular services, search engines for Tor-only websites, would fall under the proposal's wording in my opinion.

Strictly speaking, the Deep Web is best defined as any portion of the internet which is not indexed by search engines, or which cannot easily be indexed by them. By this definition, any web page hidden behind a user login, firewall, paywall, etc. is part of the Deep Web. The definition does not say a website is part of the deep web only if it is impossible to index; it requires only that the site is not presently indexed or cannot easily be.

Whether a website is indexed can be by design (robots.txt restrictions), or a consequence of how the site is linked from other sites (.onion websites are frequently not linked to from the open web, given their dependency on the Tor network). I would personally suggest that both unindexed deep web pages and hidden network (Tor) sites be classified as dark web, and so both could be discussed on the dark web SE site.
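The "unindexed by design" case above can be made concrete: a site publishes robots.txt rules, and a well-behaved spider consults them before fetching. A minimal sketch using Python's standard `urllib.robotparser` (the domain and the rules are hypothetical examples):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site might publish to keep part of
# itself out of search indexes by design.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = RobotFileParser()
parser.parse(rules)

# A well-behaved spider checks the rules before fetching each URL;
# anything disallowed here stays unindexed, i.e. in the deep web.
print(parser.can_fetch("*", "http://example.com/private/page"))  # disallowed
print(parser.can_fetch("*", "http://example.com/public/page"))   # allowed
```

Note this is purely advisory: the pages under /private/ are still publicly reachable, they are simply not indexed, which is exactly the distinction the strict definition turns on.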