We have been seeing an increasing number of deep learning questions lately. Not only does the field represent one of the major breakthroughs of this decade in image recognition and other areas, but easily accessible resources and open-source frameworks for working with deep neural networks are also widely available.

We can certainly outline examples of deep learning questions that are on-topic and acceptable. They are usually focused on how to achieve a specific outcome using a framework such as TensorFlow, Theano, or a higher-level API built on top of one of these frameworks.

On the other hand, there has also been a surge of questions that are simply off-topic by our standards, yet do not receive enough close votes, and attract equally poor answers. Often, they merely ask for open-ended advice on improving, designing, or training neural networks. Answers to these questions are frequently of low quality and end up providing little more than opinionated suggestions. Such questions are clearly off-topic and should be closed as "too broad" or as recommendation questions; migrating them to another Stack Exchange site (Cross Validated, Data Science, or Artificial Intelligence) may not be appropriate unless they are improved significantly.

We also have questions that show research and are narrowed down to a particular concern, but can only be answered with a technology-agnostic explanation of an algorithm or technique commonly employed in neural networks and deep learning. Likewise, these may attract opinionated answers. I often end up voting to migrate them to Cross Validated or writing a custom close reason, but this isn't always an obvious decision.

Right now, I can only conclude that the small but steadily growing sub-community around [deep-learning] and related tags ([tensorflow], [keras], [tflearn], [neural-network], [conv-neural-network], and more) might just not have enough members doing the necessary maintenance of voting to close when appropriate. This cross-meta question gives a good explanation of why deep learning questions can be tricky to triage on the Stack Exchange network: while the field has a statistical foundation and can be placed as a subcategory of machine learning, designing and developing these models resembles an engineering task. In fact, deep learning has also been called an art, one involving a lot of stirring of the pile in the form of hyper-parameter tweaking, regularization, and data augmentation, to name a few. Of course, this doesn't change the fact that we have defined a scope in which "pure" machine learning questions are not included.

My main concern here is that the line between on-topic and off-topic deep learning questions should be made more visible in the form of close votes, before the situation gets out of hand as deep learning becomes more popular.
Are we simply facing a lack of moderation around poor deep learning questions? Should we (and is there a way to) raise awareness in this sub-community about which deep learning questions should be asked here and which should be asked on another site instead?

I can spend a little more time here, because I work in this area, but at first glance the tag doesn't look like a particular problem. I closed or deleted a few obvious stinkers, but there seems to be a decent number of questions with code, clear explanations, and subject matter appropriate for the site. It's certainly not the trash fire that [kali-linux] is: meta.stackoverflow.com/questions/331586/…, for example.
– Brad Larson♦ Jul 18 '17 at 14:35

@BradLarson The [deep-learning] tag isn't necessarily the culprit here. [tensorflow], even without the former tag, also attracts its fair share of off-topic questions, which sometimes get answered as if they weren't.
– E_net4 Jul 18 '17 at 15:37