{"files"=>["https://ndownloader.figshare.com/files/340293"], "description"=>"<div><p>A variety of functionally important protein properties, such as secondary structure, transmembrane topology and solvent accessibility, can be encoded as a labeling of amino acids. Indeed, the prediction of such properties from the primary amino acid sequence is one of the core projects of computational biology. Accordingly, a panoply of approaches have been developed for predicting such properties; however, most such approaches focus on solving a single task at a time. Motivated by recent, successful work in natural language processing, we propose to use <em>multitask learning</em> to train a single, joint model that exploits the dependencies among these various labeling tasks. We describe a deep neural network architecture that, given a protein sequence, outputs a host of predicted local properties, including secondary structure, solvent accessibility, transmembrane topology, signal peptides and DNA-binding residues. The network is trained jointly on all these tasks in a supervised fashion, augmented with a novel form of semi-supervised learning in which the model is trained to distinguish between local patterns from natural and synthetic protein sequences. The task-independent architecture of the network obviates the need for task-specific feature engineering. 
We demonstrate that, for all of the tasks that we considered, our approach leads to statistically significant improvements in performance, relative to a single-task neural network approach, and that the resulting model achieves state-of-the-art performance.</p> </div>", "links"=>[], "tags"=>["unified", "multitask", "predicting", "properties"], "article_id"=>127267, "categories"=>["Biological Sciences", "Biochemistry"], "users"=>["Yanjun Qi", "Merja Oja", "Jason Weston", "William Stafford Noble"], "doi"=>["https://dx.doi.org/10.1371/journal.pone.0032235"], "stats"=>{"downloads"=>0, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/A_Unified_Multitask_Architecture_for_Predicting_Local_Protein_Properties/127267", "title"=>"A Unified Multitask Architecture for Predicting Local Protein Properties", "pos_in_sequence"=>0, "defined_type"=>3, "published_date"=>"2012-03-26 02:01:07"}

{"files"=>["https://ndownloader.figshare.com/files/663068"], "description"=>"<p>Given an input amino acid sequence, the neural network outputs a posterior distribution over the class labels for that amino acid. This general deep network architecture is suitable for all of our prediction tasks. The network is characterized by three parts: (1) an amino acid feature extraction layer, (2) a sequential feature extraction layer, and (3) a series of classical neural network layers. The first layer consists a PSI-BLAST feature module and an amino acid embedding module. With a sliding window input (here ), the amino acid embedding module outputs a series of real valued vectors . Similarly, the PSI-BLAST module derives 20-dimensional PSI-BLAST feature vectors corresponding to the amino acids. These vectors are then concatenated in the sequential extraction layer of the network. Finally, the derived vector is fed into the classical neural network layers. The final softmax layer allows us to interpret the outputs as probabilities for each class.</p>", "links"=>[], "tags"=>["neural"], "article_id"=>333554, "categories"=>["Biological Sciences", "Biochemistry"], "users"=>["Yanjun Qi", "Merja Oja", "Jason Weston", "William Stafford Noble"], "doi"=>["https://dx.doi.org/10.1371/journal.pone.0032235.g001"], "stats"=>{"downloads"=>0, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Deep_neural_network_architecture_/333554", "title"=>"Deep neural network architecture.", "pos_in_sequence"=>0, "defined_type"=>1, "published_date"=>"2012-03-26 00:59:14"}

{"files"=>["https://ndownloader.figshare.com/files/663361"], "description"=>"<p>The table lists, for each prediction task, the per-residue percent accuracy achieved via single-task training of the neural network with just the PSI-BLAST features (“Single”), single-task training that includes the amino acid embedding (“Embed”), multitask training just using the PSI-BLAST features (“Multi”), multitask training including the amino acid embedding (“Multi-Emb”), multitask training of one task along with the natural protein task (“NP”), multitask training without the PSI-BLAST embedding module but initializing the amino acid embedding by using the natural protein task (“NP only”), multitask training including the natural protein task (“All3”), “All3” with Viterbi post-processing (“All3+Vit”) and a previously reported method (“Previous”). Each row corresponds to a single task. The -value column indicates whether the difference between “Single” and “All3+Vit” is significant, according to a Z-test. The “CV” column is computed based on the accuracies separately for each cross-validation fold. It counts the percentage of CV folds in which the “All3+Vit” method outperforms the “Single” method. Rows labeled “(prot)” or “(seg)” report the protein- or segment-level accuracy, rather than residue-level accuracy. 
For the “NP” setting, the “*” in the “Embedding?” row indicates that this network uses the pre-trained embedding layer from the natural protein task.</p>", "links"=>[], "tags"=>["strategies", "percent"], "article_id"=>333851, "categories"=>["Biological Sciences", "Biochemistry"], "users"=>["Yanjun Qi", "Merja Oja", "Jason Weston", "William Stafford Noble"], "doi"=>["https://dx.doi.org/10.1371/journal.pone.0032235.t002"], "stats"=>{"downloads"=>0, "page_views"=>0, "likes"=>0}, "figshare_url"=>"https://figshare.com/articles/_Comparison_of_learning_strategies_based_on_percent_accuracy_/333851", "title"=>"Comparison of learning strategies based on percent accuracy.", "pos_in_sequence"=>0, "defined_type"=>3, "published_date"=>"2012-03-26 01:04:11"}