EAGLE: alignment-free method to compute relative absent words (RAWs)

About

EAGLE is an alignment-free method and associated program to compute relative absent words (RAWs) in genomic sequences using a reference sequence. Currently, EAGLE runs in a Linux command-line environment, building an image (in SVG) that depicts the regions of the absent words and reporting the associated positions to a file. EAGLE includes scripts to run on the current outbreak and on the other existing Ebola virus genomes (using the human genome as reference), covering the download, filtering and processing of the entire data set.
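In essence, a RAW is a word that occurs in a target sequence but nowhere in the reference. A minimal Python sketch of this idea (k-mer based, and far simpler than the actual EAGLE implementation, which works at genome scale):

```python
def kmers(seq, k):
    """Yield all overlapping k-mers of a sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def relative_absent_words(target, reference, k):
    """Return k-mers present in the target but absent from the reference,
    together with the positions where they occur in the target."""
    ref_words = set(kmers(reference, k))
    hits = {}
    for i in range(len(target) - k + 1):
        w = target[i:i + k]
        if w not in ref_words:
            hits.setdefault(w, []).append(i)
    return hits

raws = relative_absent_words("ACGTACGGT", "ACGTACGT", 3)
# "CGG" and "GGT" occur in the target but nowhere in the reference
```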

The DICOM Validator is a web-based solution for evaluating the compliance of PACS applications with the DICOM standard. It follows the “as-a-service” business model, which allows users to immediately reach their goals without the extensive setup efforts required by similar solutions. The DICOM Validator is also a community-driven initiative, where users all around the world are invited to contribute to the creation and maintenance of the DICOM module definitions. With your help, we will soon reach full coverage of the DICOM Standard and keep up with its latest revisions.

Sensing a person in physical context enables personalized and predictive responses, and is a major step towards a smarter and safer environment. The main objective of SOCA is to create an open innovation ecosystem where data is gathered from multiple sources, processed, integrated, and made available to applications and users, creating a service sphere able to assist every individual inside it – from personal health to routine daily chores. For this endeavor, the academic campus provides the perfect framework to support and trial innovations in the smart city and assisted living arenas.

SCALEUS is a data migration tool that can be used on top of traditional systems to enable semantic web features. This user-friendly tool helps users easily create new semantic web applications from scratch. Targeted at the biomedical domain, this web-based platform offers, in a single package, a high-performance database, data integration algorithms and optimized text searches over the indexed resources. SCALEUS is available as open source at http://bioinformatics-ua.github.io/scaleus/.

Ann2RDF is an interoperable semantic layer that unifies text-mining results originating from different tools, information extracted by curators, and baseline data already available in reference knowledge bases, enabling proper exploration using semantic web technologies. This results in a more suitable transition process, in which the desired annotations are enriched with the possibility of being shared, compared and reused across semantic knowledge bases. Ann2RDF is available at http://bioinformatics-ua.github.io/ann2rdf/.

I2X is a reactive and event-driven framework that simplifies and automates real-time data integration and interoperability. This platform streamlines the creation of customizable integration tasks connecting heterogeneous data sources with any kind of service. Integration is poll-based, with intelligent agents monitoring data sources, or push-based, where the platform waits for data submission from external resources. I2X delivers data to services through a comprehensive template engine, where the platform maps data from the original data source to the destination resources. I2X is an open-source framework available online at https://bioinformatics.ua.pt/i2x/.
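The template-engine mapping can be illustrated with a minimal sketch (the record fields and payload layout below are invented for the example; the actual platform is far more general):

```python
from string import Template

# Hypothetical mapping template: destination payload fields are filled
# from a record produced by a monitored data source.
template = Template('{"patient": "$name", "measure": "$value"}')

def integrate(record, tpl):
    """Render a destination payload from a source record via the template."""
    return tpl.substitute(record)

payload = integrate({"name": "p01", "value": "120"}, template)
# payload == '{"patient": "p01", "measure": "120"}'
```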

TASKA consists of a modular platform that allows collaboration between different users through a user-friendly web-based interface, while keeping a strong focus on the relation between the tasks that users perform.

MONTRA is a rapid-application development framework designed to facilitate the integration and discovery of heterogeneous objects that may be characterized by distinct data structures. Initially designed as a framework that allows biomedical researchers to easily set up dynamic workspaces, where they can publish and share sensitive information about their data entities, MONTRA is suitable for any data domain, allowing the characterisation of the most diverse entities or groups of entities (datasets). Through the use of a common skeleton, it automatically generates a fully-fledged web data catalogue, ensuring data privacy protection.

NETDIAMOND – New Targets in Diastolic Heart Failure: from Comorbidities to Personalized Medicine

Funding entity: P2020/PAC
Period: 2016-2019

Heart failure (HF) is a highly prevalent syndrome of impaired cardiac function that constitutes the main cause of hospitalization and disability amongst the elderly, and a leading cause of mortality, morbidity and resource consumption. HF with preserved ejection fraction (HFpEF) is characterized by preserved ejection, impaired cardiac filling, lung congestion and effort intolerance, accounting for a rising proportion of over 50% of cases due to ageing and the increasing incidence of systemic arterial hypertension (SAH), obesity and diabetes mellitus (DM). The current proposal sets forth to address this issue through a mixed strategy: a discovery-science approach, via comprehensive multi-omics studies in plasma and tissues from HFpEF patients and animal models with and without comorbidities (DM, SAH and obesity), and a hypothesis-driven approach focusing on disturbances of cell function and communication in endothelial cells (EC), cardiac fibroblasts (CF), adipocytes and cardiomyocytes (CM). A holistic view of HFpEF and of the role of comorbidities will be achieved by correlating and integrating transcriptomics, proteomics and lipidomics studies with clinical data. The impact on CM and the myocardium will be comprehensively assessed in vitro and in vivo. Finally, preclinical testing of functional foods, synthetic antioxidants, putative therapeutic molecules with enhanced bioavailability, as well as other potentially effective gene targets identified along the project’s course, will be carried out.

Digital medical imaging systems are, nowadays, essential tools in clinical practice, both in decision support and in treatment management. The main objective of this project is to investigate new solutions for extracting, merging and searching over multimodal data, including text (DICOM metadata and diagnosis reports) and image information. Relevance feedback will also be investigated, to increase the quality of the results of the proposed multimodal architecture. It is also our aim to investigate the contribution of semantic information to imaging retrieval and information extraction. We will develop a semantic PACS concept to provide search functionality using context-dependent semantic information.

Diabetic Retinopathy (DR) is a leading cause of blindness in the industrialized world that can be avoided with early treatment, which demands diagnosis at a stage where treatment is still possible and effective. DR evolves silently, without any visual symptoms during the early stages of the disease.
Under this context, the vision of the consortium SCREEN-DR is to create a distributed and automatic screening platform for DR, based on the state-of-the-art Information and Communication Technologies (ICT), including advanced Picture Archiving and Communication Systems (PACS) management, Machine Learning and Image Analysis, enabling immediate response from health carers, allowing accurate follow-up strategies, and fostering technological innovation.

GeCo is a method and tool designed for the compression and analysis of genomic data. As a compression tool, GeCo provides additional compression gains over several top-performing specialized tools, across different levels of redundancy. As an analysis tool, GeCo can compute absolute measures, namely for many distance computations, and local measures, such as the information content of each element, providing a way to quantify and locate specific genomic events. GeCo supports both individual compression and referential compression (conditional or conditional exclusive). The tool's memory usage is adjustable, through hash-caches for the deepest context models, making it possible to run on modest computers.
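The local information-content measure can be illustrated with a single adaptive finite-context model (a toy Python sketch; GeCo mixes many such models of different orders and uses hash-caches for the deep ones):

```python
import math
from collections import defaultdict

ALPHABET = "ACGT"

def information_profile(seq, k, alpha=1.0):
    """Per-symbol information content (bits) under an adaptive
    order-k finite-context model with additive smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    bits = []
    for i, sym in enumerate(seq):
        ctx = seq[max(0, i - k):i]
        total = sum(counts[ctx].values())
        p = (counts[ctx][sym] + alpha) / (total + alpha * len(ALPHABET))
        bits.append(-math.log2(p))
        counts[ctx][sym] += 1   # update the model after coding the symbol
    return bits

profile = information_profile("ACGTACGTACGT", 2)
# repeated motifs cost fewer bits once the model has seen their context
```

Peaks in such a profile point at regions that are poorly predicted by the model, which is the basis for locating specific genomic events.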

Smash is a completely alignment-free method/tool to find and visualise genomic rearrangements. Detection is based on conditional exclusive compression, namely using a high-order FCM (finite-context, or Markov, model), typically of order 20. For visualisation, Smash outputs an SVG image with an ideogram-style layout, where the patterns are represented with several HSV values (only the value component varies). The method works at both small and large scale, although it is more directed at large scale, since the main aim of the research is to know where the large-scale information [chromosome by chromosome] of several primates is equal or different, giving at a glance a map of the entire genomes. The method therefore aims to help solve the evolutionary species Rubik’s cube. The following image, illustrating the information maps between human and chimpanzee for the several chromosomes, depicts such an example study:

Nevertheless, the method is not limited to primate information. The following image shows the information map between Meleagris gallopavo and Gallus gallus chromosome 1, using a threshold of 0.95.

MENT: Microarray comprEssiOn Tools

About

MENT is a set of tools for lossless compression of microarray images, although it can also be applied to other kinds of images, such as medical or RNAi images. The set is divided into two categories, defined by the decomposition approach used:

BOSC09HC (Bitplane decOmpoSition Compressor 2009 using Histogram Compaction) – Tool inspired by the method introduced in (Neves 2009), extended with a Histogram Compaction unit that removes some redundant bitplanes. Histogram Compaction is useful for images that have a reduced number of intensities.
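The Histogram Compaction idea can be sketched as follows (a simplified Python illustration of intensity remapping followed by bitplane splitting, not the actual BOSC09HC code):

```python
def histogram_compaction(pixels):
    """Map the intensities actually used in the image to consecutive
    codes, shrinking the number of bitplanes that need encoding."""
    levels = sorted(set(pixels))
    code = {v: i for i, v in enumerate(levels)}
    return [code[p] for p in pixels], levels

def bitplanes(pixels, nbits):
    """Split pixel codes into bitplanes, most significant plane first."""
    return [[(p >> b) & 1 for p in pixels]
            for b in reversed(range(nbits))]

# A 16-bit image that only uses 4 intensities needs just 2 bitplanes
img = [0, 1024, 1024, 65535, 512, 0]
codes, levels = histogram_compaction(img)
planes = bitplanes(codes, max(codes).bit_length())
```

The decoder only needs the `levels` table to recover the original intensities losslessly.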

BOSC09SBR (Bitplane decOmpoSition Compressor 2009 using Scalable Bitplane Reduction) – Tool inspired by the method introduced in (Neves 2009), extended with a Scalable Bitplane Reduction unit that removes some redundant bitplanes. The Scalable Bitplane Reduction technique was first introduced in (Yoo 1999).

BOSC09MixSBC (Bitplane decOmpoSition Compressor 2009 Mixture with Simple Bitplane Coding) – Tool based on a mixture of finite-context models. In this particular case, we considered only two different models: the one used by Neves and Pinho (Neves 2009), and another based on Simple Bitplane Coding, inspired by Kikuchi’s work (Kikuchi 2009, Kikuchi 2012).

BITTOC (Binary Tree decomposiTiOn Compressor) – Tool inspired by Chen’s work on the compression of color-quantized images (Chen 2002). The performance of this approach was studied in the context of medical images by Pinho and Neves (Pinho 2009), and it was more recently applied to microarray images (Matos 2014).

XS: a FASTQ read simulator

About

XS is a flexible FASTQ read simulation tool, portable (it does not need a reference sequence) and tunable in terms of sequence complexity. XS handles Ion Torrent, Roche-454, Illumina and ABI-SOLiD sequencing types. It has several running modes, depending on the time and memory available, and is aimed at testing computing infrastructures, namely cloud computing in large-scale projects, and at testing FASTQ compression algorithms. Moreover, XS offers the possibility of simulating the three main FASTQ components individually (headers, DNA sequences and quality-scores). Quality-scores can be simulated using uniform and Gaussian distributions.
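The quality-score simulation can be sketched as follows (a minimal Python illustration assuming Phred+33 encoding and a Q0–Q40 range; XS itself offers many more options):

```python
import random

# Phred+33 quality alphabet, Q0..Q40
PHRED = "".join(chr(33 + q) for q in range(41))

def simulate_qualities(n, mode="gaussian", mean=30, sd=5, seed=42):
    """Simulate a quality-score string of length n using a uniform
    or Gaussian distribution over the Phred range."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if mode == "uniform":
            q = rng.randint(0, 40)
        else:
            # clamp the Gaussian sample to the valid quality range
            q = min(40, max(0, round(rng.gauss(mean, sd))))
        out.append(PHRED[q])
    return "".join(out)

quals = simulate_qualities(50)
```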

SACO: a lossless compression tool for the sequence alignments found in MAF files

About

SACO was designed to handle the DNA bases and gap symbols that can be found in MAF files. Our method is based on a mixture of finite-context models. Contrary to a recent approach, it addresses both the DNA bases and gap symbols at once, better exploiting the existing correlations. For comparison with previous methods, our algorithm was tested on the multiz28way dataset. On average, it attained 0.94 bits per symbol, approximately 7% better than the previous best, for a similar computational complexity. We also tested the model on the more recent multiz46way dataset. In this dataset, which contains alignments of 46 different species, our compression model achieved an average of 0.72 bits per MSA block symbol.

MAFCO: a compression tool for MAF files

About

MAFCO is a lossless compression tool specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain from ≈ 34% to ≈ 57%, depending on the data set. When compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. MAFCO was designed and implemented at IEETA, a research unit of the University of Aveiro, and is available for non-commercial use.

ACE’14 Workshop on “Designing Systems for Health and Entertainment: what are we missing?”

2014/11/11

Systems that aggregate health and entertainment goals are proliferating, but little is known about how to design and evaluate these systems and how to manage the different (if not opposite) needs of these two main areas. This workshop will promote the discussion of issues surrounding these areas, enabling a better understanding of the hows and whys of designing systems for health and entertainment, as well as the identification of new avenues of research in the field.
Therefore, we invite designers, researchers and practitioners to participate in an exciting full-day workshop where they can share their personal views and research on the intersection of technology, health and entertainment.

The FCT Investigator Programme aims to create a talent base of scientific leaders, by providing 5-year funding for the most talented and promising researchers, across all scientific areas and nationalities.

For the 2013 call, Sérgio Matos, research assistant at IEETA, was awarded an FCT Investigator grant for the 2014-2018 period.

FALCON is an alignment-free unsupervised system to measure the similarity of multiple reads against a database, reporting the top matches. The system can be used, for example, to classify metagenomic samples. The core of the method is based on relative algorithmic entropy, a notion that uses model freezing and exclusive information from a reference, allowing the use of much lower computational resources. Moreover, it uses variable multi-threading, without multiplying the memory for each thread, and is thus able to run efficiently on anything from a powerful server to a common laptop. To measure similarity, the system builds multiple finite-context (Markovian) models that are frozen at the end of the reference sequence. The target reads are then measured using a mixture of the frozen models. The mixture estimates the probabilities assuming dependency on model performance, and thus adapts the usage of the models according to the nature of the target sequence. Furthermore, it uses fault-tolerant (substitution edits) Markovian models that bridge the gap between context sizes. Several running modes are available for different hardware and speed specifications. The system is able to automatically learn to measure similarity, a property characteristic of the Artificial Intelligence field.
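The freeze-then-score idea can be sketched as follows (a toy Python version with two fixed-order models and a simple performance-based mixture; the parameter values are illustrative and the real system is far more elaborate):

```python
import math
from collections import defaultdict

ALPHA = "ACGT"

def train_fcm(reference, k):
    """Count order-k context statistics over the reference, then freeze."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(reference)):
        counts[reference[i - k:i]][reference[i]] += 1
    return counts

def score_read(read, models, gamma=0.99):
    """Average bits per base needed to describe a read with a
    performance-weighted mixture of frozen models (one per order)."""
    weights = [1.0 / len(models)] * len(models)
    total_bits = 0.0
    for i, sym in enumerate(read):
        probs = []
        for k, counts in models:
            ctx = read[max(0, i - k):i]
            c = counts.get(ctx, {})
            tot = sum(c.values())
            probs.append((c.get(sym, 0) + 1) / (tot + len(ALPHA)))
        p_mix = sum(w * p for w, p in zip(weights, probs))
        total_bits += -math.log2(p_mix)
        # re-weight each model by how well it predicted this symbol
        weights = [(w ** gamma) * p for w, p in zip(weights, probs)]
        s = sum(weights)
        weights = [w / s for w in weights]
    return total_bits / len(read)

ref = "ACGT" * 50
models = [(2, train_fcm(ref, 2)), (4, train_fcm(ref, 4))]
similar = score_read("ACGTACGTACGT", models)
dissimilar = score_read("AATTCCGGATCG", models)
# reads resembling the reference need fewer bits per base
```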

Citation

A paper has been submitted; for now, citations should refer to the URL (bioinformatics.ua.pt/software/falcon).

DNAatGlance is a program for the detection of large-scale genomic regularities by visual inspection. Several discovery strategies are possible, including the standalone analysis of single sequences, the comparative analysis of sequences from individuals from the same species, and the comparative analysis of sequences from different organisms. The software was designed and implemented at IEETA, a research unit of the University of Aveiro, and is available for non-commercial use.

MFCompress: a compression tool for FASTA and multi-FASTA data

About

MFCompress is a compression tool for FASTA and multi-FASTA files. In comparison to gzip and applied to multi-FASTA files, MFCompress can provide additional average compression gains of almost 50%, i.e., it potentially doubles the available storage, although at the cost of some more computation time. On highly redundant data sets, and in comparison with gzip, 8-fold size reductions have been obtained. MFCompress was designed and implemented at IEETA, a research unit of the University of Aveiro, and is available for non-commercial use. For other uses, please send an email to ap@ua.pt.

What?

Egas is a web-based platform for biomedical text mining and collaborative curation. The web tool allows users to annotate texts with concept occurrences as well as with relations between concepts. Annotations can be performed manually or based on the results of automated concept identification and relation extraction tools. These automatic annotations may have been previously added to the documents, using one of the accepted input formats, or may be added during the annotation process, by calling a document annotation service. Users can inspect, correct or remove automatic text mining results, manually add new annotations, and export the results to standard formats.

How?

Text-processing and fetching modules, such as the concept and relation annotation services, were implemented in Java, and the web interface was developed using HTML5, CSS3, and JavaScript, in order to allow fast processing of large documents and support mobile devices. The resulting information is stored in a relational database. Finally, all database operations are performed using secured RESTful web-services, allowing easy integration with mobile devices, such as smartphones and tablets.

The project’s ambition is the creation of a new set of solutions based on novel ICT technologies, developing a concept that encompasses the synergistic usage of cloud computing with large database access and information retrieval, associated with advanced methods for reasoning and data mining (and with the basic scalable algorithms needed to support the dimensions of the targeted data sets).

Neurodegenerative disorders are a major health concern worldwide, Portugal being no exception. With this project, the University of Aveiro proposes to extend existing research in the field of neurodegenerative diseases through the creation of a consortium of 5 research units from the UA (CBC, QOPNA, I3N, IEETA, CICECO). The project’s main goal is to offer novel therapeutic strategies to tackle the complex array of existing neuropathologies. By building a multidisciplinary research team that combines experts in molecular neuropathologies, proteomics, metabolomics, bioinformatics, neuronal networks, organic synthesis and drug design from the UA, we will be able to attack the problem on many fronts. Upon successful completion of this project, new therapeutic approaches will have been developed that contribute to improving the quality of life of neurodegenerative patients, with a high societal impact considering the 10 million new patients reported every year.

The prize was awarded at BioLINK SIG 2013 for the work “Neji: a tool for heterogeneous biomedical concept identification”.

BioLINK SIG 2013: Roles for text mining in biomedical knowledge discovery and translational medicine
The Annual Meeting of the ISMB BioLINK Special Interest Group
In Association with ISMB/ECCB 2013, Berlin, Germany
July 20, 2013

A 6-hour iOS Development Seminar will be held by Rui Pedro Lopes, Professor at the Polytechnic Institute of Bragança, on 29 July 2013, at the Department of Electronics, Telecommunications and Informatics (DETI), Aveiro.

This seminar will cover the following main topics: Objective-C, Storyboards, Core Data, and Master-Detail user interfaces.

Exploring Human Genetic Variations

About

Variobox is a desktop tool for the annotation, analysis and comparison of human genes. Variant annotation data are obtained from WAVe, protein metadata annotations are gathered from PDB and UniProt, and sequence metadata is obtained from the Locus Reference Genomic (LRG) and RefSeq databases. Through an advanced sequence visualization interface, Variobox provides agile navigation through the various genetic regions. Researched genes are compared to the sequences retrieved from LRG and RefSeq, automatically finding and annotating potential new mutations. These features and data, ranging from patient sequences to HGVS-valid variant descriptions and pathogenicity evaluation, are combined in an intuitive interface to explore genes and mutations.

To run, first unpack all the files into any folder. Then, on Windows, double-click the Variobox file inside the folder. On Mac or Linux, start a terminal, change to the created folder, and run: java -jar variobox.jar

Tutorial

Step 1 The initial layout

This is the initial Variobox workspace that shows up when you open the application. At the bottom of the workspace you can find a tab, “Home”, created automatically. There will be as many tabs as searches performed, each one identified by the searched HGNC code. At the centre you can see the logo and a panel where searches for reference genes can be performed using a valid HGNC symbol. To work with Variobox, a reference gene is always the starting point. After obtaining the reference, a sequence can be loaded into the application to be aligned with the reference and analysed.

Step 2 Making a quick search

By default there are two genes below the search box: Collagen, type I, alpha 1 (COL1A1), and Myotubularin 1 (MTM1). Click on COL1A1, or type it in the search box and hit search. A progress bar will indicate the progress of the loading process. A new tab (named after the searched HGNC code), like the one below, will show up once the reference gene is automatically retrieved from the web servers:

The right zone is formed by two distinct panels:

The top one, titled Protein Viewer, is where the 3D protein conformation of the selected gene is shown, if available, using JMol.

The bottom one, titled Information Panel, displays additional information on selected items, such as mutations and exons.

At the top of the window there is a large genomic viewer with a movable and resizable window that allows specifying a region to be explored in the centre zone. This viewer distinguishes exons (blue) from introns (purple), and allows quick jumps through the gene. The centre zone is populated with gene data and information, in three distinct panels, described below:

Gene panel

In this panel you can see the codon sequence and the decoded polypeptide sequence, labelled Reference Sequence and Translated Sequence, respectively, and also the Known Mutations for the gene, as retrieved from WAVe. A zoomed genomic viewer is also displayed to further facilitate the exploration of the gene.

Mutations are identified by different colours, and shown next to the corresponding nucleotides. Additional information about a mutation can be obtained by clicking on it. The Information Panel (on the right side of the workspace) will display details regarding the selected mutation: position, source, type, annotation, etc.

Navigation panel

The navigation panel is a simple feature that allows easy exploration of the gene through mutations and exons. Clicking the next or previous buttons will centre the sequence on the appropriate item (a mutation or an exon):

The Navigation Panel also permits filtering which mutation types are shown in the Gene Panel. For instance, if you check only Substitutions, all mutations other than SNPs will be hidden.

Gene Details panel

This panel shows quick information about the gene you are analysing. The information currently supported is the following:

Number of mutations: displays the total number of mutations found in the reference gene. No information will be displayed if no mutations are known;

Number of exons: total number of exons found in the gene;

Sequence size: total size of the reference sequence;

Date of creation: the date and time when this gene was created;

Loaded files: the files that were selected by the user to be aligned with the reference sequence.

Step 3 Loading mutated sequences

To load a gene sequence and align it with the reference gene, click the menu Genes→Load gene file. Alternatively, go to the menu File→Load gene file. You will be prompted with a new window to select the file you want to load. The current version supports the following file types:

DNA Sequence Chromatogram File: .scf ; .abi extension

DNA Electropherogram File: .ab1 extension

FASTA files: .fasta ; .fa extension

After selecting the file (or files, if you choose the forward-reverse format), click Load selected file and Variobox will read them. Once the file is correctly loaded, an alignment with the reference gene is performed automatically. This alignment will also display the mutations found relative to the reference gene. The analysis of the loaded sequence is described in the next step.

Step 4 Analysing mutated sequences and saving results

After the files are loaded, the Gene Panel will be updated with the mutated sequence as well as the calculated mutations, as depicted in the following figure:

The loaded sequence will also be coloured according to its chromatogram confidence (if there is one), ranging from green (high confidence) to red (no confidence), making it easy to assess the validity of the calculated mutations. Also note that mutations are automatically annotated using the standard notation, and the annotation is displayed when clicking on a mutation. To save the sequences, mutations, alignment and other information, the gene should be assigned to a patient. To do so, go to the menu Genes→Save to patient and select a patient from the list presented.

Step 5 Final Features

If you want to register a new patient in Variobox, follow these steps: go to Patients → New patient and fill in the Patient Details panel (shown below) with the required information (note that only one field is mandatory). After that, just click Save patient and a new record will be created.

To load a saved project, go to Patients → Open patient and select the patient you previously saved. This will create a new tab with all the patient information: the patient’s personal details as well as that patient’s genes. Those genes can be opened by selecting them and clicking Open selected.
This action will open as many tabs as genes you have selected, re-creating all the gene panels you previously had in the workspace.

Closing tabs is as simple as going to Patients → Close patient or Genes → Close current gene project, depending on the type of tab you have open.

Biomedical Concept Annotation Tool, API and Widget

About

becas is a web application, API and widget for biomedical concept identification. It helps researchers, healthcare professionals and developers in the identification of over 1,200,000 biomedical concepts in text and PubMed abstracts.

becas provides annotations for isolated, nested and intersected entities. It identifies concepts from multiple semantic groups, providing preferred names and enriching them with references to public knowledge resources. You can choose the types of entities you want to identify and highlight or mute specific entities in real-time.

To facilitate annotation of PubMed abstracts, becas automatically fetches publications from NCBI servers and renders them with identified concepts highlighted.

Using becas

You can access the becas web annotation tool here and learn to use it in its help page. Explore the Web API in the API docs and discover how easy it is to integrate the becas widget in the widget docs.

Bioinformatics plays a key role in advances in molecular biology, not only by enabling new methods of research, but also by managing the huge amounts of relevant information and making it available worldwide.

State-of-the-art practice in bioinformatics includes the use of public databases to publish scientific breakthroughs. These databases provide valuable knowledge for medical practice. But, given their specificity and heterogeneity, we cannot expect medical practitioners to include their use in routine investigations. To obtain a real benefit from them, the clinician needs integrated views over the vast amount of knowledge sources, enabling seamless querying and navigation.

Goals

The main goals behind the conception of DiseaseCard:

Provide the user with an integrated view of the information available on the internet for a specific disease, from the phenotype to the genotype.

Use rare diseases as the main target due to the high association between phenotype and genotype.

Do not replicate information that already exists in public or private databases; instead, the system is based on an information model that allows accessing and sharing these data;

Be supported by a navigation protocol that guides users in the process of retrieving information from the Internet.

DiseaseCard

Results

Diseasecard can provide answers to several questions that are relevant to the diagnosis, treatment and management of genetic diseases, such as:

NCCD is a method and software package designed to compute the NCCD (Normalized Conditional Compression Distance) and, for instance, to perform phylogenomics (whole genome) on 48 bird species. It uses a state-of-the-art genomic compressor, based on a mixture of finite-context models, as the metric distance.
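The idea behind compression-based distances can be illustrated with any general-purpose compressor standing in for the finite-context-model mixture; the Python sketch below uses zlib purely for illustration, with C(x|y) approximated as the extra bytes needed for x after y:

```python
import random
import zlib

def C(s):
    """Compressed size, in bytes (zlib stands in for the
    finite-context-model compressor used by the real tool)."""
    return len(zlib.compress(s.encode(), 9))

def conditional(x, y):
    """Approximate C(x|y): extra bytes needed for x once y is known."""
    return max(C(y + x) - C(y), 1)

def nccd(x, y):
    """Normalized conditional compression distance between sequences."""
    return max(conditional(x, y), conditional(y, x)) / max(C(x), C(y))

rng = random.Random(7)
a = "".join(rng.choice("ACGT") for _ in range(2000))
mutated = list(a)
for i in rng.sample(range(2000), 20):   # ~1% substitutions
    mutated[i] = rng.choice("ACGT")
b = "".join(mutated)
c = "".join(rng.choice("ACGT") for _ in range(2000))

d_close = nccd(a, b)
d_far = nccd(a, c)
# a sequence and its lightly mutated copy are much closer
# than two unrelated sequences
```

A matrix of such pairwise distances is what feeds the phylogenomic tree construction.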

The EU-ADR Web Platform helps experts in the study of adverse drug reactions (ADRs) through the use of computational services and scientific workflows, provided by several European partners. The system assists in the earlier detection of adverse drug reactions, improving drug safety and contributing to public health benefit. You can access the EU-ADR Web Platform here.

EU-ADR Project

The overall objective of this project was the design, development and validation of a computerized system that exploits data from electronic healthcare records and biomedical databases for the early detection of adverse drug reactions. Visit the project page.

The multiobjective formulation of the pairwise sequence alignment problem is introduced, where a vector score function takes into account the substitution score and the indels (gaps) separately. Two solution methods are introduced: a multiobjective dynamic programming algorithm that extends classical algorithms for this problem, and an epsilon-constraint algorithm that solves a series of constrained sequence alignment problems. A state pruning technique based on the concept of bound sets is also presented. Finally, the application to phylogenetic tree construction is discussed.
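The bi-objective dynamic programming idea can be sketched as follows (a toy Python illustration assuming unit match/mismatch scores and counting each indel as one gap; the actual algorithms, with bound-set pruning, are considerably more refined):

```python
def pareto(vectors):
    """Keep only non-dominated (score, gaps) pairs: a higher score
    and fewer gaps are both better."""
    front = []
    for v in vectors:
        dominated = any(u[0] >= v[0] and u[1] <= v[1] and u != v
                        for u in vectors)
        if not dominated and v not in front:
            front.append(v)
    return front

def pairwise_pareto(x, y, match=1, mismatch=-1):
    """Bi-objective Needleman-Wunsch: each cell keeps the Pareto
    front of (substitution score, number of indels) vectors."""
    m, n = len(x), len(y)
    D = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    D[0][0] = [(0, 0)]
    for i in range(m + 1):
        for j in range(n + 1):
            cand = list(D[i][j])
            if i and j:                      # align x[i-1] with y[j-1]
                s = match if x[i - 1] == y[j - 1] else mismatch
                cand += [(a + s, g) for a, g in D[i - 1][j - 1]]
            if i:                            # gap in y
                cand += [(a, g + 1) for a, g in D[i - 1][j]]
            if j:                            # gap in x
                cand += [(a, g + 1) for a, g in D[i][j - 1]]
            D[i][j] = pareto(cand)
    return D[m][n]

front = pairwise_pareto("ACGT", "AGT")
# only one non-dominated alignment: 3 matches with a single indel
```

For sequences with no shared letters, such as "AC" versus "GT", the final cell holds a genuine trade-off front: more gaps buy a better substitution score.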

In recent years, the development and use of Electronic Healthcare Records (EHRs) throughout Europe has grown exponentially resulting in large volumes of clinical data. At the same time, large collections of disease‐specific data are recorded – in local, regional and/or national settings. Researchers also follow specific cohorts over time, and focus on specific types of data such as imaging or genetic data. Other researchers are building biobanks that aim to combine clinical data with genetic data. As a result, individual patients can contribute to multiple, often separate, data sources.

Despite examples of excellent practice, rare disease (RD) research is still mainly fragmented by data and disease types. Individual efforts have little interoperability and almost no systematic connection between detailed clinical and genetic information, biomaterial availability or research/trial datasets. By developing robust mechanisms and standards for linking and exploiting these data, RD-Connect will develop a critical mass for harmonisation and provide a strong impetus for a global “trial-ready” infrastructure ready to support the IRDiRC goals for diagnostics and therapies for RD patients.

What?

Neji is an innovative framework for biomedical concept recognition. It is open source and built around four key characteristics: modularity, scalability, speed, and usability. It integrates modules of various state-of-the-art methods for biomedical natural language processing (e.g., sentence splitting, tokenization, lemmatization, part-of-speech tagging, chunking and dependency parsing) and concept recognition (e.g., dictionaries and machine learning). The most popular input and output formats, such as PubMed XML, IeXML, CoNLL and A1, are also supported. Additionally, the recognized concepts are stored in an innovative concept tree, supporting nested and intersected concepts with multiple identifiers. Such a structure provides enriched concept information and gives users the power to decide the best behavior for their specific goals, using the included methods for handling and processing the tree.
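A minimal sketch of what such a concept tree could look like (class names and identifiers here are illustrative, not Neji's actual API):

```python
# Illustrative sketch (not Neji's actual API): a concept tree that stores
# nested and intersected annotations, each carrying multiple identifiers.

class Concept:
    def __init__(self, start, end, ids):
        self.start, self.end, self.ids = start, end, list(ids)
        self.children = []

    def contains(self, other):
        return self.start <= other.start and other.end <= self.end

    def add(self, other):
        for child in self.children:
            if child.contains(other):  # fully nested: descend into the child
                child.add(other)
                return
        self.children.append(other)    # sibling at this level (may intersect)

# Root spans the whole sentence; a protein mention nests a gene mention.
# The identifiers below are hypothetical example values.
root = Concept(0, 100, ["ROOT"])
root.add(Concept(10, 28, ["PR:000004803"]))  # "BRCA1 gene product"
root.add(Concept(10, 15, ["HGNC:1100"]))     # nested "BRCA1" gene mention
```

The nested gene mention ends up as a child of the protein mention, which is the enriched structure the paragraph above describes.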

Why?

Concept recognition is an essential task in biomedical information extraction, presenting several complex and unsolved challenges. The development of such solutions is typically performed in an ad-hoc manner or using general information extraction frameworks, which are not optimized for the biomedical domain and normally require the integration of complex external libraries and/or the development of custom tools. Thus, Neji fills the gap between general frameworks (e.g., UIMA and GATE) and more specialized tools (e.g., NER and normalization), streamlining and facilitating complex biomedical concept recognition.

How?

On top of the built-in functionalities, developers and researchers can implement new processing modules or pipelines, or use the provided command-line interface tool to build their own solutions, applying the most appropriate techniques to identify names of various biomedical entities. Neji was designed with different development configurations and environments in mind: a) as the core framework to support all developed tasks; b) as an API to integrate into your favorite development framework; and c) as a concept recognizer, storing the results in an external resource and then using your favorite framework for subsequent tasks.

Dr. Kim Sneppen from the Niels Bohr Institute, Copenhagen-DK, will give the inaugural lecture of our Systems Biology seminar series, entitled Simplified Models of Biological Networks, on the 28th of September.

Redesign mRNA sequences to optimise the secondary structure

About

The mRNA optimiser is a tool that redesigns a gene messenger RNA to optimise its secondary structure, without affecting the polypeptide sequence. The tool can either maximize or minimize the molecule minimum free energy (MFE), thus resulting in decreased or increased secondary structure strength.

The optimisation is achieved by using a heuristic that looks for synonymous gene sequences and selects the ones with the best secondary structure. Evaluations of the secondary structure are made using a correlated stem-loop prediction algorithm that examines the nucleotide sequence for simple stem-loops. This algorithm is fine-tuned so that its results are highly correlated with the MFE evaluations of RNAfold.
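The synonymous-redesign idea can be illustrated with a toy sketch that swaps each codon for a synonymous one. Here GC content stands in for the real stem-loop/MFE evaluation, and the codon table is only a small excerpt of the standard genetic code:

```python
# Toy sketch of synonymous redesign: replace each codon by a synonymous
# codon, scored here by GC content as a stand-in for a real secondary-
# structure (MFE) evaluation. Table is a small excerpt of the standard code.

SYNONYMS = {
    "GCU": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # Ala
    "AAU": "N", "AAC": "N",                           # Asn
    "UUA": "L", "UUG": "L", "CUU": "L", "CUC": "L",   # Leu (partial)
}

def gc(codon):
    """Number of G or C bases in a codon (toy structure-strength proxy)."""
    return sum(base in "GC" for base in codon)

def redesign(mrna, score=gc):
    """Replace each codon by the synonymous codon maximizing `score`."""
    out = []
    for i in range(0, len(mrna), 3):
        aa = SYNONYMS[mrna[i:i + 3]]
        choices = [c for c, a in SYNONYMS.items() if a == aa]
        out.append(max(choices, key=score))
    return "".join(out)

def translate(mrna):
    return "".join(SYNONYMS[mrna[i:i + 3]] for i in range(0, len(mrna), 3))

optimized = redesign("GCUAAUUUA")
assert translate(optimized) == translate("GCUAAUUUA")  # polypeptide unchanged
```

The assertion captures the key constraint of the method: the redesigned mRNA must encode exactly the same polypeptide sequence.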

Our results indicate that an average increase of over 40% in MFE can be obtained with this method. Also, since optimisation tends to reduce the GC percentage of nucleotide sequences, the developed tool includes an option to maintain the GC content of the wild-type gene.

The mRNA optimiser is a command line tool (a graphical interface will be available soon). To use it you need to open a terminal window, change to the directory where mRNAOptimiser is, and run it:

1. Open a terminal window

In Windows, go to the Start menu, click Run, type cmd, and click OK.

On Mac, type terminal in Spotlight and hit Enter.

2. Change the directory

In Windows, Mac and Linux, type cd in the terminal followed by the directory where you placed the tool.

3. Run the mRNA optimiser

In Windows, type mRNAOptimizer.exe and hit Enter. Usage indications will show up in the terminal.

On Mac and Linux, type java -jar mRNAOptimizer.jar and hit Enter. Usage indications will show up in the terminal.

You may choose to supply your mRNA sequence by typing it into the terminal or by referencing an input file with the -f input_sequence option. The tool only changes the coding region of the mRNA, therefore you must indicate where the start codon begins (-b index, the index of the first nucleotide of the start codon) and where the stop codon ends (-e index, the index of the last nucleotide of the stop codon). By default, the coding region is the entire sequence.

To redirect the output results to a file, use the -o output_file option. To choose whether the tool should maximize or minimize the MFE, use the -d type option (default is maximize). You may limit the algorithm in both time and number of iterations by using the options -t max_time and -i max_iterations. Also, the tool will use the standard genetic code by default, but you can select other genetic coding tables using the -c coding_table option.

To keep the original mRNA percentage of guanine and cytosine (GC content) unaltered after optimisation, use the -g option. The -q option enables a quiet mode, in which nothing is output except the resulting sequence.

What is OralCard?

OralCard is an online bioinformatics tool that compiles results from manually curated articles reflecting the oral molecular ecosystem (OralPhysiOme), merging the experimental information available from the oral proteome of both human (OralOme) and microbial (MicroOralOme) origin. OralCard is a key resource for understanding the molecular foundations implicated in the biology and disease mechanisms of the oral cavity.

How does it work?

OralCard integrates information about more than 3500 proteins. Searches can be performed in three distinct views: (1) by protein name or respective UniProt code, (2) by disease name, OMIM code or MeSH term, and (3) by organism.

Helena Deus, “Linked Data and Semantic Web Technologies for improving discovery in the Life Sciences”

We live in a world of data. This is also true for the Life Sciences, where the introduction of omics technologies such as genome sequencing has led to the industrialization of data production beyond a craft-based cottage industry and into a deluge of biological information. Nevertheless, the apparently simple task of collecting and keeping pace with the latest information about a gene of interest is still thwarted by the need for biological researchers to become experts at database-surfing and literature mining.

Linked Data is a set of principles devised for creating a Web of Data where a new generation of Web applications can discover and link relevant pieces of information based on their properties rather than their location in a database. Linked Data is also at the root of a movement towards building a knowledge continuum in the Life Sciences and, by doing so, has the potential to be a foundation for a platform that will support 21st century Biology.

In this talk, I will present some of the scenarios where Linked Data has been successfully applied in accelerating scientific discovery and translation of Life Sciences knowledge into Health Care and what challenges are still to be addressed.

GReEn: a tool for efficient compression of genome resequencing data.

About

Research in the genomic sciences is confronted with the volume of sequencing and resequencing data increasing at a higher pace than that of data storage and communication resources, shifting a significant part of research budgets from the sequencing component of a project to the computational one. Hence, being able to efficiently store sequencing and resequencing data is a problem of paramount importance. We describe GReEn (Genome Resequencing Encoding), a tool for compressing genome resequencing data using a reference genome sequence. It overcomes some drawbacks of the recently proposed tool GRS, namely the ability to compress sequences that GRS cannot handle, faster running times, and compression gains of over 100-fold for some sequences.
GReEn is available for non-commercial use. For other uses, please send an email to ap@ua.pt.

Join us

We have several ideas to make Gimli the most complete and efficient tool for biomedical information extraction. You are welcome to join us and contribute to the development of new and improved features. Please contact us:

Team

Problem

The recognition of named entities is a crucial initial task of biomedical text mining. A number of NER solutions have been proposed in recent years, taking advantage of different resources and/or techniques. Currently, the best results are achieved by combining the output of different systems. However, little effort has been put into such harmonisation solutions, which remain corpus-specific and/or not knowledge-based.

Features

Conceptual

Knowledge-based harmonisation

Correct, remove and create annotations

Support several biomedical domains and organisms

On-demand harmonisation

Support both NER and normalisation systems

Technical

Automated scripts for simple usage

Java library for advanced users

Input and Output in IeXML format

Method

Totum is an innovative harmonisation solution based on Conditional Random Fields trained on several manually curated corpora. Thus, we avoid the single-corpus dependency, supporting several biomedical domains and organisms. In the end, Totum harmonises gene/protein annotations provided by several heterogeneous NER solutions, following the gold standard requirements.

Results

Considering a corpus that contains the test parts of the four corpora, the experiments show that Totum improves the F-measure of state-of-the-art tagging solutions by up to 10% in exact alignment and 22% in nested alignment. Finally, Totum achieves an F-measure of 70% (exact matching) and 82% (nested matching) against the same corpus.

Integration

Cloud-based

Deploy your knowledgebase in the cloud, using any available host.
Your content – available anytime, anywhere. And with full create, read, update, and delete support.

Semantics

Use Semantic Web & LinkedData technologies in all application layers.
Enable reasoning and inference over connected knowledge.
Access data through Linked Data interfaces and deliver a custom SPARQL endpoint.

Rapid Dev Time

Reduce development time. Get new applications up and running much faster using the latest rapid application development strategies.
COEUS is the back-end framework; the client side is language-agnostic: PHP, Ruby, JavaScript, C#… COEUS' API works everywhere.

Interoperability

Use COEUS' advanced API to connect multiple nodes together and with any other software.
Create your own knowledge network using SPARQL Federation, enabling data sharing amongst a scalable number of peers.

The Human Variome relates to genomic mutations and their effects on particular phenotypes. This critical life sciences research field has grown greatly in recent years, mostly due to the appearance of projects such as the Human Variome Project or the European GEN2PHEN Project. Nonetheless, locus-specific mutation databases and the variants they include are far from being standardized and widely used in the research community workflow. With WAVe, we offer centralized and transparent access to these databases, combined with the integration of found variants in a single system enriched with the most relevant gene-related information in a user-friendly web-based workspace: http://bioinformatics.ua.pt/WAVe

Features

WAVe provides a comprehensive set of features that will improve biologists' workflow when researching in the genomic variation field.

Search

Searching for genes only requires users to start typing the HGNC-approved gene symbol in any of the available search boxes. This triggers the automatic suggestion system, which offers various solutions based on the user's input. Following one of the suggestions leads directly to the gene view interface. When no suggestion is accepted and there is more than one match, WAVe displays the gene browse interface, containing only the results matching the provided query.

Browse

Querying for * lists all genes as well as the available LSDBs and variants for each gene. In this gene browse scenario, searches for a particular gene can be performed, in real time, by typing in the table search box. By clicking on one of the genes, users are sent to the gene view interface.

View

The gene view interface is the main WAVe workspace. The layout is divided into two main areas: the sidebar and the content zone. The sidebar displays minimal gene information on top – gene HGNC symbol, name and locus – and the navigation tree, WAVe's key user interface element, at the bottom. The navigation tree is organized in nodes, each referring to a distinct data type: each node leaf links directly to a page containing information regarding a specific topic. Pages linked in each leaf appear in the content zone. This enables loading external applications without leaving WAVe's interface and, thus, without losing focus on ongoing research.

API

Programmatic access to data is also available. The gene tree is available as an easily-parsable feed. Feeds are obtained by appending the atom tag (or other format: rss, json) to the end of the gene view address. For instance, BRCA2 Atom feed is available at http://bioinformatics.ua.pt/WAVe/gene/BRCA2/atom .
WAVe also provides an RSS API for variant access. With this, you have programmatic access to all available variants for a given gene. For instance, BRCA2 variants (from multiple LSDBs) are at http://bioinformatics.ua.pt/wave/variant/BRCA2/atom. In addition to the variant description, WAVe points to the original LSDB containing the variant.
This makes WAVe the only platform capable of providing aggregated variant listings through both visual and programmatic access.
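The feed addresses above follow a simple pattern, sketched here (the base URL and the BRCA2 examples are the ones documented above; path capitalisation may vary between gene and variant feeds):

```python
# Building WAVe feed addresses: append the format tag ("atom", "rss" or
# "json") to the gene-view or variant address, as described above.

BASE = "http://bioinformatics.ua.pt/WAVe"

def gene_feed(symbol, fmt="atom"):
    """Feed with the navigation tree for a gene, in the chosen format."""
    return f"{BASE}/gene/{symbol}/{fmt}"

def variant_feed(symbol, fmt="atom"):
    """Feed with the aggregated variants for a gene, in the chosen format."""
    return f"{BASE}/variant/{symbol}/{fmt}"

print(gene_feed("BRCA2"))  # http://bioinformatics.ua.pt/WAVe/gene/BRCA2/atom
```

Any feed reader or HTTP client can then consume these URLs directly.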

Feedback

We highly appreciate any feedback you can provide regarding WAVe and the genomic variation field. To do this, you can simply send an e-mail to pedrolopes@ua.pt. Thank you.

Ensembl is a world reference for vertebrate genome annotation, providing high quality annotation for more than 50 species. Particularly challenging is the annotation of non-coding functional regions of the genome. Ensembl Regulation aims at making Ensembl a reference for the annotation of genomic features with a potential role in the transcriptional regulation of gene expression. Combining publicly available data from large projects like ENCODE and The Epigenomics Roadmap, we group overlapping areas of open chromatin and transcription factor binding to build a “best-guess” set of regulatory features, in a cell-aware manner. Finally, we also include histone-modification and polymerase data to generate cell-specific classifications for the regulatory regions. Taking advantage of the role of the EBI as part of the ENCODE data analysis group, we aim at bringing Ensembl to the forefront of the annotation of the regulatory genome.

The overall objective of this project is the design, development and validation of a computerized system that exploits data from electronic healthcare records and biomedical databases for the early detection of adverse drug reactions.

Genomic Name Server

The integration of heterogeneous data sources has been a fundamental problem in database research over the last two decades. The goal is to achieve better methods to combine data residing at different sources, under different schemas and with different formats, in order to provide the user with a unified view of the data. Although simple in principle, due to several constraints this is a very challenging task, on which both the academic and the commercial communities have been working, proposing several solutions that span a wide range of fields. However, the limitations found in most solutions reflect the difficulty of obtaining a simple but comprehensive schema able to accommodate the heterogeneity of the biological domain while maintaining an acceptable level of performance: GeNS is our proposal towards solving this issue.

Installing and using GeNS

The Genomic Name Server can be either downloaded and installed on a local computer or accessed by Web Services. Please keep in mind that GeNS currently requires over 10 GB of disk space and this figure is likely to increase in the near future. Therefore, if disk space is a serious restriction you should consider using the available Web Services. We are currently using Microsoft SQL Server 2008, but GeNS can be set up in any other DBMS.

a) Setting up a local instance of GeNS

Download either the full backup of the database (here) or a dump of all the tables (available here): Last update: 24/11/09

Once inside your DBMS, simply restore the full backup of the database (this is for MS SQL Server 2008 only; a step-by-step walkthrough can be found here) or import the data from the tables to the database.

Congratulations! GeNS is now ready to be used.

b) Using the Web Services

The Web Services are now available here. Furthermore, a detailed description is also available here (updated March 24). The Web Services API is in an early stage of development and, as such, users should bear in mind that certain problems may arise during its usage.

Advantages

Easy to understand and use

Flexible and scalable

Efficient

Accessible by several methods

Mitigates the cross-database low identifier coverage issue

Architecture

GeNS uses four distinct methods for gathering data from external databases: Web Services, web crawlers, database connectors and, finally, tabular file connectors. All of the recovered data is subsequently processed and synchronized with our database. Finally, the data can be accessed via Web Services or by downloading, installing and querying the data with SQL.

Currently, GeNS imports data from four major databases: UniProt (SwissProt and TrEMBL), KEGG, EMBL-EBI and Entrez. Since these databases already incorporate data from third-party databases, we have over 460,000 unique genes, more than 100,000 biological relations and 140 distinct data types.

Architecture

Database

The GeNS database was designed with simplicity and extensibility in mind; the following schema is a complete representation of the database.

Database

Concepts:

Organism: An individual form of life capable of growing, metabolizing nutrients, and usually reproducing. Organisms can be unicellular or multicellular. The Organism table stores taxonomic information; each entry corresponds to an organism with any given number of associated proteins. This table is the root of the hierarchical model. For each organism, we store its scientific and short names.

Protein: Any of a group of complex organic macromolecules that contain carbon, hydrogen, oxygen, nitrogen, and usually sulfur and are composed of one or more chains of amino acids. The Protein table is where the proteins’ internal identifiers and gene locus are stored; each entry in this table has a referring organism (in which this protein is found) and may have any number of associated biological entities and/or equivalent external databases’ protein identifiers in the ProteinIdentifier and BioEntity tables.

ProteinIdentifier: The table in which the mapping between the external databases’ protein identifier and BioPortal’s internal identifier is made.

BioEntity: A table that stores all the biological entities associated with a given protein; this includes, among other things, pathways and gene ontologies.

DataType: A table listing all the possible external databases from which the biological data may come; each entry in the ProteinIdentifier and BioEntity tables references this table, so that we may easily determine the nature (and source) of the data.
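The hierarchical model described above can be sketched as a minimal sqlite3 schema (column names are illustrative; the actual GeNS schema runs on SQL Server and differs in detail):

```python
# Minimal sqlite3 sketch of the GeNS-style schema described above.
# Column names are illustrative assumptions, not the real GeNS DDL.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Organism (                -- root of the hierarchical model
    id INTEGER PRIMARY KEY,
    scientific_name TEXT NOT NULL,
    short_name TEXT
);
CREATE TABLE Protein (                 -- internal ids and gene locus
    id INTEGER PRIMARY KEY,
    gene_locus TEXT,
    organism_id INTEGER NOT NULL REFERENCES Organism(id)
);
CREATE TABLE DataType (                -- source database of each record
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL                 -- e.g. UniProt, KEGG, Entrez
);
CREATE TABLE ProteinIdentifier (       -- external id -> internal id mapping
    protein_id INTEGER NOT NULL REFERENCES Protein(id),
    datatype_id INTEGER NOT NULL REFERENCES DataType(id),
    external_id TEXT NOT NULL
);
CREATE TABLE BioEntity (               -- pathways, gene ontologies, ...
    id INTEGER PRIMARY KEY,
    protein_id INTEGER NOT NULL REFERENCES Protein(id),
    datatype_id INTEGER NOT NULL REFERENCES DataType(id),
    value TEXT NOT NULL
);
""")
```

Each Protein row hangs off an Organism, and both identifier mappings and biological entities point back to a DataType, which is how the source of every record remains traceable.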

Reproducing the results

The following files allow anyone to reproduce the obtained results regarding the cross-database low identifier coverage issue and the performance testing queries. You will need a working copy of GeNS in order to use these scripts.

GeneBrowser is a web-based tool that, for a given list of genes, combines data from several public databases with visualisation and analysis methods to help identify the most relevant and common biological characteristics. The functionalities provided include the following: a central point with the most relevant biological information for each inserted gene; a list of the most related papers in PubMed and gene expression studies in ArrayExpress; and an extended approach to functional analysis applied to Gene Ontology, homologies, gene chromosomal localisation and pathways.

GeneBrowser

Although GeneBrowser can be used to answer many different biological questions, a particular question set was used to tune its development:

What public databases provide relevant information about my dataset and how can I navigate through them?

What biological processes are enriched with respect to my input list of genes?

What are the most relevant metabolic pathways that contain my genes?

What are the genomic regions of these genes?

Which are the most relevant homologue classes in my list of genes?

What gene expression experiments have been previously conducted with the same genes?

What are the most relevant publications associated with my study?

Feedback

We highly appreciate any feedback you can provide regarding GeneBrowser. You can simply send an e-mail to jpa@ua.pt. Thank you.

Dicoogle is an information retrieval system for medical images. It starts by indexing DICOM files and metadata, both locally and in distributed systems using a P2P communication framework. Upon this distributed index users can then search for exams or specific features using a free text interface.

What is QuExT?

QuExT (Query Expansion Tool) is a document indexing and retrieval application that obtains, from the MEDLINE database, a ranked list of publications that are most significant to a particular set of genes. Document retrieval and ranking are based on a concept-based methodology that broadens the resulting set of documents to include documents focusing on these gene-related concepts. Each gene in the input list is expanded to its various synonyms and to a network of biologically associated terms. Currently, the expansion is based on proteins, metabolic pathways and diseases (the last one only when the selected organism is Homo sapiens). The retrieved documents are ranked according to user-definable weights for each of these concept classes. By simply changing these weights, users can alter the order of the documents, allowing them to obtain, for example, documents that are more focused on the metabolic pathways in which the initial genes are involved, rather than on the genes themselves.

How does it work?

QuExT receives as input a list of genes and a corresponding organism. The gene list can be typed into the input box or uploaded in a text file. Genes can be separated by commas or spaces. The organism to consider is selected from the drop-box menu. Figure 1 shows the query expansion procedure.

When the user submits the form, gene names or identifiers in the input are checked against a database and mapped to an internal identifier corresponding to the selected organism. Genes that are not found in the database are excluded from further analysis.

QuExT then creates an expanded query and searches a local index of the PubMed database for documents matching this query.

Query expansion is performed as follows: for each gene in the query, the algorithm obtains, from a term expansion table corresponding to the selected organism, all the alternative gene, protein, pathway and disease names corresponding to that gene’s internal ID. The full list of terms from all input genes is then accumulated in four separate query strings (one for each concept type). Each term obtained from expanding all genes is used to search the index.

QuExT runs four index searches using the four query strings obtained in the query expansion stage (one for each concept type). For each search, the documents that match the query and the corresponding scores are obtained. Resulting documents and corresponding scores are kept on separate lists, one for each concept class.

Notice that while the term expansion takes into account the selected organism, to avoid going from a gene in one organism to a related term in a different organism, this is not true for document retrieval. Since the indexing does not distinguish between the different species referred to in the articles, a search for a gene name in H. sapiens may return results referring to the same gene in mice, for example.

Finally, the results from the document retrieval stage are assembled and documents are re-ranked according to the defined weights for each concept. The final score for document i is obtained as a weighted sum of the four concept-based scores:

S_i = Σ_j W_j × s_ij

where W_j is the weight attributed to concept type j and s_ij represents the score for document i in terms of the j-th concept type.
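The re-ranking step can be sketched as follows (the weights and per-document scores below are made-up values; the four concept classes follow the description above):

```python
# Sketch of QuExT-style re-ranking: the final score of each document is
# the weighted sum of its four concept-class scores. Weights and scores
# here are made-up example values.

WEIGHTS = {"gene": 0.4, "protein": 0.3, "pathway": 0.2, "disease": 0.1}

def final_scores(scores_by_concept, weights=WEIGHTS):
    """scores_by_concept: {concept: {doc_id: score}} -> {doc_id: weighted sum}."""
    totals = {}
    for concept, doc_scores in scores_by_concept.items():
        for doc, s in doc_scores.items():
            totals[doc] = totals.get(doc, 0.0) + weights[concept] * s
    return totals

# One score list per concept class, as produced by the four index searches.
scores = {
    "gene":    {"pmid1": 1.0, "pmid2": 0.2},
    "protein": {"pmid1": 0.5},
    "pathway": {"pmid2": 1.0},
    "disease": {},
}
totals = final_scores(scores)
ranked = sorted(totals, key=totals.get, reverse=True)  # most relevant first
```

Raising the "pathway" weight and lowering the "gene" weight would promote pmid2, which is exactly the user-driven reordering described above.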

NeoScreen is a bioinformatics application that supports diagnostic tasks in newborn screening programs. The application imports MS/MS raw data, and organizes and maintains all the information over time in a database, providing a set of patterns that allow the detection of abnormalities in blood samples. It has been used since 2005 to support the Portuguese Newborn Screening Program (http://www.diagnosticoprecoce.org/).

NeoScreen – Newborn screening analysis

The introduction of Tandem Mass Spectrometry (MS/MS) in neonatal screening laboratories has opened the way to innovative newborn screening analysis. With this technology, the number of metabolic disorders that can be detected from dried blood-spot specimens increases significantly. However, the amount of information obtained with this technique and the pressure for quick and accurate diagnostics raise serious difficulties in daily data analysis. To face this challenge we developed a software system, NeoScreen, which simplifies and speeds up newborn screening diagnostics.

Software

In this view, individuals are separated into several diagnostic categories, such as “very suspicious”, “suspicious”, “not suspicious”, etc. Some of these categories represent individuals with markers outside the established limits that are not associated with any known disease. The right-side frame displays the relevant information extracted and processed by the software for each individual, such as plate information, marker concentrations, and suspected diseases.

MIND is a repository of microarray experiments that handles storage, management and analysis of microarray data. It is supported by an infrastructure prepared to integrate dynamically further functionalities (Quality Control assurance, data processing, data mining, visualization, reports, etc.).

Microarray INformation Database

The development of microarray technology has been phenomenal during the past years, and it is becoming a daily tool in many genomics research laboratories. However, the multi-step and data-intensive nature of this technology has created an unprecedented computational challenge. In fact, the full power of microarray technology can only be achieved if researchers are able to efficiently store, analyse and share their results.

MIND Workflow

LIMS capabilities

A LIMS (Laboratory Information Management System) is a database repository for managing all laboratory data.

MIND LIMS

Main advantages of MIND:

Easy and fast access to all laboratory data

Traceability of the entire experiment, allowing error detection

Easier data sharing among users

Public web-based interface

MIAME and MAGE compliance

Data Analysis capabilities

MIND Data Analysis

Quality control

Enables the user to detect systematic errors in the production of microarrays. It also offers pre-processing steps such as background subtraction, data normalization and data filtering.

Exploratory data analysis

Allows the user to specify the experiment design based on defined objectives and to extract the biological meaning from the results.

Software integration

Allows the dynamic introduction of processing algorithms and R scripts.

ANACONDA is a software package specially developed for the study of gene primary structure. It uses gene sequences downloaded from public databases, in formats such as FASTA and GenBank, and applies a set of statistical and visualization methods in different ways to reveal information about codon context, codon usage, nucleotide repeats within open reading frames (ORFeome) and others.

Codon context analysis

Genome sequencing is opening unprecedented ways for understanding how gene primary structure is organized. Two of the most studied open reading frame characteristics are codon usage and codon context. Traditional methods used for codon usage and context analysis do not provide user-friendly tools to carry out detailed gene primary structure analysis at a genomic scale.

Codon usage tables, using absolute metrics, are available in public databases for any sequenced gene or genome, and freeware software for multivariate analysis (correspondence analysis) of codon and amino acid usage is also readily available; however, sophisticated statistical and data visualization tools are clearly lacking.

We propose the usage of several statistical methods – contingency table analysis, residual analysis, multivariate analysis (cluster analysis) – to analyze the codon bias under various aspects (degree of association, contexts and clustering).
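The residual analysis can be illustrated on a made-up codon-pair contingency table: Pearson residuals, (observed - expected) / sqrt(expected), flag codon contexts that are over- or under-represented relative to independence.

```python
# Sketch of contingency-table residual analysis for codon context:
# Pearson residuals flag over-/under-represented codon pairs.
# The counts below are made-up example values.
import math

def pearson_residuals(table):
    """Return the matrix of Pearson residuals for a counts table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    return [
        [(table[i][j] - rows[i] * cols[j] / total)
         / math.sqrt(rows[i] * cols[j] / total)
         for j in range(len(cols))]
        for i in range(len(rows))
    ]

# Counts of codon pairs: rows = first codon, cols = codon that follows it.
table = [[30, 10],
         [10, 30]]
res = pearson_residuals(table)  # positive: over-represented codon context
```

For this table every expected count is 20, so the diagonal contexts come out strongly positive (over-represented) and the off-diagonal ones negative, which is the kind of association pattern the contingency-table analysis exposes.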

Cluster analysis

A cluster analysis tool also allows calculating similarities between two vectors of the contingency table. This technique is used to group rows and columns (codons) of the correlation matrix, highlighting global patterns in the genes.

The statistical tools incorporated in the system, for data clustering, residual analysis and histogram plotting of calculated indexes, allow reaching new conclusions on gene primary structure features at a genomic scale. We expect that the results obtained will permit identifying some general rules that govern codon context and codon usage in any genome. Additionally, the identification of genes containing expanded codons that arise as a consequence of erroneous DNA replication events will permit uncovering new genes associated with human disease.

Visualization

In order to detect the impact of codon context bias (as well as the presence of rare codons) on coding sequences, ANACONDA has additional tools for sequence mapping. The sequence layout includes written information about the ORF and the sequence itself, in which the codons are coloured with the same residual colour scale as the ORFeome map.

ANACONDA allows the user to work with more than one ORFeome at a time. This creates large data sets that are difficult to deal with, in particular when multiple comparisons are being performed.

Considering the vast number of ORFeomes that can be analyzed simultaneously by ANACONDA, we have included extra tools to allow comparative studies.

Anaconda


A PACS solution for echocardiography laboratories that provides a cost-efficient digital archive, and enables the acquisition, storage, transmission and visualization of DICOM cardiovascular ultrasound sequences.

Scenario

The digitization of medical imaging and the implementation of PACS (Picture Archiving and Communication Systems) increase practitioners’ satisfaction through faster and ubiquitous access to image data. Besides, they reduce the logistic costs associated with the storage and management of image data and also increase intra- and inter-institutional data portability. Echocardiography is a rather demanding medical imaging modality when regarded as a digital source of visual information. The data rate and volume associated with a typical study pose several problems. Studies are hard to keep “online” (in centralized servers) and difficult to access (in real time) outside the institutional broadband network infrastructure. For example, an uncompressed echocardiography study can typically vary between 100 and 500 MB.

Product Presentation

The innovation of our approach is the implementation of a DICOM private transfer syntax designed to support any video encoder installed on the operating system. This structure provides great flexibility in selecting the encoder that best suits the specifics of a particular imaging modality or working scenario. For ultrasound studies we use the highly efficient MPEG-4 codec, which takes full advantage of object texture and shape coding and of inter-frame redundancy. More than 40,000 studies have been performed so far. For example, a typical colour Doppler run (RGB) with an optimized acquisition time (15-30 frames) and sampling matrix (480×512) rarely exceeds 200-300 kB. Typical compression ratios range from 65 for a single cardiac cycle sequence to 100 for multi-cycle sequences. With these average figures, even a heavily loaded echolab can keep all historic procedures online or distribute them over the network with reduced transfer times, a critical issue when dealing with costly or low-bandwidth connections. The solution is currently installed in one public central hospital (CHVNG) and one private cardiac imaging laboratory. Because the front-end is fully Web-based, clinical specialists use the platform to provide decision support remotely, accessing it over the Internet in a secure way (i.e., over SSL). Moreover, the solution is changing working methods: the workflow is fully digital, and reviewing and reporting procedures can be done at the physician’s home (i.e., telework).
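The figures quoted above can be sanity-checked with simple arithmetic. The sketch below assumes a colour Doppler run of 30 RGB frames at 480×512 and a compressed size of roughly 250 kB (the mid-point of the quoted range):

```python
# Back-of-the-envelope check of the compression ratios quoted above.
# Assumed values: 30 RGB frames of 480x512 pixels, ~250 kB compressed.
frames = 30
height, width, bytes_per_pixel = 480, 512, 3  # RGB, 8 bits per channel

uncompressed = frames * height * width * bytes_per_pixel  # bytes
compressed = 250 * 1024  # ~250 kB

ratio = uncompressed / compressed
print(f"uncompressed: {uncompressed / 1024 / 1024:.1f} MB")
print(f"compression ratio: {ratio:.0f}:1")
```

The resulting ratio of roughly 86:1 falls inside the 65-100 range reported above.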

MS-PDC

Burning Module – exports the study to CD/DVD in the DICOM default transfer syntax, including a standalone viewer.

Communications Module – sends a study to an external server.

Himage Modules

Image Quality

Two studies were carried out to assess DICOM cardiovascular ultrasound image quality. In a simultaneous, blind display of the original against the compressed cine-loops, the compressed sequence was selected as the best image in 37% of the trials. This suggests that factors related to viewing conditions are more likely to influence observer performance than image compression itself.

DNA Microarray technology is one of the most promising new technologies for global gene expression analysis. This technology is sophisticated, very expensive, highly interdisciplinary and produces vast amounts of data whose management and analysis pose significant challenges. This project aims to study new bi-clustering approaches that can help to obtain relevant information from gene expression microarrays.

The very few quantitative mRNA mistranslation studies carried out to date indicate that the average decoding error ranges from 10^-4 to 10^-5 errors per codon decoded. However, no systematic study has yet been carried out to rank mRNA sequences according to decoding error, and no methodology has yet been developed to identify genes that are prone to decoding error.

In this project, software tools for data visualization, mathematical methodologies for identifying general rules governing mRNA translation, and tools for mapping mRNA regions of high decoding error and for identifying putative gene expression regulatory sequences present in mRNAs will be developed.

There is great potential for synergy between medical informatics and bioinformatics with a view to the continuity and individualisation of healthcare, so that the benefits of the human genome sequence can reach the population. A collaborative effort between these two disciplines is needed to bridge the current gap between them. Biomedical Informatics (BMI) is an emerging discipline that aims at bringing these two worlds together to foster the development of novel diagnostic and therapeutic methodologies and strategies.

The INFOBIOMED network aims at setting up a durable structure for the described collaborative approach at a European level, mobilising the critical mass and resources necessary to enable the collaborative approach that supports the consolidation of BMI as a crucial scientific discipline for future healthcare.
(http://www.infobiomed.org/)

One goal currently challenging bio- and clinical informatics is to develop robust computational methods and tools to model, store, retrieve and analyse information at multiple levels of complexity, i.e., from molecule to organism. For example, the unification of heterogeneous databases under one virtual system is an important step towards developing such robust computational models. The latter is the objective of the INFOGENMED project, which aims at building a virtual laboratory for accessing and integrating genetic and medical information for health applications. Once built, the system will allow practitioners, biologists, chemists and other experts to navigate through local and remote biomedical databases.

INFOGENMED started in September 2002 (http://infogenmed.web.ua.pt/), and the functionalities already built into the system allow for: (1) defining clinical pathways to guide the user in the navigation of multiple sources over the Internet; (2) identifying and characterizing the most relevant databases to support molecular medicine practice for selected rare genetic diseases; (3) designing the integration methods, based on virtual databases, mediators and semantic vocabulary servers.

Talk by Florentino Fernández Riverola, Department of Informatics, University of Vigo
Current research lines and projects of the “Next Generation Information Systems” group of the University of Vigo, in Orense

The Xth Spanish Symposium on Bioinformatics (JBI2010) took place on October 27-29, 2010, in Torremolinos-Málaga, Spain. It was co-organised by the National Institute of Bioinformatics (Spain) and the Portuguese Bioinformatics Network, and hosted by the University of Málaga.

E. Coelho, J. P. Arrais, and J. L. Oliveira
“Uncovering Microbial Duality within Human Microbiomes: A Novel Algorithm for the Analysis of Host-Pathogen Interactions”
In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC2015), Milan, Italy, August 2015

L. Ribeiro, R. Rodrigues, C. Costa and J. L. Oliveira
“Enabling Outsourcing XDS for Imaging on the Public Cloud”
In Proceedings of the 14th World Congress on Medical and Health Informatics (MEDINFO 2013), Copenhagen, Denmark, August 2013

D. Polónia, C. Costa, and J. L. Oliveira
“Architecture evaluation for the implementation of a Regional Integrated Electronic Health Record”
In The XIX International Congress of the European Federation for Medical Informatics (MIE 2005), Geneva, Switzerland, 2005.

Biologists have wondered for many years how organisms evolved highly accurate information maintenance, transfer and decoding machineries. In particular, how is the astonishing translational decoding rate of 20 codons per second achieved with an average error of 10^-4 to 10^-5 per codon decoded, and how does the ribosome maintain the reading frame? The tools to answer these questions are not yet available, but the raw DNA sequencing data are. To shed new light on these important questions, we have developed a software package that simulates ribosome scanning and reading during mRNA translation. The software screens fully or partially sequenced genomes and determines the arrangement of any particular codon in relation to the others by simultaneously fixing P-site codons and “memorizing” E- and A-site codons during each translocation cycle. In doing so, it builds a genome-wide codon context map that allows the identification of potentially error-prone mRNA sequences and gene expression regulatory points.
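The scanning procedure can be pictured as a three-codon window (E, P and A sites) sliding over each ORF while accumulating P-site/A-site co-occurrences. The sketch below is a minimal illustration of that idea, not the actual package, and the example ORF is made up:

```python
from collections import defaultdict

def codon_context_map(orfs):
    """Count, for every P-site codon, which codons occur at the A site
    during a simulated 5'->3' ribosome translocation."""
    context = defaultdict(lambda: defaultdict(int))
    for orf in orfs:
        # split the ORF into codons, dropping any trailing partial codon
        codons = [orf[i:i + 3] for i in range(0, len(orf) - len(orf) % 3, 3)]
        # slide the E/P/A window one codon at a time
        for p in range(1, len(codons) - 1):
            p_site, a_site = codons[p], codons[p + 1]
            context[p_site][a_site] += 1
    return context

# toy example (hypothetical ORF)
cmap = codon_context_map(["ATGGCTGCTTAA"])
print(dict(cmap["GCT"]))  # codons seen at the A site after a GCT P site
```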

In this project, the various tools already developed will be integrated into a single software package to allow for the automated search, downloading and editing of raw DNA sequence data. Software tools for data display and new mathematical methodologies for the identification of general rules governing mRNA translation will be developed. New tools for mapping mRNA regions of high decoding error and putative gene expression regulatory sequences present in mRNAs will also be developed. Finally, a database and an Internet home page will be built to make the data available to the scientific community. These in silico studies will be complemented with in vivo experiments. For this, a multidisciplinary team including two computer engineers, two mathematicians, one physicist, one biochemist and one molecular biologist has been assembled. To our knowledge, this is the first Portuguese multidisciplinary team set up for functional genomics and the only one actively engaged in the development of software tools and mathematical models for genome analysis. It is expected that this project will provide important new insight into the role of the translational machinery in genome evolution.

Functional Proteomics in Candida albicans: Developing an Integrated Database for the Management of Proteomics projects

Funding entity: POCTI-32942/99
Period: 2001-2004

Candida albicans is an important human pathogen that exists as a commensal in at least 50% of the human population. It accounts for more than 60% of all fungal infections and is now the fourth most common form of septicaemia in Western hospitals, with an associated morbidity between 30 and 50%. It is also a major cause for concern in HIV-infected populations, where 84% of the patients develop oropharyngeal C. albicans colonisation and 55% develop clinical thrush. C. albicans pathogenesis depends upon a wide range of virulence factors, namely a myriad of morphogenesis-associated factors, which represents a major challenge to the elucidation of C. albicans pathogenesis at the molecular level through classic molecular and biochemical methodologies. The diploid nature of C. albicans, its alternative genetic code and its recalcitrance to genetic analysis add extra difficulties to its study and to the development of new antifungals. However, the advent of new genetic and molecular technologies that allow genome-wide analysis promises to alter the present situation.

This project aims at integrating classical genetics and biochemical approaches with newly developed proteomics and bioinformatics methodologies to uncover new virulence factors associated with morphogenesis.

Software tools are being developed for managing the biological data extracted from protein 2D maps, for helping to plan and follow up experimental protocols, and for data storage. Additionally, mathematical algorithms are being developed for creating theoretical protein 2D maps for comparative proteomics studies.

The objective of this project is to develop a query expansion and document ranking method specifically aimed at obtaining, from the MEDLINE database, a ranked list of the publications that are most significant for a set of genes.
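One simple way to realise such a ranking, shown here purely as an illustration and not as the project's actual algorithm, is to score each document by a tf-idf-weighted overlap with the gene terms:

```python
import math
from collections import Counter

def rank_documents(docs, gene_terms):
    """Rank documents (token lists) by tf-idf weighted overlap
    with a set of gene-related query terms."""
    n = len(docs)
    # document frequency of each term
    df = Counter(t for doc in docs for t in set(doc))
    scores = []
    for i, doc in enumerate(docs):
        tf = Counter(doc)
        # sum tf * idf over the query terms present in the document
        score = sum(tf[t] * math.log(n / df[t]) for t in gene_terms if t in tf)
        scores.append((score, i))
    return [i for _, i in sorted(scores, reverse=True)]

# toy corpus of tokenized abstracts (hypothetical)
docs = [["brca1", "repair", "dna"], ["p53", "apoptosis"], ["brca1", "brca2"]]
print(rank_documents(docs, {"brca1", "brca2"}))  # most relevant first
```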

The overall goal is to instantiate a new network connectivity concept for medical imaging data and services at the inter-institutional level. This will turn large volumes of clinical information and analytical tools, currently “locked” inside clinical units, into shared repositories and high-quality collaborative environments for medical applications, education and research.

The GEN2PHEN project has the overall ambition of unifying human and model organism genetic variation databases, and doing this in such a way that the resulting holistic view of G2P data can be blended with all other biomedical database domains via one or more central genome browsers.

The University of Aveiro Bioinformatics & Computational Biology group is proud to launch its new online portal to the public. Along with this main portal redesign, new websites were created for Dicoogle and Neoscreen.

About
The complete protein-protein interaction (PPI) network of even the most studied organisms is yet to be fully established. This is mostly due to the lack of reliability and accuracy of the high-throughput experimental methods used for PPI identification. PPIs can be conveniently represented as networks, allowing the use of graph theory in their study. Different network-based methods have been used to identify false-positive interactions and missing links in biological networks. Network topology studies may reveal patterns associated with specific organisms or types of PPIs. Thus, in this paper, we propose a new methodology, the Organization Measurement (OM) method, to denoise PPI networks and predict missing links based solely on network topology.
The OM methodology was applied to the denoising of the Saccharomyces cerevisiae (yeast) and Homo sapiens (human) networks. To evaluate the methodology, two strategies were used: the first compared its application on random networks and on the gold-standard networks, while the second perturbed the networks through the gradual random addition and removal of edges. This validation showed that the proposed methodology achieves AUCs of 0.95 and 0.87 in the yeast and human networks, respectively. The random removal of 80% of the yeast gold-standard interactions resulted in an AUC of 0.71, whereas the random addition of 80% new interactions resulted in an AUC of 0.75. In the human network, the random removal of 80% of the interactions resulted in an AUC of 0.62, while the random addition of 80% new interactions resulted in an AUC of 0.72.
The implemented tests show that the OM methodology is sensitive to the topological structure of the biological networks and can be used for network denoising. The obtained results suggest that the present approach can efficiently denoise PPI networks and that it can be applied to different organisms, as long as they have inherent patterns in the structures of their network models. In addition, although the performance of the method correlates with the initial quality of the network, improvements were consistently obtained.
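The perturbation-and-AUC protocol described above can be reproduced with any topology-based scorer. The sketch below uses plain common-neighbour counts as a stand-in for the OM score (the actual OM measure is not reproduced here) and estimates the AUC as the probability that a true edge outscores a random non-edge:

```python
import itertools
import random

def common_neighbours(adj, u, v):
    return len(adj[u] & adj[v])

def auc_link_prediction(edges, nodes, seed=0, trials=200):
    """AUC of a common-neighbour scorer: probability that a true edge
    scores higher than a random non-edge (ties count as 0.5)."""
    rng = random.Random(seed)
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    non_edges = [(u, v) for u, v in itertools.combinations(nodes, 2)
                 if v not in adj[u]]
    wins = 0.0
    for _ in range(trials):
        e, ne = rng.choice(edges), rng.choice(non_edges)
        s_e = common_neighbours(adj, *e)
        s_ne = common_neighbours(adj, *ne)
        wins += 1.0 if s_e > s_ne else 0.5 if s_e == s_ne else 0.0
    return wins / trials

# toy network: two triangles joined by a bridge (hypothetical data)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(auc_link_prediction(edges, range(6)))
```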

HighFCM is a compression algorithm that relies on a pre-analysis of the data before compression, with the aim of identifying regions of low complexity. This strategy enables the use of deeper context models, supported by hash tables, without requiring huge amounts of memory. For example, context depths as large as 32 are attainable for four-symbol alphabets, as is the case of genomic sequences. These deeper context models show very high compression capability on highly repetitive genomic sequences, yielding improvements over previous algorithms. Furthermore, the method is universal, in the sense that it can be used on any type of textual data (such as quality scores). HighFCM was designed and implemented at IEETA, a research unit of the University of Aveiro, and is available for non-commercial use.
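A finite-context model of the kind used here can be sketched with a plain dictionary keyed by the k-symbol context. This is a minimal illustration with a shallow order, not the HighFCM implementation, which relies on optimised hash tables and much deeper contexts:

```python
from collections import defaultdict

class FiniteContextModel:
    """Order-k finite-context model over a 4-symbol DNA alphabet,
    backed by a hash table (dict) of context -> symbol counts."""
    def __init__(self, k):
        self.k = k
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        # count each symbol given its k preceding symbols
        for i in range(self.k, len(seq)):
            self.counts[seq[i - self.k:i]][seq[i]] += 1

    def prob(self, context, symbol, alpha=1):
        # Laplace-smoothed conditional probability P(symbol | context)
        c = self.counts[context]
        total = sum(c.values()) + alpha * 4
        return (c[symbol] + alpha) / total

fcm = FiniteContextModel(k=2)
fcm.train("ACGTACGTACGT")
print(fcm.prob("AC", "G"))  # "AC" is always followed by "G" here
```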

About
From birth, humans are subject to the colonization and invasion attempts of numerous microorganisms. Although, under normal conditions, contact with microbes can support the shaping and development of our immune system, specific situations, such as stress or an unhealthy diet, can render us vulnerable to opportunistic pathogens.
Since the oral cavity is particularly exposed to the environment, it is an anatomic region prone to microbial invasion. Additionally, one of the requirements for bacterial colonization and cellular invasion is the establishment of protein-protein interactions (PPIs) with the host. With this in mind, we aim to develop a computational method for prediction of the oral human-microbial interactome.
Revealing the human-microbial interactome will allow a deeper understanding of the mechanisms behind the onset of oral diseases. Additionally, this knowledge may give insight into key proteins involved in oral infections, which can be used either for diagnosis, as molecular biomarkers, or for treatment, as drug targets.

About
The emergence of multi-resistant bacterial strains and the existing void in the discovery and development of new classes of antibiotics are a growing concern, as some bacterial strains are now resistant to last-line antibiotics and considered untreatable. A growing trend in drug screening over the past decade is drug repositioning, which consists in turning one of the undesired effects of an already commercialized drug into its main effect. While this was formerly performed experimentally, computational methods speed up drug and drug-target screening and reduce its associated costs.
Thus, we present a computational pipeline that enables the discovery of putative leads for drug repositioning and that can be applied to any microbial proteome. Putative drug targets are inferred by calculating network metrics over the interactome of the bacterial organism. Prediction of drug-target interactions (DTIs) is performed using a random forest trained on high-quality publicly available data. The classifier achieved an area under the ROC curve of 0.91 for the classification of out-of-sample data. A drug-target network was created by combining 3,081 unique ligands with the ten best-ranked drug targets. This network was used to predict new DTIs and to calculate the probability of the positive class, allowing the scoring of the predicted instances.

The worldwide surge of multi-resistant microbial strains has propelled the search for alternative treatment options. A key aspect of this task is understanding the mechanisms by which specific pathogens colonize, survive and replicate within the host, which can be achieved through the study of protein-protein interactions. Despite the advances in laboratory techniques, protein sequence-based computational models allow the screening of protein interactions between entire proteomes in a fast and inexpensive manner. These models are especially valuable given the recent advances in sequencing metagenomic organisms, where only the protein sequence is available.
Here, we present an improved supervised machine-learning model for the prediction of protein interactions based on the protein sequence. We propose the use of the discrete cosine transform as an efficient way of representing protein sequences, together with categories extracted from the physicochemical properties of amino acids.
For the classification task we use a mesh of hyper-specialised classifiers dedicated to the most relevant pairs of Gene Ontology molecular function annotations.
Based on an exhaustive evaluation that includes datasets with different configurations, cross-validation and out-of-sample validation, the obtained results outscore the state of the art for sequence-based methods. For the final mesh model, using an SVM with an RBF kernel, a consistent average AUC of 0.84 was attained.
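The DCT representation can be illustrated with the standard library alone. The hydrophobicity values below are illustrative stand-ins for the physicochemical categories actually used; the first DCT-II coefficients of the resulting signal form a fixed-length feature vector:

```python
import math

# illustrative hydrophobicity-style values for a few amino acids
# (stand-ins, not the project's actual physicochemical categories)
HYDRO = {"A": 1.8, "L": 3.8, "K": -3.9, "S": -0.8, "G": -0.4, "V": 4.2}

def dct2(signal, n_coeffs):
    """First n_coeffs coefficients of the (unnormalized) DCT-II of signal."""
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n_coeffs)]

def protein_features(seq, n_coeffs=4):
    # map the sequence to a numeric signal, then keep the low-frequency
    # DCT coefficients as a fixed-length descriptor
    signal = [HYDRO.get(aa, 0.0) for aa in seq]
    return dct2(signal, n_coeffs)

print(protein_features("ALKSGV"))  # 4 coefficients for a toy sequence
```

Truncating to the leading coefficients gives every protein, whatever its length, the same feature dimensionality, which is what makes the representation convenient for a classifier.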

- dct_d2_rbf: script used for studying d2 with the dct rbf method
- dct_random_d3: script used for the DCT method with dataset 3
- dct_rbf_parameters: script used for studying rbf parameters for dct
- dct_rbf_parameters: script used for studying rbf execution time

- original_code: original script
- guo_d3: script used for studying guo with dataset 3

- shen_d2: script used for studying shen with dataset 2
- shen_d3: script used for studying shen with dataset 3

- shen time: script used for studying shen execution time
- guo time: script used for studying guo execution time