URL

Importing into Google Calendar

In the left-hand column of the Google Calendar main view, click the arrow to the right of "Other calendars" and click "Add by URL". In the form that appears, paste in the URL from the box above, and click the button to confirm.

Please note, it may take a while for newly-created events in TeSS to synchronize with your Google Calendar.
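Behind the scenes, a calendar subscription URL like this one serves a standard iCalendar (ICS) feed, which any calendar client or script can consume. As a rough illustration, here is a minimal stdlib-only Python sketch that pulls event titles out of such a feed; the feed text below is a made-up example, not real TeSS output, and a real script would fetch the URL with urllib first:

```python
# Minimal sketch of extracting event summaries from an iCalendar feed,
# like the one behind the subscription URL above. The feed text is a
# made-up example; a real script would download the URL with urllib.

def event_summaries(ics_text):
    """Collect the SUMMARY line of every VEVENT block."""
    summaries = []
    in_event = False
    for line in ics_text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            in_event = True
        elif line == "END:VEVENT":
            in_event = False
        elif in_event and line.startswith("SUMMARY:"):
            summaries.append(line[len("SUMMARY:"):])
    return summaries

feed = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Metagenomics: Data Analysis and Interpretation
END:VEVENT
END:VCALENDAR"""

titles = event_summaries(feed)
```

A full parser would also handle line folding and escaping as defined by the iCalendar specification (RFC 5545); this sketch only covers the simple single-line case.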

Metagenomics: Data Analysis and Interpretation

16 - 19 September 2019

Norwich, United Kingdom

http://www.earlham.ac.uk/metagenomics-data-analysis-and-interpretation
https://tess.elixir-europe.org/events/metagenomics-data-analysis-and-interpretation

This course will provide an overview of the main aspects involved in metagenomics data analysis, along with discussion of the interpretation and actual examples of the impact and applications of metagenomics-derived research. A substantial part of the course will be devoted to hands-on experience with bioinformatics resources and tools relevant to metagenomics data analysis.
Participants will start with an overview of NGS technologies and a look at experimental approaches and emerging technologies, including a tour of Earlham Institute's Genomic Pipelines laboratories. The remainder of the course will be spent in front of the computers, learning how to produce metagenomic assemblies and taking participants from data to publication-ready figures.

Start: 2019-09-16 09:00:00 UTC
End: 2019-09-19 17:00:00 UTC
Organizer: Earlham Institute
Venue: Earlham Institute (EI), Colney Lane, Norwich, Norfolk, United Kingdom, NR4 7UZ
Contact: training@earlham.ac.uk
Event type: workshops_and_courses

Combining Workflows, Tools and Data Management - GCB 2019

16 September 2019

Heidelberg, Germany

https://www.denbi.de/training/595-combining-workflows-tools-and-data-management-gcb-2019
https://tess.elixir-europe.org/events/combining-workflows-tools-and-data-management-gcb-2019

Educators:
Björn Grüning (RBC), Wolfgang Müller (de.NBI-SysBio)
Date:
16.09.2019
Location:
Marsilius-Arkaden
Turm West, Room K13
Im Neuenheimer Feld 6.130.3
69120 Heidelberg
Germany
Contents:
There is a strong push towards FAIR data. But what is *FAIR*? Many of us know that FAIR stands for Findable, Accessible, Interoperable, Reusable. However, the questions "How do I achieve FAIR?" and "How FAIR is FAIR enough?" are still open to debate.
A completely different question is: how do I approach making my data FAIR? Making data FAIR can be tedious, manual work.
In this workshop we will demonstrate another approach: using the workflow system Galaxy, together with Jupyter Notebooks, to extract, enrich, process, and finally upload data into the FAIRDOMHub. The workshop is built around the example use case of building an age estimator for humans from RNA data.
Along the way, we will point to the software and services we provide and the kind of advice we can offer.
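The age-estimator use case boils down to fitting a model of age against molecular features. A toy sketch of that idea in plain Python, with made-up numbers (the actual workshop uses Galaxy, Jupyter, and real RNA data, not this code):

```python
# Toy sketch of the workshop's use case: fitting an "age estimator"
# to molecular measurements. The numbers below are made up; the real
# workshop builds this with Galaxy and Jupyter on actual RNA data.

def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical expression level of one age-associated transcript per donor
expression = [1.0, 2.0, 3.0, 4.0, 5.0]
age        = [22.0, 31.0, 39.0, 52.0, 60.0]

slope, intercept = fit_line(expression, age)
predicted = slope * 3.5 + intercept  # predicted age for a new donor
```

A real estimator would of course use many features and proper validation; the point here is only the shape of the task.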
Keywords:
FAIR data, Galaxy, Jupyter Notebooks, FAIRDOMHub
Tools:
Galaxy, Jupyter Notebooks, FAIRDOMHub
Prerequisites:
None

Start: 2019-09-16 09:00:00 UTC
End: 2019-09-16 17:00:00 UTC
Organizer: de.NBI
Venue: Heidelberg, Germany
Event type: meetings_and_conferences

Getting started with the de.NBI Cloud - GCB 2019

16 September 2019

Heidelberg, Germany

https://www.denbi.de/training/565-introduction-into-the-de-nbi-cloud-gcb-2019
https://tess.elixir-europe.org/events/getting-started-with-the-de-nbi-cloud-gcb-2019

Educators:
Alexander Sczyrba, Peter Belmann, Sebastian Jünemann, Jan Krüger, Alex Walender (BiGi)
Location:
Heidelberg GCB
Date:
16th September
Content:
The need for high-throughput data analysis has grown tremendously since the introduction of next-generation sequencing (NGS) platforms. The massive amount of data produced creates a new class of resource barriers to overcome, including limited bandwidth, storage volume and compute power. Small research labs can hardly cope with the data generated. A solution to this resource problem is cloud computing, which offers virtually unlimited and flexible resources.
The de.NBI Cloud is a fully academic cloud federation, providing compute and storage resources free of charge for academic users. It provides a powerful IT infrastructure, combined with flexible bioinformatics workflows and analysis tools, to the life science community in Germany. The de.NBI Cloud offers reliable IT security concepts and user access rules to ensure secure data access and storage. It closes the gap of missing computational resources for life science researchers in Germany.
The de.NBI Cloud project started in 2016 as a collaboration between the universities of Bielefeld, Freiburg, Gießen, Heidelberg and Tübingen. Close cooperation with the ELIXIR cloud ensures connectivity and sustainability in the international context.
The de.NBI Cloud operates the major service levels:
• Infrastructure as a Service (IaaS)
suited for experienced power users that want full control over the compute environment; plain access to virtualized infrastructure
• Platform as a Service (PaaS)
suited for experienced users who utilize fully configured infrastructure for the deployment of custom workflows
• Software as a Service (SaaS)
suited for users without cloud experience, who can use pre-configured virtual machines (VMs) with state-of-the-art analysis tools and pipelines
Cloud computing requires initial effort and skills to port existing workflows to these new models. The same holds true for emerging programming models. Cloud environments can be difficult for scientists with little system administration or programming experience to use. Challenges also exist in managing cloud environments, as there is a lack of tools that simplify accessing and using these environments and that help bootstrap users by providing basic software stacks.
Keywords:
OpenStack, Cloud Computing, virtual machines (VMs)
Tools:
OpenStack, BiBiGrid
Prerequisites:
The participants should bring their own laptop computers. The goal of the tutorial is to provide a fundamental introduction to the underlying OpenStack infrastructure. The target audience are bioinformaticians and experienced computational data analysts who would like to utilize scalable and flexible cloud resources for their research. Participants will learn how to set up a cloud project, work with virtual instances, and efficiently utilize cloud computing resources. We will also address networking and security issues, demonstrate how to deploy bioinformatics tools in the cloud, and show how to set up a customized compute cluster in a cloud environment using BiBiGrid. All topics will be covered by short talks and practical hands-on sessions.

Start: 2019-09-16 09:00:00 UTC
End: 2019-09-16 17:00:00 UTC
Organizer: de.NBI
Venue: Heidelberg, Germany
Event type: meetings_and_conferences

Proteomics and metabolomics with OpenMS and pyOpenMS - GCB2019

16 September 2019

Heidelberg, Germany

https://www.denbi.de/training/666-proteomics-and-metabolomics-with-openms-and-pyopenms
https://tess.elixir-europe.org/events/proteomics-and-metabolomics-with-openms-and-pyopenms

Educators:
Julianus Pfeuffer, Timo Sachsenberg
Date:
16.09.2019
Location:
GCB 2019.
Heidelberg
Contents:
Computational mass spectrometry provides important tools and bioinformatic solutions for the analysis of proteomics data. Different methods for label-free quantification have been developed in recent years and were successfully applied in a wide range of studies. Targeted approaches for label-free quantification, like SWATH-MS, achieve deep proteome coverage over a large number of samples while non-targeted methods have shown great potential in unbiased discovery studies. This de.NBI training event introduces key concepts of both targeted SWATH-MS and non-targeted label-free analysis using workflow-based processing of real-life datasets. We will introduce several open-source software tools for proteomics, primarily focusing on OpenMS (http://www.OpenMS.org). In a hands-on session, we will demonstrate how to combine these tools into complex data analysis workflows including visualization of the results. Participants will have the opportunity to bring their own data and design custom analysis workflows together with instructors. For participants interested in developing their own algorithms and methods within the OpenMS framework, we provide a brief introduction to pyOpenMS – the python interface to the OpenMS development library.
Training material and handouts will be prepared both for users who want to design proteomics workflows and for algorithm and tool developers.
Software Requirements:
The participants should bring their own laptop computers. Installer versions of required software will be made available.
Keywords:
LC-MS based proteomics, OpenMS, workflows, KNIME, data analysis
Tools:
OpenMS/pyOpenMS, KNIME

Start: 2019-09-16 09:00:00 UTC
End: 2019-09-16 17:00:00 UTC
Organizer: de.NBI
Venue: Heidelberg, Germany
Event type: meetings_and_conferences

MOFA Workshop - GCB 2019

16 September 2019

Heidelberg, Germany

https://www.denbi.de/training/684-mofa-workshop-gcb-2019
https://tess.elixir-europe.org/events/mofa-workshop-gcb-2019

Educators:
Oliver Stegle (HD-HuB)
Date:
16.09.2019
Location:
Heidelberg, Germany
Contents:
This tutorial provides an introduction to Multi-Omics Factor Analysis (MOFA), a novel unsupervised framework for the integration of multi-omics data sets (Argelaguet et al, Molecular Systems Biology, 2018). Intuitively, MOFA can be viewed as a versatile and statistically rigorous generalization of principal component analysis to multi-omics data. Given multiple omics data types on overlapping sets of samples, MOFA infers a low-dimensional data representation in terms of (hidden) factors. These learnt factors represent the driving sources of variation across data modalities, thus facilitating the identification of molecular phenotypes and disease subgroups.
In the first part of the tutorial I will give a 30-minute presentation explaining the model, its applications and its limitations. The second part will consist of a hands-on activity in which we will use two real-world data sets to show how MOFA can be used for integrative analysis. The first data set is a large study of blood cancer patients (Dietrich et al, J Clin Invest, 2018), and the second is a single-cell multi-omics data set (Angermueller et al, Nature Methods, 2016). Attendees are also encouraged to bring their own multi-omics data sets.
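MOFA itself is distributed as R and Python packages, but the PCA intuition it generalizes can be sketched in a few lines of plain Python: find the direction of maximal variance via power iteration on the sample covariance matrix. The two-feature data below are made up purely for illustration:

```python
# Toy illustration of the PCA intuition behind MOFA: power iteration
# on a 2x2 sample covariance matrix to find the leading "factor".
# Made-up data; real MOFA handles many features across several omics layers.

def leading_direction(data, iters=100):
    """Unit vector along the top principal component of 2-feature data."""
    n = len(data)
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    # 2x2 sample covariance matrix
    c = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(2)]
         for i in range(2)]
    v = [1.0, 0.0]  # arbitrary starting vector
    for _ in range(iters):
        w = [c[0][0] * v[0] + c[0][1] * v[1],
             c[1][0] * v[0] + c[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

# Two perfectly correlated "features": the top factor is the diagonal.
samples = [[1.0, 1.1], [2.0, 2.1], [3.0, 3.1], [4.0, 4.1]]
direction = leading_direction(samples)
```

MOFA extends this idea with a probabilistic model that handles several data modalities, sparsity, and missing values at once; the sketch only shows the single-view, two-feature core.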
Keywords:
MOFA, R
Tools:
MOFA, R
Prerequisites:
A working knowledge of R is expected.
The tutorial requires the installation of the following software:
• R >= 3.4 + RStudio
• Python >= 2.7
• MOFA R package (+ dependencies)
• MOFAdata R package (+ dependencies)
• mofapy python package (+ dependencies)

Start: 2019-09-16 09:00:00 UTC
End: 2019-09-16 17:00:00 UTC
Organizer: de.NBI
Venue: Heidelberg, Germany
Event type: meetings_and_conferences

Integrating computational meta-omics for microbiome research - GCB 2019

16 September 2019

Heidelberg, Germany

https://www.denbi.de/training/654-integrating-computational-meta-omics-for-microbiome-research-gcb-2019
https://tess.elixir-europe.org/events/integrating-computational-meta-omics-for-microbiome-research-gcb-2019

Educators:
Dirk Benndorf (BiGi / MetaProtServ), Thilo Muth
Date:
16.09.2019
Location:
German Cancer Research Center
Im Neuenheimer Feld 280
69120 Heidelberg
Germany
Contents:
The field of microbiome research is beginning to investigate microbial functions in relation to dysbiosis (i.e. an unbalanced composition of the microbiome), which is associated with health disorders and disease states. While many microbiome studies rely mainly on genome-based analyses, the integration of meta-omics data at the gene, transcript, protein and metabolite level is a holistic approach that extends the capabilities of microbiome studies. However, the potential of integrative meta-omics has not been fully exploited so far. An important reason is that bioinformatics methods are developed by different research communities, which limits the exchange of ideas and the transfer of methods between researchers across different omics fields.
In this workshop, we want to bring together bioinformaticians and researchers working in meta-omics and microbiome-focused disciplines. The meta-omics workshop aims to:
(i) provide a platform for presenting new algorithms and software tools for integrative multi-omics approaches or related single-omics technologies
(ii) stimulate discussions on challenges and open questions
(iii) help exchanging ideas on bioinformatics methods
(iv) identify what is currently lacking for integrative omics in microbiome research
Abstract proposals for oral presentations (15 min talk + 5 min discussion) of tools, methods or open problems can be submitted until July 31, 2019.
Draft schedule:
• Abstract deadline for open speaker slots: July 31, 2019
• Response to applications for speaker slots: August 20, 2019
• Each talk is limited to 15 minutes plus 5 minutes of discussion. There will be a 15-minute coffee break after the first half of the workshop, followed by a final plenary discussion (20 minutes).
• Proposed time schedule: 13.30 – 16.30
Learning goals:
In this workshop, we want to bring together bioinformaticians and researchers working in meta-omics and microbiome-focused disciplines.
Prerequisites:
Registration for GCB 2019. Abstract proposals for oral presentations (15 min talk + 5 min discussion) of tools, methods or open problems can be submitted until July 31, 2019.
Keywords:
Microbiome, Metaproteomics, MetaProteomeAnalyzer, Prophane
Tools:
MetaProteomeAnalyzer, Prophane
Contact:
Dr. Thilo Muth (Bioinformatics Unit, Robert Koch Institute, Berlin; mutht@rki.de)
Dr. Dirk Benndorf (Bioprocess Engineering, Otto von Guericke University, Magdeburg; benndorf@mpi-magdeburg.mpg.de)

Start: 2019-09-16 13:00:00 UTC
End: 2019-09-16 17:00:00 UTC
Organizer: de.NBI
Venue: Heidelberg, Germany
Event type: meetings_and_conferences

Microscopy Image Analysis Course

19 - 20 September 2019

Heidelberg, Germany

https://www.denbi.de/training/73-microscopy-image-analysis-course
https://tess.elixir-europe.org/events/microscopy-image-analysis-course

Educators:
Karl Rohr, Thomas Wollmann, Manuel Gunkel (HD-HuB), Qi Gao, Leonid Kostrykin
Date:
19.-20.9.2019
Location:
Heidelberg University
IPMB (Institute of Pharmacy and Molecular Biotechnology)
Im Neuenheimer Feld 364
Contents:
The course gives an introduction to the field of microscopy image analysis for cell biology and to the use of software tools for automated processing of image data. Basic methods for computer-based analysis of microscopy images are introduced, such as image preprocessing, segmentation, feature extraction, classification, colocalization, and tracking. Concepts of software platforms, with a focus on ImageJ, and their use for analyzing cell microscopy image data are also taught, as are workflow systems for automating image analysis pipelines (e.g., KNIME, Galaxy). The course consists of lectures and practical sessions; participants should bring their laptops for the practical sessions. The target group are researchers with a background in biology or medicine who need to analyze their data and have little or no experience in automated image analysis.
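As a flavor of the segmentation step the course covers, here is a toy Python sketch (not course material) that binarizes a tiny grayscale "image" by global thresholding; real analyses would use ImageJ or a KNIME/Galaxy workflow, and usually adaptive rather than global thresholds:

```python
# Toy global-threshold segmentation of a tiny grayscale "image".
# Illustrative only; ImageJ and workflow systems like KNIME or Galaxy
# cover realistic preprocessing, thresholding, and object detection.

def segment(image, threshold):
    """Binarize: pixels above the threshold become foreground (1), else background (0)."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

image = [
    [10, 12, 200, 11],
    [9, 180, 210, 10],
    [8, 11, 190, 12],
]
mask = segment(image, threshold=100)
foreground_pixels = sum(sum(row) for row in mask)
```

From such a binary mask, real pipelines would go on to label connected components, extract per-object features, and classify or track the resulting objects, the later steps in the course's method list.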
Learning goals:
- Introduction into cell microscopy image analysis
- Application of software tools for automated analysis of image data
Prerequisites:
Basic knowledge in using software tools for image analysis is helpful but not mandatory
Keywords:
Computer-based image analysis, image preprocessing, segmentation, feature extraction, classification, colocalization, tracking
Tools:
ImageJ
Course fee:
Participants will be charged a course fee of 40 euros (to cover lunch and infrastructure-related costs). The invoice details will be shared via email.
Registration:
Please register directly on the HD-HuB website: https://www.hd-hub.de/course-dates/3-all/47-microscopy-image-analysis-course
In the "Comments" section of the registration form, please provide some information about yourself and your motivation to attend the training (e.g. Position, Field of study/Background, Topic of work, Knowledge of image analysis methods/tools).
Registration closes on August 11, 2019.
The capacity is limited to 20 participants, and applicants will be selected after registration closes. You will be notified of the outcome by e-mail on August 23, 2019.

Start: 2019-09-19 09:00:00 UTC
End: 2019-09-20 17:00:00 UTC
Organizer: de.NBI
Venue: Heidelberg, Germany

Software Carpentry Workshop

16 - 18 October 2019

Heidelberg, Germany

https://www.denbi.de/training/486-software-carpentry-workshop
https://tess.elixir-europe.org/events/software-carpentry-workshop-ab3af408-aa91-49ed-bab2-5db1f2e6d15d

Educators:
Malvika Sharan, Georg Zeller, Mike Smith, Thomas Schwarzl, Frank Thommen (HD-HuB), Holger Dinkel
Date:
16-10-2019 - 18-10-2019
09:00-18:00
Location:
ATC Computer Training Lab, EMBL Heidelberg
Contents:
Computation is an integral part of today's research as data has grown too large or too complex to be analysed by hand. An ever-growing fraction of science is performed computationally and many wet-lab biologists spend part of their time on the computer. Many scientists struggle with this aspect of research as they have not been properly trained in the necessary set of skills. The result is that too much time is spent using inefficient tools when progress could be faster. This course provides training in several key tools, with a focus on good development practices that encourage efficient and reproducible research computing.
Topics covered include:
Introduction to Python scripting
Introduction to the Unix shell and usage of cluster resources
Version control with Git and Github
Analysis pipeline management
Scientific Python & working with biological data
Literate programming with Jupyter notebooks
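As a small taste of the "working with biological data" topic, a few lines of plain Python suffice to compute a classic summary statistic of a DNA sequence. The course itself works with Biopython and the scientific Python stack; this stdlib-only sketch is illustrative, not course material:

```python
# Stdlib-only sketch: GC content of a DNA sequence, the kind of small
# task the "working with biological data" session builds on.
# The course would typically use Biopython for real sequence work.

def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

fraction = gc_content("ATGCGCTA")  # 4 of 8 bases are G or C
```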
Learning goals:
This course aims to teach software writing skills and best practices to researchers in biology who wish to analyse data, and to introduce a toolset that can help them in their work. The goal is to enable them to be more productive and to make their science better and more reproducible.
Prerequisites:
This is a course for researchers in the life sciences who are using computers for their analyses, even if not full time. The target student will be familiar with some command-line or programmatic computer usage and will want to become more confident using these tools efficiently and reproducibly. A target student will have written a for loop in some language before, but will not know what git is (or at least not be very comfortable using git).
Keywords:
Programming; Command Line; Version Control; Bioinformatics; Data Analysis; Cluster Computing
Tools:
Python; Bash; Unix/Linux; Git; GitHub; SnakeMake; Biopython; Pandas; Numpy; SciPy; Matplotlib
Start: 2019-10-16 09:00:00 UTC
End: 2019-10-18 17:00:00 UTC
Organizer: de.NBI / ELIXIR
Venue: Heidelberg, Germany
Event type: workshops_and_courses

Machine Learning in R

6 - 7 November 2019

Heidelberg, Germany

https://www.denbi.de/training/675-machine-learning-in-r
https://tess.elixir-europe.org/events/machine-learning-in-r

Date
Nov 6 - Nov 7 2019
Location
EMBL Heidelberg
Tutors and helpers
- Dr. Malvika Sharan
- Prof Bernd Bischl
- Martin Binder
- Giuseppe Casalicchio (Affiliation: Ludwig-Maximilians-University Munich)
Course Information
This two-day course on the implementation of machine learning in R using the mlr package will be delivered as practical sessions on programming and data analysis. The main goal of mlr is to provide a unified interface for machine learning tasks such as classification, regression, cluster analysis and survival analysis in R. Sessions will be driven by many practical exercises and case studies. Before this workshop, participants are expected to review the official material introducing the principles of machine learning (see the prerequisites).
Course Content
This 2-day course will cover hands-on sessions using `mlr` and other relevant packages.
Daily schedule
- 09:30-12:30 Morning session (3 h): 90 min theory + 90 min practical
- 12:30-13:30 Lunch break (1 h)
- 13:30-16:30 Afternoon session (3 h): 90 min theory + 90 min practical
- 16:30-17:00 Time for general questions
Day 1
Introduction to the concepts and practical work with mlr
- Performance Evaluation and Resampling (Metrics, CV, ROC)
- Introduction to Boosting
Day 2
Introduction to the concepts and practical work with mlr
- Tuning and Nested Cross-Validation
- Regularization and Feature Selection
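The course teaches resampling with R's mlr, but the underlying idea of k-fold cross-validation is language-agnostic: partition the sample indices into k folds, then train on k-1 folds and evaluate on the held-out one. A stdlib Python sketch of the index splitting (not course material, which is all in R):

```python
# Language-agnostic sketch of k-fold cross-validation index splitting,
# the resampling idea the course covers with R's mlr package.
# Stdlib Python only; illustrative, not course material.

def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k roughly equal folds."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        size = fold_size + (1 if fold < remainder else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        start += size
        yield train, test

splits = list(k_fold_indices(10, k=3))  # folds of size 4, 3, 3
```

In practice one would also shuffle (or stratify) the indices before splitting; mlr handles that, along with the nested cross-validation used for tuning on Day 2.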
Prerequisite
The course is aimed at advanced R programmers, preferably with some knowledge of statistics and data modeling (see the prerequisite materials from Days 1, 2 & 4). Through the hands-on sessions and use cases, learners will deepen their understanding of machine learning and its application and implementation.
Optional: Discussion-Based Session On The Principle of Machine Learning
Anna Kreshuk (EMBL Group Leader) will lead a one-day discussion-based session on 14 October 2019 to address your questions on the prerequisite materials on the principle of Machine Learning. This will also allow you to connect with other participants of this workshop informally, and discuss the materials in smaller groups. Please register for this workshop separately: https://bio-it.embl.de/events/machine-learning-discussion-workshop-2019/.
Registration
Please register on this page: https://bio-it.embl.de/events/machine-learning-in-r-2019/
Please note that the maximum capacity of this course is 40 participants and registration is required to secure a place. If you have any questions, please contact Malvika Sharan. In your registration, please mention your EMBL group name, or institute's name (e.g. DKFZ, Uni-HD) if you are registering as an external participant.
Costs
60,00 EUR
Keywords:
Machine Learning, R
Start: 2019-11-06 09:00:00 UTC
End: 2019-11-07 17:00:00 UTC
Organizer: de.NBI
Venue: Heidelberg, Germany
Event type: workshops_and_courses