
Sunday, November 16, 2014

I previously wrote about the JavaScript Object Notation (JSON), which has become a de facto standard for sharing data between web services. I personally still prefer something based on the Resource Description Framework (RDF) because of its clear link to ontologies, but perhaps JSON-LD combines the best of both worlds. The Open PHACTS API supports various formats, and JSON is the default format used by the ops.js library. However, the information returned by the Open PHACTS cache is complex and generally includes more than you want to use in the next step. Therefore, you need to extract data from the JSON document, something not covered in post #10 or #11.
Let's start with the example JSON given in that post, and let's consider this is the value of a variable with the name jsonData:
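The JSON itself appears to have been embedded as an image in the original post and is missing here. Based on the fields accessed below (price, tags, stock.warehouse), it was presumably the widely circulated tutorial example along these lines (the exact values are an assumption):

```javascript
// Reconstructed example JSON; the exact values are illustrative.
var jsonData = {
  "id": 1,
  "name": "Foo",
  "price": 123,
  "tags": ["Bar", "Eek"],
  "stock": {
    "warehouse": 300,
    "retail": 20
  }
};
```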

We can see that this JSON value starts with a map-like structure. We can also see that there is a list embedded, and another map. I guess that one of the reasons why JSON has become so popular is how well it integrates with the JavaScript language: selecting content can be done with core language features, unlike, for example, the XPath statements needed for XML or SPARQL for RDF content. This is because the notation simply follows the core data types of JavaScript, and the data is stored as native data types and objects.

For example, to get the price value from the above JSON code, we use:

var price = jsonData.price;

Or, if we want to get the first value in the Bar-Eek list, we use:

var tag = jsonData.tags[0];

Or, if we want to inspect the warehouse stock:

var inStock = jsonData.stock.warehouse;

Now, the JSON returned by the Open PHACTS API contains a lot more information. This is why the online, interactive documentation is so helpful: it shows the JSON. In fact, because JSON is so widely used, there are many online tools that help you, such as jsoneditoronline.org (yes, it will show error messages if the syntax is wrong):

BTW, I also recommend installing a JSON viewer extension for Chrome or for Firefox. Once you have installed such an extension, you can not only read the JSON on Open PHACTS' interactive documentation page, but also open the Request URL in a separate browser window. Just copy/paste the URL from this output:

And with a JSON viewing extension, opening this https://beta.openphacts.org/1.3/pathways/... URL in your browser window will look something like:

And because these extensions typically use syntax highlighting, it is easier to understand how to access information from within your JavaScript code. For example, if we want the number of pathways in which the compound testosterone (the link is the ConceptWiki URL in the above example) is found, we can use this code:
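The code snippet itself is missing from this copy of the post. A sketch of the idea, using a mocked response: the actual field names in the Open PHACTS JSON may well differ (they are assumptions here), but the access pattern is the same — walk down the nested objects and take the length of the embedded list.

```javascript
// Mocked fragment of a pathways-by-compound response; the real
// Open PHACTS field names may differ from this sketch.
var jsonResponse = {
  "result": {
    "items": [
      { "title": "Androgen receptor signaling pathway" },
      { "title": "Steroid hormone biosynthesis" }
    ]
  }
};

// The number of pathways is simply the length of the embedded list:
var pathwayCount = jsonResponse.result.items.length;
console.log(pathwayCount);  // 2
```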

Debugging is the process of finding and removing a fault in your code (the etymology goes back further than the moth story, I learned today). Being able to debug is an essential programming skill; being able to program flawlessly is not enough, because the bug can be outside your own code. (... there is much that can be written about module interactions, APIs, documentation, etc., that lead to malfunctioning code ...)

While there are full debugging tools, finding where the bug is can often be achieved with simpler means:

take notice of error messages

add debug statements in your code

Error messages

Keeping track of error messages is the first starting point. This skill is almost an art: it requires having seen enough of them to understand how to interpret them. I guess error messages are among the worst-developed aspects of programming languages, and I do not frequently see programming language tutorials that discuss error messages. The field can certainly improve here.

However, error messages at least generally give an indication of where the problem occurs, often by a line number, though this number is not always accurate. An underlying cause is that when there is a problem in the code, it is not always clear what the problem is. For example, if a closing (or opening) bracket is missing somewhere, how can the compiler decide what the author of the code meant? Web browsers like Firefox/Iceweasel and Chrome have a console (reachable via the developer tools, e.g. with F12) that displays compiler errors and warnings:
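You can provoke such a message deliberately to see what it looks like; a minimal sketch that feeds the interpreter a statement with a missing closing bracket:

```javascript
// Parse a statement with a missing closing parenthesis; the browser
// console would report something like "SyntaxError: missing ) ...".
var errorName = null;
try {
  eval("console.log('hello'");  // note the missing ')'
} catch (e) {
  errorName = e.name;
}
console.log(errorName);  // "SyntaxError"
```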

Another issue is that error messages can be cryptic and misleading. For example, the above error message "TypeError: searcher.bytag is not a function example1.html:73" is confusing for a starting programmer. Surely, the source code calls searcher.bytag(), which definitely is a function. So why does the compiler say it is not? The bug here, of course, is that the function called in the source code is not found: it should be byTag().

But this bug can at least be detected during interpretation and execution of the code. That is, it is clear to the compiler that it doesn't know how to handle the code. Another common problem is the situation where the code looks fine (to the compiler), but the data it handles makes the code break down. For example, a variable doesn't have the expected value, leading to errors (e.g. null pointer-style). Therefore, understanding the variable values at a particular point in your code can be of great use.
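Both situations are easy to reproduce; a minimal sketch (the searcher object here is a stand-in for illustration, not the real ops.js one):

```javascript
// A stand-in object with the correctly capitalized method.
var searcher = { byTag: function (tag) { return "results for " + tag; } };
console.log(typeof searcher.byTag);  // "function"
console.log(typeof searcher.bytag);  // "undefined" — calling it throws a TypeError

// A null pointer-style error: the data lacks the expected structure.
var jsonData = { stock: null };
var errorSeen = null;
try {
  var inStock = jsonData.stock.warehouse;  // stock is null, so this throws
} catch (e) {
  errorSeen = e.name;
}
console.log(errorSeen);  // "TypeError"
```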

Console output

A simple way to inspect the content of a variable is to use the console visible in the above screenshot. Many programming languages have their own call to send output there: Java has System.out.println() and JavaScript has console.log():

Thus, if you have some complex bit of code with multiple for-loops, if-else statements, etc., you can use this to check whether some part of your code that you expect to be called really is:

console.log("He, I'm here!");

This can be very useful when using asynchronous web service calls! Similarly, see what the value of some variable is:
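The original example is missing from this copy of the post, but the idea is simply (variable name and values are illustrative):

```javascript
// Print a variable's value to the console to check it at this point.
var molecules = ["testosterone", "estradiol"];
console.log("number of molecules: " + molecules.length);  // number of molecules: 2
console.log(molecules);  // prints the full array content
```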

These tools are very useful for finding the location of a bug. And this matters. Yesterday I was trying to use the histogram code in example6.html to visualize a set of values with negative numbers (zeta potentials of nanomaterials, to be precise), and I was debugging the issue, trying to find where my code went wrong. I used the above approaches, and the array of values looked in order, but different from the original example. Still, the histogram was not showing up. Well, after hours, having asked someone else to look at the code too, and having ruled out many alternatives, she pointed out that the problem was not in the JavaScript part of the code, but in the HTML: I was mixing up how default JavaScript and the d3.js library add SVG content to the HTML data model. That is, I was using <div id="chart">, which works with document.getElementById("chart").innerHTML, but needed to use <div class="chart"> with the d3.select(".chart").innerHTML code I was using later.

OK, that bug was my own. However, it still was not working: I did see a histogram, but it didn't look good. Again debugging, and after again far too long, I found out that this was a bug in the d3.js code that makes it impossible to use their histogram example code for negative values. Again, once I knew where the bug was, I could Google and quickly found the solution on StackOverflow.

Eating your own dog food is a rather useful concept in anything where a solution or product can change over time. This applies to science as much as to programming. Even when we think things are static, they may not really be. This is often because we underestimate, or are simply ignorant of, factors that influence the outcome. By repeatedly dogfooding, the expert will immediately recognize the effect of different influencing factors.

Examples? A politician who actually lives in the neighborhood he develops policies for. A principal investigator who tries to reproduce an experiment from one of her/his postdocs or PhD students. And, of course, the programmer who uses his own libraries himself.

Dogfooding, however, is not the single solution to development; in fact, it can easily be integrated with other models. But it can serve as an early warning system, as the communication channels between you and yourself are typically much shorter than those between you and the customer: citizen, peer reviewer, and user, following the above examples. Besides that, it also helps you better understand the thing that is being developed, because you will see influencing factors in action, and everything becomes more empirical, rather than just theoretical ("making money scarce is a good incentive for people to get off the couch", "but we have been using this experiment for years", "that situation in this source code will never be reached", etc.).

And this also applies when teaching. So, you check the purity of the starting materials in your organic synthesis labs, and you check whether your code examples still run. And you try things you have not done before, just to test the theory that if X is possible, Y should be possible too, because that is what you tell your students.

Of course, what the students produced last year, and probably will produce this year, is much more impressive. And, of course, compared to full applications (I recommend browsing this list by the Open PHACTS Foundation), these are just mock-ups. These examples are just like figures in a paper, making a specific point. But that is how these pages are used: as arguments to answer a biological question. In fact, and that is outside the scope of this course, just think of what you can do with this approach in terms of living research papers. Think Sweave!

Thursday, November 06, 2014

I think the authors of the Open PHACTS proposal made the right choice in defining a small set of questions that the solution to be developed could be tested against. With the questions being specific, it is much easier to understand the needs. In fact, I suspect it may even be a very useful form of requirement analysis, as it makes it hard to keep using vague terms. Open PHACTS came up with 20 questions (doi:10.1016/j.drudis.2013.05.008; Open Access):

Given compound X, what is its predicted secondary pharmacology? What are the on- and off-target safety concerns for a compound? What is the evidence and how reliable is that evidence (journal impact factor, KOL) for findings associated with a compound?

Given a target, find me all actives against that target. Find/predict polypharmacology of actives. Determine ADMET profile of actives

For a given interaction profile – give me similar compounds

The current Factor Xa lead series is characterized by substructure X. Retrieve all bioactivity data in serine protease assays for molecules that contain substructure X

A project is considering protein kinase C alpha (PRKCA) as a target. What are all the compounds known to modulate the target directly? What are the compounds that could modulate the target directly? I.e. return all compounds active in assays where the resolution is at least at the level of the target family (i.e. PKC) from structured assay databases and the literature

Give me all active compounds on a given target with the relevant assay data

Identify all known protein–protein interaction inhibitors

For a given compound, give me the interaction profile with targets

For a given compound, summarize all ‘similar compounds’ and their activities

Retrieve all experimental and clinical data for a given list of compounds defined by their chemical structure (with options to match stereochemistry or not)

For my given compound, which targets have been patented in the context of Alzheimer's disease?

Which ligands have been described for a particular target associated with transthyretin-related amyloidosis, what is their affinity for that target and how far are they advanced into preclinical/clinical phases, with links to publications/patents describing these interactions?

Target druggability: compounds directed against target X have been tested in which indications? Which new targets have appeared recently in the patent literature for a disease? Has the target been screened against in AZ before? What information on in vitro or in vivo screens has already been performed on a compound?

Which chemical series have been shown to be active against target X? Which new targets have been associated with disease Y? Which companies are working on target X or disease Y?

Which compounds are known to be activators of targets that relate to Parkinson's disease or Alzheimer's disease

For my specific target, which active compounds have been reported in the literature? What is also known about upstream and downstream targets?

Compounds that agonize targets in pathway X assayed in only functional assays with a potency <1 μM

Give me the compound(s) that hit most specifically the multiple targets in a given pathway (disease)

For a given disease/indication, give me all targets in the pathway and all active compounds hitting them

Students in the Programming in the Life Sciences course will this year pick one of these questions as a starting point for their project. The goal is to develop an HTML+JavaScript solution that answers the selected question. There is freedom to tweak the question to personal interests, of course. By selecting a simpler pharmacological question than last year, more time and effort can be put into the visualization and interpretation of the found data.

This blog deals with chemblaics in the broader sense. Chemblaics (pronounced chem-bla-ics) is the science that uses computers to solve problems in chemistry, biochemistry and related fields. The big difference between chemblaics and areas such as chem(o)?informatics, chemometrics, computational chemistry, etc, is that chemblaics only uses open source software, open data, and open standards, making experimental results reproducible and validatable. And this is a big difference!

About Me

Assistant professor at the Dept of Bioinformatics - BiGCaT at NUTRIM, Maastricht University, studying biology at an unsupervised and atomic level. Open Science is my main hobby resulting in participation in, among many others, Bioclipse, CDK and WikiPathways. ORCID:0000-0001-7542-0286. Posts on G+ are personal.
