New software programs should help identify potential avenues for improved drug treatments for diabetes and other diseases

Proteins sometimes run amok. The useful genetic and biological material they contain can become distorted. Mutations in specific amino acids can cause long strands of proteins to curl in on themselves (like a ball of wool a cat has played with) and refuse to break apart. These strands, known as amyloid fibrils, can be extremely toxic. They attach to organs such as the brain and pancreas, preventing them from functioning as they should, and they are responsible for diseases as seemingly different as diabetes and Alzheimer’s, to name just a couple. Developing effective medications to treat these diseases and dissolve the fibrils typically involves biochemists in a lengthy and expensive process of trial and error.

Billions of choices

But now McGill researchers, led by Prof. Jérôme Waldispühl of the School of Computer Science, have created a suite of computer programs that should speed up the process of drug discovery for diseases of this kind. The programs are designed to scan the fibrils (or misfolded proteins) looking for weak spots. The idea is to then design helpful genetic mutations to dissolve the bonds that hold the fibrils together - a bit like finding the right strand of wool to tug on to unravel a whole knotted ball. It’s potentially a gargantuan task, because looking for the mutations that will prove useful in drug development involves exploring millions of possible structural combinations of genetic material.

But for the Fibrilizer, as McGill has dubbed its suite of computer tools (a name that hints at the superheroic nature of the programs), the task is of a very different order. “Within the space of a week, by using our programs and a supercomputer, we were able to look at billions of possible ways to weaken the bonds within these toxic protein strands. We narrowed it down to just 30 to 50 possibilities that can now be explored further,” says Mohamed Smaoui, a McGill postdoctoral fellow and the first author on three recent papers on the research. “Typically, biochemists can spend months or years in the lab trying to pinpoint these promising mutations.”

Supercomputers to the rescue

The researchers tested their program on a medical compound that scientists have been trying to improve for the last couple of decades. The compound is administered as part of a drug that is used by diabetes patients to boost the performance of insulin and is sold under the name Symlin. The synthetic compound is based on a version of the protein amylin, yet is known to be toxic to the pancreas over the long term, creating amyloid fibrils. The McGill team was able to use Fibrilizer to pinpoint a limited number of possible genetic modifications to the compound that would act to reduce its toxicity.

Jérôme Waldispühl, the lead researcher on the papers, believes that computational research of this kind will play an increasingly important role in drug discovery in the future. “Computers are transforming the way that drugs are being developed,” says Waldispühl. “Amyloid research has accelerated in the last 10 years. But it may prove to be the key to finding better medications for a whole range of systemic and neurodegenerative diseases, from arthritis to Parkinson’s. Without supercomputers and programs of this kind, this research would be much more time-consuming and expensive.”

South Florida is on the front lines in the war against invasive reptiles and amphibians because its warm climate makes it a place where they like to live, a new University of Florida study shows.

Using supercomputer models and data showing where reptiles live in Florida, UF/IFAS scientists predicted where they could find non-native species in the future. They found that as temperatures climb, areas grow more vulnerable to invasions by exotic reptiles. Conversely, they found that extreme cold temperatures protect against invasion.

“Early detection and rapid response efforts are essential to prevent more of the 140 introduced species from establishing breeding populations, and this study helps us choose where to look first,” said Frank Mazzotti, a wildlife ecology and conservation professor at the University of Florida Institute of Food and Agricultural Sciences Fort Lauderdale Research and Education Center.

The new study is published online in the journal Herpetological Conservation Biology.

Lead author Ikuko Fujisaki, an assistant professor of wildlife ecology and conservation at the Fort Lauderdale REC, said scientists conducted the study to provide scientific data for managing invasive wildlife in the Sunshine State.

America imports more exotic animals than any other country in the world, with more than 1 billion animals entering the nation from 2005 through 2008, according to the U.S. Government Accountability Office. They come in by boats, planes and other modes of transportation. The animals are often used in the pet trade, but have other uses as well, including food and religious practices. Once they’re established, exotic animals are costly to remove, according to a 2010 study led by Michigan State University. Therefore, wildlife management agencies are always looking for better ways to detect invasive species early.

Urban areas are hubs of international transport and therefore are major gateways for exotic pests. With its subtropical and tropical climates and its high human population (19.9 million as of 2014), Florida provides a unique opportunity for a geographic risk assessment because of the number of exotic species that establish, fail to establish or whose fate is unknown, the UF/IFAS scientists said.

Invasive species are second only to habitat loss as a contributor to the decline of biodiversity worldwide, Mazzotti wrote in a 2015 UF/IFAS Extension paper. Florida has more introduced species of reptiles and amphibians in the wild than anywhere else in the world.

These data lead Mazzotti to suggest South Florida as the focal area for detecting exotic species.

The ability to custom design biological materials such as protein and DNA opens up technological possibilities that were unimaginable just a few decades ago. For example, synthetic structures made of DNA could one day be used to deliver cancer drugs directly to tumor cells, and customized proteins could be designed to specifically attack a certain kind of virus. Although researchers have already made such structures out of DNA or protein alone, a Caltech team recently created--for the first time--a synthetic structure made of both protein and DNA. Combining the two molecule types into one biomaterial opens the door to numerous applications.

A paper describing the so-called hybridized, or multiple component, materials appears in the September 2 issue of the journal Nature.

There are many advantages to multiple component materials, says Yun (Kurt) Mou (PhD '15), first author of the Nature study. "If your material is made up of several different kinds of components, it can have more functionality. For example, protein is very versatile; it can be used for many things, such as protein-protein interactions or as an enzyme to speed up a reaction. And DNA is easily programmed into nanostructures of a variety of sizes and shapes."

But how do you begin to create something like a protein-DNA nanowire--a material that no one has seen before?

Mou and his colleagues in the laboratory of Stephen Mayo, Bren Professor of Biology and Chemistry and the William K. Bowes Jr. Leadership Chair of Caltech's Division of Biology and Biological Engineering, began with a supercomputer program to design the type of protein and DNA that would work best as part of their hybrid material. "Materials can be formed using just a trial-and-error method of combining things to see what results, but it's better and more efficient if you can first predict what the structure is like and then design a protein to form that kind of material," he says.

The researchers entered the properties of the protein-DNA nanowire they wanted into a supercomputer program developed in the lab; the program then generated a sequence of amino acids (protein building blocks) and nitrogenous bases (DNA building blocks) that would produce the desired material.

However, successfully making a hybrid material is not as simple as just plugging some properties into a supercomputer program, Mou says. Although the supercomputer model provides a sequence, the researcher must thoroughly check the model to be sure that the sequence produced makes sense; if not, the researcher must provide the supercomputer with information that can be used to correct the model. "So in the end, you choose the sequence that you and the computer both agree on. Then, you can physically mix the prescribed amino acids and DNA bases to form the nanowire."

The resulting sequence was an artificial version of a protein-DNA coupling that occurs in nature. In the initial stage of gene expression, called transcription, a sequence of DNA is first converted into RNA. To pull in the enzyme that actually transcribes the DNA into RNA, proteins called transcription factors must first bind certain regions of the DNA sequence called protein-binding domains.

Using the supercomputer program, the researchers engineered a sequence of DNA that contained many of these protein-binding domains at regular intervals. They then selected the transcription factor that naturally binds to this particular protein-binding site--the transcription factor called Engrailed from the fruit fly Drosophila. However, in nature, Engrailed only attaches itself to the protein-binding site on the DNA. To create a long nanowire made of a continuous strand of protein attached to a continuous strand of DNA, the researchers had to modify the transcription factor to include a site that would allow Engrailed also to bind to the next protein in line.

"Essentially, it's like giving this protein two hands instead of just one," Mou explains. "The hand that holds the DNA is easy because it is provided by nature, but the other hand needs to be added there to hold onto another protein."

Another unique attribute of this new protein-DNA nanowire is that it employs coassembly--meaning that the material will not form until both the protein components and the DNA components have been added to the solution. Although materials previously could be made out of DNA with protein added later, the use of coassembly to make the hybrid material was a first. This attribute is important for the material's future use in medicine or industry, Mou says, as the two sets of components can be provided separately and then combined to make the nanowire whenever and wherever it is needed.

This finding builds on earlier work in the Mayo lab, which, in 1997, created one of the first artificial proteins, thus launching the field of computational protein design. The ability to create synthetic proteins allows researchers to develop proteins with new capabilities and functions, such as therapeutic proteins that target cancer. The creation of a coassembled protein-DNA nanowire is another milestone in this field.

"Our earlier work focused primarily on designing soluble, protein-only systems. The work reported here represents a significant expansion of our activities into the realm of nanoscale mixed biomaterials," Mayo says.

Although the development of this new biomaterial is in the very early stages, the method, Mou says, has many promising applications that could change research and clinical practices in the future.

"Our next step will be to explore the many potential applications of our new biomaterial," Mou says. "It could be incorporated into methods to deliver drugs into cells--to create targeted therapies that only bind to a certain biomarker on a certain cell type, such as cancer cells. We could also expand the idea of protein-DNA nanowires to protein-RNA nanowires that could be used for gene therapy applications. And because this material is brand-new, there are probably many more applications that we haven't even considered yet."

New model allows pharmacological researchers to dock nearly any drug and see how it behaves in P-glycoprotein, a protein in the cell associated with failure of chemotherapy

Drugs important in the battle against cancer behaved as predicted, matching their real-life responses, when tested in a supercomputer-generated model of one of the cell's key molecular pumps -- the protein P-glycoprotein, or P-gp.

Psychology researchers who have hypothesized that we classify scenery by following some order of cognitive priorities may have been overlooking something simpler. New evidence suggests that the fastest categorizations our brains make are simply the ones where the necessary distinction is easiest.

There are many ways we parse scenery. Is it navigable or obstructed? Natural or man-made? A lake or a river? A face or not a face? In many previous experiments, researchers have found that some levels of categorization seem special in that they occur earlier than others, leading to a hypothesis that the brain has a prescribed set of priorities. One example of this, the "superordinate advantage," holds that people will first sort out the global or "superordinate" character of a scene before categorizing basic details. Judging "indoor vs. outdoor," the hypothesis goes, not only happens before "kitchen vs. bathroom," but must happen first.

After gathering the new data published in the journal PLOS Computational Biology, senior author Thomas Serre, assistant professor of cognitive, linguistic and psychological sciences at Brown University, isn't so sure.

"Whatever is happening in the visual system might not be as sophisticated as we thought," Serre said.

Of categorization and computation

Serre and co-authors Imri Sofer and Sébastien Crouzet wanted to see whether the main predictor of how people first categorized a scene was merely the "discriminability" of the categorization, a measure of the ease of making the needed distinction.

They started by establishing discriminability scores for scenery images (they didn't rely, as some studies have, on abstract images with clear shapes and colors). To do this they used a standard computational model that could be trained by exposing it to pictures from a very large database of natural scenes. Once trained, the algorithm could perform many categorization tasks. For each categorization task, the algorithm could also calculate how close each example was to the boundary (i.e., the line where it was a 50/50 shot) of being one thing or the other (e.g., man-made or natural). The greater the mathematical distance from that category boundary, the higher the discriminability score.
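The idea of scoring discriminability as distance from a category boundary can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' actual model: the toy 2-D features, learning rate, and category labels are assumptions standing in for features produced by a trained scene-classification system.

```python
import numpy as np

# Toy stand-in for image features: 2-D points in two categories
# (say, 0 = "man-made", 1 = "natural"). In the real study, features
# came from a model trained on a large database of natural scenes.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(50, 2))  # category 0
X1 = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(50, 2))  # category 1
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Fit a linear category boundary with logistic regression
# (plain gradient descent on the cross-entropy loss).
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(category 1)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def discriminability(x):
    """Distance from the decision boundary, in feature units.

    Examples near the boundary are 50/50 calls (low score);
    examples far from it are easy to categorize (high score).
    """
    return abs(x @ w + b) / np.linalg.norm(w)

far = discriminability(np.array([3.0, 0.0]))   # clearly category 1
near = discriminability(np.array([0.2, 0.0]))  # near the 50/50 line
print(far > near)  # the unambiguous example scores higher
```

In the study, scores computed this way were then used to select stimuli spanning the full range of discriminability, so that its effect on response speed and accuracy could be measured directly.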

Once the researchers had a way of scoring discriminability, they conducted two experiments with small groups of human volunteers.

In the first one they asked eight volunteers to go through hundreds of trials in which they got quick glimpses of images and had to make the man-made vs. natural distinction by pressing a button. The researchers carefully presented images with the full range of discriminability scores. What they observed is that the higher the discriminability, the faster and more accurately the volunteers could make the categorization.

That small experiment confirmed that the computational model was able to predict human behavioral responses and that discriminability was a significant factor. That set the table for the more profound question: If one accounts for discriminability, do categorization levels, such as in the superordinate advantage, really matter?

The next experiment addressed that by presenting another 24 volunteers with tasks where they had to sort through a superordinate categorization (e.g., man-made vs. natural) and a basic categorization (e.g., desert vs. beach). Half of the participants were given tasks where the greater discriminability was at the superordinate level and half were given tasks where the easier distinctions lay in the basic level. The experiment involved more than 1,000 trials.

If people had a hardwired priority for the superordinate level, they would always make that distinction more quickly, but that's not what happened. Instead, people for whom the basic categorizations were easier accomplished those more quickly and accurately. By manipulating discriminability, the researchers dispensed with the superordinate advantage and replaced it with a "basic advantage."

These results suggest that the superordinate advantage is not necessarily part of a pre-ordained hierarchy in the brain. The superordinate categorization may just typically be easier.

"The mere fact that it is possible to reverse [the superordinate advantage] shows that it is not a sequential type of process," Serre said.

It's certainly still possible that a hybrid of the two hypotheses exists, Serre said. There may be some hierarchy or priorities, but discriminability is such a powerful factor it can actually overwhelm them. Further experiments are underway.

As researchers continue to probe the psychology of how we sort out scenes, Serre said, they should at least use discriminability as a control in their experiments.