
Abstract: Create a Biopython module that will enable users to automatically access and parse species locality records from online biodiversity databases; link these to user-specified phylogenies; calculate basic alpha- and beta-phylodiversity summary statistics; produce input files for the various inference algorithms available for inferring historical biogeography; and convert the output of these programs into files suitable for mapping, e.g. in Google Earth (KML files).

Work Plan

Note: all major functions are being placed in the file geogUtils.py for the moment. Also, the immediate goal is to just get everything basically working, so details of where to put various functions, what to call them, etc. are being left for later.

Code usage: For a few things, an entire necessary function already exists (e.g. for reading a shapefile), and re-inventing the wheel seems pointless. In most cases the material used appears to be open source (e.g. from a previous Google Summer of Code project); for a few short code snippets found online in various places I am less sure. In all cases I am noting the source, and when finalizing this project I will go back, determine whether the material is copyrighted, and if so email the authors for permission to use it.

extract_latlong

shapefile_points_in_poly, tablefile_points_in_poly

Input geographic points and determine which region (polygon) each point falls in (via a point-in-polygon algorithm); also output points that cannot be classified, e.g. some GBIF locations were mis-typed in the source database, so a record falls in the middle of the ocean.
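The point-in-polygon step could be sketched with the standard ray-casting test; this is a minimal illustration (not the shapefile-reading code itself), with the polygon reduced to a list of (x, y) vertex tuples:

```python
# Minimal ray-casting point-in-polygon sketch; the real code reads polygons
# from a shapefile, but the core test is the same.
def point_in_poly(x, y, poly):
    """Return True if (x, y) is inside the polygon given as [(x0, y0), ...]."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of the point
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_poly(5, 5, square))    # True
print(point_in_poly(15, 5, square))   # False (e.g. a mis-typed ocean record)
```

A record that falls in no polygon at all would go to the "unclassified" output.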

Code

Note: creating functions for all possible interactions with GBIF is not possible in the time available, so I will just focus on searching and downloading basic occurrence record data.

access_gbif

utility function invoked by the other functions; the user supplies search parameters, and the GBIF response in XML/DarwinCore format is returned. The relevant GBIF web service, and the search commands etc., are here: http://data.gbif.org/ws/rest/occurrence
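A hedged sketch of what such a utility function might look like: build a query string for the occurrence web service and fetch the raw XML response. The `/list` action and the parameter names are assumptions based on the service's documented search commands; the exact set accepted by the live service may differ.

```python
# Sketch of an access_gbif-style helper (endpoint action and parameter
# names are illustrative assumptions, not confirmed against the live API).
from urllib.parse import urlencode
from urllib.request import urlopen

GBIF_OCCURRENCE_URL = "http://data.gbif.org/ws/rest/occurrence/list"

def access_gbif(params):
    """Send the given search parameters to GBIF, return the raw XML bytes."""
    url = GBIF_OCCURRENCE_URL + "?" + urlencode(params)
    with urlopen(url) as response:
        return response.read()

# e.g. access_gbif({"scientificname": "Genlisea", "maxresults": 10})
print(GBIF_OCCURRENCE_URL + "?" + urlencode({"scientificname": "Genlisea"}))
```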

get_hits

Gets the actual hits returned by a given search; returns the filename where they are saved

get_xml_hits

Like get_hits, but returns a parsed XML tree

fix_ASCII

files downloaded from GBIF contain HTML character entities and Unicode characters (mostly umlauts and the like) which mess up printing results to the prompt in Python; this fixes that
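One way such a fix could work (a sketch, not the module's actual code): unescape the HTML character entities, then transliterate accented characters to plain ASCII so results print cleanly at the prompt.

```python
# Sketch of a fix_ASCII-style cleanup: resolve HTML entities, then strip
# accents via Unicode decomposition so only plain ASCII remains.
import html
import unicodedata

def fix_ascii(text):
    """Return a plain-ASCII version of GBIF text with entities resolved."""
    text = html.unescape(text)                        # "&ouml;" -> "ö"
    decomposed = unicodedata.normalize("NFKD", text)  # "ö" -> "o" + combining mark
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(fix_ascii("G&ouml;teborg"))  # Goteborg
```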

Code

June, week 2: Functions to get GBIF records

Added functions to download & parse large numbers of records, get TaxonOccurrence gbifKeys, and search with those keys.

get_record

Retrieves a single specified record in DarwinCore XML format, and returns an xmltree for it.

extract_occurrence_elements

Returns a list of the elements, picking elements by TaxonOccurrence; this should return a list whose length equals the number of hits.

extract_taxonconceptkeys_tolist

Searches an element in an XML tree for TaxonOccurrence gbifKeys and the complete name; searches recursively if there are subelements. Returns a list.
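The recursive search could be sketched as below. The tag strings (`TaxonOccurrence`, `taxonName`) and the `gbifKey` attribute placement are illustrative assumptions; the real DarwinCore response uses namespaced tags.

```python
# Sketch of a recursive (key, name) harvest from an ElementTree element.
# Tag names here are hypothetical stand-ins for the namespaced DarwinCore tags.
from xml.etree import ElementTree as ET

def extract_taxonconceptkeys_tolist(element, results=None):
    """Recursively collect (gbifKey, name) pairs from TaxonOccurrence elements."""
    if results is None:
        results = []
    if element.tag == "TaxonOccurrence":
        name_el = element.find("taxonName")
        name = name_el.text if name_el is not None else None
        results.append((element.get("gbifKey"), name))
    for child in element:          # recurse into subelements
        extract_taxonconceptkeys_tolist(child, results)
    return results

root = ET.fromstring(
    "<hits><TaxonOccurrence gbifKey='111'><taxonName>Genlisea aurea</taxonName>"
    "</TaxonOccurrence></hits>")
print(extract_taxonconceptkeys_tolist(root))  # [('111', 'Genlisea aurea')]
```

The `_tofile` variant would write each pair to `outfh` instead of appending to a list.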

extract_taxonconceptkeys_tofile

Searches an element in an XML tree for TaxonOccurrence gbifKeys and the complete name; searches recursively if there are subelements. Returns a file at outfh.

get_all_records_by_increment

Download all of the records in stages and store them in a list of elements. Downloads in increments of e.g. 100 records so as not to overload the server. Currently stores results in a list of tempfiles, which is returned (could return a list of handles, I guess).
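The incremental-download loop could be sketched as follows. `fetch_page` is a hypothetical stand-in for the GBIF call (which would pass start/increment parameters to the web service); here it pages through an in-memory list so the loop itself can be demonstrated.

```python
# Paging sketch: fetch `inc` records at a time from successive offsets
# until a short page signals the end of the result set.
def get_all_records_by_increment(fetch_page, inc=100):
    """Download all records in chunks of `inc`, returning one combined list."""
    all_records = []
    start = 0
    while True:
        page = fetch_page(start, inc)   # e.g. startindex/maxresults params
        all_records.extend(page)
        if len(page) < inc:             # short page means we reached the end
            break
        start += inc
    return all_records

fake_db = list(range(250))
fetch_page = lambda start, inc: fake_db[start:start + inc]
print(len(get_all_records_by_increment(fetch_page, inc=100)))  # 250
```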

lagrange_disclaimer()

Code

June, week 4: Functions to summarize taxon diversity in regions, given a phylogeny and a list of taxa and the regions they are in.

(note: I have scripts doing all of these functions already, so the work is integrating them into a Biopython module, testing them, etc.)

Priority for this week:

Following up on suggestions to make the code more standard; the priority is figuring out how to revise the current Biopython phylogeny class to resemble the better version in lagrange, so that there is a generic, flexible phylogeny/Newick parser that can be used generally as well as by my BioGeography package specifically.

Added a bunch of tools for managing/parsing xmltree structures from ElementTree parsing of XML:

find_to_elements_w_ancs(xmltree, el_tag, anc_el_tag)

Burrow into XML to get elements with tag el_tag, returning only those el_tag elements underneath a particular ancestor element anc_el_tag

Regarding where to put reconstructed nodes, or tips where the only location information is the region: within regions, linking to already geo-located tips can be handled with spatial averaging, as currently happens with GeoPhyloBuilder. If there is only one node in a region, the centroid or something similar could be used (i.e. the "root" of the polygon skeleton would deal even with weird concave polygons).

If there are multiple ancestral nodes or region-only tips in a region, they need to be spread out inside the polygon, or lines will just be drawn on top of each other. This can be done by putting the most ancient node at the root of the polygon skeleton/medial axis, and then spreading out the daughter nodes along the skeleton/medial axis of the polygon.

get_polygon_skeleton

assign_node_locations_in_region

within a region's polygon, given a list of nodes, their relationships, and their ages, spread the nodes out along the middle 50% of the longest axis of the polygon skeleton, with the oldest node in the middle
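The spacing rule could be sketched as below, assuming the nodes arrive sorted oldest-first and reducing the "longest axis of the skeleton" to a pair of 2-D endpoints (the real skeleton extraction is a separate step):

```python
# Sketch of the node-spreading rule: evenly spaced slots across the middle
# 50% of an axis, with the oldest node assigned the most central slot.
def spread_nodes_on_axis(node_ages, p1, p2):
    """Return one (x, y) per node (oldest-first input); oldest lands mid-axis."""
    (x1, y1), (x2, y2) = p1, p2
    n = len(node_ages)
    if n == 1:
        fractions = [0.5]
    else:
        # evenly spaced positions between 25% and 75% along the axis
        fractions = [0.25 + 0.5 * i / (n - 1) for i in range(n)]
    # assign slots so the oldest node gets the slot closest to the midpoint
    order = sorted(range(n), key=lambda i: abs(fractions[i] - 0.5))
    placed = [None] * n
    for age_rank, slot in enumerate(order):
        placed[age_rank] = (x1 + (x2 - x1) * fractions[slot],
                            y1 + (y2 - y1) * fractions[slot])
    return placed

# three nodes, ages oldest -> youngest, on an axis from (0, 0) to (100, 0):
# the oldest (30 Ma) node is placed at the midpoint (50.0, 0.0)
print(spread_nodes_on_axis([30.0, 10.0, 5.0], (0, 0), (100, 0)))
```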

assign_node_locations_between_regions

connect the nodes that are linked to branches that cross between regions (for this initial project, just the great circle lines)
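A basic building block for those great-circle connections is the haversine distance between two lat/long points; the sketch below assumes a spherical Earth of radius 6371 km.

```python
# Haversine great-circle distance sketch (spherical-Earth assumption).
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/long points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# a quarter of the equator, (0, 0) to (0, 90), is ~10,008 km
print(round(great_circle_km(0, 0, 0, 90)))
```

Drawing the line itself would mean interpolating intermediate points along the same great circle before writing them to the shapefile/KML.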

write_history_to_shapefile

write the biogeographic history to a shapefile

write_history_to_KML

write the biogeographic history to a KML file for input into Google Earth
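The core of the KML output can be sketched with ElementTree; the real function writes whole biogeographic histories, but the essential operation is emitting Placemark elements with KML's longitude-first coordinate order (the helper name and input tuples here are hypothetical):

```python
# Sketch of minimal KML generation for point placemarks.
from xml.etree import ElementTree as ET

def nodes_to_kml(nodes):
    """Build a minimal KML document from (name, lat, lon) tuples."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name, lat, lon in nodes:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        point = ET.SubElement(pm, "Point")
        # KML coordinates are longitude,latitude (note the order)
        ET.SubElement(point, "coordinates").text = "%f,%f" % (lon, lat)
    return ET.tostring(kml, encoding="unicode")

kml_text = nodes_to_kml([("node1", -33.9, 18.4)])
print("Placemark" in kml_text)  # True
```

Branches crossing between regions would be written analogously as LineString elements.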

August, week 2: Beta testing

Make the series of functions available, along with suggested input files; have others run on various platforms, with various levels of expertise (e.g. Evolutionary Biogeography Discussion Group at U.C. Berkeley). Also get final feedback from mentors and advisors.