Raw rheology data supplementing the 2019 Macromolecules publication: "Assessing the Range of Validity of Current Tube Models Through Analysis of a Comprehensive Set of Star-Linear 1,4-Polybutadiene Polymer Blends"

The dataset comprises the following files:
- GOES_flare_list: a list of more than 10,000 flare events. The list has 6 columns: flare classification, active region number, date, start time, end time, and emission peak time.
- GOES_B_flare_list: time series data of SDO/HMI SHARP parameters for B-class solar flares.
- GOES_MX_flare_list: time series data of SDO/HMI SHARP parameters for M- and X-class solar flares.
- SHARP_B_flare_data_300.hdf5 and SHARP_MX_flare_data_300.hdf5: time series of more than 20 physical variables derived from the SDO/HMI SHARP data files. These data are saved at a 12-minute cadence and are used to train the LSTM model.
- B_HARPs_CNNencoded_part_xxx.hdf5 and M_X HARPs_CNNencoded_part_xxx.hdf5: neural-network-encoded features derived from vector magnetogram images from the Solar Dynamics Observatory (SDO) Helioseismic and Magnetic Imager (HMI). These data files typically contain one or two sequences of magnetograms covering an active region for a period of 24 h at a 1-hour cadence. We encode each magnetogram as frames of a fixed size of 8x16 with 512 channels.
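As a minimal sketch of how the tabular flare list could be read, assuming comma-separated rows in the six-column order described above (the actual delimiter and column order in the deposited files may differ):

```python
import csv
from io import StringIO

# Hypothetical sample row; the real file may use a different delimiter/order.
sample = "C1.2,12345,2012-03-04,01:10,01:25,01:18\n"

COLUMNS = ["classification", "active_region", "date",
           "start_time", "end_time", "peak_time"]

def parse_flare_list(text):
    """Parse flare-list rows into dictionaries keyed by column name."""
    return [dict(zip(COLUMNS, row)) for row in csv.reader(StringIO(text))]

events = parse_flare_list(sample)
print(events[0]["classification"])  # C1.2
```

In practice the same reader can be pointed at the file on disk with `csv.reader(open(path))`.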

This project aimed to discover and analyze the molecular mechanism of synthesis of two particular fucosylated oligosaccharide products by a mutant enzyme, Thermotoga maritima alpha-L-fucosidase D224G, whose wild type performs the opposite reaction (cleavage of fucosyl glycosidic bonds). Discovery of the mechanism was performed using an unbiased simulation method known as aimless shooting, whereas analysis of the mechanism in terms of the energy profile was performed using a separate method known as equilibrium path sampling. The data here concern the latter method. The contents of atesa_master.zip are the ATESA GitHub project, a Python program for automating transition path sampling with aimless shooting using Amber: https://github.com/team-mayes/atesa

This is the code that resulted from NSF grant ECCS-1508943, "Inferring the behavior of distributed energy resources from incomplete measurements." The project focused on developing control, estimation, and modeling methods for residential demand response and electric distribution networks.
The talks, papers, and poster are available in Deep Blue: http://hdl.handle.net/2027.42/149480

The project outputs summarize all the publications, talks, and code we produced under this NSF funding. In the project, we developed methodologies to manage uncertainty in future electric power systems and quantified how uncertainty affects power system sustainability. Talks, papers, and the poster are available in Deep Blue: http://hdl.handle.net/2027.42/149653

The NASA MAVEN (Mars Atmosphere and Volatile Evolution) spacecraft, which is currently in orbit around Mars, has been taking monthly measurements of the speed and direction of the winds in the upper atmosphere of Mars between about 140 to 240 km above the surface. The observed wind speeds and directions change with time and location, and sometimes fluctuate quickly. These measurements are compared to simulations from a computer model of the Mars atmosphere called M-GITM (Mars Global Ionosphere-Thermosphere Model), developed at U. of Michigan. This is the first comparison between direct measurements of the winds in the upper atmosphere of Mars and simulated winds and is important because it can help to inform us what physical processes are acting on the observed winds. Some wind measurements have similar wind speeds or directions to those predicted by the M-GITM model, but sometimes, there are large differences between the simulated and measured winds. The disagreements between wind observations and model simulations suggest that processes other than normal solar forcing may become relatively more important during these observations and alter the expected circulation pattern. Since the global circulation plays a role in the structure, variability, and evolution of the atmosphere, understanding the processes that drive the winds in the upper atmosphere of Mars provides key context for understanding how the atmosphere behaves as a whole system.
A basic version of the M-GITM code can be found on Github as follows:
https://github.com/dpawlows/MGITM
About 30 Neutral Gas and Ion Mass Spectrometer (NGIMS) wind campaigns (of 5 to 10 orbits each) have been conducted by the MAVEN team (Benna et al., 2019). Five of these campaigns are selected for detailed study (Roeten et al., 2019). The Mars conditions for these five campaigns have been used to launch corresponding M-GITM code simulations, yielding 3-D neutral wind fields for comparison to these NGIMS wind observations. The M-GITM datacubes used to extract the zonal and meridional neutral winds, along the trajectory of each orbit path between 140 and 240 km, are provided in this Deep Blue Data archive. README files are provided for each datacube, detailing the contents of each file. A general README file is also provided that summarizes the inputs and outputs of the M-GITM code simulations for this study.
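Extracting winds along an orbit trajectory amounts to sampling a gridded 3-D field at a sequence of (longitude, latitude, altitude) points. The following sketch illustrates the idea with nearest-grid-point sampling on a placeholder datacube; the grid spacings, variable names, and zero-valued wind field here are assumptions for illustration, not the actual M-GITM file layout (consult the per-datacube README files for that).

```python
import numpy as np

# Hypothetical M-GITM-style datacube: zonal wind on a (lon, lat, alt) grid.
lons = np.arange(0.0, 360.0, 5.0)        # degrees east
lats = np.arange(-90.0, 92.5, 2.5)       # degrees north
alts = np.arange(140.0, 245.0, 5.0)      # km
u = np.zeros((lons.size, lats.size, alts.size))  # placeholder wind field

def nearest_wind(lon, lat, alt):
    """Return the wind at the grid point nearest a trajectory sample."""
    i = np.abs(lons - lon % 360.0).argmin()
    j = np.abs(lats - lat).argmin()
    k = np.abs(alts - alt).argmin()
    return u[i, j, k]

# Sample the cube along a toy orbit segment between 140 and 240 km.
trajectory = [(120.0, 10.0, 150.0), (121.0, 12.0, 170.0)]
winds = [nearest_wind(*p) for p in trajectory]
```

A real comparison would likely use trilinear interpolation rather than nearest-neighbor lookup, but the data flow is the same.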

The data and the scripts are to show that seizure onset dynamics and evoked responses change over the progression of epileptogenesis defined in this intrahippocampal tetanus toxin rat model. All tests explored in this study can be repeated with the data and scripts included in this repository. Dataset citation: Crisp, D.N., Cheung, W., Gliske, S.V., Lai, A., Freestone, D.R., Grayden, D.B., Cook, M.J., Stacey, W.C. (2019). Epileptogenesis modulates spontaneous and responsive brain state dynamics [Data set]. University of Michigan Deep Blue Data Repository. https://doi.org/10.7302/r6vg-9658

The relationships between words in a sentence often tell us more about the underlying semantic content of a document than its actual words do individually. Recent publications in the natural language processing arena, more specifically those using word embeddings, try to incorporate semantic aspects into their word vector representations by considering the context of words and how they are distributed in a document collection. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II, that combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings into a single decoupled system. In short, our approach has three main contributions: (i) unsupervised techniques that fully integrate word embeddings and lexical chains; (ii) a more solid semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embedding models that can be extended to any natural language task. Knowledge-based systems that use natural language text can benefit from our approach to mitigate ambiguous semantic representations provided by traditional statistical approaches. The proposed techniques are tested against seven word embedding algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration between lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
Github: https://github.com/truas/LexicalChain_Builder

This information provides the data and commands to manually set up the computational simulations used in the PLOS ONE paper 'Patient-specific modeling of right coronary circulation vulnerability post-liver transplant in Alagille’s syndrome' using CRIMSON (CARDIOVASCULAR INTEGRATED MODELLING & SIMULATION), a prototype simulation environment developed under the support of the European Research Council (http://www.crimson.software/).
Note that a Windows version of the CRIMSON flowsolver is provided as part of the CRIMSON Windows installer, but you will need a very powerful Windows computer to run these simulations, as the models used in the present work are extremely computationally demanding. It is recommended that you use a Linux version of the CRIMSON flowsolver on a high-performance computer.
Option 1 (ready-to-use files to immediately start the simulation):
1. Please unzip the Ready-to-use files.
2. Copy the folders of each of the three conditions to the high performance computer.
3. In addition to the different codes used, each folder provides the boundary conditions applied in the simulations described in the manuscript (e.g. LPN parameters). To run the 3D simulations for each condition, simply launch it using the CRIMSON flowsolver. In addition, the solver.inp file can be modified to run a 0D "real-time simulation" (open solver.inp with a text editor and change line 4, "Simulate in Purely Zero Dimensions:", to "True").
Option 2 (using the MITK files):
1. Please download and install Crimson software ( http://www.crimson.software/).
2. Please unzip the MITK files and the Ready-to-use files.
3. From amongst the provided MITK files, load the MITK file of interest into CRIMSON. (Using the MITK files, additional changes can be made to the computational model in case the user wants to explore different settings/boundary conditions, e.g. changing the vascular wall properties or introducing a change in the geometry to create a virtual stenosis.)
4. Navigate to the tree in the "Data Manager" panel and select the "Pulmonaries", "CRIMSON SOLVER", and then "Solver study 3D" items, in the described order.
5. In the right-hand panel, select the "CRIMSON Solver setup" tab and scroll down the right-hand bar until you find the "Setup Solver" box; click to output the simulation files (faceInfo.dat, geombc.dat.1, multidomain.dat, netlist_surface.dat, numstart.dat, presolver folder, solver.inp, restart.0.1).
6. Copy and replace the geombc.dat.1 and restart.0.1 generated by CRIMSON for each individual condition into the respective unzipped folder in the Ready-to-use files (discard the remaining files that were output by CRIMSON). Note that if you have not changed anything about the model (e.g. vascular wall properties), then doing this will produce restart.0.1 and geombc.dat.1 files which are identical to the ready-to-use versions.
7. Finally, copy each Condition folder to the high-performance computer and simply launch the simulation using the CRIMSON flowsolver.
For technical queries, please contact crimson-users@googlegroups.com. --October 2018.
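The solver.inp edit described in Option 1 can also be scripted. The sketch below assumes, as the instructions above state, that the flag sits exactly on line 4 of the file; the stand-in file contents are hypothetical.

```python
import tempfile
from pathlib import Path

KEY = "Simulate in Purely Zero Dimensions:"

def enable_zero_d(solver_inp):
    """Flip line 4 of a solver.inp to 'True' to request the 0D simulation."""
    path = Path(solver_inp)
    lines = path.read_text().splitlines()
    if lines[3].startswith(KEY):           # line 4 is index 3
        lines[3] = KEY + " True"
    path.write_text("\n".join(lines) + "\n")

# Demo on a minimal stand-in file (real solver.inp files have many more keys).
with tempfile.TemporaryDirectory() as d:
    demo = Path(d) / "solver.inp"
    demo.write_text("a\nb\nc\n" + KEY + " False\n")
    enable_zero_d(demo)
    flag = demo.read_text().splitlines()[3]
print(flag)  # Simulate in Purely Zero Dimensions: True
```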

This archive contains data files from spark-ignited homogeneous combustion internal combustion engine experiments. Included are high-resolution two-dimensional two-component velocity fields acquired at two 5 x 6 mm regions, one near the head and one near the piston. Crank angle resolved heat flux measurements were made at a third location in the head. The engine was operated at 40 kPa, 500 and 1300 RPM, motored and fired. Included are in-cylinder pressure measurements, external pressure and temperature data, as well as details on the geometry of the optical engine to enable setups of simulation configurations.

This is the experimental data referenced in our manuscript entitled “SMALL-LABS: An algorithm for measuring single molecule intensity and position in the presence of obscuring backgrounds.” These live-cell single-molecule imaging movies were used as a test of the SMALL-LABS single-molecule image analysis algorithm.
The dataset comprises two movies; each one is provided both as a .tif stack and as an .avi file. The movie called “low_bg” has a standard low background, and the movie called “high_bg” includes a high fluorescent background produced by an external 488-nm laser.

These manikins represent body shape models for children weighing 9 to 23 kg in a seated posture relevant to child restraint design. The design of child restraints is guided in part by anthropometric data describing the distributions of body dimensions of children. However, three-dimensional body shape data have not been available for children younger than three years of age. These manikins will be useful for assessing child accommodation in restraints. The SBSM can also provide guidance for the development of anthropomorphic test devices and computational models of child occupants.
The sampled manikins were predicted for a range of torso length and body weight dimensions. The SBSM model was exercised for two torso lengths and nine body weights to obtain 18 body shapes. The 3D shape models can be downloaded in a standard mesh format (PLY). Each body shape is accompanied by predicted landmark locations and standard anthropometric variables.

******Michigan Indoor Corridor 2012 Dataset******
This dataset is made available for research purposes only.
Please contact Grace Tsai (gstsai@umich.edu) with any questions or comments.
This dataset was used to produce the results in our IROS 2012 paper. If you use the data, please cite the following reference in your publications related to this work:
Grace Tsai and Benjamin Kuipers
Dynamic Visual Understanding of the Local Environment for an Indoor Navigating Robot
International Conference on Intelligent Robots and Systems (IROS'12)
October 2012
The dataset contains 4 video sequences acquired with a camera mounted on a wheeled vehicle. The camera was set up so that there was zero tilt and roll angle with respect to the ground. The camera had a fixed height (0.47 m) above the ground throughout the video.
The intrinsic parameters of the cameras are:
Focal length fc = [ 1389.182714 1394.598277 ]
Principal point cc = [ 672.605430 387.235803 ]
The distortion of the camera has been corrected.
For each video sequence, an estimated camera pose in each frame of the video is provided in the file pose.txt. Each line in the file has the form:
<frame index> <x (pose)> <y (pose)> <theta (pose)>
Note that the camera poses provided here were estimated using an occupancy grid mapping algorithm with a laser range finder to obtain the robot pose.
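A pose.txt file in the documented format can be parsed as follows (the sample rows below are made-up values for illustration):

```python
def parse_pose_file(text):
    """Parse pose.txt lines of the form: <frame index> <x> <y> <theta>."""
    poses = {}
    for line in text.strip().splitlines():
        frame, x, y, theta = line.split()
        poses[int(frame)] = (float(x), float(y), float(theta))
    return poses

# Hypothetical sample rows in the documented format.
sample = "0 1.25 -0.50 0.01\n1 1.30 -0.48 0.02\n"
poses = parse_pose_file(sample)
print(poses[1])  # (1.3, -0.48, 0.02)
```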
The dataset provides ground truth labeling for all the pixels every 10 frames for each video. The labels of each frame are stored as a 2D matrix in a .mat file. The filename of each .mat file corresponds to the image frame. The labels can be interpreted as follows:
-2 -> ceiling plane
-1 -> ground plane
>0 -> walls
The labels of the walls are illustrated in a .pdf figure. Note that the figure is only an illustration, not an actual floor plan.
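The label convention above can be applied with simple boolean masks. The toy matrix below is invented for illustration; the real matrices come from the per-frame .mat files (loadable with, e.g., scipy.io.loadmat).

```python
import numpy as np

# Toy label matrix following the documented convention:
# -2 = ceiling plane, -1 = ground plane, >0 = wall IDs.
labels = np.array([[-2, -2, 1],
                   [-1, -1, 2],
                   [-1,  1, 2]])

ceiling = labels == -2
ground = labels == -1
walls = labels > 0
wall_ids = np.unique(labels[walls])
print(sorted(wall_ids.tolist()))  # [1, 2]
```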

The goal of the work is to elucidate the stability of a complex experimentally observed structure of proteins. We found that supercharged GFP molecules spontaneously assemble into a complex 16-mer structure that we term a protomer, and that under the right conditions an even larger assembly is observed. The protomer structure is very well defined, and we performed simulations to try and understand the mechanics underlying its behavior. In particular, we focused on understanding the role of electrostatics in this system and how varying salt concentrations would alter the stability of the structure, with the ultimate goal of predicting the effects of various mutations on the stability of the structure.
There are two separate projects included in this repository, but the two are closely linked. One, the candidate_structures folder, contains the atomistic outputs used to generate coarse-grained configurations. The actual coarse-grained simulations are in the rigid_protein folder, which pulls the atomistic coordinates from the other folder. All data is managed by signac and lives in the workspace directories, which contain various folders corresponding to different parameter combinations. The parameters associated with a given folder are stored in the signac_statepoint.json files within each subdirectory.
The atomistic data uses experimentally determined protein structures as a starting point; all of these are stored in the ConfigFiles folder. The primary output is the topology files generated from the PDBs by GROMACS; these topologies are then used to parametrize the Monte Carlo simulations. In some cases, atomistic simulations were actually run as well, and the outputs are stored alongside the topology files.
In the rigid_protein folder, the ConfigFiles folder contains MSMS, the software used to generate polyhedral representations of proteins from the PDBs in the candidate_structures folder. All of the actual polyhedral structures are also stored in the ConfigFiles folder. The actual simulation trajectories are stored as general simulation data (GSD) files within each subdirectory of the workspace, along with a single .pos file that contains the shape definition of the (nonconvex) polyhedron used to represent a protein. The logged quantities, such as energies and MC move sizes, are stored in .log files.
The logic for the simulations in the candidate_structures project is in the Python scripts project.py, operations.py, and scripts/init.py. The rigid_protein folder also includes the notebooks directory, which contains Jupyter notebooks used to perform analyses, as well as the Python scripts used to actually perform the simulations and manage the data space. In particular, the project.py, operations.py and scripts/init.py scripts contain most of the logic associated with the simulations.

Investigating minimum human reaction times is often confounded by the motivation, training, and state of arousal of the subjects. We used the reaction times of athletes competing in the shorter sprint events in the Athletics competitions in recent Olympics (2004-2016) to determine minimum human reaction times because there's little question as to their motivation, training, or state of arousal.
The reaction times of sprinters, however, are only available on the IAAF web page for each individual heat, in each event, at each Olympics. Therefore we compiled all these data into two separate Excel sheets which can be used for further analyses.

In this work, we study the problem of allocating limited security countermeasures to protect network data from cyber-attacks, for scenarios modeled by Bayesian attack graphs.
We consider multi-stage interactions between a network administrator and cybercriminals, formulated as a security game.
We propose parameterized heuristic strategies for the attacker and defender and provide detailed analysis of their time complexity.
Our heuristics exploit the topological structure of attack graphs and employ sampling methods to overcome the computational complexity in predicting opponent actions.
Due to the complexity of the game, we employ a simulation-based approach and perform empirical game analysis over an enumerated set of heuristic strategies.
Finally, we conduct experiments in various game settings to evaluate the performance of our heuristics in defending networks, in a manner that is robust to uncertainty about the security environment.

This data is part of a large program to translate detection and interpretation of HFOs into clinical use. A zip file is included which contains HFO detections, metadata, and MATLAB scripts. The MATLAB scripts analyze this input data and produce figures as in the referenced paper (note: the blind source separation method is stochastic, so the figures may not be exactly the same). A file "README.txt" provides more detail about each individual file within the zip file.

The data contained in the file comprise those collected during the characterization of the sensor as described in the article "Investigation of a low-cost magneto-inductive magnetometer for space science applications" (cited below). This includes:
- Resolution
- Stability
- Linearity
- Frequency response
Curation note: Addendum to README added June 1, 2018, regarding several files not used in preparing the manuscript with which the dataset is associated.

This data is in support of the publication in review "Using sensor data to dynamically map large-scale models to site-scale forecasts: A case study using the National Water Model". It is all the raw data extracted from the NWM flow forecasts for Iowa and the IFIS stage readings.
For the NWM data, each date has its own tab-delimited file, with columns being the time (hrs) and rows being the NHD site.
For the IFIS gages, each tab delimited file is for a single site for the period of record.
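A minimal sketch of reading the NWM-style tab-delimited layout described above; the header row, site naming, and values here are assumptions for illustration, not taken from the actual files:

```python
import csv
from io import StringIO

# Hypothetical NWM-style table: first column the NHD site, remaining
# columns the forecast lead time in hours (layout assumed from the text).
sample = "site\t1\t2\t3\nNHD001\t10.5\t11.0\t11.2\n"

def load_forecast(text):
    """Return {site: {lead_time_hours: flow_value}} from tab-delimited text."""
    rows = list(csv.reader(StringIO(text), delimiter="\t"))
    hours = [int(h) for h in rows[0][1:]]
    return {r[0]: dict(zip(hours, map(float, r[1:]))) for r in rows[1:]}

table = load_forecast(sample)
print(table["NHD001"][2])  # 11.0
```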

Each pdf is an electronic version of the paper output for each experiment.
Each text file is the electronic version of the data on the computer cards for each experiment. These text files are directly readable by Excel. Once in Excel, the data can be manipulated as desired.
Additional information is in the theses.

This data set contains the relevant time series for constructing and testing electricity load models within the related paper. The files within are a '.mat' file that contains the data and a 'readme.txt' file detailing the contents of the data.

This archive contains data files from spark-ignited homogeneous combustion internal combustion engine experiments. Included are two-dimensional two-component velocity fields from various measurement planes with maximized field of view, in-cylinder pressure measurements, external pressure and temperature data, as well as details on the geometry of the optical engine to enable setups of simulation configurations. Fired operation was with stoichiometric propane air, 40 kPa MAP, at 1300 RPM.

This archive contains data files from spark-ignited homogeneous combustion internal combustion engine experiments. Included are two-dimensional two-component velocity fields acquired in a small, high-resolution field of view near the spark plug, and images of hydroxyl radical chemiluminescence recording the early flame-kernel growth. Included are in-cylinder pressure measurements, external pressure and temperature data, as well as details on the geometry of the optical engine to enable setups of simulation configurations. Included are tables of one-per-cycle parameters for each test with methane or propane at stoichiometric, dilute limit, lean limit, and rich limit operation, conducted at 40 kPa and 1300 RPM.

This archive contains data files from motored internal combustion engine experiments. Included are two-dimensional two-component velocity fields from four measurement planes with maximized field of view, in-cylinder pressure measurements, external pressure and temperature data, as well as details on the geometry of the optical engine to enable setups of simulation configurations. Motored operating conditions include 40 kPa and 90 kPa MAP, 800 and 1300 RPM.

Details of the microphone used for data collection, acoustic environment in which data was collected, and naming convention used are provided here.
1 - Microphones Used:
The microphones used to collect this dataset belong to 7 different trademarks. Table 1 lists the number of microphones of each trademark and model used.
Table 1: Trademarks and models of Mics
Mic Trademark Mic Model # of Mics
Shure SM-58 3
Electro-Voice RE-20 2
Sennheiser MD-421 3
AKG C 451 2
AKG C 3000 B 2
Neumann KM184 2
Coles 4038 2
The t.bone MB88U 6
Total 22
2- Environment Description:
A brief description of the 6 environments in which the dataset was collected is presented here:
(i) Soundproof room: a small room (nearly 1.5m × 1.5m × 2m), which is closed and completely isolated. With the exception of a small window in the front side of the room, which is made of glass, all the walls of the room are made of wood and covered by a layer of sponge on the inner side, and the floor is covered by carpet.
(ii) Class room: standard class room (6m × 5m × 3m).
(iii) Lab: small lab (4m × 4m × 3m). All the walls are made of glass and the floor is covered by carpet. The lab contains 9 computers.
(iv) Stairs: located on the second floor. The recording area is 3m × 5m.
(v) Parking: the college parking lot.
(vi) Garden: an open space outside the buildings.
3- Naming Convention:
This set of rules was followed as a naming convention to give each file in the dataset a unique name:
(i) The file name is 20 characters long and consists of six sections separated by underscores.
(ii) The first section, of 3 characters, indicates the microphone trademark.
(iii) The second section, of 4 characters, indicates the microphone model as in Table 2.
(iv) The third section, of 2 characters, indicates a specific microphone within a set of microphones of the same trademark and model, since we have more than one microphone of the same trademark and model.
(v) The fourth section of 2 characters indicates the environment, where
Soundproof room --> 01
Class room --> 02
Lab --> 03
Stairs --> 04
Parking --> 05
Garden --> 06
(vi) The fifth section of 2 characters indicates the language, where
Arabic --> 01
English --> 02
Chinese --> 03
Indonesian --> 04
(vii) The sixth section of 2 characters indicates the speaker.
Table 2: Microphones Naming Criteria
Original Mic Trademark and Model --> Naming Convention
Shure SM-58 --> SHU_0058
Electro-Voice RE-20 --> ELE_0020
Sennheiser MD-421 --> SEN_0421
AKG C 451 --> AKG_0451
AKG C 3000 B --> AKG_3000
Neumann KM184 --> NEU_0184
Coles 4038 --> COL_4038
The t.bone MB88U --> TBO_0088
For example, SEN_0421_02_01_02_03 is an English file recorded by speaker number 3 in the soundproof room using microphone number 2 of the Sennheiser MD-421.
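The naming convention above can be decoded mechanically. A small sketch, using the environment and language codes from the lists above:

```python
ENVIRONMENTS = {"01": "Soundproof room", "02": "Class room", "03": "Lab",
                "04": "Stairs", "05": "Parking", "06": "Garden"}
LANGUAGES = {"01": "Arabic", "02": "English", "03": "Chinese",
             "04": "Indonesian"}

def parse_name(filename):
    """Split a dataset file name into its six underscore-separated sections."""
    brand, model, mic, env, lang, speaker = filename.split("_")
    return {"mic": f"{brand}_{model} #{int(mic)}",
            "environment": ENVIRONMENTS[env],
            "language": LANGUAGES[lang],
            "speaker": int(speaker)}

info = parse_name("SEN_0421_02_01_02_03")
print(info["language"], info["speaker"])  # English 3
```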

This dataset was generated for our work "Shape and symmetry determine two-dimensional melting transitions of hard regular polygons". The dataset includes simulation results for 13 different polygons (equilateral triangles through regular tetradecagons and the 4-fold pentille) at a variety of packing fractions near the isotropic fluid to solid phase transition. Each trajectory contains the final 4 frames of each simulation run we conducted at system sizes of over one million particles.
For each shape, there is a JSON file that describes the vertices of the polygon and a number of simulation trajectory files in GSD ( https://bitbucket.org/glotzer/gsd) format. The trajectory files contain the positions and orientations of all the polygons at each frame, along with the simulation box size. The trajectory file names identify the packing fraction of that simulation run.

This study evaluated the performance of a video-based intervention for improving the belt fit obtained by drivers. Previous laboratory studies have demonstrated that some drivers position their seat belts suboptimally. Specifically, the lap portion of the belt may be higher and farther forward relative to the pelvis than best practice, and the shoulder portion of the belt may be outboard or inboard of mid-shoulder.
A video was developed to present the most important aspects of belt fit best practices, with emphasis on the lap belt. The video demonstrated how a seat belt should be routed with respect to an individual’s anatomy to ensure a proper fit. The three key belt fit concepts conveyed in the video were:
1) Lap belt low on hips, touching the thighs.
2) Shoulder belt crossing middle of collarbone.
3) Belt snug, as close to bones as possible.
Additional context about the ability to achieve good belt fit, such as opening a heavy coat or adjusting the height adjusters on the B-pillar behind the windows, was also presented.

We provide the parameters used in the Umbrella Sampling simulations reported in our study "Efficient Estimation of Binding Free Energies between Peptides and an MHC Class II Molecule Using Coarse-Grained Molecular Dynamics Simulations with a Weighted Histogram Analysis Method", namely the set positions and spring constants for each window in the simulations. Two tables are provided. Table 1 lists the names of the peptides and their corresponding sequences. Table 2 lists the parameters. The abstract of our work is the following:
We estimate the binding free energy between peptides and an MHC class II molecule using molecular dynamics (MD) simulations with the Weighted Histogram Analysis Method (WHAM). We show that, owing to its more thorough sampling in the available computational time, the binding free energy obtained by pulling the whole peptide using a coarse-grained (CG) force field (MARTINI) is less prone to significant error induced by biased sampling than using an atomistic force field (AMBER). We further demonstrate that using CG MD to pull 3-4 residue peptide segments while leaving the remaining peptide segments in the binding groove and adding up the binding free energies of all peptide segments gives robust binding free energy estimations, which are in good agreement with the experimentally measured binding affinities for the peptide sequences studied. Our approach thus provides a promising and computationally efficient way to rapidly and reliably estimate the binding free energy between an arbitrary peptide and an MHC class II molecule.
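For context on how the deposited parameters are used: in standard umbrella sampling, the set position and spring constant of each window define a harmonic bias on the reaction coordinate. Assuming the conventional form (not spelled out in the tables themselves), for window i with set position \(\xi_i\) and spring constant \(k_i\):

```latex
U_i(\xi) = \tfrac{1}{2}\, k_i \left( \xi - \xi_i \right)^2
```

WHAM then combines the biased histograms from all windows to recover the unbiased free energy profile along \(\xi\).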

Many data sets come as point patterns of the form (longitude, latitude, time, magnitude). Examples of data sets in this format include tornado events, origins/destinations of internet flows, earthquakes, terrorist attacks, etc. It is difficult to visualize such data with simple plotting. This research project studies and implements non-parametric kernel smoothing in Python as a way of visualizing the intensity of point patterns in space and time. A two-dimensional grid M of size mx × my is used to store the kernel smoothing calculation for each grid point. The heat-map in Python then uses the grid to plot the resulting images on a map, where the resolution is determined by mx and my. The resulting images also depend on spatial and temporal smoothing parameters, which control the resolution (smoothness) of the figure. The Python code is applied to visualize over 56,000 tornado landings in the continental U.S. over the period 1950-2014. Tornado magnitudes are based on the Fujita scale.
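A minimal sketch of the grid-based space-time kernel smoothing described above, using separable Gaussian kernels; the bandwidths, grid, and toy events are illustrative assumptions, and the project's actual implementation may differ in kernel choice and normalization:

```python
import numpy as np

def st_intensity(events, grid_x, grid_y, t0, hs, ht):
    """Separable Gaussian kernel estimate of event intensity at time t0.

    events: array of (x, y, t) rows; hs, ht: spatial/temporal bandwidths.
    Returns an (mx, my) grid of smoothed intensity values.
    """
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    out = np.zeros_like(X)
    for x, y, t in events:
        w_t = np.exp(-0.5 * ((t0 - t) / ht) ** 2)     # temporal weight
        out += w_t * np.exp(-0.5 * ((X - x) ** 2 + (Y - y) ** 2) / hs ** 2)
    return out

# Two toy events; a real run would use the ~56,000 tornado records.
events = np.array([[0.0, 0.0, 2000.0], [1.0, 1.0, 2010.0]])
grid = np.linspace(-2, 2, 5)
heat = st_intensity(events, grid, grid, t0=2005.0, hs=1.0, ht=10.0)
print(heat.shape)  # (5, 5)
```

The resulting grid `heat` is what a heat-map routine would render; mx and my here are both 5.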

The files include an Excel file with the x-, y-, and z-coordinates that make up the nodal coordinates for a surface model of small (5th percentile) female pelvis geometry, the finite element model (.k file) that represents the nodal coordinates, and two surface files that represent the geometry (.obj and .ply).

In a sensitive cochlea, the basilar membrane response to transient excitation of any kind (normal acoustic or artificial intracochlear excitation) consists of not only a primary impulse but also a coda of delayed secondary responses with varying amplitudes but similar spectral content around the characteristic frequency of the measurement location. The coda, sometimes referred to as echoes or ringing, has been described as a form of local, short-term memory which may influence the ability of the auditory system to detect gaps in an acoustic stimulus such as speech. Depending on the individual cochlea, the temporal gap between the primary impulse and the following coda ranges from once to thrice the group delay of the primary impulse (the group delay of the primary impulse is on the order of a few hundred microseconds). The coda is physiologically vulnerable, disappearing when the cochlea is compromised even slightly. The multicomponent sensitive response is not yet completely understood. We use a physiologically based mathematical model to investigate (i) the generation of the primary impulse response and the dependence of the group delay on the various stimulation methods, and (ii) the effect of spatial perturbations in the properties of mechanically sensitive ion channels on the generation and separation of delayed secondary responses. The model suggests that the presence of the secondary responses depends on the wavenumber content of a perturbation and the activity level of the cochlea. In addition, the model shows that the varying temporal gaps between adjacent coda seen in experiments depend on the individual profiles of perturbations. Implications for non-invasive cochlear diagnosis are also discussed.

Supporting Information for the research article "Life cycle comparison of environmental emissions from three disposal options for unused pharmaceuticals". This spreadsheet provides the calculations and values used for this study; please refer to the manuscript and supporting information (as text) available at http://dx.doi.org/10.1021/es203987b for details about how to use this spreadsheet. We use life cycle assessment methodology to compare three disposal options for unused pharmaceuticals: (i) incineration after take-back to a pharmacy, (ii) wastewater treatment after toilet disposal, and (iii) landfilling or incineration after trash disposal. For each option, emissions of active pharmaceutical ingredients to the environment (API emissions) are estimated along with nine other types of emissions to air and water (non-API emissions). Under a scenario with 50% take-back to a pharmacy and 50% trash disposal, current API emissions are expected to be reduced by 93%. This is within 6% of a 100% trash disposal scenario, which achieves an 88% reduction. The 50% take-back scenario achieves a modest reduction in API emissions over a 100% trash scenario while increasing most non-API emissions by over 300%. If the 50% of unused pharmaceuticals not taken back are toileted instead of trashed, all emissions increase relative to 100% trash disposal. Evidence suggests that 50% participation in take-back programs could be an upper bound. As a result, we recommend trash disposal for unused pharmaceuticals. A 100% trash disposal program would have similar API emissions to a take-back program with 50% participation, while also having significantly lower non-API emissions, lower financial costs, higher convenience, and higher compliance rates.