Michael Zingale is a computational astrophysicist who enjoys blowing up stars and working on new algorithms to enable these simulations. He is an Associate Professor of Physics and Astronomy at Stony Brook University on Long Island, NY.

Many of us have written notes for our classes or have searched online for notes written by our colleagues that we can use in our classes. To help coordinate the effort of sharing and building texts for astrophysics topics, I started a GitHub organization called the Open Astrophysics Bookshelf.

The basic idea is that we, as a community, can crowdsource texts on astrophysics topics and make these freely available (via the Creative Commons license) for anyone to use. And since we tend to use LaTeX for our scientific writing, these texts can be easily managed by git version control.

Essentially, the Open Astro Bookshelf is just a GitHub (http://github.com) organization where different texts can be hosted as git repos. There are two cases one can imagine for adding to the bookshelf.

Many of us have our own set of notes (or a draft text) on a topic in our area of expertise, with varying degrees of polish. In this case, you are starting off with something that is already substantially developed. By hosting this text on the Open Astro Bookshelf, you gain input from the community—people can contribute changes via GitHub pull requests. These can be anything from typo fixes and requests for clarification to new figures or entire chapters. Since the work is hosted on GitHub, all contributions will be noted in the git log, but a project can also include an author list noting the primary author, major contributors, other contributions, etc.

There are some topics that we all know are in need of a good up-to-date text, for instance, to train students in techniques of our trade. A Scientific Computing Cookbook is an example. (Another idea suggested by a colleague is an AST 101 text). In this case, we can start a text in a repo that is a stub, simply an outline, and rely on crowdsourcing to write the text from scratch.

Since everything is openly licensed, anyone can create mash-ups of the content to suit their needs.

In contrast to traditional books that go through a publisher, these texts are living. They can continually be updated. But don’t worry—we can still cite an “edition” via the git hash (it is quite straightforward to have a makefile put the git hash into the LaTeX source at compile time).
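As a concrete sketch of the git-hash trick, a small helper script (invoked from the makefile before each LaTeX run) could write the hash into a macro. The filename `git_info.tex` and the macro name `\githash` here are illustrative choices, not a prescribed bookshelf convention:

```python
# Sketch: record the current git hash as a LaTeX macro so the compiled
# document can cite its own "edition". The output filename and macro
# name are illustrative, not part of any bookshelf convention.
import subprocess

def get_git_hash():
    """Return the abbreviated hash of the current HEAD."""
    return subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def format_macro(githash, macro="githash"):
    """Format a \\newcommand line defining the macro."""
    return "\\newcommand{\\%s}{%s}\n" % (macro, githash)

def write_git_info(path="git_info.tex"):
    """Write the macro file; the main .tex can then \\input{git_info}."""
    with open(path, "w") as f:
        f.write(format_macro(get_git_hash()))
```

The main document can then `\input{git_info}` and typeset `\githash` in a footer or on the title page.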

There are already some examples of each of the above cases up online.

I’ve moved my set of notes on Computational Astrophysical Hydrodynamics there, Mark Krumholz has put up his notes on Star Formation, and Ed Brown has contributed his Stellar Physics notes. I’ve reached out to others in the field who have expressed interest in putting up their notes as well, so I hope for this to grow quickly (especially now that it is summer). I’d like to encourage anyone else who is interested to contact me, and we can set up a repo for your notes as well.

There is also a mostly empty template for a Scientific Computing Cookbook to share tips, techniques, and good practices across our field—this latter one is a case where crowdsourcing can hopefully put something nice together. This idea arose from discussions I’ve had with many of our computational astrophysics colleagues, and I’m told that a similar idea has surfaced among the astrobetter community.

As this is all new, there is still a lot to learn about how to best coordinate the different efforts. I think that each text needs a lead (or leads)—they set the tone of the text, decide its initial organization, and have the admin access to merge pull requests. The number of people in this role can evolve with time as people become bigger contributors. Anyone else in the community can interact via the normal GitHub mechanisms—pull requests and issues.

Please don’t hesitate to contribute. There is a lot of potential for us to build up a bookshelf that covers a wide range of topics in our field, that is freely available to everyone (students certainly don’t like paying for books), and is continually brought up to date.

We are very pleased to introduce Bumblebee! This interface is in the beta stage of development and is ready for more users. It features a clean look and powerful search and filtering operations, but it is indeed a beta and still has some quirks and bugs. However, there are some features which we think you’ll find so useful, we want to tell you about them as soon as possible. Over the next couple of months, we plan to highlight some of these features in a series of blog posts, so stay tuned!

The search engine that powers Bumblebee enables faster and more complex searches than Classic. Classic is well suited for things like finding articles by author or title, but it lacks the ability to examine citations, search full-text, or progressively filter results. We are incorporating all of the features found in Labs and Classic into Bumblebee and are anticipating that Labs will be deprecated and replaced by Bumblebee by the end of 2015.

Never fear, ADS Classic will remain available and supported for the foreseeable future. However, while bug fixes are being applied, Classic is not undergoing any new development. In the very far distant future, long after you and your postdocs have retired, when Classic is deprecated, all abstract page URLs will remain fully functional and so there will be no broken links and no need to change your links to ADS.

The myADS notification service will remain unchanged for the moment. In the next year or so we expect to provide an even better custom notification service using the same technology which underpins Bumblebee.

If you want to be on the bleeding edge and help us put Bumblebee through its paces, please check out the Help Pages for instructions on how to search and give it a go. We welcome your feedback on Bumblebee via issues on the GitHub repo, tweets to @adsabs, or email to adshelp@cfa.harvard.edu.

All of Bumblebee’s code is publicly accessible on GitHub, and issues and pull requests are always welcome. The API powering Bumblebee is also available to users, and we are planning to publish the documentation for it soon. If you would like to develop widgets or applications to interface with ADS services, you can generate an API Key by creating an account and navigating to the API Token section in Settings. We are looking forward to future hack days and seeing what the community builds with these services!
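As a rough illustration of what API access could look like, the sketch below builds and issues a search request. The endpoint path, the Bearer-token header, and the field names are my assumptions about the public ADS API; verify them against the official API documentation before relying on them:

```python
# Sketch of an ADS API search request. The endpoint, the Authorization
# header format, and the field names are assumptions to check against
# the official ADS API documentation; the token comes from Settings.
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.adsabs.harvard.edu/v1/search/query"

def build_search_url(query, fields="bibcode,title", rows=5):
    """Assemble the query URL for a search request."""
    params = urllib.parse.urlencode({"q": query, "fl": fields, "rows": rows})
    return "%s?%s" % (API_BASE, params)

def ads_search(query, token, rows=5):
    """Run a search and return the matching documents."""
    req = urllib.request.Request(
        build_search_url(query, rows=rows),
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]["docs"]
```

A call like `ads_search('author:"Kurtz"', token)` would then return a list of bibcode/title records, assuming the endpoint behaves as described above.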

Questions or comments about these ADS services? Let us know in the comments!

Matteo Cantiello is a theoretical astrophysicist at the Kavli Institute for Theoretical Physics and Authorea’s Chief Scientist.

I am aware that a while back many astronomers tried out writing their research articles on Authorea, a web-based collaborative writing platform. Some were disappointed by the lack of certain advanced LaTeX features (e.g., deluxetables, now supported). You were disappointed, you told us why, and we just implemented some big changes to make you happy. Authorea now has a "Power LaTeX user" mode which supports a much larger subset of LaTeX—essentially everything. And unlike some services such as ScribTeX and WriteLaTeX (previously reviewed on AstroBetter), all your LaTeX renders both to PDF and to HTML (i.e., the web).

So, why should you give Authorea a spin and start using it daily for your research? It’s a good question. Here are some highlights that might guide that decision.

1. With Authorea, your paper is accessible from any computer, anywhere in the world.

2. You can write your paper from your browser; no installation of TeX required.

3. You can write in LaTeX or in Markdown. Advanced LaTeX and tables are now supported.

4. Collaboration is made easy. No need for endless email threads with multiple draft revisions.

5. Every Authorea paper is a version-controlled Git repo. Again, no installation required.

6. Want to work offline via GitHub? You can. Authorea becomes the rendered version, so your coauthors can still work with you without having to learn Git.

7. Adding citations has never been easier. One click and done. Believe me, you will never want to go back.

8. You can include data and code in your paper, like IPython Notebooks. This allows for transparency and reproducibility of results.

9. Export to any journal format with just one click. We support all the usual suspects, from ApJ to AJ, from MNRAS to A&A. Switch back and forth between these styles in one click.

Ok, enough with the list of fancy features. Here’s my personal experience as an astrophysicist using Authorea. I switched to writing papers with Authorea about a year ago and noticed a number of immediate improvements: first of all, my papers get written faster. Then I noticed that I no longer need to exchange emails with collaborators about the paper. All the action happens (and is logged) on Authorea, including discussions about revisions and suggestions for improvements.

That said, I didn’t expect the most important benefit. By getting rid of the overhead I had previously considered a messy but unavoidable part of the scientific writing process, something remarkable happened: I actually started enjoying writing more!

And I don’t mean just publishing; I had experienced that joy before. The difference is that I now cherish the time I spend putting my science into words. It might sound crazy, but Authorea did something amazing: it made me rediscover the pleasure of writing science together with my collaborators.

So if you ask me "Why should I write my next paper with Authorea?" my honest answer is "Because you will love it!" My suggestion is that you take Authorea for a spin and make up your own mind.

The astronomical community has a lot of experience with early adoption and innovation, so your feedback can help to substantially improve this tool. Do you think Authorea is on the right track? Is there a particular feature missing that would substantially improve your workflow? Share your reviews and suggestions in the comments!

Python/3D Visualization: A new book available for students and scientists (8 Jul 2015)

Brian Kent is an associate scientist with the National Radio Astronomy Observatory working on pipeline software for VLA and ALMA, and has interests in galaxy surveys, dynamics, and 3D graphics and visualization.

A new book published by IOP, "3D Scientific Visualization with Blender," is now available. This work is written for a broad scientific audience of students and researchers interested in rendering their data in 3D. The book introduces the reader to the Blender interface and shows how to build and manipulate 3D objects, animate a scene, render a result, and read data into models through the Python API. A chapter with simple examples concludes the book as a launching point for the reader to begin developing their own visualization and analysis scenes. A number of the examples are relevant to astronomical data analysis. The book, available through Amazon or IOP Publishing, also has a companion website. The sample video tutorial below exhibits some of the ideas featured in the book.

In the age of large astronomical datasets, it is exciting to consider new software paradigms that might aid us in 3D visualization. Building upon these tutorials, what kinds of visualization scenarios can we envision?

The Starchive: An open access, open source database of the nearest stars and beyond (27 Apr 2015)

Angelle Tanner is an assistant professor at Mississippi State University with interests in multiple methods of exoplanet detection and characterization.

Situation 1: You make a preliminary list of target stars using SIMBAD and then follow up with a tedious search through Vizier and individual papers for all the additional observational data you need to motivate a proposal, prepare for an observing run, or complete a table for a paper.

Situation 2: You are pondering writing a telescope proposal to collect AO images of a set of nearby stars with known infrared excesses, but you are not sure which of your target stars may already have been observed, since people are slow to publish their non-detection statistics.

While current archives like SIMBAD and Vizier are valuable resources for stellar observables and physical parameters, the data are incomplete and difficult to assemble into a single table customized for a particular research topic or observing goal. With the Starchive we wish to harness the power of community access to populate a database initially containing all known stellar, sub-stellar, and planetary objects within 25 parsecs. Because the original science goal of the Starchive was to complement and support exoplanet research, for the first year we will focus on objects within 25 parsecs as well as the nearest young stars (<50 pc). The database will be designed to be expandable in both stellar content (a larger volume) and parameter content as the stellar and exoplanet fields evolve.

The database will contain observable metadata like photometry, vsini, radial velocities, distances, proper motions, metallicities, etc., as well as derived physical parameters such as mass, radius, luminosity, age, and effective temperature. If multiple values of a parameter are available, all will be shown, along with a "preferred" value chosen by the science users group. The database will also contain high-contrast images, infrared and optical spectra, and light curves. The database will allow users to download data in multiple formats and will have its own suite of plotting tools.
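To make the multiple-values idea concrete, here is a hypothetical sketch of how several literature values for one parameter could sit alongside a "preferred" choice. The class and field names are invented for illustration and are not the actual Starchive schema:

```python
# Hypothetical data model for one stellar parameter with multiple
# literature measurements and a "preferred" flag. Names are invented
# for illustration; this is not the actual Starchive schema.
from dataclasses import dataclass, field

@dataclass
class Measurement:
    value: float
    uncertainty: float
    reference: str        # e.g. a bibcode for the source paper
    preferred: bool = False

@dataclass
class StarParameter:
    name: str             # e.g. "parallax"
    unit: str             # e.g. "mas"
    measurements: list = field(default_factory=list)

    def preferred_value(self):
        """Return the value flagged preferred, else the most recent entry."""
        for m in self.measurements:
            if m.preferred:
                return m.value
        return self.measurements[-1].value if self.measurements else None
```

The key design point is that every literature value is retained with its reference, while the fidelity/science team simply flips the `preferred` flag rather than discarding data.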

Similar to the website used for the Kepler Community Follow-up Observing Program, verified Starchive contributors will be able to upload data ranging from metadata to images and spectra. While anyone will be allowed to download data from the archive, only verified users will be allowed to upload. A fidelity team will monitor the accuracy of uploaded data, and those who contribute the most to the archive will be highlighted on the main page. In an effort not to reinvent the wheel, we plan to utilize existing programs such as those found on exoplanets.org and the Kepler CFOP. We also plan on interfacing the front end of the archive with GitHub so that the community can develop additional plotting tools that interface with the database. Therefore, by allowing the community to contribute to the database we are open access, and by encouraging people to download and contribute code we are open source.

Feel free to visit starchive.org in a few months to check on our progress. The project currently employs a couple of MSU undergrads, and we are searching for a postdoc to fill a three-year position at MSU. Finally, we are also looking for members to form science, user, and fidelity committees. Contact Angelle Tanner for further inquiries.

Emily Rice is an assistant professor at the College of Staten Island and a research associate at the American Museum of Natural History. She is also responsible for those parody songs that get stuck in your head.

Dearest Colleagues, I have printed my last paper conference poster and carried my last poster tube. Forsooth I have discovered the fabric poster, and I will never look back! Fabric posters are high print quality, cheaper than paper posters, and so much easier to transport. The only downside is that you need to order them at least a week in advance for the best product.

The poster arrived slightly creased from being folded in the shipping envelope, but the creases disappeared after a short time spread out on the hotel bed. The fabric is light but sturdy, smooth to the touch, and only slightly stretchy. The printed images are vibrant and crisp. Even the smallest fonts I used (24 point native, ~14–16 point in a pasted image) appeared clear and legible. The poster hung easily without drooping (proof), and I could even carry it in my bag during the rest of the conference (the best response to “Sorry I missed your poster” is definitely: “Don’t worry, I have it right here!”). If you’re afraid no one will recognize you at the airport without your trusty poster tube, you can wear your poster as a cape or a scarf! Just make sure to submit a photo to STARtorialist.

Printing via Spoonflower can be intimidating if you don’t have experience printing or buying fabric, but they have detailed instructions for creating the proper file. The most important thing is to make your file 150 dpi and the size you’d like it to print (higher dpi isn’t better for fabric printing). Fabric is typically sold by the yard (length), and the width is determined by the type of fabric. On Spoonflower the fabric widths vary from 42" to 58" (the performance knit is 56" wide). By making your poster 36" in one dimension, you can purchase just one yard of fabric. The AAS poster limit is 44" by 44", so I recommend making the poster 36" wide by 44" tall, then rotating your poster image for printing. If you select “Basic Repeat” under the printing options, you’ll have the top 12" of your poster repeated on the bottom – nice motivation for creating an aesthetically pleasing header. For display at AAS 225 I simply pinned the extra fabric behind the poster. Alternatively, you could make the poster 44" wide and trim the extra fabric. The preview feature makes it easy to check that your poster will print the way you want (see below). Best of all, one yard of performance knit fabric is just $21.60 ($24 less the 10% discount for designing your own fabric).

The downside with Spoonflower is the printing/shipping time: you’ll need to have your poster finished a week before the conference. Spoonflower’s standard printing turnaround time changes depending on their order volume; today the turnaround estimate is 7 to 8 days, which is an eternity in pre-conference preparation time. Guaranteed Delivery arrives two days after it is shipped but doesn’t rush the printing. Rush Delivery orders placed by noon EST will be shipped the next business day, so that’s the way to go. Before AAS 225 I placed my order on December 24; it shipped December 29 and arrived December 31. For one yard/poster the costs are $3 for standard shipping, $15 for Guaranteed Delivery, and $25 for Rush Delivery. Shipping costs are determined by weight, so combining orders can decrease the cost: I tested 2 yards ($6, $15, $25), 3 yards ($6, $28, $48), and 4–6 yards ($7, $28, $48). That can bring your cost down to $30 per poster, including Rush Delivery!

At AAS 225 I also saw a Spoonflower-printed combed cotton poster and several PosterSmith posters. The cotton fabric from Spoonflower costs slightly less, but it was thin and off-white with visible fibers. PosterSmith received positive reviews for fast printing and shipping (received in two days in rural Pennsylvania!), but the price is similar to Fedex Office ($118 for a 42” by 42” poster, shipping included), and the quality was decent but not stunning. The print looked good – vibrant colors and crisp lines – but the posters were stiff and some had creases that wouldn’t budge. Spoonflower has many more fabric options to try, but I’ll be sticking with the performance knit.

Have you printed a fabric poster from Spoonflower, PosterSmith, or another service? Share your reviews in the comments!

During AAS 225 in Seattle, there was an announcement about changes coming to the AAS Journals: the Astrophysical Journal (ApJ), Astrophysical Journal Letters (ApJL), and the Astronomical Journal (AJ). These changes include lots of awesome things such as “linking articles directly to data archives, providing for video abstracts, improving figure presentation, making figures interactive, introducing the ability to produce 3-D presentations.” The changes also include some more controversial things, like changing the journal titles (including changing ApJ Letters to “Letters of the AAS”) and the process used to figure out which papers get published where. Not a lot of details were given, and the community understandably has lots of questions, concerns, and opinions about these changes.

Motivated by the plethora of questions, concerns, and confusion that the AAS Agents were reporting, the AAS has followed up with some more information in an announcement posted yesterday: Changes Ahead for AAS Journals. In this announcement, we got more details about the process, the Transition Team currently being assembled, and a request for community input. In particular,

Some of the decisions yet to be made:

Names for the new journals. Should we continue the valuable AJ and ApJ brands, consolidate under one new title, or add titles?

Content. How can content best be channeled, such that the impact factor of the journals and visibility for authors increases?

The AAS has provided a Comment Box on the journals site where you can communicate directly to the AAS Leadership and the AAS Journals Transition Team.

There is already a lively discussion, as always, on the Astronomers Facebook Group. Some of the concerns raised there are about how the non-astronomers (mostly physicists) who serve on many of our tenure and promotion committees will evaluate our papers in journals that 1) no longer exist under the name we published in and 2) have an ambiguous “prestige” factor.

Also, the possibility of 3D interactive figures is mind-blowing and really pushes the envelope for scientific publication. I can’t wait to start messing around with 3D plotting and to stop struggling with projections and clever ways to indicate extra dimensions!

So what do you think? What are you excited about? What are you worried about? Please share your thoughts and questions in the comments below.

For those of you not in the know, at the past AAS meeting, a session was held on Licensing Astrophysics Codes based on suggestions that such a session would be interesting and useful to astronomers. This is a topic previously discussed in an AstroBetter guest post by Jake VanderPlas in March of 2014.

In case you missed the session but are interested in what was covered, the Astrophysics Source Code Library has a summary of each talk along with the presentation slides available here. Huge thanks to Alice Allen for the writeup!

Chris Beaumont is a software engineer at Counsyl, and previously a software engineer at Harvard and the Space Telescope Science Institute. Glue began as a side project during Chris’ PhD thesis, and is now being developed to visualize data from the James Webb Space Telescope.

We’ve recently released version 0.4 of Glue, a Python-based GUI for visually exploring related datasets.

Glue is a package which allows users to inter-compare several related datasets — images, catalogs, image cubes, etc. Glue provides a graphical interface to make basic visualizations of each dataset, with the important feature that all plots are selectable; users can draw geometric regions on any plot to define subsets used to filter data. Importantly, these subsets can propagate across several datasets — so a user can trace a geometrical structure in an astrophysical image, and use that to filter points in a spatially overlapping catalog. These kinds of linked-view interactions make it much easier to discover multidimensional structures within and between datasets.

Also central to Glue’s philosophy is its hybrid nature. Most data exploration tools are primarily graphical user interfaces (ds9, Topcat, Aladin, filtergraph, etc.) or programmatic interfaces (Python, IRAF, IDL, etc.). Glue sits somewhere between these two extremes—it provides a graphical way of exploring data without having to write code, but also provides several interesting hooks for integrating code with the GUI. Here are some examples of the ways Glue can be extended with user-written Python:

Users have access to the Python command line from within Glue, and can run arbitrary code using data loaded into Glue.

People familiar with Matplotlib (Python’s main data visualization library) can easily create custom interactive data viewers. Importantly, they can do so without writing code to deal with user interaction — they focus on visualizing data in a particular way, and Glue generates an appropriate user interface automatically.

Users can write scripts to load data and configure standard plots, which eliminates repetition when exploring datasets several times over the course of a research project.

For more information about Glue, you can watch one of the demo videos, check out the GitHub source (where we welcome bug reports and contributions), or join the Google Group to ask more general questions. We hope you’ll find Glue useful for your own data exploration needs.

We all love Google Docs. It’s a functional and convenient way to share and collaboratively edit documents across platforms, time zones, and even continents. We in the BDNYC group use it extensively.

But what if you want to write a scientific paper? Google Docs, as awesome as it is, is not much more than a word processor. We want the internal hyperlinks for sections, figures, tables, and citations, elegant mathematical formulae, well-formatted tables, more control over where and how our components are arranged – in a word, LaTeX. Yes, LaTeX has its own host of problems, but it’s very good at what it does.

There are a number of collaborative editing projects out there; Authorea springs to mind. But one of the simpler options is actually pretty good: WriteLaTeX.

WriteLaTeX is a fairly simple site: on the left-hand side, you have a live view of the raw LaTeX source; on the right, a frequently refreshed (there is a manual/auto switch) compiled view of your source, generated by a LaTeX installation (probably pdfTeX) running on their server. Much like Google Docs, you can have multiple separate documents owned by you and either editable or viewable by others. This is accomplished by having two URLs for your document: one that allows editing, and one that’s view-only. The different URLs are accessible from the “Share” tab.

AS AN EDITOR:
The collaborative editing feature works about as well as Google Docs’ does: I have had an entire classroom of high school students working on a document simultaneously, and it kept track of all of their cursor positions without problems. This was even as lines were added and deleted, and various LaTeX syntax elements were accidentally mangled and then fixed. You can watch other people typing while you type, and eventually the compiled view on the right will either catch up or warn you of an error. For instance, in the picture above, there’s a warning on line 29 (also accessible from the “warning” box in the top right corner).

That actually makes WriteLaTeX a pretty awesome improvement over a standalone LaTeX installation: You don’t have to comb through mountains of output to find the three errors you introduced at some point in the last two days of editing; you’ll know where the errors are as soon as you make them (unless there are other errors preventing the compiler from getting that far, but regular LaTeX is no help there either).

WriteLaTeX has three editing modes: emacs, vim, and default. They seem to differ by which keyboard shortcuts they use to do things like cut and paste. There are also undo and uncomment commands (which dutifully remove the % character from the beginning of the line) available in all modes by right-clicking on the text, and on the toolbar in default mode. Note that I can’t find a way to switch out of default mode once I switch in, except to reset ALL my projects.
The Rich Text mode is only a beta and only available as an option if you’re in default mode; it attempts to translate some LaTeX commands into word-processor style. It could be a nice way to ease someone into writing LaTeX. Like Google Docs, WriteLaTeX auto-saves your work. You can restore previous versions, too, but only if you explicitly saved them under the “Versions” tab.

AS A LATEX INSTALLATION:
If you click the “Project” tab on the top bar, you get a nice view of all the files in the project, and you can download and upload from there. In this example, I’ve added the aastex.cls, apj.bst, and emulateapj.cls files (plus my .bib file) to my project; I could then specify \documentclass[iop]{emulateapj} and write the document you see here, complete with \citet{} commands and a fully functional deluxetable later in the document. There are probably some things too complex to include that way, but WriteLaTeX does come with a pretty nice selection built in: natbib and amsmath worked out of the box. And, of course, you get the frequent server-side compilation and its instant error-catching benefits.
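For concreteness, a minimal project set up this way might look like the sketch below. The class option follows the example above, but the title, author, citation key, and .bib filename are placeholders; check the emulateapj documentation for the exact title/abstract conventions it expects:

```latex
% Assumes aastex.cls, apj.bst, and emulateapj.cls (plus references.bib)
% have been uploaded to the WriteLaTeX project, as described above.
\documentclass[iop]{emulateapj}
\usepackage{natbib}   % worked out of the box on WriteLaTeX
\begin{document}
\title{A Minimal WriteLaTeX Example}
\author{A. Author}
\begin{abstract}
Compiled server-side, with errors flagged as you type.
\end{abstract}
As shown by \citet{placeholder2014}, \dots
\bibliographystyle{apj}
\bibliography{references}  % references.bib, uploaded via the Project tab
\end{document}
```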

As for images, WriteLaTeX accepts .jpg (like their froggy example image), .png, and .pdf files. I’m guessing it’s a pdfTeX backend, because that would explain the lack of .ps or .eps upload support.