Will Skora's blogWriting on maps | open data | Clevelandhttp://localhost:4000
Recently, November 2018<p>A small portion of my life in recent months.</p>
<p>Recently:</p>
<p>What I’ve been thinking, asking myself about, or wondering:</p>
<p>How do you conduct an accessibility audit for a website?</p>
<p>How do I delegate and ask others for help without abdicating my responsibilities and without unfairly foisting something on them?
Does the level of assistance you ask for differ based on the context of the relationship (a personal friend, a co-worker, someone you’re in a volunteer organization with)?</p>
<p>Not taking for granted how many people’s lives your parents had a positive impact on. Dad
died about 3 weeks ago. I was genuinely touched by the number of people who came to the wake and told our family what a positive impact he had on them.</p>
<p>Reconciling that I’m not learning some programming concepts as quickly as I had hoped, and occasionally feeling that my imposter syndrome is valid. Prioritizing what to learn. <a href="https://gitlab.com/cpl/site2/issues"> Depending on the task at hand </a> at work, it varies; a smattering of CSS, HTML, PHP, javascript, and SQL (in that order).
(There’s a lot more code - like our wordpress theme - that I haven’t made public just yet.)
Heck, I’m still writing plain ES5; should I set up (and frankly, learn) the whole Babel ecosystem? (I’m looking at fetch to retrieve JSON, so I’m considering it.)</p>
<p>Watching:</p>
<p>The leaves falling</p>
<p>Kim’s Convenience</p>
<p>Listening:</p>
<p>The Flys - Got You (Where I Want You). <a href="https://www.youtube.com/watch?v=BM_OWaItNJM">(youtube video)</a> The night before my wedding, one of my best friends since grade school and I were catching up on our lives at a local bar. For context, Old Brooklyn isn’t really sexy or trendy. It’s not suburban Applebee’s territory either. I was genuinely surprised to hear this song come on.</p>
<p>Hearing that song in the background instantly brought me back to my adolescence. I’d enjoyed the song, but I couldn’t name the song or artist until then (thanks, SoundHound, for identifying it). Finally identifying a song’s artist and title after not knowing them for years is one of my favorite feelings. I’ve kept a playlist of <a href="https://open.spotify.com/user/skorasaurus/playlist/1GfyvC6gfbu3MA6RdxiVsb?si=acWs7FTWTD66bA0xtL4O5g">these songs</a> (spotify). Some of these songs are just ones where the title is not apparent in the lyrics, though I like some more than others.</p>
<p>Serial, season 3; based in Cleveland.</p>
<p>Reading:</p>
<p>Metafilter. I’ve been a daily reader, although it’s not the same as before. Maybe it’s just my life experiences; what people write doesn’t seem so novel anymore. Also the rampant distrust of most institutions.</p>
<p><a href="http://www.webaxe.org/accessibility-interpretation-problem/"> The Accessibility Interpretation Problem </a> by Glenda Sims and Wilco Fiers. The best piece that I’ve read on web accessibility: how the guidelines for web accessibility are quite subjective and open to interpretation, despite the initial impression that accessibility is straightforward and binary; there’s no “This site is accessible” badge or designation.</p>
<p>Un Lun Dun by China Miéville</p>
<p>Writing:</p>
<p><a href="https://cpl.org/web-standards/"> A web-standards guide for the library </a> Although I write most of the code, the content and editing are done by co-workers, and when I arrived, our practices for code and content were either non-existent or unwritten.
Writing this out, and determining how something like this should be written, has taken a fair amount of time and experimentation.</p>
Sun, 18 Nov 2018 18:25:06 -0500
Georeferencing the past<p>I’ve been learning about <a href="https://imageryspeaks.wordpress.com/2012/01/24/georeferencing-vs-georectification-vs-geocoding/">georeferencing</a> <a href="https://support.esri.com/en/other-resources/gis-dictionary/term/georeferencing">(what is georeferencing)</a> maps for an upcoming project at work <em>to display a print map (24 x 36 inch) of where the library provided services circa 1912.</em></p>
<p>My secondary goal for georeferencing these maps is <em>to provide a web map layer for users to browse historic Cleveland at high-resolution detail (i.e. at zoom levels 19-20).</em></p>
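<p>As a quick sanity check on that second goal, the standard web-mercator tile math gives the ground resolution (meters per pixel) at a given zoom level and latitude. This is just a sketch; Cleveland’s latitude of roughly 41.5°N is my assumption, not a figure from the project:</p>

```python
import math

def meters_per_pixel(zoom, latitude_deg):
    """Ground resolution of a 256px web-mercator tile at a given zoom/latitude."""
    # 156543.03 m/px is the resolution at the equator at zoom 0
    # (Earth's circumference / 256 pixels)
    return 156543.03392 * math.cos(math.radians(latitude_deg)) / (2 ** zoom)

# Cleveland sits at roughly 41.5 degrees north (assumption)
for z in (19, 20):
    print(f"zoom {z}: {meters_per_pixel(z, 41.5):.3f} m/pixel")
```

<p>So zoom 19-20 means resolving features on the order of 10-25 cm per pixel, which is the bar the scans have to clear.</p>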
<p>Before I started this, I didn’t know much about georeferencing, and I didn’t know what I’d use as the base map for my project - one that would give viewers a sense of the streets, intersections, and lack of sprawl in 1912…</p>
<p>Here’s what I learned and what I’m still trying to figure out:</p>
<p><strong>The sources of paper maps:</strong></p>
<p>CPL has <a href="https://en.wikipedia.org/wiki/Sanborn_Maps">Sanborn maps</a>. Produced every few years in the early 20th century, Sanborns richly detail addresses, land use, streets, rivers, buildings, and oftentimes the property owners of the entire city. Sometimes a building’s usage was also noted. In addition to their utility, they are relatively aesthetically pleasing. They’re also drawn at an extremely fine scale: 200 feet per inch.</p>
<p>These maps were published as a bound book of ‘plats’/‘plates’ - pages of roughly 15 by 10 inches, each covering an arbitrary geographic area.</p>
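<p>Those two numbers - the 200 ft/inch scale and the roughly 15 × 10 inch plates - let you estimate how much ground one plate covers and what one scanned pixel represents. A back-of-the-envelope sketch:</p>

```python
FEET_PER_INCH_ON_MAP = 200    # map scale: 1 inch on paper = 200 feet of ground
PLATE_W_IN, PLATE_H_IN = 15, 10
FOOT_IN_METERS = 0.3048

# Ground area covered by one plate
width_ft = PLATE_W_IN * FEET_PER_INCH_ON_MAP     # 3000 ft wide
height_ft = PLATE_H_IN * FEET_PER_INCH_ON_MAP    # 2000 ft tall
print(f"one plate covers {width_ft} x {height_ft} ft "
      f"({width_ft * FOOT_IN_METERS:.0f} x {height_ft * FOOT_IN_METERS:.0f} m)")

# Ground size of one pixel at different scan resolutions
for ppi in (72, 300, 600):
    ft_per_px = FEET_PER_INCH_ON_MAP / ppi
    print(f"{ppi:4d} PPI scan: {ft_per_px:.2f} ft/px "
          f"({ft_per_px * FOOT_IN_METERS:.3f} m/px)")
```

<p>Notice that a 300 PPI scan works out to roughly 0.2 m of ground per pixel, which is in the same ballpark as the zoom-19 tile resolution mentioned above.</p>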
<p>CPL also has “Hopkins maps”, made by a different company but with the same physical layout and map design.</p>
<p>Fortunately, <a href="https://cplorg.contentdm.oclc.org/digital/collection/p4014coll24">CPL has unbound a few editions, digitally scanned them, and uploaded them into our digital map collection</a></p>
<p>For the <a href="https://cdm16014.contentdm.oclc.org/digital/collection/p4014coll24/id/0/rec/1">city of Cleveland’s 1881 Hopkins maps, there are 40 pages</a>; each image has borders containing extraneous information (page number, map key), and several pages contain areas that are also displayed on another plat.</p>
<p>Although the <a href="https://www.loc.gov/collections/sanborn-maps/about-this-collection/">LOC will eventually be uploading historic Sanborn maps of the entire country</a>, they have barely started on the state of
Ohio, save for good ole’ <a href="https://www.loc.gov/collections/sanborn-maps/?fa=location:ohio">Monroeville</a>.</p>
<p><img src="/images/2018-04-hopkins-1912-plate_19.png" alt="1912 hopkins image of cleveland" srcset=" /images/resized/320/2018-04-hopkins-1912-plate_19.png 320w, /images/resized/720/2018-04-hopkins-1912-plate_19.png 720w, /images/resized/900/2018-04-hopkins-1912-plate_19.png 900w, /images/resized/1200/2018-04-hopkins-1912-plate_19.png 1200w, " /></p>
<p>As shown in the above image, there’s extraneous information (the map scale, the north arrow, the plate number) on each page that would need to be removed or clipped out if I wanted to present them as one contiguous map. (<a href="https://cdm16014.contentdm.oclc.org/digital/collection/p4014coll24/id/1819/rec/11">The image in Cleveland Public Library’s digital gallery</a>.)</p>
<hr />
<p>If I wanted to create a contiguous map, I had a fair amount of work ahead of me, and I didn’t even know what order I should do these steps in?!</p>
<p>So, how do I do this?!</p>
<p><strong>I had multiple questions when I first started</strong>:</p>
<p><em>Do I stitch the plat(e)s together first and then georeference them? Or do I georeference first?</em></p>
<p><em>What tools do I use to stitch them together?</em> (stitching - creating them as if they appeared as one contiguous image)</p>
<p><em>How much accuracy should I get from them? Is 5 meter accuracy (from a reference layer) realistic?</em> <em>What if the original map had distortions in the first place?</em></p>
<p><em>Would I be able to get results as accurate as this GIF below?</em> (Prospect Ave didn’t exist there back then; this is primarily to illustrate Carnegie Ave)</p>
<p><img src="http://localhost:4000/images/2018-04-toggling-between-osm-and-maplayer.gif" alt="Gif switching between 1912 hopkins map and present-day OSM map" /></p>
<p>(after all, I wanted to create a nice digital map layer)</p>
<p>(In Cleveland, OSM is pretty well aligned (usually within 5 meters) with <a href="http://ogrip.oit.ohio.gov/ServicesData/GEOhioSpatialInformationPortal/RESTServiceEndpoints.aspx">State of Ohio aerial imagery licensed in the public domain</a>, which is itself <a href="http://ogrip.oit.ohio.gov/ProjectsInitiatives/StatewideImagery.aspx">pretty darn accurate</a>.)
So, I had a good reference layer.</p>
<p>I spent an hour or so exploring <a href="https://cdm16014.contentdm.oclc.org/digital/collection/p4014coll24">our scanned maps</a> to determine if there were any that, together, would provide enough coverage of the city of Cleveland. Some of the metadata and descriptions in our digital collections were misleading; this item has the title of <a href="https://cdm16014.contentdm.oclc.org/digital/collection/p4014coll24/id/517/rec/6">Plat Book of Cuyahoga County, Ohio Complete in One Volume (Hopkins, 1914)</a>, but if you carefully read the title page of this book and view a couple adjacent pages, you learn that it’s just 1 of 4 volumes needed for complete coverage of Cuyahoga County. Unfortunately, we didn’t have all 4 volumes of the 1914 Hopkins available, so I couldn’t use that as a resource.</p>
<p>I finally found a map collection that had coverage of the entire city of Cleveland: <a href="https://cdm16014.contentdm.oclc.org/digital/collection/p4014coll24/id/0/rec/1">a Hopkins book of Cleveland from 1881</a>.</p>
<p>So, I started out using the public <a href="http://mapwarper.net">mapwarper</a> which is really neat.</p>
<p>I experimented by:</p>
<p>Uploading each image page to mapwarper.net (for now, just manually)</p>
<p>Applying the “mask” that would remove the extraneous areas I didn’t need to reference</p>
<p>georeferencing (rectifying) them</p>
<p>I learned that it doesn’t matter whether you georeference or apply the mask first to a map on mapwarper.</p>
<p>This recommendation may be different if you’re attempting to use the mosaic feature on there.</p>
<p>Lou Klepner reported that <a href="https://github.com/timwaters/mapwarper/issues/88#issuecomment-210443960">thin plate spline is the most effective rectification method on mapwarper</a>; I haven’t definitively noticed one being better than the other.
For the resampling method, I used cubic spline and didn’t find any noticeable speed delay compared to nearest neighbor.</p>
<p>I then downloaded the geotiffs from mapwarper - now georeferenced, with the geographic projection information stored within them - so they can be displayed over other modern maps.</p>
<p>Now I can open the geotiffs in QGIS as raster layers.
They matched up pretty well, although not perfectly (ADD screenshot), and I printed a portion out in QGIS’ print composer. And… you couldn’t read the street names on the printed copy. I learned that these images were scanned and uploaded at 72 PPI and don’t print well.
<strong>Oops</strong>. Our library didn’t save the original lossless digital scans (a practice that has since been corrected for other scanned maps).</p>
<p>So, more searching to see if we had another map set with complete coverage of the city of Cleveland. Yes, we did!
<a href="https://cdm16014.contentdm.oclc.org/digital/collection/p4014coll24/id/1810/rec/11">Volumes one</a> and <a href="https://cdm16014.contentdm.oclc.org/digital/collection/p4014coll24/id/1863/rec/12">two of the 1912 Hopkins of Cleveland</a>.</p>
<p>The publicly available images are 72 PPI, but we had 600 PPI versions of these in private digital storage.</p>
<p>I asked Stephen Titchenal of <a href="http://www.railsandtrails.com/">railsandtrails.com</a> - an underrated resource for rail maps of the 20th century; he’s digitized dozens of maps. He admitted he hadn’t stitched together any map as large as I was proposing, but recommended Photoshop and <a href="http://www.panavue.com/">Panavue Image Assembler</a>, a since-abandoned Windows stitcher. Welp. Most of his maps were 300 PPI and suitable.</p>
<p>Guides by <a href="https://www.nypl.org/blog/2015/01/05/web-maps-primer">Mauricio Giraldo Arteaga, formerly of NYPL</a>, <a href="http://geo.nls.uk/urbhist/guides_georeferencing.html">National Library of Scotland</a>, and <a href="https://lincolnmullen.com/projects/spatial-workshop/georectification.html">Lincoln Mullen</a> are great introductions to the basics of georeferencing with mapwarper but they all assume that you’re only georeferencing one image at a time and not stitching them together.</p>
<p>So now, my task. I ask readers:</p>
<p><strong>Given my constraints:</strong> the computing power of my work and personal computers (a Thinkpad T450s and an HP Z240, both running Ubuntu, each with no more than 16 GB of RAM). Would I be able to work on one giant image
of all of the items stitched together? I tried GIMP on Ubuntu (to be fair, it was a 600 PPI image) and it was nearly unusable on a single image…</p>
<p>It wouldn’t be realistic to upload about 3 GB of images to mapwarper.net…</p>
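<p>Rough arithmetic shows why: using the roughly 15 × 10 inch plate size and the 40 pages of the 1881 book mentioned above (and assuming 4 bytes per pixel for uncompressed RGBA in an image editor), a single stitched image gets enormous:</p>

```python
PLATE_W_IN, PLATE_H_IN = 15, 10   # approximate plate size from above
NUM_PLATES = 40                   # pages in the 1881 Hopkins book
BYTES_PER_PIXEL = 4               # uncompressed RGBA in an editor (assumption)

for ppi in (300, 600):
    px_per_plate = (PLATE_W_IN * ppi) * (PLATE_H_IN * ppi)
    total_px = px_per_plate * NUM_PLATES
    gb = total_px * BYTES_PER_PIXEL / 1024**3
    print(f"{ppi} PPI: {total_px / 1e6:.0f} megapixels stitched, "
          f"~{gb:.1f} GB uncompressed")
```

<p>And editors like GIMP hold undo history and working copies on top of that, so even the ~2 GB figure at 300 PPI is optimistic on a 16 GB machine; the 600 PPI mosaic alone is around 8 GB before any editing.</p>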
<p>So, readers, I’d love to hear your suggestions and thoughts.</p>
<p><strong>I ask a few questions on how to proceed:</strong></p>
<p><em>Given my two goals (a slippy web map and a 24x36 inch print map), would 300 PPI be ok for both?</em></p>
<p><em>In which order should I complete the tasks of cropping/masking the plates, georeferencing the plates, and stitching them together to appear as one image?</em></p>
<p><em>After I georeference them, should CPL provide both georeferenced and non-georeferenced items in our digital collection?</em></p>
<p>Tentatively, I think I’ll batch convert (with ImageMagick) the images to 300 PPI; then crop 1-2 plates in GIMP (if it’s feasible from a memory standpoint), then try to georeference them in QGIS.</p>
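<p>A sketch of that first batch step, assuming ImageMagick’s convert is installed; this only builds and prints the commands (the filename pattern is hypothetical) so the settings can be reviewed before running anything:</p>

```python
TARGET_PPI = 300
# hypothetical filenames; the real scans would be globbed from disk
plates = ["hopkins-1912-plate_19.png", "hopkins-1912-plate_20.png"]

commands = []
for src in plates:
    dst = src.replace(".png", f"-{TARGET_PPI}ppi.png")
    # -resample resizes the pixel data to the new density (a 600 PPI scan
    # resampled to 300 PPI halves each dimension); -units makes the
    # density unit explicit
    commands.append(f"convert {src} -units PixelsPerInch "
                    f"-resample {TARGET_PPI}x{TARGET_PPI} {dst}")

for cmd in commands:
    print(cmd)
```

<p>Note that -resample relies on the source files carrying their 600 PPI density in their metadata; if they don’t, adding -density 600 before the input filename declares it.</p>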
<p>As for sharing the georeferenced versions,
I can see both sides of whether to add them, because georeferencing is never perfect; it’s always a work in progress.</p>
<p>I’d appreciate your advice for my next steps and what you’ve learned if you’ve done something similar (email is skorasaurus at gmail, the left bar has my social media contacts). I’ll share what I’ve learned later.</p>
Tue, 10 Apr 2018 07:25:06 -0400
Tools on map-making at Data Days CLE<p>I gave a workshop/presentation on tools for map-making at <a href="http://datadayscle.org">Data Days CLE</a>
on Friday. One of my favorite moments was the city employee who asked me about alternatives to
ArcGIS/ESRI, specifically being able to offer other departments read access to geodatabases without using ESRI (I hope I remember that correctly).</p>
<p>My slides are at <a href="http://skorasaur.us/ddc18">http://skorasaur.us/ddc18</a> and below is a long list of resources, most of which I mentioned in my talk. This list is also available in my github repository for this - <a href="https://github.com/skorasaurus/ddc18">https://github.com/skorasaurus/ddc18</a></p>
<p>This list is by no means comprehensive, but it’s a starting point for map-making tools, primarily
focusing on web maps (maps that are viewable online) outside of the ESRI ecosystem.</p>
<p><a href="http://mapschool.io/">mapschool</a> - As brief as it is, it’s an extremely useful overview of modern maps and some theory. I don’t know of any other document on maps that is as short yet as informative.</p>
<p><strong>mapmaking suites (SAAS, software as a service):</strong></p>
<p><a href="https://carto.com">carto</a></p>
<p><a href="https://mapbox.com">mapbox</a></p>
<p><a href="https://www.shinyapps.io">shinyapps</a> - R-based</p>
<p><strong>Quicker and simpler web map templates:</strong></p>
<p>All of these simpler web map templates require a relatively minimal amount of data (not a very rigid rule, but I’d say fewer than a couple hundred points/features, without a lot of properties on them). If you have more than this, you’ll need to upload your data to one of the above services.</p>
<p><a href="https://github.com/mapzap/mapzap.github.io">mapzap</a> - less styling options but easier to use</p>
<p><a href="http://mapstarter.com/">mapstarter</a> - also has print options</p>
<p><a href="https://github.com/JackDougherty/leaflet-maps-with-google-sheets">leaflet + google sheets</a></p>
<p><a href="http://umap.openstreetmap.fr/en/">umap</a> - if you quickly want a map with some custom icons to share with others and aren’t picky about the basemap; can be embedded as well.</p>
<p><strong>data manipulation/gis in browser:</strong></p>
<p>As above, these may not work (or will work very slowly) if you’re using files that have hundreds of features or are above, say, 10 MB in size.</p>
<p><a href="http://geojson.io">geojson.io</a> - quickly edit and save to numerous formats; works on files &lt; 10mb</p>
<p><a href="http://mapshaper.org">mapshaper</a> - relatively simple yet powerful, also has command-line based tool</p>
<p><a href="http://dropchop.io/">dropchop</a> - do some common GIS operations within the browser</p>
<p><a href="http://turfjs.org">turf.js</a> - do some common GIS operations within the browser (javascript)</p>
<p><strong>utilities for printing web maps:</strong></p>
<p><a href="https://github.com/portofportlandgis/portmap">portmap</a></p>
<p><a href="http://staticmapmaker.com">staticmapmaker.com</a> - limited options, but usable</p>
<p><a href="http://datadesk.github.io/web-map-maker/">LA Times’ Web Map Maker</a></p>
<p><a href="https://printmaps.mpetroff.net/">Petroff’s Print Maps</a></p>
<p><a href="https://www.mapbox.com/help/static-api-playground/">https://www.mapbox.com/help/static-api-playground/</a></p>
<p><strong>geocoding:</strong></p>
<p><a href="https://smartystreet.com">SmartyStreets</a> - not free, but it does a relatively great job and has a relatively easy-to-use interface; good if you’re on a time crunch and/or have limited skills.</p>
<p><strong>Meta (a list of other lists):</strong></p>
<p><a href="https://github.com/tolomaps/resources">robin’s list</a></p>
<p><a href="https://github.com/RoboDonut/awesome-spatial">awesome-spatial</a> - great list of all types of spatial tools; many of these require knowledge of a particular programming language and comfort with the command line.</p>
<p><a href="https://github.com/tmcw/awesome-geojson">awesome-geojson</a> - great utilities for working with geoJSON.</p>
<p><a href="https://github.com/TheMapSmith/color-tools">color-tools</a> - all resources on colors</p>
<p><a href="http://dataviz.tools/category/mapping/">dataviz-tools’ list</a> - thorough list, somewhat out of date</p>
<p><strong>theory:</strong></p>
<p><a href="http://maptime.io">maptime</a> - An informal association of meetup groups that teach geospatial concepts and maps. They have accessible tutorials. I co-organized Cleveland’s maptime from 2012-2014ish.</p>
<p><a href="https://github.com/tmcw/mapmakers-cheatsheet">mapmakers-cheatsheet</a></p>
<p><strong>Advanced:</strong></p>
<p><a href="https://github.com/wireservice/csvkit">csvkit</a> - Python library and command-line tool to
manipulate CSV files</p>
<p><a href="http://qgis.org">qgis</a> - geospatial analysis, map-making, and so much more; comparable to ArcGIS.</p>
<p><a href="https://github.com/sgillies/frs-cheat-sheet">cheat-sheet for fiona and rasterio</a> -
a cheatsheet for using the Python libraries fiona and rasterio to manipulate geospatial data.</p>
<p><a href="https://github.com/johnkerl/miller">miller</a> - command-line based; very powerful and advanced; specifically for parsing CSV files.</p>
<p><a href="https://github.com/dwtkns/gdal-cheat-sheet">GDAL cheatsheet</a> - GDAL is a geospatial library at the core of many geospatial applications: data conversion, reprojection,
analysis, and more.
A cheatsheet for using some of its command-line tools.</p>
<p><a href="http://d3js.org">d3</a> - extremely powerful javascript library for dataviz and maps</p>
<p><a href="https://beta.observablehq.com/">observable HQ</a> - a sandbox for experimenting with javascript and D3</p>
<p><strong>Sites/Articles mentioned in talk:</strong></p>
<p><a href="http://www.businessinsider.com/most-famous-book-set-in-every-state-map-2013-10">Most famous book set in every US state</a></p>
<p><a href="http://www.ericson.net/content/2011/10/when-maps-shouldnt-be-maps/">when it shouldn’t be a map</a></p>
<p><strong>data sources:</strong>
<a href="http://www.opencleveland.org/blog/guide-to-cleveland-data/">Guide to Cleveland Data sources</a> - A list of places to get available open civic data for the Cleveland area</p>
<p>If you want to start with the command line:
<a href="https://github.com/jlevy/the-art-of-command-line">https://github.com/jlevy/the-art-of-command-line</a></p>
<p><strong>Highly recommended Books:</strong>
Interactive Data Visualization for the Web: An Introduction to Designing with D3 (2nd Edition) - Scott Murray - clearly written with examples; good not just for D3 but also as a refresher or extremely concise overview of HTML, CSS, and JavaScript.</p>
<p>GIS Cartography - Gretchen Peterson
Great design influence for making print and web-maps.</p>
<p>cat photo by <a href="https://www.flickr.com/photos/mahfoudh/37519121762/">Walid Mahfoudh</a></p>
Sun, 08 Apr 2018 22:25:06 -0400
Recently<p>What I’ve been up to (outside of my work):</p>
<p>I used to spend a lot of time listening to, finding, and buying new music. I don’t listen to nearly as much as I used to; my priorities in my free time have changed. Tracking down a great song or album, or knowing that there’s one to be found, just doesn’t give me as much excitement as it once did.</p>
<p>However, these songs were my favorite ones to listen to in 2017 and will remind me of that year for the rest of my life (alphabetical order):</p>
<p>Broken Social Scene - Halfway Home<br />
Broken Social Scene - Anthems for a Seventeen Year Old Girl<br />
Dday One - Contact<br />
Dirty Projectors - Little Bubble<br />
Dolly Spartans - I Hear the Dead<br />
Doves - Rise<br />
Gomez - Options<br />
Noname - Diddy Bop (feat. Raury &amp; Cam O’bi) <br />
Orbital - Belfast<br />
pronoun - a million other things<br />
Sammus - 1080p<br />
Talking Heads - Once in a Lifetime<br />
The Go-Betweens - Love Goes On!<br />
The War On Drugs - Pain<br />
Ultimate Painting - Song for Brian Jones</p>
<p>Some of my favorite albums that I listened to for the first time in 2018: AndyFellaz - BeatBop Street; Kendrick Lamar - DAMN; The Go-Betweens - 16 Lovers Lane; Broken Social Scene - You Forgot It in People.</p>
<hr />
<p>I’ve been stewing on the rest of this post for almost a year now. Deleting portions. I’ve scrapped multiple versions of it.</p>
<p>2017 had been the most successful year for me, professionally. Personally, it’s been one of the hardest, battling anxiety and to a lesser extent, depression. I know I’m pretty fortunate; my struggles are a lot less burdensome than others and I have a lot of privilege.</p>
<p>Articulating my thoughts into sustained, multiple paragraphs in a coherent fashion that is also grammatically correct and well-polished for general audiences is relatively difficult for me.</p>
<p>I’ve been spending less time on twitter and trying to spend that time reading books or actually reading the articles that I’ve saved (liked/favorited) on twitter. I made a conscious effort to go through my twitter likes a couple weeks ago: I had 4,200; I’m now down to ~3,200. I found some articles worth reading, and it was a nice window into my internet consumption over the years. It also reminded me how prevalent link rot is.</p>
<p>It reminded me that I spend less time in the open source geospatial community because my full-time job is in general web development nowadays (primarily wordpress and CSS, language/CMS-wise; making sure that cpl.org is functional). In my experience, the open source geo community was generally quite welcoming to new people, respectful in their behavior at conferences and online, would work together, and would sometimes prioritize (and corporate users would fund) developing documentation.</p>
<p>Reading these saved tweets also reminded me that many of my peers, especially those I professionally
admire, had unfinished projects and blog posts.</p>
<p>There’s a lot more to write, especially my experiences with open data and civic technology in the past couple years.</p>
Sun, 11 Mar 2018 19:25:06 -0400
The role of open data and libraries<p><em>(This is an ongoing draft/manifesto of thoughts I’ve had; I am a web developer at Cleveland Public Library, but these are my views and not those of my employer. Writing this out made me think of even more questions than answers. I may be critical, but I’m critical because I care about libraries.)</em></p>
<p>As a participant in the open data and civic tech movement - brigade captain of <a href="http://opencleveland.org">Open Cleveland</a> and web developer at the Cleveland Public Library - I see the potential for libraries to play a much larger part in the open data movement.</p>
<p><strong>Public libraries can and should (?) be stewards of digital open data because they have historically been stewards of data, have public trust and neutrality, have subject-domain experts, and are connected with the community.</strong></p>
<p>(What’s open data? <a href="https://opengovdata.org/">https://opengovdata.org/</a> is great.)</p>
<p>Why:</p>
<p><strong>Public Libraries have historically been stewards of data:</strong><br />
Historically, we librarians, are already are open data stewards. The Cleveland Public Library’s Public Administration Library (https://cpl.org/locations/public-administration-library/) has been designated as the “the most complete collection of material on Cleveland city government available anywhere” including City Council legislation, budgets, and more. This stewardship and sharing only has been on paper or microfilm.</p>
<p>CPL is also a <a href="https://cpl.org/subjectscollections/governmentdocuments/">Federal Depository Library</a>. These libraries were the original ‘data portals’ - a centralized, on-paper access point guaranteeing public access to federal data (Census, reports, contact information, and more). (With this data now managed at the federal level on data.gov, how should Federal Depository Libraries continue their function? I don’t know.)</p>
<p>City data portals and the open data movement haven’t been focused on maintaining or sharing historical data, often only sharing the most current version of a data set. Who is archiving and saving the older versions? As archivists, libraries can fill this role too.</p>
<p><strong>Public Libraries have subject-based experts and know how to find knowledge:</strong><br />
We know that just because there’s an open data portal doesn’t mean that people will use it. Cities hosting open data portals are realizing that a portal isn’t enough. For open data to have any effect for the public good, it needs to be used like any other resource: a tool, a means to an end, a source to answer a question, a source to analyze.
Open data is just another source of knowledge that needs to be interpreted (by knowing how to filter the data, how to structure queries technically, how to use technical tools, etc.) to find and then further analyze the information that a patron is trying to access, so the patron has their answer/knowledge.</p>
<p><strong>Connected with the community:</strong><br />
Our public libraries are still in the community and do a relatively better job of working with all communities and being places welcoming to all. They’re one of the few organizations that still have a wide reach and collaborate with entities across different sectors. They’re one of the only third places left for people to meet. They’re one of the few places where people who normally don’t interact with each other can.</p>
<p>Libraries, including CPL, have been teaching people how to utilize then-new sources of information (the internet), tools to make sense of them (Excel), and basic
<a href="https://medium.com/read-write-participate/remixing-mozillas-web-literacy-curriculum-for-cpl-2170e44e4610">digital literacy courses</a>. Carnegie Library in Pittsburgh is teaching <a href="https://www.carnegielibrary.org/event/data-101-finding-stories-data/">data literacy courses</a> and how to use data sources. These are good starts for libraries helping people and institutions, especially those from marginalized backgrounds, learn how to access the data. It’s just not enough to provide the raw material (books, databases) or open government data. Libraries can help people make sense of these materials and enhance patrons’ use of them, as they do with book discussions, instructing patrons on accessing and using databases, and offering genealogy clinics to use and understand those resources. The library would be the data intermediary, perhaps doing the data analysis and helping people and institutions understand the data.</p>
<p><strong>Have neutrality, public trust:</strong><br />
(Perhaps the most contentious point and least fleshed out?)
Libraries are luckily generally well funded in NE Ohio and generally have the public’s trust. By being non-elected, or at least, so far, relatively free of political influence, they could continue to share data if a government administration cuts access to its data (just see what’s happening at the federal level). They could help present the material to patrons in ways that the government may find critical of it.</p>
<p><strong>The challenges here and ahead:</strong><br />
Even from my limited experience at CPL, we’re limited by capacity. Librarians have the subject expertise but don’t have the general technical expertise to do the extracting, transforming, and loading of raw data sets into forms patrons can access. A combination of better technical training for staff members and of developers making it easier to fulfill patrons’ common data requests would help.
Perhaps a bad analogy: like how, for word processing and formatting, there’s LaTeX on one end of the spectrum (extremely powerful, for extremely custom, esoteric needs) and Word on the other (suitable for common needs). Libraries should have staff members who know both ends to accommodate the variety of patrons’ needs.</p>
<p>Although libraries are already sharing some historical open data, the process of migrating the data from paper records into a digital format is laborious. The first part of the digitization process - creating digital images of these items (look at all of the digital collections!) - has been generally embraced by libraries, but the knowledge and tools to transform those images into open or structured data generally haven’t been applied (except perhaps OCR’ing the text of some books). Budgets would need to be increased to grow staff/institutional capacity to migrate the data on paper into a digital, structured, open format.</p>
<p>For example, CPL has plenty of <a href="https://cplorg.contentdm.oclc.org/digital/collection/p4014coll24">digital maps available</a> but no spatial data sets yet (for example: boundaries, building footprints) that could be created from these digital maps. I’m working on creating a geospatial data set of Cleveland’s annexations from these <a href="http://mapwarper.net/maps/22169">two</a> <a href="http://mapwarper.net/maps/22173">maps</a>.</p>
<p>We sometimes go half-way in preservation and need to make sure that we’re keeping these capabilities for open data: for example, digitizing maps at a resolution too low for someone to <a href="https://apollomapping.com/blog/g-faq-orthorectification-part">georeference and orthorectify</a> them; licensing is another issue.
<a href="http://spacetime.nypl.org/">NYPL’s Space/Time Directory</a> is creating tools to improve digitization processes/workflows to create these data sets. Perhaps we should offer our maps already georeferenced (we don’t).</p>
<p>Administrations also need to recognize the value (and limits) of open data in order to fund this and, if they haven’t already, to establish partnerships with the holders of the data: the local governments.</p>
<p>Libraries have historically been stewards of data, and I think they have generally missed the initial curve of the growing open data sector/ecosystem. As established third parties with a mission and a history of maintaining and sharing knowledge with the broader community under minimal restrictions, they can also be the intermediary between the raw data and the patrons, helping patrons find the meaning and interpretation of the data.
In the meantime, libraries should begin working with local governments and groups like <a href="https://www.codeforamerica.org/join-us/volunteer-with-us">Code For America brigades</a> (volunteer groups using open data and civic tech to improve their communities and local governments) to learn how they can partner to serve community needs fulfilled in part by open data, and to be good stewards of data.</p>
<p>(Thank you to everyone who inspired me to write this and laid some groundwork by writing, studying, or talking about this, notably Leila Slutz, Anastasia Diamond-Ortiz, and Mita Williams).</p>
Sun, 22 Oct 2017 14:55:06 -0400http://localhost:4000/2017/10/libraries-and-open-data/
http://localhost:4000/2017/10/libraries-and-open-data/Mapping your neighborhood in Cleveland and Akron<p>Where is your neighborhood? What is its name? Where is its boundary? Is this boundary fuzzy for you?</p>
<p>Neighborhoods and these answers change from person to person.</p>
<p>In Cleveland, neighborhoods’ boundaries are largely left to the imaginations of residents, visitors, realtors, businesses, and non-profits.</p>
<p>The closest thing to official boundaries are the City Planning Commission’s Statistical Planning Areas, adopted in the early 2000s. These are largely ignored and not widely adopted, with good reason: many of the names there aren’t used in everyday life.</p>
<p>Here’s your chance to say where your neighborhood is and view what others have shared.</p>
<p><strong>Map Your Neighborhood in Cleveland and Akron at <a href="http://skorasaur.us/nh">http://skorasaur.us/nh</a></strong></p>
<p>You’re encouraged to map (that is, draw) the neighborhood where you live, but also any neighborhoods that you may not live in but spend a lot of time in or feel strongly about.</p>
<p>No neighborhood or city boundaries are present on the map, to remove bias and to allow boundaries that cross city lines.</p>
<p>With projects like <a href="http://openstreetmap.org">OpenStreetMap</a> and improved technology and software, mapping is not only a noun; it is also being used as a verb: creating and modifying what is (or isn’t) on a map, the canvas representing a space.</p>
<p>After you’ve mapped a neighborhood, view what others have drawn.</p>
<p>I hope this sparks a conversation of neighborhood identity in each of you.</p>
<p>Thanks to the work of <a href="http://pnwmaps.com/neighborhoods/">Nick Martinelli</a> and <a href="http://bostonography.com/2015/map-your-neighborhood-again/">Andy Woodruff and Tim Wallace at Bostonography</a>, I’ve been able to build upon their work and customize it for Cleveland.</p>
<p>Identifying neighborhoods has fascinated me for some time and inspired me to create my first map - <a href="https://skorasaurus.wordpress.com/cleveland-neighborhood-map/">my (incomplete) interpretation of Cleveland’s neighborhoods</a>, in 2010-2011.</p>
<p>To make your own instance for a city, the source code and directions are available on <a href="https://github.com/enam/neighborhoods/">github</a>. I’ve made a couple of adjustments (like directions) that I’ll be adding shortly.</p>
Sat, 05 Sep 2015 19:25:06 -0400http://localhost:4000/2015/09/crowd-sourced-neighborhoods/
http://localhost:4000/2015/09/crowd-sourced-neighborhoods/Recently<p>What I’ve been up to (outside of my work):</p>
<p>Setting up tech (registration, website updating/maintenance, and writing the content) for the <a href="http://jhfeichtnerfund.com/">8th annual Jake’s Invitational</a>.</p>
<p>If you’re looking to golf for a great cause in Northeast Ohio, check out the 8th Jake’s Invitational on August 9th.</p>
<p>We fund children’s futures by giving them financial aid to Lawrence School, a great place for students with learning differences.</p>
<p>Spending more time with open data.</p>
<p>Obtaining the data (especially local data) to be used in maps has been time-consuming. When exploring or thinking about different topics to understand through maps, I am limited by the data that is available.</p>
<p>This has led me to spend more time to advocate for and work with open data on a broader scale. I’ve been co-leading <a href="http://www.opencleveland.org">Open Cleveland</a> which along with <a href="http://openneo.org">OpenNEO</a> and <a href="http://www.hackcleveland.org">Hack Cleveland</a> has been the open data movement in Cleveland.</p>
<p>We’re educating local politicians and city employees that the civic data they work with and manage can be useful if it’s made available to
the public; for example, a web form that lets someone apply online <a href="https://github.com/opencleveland/large-lots">to take formal stewardship of the vacant lot next door to them</a>.</p>
<p>Data alone won’t solve anything but it will make a lot of others’ jobs easier.</p>
<p>I didn’t submit a talk to NACIS this year. Do I regret it? Not yet. I might later.</p>
<p>I’ll very briefly share some thoughts on animated temporal maps in Carto:</p>
<p>I’ve been thinking a little about animated temporal maps: maps whose features change based on a specific time.
Here’s one <a href="http://darkhorseanalytics.com/blog/wp-content/uploads/2014/05/nyBreathe.gif">example</a>.</p>
<p>Torque by CartoDB is one easy-to-use library designed for temporal mapping. I haven’t seen much use of Torque (or many temporal maps) in recent months on CartoTalk or on Twitter.</p>
<p>I hadn’t thought of a use for Torque either, until last week: visualizing Cleveland’s building demolitions over time.</p>
<p>For those outside of Cleveland: yes, many of these were likely houses; it’s a visual representation of the housing crisis.</p>
<p>I was wondering how I could see it spread and what areas were hit hardest. I want to see different ways this can be visualized.</p>
<p>This first visualization is just a proof of concept that I got up and running; I’ve fiddled with Torque’s API a little since then, although not enough to write up for you just yet. I will soon. I am now sleepy.</p>
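<p>For the curious, a Torque map is configured mostly through CartoCSS. Here’s a minimal sketch of what the demolition animation’s style might look like (the layer name and the demolition_date column are hypothetical, not my actual table):</p>

```css
/* Torque animation settings live on the Map element */
Map {
  -torque-frame-count: 256;              /* number of animation frames */
  -torque-animation-duration: 30;        /* seconds for a full loop */
  -torque-time-attribute: "demolition_date"; /* hypothetical date column */
  -torque-aggregation-function: "count(cartodb_id)";
  -torque-resolution: 2;
  -torque-data-aggregation: linear;
}

/* Styling for each rendered point */
#demolitions {
  marker-fill: #d7301f;
  marker-width: 4;
  marker-line-width: 0;
}
```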
<p>Listening to <a href="https://www.youtube.com/watch?v=QO8gzvS82UI">Jean-Christian Arod - Detour Nostalgique</a>, from the movie CRAZY. I fell in
love with the song back in ’05 or ’06, and just rediscovered it earlier tonight, listening to it a few times on loop.</p>
Mon, 01 Jun 2015 19:25:06 -0400http://localhost:4000/2015/06/recently/
http://localhost:4000/2015/06/recently/Mapping Clevelands Building Ages<p>A quick update to let people know that I made an <a href="http://skorasaur.us/maps/clebuildings.html"> online map of Cleveland’s Building ages </a></p>
<p>I’ve wanted to make one for a couple years now but didn’t have access to the data. I finally do now and uploaded the <a href="https://github.com/skorasaurus/clebuildings/blob/master/all.csv"> raw data on github </a>. Once you click on that link, click raw, and save the file as a CSV.</p>
<p>The Case for Open Data:</p>
<p>I’ve wanted to make this map for years… Cleveland and Cuyahoga County’s data access is less than stellar and falling behind other cities.
<a href="http://www.codeforamerica.org/governments/principles/open-data/"> Open Data policies </a> foster the culture where information like building construction dates is shared and readily accessible and updated on the internet for government employees, non-profits, private businesses, and anyone else.</p>
<p>In Cleveland, there’s growing awareness of open data’s value to communities. <a href="http://www.opencleveland.org"> Open Cleveland </a>, a <a href="http://www.codeforamerica.org/"> Code For America Brigade</a>, is among several groups and individuals, including <a href="http://cpl.org"> Cleveland Public Library </a>, the <a href="http://povertycenter.case.edu/">Center on Urban Poverty and Community Development at Case Western Reserve University </a>,
and <a href="http://www.onecommunity.org/">One Community </a>, working to promote open data policies and demonstrate their value.</p>
<p>If you’re interested in participating, you’re welcome to become a part of <a href="http://www.opencleveland.org"> Open Cleveland </a> and attend one of our meetups and events.</p>
<p>Some technical/cartography details:</p>
<p>It’s only the 2nd choropleth map that I’ve ever made (I usually make base maps or transportation maps), and I learned more about that area of cartography.
I tried out a couple of different data classification methods (Jenks, equal interval, and others) that are built into cartodb, the mapping software library that I used. None of them really felt ‘right’ to me, so I made my own breaks after examining the source data and looking for trends.</p>
<p>I only used 6 breaks. Is that too many or too few? I am not experienced in this area of cartography, and found advice to stick with 5-7 (on the belief that more would confuse the reader). I would have liked to try a scheme where each increase in year changes the color ever so slightly, perhaps shifting the hue by 1 or 2 points for each year. I couldn’t figure out how to do this in cartodb (and I don’t know the proper cartographic term for it either), so I opted to
work from cartodb’s data classifications.</p>
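<p>For reference, equal interval, the simplest of those classification methods, just splits the data range into evenly sized bins. A minimal Python sketch (illustrative only; these aren’t the breaks I used on the map):</p>

```python
def equal_interval_breaks(values, n_classes):
    """Return the n_classes - 1 break values that split the range
    of `values` into equally sized intervals."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    return [lo + width * i for i in range(1, n_classes)]

# e.g. construction years spanning 1900 to 2000, in 5 classes
years = [1900, 1925, 1950, 1975, 2000]
print(equal_interval_breaks(years, 5))  # [1920.0, 1940.0, 1960.0, 1980.0]
```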
<p>Why cartodb:</p>
<p>I actually originally tried Mapbox’s mapbox-studio, as I was more experienced with their software and thought it would be a good tool for the job (and I was really itching to use mapbox-studio again; yes, I know the latter is not a good reason to base your decision on which tools to use to make a map). But mapbox-studio was unable to create the vector layer for me; it gave me an error that I couldn’t have more than X (I think it was 50k) points at a particular zoom level. So I went to cartodb. It handled the 40 MB GeoJSON file that I uploaded without a problem.</p>
<p>Why points:</p>
<p>There have been a few building age maps similar to this one (web maps, detailed to the block level) for NYC, Austin, PDX, and Chicago.
All of them used the actual building footprints. Unfortunately, my source data only gives the point centroid of each land parcel (lat/lon).
I briefly thought about trying to match them up with the publicly available building footprints provided by the County, but those were last updated in 2007 (!) (they’re currently updating them now) and are woefully incomplete in coverage. I also thought about matching them up with the building footprints that the Cleveland Metroparks (at the forefront of open source GIS in Northeast Ohio) made around 2012. That data set is relatively complete in terms of coverage. However, I’m doing this in my limited free time and didn’t want to go down that rabbit hole, so I decided to just go with the points.</p>
<p>Notice errors or omissions in the source data? I have too, and I need to figure out who to contact to send the corrections. This data was missing buildings built in 2013 and 2014.</p>
Thu, 08 Jan 2015 00:00:00 -0500http://localhost:4000/2015/01/mapping-clevelands-building-ages/
http://localhost:4000/2015/01/mapping-clevelands-building-ages/Visualizing Improvements in OpenStreetMap in Carroll County<p>Before I went to visit Carroll County in North Central Ohio for a weekend in July, its state in OSM was really poor. It consisted mostly of TIGER-imported ways whose geometry hadn’t been changed since the original TIGER import in 2008. In many cases, the roads’ geometry didn’t match up with reality. Some roads were 30 meters off from where they really are; small townships of 10-20 streets looked like jigsaw puzzle pieces that needed to be rotated; random roads sat in the middle of grassy fields. One exception was Carrollton, the largest city in the county, which had been improved very well by OSM user Evan Edwards.</p>
<p>After that weekend, I corrected some road geometries and added a dozen or so POIs, and wondered how long it would take me to fix all of the roads in the county and what it would reveal about editing TIGER data.</p>
<p>Answer: a lot longer than I thought! (~30 hours over a couple of months). Near the end, it honestly became a chore. But I wanted to see how it would look visually; I didn’t want to do that work for nothing.</p>
<p>How I did it (very briefly):
I was able to improve the ways more quickly with a workflow where I’d copy the geometry from a TIGER 2013 file after verifying that the TIGER 2013 geometry matched the aerial imagery better than the existing way did.</p>
<p>This is a great strategy, and I’d recommend it to anyone who is interested in improving TIGER-imported ways in the future.</p>
<p>Watch a quick example of this workflow in the GIF.</p>
<p><img src="http://media.giphy.com/media/yoJC2xRysrWnNO389a/giphy.gif" alt="GIF - editing osm with tiger2014 and josm" title="gif" /></p>
<p>If you’re interested in a detailed workflow, skip to the end…</p>
<h2 id="visualization">Visualization:</h2>
<p>Here’s a first draft of the map:</p>
<ul>
<li>http://skorasaurus.github.io/carroll.html</li>
</ul>
<p>Now, I’m trying to visualize the improvements that I made by displaying
the distance between two linestrings: the data before I edited (OSM data from spring 2014, in a PostGIS table named old_line) and the OSM data from Nov. 2014 (in a PostGIS table named new_line).</p>
<p>How I’m planning to do this.</p>
<p>A] select each point in every linestring from old_line, as oldpointdump:</p>

<p>SELECT ST_DumpPoints(way) FROM old_line as oldpointdump</p>

<p>B] for each point in oldpointdump, find the closest linestring in new_line and measure that distance
C] store this distance in a new column (as distfromtiger)</p>

<p>D] then, for every segment in old_line between two points (call these points J and K), add a column and assign it the average of the two distances: 0.5 * (distfromtiger of J + distfromtiger of K)</p>

<p>E] style the linestrings in mapbox/cartodb using the value derived in D</p>
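<p>Steps A] through C] could be sketched in PostGIS along these lines (a sketch only; it assumes each table has a <code>way</code> geometry column and an <code>osm_id</code>, and uses stock <code>ST_DumpPoints</code>/<code>ST_Distance</code>):</p>

```sql
-- A] explode each old way into its component points
WITH oldpointdump AS (
    SELECT osm_id, (ST_DumpPoints(way)).geom AS pt
    FROM old_line
)
-- B] and C] for each point, the distance to the nearest new way
SELECT o.osm_id,
       o.pt,
       (SELECT MIN(ST_Distance(o.pt, n.way))
        FROM new_line n) AS distfromtiger
FROM oldpointdump o;
```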
<p>==============================</p>
<h2 id="a-much-more-thorough-workflow-follows">A much more thorough workflow follows:</h2>
<p>I used ogr2osm to convert the 2013 TIGER shapefile (then TIGER 2014, once it became available) into an OSM file, loaded that file in JOSM, and deleted all of the unnecessary tags that were created in it except the fullname field (to ensure that the name matched up with the correct road); I saved that on my computer for my workflow.</p>
<p>Then, I went to editing. I downloaded some OSM data in JOSM, opened the TIGER osm file, and checked out a particular area with Bing imagery.
For a particular street, I’d go over the entire street to see if the 2013 road is in fact closer to matching the aerial Bing imagery than the existing road is. I’d also load up the history for the road to see if any nodes were changed (Ctrl+H). Many roads had been changed to the proper highway type (tertiary) but hadn’t had any of their geometry modified since the original TIGER import. To more easily find roads that were not yet modified, I would search for: user:bot-mode OR user:DaveHansenTiger AND type:way AND -modified</p>
<p>If the 2013 way is more closely aligned than the existing road, delete the name tag in the TIGER 2013 way you’re about to select; copy it (Ctrl+C, with the TIGER layer selected in JOSM, since it sits in a separate layer); switch to your OSM data layer, and then paste it into that layer. Then select the two ways, the new way and the way you’re replacing, and hit Ctrl+Shift+G. Ctrl+Shift+G is the ‘replace geometry’ shortcut that keeps the history of the original way and the relations that the way belongs to.</p>
<p>I looked over each way that I imported. In some cases the imported way was about 5-10 meters from the center of the road in the Bing imagery, but still better than before and within the general margin of error that we have in OSM.</p>
<p>After you finish all of the ways in your area, run the JOSM validator. First fix all of the errors marked as duplicated highway nodes; then run the validator again, go to the ‘ways ended near other nodes’ warnings, right-click each one in the list (which brings you to the location of the node), and hit ‘N’ for each. At this point, I’d go back to the ways I modified and simplify them (Ctrl+Y), because simplification deletes any excess nodes without changing the nature of the geometry. I set my Simplify Way threshold (in JOSM’s advanced preferences) to 0.8. JOSM’s simplification uses the Ramer–Douglas–Peucker algorithm.</p>
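<p>As an aside, the Ramer–Douglas–Peucker algorithm mentioned above is simple enough to sketch in a few lines of Python (a plain-coordinates sketch, not JOSM’s actual implementation; the tolerance plays the same role as my 0.8 setting):</p>

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the segment from start to end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    # Project pt onto the segment, clamping to its endpoints.
    t = ((x - x1) * dx + (y - y1) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(x - (x1 + t * dx), y - (y1 + t * dy))

def simplify(points, tolerance):
    """Ramer-Douglas-Peucker: drop points that lie within `tolerance`
    of the line between the endpoints, recursing around the farthest one."""
    if len(points) < 3:
        return points
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    far = max(range(len(dists)), key=dists.__getitem__)
    if dists[far] <= tolerance:
        return [points[0], points[-1]]
    left = simplify(points[:far + 2], tolerance)
    right = simplify(points[far + 1:], tolerance)
    return left[:-1] + right
```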
Sat, 29 Nov 2014 18:25:06 -0500http://localhost:4000/2014/11/comparingcounties-part1/
http://localhost:4000/2014/11/comparingcounties-part1/Jekyll Clean Theme* Get it from [github](https://github.com/scotte/jekyll-clean).
* See the [live demo](https://scotte.github.io/jekyll-clean).
* See it [in action on my own blog](https://scotte.org).
Welcome to the sample post for the Jekyll Clean theme.
A simple and clean Jekyll theme using [bootstrap](http://getbootstrap.com)
(not to be confused with jekyll-bootstrap) that's easy to modify and very
modular in component and element reuse.
It uses Disqus for comments and includes Google Analytics support. Both of
these features are disabled by default and can be enabled via \_config.yml. You
can also rip this code out of the templates if you like (footer.html and post.html).
The beauty of Jekyll - keep things clean... Jekyll Clean!
The theme works well on mobile phones, using a collapsible nav bar and hiding the
sidebar. The links pane in the sidebar is available on mobile through the nav menu,
and you can do the same thing for any other sections added to the sidebar.
Don't forget to occasionally merge against my upstream repository so you can get
the latest changes. Pull requests are encouraged and accepted!
Installation
============
If you don't have a blog already on github, start by cloning this repository.
Best to do that directly on github and then clone that down to your computer.
If you already do have a blog, you can certainly apply this theme to your existing
blog in place, but then you won't be able to merge as the theme changes. If you
re-apply your blog history on top of this theme's **gh-pages** branch, it's then
easy to update to the latest version of the theme. You also don't want to have to
deal with resolving old conflicts from your existing history, so you may wish to
push your existing master off to a new branch (so you keep the old history) and start
a new branch with this as the start, merging in your \_posts and other assets (after
git rm'ing the current \_posts).
Not ideal, but you have to make a choice - either apply it manually or base your
blog off this theme's branch. Either way it will work, and both have their own
pros and cons.
You can setup an upstream tracking repository like so:
```
$ git remote add upstream git@github.com:scotte/jekyll-clean.git
```
And now when you wish to merge your own branch onto the latest version of the
theme, simply do:
```
$ git fetch upstream
$ git merge upstream/gh-pages
```
Of course you will have to resolve conflicts for \_config.yml, \_includes/links-list.html,
and \_posts, and so on, but in practice this is pretty simple.
This is how I maintain my own blog which is based on this theme. The old history is
sitting in an **old-master** branch that I can refer to when I need to.
Running Locally
===============
Here's the exact set of packages I need to install on Debian to run jekyll
locally with this theme for testing.
```
$ sudo aptitude install ruby ruby-dev rubygems nodejs
$ sudo gem install jekyll jekyll-paginate
```
And then it's just a simple matter of running jekyll locally:
```
$ jekyll serve --baseurl=''
```
Now browse to http://127.0.0.1:4000
Using gh-pages
==============
Running a jekyll site is a bit outside the scope of this doc, but
sometimes it can be a bit confusing how to configure jekyll for
project pages versus user pages, for example.
To start with, read through
[the documentation here](https://help.github.com/articles/user-organization-and-project-pages/).
This will provide a good overview on how it all works. The git branch and
baseurl (in _config.yml) will change depending on the sort of site deployed.
When you clone this repository, it's set up for project pages, so the
deployed branch is "gh-pages" and baseurl is configured to 'jekyll-clean',
because that's the name of this project.
If you plan to deploy this as user pages, the deployed branch is "master"
and baseurl is configured to '' (i.e. empty).
Comment Systems
===============
Jekyll clean supports both [isso](https://posativ.org/isso) and
[disqus](https://disqus.com) comment systems.
After enabling **comments**, either **isso** or **disqus** must
be configured. Don't try configuring both!
Isso Comments
=============
Isso requires running a local server, so is not suitable for hosting
in github pages, for example. Isso is open source and keeps all your
data local, unlike Disqus (who knows exactly what they are doing with
your data).
In _config.yml you'll need to set **isso** to the fully-qualified URL
of your isso server (this is the value for **data-isso** passed to the
isso JS). Make sure **comments** is true.
Disqus Comments
===============
Getting Disqus to work can be a bit more work than it seems like it should be.
Make sure your Disqus account is correctly configured with the right domain
of your blog and you know your Disqus shortname.
In _config.yml you'll need to set **disqus** to your Disqus shortname and
make sure **comments** is true.
Finally, in posts, make sure you have **comments: true** in the YAML front
matter.
More information on using Disqus with Jekyll is
[documented here](https://help.disqus.com/customer/portal/articles/472138-jekyll-installation-instructions).
Code Syntax Highlighting
========================
To use code syntax highlighting, use the following syntax:
```
```python
import random
# Roll the die
roll = random.randint(1, 20)
print('You rolled a %d.' % roll)
``` #REMOVE
```
(Remove #REMOVE from the end of the last line). Which will look like this in
the rendered jekyll output using the default css/syntax.css provided with this
theme (which is the **colorful** theme from [https://github.com/iwootten/jekyll-syntax](https://github.com/iwootten/jekyll-syntax)):
```python
import random
# Roll the die
roll = random.randint(1, 20)
print('You rolled a %d.' % roll)
```
You can, of course, use any theme you wish, see the jekyll and pygments
documentation for more details.
License
=======
The content of this theme is distributed and licensed under a
![License Badge](/images/cc_by_88x31.png)
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode)
This license lets others distribute, remix, tweak, and build upon your work,
even commercially, as long as they credit you for the original creation. This
is the most accommodating of licenses offered. Recommended for maximum
dissemination and use of licensed materials.
In other words: you can do anything you want with this theme on any site, just please
provide a link to [the original theme on github](https://github.com/scotte/jekyll-clean)
so I get credit for the original design. Beyond that, have at it!
This theme includes the following files which are the properties of their
respective owners:
* js/bootstrap.min.js - [bootstrap](http://getbootstrap.com)
* css/bootstrap.min.css - [bootstrap](http://getbootstrap.com)
* js/jquery.min.js - [jquery](https://jquery.com)
* images/cc_by_88x31.png - [creative commons](https://creativecommons.org)
* css/colorful.css - [iwootten/jekyll-syntax](https://github.com/iwootten/jekyll-syntax)
Fri, 22 Aug 2014 19:25:06 -0400http://localhost:4000/2014/08/jekyll-clean-theme.md/
http://localhost:4000/2014/08/jekyll-clean-theme.md/