
About five years ago (I’m too lazy to look it up right now), the City of Chicago adopted an energy benchmarking law. This means that owners of buildings of a certain size would soon be required to report how much energy (electricity, natural gas, district steam, chilled water, and other fuels) their buildings use. Every few years they must also have their reported data verified.

The city has posted three years of energy reports for the “covered” buildings (the ones of a certain size) on its data portal. I copied the Chicago Energy Benchmarking dataset into the Chicago Cityscape database (for future features) and then loaded it into QGIS so I could analyze the data and find the least efficient buildings in Chicago.

The dataset has all three years, so I started the analysis by filtering for only the latest year, 2016. I first visualized the data using the “ghg_intensity_kg_co2e_sq_ft” column, which is “greenhouse gas intensity, measured in kilograms of carbon dioxide equivalent per square foot”. In other words, it measures how much carbon the building causes to be emitted, based on its energy usage and normalized by its size.

In QGIS, to symbolize this kind of quantitative data, it helps to show them in groups. Here are “small fry” emitters, medium emitters, and bad emitters. I used the “Graduated” option in the Symbology setting and chose the Natural Breaks (Jenks) mode of dividing the greenhouse gas intensity values into four groups.

There are four groups, divided using the Natural Breaks (Jenks) method. There’s only one building in the “worst” energy users group, which is Salem Baptist Church, marked by a large red dot. The darker red the dot, the more energy per square foot that building consumes.

Among the four groups, only one building in Chicago that reported in 2016 was in the “worst emitters” group: Salem Baptist Church of Chicago at 10909 S Cottage Grove Avenue in Pullman.

The Salem Baptist Church building was built in 1960, has a gross floor area of 91,800 square feet, and an Energy Star rating of 1 because it emits 304.6 kilograms of carbon dioxide equivalent per square foot (kgco2esf). (The Energy Star rating scale is from 1 to 100.)
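As a back-of-envelope check, the reported intensity and floor area imply the building’s total annual emissions (a one-line calculation using only the figures reported above):

```shell
# Total annual emissions implied by the reported figures:
# 304.6 kg CO2e per sq ft x 91,800 sq ft
awk 'BEGIN { printf "%.0f\n", 304.6 * 91800 }'
```

That’s roughly 27,962,280 kg, or nearly 28,000 metric tons of CO2e per year.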

The next-worst emitter in the same “Worship Facility” category as Salem Baptist Church is more than an order of magnitude lower. That’s St. Peter’s Church at 110 W Madison Street in the Loop, built in 1900, which emits 11.7 kilograms of carbon dioxide equivalent per square foot (but which also has an Energy Star rating of 1).

The vast difference is concerning: Did the church report its energy usage correctly, or is it failing to maintain its HVAC equipment or building envelope, causing it to leak that much air?

A different building was in the “worst” emitter category in 2015 but appeared to have improved something by 2016 to use a lot less energy. Looking deeper at the data for Piper’s Alley, however, suggests something else happened.

In 2015, Piper’s Alley reported a single building with 137,176 gross square feet of floor area. The building’s owner also reported 5,869,902 kBTUs of electricity usage and 1,099,712,681 kBTUs of natural gas usage. Since these are reported in kilo-BTUs, you multiply each number by 1,000: Piper’s Alley reported using roughly 1.1 trillion BTUs of natural gas. That seems like an insane amount of energy, but it could be totally reasonable – I’m not familiar with data on how much energy a “typical” large building uses.
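The kBTU-to-BTU conversion is easy to sanity-check in a terminal:

```shell
# kBTU -> BTU: multiply the reported kilo-BTU figure by 1,000
gas_kbtu=1099712681   # natural gas usage Piper's Alley reported for 2015
echo $((gas_kbtu * 1000))
```

which prints 1099712681000 – about 1.1 trillion BTUs.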

Piper’s Alley in Old Town is the building that reported two different floor areas and vastly different energy usage in 2015 and 2016. The building’s owner didn’t report data for 2014 (although it may not have been required to).

There’s another problem with the reporting for Piper’s Alley, however: For 2016, it reported a gross floor area of 217,250 square feet, about 58 percent larger than the area it reported in 2015. The building also reported using significantly more electricity and drastically less natural gas, for a vastly lower kgco2esf value.


A lot of geospatial data (GIS) is stored on ArcGIS MapServers, which is part of the Esri “stack” of products that municipalities use to manage and publish GIS data. And a lot of people want that data. If you have ArcGIS software on your Windows computer, then it can be pretty easy to plug in the map server URL and manipulate and extract the data.

For the rest of us who don’t have an extremely expensive license to that software, there’s a “command line” tool (written in Python) that runs on any computer, downloads any layer of GIS data hosted on an ArcGIS MapServer, and automatically converts it to GeoJSON.

You’ll need to install the Python package pyesridump, from the OpenAddresses GitHub repository, created by Ian Dees and other contributors.

Installing pyesridump is easy if you have pip installed, using the command pip install esridump.

The next thing you’ll need is the URL to a layer in a MapServer, and these are not easy to find.

Finding data to download

I can all but guarantee that the county where you live has a MapServer. Before you continue, though, check to see if your county (or other jurisdiction) has the “open data portal” add-on to its ArcGIS stack.

Here are links to the open data portals enabled by Esri for Lake County, Illinois, and Broomfield County, Colorado. These portals make it much easier to browse and find data to download (in shapefile and other formats), and if yours has one, you can skip this tutorial.

I don’t have a good recommendation to find the MapServer URL, though. A reader suggested looking for MapServers for jurisdictions around the world by looking through Esri’s portal of open data called ArcGIS Hub. Once you locate a dataset you want, you can find the MapServer URL under About>Data Source on the right side of the page.

I normally find them by looking at the HTML source code of a MapServer I already know about.

The first term, esri2geojson, tells your computer which program to load.

The second term is the URL of the MapServer layer.

The third term is the filename and location where you want to store the file. I prefer running the command “inside” the folder where I want the file to be stored. You can also specify a full path for the file. On a Mac this would look like ~/Documents/GIS/projectname/cookcounty_commissioners.geojson
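Putting the three terms together, a full invocation looks like this (the MapServer URL here is a made-up placeholder – substitute the layer URL you found):

```shell
# Install once; the pip package is named esridump but it provides
# the esri2geojson command
pip install esridump

# program | MapServer layer URL (hypothetical) | output filename
esri2geojson \
  'https://gis.example.gov/arcgis/rest/services/Boundaries/MapServer/1' \
  cookcounty_commissioners.geojson
```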

After you enter the command into your computer’s terminal, press enter. esri2geojson will report back once, after it finds and understands the MapServer URL you gave it. When it’s done, the command will “close” and your computer’s terminal will wait for the next command.

First, can you answer: Are most building permits issued to North Michigan Avenue (between Madison Street, 0 north/south, and Oak Street, 1000 north), or South Michigan Avenue (between Madison Street, 0 north/south, and um, somewhere south of 130th Street, 13000 south)?

Here’s the answer…

Even though South Michigan Avenue is at least 13x longer than North Michigan Avenue, South Michigan Avenue has 39 percent fewer building permits!

From 2006 to yesterday (Saturday), there were 7,828 building permits issued to projects on North Michigan Avenue and 4,714 building permits issued on South Michigan Avenue.
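The percentage follows directly from those two counts:

```shell
# (7,828 - 4,714) / 7,828 * 100 = ~39.8 percent fewer permits
# on South Michigan Avenue
awk -v north=7828 -v south=4714 \
  'BEGIN { printf "%.1f\n", (north - south) / north * 100 }'
```

which prints 39.8.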

The most common address on North Michigan Avenue to receive building permits was 875 N Michigan Avenue. It’s also the most common address to receive building permits on all Chicago streets.

What’s there? The John Hancock Center (tower)!

The average building address number on North Michigan Avenue is 540.6. That means that building permits on North Michigan Avenue concentrate around Grand Avenue, which is near the city’s biggest Marriott hotel and the Under Armour flagship store.

The next most common street – after South Michigan Avenue – is North Clark Street, which extends from Madison Street (0 north/south) to the northern edge of the city at Howard Street (7600 north), making it about 7.6 times as long as North Michigan Avenue.

Businesses in the 400 block of South Clark Street, as of when the photo was taken in November 2008. I believe the hotel is still there. This is the busiest block of South Clark Street, for building permits. Photo by Bruce Laker.

South Clark Street doesn’t register in the top 10 or even the top 100. It comes in at number 162, with 772 building permits. This is surprising to me because South Clark Street runs from Madison Street (0 north/south) downtown to 2200 south, and has a lot of downtown office buildings.

South LaSalle Street (3,613 building permits), South Wabash Avenue (2,916), and South Dearborn (1,611) are all in the top 50. The data could be wrong somehow.


Over on my website Chicago Cityscape I’ve assembled a map of maps: There are 20,432 maps in 36 layers. You might say there are 36 maps, and each of those maps has an arbitrary number of boundaries within. I say there are 20,000+ maps because there’s a unique webpage for each of them that can tell you even more information about that map.

This post is to throw out some analysis of these maps, in addition to the simple counts above.

The data comes from the City of Chicago, Cook County, and the U.S. Census Bureau. Some layers have come from bespoke sources, including the entrances of CTA and Metra stations drawn by Yonah Freemark and me for Transit Explorer. The sections of the Chicago River were divided and sliced by the Metropolitan Planning Council. The neighborhood and business organizations layers were drawn by me, by interpreting textual descriptions of the organizations’ boundaries, or by visually copying an organization’s own map.

There are 6,879 unique words longer than 2 characters in the metadata of this map of maps. The most common word is “annexation”, which makes sense, given that the layer with the most maps shows the 10,668 Cook County annexation actions since 1830, when the first known plat was incorporated into the City of Chicago.
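Tallying words like that is a classic shell pipeline. Here’s a sketch run on a tiny stand-in file (metadata.txt below is invented for illustration, not the real metadata dump):

```shell
# A toy stand-in for the metadata dump
printf 'Annexation of annexation planned development\n' > metadata.txt

# Split into words, lowercase them, keep only words longer than
# 2 characters, then tally and sort by frequency
tr -cs '[:alpha:]' '\n' < metadata.txt |
  tr '[:upper:]' '[:lower:]' |
  awk 'length > 2' |
  sort | uniq -c | sort -rn
```

The top line of the output is “2 annexation”.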

The GeoJSON file, an open, human-readable GIS format, comes out to 30 MB, and it may break your browser when you try to display this layer.

The next group of words are also generic, like “planned” and “development”, related to the Planned Development kind of zoning process in Chicago – called Planned Unit Development in other jurisdictions.

After that, some names of municipalities that traded back and forth between unincorporated Cook County and incorporated municipalities are on the list.

Working down the list, however, it gets really boring and I’m going to stop. I bet if you’re a smarter data science person you can find more interesting patterns in the words, but I’ve also increased the number of generic words (like planned development) by adding these as keywords to each map’s “full text search” index, to ensure that they would respond to a variety of search phrases from users.

Extract free and open source data from OpenStreetMap

Open the Overpass Turbo website and, on the map, search for the city from which you want to extract data. (The Overpass query will be generated in such a way that it’ll only search for data in the current map view.)

Click the “Wizard” button in the top toolbar. (Alternatively you can copy the code below and paste it into the text area on the website and click the “Run” button.)

In the Wizard dialog box, type in “railway=subway” in order to find metro, subway, or rapid transit lines. (If you want to download interstate highways, or what they call motorways in the UK, use “highway=motorway”.) Then click the “build and run query” button.

In a few seconds you’ll see lines and dots (representing the metro or subway stations) on the map, and a new query in the text area. Notice that the query has looked for three kinds of objects: node (points/stations), way (the subway tracks), relation (the subway routes).

If you don’t want a particular kind of object, then delete its line from the query and click the “Run” button. (You probably don’t want relation if you’re just needing GIS data for mapping purposes, and because routes are not always well-defined by OpenStreetMap contributors.)

Download the data by clicking the “Export” button. Choose from one of the first three options (GeoJSON, GPX, KML). If you’re going to use a desktop GIS software, or place this data in a web map (like Leaflet), then choose GeoJSON. Now, depending on what browser you’re using, a couple things could happen after you click on GeoJSON. If you’re using Chrome then clicking it will download a file. If you’re using Safari then clicking it will open a new tab and put the GeoJSON text in there. Copy and paste this text into TextEdit and save the file as “mexico_city_subway.geojson”.

Screenshot 1: After searching for the city for which you want to extract data (Mexico City in this case), click the “Wizard” button and type “railway=subway” and click run.

Screenshot 2: After building and running the query from the Wizard you’ll see subway lines and stations.

Screenshot 3: Click the Export button and click GeoJSON. In Chrome, a file will download. In Safari, a new tab with the GeoJSON text will open (copy and paste this into TextEdit and save it as “mexico_city_subway.geojson”).

Convert the free and open source data into a shapefile

After you’ve downloaded (via Chrome) or re-saved (Safari) a GeoJSON file of subway data from OpenStreetMap, open QGIS, the free and open source GIS desktop application for Linux, Windows, and Mac.

In QGIS, add the GeoJSON file to the table of contents by either dragging the file in from the Finder (Mac) or Explorer (Windows), or by clicking File>Open and browsing and selecting the file.

Convert it to a shapefile by right-clicking on the layer in the table of contents and clicking “Save As…”

In the “Save As…” dialog box choose “ESRI Shapefile” from the dropdown menu. Then click “Browse” to find a place to save this file, check “Add saved file to map”, and click the “OK” button.

A new layer will appear in your table of contents. In the map this new layer will be layered directly above your GeoJSON data.

Screenshot 4: The GeoJSON file exported from Overpass Turbo has now been loaded into the QGIS table of contents.

Screenshot 5: In QGIS, right-click the layer, select “Save As…” and set the dialog box to have these settings before clicking OK.
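As an aside, if you have GDAL installed, its ogr2ogr command-line tool can do the same GeoJSON-to-shapefile conversion without opening QGIS (the filenames assume the export from earlier in this post):

```shell
# Convert the Overpass Turbo GeoJSON export to an Esri shapefile
ogr2ogr -f 'ESRI Shapefile' mexico_city_subway.shp mexico_city_subway.geojson
```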

Query for finding subways in your current Overpass Turbo map view

/*
This has been generated by the overpass-turbo wizard.
The original search was:
“railway=subway”
*/
[out:json][timeout:25];
// gather results
(
// query part for: “railway=subway”
node["railway"="subway"]({{bbox}});
way["railway"="subway"]({{bbox}});
relation["railway"="subway"]({{bbox}});/*relation is for "routes", which are not always
well-defined, so I would ignore it*/
);
// print results
out body;
>;
out skel qt;
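You don’t strictly need the website, either: the same query can be sent straight to a public Overpass API endpoint with curl, replacing {{bbox}} with real south,west,north,east coordinates (the Mexico City bounding box below is approximate). Note that the raw API returns Overpass JSON rather than GeoJSON – the website’s Export button (or a tool like osmtogeojson) handles that conversion.

```shell
# Query a public Overpass endpoint directly; bbox is south,west,north,east
curl -s 'https://overpass-api.de/api/interpreter' \
  --data-urlencode 'data=[out:json][timeout:25];
(
  node["railway"="subway"](19.2,-99.4,19.6,-98.9);
  way["railway"="subway"](19.2,-99.4,19.6,-98.9);
);
out body; >; out skel qt;' \
  > mexico_city_subway.json
```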

I created a combined dataset of over 2 million names, including contractors, architects, business names, and business owners and their shareholders, from Chicago’s open data portal, and property owners/managers from the property tax database. It’s one of three new features published in the last couple of weeks.

Type a person or company name in the search bar and press “search”. In less than 1 second you’ll get results and a hint as to what kind of records we have.

What should you search?

Take any news article about a Chicago kinda situation, like this recent Chicago Sun-Times article about the city using $8 million in taxpayer-provided TIF district money to move the Harriet Rees house one block. The move made way for a taxpayer-funded property acquisition on which the DePaul/McCormick Place stadium will be built.

The CST is making the point that something about the house’s sale and movement is sketchy (although I don’t know if they showed that anything illegal happened).

There are a lot of names in the article, but here are some of the ones we can find info about in Chicago Cityscape.


There are several hundred condo units in the building at 235 W Van Buren Street, and each unit is associated with multiple Property Index Numbers (PIN). Photo by Jeff Zoline.

Several people have used Chicago Cityscape to try and find who owns a property. Since I’ve got property tax data for 2,013,563 individually billed pieces of property in Cook County I can help them research that answer.

The problem, though, is that the data, from the Cook County combined property tax website, only shows who receives the property tax bills – the recipient – who isn’t always the property’s owner.

The combined website is a great tool. Property value info comes from the Assessor’s office. Sales data comes from the Recorder of Deeds, which is another, separately elected, Cook County government agency. Finally, the Treasurer’s office, a third agency, also with a separately elected leader, sends the bills and collects the tax.

The following is a list of the top 100 (or so) “property tax bill recipients” in Cook County for the tax years 2010 to 2014, ranked by the number of associated Property Index Numbers.

Many PINs have changed recipients after being sold or divided, and the data only lists the recipient as of the PIN’s final tax year. A tax bill for Unit 1401 at 235 W Van Buren St was at one time sent to “235 VAN BUREN, CORP” (along with 934 other bills), but in 2011 the PIN was divided after the condo unit was sold.

The actual number is closer to 90, arrived at by combining 5 names that seem to be the same (using OpenRefine’s clustering function) and removing 5 “to the current taxpayer” and empty names. You’ll notice “Altus” listed four times (they’re based in Phoenix) and Chicago Title Land Trust, which can help property owners remain private, listed twice (associated with 643 PINs).

[table id=2 /]


This building at 1711 N Kimball no longer receives mail and the local mail carrier would mark it as vacant. After a minimum length of time the address will appear in the United States Postal Service’s vacancy dataset, provided by the federal Department of Housing and Urban Development. Photo: Gabriel X. Michael.

Working with accurate ZIP code data in your geographic publication (website or report) or demographic analysis can be problematic. The most accurate dataset – perhaps the only one that could be called reliably accurate – is one that you purchase from one of the United States Postal Service’s (USPS) authorized resellers. If you want to skip the introduction on what ZIP codes really represent, jump to “ZIP-code related datasets”.

Understanding what ZIP codes are

The post office’s ZIP code data, which it uses to deliver mail and not to locate people the way your publication or analysis does, is not free. It is also, unbeknownst to many, a dataset that lists mail carrier routes. It’s not a boundary or polygon, although many of the authorized resellers transform it into a boundary so buyers can geocode the locations of their customers (retail companies might use this for customer tracking and profiling, and petition-creating websites for determining your elected officials).

The Census Bureau has its own issues using ZIP code data. For one, the ZIP code data changes as routes change and as delivery points change. Census boundaries need to stay somewhat constant to enable comparing geographies over time, and Census tracts stay the same for a period of 10 years (between the decennial censuses).

Understanding that ZIP codes are well known (everybody has one and everybody knows theirs) and that it would be useful to present data on that level, the Bureau created “ZIP Code Tabulation Areas” (ZCTA) for the 2000 Census. They’re a collection of Census tracts that resemble a ZIP code’s area (they also often share the same 5-digit identifiers). The ZCTA and an area representing a ZIP code have a lot of overlap and can share much of the same space. ZCTA data is freely downloadable from the Census Bureau’s TIGER shapefiles website.

Here’s a real-world example of the kinds of problems that ZIP code data availability and comprehension can cause: Those working on the Chicago Health Atlas ran into a problem because they were using two different datasets: ZCTAs from the Census Bureau and ZIP codes as prepared by the City of Chicago and published on its open data portal. Their solution, which is really a stopgap measure and needs further review not just by those involved in the app but by a diverse group of data experts, was to add a disclaimer that they use ZCTAs instead of the USPS’s ZIP code data.

ZIP-code related datasets

Fast forward to why I’m telling you all of this: The U.S. Department of Housing and Urban Development (HUD) has two ZIP-code based datasets that may prove useful to mappers and researchers.

1. ZIP code crosswalk files

This is a collection of eight datasets that link a level of Census geography to ZIP codes (and the reverse). The most useful to me is ZIP to Census tract. This dataset tells you in which ZIP code a Census tract lies (including if it spans multiple ZIP codes). HUD is using data from the USPS to create this.

The USPS employs thousands of mail carriers to deliver things to the millions of households across the country, and it keeps track of when the mail carrier cannot deliver something because no one lives in the apartment or house anymore. The address vacancy data tells you the following characteristics at the Census tract level:

total number of addresses the USPS knows about

number of addresses on urban routes to which the mail carrier hasn’t been able to deliver for 90 days or longer

You must register to download the vacant addresses data and be a governmental entity or non-profit organization*, per the agreement** HUD has with USPS. Learn more and download the vacancy data, which HUD updates quarterly.

Tina Fassett Smith is a researcher at DePaul University’s Institute of Housing Studies and reviewed part of this blog post. She stresses that readers should ignore the “no-stat” addresses in the USPS’s vacancy dataset: research by her and her colleagues at the IHS concluded that this section of the data is unreliable. Tina also said that the methodology mail carriers use to identify vacant addresses and places undergoing change (construction or demolition) isn’t made public, and that mail carriers aren’t compensated for collecting the data, giving them little incentive to do it well. Tina further explained the issues with no-stat:

We have seen instances of a relationship between the number of P.O. boxes (i.e., the presence of a post office) and the number of no-stats in an area. This is one reason we took it off of the IHS Data Portal. We have not found it to be a useful data set for better understanding neighborhoods or housing markets.

** This agreement also states that one can only use the vacancy data for the “stated purpose”: “measuring and forecasting neighborhood changes, assessing neighborhood needs, and measuring/assessing the various HUD programs in which Users are involved”.


This tutorial is a direct response to a question about which Chicago beach has the largest parking lot. Matt Nardella of Moss Design, in a response to a Twitter-based conversation about Alderman Cappleman’s suggestion that perhaps Montrose beach has too much parking, turned to Wikipedia for the answer. That’s where it says that Montrose beach has the largest parking lot of any of Chicago’s 27 beaches.

Now we’re going to try and prove which beach has the largest associated parking lot.

This tutorial will teach you how to (1) display Chicago beaches, (2) download data held in OpenStreetMap, (3) find the parking lots within the OpenStreetMap data, (4) find the parking lots near the beaches, and (5) calculate each parking lot’s area (in square feet). You can use this tutorial to accomplish any one of these tasks, or the same tasks on a different part of OpenStreetMap data (like the area of indoor shopping malls).

You’ll need the QGIS software before starting. You’ll also need at least 500 MB of free space. Start a project folder called “Biggest Parking Lots in Chicago” and make two more folders, within this folder, called “origdata” and “data”.

First, let’s get some data about beaches

Since we only want to know about the parking lots near Chicago beaches we need to get a dataset that locates them. This data is presumably within the same OpenStreetMap extract we’re waiting for, but it’s best to go to the most reliable source.

Open the parks shapefile in a new document in QGIS (call it “map01a.qgs”). You might not see the data so right-click the parks layer and select “Zoom to layer extent”.

Filter out all the points that aren’t beaches by using the query builder. Right-click the layer and select “Filter…” and input this filter expression: "FACILITY_N" = 'BEACH'

Your map will now show 26 points along an invisible lakefront and then the beach at Humboldt Park.

For the rest of this tutorial we’ll reference the beaches layer as ParkFacilities.

Second, let’s get some data from OpenStreetMap

The easiest way to grab data from OpenStreetMap is by using QGIS, a free, open source desktop GIS application that has myriad plugins that match the capabilities of the heavyweight ESRI ArcGIS line of software. We can download OpenStreetMap data straight into QGIS.

Click on the Vector menu and select OpenStreetMap>Download data.

We want as much data as will cover the beaches information so in the Extent section of the dialog box choose “From layer” and select the beaches layer (called ParkFacilities).

Browse to the “origdata” folder you created in the first task and choose the filename “chicago.osm”.

Click OK and watch the progress meter tell you how much data you’ve downloaded from OpenStreetMap.

Once it’s completed downloading, click “Close”. Now we want to add this data to our map.

Drag the chicago.osm file from your file system into the QGIS Layers list. A dialog box will appear asking which layers you want to add.

Select the layer that has the type “MultiPolygon”. This represents areas like buildings and parking lots.

Third, display the OpenStreetMap data and eliminate everything but the parking lots

We only want to compare parking lots in this dataset with beaches in the previous dataset so we need to eliminate everything from the OpenStreetMap data that’s not a parking lot. Since OSM data depends on tags we can easily select and show all the objects where “amenity” = “parking”.

Filter out all the polygons that aren’t parking lots by using the query builder. Right-click the layer and select “Filter…” and input this filter expression: "amenity" = 'parking'. Hopefully all the parking lots have been drawn so we can analyze a complete dataset!

Your map will now show little squares, rectangles, and myriad odd shapes that represent parking lots around Chicagoland. (Most of these have been drawn by hand.) It should look like Image XXX.

Since the beaches data is stored in a projection with the code EPSG:3435 and the OpenStreetMap data is stored in EPSG:4326, we need to convert the OpenStreetMap data to match the beaches (because we’re going to be measuring distances in feet instead of degrees).

Right-click the layer and select “Save As…” and choose the format “ESRI Shapefile”. Then click the top Browse button and select a location on your hard drive for the converted file.

For “CRS” choose “Selected CRS”. Then click the bottom Browse button and search for the EPSG with the codename 3435. Select the checkbox named “Add saved file to map” so the new layer will be immediately added to our map.

Fourth, select all the parking lots near a beach

This task will select all the parking lots near the beaches. I chose 2,000 feet but you could easily choose a different distance. You might want to measure on Google Earth some minimum and maximum distances between beaches and their respective, associated parking lots.

(This task is easier using PostGIS which has a ST_DWithin function to find objects within a certain distance because we can avoid having to create the buffer in QGIS.)
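For the record, the PostGIS approach looks like this (the database, table, and column names are hypothetical, and both geometry columns are assumed to already be in EPSG:3435, so the distance argument is in feet):

```shell
# Find parking lots within 3,000 feet of any beach, via PostGIS
psql -d cityscape -c "
  SELECT DISTINCT p.gid
  FROM parking_lots p
  JOIN beaches b ON ST_DWithin(p.geom, b.geom, 3000);
"
```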

Create a 2,000 feet buffer. Select Vector>Geoprocessing tools>Buffer.

In the Buffer(s) dialog box, select ParkFacilities (which has your beaches) as the “Input vector layer”. Choose a distance of 2000 (the units are pre-chosen by the projection and since we’re using a projection that’s in feet, the distance unit will be feet).

Double check that 2,000 feet was enough to select the parking lots. In my case, I see that the point representing Montrose beach was further than 2,000 feet away from a parking lot.

Let’s do it again but with 3,000 feet this time, and saving the “Output shapefile” as “beaches buffer 3000ft.shp”.

This time it worked and the nearest parking lots are now in the 3,000 feet radius buffer. You can see in Image XXX how the two concentric circles stretch out from the beach point towards the parking lots.

We’re not done. We’re next going to use our newly created 3,000 feet buffers to tell us which parking lots are in them. These will be presumed to be our beach parking lots.

Use the “Select by location” tool to find the parking lots that intersect our 3,000 feet buffers. Select Vector>Research Tools>Select by location.

Follow me: we want to select features in parking 3435 [our parking lots] that intersect features in beaches buffer 3000ft [our beach buffers]. We’ll modify the current selection by creating a new selection so that we don’t accidentally include any features previously selected.

You’ll now see a bunch of parking lots turn yellow meaning they are actively selected.

Let’s save our selected parking lots as a new file so it will be easier to analyze just them. Right-click “parking 3435” and select “Save Selection As…” (it’s important to choose “Save Selection As” instead of “Save As” because the former will save just the parking lots we’ve selected).

Save it as “selected parking 3435.shp” in your “data” folder. The CRS should be EPSG:3435 (NAD83 Illinois StatePlane East Feet). Check off “Add saved file to map” and click OK.

Turn off all other layers except ParkFacilities to see what we’re left with and you’ll see what I show in Image XXX.


People float by the Montgomery Ward Complex in kayaks. Photo by Michelle Anderson.

Last week I met with the passionate staff at Landmarks Illinois to talk about Licensed Chicago Contractors. I wanted to understand the legal process for historic preservation and determine ways to highlight landmarked structures on the website and track any modifications or demolitions to them.

I used pgShapeLoader to import them to my DigitalOcean-hosted PostgreSQL database and modified some existing code to start looking at these two new datasets. Voila, you can now track what’s going on in the Montgomery Ward Company Complex – currently occupied by “600 W” (at 600 W Chicago Avenue) hosting Groupon among other businesses and restaurants.

Today I was messing around with some queries after I saw that the ward containing this place on the National Register – the 27th – also had a bunch of other listed spots.

I wrote a query to see which wards have the most places on the National Register. The table below lists the top three wards, with links to their pages on Licensed Chicago Contractors. You’ll find that many listings have no building permits associated with them, for two reasons: the listing’s small geography may not quite overlap the locations of issued permits (they can be a few feet off), and we don’t have a copy of all permits yet.

[table id=15 /]

Four wards don’t have any listings on the National Register of Historic Places, and nine wards have one listing.