Tag: GIS

Digitizing the HOLC Collection for Mapping Inequality

The DSL recently released its first atlas map since the launch of American Panorama in December 2015. Mapping Inequality: Redlining in New Deal America brings to life the study of New Deal America, the federal government, housing policy, and inequality by offering complete online access to the national collection of “security maps” and area descriptions produced between 1935 and 1940 by the Home Owners’ Loan Corporation (HOLC). To read more about HOLC and the New Deal, visit the Introduction.

Since this blog focuses on “all things spatial,” I wanted to touch on the massive amount of GIS work that went into creating this project. First, I would like to acknowledge all of our student interns at UofR, who spent countless hours (don’t worry, we paid them) making this possible. I would also like to thank our collaborators at Virginia Tech and the University of Maryland for their contributions to the GIS efforts behind the project. The students learned a great deal about georeferencing, digitizing, database management, and topology rules. Most of them had never worked with GIS or spatial data before joining the DSL. Here are some stats and figures that show just how much work went into creating Mapping Inequality, work that most people never see.

-Georeferencing was by far the largest task of the project. The time required to georeference each city varied significantly depending on its size and layout. One reason we added such a large number of control points was to ensure that roads lined up correctly against the modern-day basemap. Once rectified, the maps were tiled and served out to the application. Note: all of these maps are downloadable via the site.

  • 166 maps
  • As many as 2,147 control points in a map
  • The average is about 434 per map
  • 72,024 control points (144k+ clicks)
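Under the hood, georeferencing uses those control points to fit a transformation from scanned-map pixel coordinates to real-world coordinates. We did this with desktop GIS tools, but the core math of a first-order (affine) fit can be sketched in plain Python; the function names and sample points below are ours, for illustration only.

```python
# Sketch of first-order (affine) georeferencing: fit six parameters from
# ground control points by least squares. Illustrative only -- the actual
# rectification was done with desktop GIS tools.

def fit_affine(pixels, world):
    """Fit x' = a*px + b*py + c and y' = d*px + e*py + f to control points."""
    A = [[px, py, 1.0] for px, py in pixels]  # design matrix rows

    def lstsq(b):
        # Solve the 3x3 normal equations (A^T A) z = A^T b by Gaussian elimination.
        n = len(A)
        AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(3)]
               for i in range(3)]
        Atb = [sum(A[k][i] * b[k] for k in range(n)) for i in range(3)]
        M = [AtA[i] + [Atb[i]] for i in range(3)]
        for col in range(3):                     # forward elimination with pivoting
            piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, 3):
                f = M[r][col] / M[col][col]
                for c in range(col, 4):
                    M[r][c] -= f * M[col][c]
        z = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):                      # back substitution
            z[r] = (M[r][3] - sum(M[r][c] * z[c] for c in range(r + 1, 3))) / M[r][r]
        return z

    ax = lstsq([w[0] for w in world])   # parameters for easting
    ay = lstsq([w[1] for w in world])   # parameters for northing
    return lambda px, py: (ax[0] * px + ax[1] * py + ax[2],
                           ay[0] * px + ay[1] * py + ay[2])
```

With enough well-spread control points, the residuals of a fit like this are what tell you whether roads will line up against a modern basemap.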

-Vectorization of the neighborhoods, traced from the rectified maps, required a lot of hands-on digitizing work. Topology rules played a large role in ensuring the quality and accuracy of the digitized polygons.

  • 7,513 polygons
  • As many as 498 vertices in a polygon
  • Average of 31 vertices
  • 229,829 vertices (clicks)
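Topology rules in desktop GIS automated our quality checks, but the kinds of problems they catch in hand-digitized polygons can be illustrated with a small sketch. The checks and function names below are ours, not the actual rule set we ran.

```python
# Toy sketch of the geometry checks that GIS topology rules automate
# (unclosed rings, duplicate vertices, collapsed polygons).

def shoelace_area(ring):
    """Signed area of a closed ring of (x, y) vertices."""
    return 0.5 * sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(ring, ring[1:]))

def validate_ring(ring):
    """Return a list of problems found in a digitized polygon ring."""
    problems = []
    if ring[0] != ring[-1]:
        problems.append("ring not closed")
    if any(a == b for a, b in zip(ring, ring[1:])):
        problems.append("duplicate consecutive vertices")
    if abs(shoelace_area(ring)) == 0:
        problems.append("zero area (collapsed polygon)")
    return problems
```

Running checks like these after every digitizing session catches errors while the scanned map is still fresh in the digitizer's mind.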

-Data Entry (Polygons): Each polygon had up to seven fields that needed to be entered manually. These included key attributes for the project such as the HOLC grade, polygon_id, and neighborhood name.

  • Up to seven fields for each polygon (id, grade, name, etc.)
  • About 45,000 data points


-Data Entry (Area Descriptions): Entering data for the area descriptions was very slow, which is why we have completed only 17 cities thus far. Some cities included up to 94 fields for each neighborhood, and some fields contained whole paragraphs (like the one seen below).

  • Up to 94 fields per neighborhood
  • So far 94,719 individual fields completed for 17 cities
  • Estimated about 900,000 when completed

Completing all of the GIS work described above took 4+ years. Managing this ongoing collaborative project had its hurdles but overall went smoothly. Having students work simultaneously produced 45GB+ of data in the end. We hope to work with the University of Maryland on their crowdsourcing platform to complete the remaining 150 or so area descriptions. Enjoy the project, and we hope you can use the data to uncover new stories and questions.


*If you are interested in learning more about the methods we used to complete this project, click on the link below and download the training manual.

HOLC Map Georeferencing: A Training Manual


Thanks to all of our great students!

Credit: Rob Nelson calculated the statistics and Nathaniel Ayers created the header photo. 

Contemporary Cartographer?

After reading the Crain’s article “The next hot job: Cartographer,” I started thinking about my background and how it ultimately led me to mapping. Because of my non-traditional background, I struggled for a while with calling myself a cartographer, until the other day. It seems I am not alone in what the article considers “contemporary cartography.” After looking back at my final Landscape Architecture project, I came to realize just how close cartography and landscape architecture are in their most basic forms: representing data, designs, ideas, and issues visually. This is at the heart of what cartographers and landscape architects do.

I realized that the elements found in most landscape master plans are just diagrams that help the user envision features in a geographic space. Could these be considered maps? What is a map? By definition, a map is “a diagrammatic representation of an area of land or sea showing physical features, cities, roads, etc.” Take a planting plan for a city park, for example: it is a diagrammatic representation of an area where certain data (plants) are spatially located. We are starting to see people push the idea of what a map actually is, and I think non-traditional and diverse backgrounds are what have sparked the innovation in “contemporary cartography” that the article speaks of. I am excited about the future of cartography because companies like Carto and Mapbox are providing the tools necessary for easily accessible mapping.

So it looks like I can attribute my excitement about and interest in mapping to my Landscape Architecture training, because at the end of the day I am still helping people visualize things that are not easily understood with words and can only be seen in diagrammatic representations!

“The Ideal Historical Atlas”

Image created by Nathaniel Ayers


As we get closer to releasing the first four maps of American Panorama: An Atlas of United States History, I look back on Charles O. Paullin’s 1932 Atlas of the Historical Geography of the United States. My first year at the DSL was spent collecting, formatting, organizing, and building the database for the online version of the Atlas. The Atlas contains nearly 700 unique and beautiful maps on topics ranging from cattle counts to explorations and rates of travel. I have always been awed by the craftsmanship and effectiveness of these maps, published over eighty years ago. Recently there has been a lot of interest in “retro” maps and in recreating them with new data. The Paullin Atlas is a little different in this regard, in that we added underlying data and implemented “A Shiny New Interface for a Classic Atlas,” as National Geographic put it. John K. Wright, the atlas’s editor, thought the maps were limited and could be more effective if visualized as a “collection of motion-picture maps.” This is what we tried to accomplish while staying respectful of the original plates. To this day, Charles O. Paullin’s Atlas is considered one of the most impressive atlases of American history. With the help of our friends at Stamen Design, I look forward to sharing American Panorama with everyone and hope to push the envelope as Wright and Paullin did when they set out to create “the ideal historical atlas.”

University of Richmond, 100 Years Ago


Lily Calaycay ’17 3D modeling North Court in Google SketchUp. Photo by Nate Ayers

Imagine what the University of Richmond’s campus was like over 100 years ago. Chris Kemp, the head of the Discovery, Technology, and Publishing department in Boatwright Library, and his team have been working on documenting the history of the college for its centennial. They have done a fantastic job archiving and sharing historical artifacts of the university’s past. The DSL has been assisting Chris’s group with georeferencing historical campus maps for an upcoming project, which gives a spatial timeline of the university’s history. Since we had fairly detailed maps of campus buildings, roads, and topography, what better way to envision the campus 100 years ago than a physical 3D model?


The base for any 3D model is the elevation data. Ours came from a 1911 survey map that was georeferenced and then digitized using various tools in ArcGIS. The contour lines were at 25′ intervals and covered all of campus as we know it today. Since this map had only contour lines and no buildings, we used a 1925 campus master plan to obtain building footprints and road lines, deleting some buildings to reflect the campus as it was around the time of the 1911 survey. The digitized footprints were then used in Google SketchUp to model basic building features; buildings were not highly detailed because of the printer’s limited resolution. The contour lines were converted to a TIN (Triangulated Irregular Network) to create a surface. Because the contour interval was so large, the TIN produced an unrealistic surface in certain portions of campus, so some hand smoothing was needed in SketchUp before the model was printed. Once the contours were converted to a surface, they were imported into ArcScene and exported as a VRML file so the model could be brought into SketchUp.
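Once contours become a TIN, the surface elevation at any point is interpolated within the triangle that contains it. Here is a minimal sketch of that barycentric interpolation, with made-up coordinates rather than the actual 1911 survey values.

```python
# Barycentric interpolation of elevation inside one TIN triangle.
# Coordinates and elevations below are invented for illustration.

def tin_elevation(p, tri):
    """tri is three (x, y, z) vertices; returns interpolated z at point p
    (an (x, y) pair), or None if p falls outside the triangle."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2
    if min(l1, l2, l3) < 0:
        return None  # point is outside this triangle
    return l1 * z1 + l2 * z2 + l3 * z3
```

Because the TIN interpolates linearly between contours, wide contour intervals produce long flat facets, which is exactly why the surface needed hand smoothing.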

Once the TIN surface was exported out of ArcScene, it was passed on to Fred Hagemeister, research analyst for the Center for Teaching, Learning, and Technologies, for 3D printing preparations. In SketchUp, the surface model was exaggerated 2x to highlight the topography of campus and given a graduated color ramp to distinguish high and low elevations. Building footprints were added and the surface was “graded” to better represent how the buildings actually sat in the landscape. The buildings and roads were then colored to add detail, and the campus started to take shape: buildings were colored red if they still exist and blue if they are no longer on campus. The final, and probably most daunting, task was turning the surface from a sheet into a solid shell. Once it was a solid shell, the model was scaled down and sectioned into 12 pieces, since the printer can only print an 8″×8″ piece. Below you will see the 3D printing process, from printing, to excavation, and finally, gluing.


With the power of GIS and collaborations between Boatwright Memorial Library, the Center for Teaching, Learning, and Technologies, and the Digital Scholarship Lab we were able to take a paper survey map from 1911 and turn it into a physical 3D model.  Two maps over a decade apart were stripped down to their raw data and rebuilt together to show a glimpse of what the University of Richmond might have looked like over a century ago. Check out Chris’s Blog to learn more about the project.




Chris Kemp placing a section of the 3D model. Model is made up of 12 individual sections. Photo by Angie White


Model once all of the pieces are in place. Photo by Angie White

Richmond Then and Now

Richmond was a very different place a decade after the Civil War than it is today. When we started working on the Visualizing the Past project for the Library of Virginia, we found a wonderful atlas of Richmond in 1876. This hand-painted atlas, published by F. W. Beers, features detailed buildings and their owners, parks, and public landmarks. These maps served as our basemap for the project because of the great detail they provided. While working on the project I became really interested in what has changed around Richmond since that time.

Richmond Then and Now Scratchoff




This application has been updated! Check out the new Richmond Then and Now!

Click this link or the image to explore Richmond Then and Now

Visualizing the Past

Over the past several months, the DSL has been collaborating with the Library of Virginia and Maurie McInnis, Vice Provost for Academic Affairs and Professor of Art History at the University of Virginia, on the To Be Sold: Virginia and the American Slave Trade exhibition. Read more about the exhibition below. Our role in the project was to create a 3D visualization of Richmond in the early 1850s. The visualization is used to help visitors envision Eyre Crowe’s journey through Richmond and experience the slave trade through his paintings and engravings. The model’s intent is not to replicate every detail of Richmond in 1853, but to provide a sense of the architectural styles and atmosphere of the city at the time. This was a challenge given the time period and the lack of information at such a grand scale. The foundation of any 3D city project is building footprints, which are a little hard to come by for the 1850s. We discovered a map made by F. W. Beers in 1876 that detailed buildings, parks, and other features quite well. Below is one export from David Rumsey’s Map Collection. We also used these maps as the basemap for the model.


F. W. Beers map of Richmond, published in 1876. Export from David Rumsey’s Map Collection.

The problem with the Beers map is that it depicts Richmond 20 years later than the decade of interest. To resolve this we referenced several maps of Richmond from the 1850s and adjusted our footprints accordingly. After georeferencing these images and digitizing the footprints, we started to think about modeling methods for a project like this. With the help of Maurie McInnis and Scott Nesbit, we gathered numerous photos of buildings and detailed descriptions of materials and architectural styles for the time period. We wanted to provide the greatest amount of detail without modeling 3,000+ buildings by hand. For the buildings we had photos or descriptions of, our student interns built models in Google SketchUp. With help from Nathaniel Ayers and me, the students modeled more than 30 buildings around 1850s Richmond. The students really enjoyed the project and got immersed in the details of the buildings they were modeling.

Even though the students modeled over 30 buildings, we still had at least 3,000 left, with no real idea what the majority of them looked like. I had heard about CityEngine by ESRI for a while but had never experimented with it. After reading about the Rome Reborn project, I felt it was a perfect solution to our problem. In short, CityEngine uses a procedural modeling approach: using rule files and GIS data, you can populate a large-scale 3D model in a matter of moments. Maurie helped us a great deal on this portion by providing detailed descriptions of facades and architectural styles found in Richmond at the time. Both the SketchUp and CityEngine models were exported and brought into 3D Studio Max, where Nathaniel Ayers did an outstanding job rendering the buildings, adding trees, and animating the video. Bringing everything into 3D Studio Max gave the model consistency, since we used two different software packages to populate the buildings.
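The essence of procedural modeling is that a small set of rules plus GIS footprints can generate thousands of plausible buildings. Below is a toy sketch of the idea in Python; the zones, story counts, and facade names are invented, and real CityEngine rules (written in its CGA language) are far richer.

```python
import random

# Invented "rules" keyed by zone; real CGA rules also handle setbacks,
# roof forms, textures, and much more.
RULES = {
    "commercial":  {"stories": (2, 4), "story_height": 12.0, "facade": "brick"},
    "residential": {"stories": (1, 2), "story_height": 10.0, "facade": "clapboard"},
}

def generate_building(footprint, zone, rng=None):
    """Extrude a footprint into a building mass using the zone's rule."""
    rng = rng or random.Random(0)  # seeded for reproducible output
    rule = RULES[zone]
    stories = rng.randint(*rule["stories"])
    return {
        "footprint": footprint,
        "stories": stories,
        "height": stories * rule["story_height"],
        "facade": rule["facade"],
    }
```

Applied across 3,000 footprints, rules like these produce a city that conveys architectural character without any building being individually documented.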

This project utilized a combination of modeling approaches, which served us well considering the time period and the information available. Procedural modeling lets us focus on the architectural details of a specific time and place and apply those styles across a city, giving you a sense of what it might have looked like at the time. Using this alongside traditional, more detailed modeling resulted in a stunning visualization that showcases architectural character rather than specific buildings, without compromising the rest of the scene for lack of building-by-building descriptions. To see the whole video click here, or visit the To Be Sold exhibit at the Library of Virginia from Monday, October 27, 2014, through Saturday, May 30, 2015. Along with the exhibition, we hope to present this work at the upcoming Esri User Conference this July.


Screen capture showing a bird’s eye view of Richmond, Virginia, in 1853.

Screen capture showing the Capitol looking west.



Bird’s eye view looking West over the city.

Screen capture of the American Hotel.



To Be Sold: Virginia and the American Slave Trade
Monday, October 27, 2014—Saturday, May 30, 2015
Time: 9:00 AM–5:00 PM
Place: Lobby and Exhibition Hall; Free

This groundbreaking exhibition will explore the pivotal role that Richmond played in the domestic slave trade. Curated by University of Virginia professor Maurie McInnis, To Be Sold will draw from her recent book, Slaves Waiting for Sale: Abolitionist Art and the American Slave Trade, and be anchored by a series of paintings and engravings by Eyre Crowe, a British artist who witnessed the slave trade as he traveled across the United States in 1853. This internal trade accounted for the largest forced migration of people in the United States, moving as many as two million people from the Upper South to the Cotton South. Virginia was the largest mass exporter of enslaved people through the Richmond market, making the trade the most important economic activity in antebellum Virginia. This exhibition will not be merely a story of numbers and economic impact, but also one that focuses on individuals and the impact that the trade had on enslaved people.

A great open source tool for any GIS user

Before my time at the University of Richmond, the only mention of open source software I heard came from computer science majors and programmers. Open source naturally seemed intimidating, since it was new and mysterious, and in my mind sub-par to its proprietary counterparts. Both universities I attended had an ESRI site license and rarely touched on open source GIS tools or software. The DSL has always leaned toward open source software and its philosophy, which has been a great learning experience for me. I have begun to see the benefit of using both open source and proprietary software depending on the task.

While at the VAMLIS conference last week, I attended a workshop presented by Jonah Adkins, a senior GIS analyst for GISi. It was a great overview of OpenStreetMap along with some very useful open source tools on the web. Working with historical data, we tend to digitize a lot of polygons at the DSL, and when serving those polygons on the web it’s nice to generalize them so they load and render faster. I have always had trouble simplifying polygons effectively in Arc and have dreaded the process every time I need to generalize a new dataset. Below is a great time-saving tool for any GIS user struggling with simplifying polygons and wondering what tolerance levels to use.

When Jonah showed Mapshaper, “a tool for topologically aware shape simplification” that “reads and writes Shapefile, GeoJSON and TopoJSON formats,” I was ecstatic! Not only can you upload a Shapefile, you can watch the simplification in real time along with the percent change from the original polygons. It is one of the most simple yet effective tools I have ever used. It is as simple as this:

1. Click on the link above.

2. Configure your settings and upload your file.

3. Slide the simplify bar to simplify.

4. Export to Shapefile, GeoJSON, or TopoJSON.

5. Receive a zipfile and enjoy your simplified polygons.

6. Repeat!
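For the curious, the classic Douglas-Peucker algorithm, one of the simplification methods tools like Mapshaper implement, gives a feel for what the slider is doing: it drops any vertex that stays within a tolerance of the chord between the kept endpoints. A minimal sketch:

```python
# Minimal Douglas-Peucker line simplification: keep a vertex only if it
# strays more than `tolerance` from the chord between the kept endpoints.

def douglas_peucker(points, tolerance):
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]

    def chord_dist(p):
        # perpendicular distance from p to the line through the endpoints
        px, py = p
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            return ((px - x1) ** 2 + (py - y1) ** 2) ** 0.5
        return abs(dy * px - dx * py + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

    idx, dmax = max(((i, chord_dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= tolerance:
        return [points[0], points[-1]]   # everything in between is noise
    # keep the farthest point and recurse on both halves
    left = douglas_peucker(points[: idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right
```

Raising the tolerance removes more vertices, which is exactly the trade-off the Mapshaper slider lets you preview in real time.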





So don’t shy away from open source options like I did. They can make your life a lot easier and broaden your knowledge while streamlining your workflows. Using a mix of proprietary and open source software and tools can be a powerful combination!




What is Value-by-Alpha anyway?

As some of you know, we are currently producing a digital atlas of American history. While working on one of the maps for the Atlas, I was searching for a better way of showing foreign-born population than your run-of-the-mill choropleth map. I stumbled upon a paper by Robert Roth, Andy Woodruff, and Zachary Johnson titled “Value-by-alpha Maps: An Alternative Technique to the Cartogram.” You can read more about it on Andy Woodruff’s blog. After reading the paper and Andy’s blog, I got really excited about trying this for our foreign-born population map.

Though choropleth mapping and area cartograms are two of the most common techniques for mapping thematic variables such as foreign-born population, each has significant drawbacks. Choropleth maps fail to distinguish between areas of high and low population, while area cartograms address that issue but can be difficult to interpret given the spatial distortions they introduce. Roth et al. (2010) developed a third option: value-by-alpha mapping. For a foreign-born population map, the value-by-alpha technique uses varying opacities to emphasize areas of high population density and deemphasize areas of low population density. This equalizes the map by density while still showing the percentage of each county’s population born outside the US, all while preserving both shape and topology. Applying this method to foreign-born population effectively highlights dense areas with large foreign-born populations, revealing patterns that would likely be missed with traditional choropleth or area cartogram techniques.

My first attempt used ArcGIS. Achieving this in Arc is a little problematic, but nonetheless you get a pretty cool map. My one issue is that Arc really limits the color range and transparency values you can assign to a layer without some finagling. Andrew Wheeler has a great tutorial on his blog about how to do this in ArcGIS. I found a way around these limits in Arc, but only after moving on to my second attempt: you calculate a transparency value for each feature in the attribute table based on population density, then assign that calculated value using the Display Expression tab under Layer Properties. Here are the results using the method outlined in Andrew’s blog.

The second attempt used Leaflet and JavaScript. After discussing the first method with our director, Rob, we decided this would be better achieved in JavaScript with Leaflet. JavaScript lets you give each individual population density value its own transparency and color value, where Arc makes you clump these into categories. Rob helped a great deal with this method, since my programming skills are minimal. This approach gave us the greatest detail and really highlighted areas of high population density and high foreign-born population: the percent foreign-born value is equalized by population density using the alpha channel, which visually weights the map and neutralizes areas with low population density. Here is the same map as above, but built with JavaScript.
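The final map was built in JavaScript with Leaflet, but the per-county calculation at the heart of value-by-alpha is simple: pick a color from the thematic value and an alpha from population density. Here is that idea sketched in Python, with an assumed linear ramp and placeholder colors rather than the map's actual symbology.

```python
# Core of a value-by-alpha symbolizer: the color ramp encodes the thematic
# variable (% foreign-born) while the alpha channel encodes population
# density. Ramps and RGB endpoints here are placeholder choices.

def lerp(a, b, t):
    """Linear interpolation between two channel values, rounded to an int."""
    return round(a + (b - a) * t)

def value_by_alpha(pct_foreign_born, density, max_density,
                   low_rgb=(247, 247, 247), high_rgb=(33, 102, 172)):
    """Return an (r, g, b, a) tuple for one county."""
    t = max(0.0, min(pct_foreign_born / 100.0, 1.0))
    rgb = tuple(lerp(lo, hi, t) for lo, hi in zip(low_rgb, high_rgb))
    # sparsely populated counties fade toward invisible
    alpha = max(0.0, min(density / max_density, 1.0))
    return rgb + (round(alpha * 255),)
```

In a Leaflet style function the same calculation would set each GeoJSON feature's fill color and fill opacity, which is what lets every county carry its own continuous value instead of a handful of classed categories.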


Though there are a couple of ways to implement the value-by-alpha method, we found that the JavaScript approach gave us the most granular results and really conveyed what we were trying to show with the foreign-born data. Below is the final poster we presented at this year’s VAMLIS Conference.


