Getting started with mapping and GIS for free (Tips from a non-expert)


Sometimes it is useful to see results on a map. Maybe you want to see where your participants are coming from or show survey results geographically.

Mapping and GIS (geographic information systems) are skills I had been interested in learning for a while. I never seemed to have the time to really delve into them, so these interests took a back seat while other priorities popped up.

This past year I have been working on a project where geography and location are key, and so I finally had the push I needed to get up to speed. I had a minimal budget (read: $0) for software. Although there are pretty fancy GIS programs out there (ones that require minimal know-how), those weren’t in the cards.

Although there are other resources out there, these are the two that I used. They require no coding, making them very beginner-friendly.

1. Google Maps

If you are looking for very basic mapping, Google Maps can actually do quite a bit. You can draw polygons/boundaries, add points, add directions, and import data (although I think data imports are limited to 50 rows).

Here is a fictitious example of a program location (the purple star) and where the program participants live (the green dots).

[Figure: example map created in Google Maps]

The nice thing about Google is that everything is saved in the cloud and you can access your maps from anywhere (and easily share them with others).

2. QGIS


I needed to do more complex mapping than Google Maps allows, and so I turned to QGIS, an open source GIS tool. I will warn you that it has a steep learning curve, but there are many tutorials online (I found QGIS Tutorials and Tips extremely helpful!) and a community over at StackOverflow if you get stuck.

Here is another fictitious example of program locations (the grey circles) mapped in relation to income (red being the lowest income and the darker green being the highest income):

[Figure: example map created in QGIS]

I’d love to hear more from others about this subject. Do you know of a great mapping/GIS tool? Have you used mapping in evaluation? Let me know in the comments!


Non-linear relationships: The importance of examining distributions

Recently I was analyzing some data to help answer the question “what are the demographic differences between program graduates and program drop outs?” I did some modelling and found a few predictors, one of which was age.

I compared the average age between the groups and saw that the drop outs had a lower average age (42 years) than the graduates (44 years). Simple enough. But this simplistic explanation didn’t jibe with anecdotal information the program staff had given me. I wondered whether the relationship between age and program completion was linear (i.e., does a change in age always produce the same change in the likelihood of graduating?).

As I mentioned in my last post, I’ve been playing around with R. I recently came across something called a violin plot and I wanted to try it out. A violin plot is kind of like a box plot, except that instead of a plain old box it shows you the distribution of your data.

Here is an example of a box plot:

[Figure: box plot of age for drop outs and graduates]

The main thing that I immediately see from this chart is that on average, the drop outs were younger than the graduates.

Here is an example of a violin plot:

[Figure: violin plot of age for drop outs and graduates]

I get a different takeaway from this plot. You can see from the violin plot that the distribution of age for the drop outs looks a lot different from the distribution of age for the graduates. The bottom of the drop out violin is wider, showing that the drop outs skew a lot younger than the graduates. This suggests that we should be exploring the relationship between age and graduation more closely.
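In case you want to try it yourself, here is a minimal sketch of how a box plot and a violin plot can be made with ggplot2. The data frame and column names (and the numbers themselves) are made up for illustration, not the actual program data.

```r
library(ggplot2)

# Fictitious data, just so the example runs on its own
set.seed(1)
participants <- data.frame(
  status = rep(c("Dropped out", "Graduated"), each = 200),
  age    = c(rnorm(200, mean = 42, sd = 12), rnorm(200, mean = 44, sd = 10))
)

# Box plot of age by group
ggplot(participants, aes(x = status, y = age)) +
  geom_boxplot()

# Violin plot of age by group: same idea, but the width shows the distribution
ggplot(participants, aes(x = status, y = age)) +
  geom_violin()
```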

But what if you don’t use R and can’t create a violin plot? Histograms are standard tools for showing distributions and are much more common. A histogram is essentially a column chart that shows the frequency of values in your distribution (so for this example, it would show how many participants were 20 years old, 21 years old, 22 years old, you get the idea). Excel actually has a built-in feature to create histograms (click here for instructions). The tool bugs me a lot and it isn’t super intuitive to use, but it gets the job done.

Here are the distributions of age for both the drop outs and the graduates. Yes, yes, I know that my x-axes aren’t labelled and that my y-axes use different scales, but these choices were intentional because I want you to focus on the shape of the distributions, not the content.

[Figure: histograms of age for drop outs and graduates]

Again, you can see that the ages of the drop outs are concentrated at the younger end of the range (there is a higher proportion of younger participants than older ones). The histogram for the graduated group looks quite different.
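If you do happen to use R, the same side-by-side histograms only take a few lines with ggplot2. This sketch reuses the fictitious participants data frame from the violin plot example above.

```r
library(ggplot2)

# One histogram panel per group; free y-axis scales, like the Excel charts above
ggplot(participants, aes(x = age)) +
  geom_histogram(binwidth = 5) +
  facet_wrap(~ status, scales = "free_y")
```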

All of this evidence points to a non-linear relationship, meaning that the effect of age on whether or not a participant graduates is not the same across different age groups.

To take a closer look at this relationship, I calculated the drop-out rate for different age groupings and put them on a line chart. Aha! If the relationship between age and program completion were linear, we would expect this line to be straight. But it’s not. You can see that the drop-out rate declines with age until we hit age 40 or so. After that it’s more or less flat until age 70, and then it goes down again.

[Figure: line chart of drop-out rate by age group]
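For anyone curious how a chart like this could be put together in R, here is a rough sketch using dplyr and ggplot2. It again reuses the fictitious participants data frame from above, and the age breaks are placeholders rather than the groupings I actually used.

```r
library(dplyr)
library(ggplot2)

# Drop-out rate within each age group (breaks chosen only for illustration)
dropout_by_age <- participants %>%
  mutate(
    dropped   = status == "Dropped out",
    age_group = cut(age, breaks = seq(0, 100, by = 10))
  ) %>%
  group_by(age_group) %>%
  summarise(dropout_rate = mean(dropped))

# Line chart of drop-out rate across age groups
ggplot(dropout_by_age, aes(x = age_group, y = dropout_rate, group = 1)) +
  geom_line() +
  labs(x = "Age group", y = "Drop-out rate")
```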

This is an important piece of knowledge for program staff looking to target retention efforts, and something we wouldn’t have uncovered if we had simply stopped at comparing the average age of the drop-outs and the graduates.

Chronicling my adventures in R: Why switch from SPSS and favourite packages so far

For a long time I’ve been saying that I’m going to start using R (this post assumes that you know what R is; if you want a brief explanation, click here). I’ve officially declared 2017 to be the year that I switch from SPSS to R for all of my data analysis. I’m going to make a series of posts sharing what I learn along the way.

Some background: I have been using SPSS since I first learned data analysis in 2002. As is (was?) common in the social sciences, all of my undergraduate training, and a lot of my graduate training, was done using the GUI (graphical user interface). Over the past 5 years or so I have switched to syntax for reproducibility reasons. I have no prior experience with coding.

First off, why would I bother switching to R if SPSS has served me well for the past 15 years (yikes)? Good question! Here are the reasons that prompted me to make the switch:

  1. R is open source (read: free). SPSS is very expensive. The standard version is now more than $2,500 US per year! I also like to support the open source movement, which is about collaboration and community.
  2. It’s one of the leading tools in statistics. R is among the most widely used tools in statistics, data science, and machine learning. Because it is open source, other users are constantly creating packages (there are thousands that anyone can download and use). There is a large, active, and growing community of users, and this community is a great resource.
  3. The data visualization capabilities blow SPSS out of the water. Have you tried making a nice chart in SPSS? It’s an awful process and the end result isn’t great. My current workflow is to copy and paste SPSS output into Excel and do my visualization there. It works, but wouldn’t it be grand if I could just make nice charts by adding a few lines of code while analyzing the data?
  4. It’s a lot more flexible than SPSS. R is not just a piece of software, it is a programming language. With SPSS you are often ‘locked in’ to the options for analyses that the software gives you. With R, if you can write the code you can do just about anything.

That list sounds great, so why have I waited so long to make the switch? R has a steep learning curve, especially if you are like me and do not have a coding background. There are various online courses that introduce R. I have taken a few in the past and have found them to be quite helpful. The major thing that I learned from those courses was how to “think like a programmer”, and that was a large hurdle.

Now that I have the 101 material out of the way, I want to learn by doing and so I have been using R for all of my data analysis so far this year, mainly by following along with various tutorials. For example, recently I was doing a logistic regression. Because R has such an active user community, I was able to Google “R logistic regression tutorial” and bam, I could follow along with my own data.
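To give a flavour of what those tutorials walk you through, a basic logistic regression in R boils down to a couple of lines. The data frame and variable names below are hypothetical placeholders, with simulated values only so the example runs.

```r
# Fictitious data; in practice you would use your own data frame
set.seed(2)
program_data <- data.frame(
  graduated = rbinom(300, 1, 0.6),          # 1 = graduated, 0 = dropped out
  age       = round(runif(300, 18, 80)),
  employed  = rbinom(300, 1, 0.5)
)

# Logistic regression: model the probability of graduating
model <- glm(graduated ~ age + employed, data = program_data, family = binomial)

summary(model)     # coefficients, standard errors, p-values
exp(coef(model))   # odds ratios, which are often easier to interpret
```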

My first major piece of advice in using R is to use RStudio, which is free for personal use. It has many advantages over base R, including a graphical workspace and a full-featured text editor.

Finally I want to share some packages that I have been using a lot as I get started with R:

  1. knitr – With SPSS I had so many files to go along with my analysis: there were syntax files and then there were output files. These files are difficult to share if the person you are sharing with doesn’t have SPSS, not to mention that it can be difficult to follow along when reading someone else’s output file. Knitr generates a single document that contains your code (syntax) and your results (output), and lets you add formatted text with an intro, commentary, and conclusion to your analysis. So at the end of the day you have one file for everything, and it can easily be shared as an .html, .doc, or .pdf file. Knitr is seamlessly integrated into RStudio (see above).
  2. ggplot2 – As I said before, the data visualization capabilities were a major draw for me to adopt R. ggplot2 can make gorgeous charts where you can customize almost all of the features.
  3. corrplot – I’ve struggled with the presentation of correlation matrices before, and I usually end up making a heatmap/table in Excel. I just stumbled across the corrplot package a few days ago and immediately fell in love. It is still a type of heatmap, but it makes the correlation matrix a lot more user-friendly to share with non-stats folks (there’s a short example after this list).
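Here is a tiny, hypothetical taste of the last two packages, using R’s built-in mtcars data set just so the code runs as-is; it isn’t from any real analysis of mine.

```r
library(ggplot2)
library(corrplot)  # install.packages("corrplot") if you don't have it yet

# ggplot2: a quick scatter plot with a fitted line
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  geom_smooth(method = "lm")

# corrplot: a correlation matrix drawn as a friendlier heatmap-style chart
M <- cor(mtcars[, c("mpg", "wt", "hp", "disp")])
corrplot(M, method = "circle")
```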

That sums up everything I wanted to say about my R journey so far. I’m aiming to write one of these posts every month or so and share my learnings. In the meantime I’d love to hear from other evaluators using R. Have you recently made the switch? How has it helped you?