Switching from SPSS to R: Save scripts, not workspaces!


I’m back with a quick lesson that I have learned while switching to R for data analysis (if you are curious about why I am doing so, I have a list of reasons here). This was a bit of a painful lesson that cost me a lot of time: SAVE SCRIPTS, NOT WORKSPACES.

What does that mean?

I am going to explain this in non-technical terms (sorry R experts), mainly because I don’t know the technical lingo.

In SPSS, your data file is a tangible thing. You can make changes to it and save it and then go back to the actual file and boom, there is the data just as you left it.

In R things work a bit differently. All changes to data (and analysis, and charts, and everything else) are executed through scripts. You write a block of code that does something. You save this script, and each time you open R, you re-run the script to re-create your objects. Objects and dataframes aren’t “real” as they are in SPSS.
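To make that concrete, here is a minimal sketch of what a self-contained script looks like (the data and object names below are made up for illustration):

```r
# A minimal, self-contained script: every object the analysis needs
# is created here, so re-running it from the top rebuilds everything.
# (Data and names below are hypothetical; in a real project the first
# step would be something like: participants <- read.csv("data.csv"))
participants <- data.frame(
  age    = c(23, 35, 41, 52),
  status = c("drop out", "graduate", "graduate", "drop out")
)

# Every derived object is defined in the script,
# not left floating in a saved workspace
adults_over_30 <- subset(participants, age > 30)
mean_age       <- mean(participants$age)

print(mean_age)
```

If the script runs top to bottom on a fresh session without errors, nothing is hiding in your workspace.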

Like most R users, I use RStudio. RStudio is amazing and awesome and I love it. But it has a default setting that was enabling a bad habit I carried over from SPSS (i.e., not re-running a script each time to make sure that it included everything my analysis needed, and treating objects as “real”). By default, RStudio automatically saves your workspace and re-loads it the next time you start the program. Amazing! Or so I thought.

I have been working through the book R for Data Science (a great book which is FREE by the way) and in the workflow section the authors make this point very clearly: save scripts, not workspaces. I didn’t really get why this was so important. It was so much easier just to open RStudio and have my previous workspace waiting for me.

Unfortunately I learned first-hand why this is so important.

I manually cleared my workspace because I thought I was done with my analysis (and I was sure my script had everything the analysis needed). Turns out my script was missing something pretty important. When I had to go back to my analysis to change something, lo and behold, a few objects were missing from my script. I had to manually re-create them from memory.

It wasn’t the end of the world since I was able to do that, but it cost me a lot of time. And what if I had to go back to that analysis a year later? My memory would have certainly faded. If I had been working solely from scripts the entire time this error would have been caught right away (or not have occurred in the first place).

Thankfully you can change the default setting in RStudio so that it doesn’t save your workspace and enable this bad habit. Instructions are here. Don’t repeat my mistake!

Non-linear relationships: The importance of examining distributions

Recently I was analyzing some data to help answer the question “what are the demographic differences between program graduates and program drop outs?” I did some modelling and found a few predictors, one of which was age.

I compared the average age between the groups and saw that the drop outs had a lower average age (42 years) than graduates (44 years). Simple enough. But this simplistic explanation didn’t jibe with anecdotal information the program staff had given me. I wondered if the relationship between age and program completion was linear (i.e., does a change in age always produce the same change in the likelihood of graduating).

As I mentioned in my last post, I’ve been playing around with R. I recently came across something called a violin plot and I wanted to try it out. A violin plot is kind of like a box plot, except that instead of a plain old box it shows you the distribution of your data.

Here is an example of a box plot:


The main thing that I immediately see from this chart is that on average, the drop outs were younger than the graduates.

Here is an example of a violin plot:


I get a different takeaway from this plot. You can see from the violin plot that the distribution of age for the drop outs looks quite different from the distribution of age for the graduates. The bottom of the drop out violin is wider, indicating that the drop outs skew a lot younger than the graduates. This suggests that we should explore the relationship between age and graduation more closely.
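For anyone who wants to try this, a violin plot takes only a few lines with ggplot2. This is a sketch using made-up data; in the real analysis the data frame would hold each participant’s status and age:

```r
library(ggplot2)

# Made-up example data standing in for the real participant file:
# a status (drop out / graduate) and an age for each participant
set.seed(1)
participants <- data.frame(
  status = rep(c("drop out", "graduate"), each = 100),
  age    = c(rnorm(100, mean = 42, sd = 12),
             rnorm(100, mean = 44, sd = 10))
)

# geom_violin() draws the full distribution instead of a plain old box
p <- ggplot(participants, aes(x = status, y = age)) +
  geom_violin()
print(p)
```

Swapping `geom_violin()` for `geom_boxplot()` in the same code gives you the box plot version for comparison.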

But what if you don’t use R and can’t create a violin plot? Histograms are standard tools to show distributions and are much more common. A histogram is essentially a column chart that shows the frequency of values in your distribution (so for this example, it would show how many participants were 20 years old, 21 years old, 22 years old, you get the idea). Excel actually has a built-in feature to create histograms (click here for instructions). The tool bugs me a lot and it isn’t super intuitive to use, but it gets the job done.

Here is the distribution for age for both the drop outs and graduates. Yes, yes, I know that my x-axes aren’t labelled and that my y-axes use different scales but these choices were intentional because I want you to focus on the shape of the distributions, not the content.


Again, you can see that the age distribution of the drop outs is shifted toward younger ages (meaning that there is a higher proportion of younger participants than older ones). The histogram for the graduated group looks quite different.

All of this evidence points to a non-linear relationship, meaning that the effect of age on whether or not a participant graduates is not constant: a difference in age matters more in some age ranges than in others.

To take a closer look at this relationship, I calculated the drop out rate for different age groupings and put them on a line chart. Aha! If the relationship between age and program completion was linear, we would expect this line to be straight. But it’s not. You can see that the drop-out rate declines with age until we hit age 40 or so. After that it’s more or less flat until age 70, and then goes down again.
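The calculation behind that line chart is straightforward. Here is a sketch in base R with made-up data (the real analysis would use the actual participant file): bin the ages with cut(), then take the drop-out proportion within each bin.

```r
# Made-up example data standing in for the real participant file
set.seed(2)
participants <- data.frame(
  age     = sample(18:80, 500, replace = TRUE),
  dropped = rbinom(500, 1, 0.3)  # 1 = dropped out, 0 = graduated
)

# Bin ages into ten-year groups, then compute the
# drop-out rate (mean of the 0/1 indicator) per group
participants$age_group <- cut(participants$age,
                              breaks = seq(15, 85, by = 10))
dropout_rate <- tapply(participants$dropped,
                       participants$age_group, mean)

# Plot the rate per age group as a line chart
plot(dropout_rate, type = "b", xaxt = "n",
     xlab = "Age group", ylab = "Drop-out rate")
axis(1, at = seq_along(dropout_rate), labels = names(dropout_rate))
```

With the real data, a line like the one described above (declining to about age 40, then flattening) would fall out of exactly this kind of plot.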


This is important knowledge that program staff can use to target retention efforts, and something we wouldn’t have uncovered had we simply stopped at comparing the average age of the drop-outs and the graduates.