
The Role of IR in a Time of Crisis: Audience Questions, Answered

Reading time: 8 minutes

Rapid Insight recently partnered with University Business to host an informative panel discussion about the role of IR (Institutional Research) during times of crisis. The panelists answered questions on a variety of topics related to challenges and successes their departments have seen during the COVID-19 pandemic, as well as lessons learned for the future. The three IR experts featured in the panel were Laura, Joseph, and Drew.

Partnering with experts like Laura, Joseph, and Drew is one of the great pleasures of working at Rapid Insight. It’s fascinating to get a glimpse into their work and see how they’re implementing Rapid Insight’s data prep, predictive modeling, and data sharing tools.

The webinar generated a huge volume of questions from the audience, and the presenters were kind enough to offer follow-up on some of the questions we weren’t able to address in the time allotted for the live discussion. This post contains the panelists’ answers to audience-submitted questions, grouped below by topic.

Isn’t survey data difficult to analyze when the week-to-week situation is radically changing? Some would say that answers we may have gotten six weeks ago might not be relevant today.

Laura Miller: Yes, survey data is very difficult to analyze in the way we typically would view it. We’ve used survey data in two ways since the pandemic started.

First, we were in the middle of collecting NSSE data and reached out to them to see if we could have preliminary open-ended responses. They graciously provided us the preliminary data. We separated the responses: before campus operations were disrupted, and after. This allowed us to see if students were commenting on how we were doing amidst the disruption to determine if we needed to take any action based on what we read.
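As an illustration, that before-and-after split can be as simple as a couple of pandas filters. This is a minimal sketch; the file name, date column, and disruption date below are hypothetical placeholders, not the actual NSSE extract.

```python
import pandas as pd

# Hypothetical extract of preliminary NSSE open-ended responses;
# the file name and column names are placeholders.
responses = pd.read_csv("nsse_preliminary_responses.csv",
                        parse_dates=["response_date"])

# Date campus operations were disrupted (placeholder).
DISRUPTION_DATE = pd.Timestamp("2020-03-13")

before = responses[responses["response_date"] < DISRUPTION_DATE]
after = responses[responses["response_date"] >= DISRUPTION_DATE]

# Review post-disruption comments separately to decide whether action is needed.
print(f"{len(before)} responses before the disruption, {len(after)} after")
```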

The second way we used survey data was by launching a check-in survey. This check-in survey explained very clearly to students that we would be looking at their responses at an individual level for intervention. We used this survey data to help individual students with the needs they said they had at the time, so they could finish the semester strong. We also shared the aggregate data with faculty so they could see how many students were having to share devices with siblings, were struggling with mental health, were concerned about their wifi/internet connection, and so on.

Surveys were used as a tool for immediate change and intervention during this time rather than the typical assessment use.

Have any of your student surveys looked at how your students are handling mental health?

Laura Miller: We had one survey question where students could check the box if they had mental health concerns. Note: All survey questions were voluntary.

Could you share what questions you asked in the survey related to our current circumstances?

Laura Miller: We were very grateful for HEDS and their offering of a survey that addressed the current circumstances. Click here for a link to the survey.

When you survey students, is it more important to ask direct questions about students’ experience in this current situation, or do you use questions you have repeatedly used, and then figure out how to analyze data through a crisis lens?

Drew Thiemann: The context is so radically different this year. I would say the research question is the same but the situation is not comparable enough to call it apples-to-apples and make decisions based on the traditional paired t-test approach.
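For context, the traditional paired comparison Drew refers to would look something like the sketch below (the scores are invented for illustration). The caution is that even a statistically significant difference this year is confounded by the radically different context.

```python
from scipy import stats

# Hypothetical paired scores from the same students on a repeated survey item,
# one administration before the disruption and one after.
prior_term = [4.2, 3.8, 4.5, 3.9, 4.1, 4.4, 3.7, 4.0]
current_term = [3.6, 3.1, 4.0, 3.3, 3.8, 3.9, 3.0, 3.5]

t_stat, p_value = stats.ttest_rel(prior_term, current_term)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
# A "significant" drop here is not an apples-to-apples finding about the program;
# the context itself changed, which is Drew's point.
```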

Laura Miller: I agree with what Drew is saying. I would add that we have established check-in surveys that we do once a semester. There were a couple of questions on those surveys that were highly predictive of retention for us in the past. These were included on the survey to see if students changed how they responded (for example, did they become less confident of their intent to return next semester?). We included open-ended questions to get a sense of what might change their response to those questions. We used many more open-ended prompts in our survey to let students share openly.

As faculty and students were thrust into online learning with varying levels of preparation and access to resources, are there risks involved in gathering data to evaluate the experience? Can it be a fair assessment of online learning?

Drew Thiemann: When the coronavirus response first started and we made the decision to shift online, like many others in IR, I wondered if it offered an interesting natural experiment related to innovative pedagogies. At the same time, researchers must remember our core ethical principles, one of which is beneficence — is this helping or hurting, exploring or exploiting participants?

One of the resources that helped me frame my opinion is this blog post from Rebecca Barrett-Fox. I found it really thought-provoking and even a bit transgressive.

It helped me realize that yes, there are risks associated with evaluating this experience. Not only are there potential confounds in the data because of the way coronavirus forced our hands, but it also runs the risk of perpetuating the cycle of disruption and bifurcation that students (and instructors) are already enduring.

With more “onlining” of courses, what will the long-term effects on the value proposition be? How will this impact discounting (for privates)? Is it the role of IR to estimate price points? How do we invite ourselves?

Laura Miller: All great questions, and ones I am asking myself. Yes, we will need to evaluate all of these items. If you are not already at the table for these discussions, I would suggest being proactive: look at the data and provide an analysis. For example, build a spreadsheet where you can change the assumptions on tuition, discounting, and enrollment and see the financial impact. This will show that you grasp the concepts and are eager and ready to be involved in the conversations.
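Here is a minimal sketch of that kind of what-if analysis, with placeholder figures; a spreadsheet works just as well, since the point is simply being able to vary the assumptions and see the effect.

```python
# What-if analysis: vary assumptions about enrollment, sticker price, and
# discount rate and see the effect on net tuition revenue. All numbers are
# placeholders, not real institutional figures.

def net_tuition_revenue(enrollment, sticker_price, discount_rate):
    """Net tuition revenue after institutional aid (the discount)."""
    return enrollment * sticker_price * (1 - discount_rate)

baseline = net_tuition_revenue(enrollment=2000, sticker_price=45_000, discount_rate=0.48)

scenarios = {
    "enrollment down 10%": net_tuition_revenue(1800, 45_000, 0.48),
    "discount up 5 points": net_tuition_revenue(2000, 45_000, 0.53),
    "both": net_tuition_revenue(1800, 45_000, 0.53),
}

for name, revenue in scenarios.items():
    print(f"{name}: ${revenue:,.0f} ({revenue - baseline:+,.0f} vs. baseline)")
```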

What strategies can you offer for democratizing data access?

Drew Thiemann: My advice is to continue pursuing the methods and strategies you already have in place to foster data literacy, if they are working. If they aren’t working, evaluate and adapt.

This shouldn’t change because of coronavirus, but coronavirus certainly has the potential to complicate these efforts, so it will be important to keep taking stock and assessing them against meaningful milestones and benchmarks.

One way that I see this changing is that remote work has forced more of our IR colleagues online, not just at our institution but at virtually every other institution across the planet. Improved digital literacy among our colleagues supports the information literacy we’re trying to promote.

We want to explore LMS data, but we are not currently permitted to. How did you manage to get access to it?

Laura Miller: Data sharing is all about relationships. We had conversations about LMS access many times before the pandemic, but it was clear there wasn’t buy-in at the time and we were not feeling the need to push for buy-in, so it was dropped. There were other more pressing data access issues to push for at the time.

The quick switch to reliance on faculty and students engaging via the LMS caused me to question whether we could reconsider access to this data. Since we are in a time of crisis, there was more openness to the idea. It’s clear that we don’t have the ability to simply turn on access; we actually need to build the infrastructure to pull the data out and organize it (this is likely why we ran into so many hurdles when we discussed this idea in the past). So, although there is now willingness, we still have a long road ahead.
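For a sense of what that extract-and-organize infrastructure might involve, here is a minimal sketch assuming an LMS that exposes a REST API. The base URL, endpoint path, field layout, and course IDs are hypothetical, not a specific vendor's API.

```python
import pandas as pd
import requests

BASE_URL = "https://lms.example.edu/api/v1"           # hypothetical LMS API
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}   # issued once access is approved

def fetch_course_activity(course_id):
    """Pull raw activity records for one course and flatten them into rows."""
    resp = requests.get(f"{BASE_URL}/courses/{course_id}/activity", headers=HEADERS)
    resp.raise_for_status()
    return pd.json_normalize(resp.json())

# Organize engagement data from several courses into one analysis-ready table.
activity = pd.concat(
    [fetch_course_activity(cid) for cid in (101, 102, 103)],
    ignore_index=True,
)
activity.to_csv("lms_activity.csv", index=False)
```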

The role of predictive analytics

What does this mean for predictive analytics and its role in IR? Predictive modeling relies on the continuation of past patterns as the basis for credibly predicting the future.

Drew Thiemann: Great question! At the risk of sounding glib, get back to me in a year!

But in all seriousness, the worldwide pandemic definitely has the potential to create outliers along different domains or constructs within your data. In the coming years, you will have to make a choice about whether to exclude 2020 from future analyses, and whether that applies across the board for all dependent variables or just some of them.

Alternatively, we may realize that all bets are off and every prior year of data in your current predictive modeling datasets is obsolete. Time will tell.
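In practice, that choice can start with something as simple as flagging the disrupted cohort so each analysis can include or exclude it deliberately. This is a sketch only; the file and column names are assumptions.

```python
import pandas as pd

# Hypothetical historical file; column names are placeholders.
history = pd.read_csv("retention_history.csv")

# Option 1: drop the disrupted cohort from a given model's training data.
training = history[history["cohort_year"] != 2020]

# Option 2: keep it, but add an explicit flag so the model (or the analyst)
# can account for the disruption rather than silently absorbing it.
history["covid_disrupted"] = (history["cohort_year"] == 2020).astype(int)
```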

I think this is one of the reasons why it’s so important that we keep checking in with our IR colleagues about the practices and strategies they’re finding helpful. My suspicion is that the next AIR Forum is going to set an attendance record, but I hope we can also find other opportunities to connect with one another throughout the year.

How can you model something that has never happened before? For example, how much will enrollment tail off if classes stay online instead of returning, as expected, to residential living and learning?

Laura Miller: Right now, it’s hard to create brand new predictive models. One thing you can do is rely on older models and see how the outcome changes based on the variables that are being impacted by our current situation and the weight of those variables in the model.

You can also make assumptions based on survey data you’ve collected, or, if you haven’t collected data, use some of the results from national surveys. This will allow you to get a sense of the scenarios you should consider modeling, such as enrollment being 10%, 20%, or 30% down.
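A rough sketch of that scenario framing is below, using a placeholder baseline projection and the percentage declines mentioned above.

```python
# Apply assumed percentage declines (informed by your own or national survey
# data) to a baseline projection from an existing model. Figures are placeholders.

baseline_fall_enrollment = 1200  # projection from the pre-pandemic model

for decline in (0.10, 0.20, 0.30):
    scenario = baseline_fall_enrollment * (1 - decline)
    print(f"{decline:.0%} down: {scenario:,.0f} students")
```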

This year is going to be less about predicting the fall number right now and more about constantly evaluating the data you have access to, like behavioral metrics in your CRM, registration activity, withdrawals, and cancellation data. It also means talking to other offices at your institution about what they are hearing, and making sure all of it is considered as a possibility in your planning.

Do you have specific examples of what data you have been working with in the last few weeks and what kind of predictive analytics were used as well as the outcome? How accurate do you consider these outcomes to be?

Laura Miller: We’ve done less with predictive analytics during this time as it’s very hard to use history to predict right now. However, we have been using what we know about the variables in our predictive model to help inform our projections for enrollment and retention.

For instance, we know the weight of on-campus event attendance, and we know the subset of students who missed out on that opportunity. So, we used our predictive model to help us measure the potential impact of this loss. We don’t have solid outcomes that we are working with; it’s more a range of possibilities given what we know.
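A sketch of that approach, assuming a logistic regression retention model in which on-campus event attendance is one predictor; the coefficients and student records below are invented for illustration, not the actual model.

```python
import numpy as np

def predict_retention(X, coefs, intercept):
    """Predicted retention probability from a fitted logistic regression."""
    return 1 / (1 + np.exp(-(X @ coefs + intercept)))

# Hypothetical fitted weights: [attended_on_campus_event, hs_gpa_scaled, aid_flag]
coefs = np.array([0.45, 0.9, 0.3])
intercept = -0.2

# Students who would normally have attended an on-campus event this spring.
would_have_attended = np.array([
    [1, 0.8, 1],
    [1, 0.5, 0],
    [1, 0.2, 1],
], dtype=float)

# Zero out the event-attendance variable to bound the potential impact.
no_event = would_have_attended.copy()
no_event[:, 0] = 0

impact = (predict_retention(would_have_attended, coefs, intercept)
          - predict_retention(no_event, coefs, intercept))
print("per-student loss in predicted retention probability:", impact.round(3))
print(f"range of possible impact: {impact.min():.3f} to {impact.max():.3f}")
```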

On the flip side, there is a lot we can’t measure, ranging from the impact of virtual visits that we never offered before to all of the stressors on students and families caused by this pandemic.


While the world grapples with and adapts to COVID-19, we here at Rapid Insight will continue to provide data resources and informative content. Keep an eye on our webinar page for upcoming events, or browse on-demand recordings. And if you’re interested in learning more about how our data software solutions can improve decision-making at your institution, click the button below to schedule a customized demo.

REQUEST A DEMO

