Q&A: Enrollment Engagement Scores and App to Enroll Models
In a recent webinar, Kelley Graham, the Director of Enrollment Technology at Lipscomb University, discussed how she adapted Lipscomb’s “Enrollment Engagement” score and “App to Enroll” model to the post-COVID-19 enrollment environment.
If you missed the webinar, watch it here.
We received many questions during the presentation and ran out of time to answer them all. Graham was kind enough to answer the remaining questions after the webinar. We collected her answers here, along with contributions from James Cousins, Rapid Insight’s Analyst Manager.
An enrollment engagement score is a single metric that encompasses the many factors that lead a prospective student to enroll (or decide against doing so). Enrollment engagement scores equip admissions counselors to conduct outreach more effectively by focusing on the applicants who are most likely to be persuaded to enroll.
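To make the idea concrete, here is a minimal sketch of a points-based engagement score. The event names and point values below are hypothetical; Lipscomb’s actual weights were set with input from senior leadership and aren’t published here.

```python
# Hypothetical point values -- Lipscomb's real weights were chosen by
# senior leadership, so treat these numbers as illustrative only.
POINTS = {
    "campus_visit": 10,
    "incoming_email": 5,   # only INCOMING emails count, not opens or clicks
    "event_rsvp": 3,
    "phone_call": 4,
}

def engagement_score(events):
    """Sum the point values for a student's recorded engagement events.

    Unrecognized event types contribute zero points.
    """
    return sum(POINTS.get(event, 0) for event in events)

# A student who visited campus and sent two emails scores 10 + 5 + 5 = 20:
print(engagement_score(["campus_visit", "incoming_email", "incoming_email"]))  # 20
```

Because each event type maps to a fixed point value, the score is easy to explain to counselors and easy to re-weight during cycle prep.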
How does the scoring model handle emails, opens, clicks, etc.?
Kelley Graham: The only emails that are included in the score are INCOMING emails. Clicks, opens, etc. are not part of the score (mainly because someone could click or open a single email multiple times).
How were the points for each engagement decided? How often do you re-evaluate your model parameters?
Kelley Graham: Senior leadership had input into the points. The parameters are re-evaluated during cycle prep (Graduate at the end of each semester, Undergraduate at the end of the Fall term).
Can you talk about how Slate and Construct interacted with each other? How did you come up with the scores in the SQL rule in Slate?
Kelley Graham: Originally, Slate exported an Excel file via a query, and the file was then scored in Construct. Construct can pick up the file exported by the query from the SFTP server, process it, and export the results back to the SFTP server. The scores were created with a large amount of input from senior leadership.
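The export-score-re-export loop Kelley describes can be sketched as a small file-processing step. This is a simplified stand-in, not Construct’s actual implementation: the SFTP transfer itself is elided (Construct handles the pickup and re-upload automatically in Kelley’s setup), the file is CSV rather than Excel for simplicity, and the column names are hypothetical.

```python
import csv

def score_export(in_path, out_path, score_fn):
    """Read the file exported by the Slate query, append a score column,
    and write the results out for upload back to the SFTP server.

    The SFTP get/put steps are omitted here -- this sketch only shows
    the scoring pass over the exported rows.
    """
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames + ["score"])
        writer.writeheader()
        for row in reader:
            row["score"] = score_fn(row)  # e.g. a points-based engagement score
            writer.writerow(row)
```

A scheduler can then run this step after each query export, so counselors always see scores refreshed from the latest Slate data.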
How do you use these scores to inform marketing, budgeting, and enrollment management decisions, how students are recruited, etc.?
Kelley Graham: Origin Source is regularly evaluated. As I mentioned in the webinar, reporting is very important. It is essential that we have a lot of reports to measure different attributes.
An app to enroll model is a predictive model that forecasts an individual applicant’s likelihood to enroll.
Do you see this modeling being effective with institutions with rolling admissions where you don’t truly know your full applicant pool until classes are starting?
Kelley Graham: The Engagement Score is very valuable for this type of admissions. The App to Enroll predictive model is helpful in seeing who should be pushed to deposit, register, etc.
How many variables do you bring into your model? How many years of historical data were used?
Kelley Graham: We have roughly 60 variables over three years. The initial inquiry-to-applicant file contained approximately 70,000 rows and the app to enroll file contained almost 10,000 rows.
Was high school GPA unpredictive?
James Cousins: At the time this was asked, Kelley had been describing that test scores, once a major contributor to the model, were unavailable due to the pandemic’s interruption of most testing facilities. This called for a variable that could fill the gap left by both visits and test scores, which the engagement score managed. Later on, reviewing Kelley’s updated enrollment models, high school GPA was indeed predictive!
Kelley Graham: In addition, GPA wasn’t available for our inquiries. We wanted to be able to add it to the model, but we don’t generally get GPA data until the person becomes an applicant.
How did you present this model to your coworkers, and how familiar are they with using predictive models? For example, how do they react when a student who is 98% likely to enroll doesn’t enroll?
Kelley Graham: As I said in the presentation, we all know that this is PREDICTIVE modeling.
James Cousins: I second Kelley’s point about models not being guarantees. However, a common pattern I share with users is that you will always have a handful of highly likely arrivals not arriving, and you will always have a handful of highly unlikely arrivals arriving! The key is that, while you’re looking at the group that is highly likely, you can reasonably expect that most will indeed show up, whereas among the unlikely group, most will not. The counterexample to the one-off anomalies is the overarching pattern.
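James’s point about one-off anomalies versus the overarching pattern is just expected-value reasoning, and a tiny example makes it tangible. The group sizes and probabilities below are hypothetical:

```python
# 100 applicants each modeled at 95% likelihood to enroll, and 100 at 5%.
# Even a well-calibrated model expects roughly 5 no-shows in the likely
# group and roughly 5 surprise arrivals in the unlikely group.
likely_group = [0.95] * 100
unlikely_group = [0.05] * 100

expected_enrollees_likely = sum(likely_group)      # about 95
expected_enrollees_unlikely = sum(unlikely_group)  # about 5
print(round(expected_enrollees_likely), round(expected_enrollees_unlikely))
```

So a 98%-likely student who doesn’t enroll isn’t a broken model; it’s the expected handful of misses that comes with any probabilistic prediction.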
What kind of accuracy did you achieve? Specificity and Selectivity?
James Cousins: Specificity and Selectivity are both available readouts in the software.
Is the target ordinal or interval? What type of regression is it (OLS, logistic, multinomial)?
James Cousins: In Kelley’s case the target was binary: an indicator of a prospect/admitted student either enrolling or not enrolling. Predict guides users to perform binary logistic regression for targets like this.
How good is the fit (R-squared)?
James Cousins: Kelley described that they update their models on a routine basis (either once or twice per year, depending on the program). The goodness of fit varies, although diagnostics like the C statistic, ROC curve, R-squared, and several more are always displayed for reference.
What algorithms do you use?
James Cousins: Predict makes use of either Ordinary Least Squares (for continuous targets) or Binary Logistic Regression (for binary targets). These are non-proprietary modeling algorithms, but Predict streamlines and improves their use by automatically creating and testing transformations of your independent variables: creating flags for individual categories so users don’t need to code them manually, for example, or finding the best numerical transformation of continuous predictors.
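The category-flag transformation James mentions is commonly called one-hot encoding, and it’s easy to sketch by hand. This is an illustration of the general technique, not Predict’s internal code, and the field names are hypothetical:

```python
def category_flags(rows, field):
    """Add a 0/1 indicator ("flag") column for each distinct value of a
    categorical field -- the kind of transformation Predict automates
    so users don't have to code it manually."""
    categories = sorted({row[field] for row in rows})
    for row in rows:
        for cat in categories:
            row[f"{field}_is_{cat}"] = 1 if row[field] == cat else 0
    return rows

# Hypothetical origin-source field on three student records:
students = [{"origin": "web"}, {"origin": "fair"}, {"origin": "web"}]
category_flags(students, "origin")
# each row now carries origin_is_fair and origin_is_web flag columns
```

Flags like these let a regression assign a separate coefficient to each category, which is why automating their creation saves so much hand-coding.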
Rapid Insight’s intuitive analytics tools and experienced analyst support team can help you build and implement an enrollment engagement score or app to enroll model of your own. Interested in learning more? Click the button below to set up a personalized walkthrough!