
Solving Today’s Enrollment Challenges With AI – Part 2

Scott Jeffe, Vice President, Research (Graduate and Online)
April 21, 2024

Stephen Drew, RNL’s new Chief AI Officer, and I recently sat down for a discussion I entitled “Why Won’t We Solve Today’s Challenges Without AI?” When I posed this opening question, Stephen made an important clarifying point: AI is not the solution, or at least not the sole solution, to any particular challenge. Perhaps a better question is, “How can we use AI (generative models, predictive models, conversational systems, etc.) to help solve our challenges better?”

This reframing of my opening question offers insight into the thoughtfulness that Stephen brings to RNL. It was part of our blog last month, which also covered: 1) What is AI? 2) How does AI help institutions engage students who expect instantaneous, personalized responses? and 3) What will come next for AI in higher education?

This month, we go deeper into some of the questions that RNL partners and friends asked us during our 2024 Graduate and Online Innovation Summit. You can watch the entire conversation below or keep reading for key parts of our discussion (edited for reading).

What is responsible AI, and how is RNL championing it?

The backbone of responsible AI is having a governance process in place that evaluates each use case on its own merits, looks at the associated risks, and decides whether or not to move forward.

“Appropriateness” is important. Responsible AI practices ensure that we use the right model for the right use case. You don’t crack walnuts with sledgehammers, right? Choosing the right tool for the job is a challenge in many instances, including responsible use of AI.

RNL, like many institutions, uses machine-learning models for forecasting and prediction. These models are well documented, explainable, and built on tried-and-true methods. Should we ditch them and rebuild with the latest general-purpose AI models? No, and institutions don’t need to either. Why? Because we are getting good results: the predictive power is high and the explainability is high. We, and institutions, should focus development on areas where those positive results are lacking.
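To make “explainable” concrete, here is a minimal sketch of the kind of predictive model described above: a logistic regression whose coefficients can be read and audited directly. The features, data, and library choice (scikit-learn) are illustrative assumptions for this post, not RNL’s actual models.

```python
# A minimal sketch of an explainable enrollment-prediction model.
# Features and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features: [campus_visits, fafsa_filed, distance_miles]
X = np.array([
    [2, 1, 15],
    [0, 0, 240],
    [1, 1, 60],
    [3, 1, 10],
    [0, 1, 180],
    [1, 0, 90],
])
y = np.array([1, 0, 1, 1, 0, 0])  # 1 = enrolled, 0 = did not enroll

model = LogisticRegression().fit(X, y)

# Explainability: each coefficient shows how a named feature shifts the
# log-odds of enrolling, which is easy to document and audit.
for name, coef in zip(["campus_visits", "fafsa_filed", "distance_miles"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.3f}")

# Predicted enrollment probability for a new hypothetical applicant
print(model.predict_proba([[2, 1, 30]])[0][1])
```

Because each coefficient maps to a named feature, the model’s behavior can be documented and reviewed as part of a governance process, which is a big part of why “tried-and-true” methods remain a good fit where they already perform well.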

When thinking about where to apply the latest AI, evaluate each use and ask what the potential risks are. Is this an appropriate thing to do? For example, our team recently considered creating a text-to-video AI tool that would generate thank-you videos to donors from an institutional leader or famous alum. Was this a good idea? Knowing that AI is not yet at the point where we can rely on text-to-video confidently gave us pause. Further, we considered the risk that such videos could confuse donors, or that the message might not land accurately with different demographics. All of this made it clear that this was not an appropriate use of AI at this point.

RNL’s approach to responsible AI is built on best practices drawn from all of my prior work, and we apply it to every idea that we think might make sense. Here’s what we do:

  1. Build experiments to test the idea, under the leadership of a dedicated research scientist.
  2. Validate the idea through the results of the experimentation.
  3. Measure the results and use them as the basis to talk about the potential drawbacks.
  4. Decide on whether or not we’re going to move forward.

RNL also has a partnership with a company called Credo.AI, which provides a governance framework and risk management tool to track all of our AI use cases, the models we use for each, the data behind each of those, and the risks they pose. Credo.AI also helps us use its general models for risk assessment.
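For illustration, here is a hypothetical sketch of the kind of registry such a governance framework maintains, recording each use case along with its model, data sources, risks, and the decision made. This is a simplified illustration only and does not represent Credo.AI’s actual product or API.

```python
# Hypothetical sketch of an AI use-case registry for governance review.
# Illustrative only; not Credo.AI's actual product or API.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    model: str               # e.g., "generative text-to-video"
    data_sources: list[str]  # systems feeding the model
    risks: list[str]         # risks surfaced during review
    approved: bool = False   # outcome of the governance decision
    notes: str = ""

registry: list[AIUseCase] = [
    AIUseCase(
        name="Donor thank-you videos (text-to-video)",
        model="generative text-to-video",
        data_sources=["donor CRM"],
        risks=["output reliability", "message may not fit all demographics"],
        approved=False,
        notes="Declined: not an appropriate use of AI at this point.",
    ),
]

# Simple audit view: every use case, its decision, and its documented risks
for uc in registry:
    status = "approved" if uc.approved else "declined"
    print(f"{uc.name} [{status}]: {', '.join(uc.risks)}")
```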

RNL is using this expertise and experience to develop a consulting service around AI governance. We can go into a university and help them establish responsible AI practices, stand up an AI council, define methods for evaluating the use of AI models, and track all of these efforts.

How can AI help enrollment managers in analyzing data to achieve enrollment goals?

At the core of all of this is the shift to a conversational natural-language interface that will let enrollment managers analyze data more easily, more quickly, and more effectively. Institutions have vast amounts of data, often scattered across different places, including their various CRMs and their student information system. Typically, reports and dashboards have been built in each platform, each answering a narrow question. The result is multiple dashboards that must be reviewed to make good decisions.

If you bring all that data together and then overlay a language model designed to understand and extract data using natural language inputs, then instead of saying, “Let’s build a dashboard that will answer XYZ,” you can just ask any question you need answered. This allows enrollment leaders to use a single source—the AI—to answer many questions they need to ensure enrollment success.
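Here is a rough sketch of that pattern under simplifying assumptions: data from multiple systems is unified in one store, and a language model translates a plain-language question into a query. The `ask_model_for_sql` helper below is a hypothetical placeholder for whichever LLM you use; it is not an RNL product or API.

```python
# Rough sketch of the "talk to your data" pattern: unify data, then let a
# language model turn a natural-language question into a query.
import sqlite3

# 1. Unify data from multiple source systems into one queryable store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applicants (term TEXT, program TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO applicants VALUES (?, ?, ?)",
    [
        ("Fall 2024", "MBA", "admitted"),
        ("Fall 2024", "MSN", "applied"),
        ("Fall 2024", "MBA", "enrolled"),
    ],
)

def ask_model_for_sql(question: str, schema: str) -> str:
    """Placeholder: in practice, prompt an LLM with the question and schema
    and have it return SQL. Hard-coded here to keep the sketch runnable."""
    return (
        "SELECT program, COUNT(*) AS admits FROM applicants "
        "WHERE status = 'admitted' GROUP BY program"
    )

# 2. Ask a question in plain language instead of building another dashboard.
question = "How many admits do we have by program this fall?"
sql = ask_model_for_sql(question, schema="applicants(term, program, status)")
for row in conn.execute(sql):
    print(row)
```

The design point is the single interface: once the data is brought together, each new question becomes a prompt rather than another purpose-built dashboard.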

I recently went to our partner advisory board meeting and an enrollment leader said to me, “I wish that I could just talk to my data like I talk to ChatGPT or Alexa. Just ask it a question and get a response.” My response? Well, check back with RNL soon because that is exactly what we are working on.

After answering the “what” questions, AI built on a conversational model can also answer both the “why” questions that naturally follow and the “what next” questions that will power planning and strategy. RNL will soon be able to help with all of this.

How can you transform your operations with AI?

Ask for a complimentary consultation with RNL’s experts and learn how you can use AI responsibly and effectively. We can discuss:

  • AI governance and responsible AI
  • Conversational AI for enrollment and fundraising
  • AI-powered analytics that deliver strategic insights

Request Consultation

Want to learn more?

Read the first part of this interview and learn more about the importance of responsible AI. You can also dive deeper by watching our webinar, Transforming Engagement Through Artificial Intelligence: Leveraging AI in Enrollment, Student Success, and Fundraising.

RNL’s AI experts are also happy to discuss how you can use AI to work more efficiently and better serve your constituents. Reach out and we will set up a complimentary consultation.


About the Author

Scott Jeffe

Scott Jeffe has worked with more than 200 institutions in 40+ states to apply market data to strategic decisions. With a focus on profiling the demands and preferences of nontraditional (adult, online, etc.) students, Scott...

Read more about Scott's experience and expertise

Reach Scott by e-mail at Scott.Jeffe@RuffaloNL.com.
