Responsible AI for Higher Education: Cutting Through the Noise
Artificial intelligence (AI) has generated buzz throughout higher education in the past year. According to a recent survey of education professionals, 50 percent said they use AI in admissions—and that number is expected to grow to 80 percent in 2024. However, two-thirds of respondents said they are “very concerned” or “somewhat concerned” about the ethical implications of AI. Couple these concerns with headline-grabbing news such as Arizona State University partnering with OpenAI, creator of ChatGPT, and you can see how the arrival of AI for higher education has created as many concerns as it has opportunities.
The benefits of AI for higher education are transformational. There are tremendous opportunities to use AI to personalize engagement with students and alumni, analyze data, optimize strategies, and use our time and resources more effectively. But before we reap those benefits, it is critical to implement a strategic framework that includes AI governance.
RNL took this key step in 2023, making significant investments in establishing a strategic framework that includes AI governance. As part of the executive leadership team, RNL’s Chief AI Officer (CAIO), Dr. Stephen Drew, has established a dedicated AI team that has adopted NIST’s AI governance mechanisms, which are used internally and can be implemented with our institutional partners.
Implementing AI responsibly is quite a F.E.A.T.
As we partner with campuses and nonprofit organizations to implement AI, we share four principles known as the F.E.A.T. principles—which we follow at RNL in our own AI implementations:
- Fairness
- Empathy
- Accountability
- Transparency
1. Fairness
Building fair AI requires attention to bias, particularly in data that can be correlated with protected variables such as gender, age, ethnicity, and pronoun choices. According to NIST, bias is more common than you might imagine. In its report Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, NIST noted that “human and systemic institutional and societal factors are significant sources of AI bias as well, and are currently overlooked. Successfully meeting this challenge will require taking all forms of bias into account. This means expanding our perspective beyond the machine learning pipeline to recognize and investigate how this technology is both created within and impacts our society.”
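As a concrete illustration of what checking for bias can look like in practice, the minimal sketch below runs a demographic parity check: it compares the rate of favorable model outcomes across groups defined by a protected variable. The predictions, group labels, and the 0.8 rule-of-thumb threshold are all hypothetical, not an RNL tool or policy.

```python
# Minimal sketch of a demographic parity audit on model outcomes.
# All data here is hypothetical; a real audit would use your own model's
# predictions and your applicant or constituent records.

def selection_rates(predictions, groups):
    """Return the favorable-outcome rate (mean of 0/1 predictions) per group."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    return {g: positives / total for g, (total, positives) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs (1 = favorable outcome) and protected-group labels
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
ratio = disparate_impact_ratio(rates)
print(rates)   # {'A': 0.6, 'B': 0.4}
print(ratio)   # a common rule of thumb flags ratios below 0.8 for review
```

A low ratio does not prove unlawful bias on its own, but it is a simple, repeatable signal that a model's outcomes deserve closer human review.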
2. Empathy
As enrollment and philanthropy professionals, it is necessary to understand the role of AI in the workplace and its impact on a wide range of audiences: students, families, alumni, donors, colleagues, and others. We must address the ethical concerns, implications, and practices of AI development, deployment, and workplace policies that may impact our constituents across a variety of business areas. For example, when deploying digital assistants, we need guardrails in place that define what is considered an acceptable response from the digital assistant to a student, parent, donor, or staff member. Does that digital assistant provide the level of knowledge and meet the expectations those constituents would have when interacting with a person?
3. Accountability
It is imperative to be strong advocates for providing high-quality, accurate information, including in marketing campaigns and advertisements. Abuse of information cannot be tolerated, and regulatory compliance alone does not go far enough in supporting accountability. To be proactive, we encourage regular auditing practices and assessments of newly developed and established machine learning models, as well as any AI-enabled tools.
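As a minimal sketch of what a regular audit might record, the example below compares each model's current evaluation against its baseline and flags drift for review. The model names, accuracy figures, and the 5-point drift threshold are hypothetical illustrations, not a description of RNL's actual auditing process.

```python
# Minimal sketch of a periodic model audit: flag models whose accuracy has
# drifted too far below their baseline evaluation. Thresholds and model
# names are hypothetical.

def audit_model(name, baseline_accuracy, current_accuracy, max_drop=0.05):
    """Return an audit record; a 'review' status means the model needs attention."""
    drop = baseline_accuracy - current_accuracy
    status = "review" if drop > max_drop else "ok"
    return {"model": name, "drop": round(drop, 3), "status": status}

# Hypothetical audit run over two models
reports = [
    audit_model("enrollment_yield_v2", baseline_accuracy=0.88, current_accuracy=0.86),
    audit_model("donor_propensity_v1", baseline_accuracy=0.81, current_accuracy=0.72),
]
for report in reports:
    print(report)  # the second model's 9-point drop exceeds the threshold
```

Running a check like this on a fixed schedule, and keeping the resulting records, is one way to turn the principle of accountability into an auditable routine.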
4. Transparency
To assure higher education professionals that AI is being applied responsibly, we must be proactive and demand transparency and traceability around AI systems. Provide documentation, training data, and a root cause analysis (RCA) when there is a problem, and clearly communicate overall AI governance policies. Doing so builds trust in using AI responsibly and positively in our work.
Follow an AI Governance Framework
AI governance aligns the organizational goals of an institution with the AI and technology teams implementing AI strategy and systems. At RNL, we have implemented our own AI governance framework.
This framework includes the formation of an RNL AI & Product Council with this critical mission:
We are committed to integrating and advocating ethical AI. Beyond just implementation, we champion AI awareness within RNL and the higher education community, ensuring alignment with ethical guidelines and policies. As we continue to transform, our goal is to position RNL as an innovative leader in the AI landscape, always informed and compliant with evolving legislation.
Where do we go from here with AI?
We want to help you with your AI journey. We have the expertise to assist whether you are in the early stages of AI strategy development or feel stuck and cannot make progress along the maturity curve. I invite you to learn more by watching our webinar, Transforming Engagement Through Artificial Intelligence: Leveraging AI in Enrollment, Student Success, and Fundraising, in which Dr. Stephen Drew and I discuss conversational assistants and how AI can power insights from your data.
We can also talk more specifically about how we can help your institution. Please reach out to our AI team for a complimentary consultation, and we can discuss how you can implement AI responsibly at your institution and use it to engage your constituents more effectively.
How can you transform your operations with AI?
Ask for a complimentary consultation with RNL’s experts and learn how you can use AI responsibly and effectively. We can discuss:
- AI governance and responsible AI
- Conversational AI for enrollment and fundraising
- AI-powered analytics that deliver strategic insights