Who Controls AI in Higher Ed, And Why It Matters (Part 1)

By Jeffrey R. Young     Nov 8, 2017

This article is part of the collection: EdSurge Live: A Town-Hall Style Video Forum.

It’s a pivotal time for artificial intelligence in higher education. More instructors are experimenting with adaptive-learning systems in their classrooms. College advising systems are trying to use predictive analytics to increase student retention. And the infusion of algorithms is leading to questions—ethical questions and practical questions and philosophical questions—about how far higher education should go in bringing in artificial intelligence, and who decides what the algorithms should look like.

To explore the issue, EdSurge invited a panel of experts to discuss their vision of the promises and perils of AI in education, in the first installment of our new series of video town halls called EdSurge Live. The hour-long discussion was rich, so we're releasing it in two installments. Part 1 is below, and Part 2 is here.

The first segment included two guests with different perspectives: Candace Thille, an assistant professor of education at Stanford Graduate School of Education, and Mark Milliron, co-founder and chief learning officer at Civitas Learning. Read a transcript of the conversation below that has been lightly edited and condensed for clarity. We’ll publish part two of the discussion next week (or you can watch the complete video, below, or listen to the podcast version). And you can sign up for the next EdSurge Live here.

EdSurge: Civitas develops AI-powered systems for higher education. Mark, how would you describe the ideal scenario of how this technology might play out at colleges, and how a student might be helped by AI on campus?

Milliron: Higher education right now is definitely in the beginning stages of any kind of use of AI. It is probably moving from “accountability analytics” to “action analytics.” Today, 95 percent of the data work being done in higher education is really focused on accountability analytics—getting data to accreditors, to legislators, to trustees. Our chief data scientist comes from the world of healthcare, and he basically says that higher ed seems to be obsessed with “autopsy analytics.” It's data about people who are not with us anymore, trying to tell stories to help the current students.

I think the shift is to more “action analytics,” where we're actually getting the data, closer to real time, to try to help students. We're starting to weave in predictive analytics to show trajectories, and using those data to help current students choose better paths and make better choices. In particular, institutions are trying to personalize the pathway each student is going to go on and help shape those big decisions, but also to get students precision support, nudges and encouragement, all at the right time.

The beginning phases of this have been a lot of higher education institutions doing the very basic form of what I call algorithmic triggering, where they're saying, "Based on this demographic category and based on this one assumption, we're going to make sure the student does X, Y or Z." And that is in some ways painful, because sometimes [advisors] assume that a demographic category is a proxy for risk, which is really problematic. But I think we're starting to see more and more data, it's becoming more precise, and there are things that students can do to actually engage with the data and become captains of their own ship, or at least participate in their own rescue.

It's the students' data. Every piece of data is actually a footprint that if you put together tells a story of the journeys these students are on, and their data should be used to help them. Right now their data is mostly being used to help the institutions justify its existence or tell them some story, and we strongly feel that part of this is getting that data to actually help that student.

Candace, you're an early pioneer of using AI and adaptive-learning tools with the online learning initiative that you started at Carnegie Mellon University and now lead at Stanford. But you've also recently raised concerns that when companies offer AI-driven tools, the software can be a kind of black box outside the control of the teachers, educators and institutions using it. Could you start by saying a little more about that perspective?

Thille: I am a big believer in using the data that we're extracting from student work to benefit and support students in their learning, so we're highly aligned on that. I work at a slightly different level than Civitas does, though, with the Open Learning Initiative. It's about creating personalized and adaptive learning experiences for students when they're learning a particular subject area. The way it works is, as a student interacts with activities that are mediated by the online environment, we take those interactions and run them through a predictive model to make a prediction about the learner's knowledge state, about where they are on a trajectory for learning.
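
[For readers who want to see what such a prediction looks like in practice, here is a minimal sketch of Bayesian Knowledge Tracing, one common family of models for estimating a learner's knowledge state from a stream of graded responses. The parameter values are illustrative assumptions, not the Open Learning Initiative's actual model.]

# Minimal sketch of Bayesian Knowledge Tracing (BKT). The parameters
# below are illustrative assumptions, not any real system's model.

def bkt_update(p_know, correct,
               p_slip=0.1,    # chance a student who knows the skill still errs
               p_guess=0.2,   # chance a student who doesn't know guesses right
               p_learn=0.15): # chance of learning the skill during practice
    """Update the estimated probability that the student knows the skill."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Account for learning that may occur during this practice opportunity.
    return posterior + (1 - posterior) * p_learn

# One student's sequence of responses on a single skill: True = correct.
p = 0.3  # prior belief that the skill is already known
for outcome in [False, True, True, True]:
    p = bkt_update(p, outcome)
    print(f"estimated P(knows skill) = {p:.2f}")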

As Mark mentioned, this kind of approach is revolutionizing every other industry. But what we have to recognize is that we're having these systems support our pedagogical decision-making. Taking a piece of student interaction as evidence, running it through a model to make a prediction about that learner, and then giving that information either back to the system or to a faculty member so they can get insight into where the student's learning is, all of that is pedagogical decision-making.

Which data to collect, which factors to include when you're building the model, how to weigh those factors, which modeling approaches or algorithms to use, what information to present once you have the prediction, and how to represent it to the various stakeholder groups: all of these, I would say, are very active areas of research and part of the emerging science of learning.

So I would argue that in an academic context, all of those parts, and particularly the models and algorithms, can't be locked up in proprietary systems. They must be transparent. They must be peer-reviewable. They must be challengeable, so that as we bring artificial intelligence into higher education, the pedagogical decisions being made are made by people who know how to make those kinds of decisions. To just say "Trust us. Our algorithms work," I would argue that that's alchemy, not science.

Mark, as a leader of a company in the space, how do you respond to something like that?

Milliron: We absolutely think it is incumbent upon companies like Civitas, and organizations that want to do this kind of work, to make transparent what the data science is saying. For example, in our tool Illume, if you're looking at any given segment of students and their trajectories, one of the first things we do is show you the most powerful predictors. We literally list the most powerful predictors and their relative score and power in the model, so people are clear about why the trajectory looks the way it does and can understand which variables are driving it.
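
[As a rough illustration of the kind of transparency Milliron describes, here is a generic sketch of ranking a model's predictors by their fitted weights. The feature names and data are invented for demonstration; this is not Civitas's actual implementation.]

# Sketch of surfacing a model's most powerful predictors, using
# synthetic data and made-up feature names for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["credit_momentum", "lms_logins_per_week", "gpa_trend", "aid_gap"]

# Synthetic student records and persistence outcomes.
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

X_std = StandardScaler().fit_transform(X)  # put features on one scale
model = LogisticRegression().fit(X_std, y)

# Rank predictors by the magnitude of their standardized coefficients.
for name, coef in sorted(zip(features, model.coef_[0]),
                         key=lambda t: abs(t[1]), reverse=True):
    print(f"{name:22s} weight = {coef:+.2f}")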

Part of the reason we did that is we wanted the educator to interact with the data, because in many ways what you want is for that educator to be able to make a relevant and clear decision about what's happening with a student.

What we've clearly seen is that getting the data to the people who are managing that work, the advisors, the student-success people, and letting them iterate, not assuming we know what the challenge is, or even what the response is, is important. And then publishing in peer-reviewed journals, so the rest of the field can see the math. But truthfully, modeling these days is a commodity. That's not the rocket science. What you really want people to understand is which factors you're using, which factors tend to be the most predictive, and how they're loading into the model. With an algorithm, it's all about the correctness and efficiency of the model.

Getting educators to think this way is totally new for them. But I do think we now have emerging communities of practice where people are coming together to share what works and what doesn't.

Is there some concern that students and professors could misuse the data?

Thille: There are multiple challenges. One of them is that people make the assumption, "A computer is telling me this, so it's neutral," or "It's objective," or "It's true," without recognizing that the algorithm was written by human beings. Humans have certain values. They make certain choices about what factors to include in the model, how to weigh those factors, and what goes into the prediction or score it gives you. All of that was human decision-making.

Then it's biased by the data that we give it. If we don't have data that's both representative of large numbers of students in different contexts and able to be localized to a specific context, then the algorithms are going to produce biased results.
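
[To illustrate Thille's point in miniature, here is a toy sketch, on entirely synthetic data, of how a model fit on one well-represented context can carry that skew into its predictions for an underrepresented one. The scenario and variables are invented for demonstration.]

# Toy demonstration: a model trained on data from one campus context
# performs noticeably worse in a context it never saw. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Suppose success depends on support hours, but the strength of that
# relationship differs by campus context.
def simulate(n, slope):
    hours = rng.uniform(0, 10, n)
    p_success = 1 / (1 + np.exp(-(slope * hours - 3)))
    success = (rng.random(n) < p_success).astype(int)
    return hours.reshape(-1, 1), success

X_a, y_a = simulate(1000, slope=1.0)  # well-represented context
X_b, y_b = simulate(1000, slope=0.3)  # underrepresented context

model = LogisticRegression().fit(X_a, y_a)  # trained on campus A only

print("accuracy on campus A:", model.score(X_a, y_a))
print("accuracy on campus B:", model.score(X_b, y_b))  # noticeably worse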

And I agree with you completely, Mark, that part of it is making sure that people who are using the systems really understand what the system is telling them and how to use that. But I'm thinking about institutions. A lot of institutions that I work with are under a lot of pressure for accountability, as you're saying. And a big measure of accountability right now is graduation rates.

Let's say I'm a first-generation student. I can speak about my cousin, a first-generation Latina woman, coming into a big open-access institution. I've decided that I want to be a doctor, so I enroll as a pre-med. I enroll in chemistry and biology and all these things in my first year, because I want to be pre-med and those are the requirements. I didn't have the privilege of going to some high school that gave me lots of practice thinking about science and math the way this institution expects me to. So I'm probably feeling a little bit like, "Do I really belong here? I'm excited that I got in, but I'm kind of questioning my fit here."

So, my first year I take the biology sequence, I hate the chemistry sequence, and then, say, for my elective I take the Latino Studies class, because I'm interested in that. I get Cs and Ds in my biology and chemistry courses, and I ace my Latino Studies class. Then I come in to meet with my advisor, who looks at the predictive analytics and says the chance that a student like this who stays in a chemistry major graduates in four to six years is maybe 2 percent. "It just doesn't look like a picture of success for you to stay in this major, but you've done really well in your Latino Studies class, so we would predict that if you switch majors to Latino Studies, you would definitely graduate in four years and have a much better time here at our college. So I'm going to recommend you switch majors."

Now, I think about if it were one of my kids sitting in that chair. Their response would be, "That's your belief. I know I'm going to be a doctor, so you figure out how this institution is going to help me be a doctor," and they wouldn't change majors.

My concern is for a student who's already in a position where they think, "These authorities are here to take care of me. They have my best interest at heart. And they're showing me the data. It doesn't look like I'm going to be successful. I was kind of nervous about it anyway. It's really hard. I guess they're right. I'll switch majors." And my concern is not just the loss for that individual learner. That's a loss, but it's also a loss for us as a larger society, for what an amazing doctor that young woman could've been.

Milliron: I could not agree more on that issue. The question is, can we take the same data and use them in a radically different and more effective way? Can we use design thinking and say to that student, "Okay, if you really want to be successful on this pathway, this is what we've learned about how students like you have been successful. If you can pass this course at this level, and you can take advantage of these resources, you can double your likelihood of graduating within the next four to six years"? You can start almost teaching them how to level up.

The good news is we're in a very early stage of this, and if we can develop a norm and an ethic around this, we can make sure good stuff happens.

Thille: I was hoping you were going to say the other way we can use these data is not just to try to fix the student, but to look at what the patterns are telling us about the institutions. If students with this profile are failing out of Introductory Chemistry, then it's not just "How do we get that student to be different?" It's "We need to look at the way we're teaching Introductory Chemistry so that not so many students are failing it." That could be another way of using that data.

Look for Part 2 of the highlights from the discussion next week.
