Awareness of the dangers of algorithmic bias in AI systems is growing. Earlier this year, a 42-year-old Detroit resident was wrongly arrested after a face-recognition system falsely matched his photo to an image from security-camera footage. Such systems have been shown to produce more false matches for photos of Black people than for photos of white people.
Some scholars worry that AI in the learning management systems used by colleges could lead to similar misidentifications in academic settings, such as falsely tagging certain students as low-performing, which could prompt their professors to treat them differently or otherwise disadvantage them.
For instance, the popular LMS Canvas had a feature that red-flagged students who turned in late work, suggesting on a dashboard shown to professors that such students were less likely to do well in the class, says Roxana Marachi, an associate professor of education at San Jose State University. She imagines scenarios in which students could be misidentified, such as when a student turns in an assignment on time but in an alternative form (on paper rather than digitally), triggering a false flag.
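To make that failure mode concrete, here is a minimal, hypothetical sketch in Python. It is not Instructure's actual code; the data fields and the flag rule are invented for illustration. The point is that a rule keyed to digital timestamps has no way to represent an on-time paper submission, so it flags the student anyway:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Submission:
    student: str
    due: datetime
    # None when the LMS has no digital record, e.g. work handed in on paper
    submitted_at: Optional[datetime]

def is_flagged_late(sub: Submission) -> bool:
    """Naive dashboard rule: flag any submission lacking an on-time digital timestamp.

    A student who hands in work on paper has submitted_at == None,
    so this rule falsely flags them as late: the kind of
    misidentification Marachi describes.
    """
    return sub.submitted_at is None or sub.submitted_at > sub.due

# An on-time paper submission has no timestamp in the system, so the rule misfires.
paper = Submission("student_a", due=datetime(2020, 9, 1), submitted_at=None)
print(is_flagged_late(paper))  # True, even though the work arrived on time
```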
“Students are not aware that they are being flagged in these ways that their professors see,” she says.
Colleges insist that scholars handle data and research subjects with great care in the research part of their jobs, Marachi argues, but apply no such scrutiny to the tools they use for teaching. “That’s basic research ethics—inform the students about the way their data is being used,” she notes.
While Canvas no longer uses that particular red-flag feature, Marachi says she worries that colleges and companies are experimenting with learning analytics in ways that are not transparent and could be prone to algorithmic bias.
In an academic paper published recently in the journal Teaching in Higher Education: Critical Perspectives, she and a colleague call for “greater public awareness concerning the use of predictive analytics, impacts of algorithmic bias, need for algorithmic transparency, and enactment of ethical and legal protections for users who are required to use such software platforms.” The article was part of a special issue devoted to the “datafication of teaching in higher education.”
At a time when colleges and universities say they are renewing their commitment to fighting racism, data justice should be front and center, according to Marachi. “The systems we are putting into place are laying the tracks for institutional racism 2.0 unless we address it—and unless we put guardrails or undo the harms that are pending,” she adds.
Leaders at Instructure, the company that produces Canvas, insist they take data privacy seriously and are working to make their policies clearer to students and professors.
Just three weeks ago the company hired a privacy attorney, Daisy Bennett, to assist in that work. She plans to write a plain-language version of the company’s user privacy policy and build a public portal explaining how data is used. And the company has convened a privacy council, made up of professors and students, that meets every two to three months to give advice on data practices. “We do our best to engage our end users and customers,” said Jared Stein, vice president of higher education strategy at Instructure, in an interview with EdSurge.
He stressed that Marachi’s article does not point to specific instances of student harm from data, and that the goal of learning-analytics features is often to help students succeed. “Should we take those fears of what could go wrong and completely cast aside the potential to improve the teaching and learning experience?” he asked. “Or should we experiment and move forward?”
Marachi’s article raises concerns about a statement made on an Instructure earnings call by then-CEO Dan Goldsmith regarding a new feature:
“Our DIG initiative, it is first and foremost a platform for [Machine Learning] and [Artificial Intelligence], and we will deliver and monetize it by offering different functional domains of predictive algorithms and insights. Maybe things like student success, retention, coaching and advising, career pathing, as well as a number of the other metrics that will help improve the value of an institution or connectivity across institutions.”
Other scholars have focused on the comment as well, noting that companies sometimes prioritize monetizing features over helping students.
Stein, of Instructure, said that Goldsmith was “speaking about what was possible with data and not necessarily reflecting what we were actually building—he probably just overstated what we have as a vision for use of data.” He said he outlined the plans and strategy for the DIG initiative in a blog post, which points to the company’s commitment to “ethical use of learning analytics.”
As for the concern that LMSes and other tools could lead to institutional racism? “Should we have guardrails? Absolutely.”
Competing Narratives
Marachi said she has talked with Instructure staff about her concerns, and that she appreciates their willingness to listen. But the argument she and other scholars are making goes deeper, questioning whether learning analytics is worth doing at all.
In an introductory article to the journal series on the datafication of college teaching, Ben Williamson and Sian Bayne from the University of Edinburgh, and Suellen Shay from the University of Cape Town, lay out a broad list of concerns about the prospect of using big data in teaching.
“The fact that some aspects of learning are easier to measure than others might result in simplistic, surface level elements taking on a more prominent role in determining what counts as success,” they write. “As a result, higher order, extended, and creative thinking may be undermined by processes that favor formulaic adherence to static rubrics.”
They place datafication in the context of what they see as the commercialization of higher education: a way to fill gaps caused by policy decisions that have reduced public funding for college.
“There is a clear risk here that pedagogy may be reshaped to ensure it ‘fits’ on the digital platforms that are required to generate the data demanded to assess students’ ongoing learning,” they argue. “Moreover, as students are made visible and classified in terms of quantitative categories, it may change how teachers view them, and how students understand themselves as learners.”
And the mass movement to online teaching due to the COVID-19 pandemic makes their concerns “all the more urgent,” they add.
The introduction ends with a call to rethink higher education more broadly as colleges look at data privacy issues. They cite a book by Raewyn Connell, “The Good University: What Universities Do and Why it Is Time For Radical Change,” which they say “outlined a vision for a ‘good university’ in which the forces of corporate culture, academic capitalism and performative managerialism are rejected in favour of democratic, engaged, creative, and sustainable practices.”
Their hope is that higher education will be treated as a social and public good rather than a product.