Reflections on "The Digital Divide in Education: The Irony of AI Content Detection (they don’t work)"
The photo above comes from a recent post on Medium.com by a colleague who teaches computer science at another campus. In his post, my colleague posed a very good question but never answered it, so I asked ChatGPT to answer it instead.
Background
The post has a provocative headline, "The Digital Divide in Education: The Irony of AI Content Detection (they don’t work)," which immediately grabbed my attention for a number of reasons.
First and foremost, in my own experience as an undergraduate writing instructor, AI text detection has worked very well on many occasions. In cases of suspected AI misuse, these tools have opened deeper, more rewarding conversations with my students about the tools themselves, the student support available at our College, ethics, academic integrity, and digital literacy. Used this way, the technology helps cultivate a friendly, more effective student-teacher relationship rather than the antagonistic, policing dynamic that comes from misusing and misunderstanding it.
To say that AI detection does not work simply because it does not always work seems off the mark. Expecting 100% performance 100% of the time is unrealistic for something so complex and multi-layered, especially given the variety of tools and techniques that can easily obfuscate AI-generated text (as the article rightly points out). Nevertheless, to suggest that AI detection is useless or even harmful because it sometimes fails and requires some degree of resources is akin to saying that blood tests are useless because they sometimes fail and require significant resources that only widen the poverty gap.
Secondly, I do not see money spent on schooling as an expense. Rather, I see any money spent on education, especially on the education of students in lower economic brackets, as an investment in those students' economic future. When my students complain about the cost of books, I tell them the same thing. I struggled to buy books for my classes too when I was an undergrad, juggling work and school while choosing between paying the bills and paying for books.
Believe me, I get it. I understand the poverty gap, having straddled it myself as a first-generation college graduate.
But I assure most students that, if they do what they are supposed to do in college and make the most of the opportunities available there, any money spent on books or anything else for college becomes an INVESTMENT, not an expense, one that pays off over and again, year after year. That college degree will unlock a world of opportunities previously unavailable to them and make a significant impact for generations to come. It happened to me. It can happen to anyone... but nothing comes for free. That's just the way life works.
Lastly, my approach to these tools is highly informed by research on text-matching software systems, commonly presented as plagiarism detection software, and on the need for educator training in the limitations and best uses of such technology, research I have engaged in for over three years.
Yes, AI detection technology has limitations. All forms of technology have their limitations, but these tools offer affordances that can help us address the very issues my colleague appears to be concerned with in his post.
Understanding our tools ultimately helps us become better users and achieve our intended outcomes more effectively, in effect, by matching the right tool to the right problem.
The Question
In his post, my colleague asked, "How can educators and technologists work together to create a more equitable learning environment where integrity and honesty are upheld?"
The original question was actually a bit longer ("How can educators and technologists work together to create a more equitable learning environment where integrity and honesty are upheld, not by the depth of one’s pockets, but by the merit of one’s efforts?"), but I trimmed the final clause because its language, while evocative, was biased and ultimately unnecessary: the first part of the question already assumes the second.
Below is the response I got from ChatGPT:
Collaboration between educators and technologists is crucial to creating a more equitable learning environment that promotes integrity and honesty. Here are some strategies for working together:
1. Define shared goals:
Educators and technologists should collaborate to define common goals for creating an equitable learning environment. This could include fostering academic integrity, ensuring access to technology for all students, and promoting inclusive learning experiences.
2. Develop clear policies:
Work together to establish clear and comprehensive policies on academic integrity, technology use, and online behavior. These policies should be communicated to students, parents, and other stakeholders to ensure everyone is aware of the expectations.
3. Implement technology for integrity:
Technologists can develop and integrate technologies that help maintain academic integrity, such as plagiarism detection tools, secure online testing platforms, and tools to monitor and discourage cheating during online assessments.
4. Provide training and support:
Educators need training on how to effectively use technology to enhance learning while maintaining integrity. Technologists can provide ongoing support and professional development to ensure educators are proficient in utilizing technology tools in a way that aligns with educational goals.
5. Address accessibility issues:
Collaborate to ensure that technology is accessible to all students, regardless of their socio-economic status, abilities, or geographical location. This includes providing devices, internet connectivity, and assistive technologies to those who need them.
6. Promote digital literacy:
Work together to integrate digital literacy education into the curriculum. Educators can teach students how to critically evaluate information online, avoid plagiarism, and use technology responsibly. Technologists can contribute by creating user-friendly interfaces and tools that promote responsible digital behavior.
7. Regular communication and feedback:
Establish open lines of communication between educators and technologists. Regular feedback from educators can help technologists refine and improve technological solutions to better meet the needs of the educational environment.
8. Consider ethical implications:
Educators and technologists should work collaboratively to address ethical concerns related to the use of technology in education. This includes issues such as data privacy, surveillance, and the potential for bias in algorithmic decision-making.
9. Involve stakeholders:
Engage students, parents, and community members in the conversation about creating an equitable learning environment. Solicit feedback and involve them in decision-making processes related to technology use and educational policies.
10. Continuous improvement:
Foster a culture of continuous improvement by regularly assessing the impact of technology on the learning environment. Gather feedback from educators, students, and other stakeholders to identify areas for improvement and make necessary adjustments.
By fostering a collaborative and proactive relationship between educators and technologists, it is possible to create an educational environment that values integrity, embraces technology, and ensures equitable access to learning opportunities for all students.
Discussion
It is interesting to note that a text-generative AI system considers AI-content detection to be part of a larger, more complex strategy involving several interdependent parts. I'd also like to point out that Points 4 (training and support) and 6 (digital literacy) are the areas where educators can most effectively improve both short-term and long-term outcomes at any educational institution.
As with most things in life, the truth does not lie at one extreme or the other but somewhere in the middle. AI and AI-content detection are simply too new for us to be making such sweeping statements yet. Those statements are big, bold, and attention-grabbing, yes.
Sexy headlines may make for lots of clicks, but I would caution us instead to keep our assumptions in check. Humans are notoriously terrible at predicting the future of their own technology.