Using details from conversations to enable further education
Detailing the components needed for learning analytics in an educational speculation platform
Outside of small talk, most conversation is filled with rich indicators of someone’s personality and interests. Unfortunately, because of the poor ratio of teachers to students in an average school, the caring attentiveness required to extract meaning from these conversations and apply it to the learners is very difficult to sustain.
Essentially, the ability a mentor has to hear you talking about plankton and tell you to consider looking into an obscure subfield of marine biology is rendered impossible as the teacher scrambles to maintain coherence and order in a class of 30 gradually fading students.
The solution is not a simple recommendation engine, however. Creating a bot that can analyze what you say and tell you what you might be interested in, with the careful attention your character’s nuances require, is currently impossible. If such a system were built and deployed at full capacity today, we would end up with a dystopian, totalitarian career-bot funnelling all our talent into one of about ten fields of interest; think of the echo-chamber conversation we are currently having about Facebook and apply it to job recommendations. Essentially, if a student mentioned art twice and science once across five conversations, the system would start to believe that the student is meant to be an artist.
Before we start thinking about how to fix the recommendation engine, I first want to look at the ways students can explore the system’s content, and which aspects of that content are most useful to understand for a meaningful assessment report.
Exploring Connections
A new way to traverse a thread of stories on a digital platform.
Currently, many tools are focused on keeping users within their own beliefs. By placing alternatives in a highly visible manner, learners are able to see opposing views, as well as connections between stories that may not otherwise have been obvious.
One of the key things about speculations is the thread of events that leads to your projection. A future event D is believed to be more plausible if it has a coherent track through the events A, B, and C that precede it, even if only A is fact and B and C are themselves speculations. At the very least, it will lead to a far more interesting discussion than if someone were simply to create speculation D having heard only A.
The ability to explore connections among student submissions will be crucial for building a picture of what sorts of stories today’s youth believe will become our future realities. From these connections, story threads will emerge, and both we and the students themselves can see which threads they and their peers engaged with.
A student can then, in theory, approach those who she believes will be able to reinforce her position, if that is what she is looking for, or otherwise challenge it.
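To make the idea of a thread a little more concrete, here is a minimal sketch of how speculations and the events they build on could be stored and walked as a simple chain. Every class name, field, and example value below is an illustrative assumption, not the platform’s actual data model.

```python
# Minimal sketch of a speculation thread as a chain of linked events.
# Class names, fields, and example content are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Speculation:
    id: str
    author: str
    text: str
    is_fact: bool = False                           # True for anchor events like "A" that are established fact
    builds_on: list = field(default_factory=list)   # ids of the preceding events or speculations


def thread_for(spec_id, index):
    """Walk backwards from a speculation to recover the chain of events it rests on."""
    spec = index[spec_id]
    chain = [spec]
    while spec.builds_on:
        # Follow the first parent for a simple linear thread (A -> B -> C -> D).
        spec = index[spec.builds_on[0]]
        chain.append(spec)
    return list(reversed(chain))


# Example: D is more plausible because it traces back through C and B to the fact A.
a = Speculation("A", "teacher", "Solar panel costs fell sharply over the last decade.", is_fact=True)
b = Speculation("B", "jenny", "Solar roofs become standard on new suburban homes.", builds_on=["A"])
c = Speculation("C", "jenny", "Home batteries get cheap enough for overnight storage.", builds_on=["B"])
d = Speculation("D", "jenny", "By 2029, suburban grid usage drops noticeably.", builds_on=["C"])

index = {s.id: s for s in (a, b, c, d)}
print(" -> ".join(s.id for s in thread_for("D", index)))  # A -> B -> C -> D
```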
Connect Sources
With an ever-growing bank of content on the web, it can be difficult to understand which articles are truly contributing to a learner’s understanding and which are simply based on false assumptions.
By building in the ability to dynamically link sources to the text of a post in a way that allows them to be explored while reading, learners will develop healthy fact-checking habits, despite the platform’s focus on fiction.
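As a very rough illustration of what “dynamically linking” could mean at the rendering level, the sketch below weaves source links into a post’s text so a reader can follow them mid-read. The phrase-to-URL mapping and the example URL are made up purely for illustration.

```python
# Sketch of weaving source links into a post's text so readers can check
# them while reading. The phrase-to-URL mapping below is an assumption.
import html


def link_sources(text, sources):
    """Wrap each cited phrase in an anchor tag pointing at its source."""
    rendered = html.escape(text)
    for phrase, url in sources.items():
        rendered = rendered.replace(html.escape(phrase), f'<a href="{url}">{html.escape(phrase)}</a>')
    return rendered


post = "The Solar Roof announcement suggests suburban grid usage could fall by 2029."
sources = {"Solar Roof": "https://example.com/solar-roof-announcement"}  # illustrative URL
print(link_sources(post, sources))
```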
Another problem with the current paradigm of discussing information on digital platforms is how easy it is to support a position without showing why you do, or without contributing anything to the robustness of that statement.
In certain contexts, for example liking an inane image of a dog online, that is fine. In an educational context, it could become incredibly dangerous to simply approve of another student’s statement without saying what specifically one agrees with. Take someone arguing for racism because they believe there are too many immigrants taking jobs away from locals. If another person simply posts an “I agree”, it would be impossible to discern what specifically is being agreed with. It could be that they concur with the entire statement, but it could also be that they agree about the lack of jobs for locals without endorsing the racism. “Like,” “heart,” and “upvote” all serve some purpose, but being educational is rarely where their strengths lie.
This is why, for this system, a very powerful mechanism to enable and verify agreement on a particular statement would be to have contributors to the platform link citations from external sources as well as other stories that occur before and after the statement in question.
E.g. if a student, Jenny, claims that in 2029 there will be a notable drop in electric grid usage in suburbs around North America due to Elon Musk’s Solar Roof technology, that statement would gain a kind of “rank” if another student links to articles from The Washington Post in which Musk announces the tech. Even better, another student might link to the original scientific research paper for that particular type of solar cell. These citations should be immediately viewable alongside the statement written by the original student. Now let’s say another student, Mike, comes across Jenny’s post and disagrees. The disagreement would hold little weight if it were simply an opinion, but if Mike, or someone else who agrees with Mike, links the counterargument to a scientific paper or an opinion piece by a qualified person, then we can safely say that we have two reasonable possibilities for what may happen to electric grid usage in 2029. Through this, students would be engaging in a meaningful learning experience based not merely on the opinions of their instructors, but also on their own critical reasoning.
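To make the “rank” idea a bit more tangible, here is a rough sketch of how supporting and challenging citations could be tallied into a simple score. The weighting scheme, class names, and URLs are my own assumptions for illustration, not a specification of the system.

```python
# Rough sketch of ranking a speculation by the citations attached to it.
# The weights, field names, and example URLs are illustrative assumptions.
from dataclasses import dataclass, field

# Heavier weight for primary literature than for news coverage or plain opinion.
SOURCE_WEIGHTS = {"research_paper": 3, "news_article": 2, "opinion_piece": 1}


@dataclass
class Citation:
    url: str
    kind: str        # one of SOURCE_WEIGHTS
    supports: bool   # True if it backs the statement, False if it challenges it


@dataclass
class Speculation:
    author: str
    text: str
    citations: list = field(default_factory=list)

    def rank(self):
        """Net evidence score: supporting citations add weight, challenging ones subtract it."""
        return sum(
            SOURCE_WEIGHTS.get(c.kind, 1) * (1 if c.supports else -1)
            for c in self.citations
        )


jenny = Speculation("Jenny", "Suburban grid usage drops notably by 2029 thanks to solar roofs.")
jenny.citations.append(Citation("https://example.com/wapo-solar-roof", "news_article", supports=True))
jenny.citations.append(Citation("https://example.com/solar-cell-paper", "research_paper", supports=True))
jenny.citations.append(Citation("https://example.com/grid-counterargument", "research_paper", supports=False))

print(jenny.rank())  # 2 + 3 - 3 = 2: supported, but with a credible counterargument on record
```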
Parsing Dialogue
Daily conversation is loaded with meanings and perspectives.
By processing the natural language that people use while they are engaged with a speculation, we can extract a variety of topics that the learner can explore.
This way, we can ensure that learners both see the relevance of topics as well as realize where their interests may lie.
This is the part of the system that would be most passive. Using some basic sentence parsing and natural language processing, my hope is to extract the topics mentioned in phrases, as well as to understand whether a post is an opinion, a question, or a fact.
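As a hint of what that first pass could look like, the sketch below pulls candidate topic phrases out of a post and makes a crude opinion/question/fact guess. spaCy is just one convenient choice here, and the heuristics are placeholders for whatever the real system would actually use.

```python
# One possible first pass at parsing dialogue: extract candidate topics from a
# post and guess whether it reads as an opinion, a question, or a fact.
# The heuristics are deliberately crude placeholders; spaCy is an assumed choice.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

OPINION_MARKERS = {"think", "believe", "feel", "probably", "maybe", "should", "would"}


def extract_topics(text):
    """Return candidate topic phrases (noun chunks) mentioned in the post."""
    doc = nlp(text)
    return [chunk.text.lower() for chunk in doc.noun_chunks]


def classify_post(text):
    """Very rough opinion / question / fact guess based on surface cues."""
    if text.strip().endswith("?"):
        return "question"
    words = {tok.lower_ for tok in nlp(text)}
    if words & OPINION_MARKERS:
        return "opinion"
    return "fact"


post = "I think the tech they had at the time would have been really durable."
print(classify_post(post))   # opinion
print(extract_topics(post))  # e.g. ['i', 'the tech', 'they', 'the time']
```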
In the image above, you can see a thread of speculations with some highlighted text in the individual posts, as well as matching colours on larger cards on the right. The idea is that, from the cards on the right, students can see which details about a topic mentioned in a particular sentence may be of interest to them.
Let me illustrate a full conversation as a way to explain what I mean.
Let’s assume the instructor asks the class “What would have happened if the Wright brothers never took flight?”
Above you can see a back and forth between a number of students with an interjection from the instructor. Now, look at the coding on the right; those analyses are happening in the background constantly.
On the front-end of the system, a potential view could show phrases that are highlighted and tagged with a topic; once a phrase is clicked, a card with some more in-depth information about the topic appears, and the students can begin to explore that topic separately from the discussion.
E.g. “Yeah and if you look at the tech they had at the time, they would have been really durable.” Clicking on really durable would then bring up a fact card, perhaps from Wikipedia or elsewhere, about designing for durability in aeronautics. From there the students can enter a curiosity black hole, choosing to dig further into either the design aspect or the aeronautics.
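Behind such a click, the fact card could be as simple as a summary pulled from Wikipedia’s public REST API. In the sketch below, the mapping from a highlighted phrase to an article title is hand-waved with a lookup table, since resolving that mapping is the genuinely hard part; the phrase-to-article pairs are purely illustrative.

```python
# Sketch of what clicking a highlighted phrase could trigger behind the scenes:
# fetching a short "fact card" from Wikipedia's public REST summary endpoint.
# The phrase-to-article mapping below is an illustrative assumption.
import requests

PHRASE_TO_ARTICLE = {
    "really durable": "Fatigue_(material)",
    "the tech they had at the time": "Wright_Flyer",
}


def fact_card(phrase):
    """Return a small title + summary card for a recognized phrase, or None."""
    title = PHRASE_TO_ARTICLE.get(phrase.lower())
    if title is None:
        return None
    resp = requests.get(f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {"title": data["title"], "summary": data["extract"]}


card = fact_card("really durable")
if card:
    print(card["title"])
    print(card["summary"][:200], "...")
```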
In the ideal case, a discussion can start about dogs and end up with a participant realizing they are interested in metal work because of various jumps in the convo which went from dogs, to dog safety, to dog collars, to dog tags — and from dog tags, someone kept digging and got fascinated by how dog tags are made.
From these concepts above, it could be deduced that the project has an opinion about what the mechanics of curiosity are, and that it is trying to expose and exploit them for the benefit of education. That would be perfectly fine with me. To this day, we have multiple theories about how curiosity and motivation work, none of which has been definitively proven right. In this system, I am drawing on the work on self-determination theory (SDT) by Richard M. Ryan and Edward L. Deci, as well as some aspects of the constructivist educational philosophy of Jean Piaget, and combining them with experiences of my own where I have had excellent learning moments both by myself and through dialogue with others.
My belief is that by provoking some key elements of curiosity with fictional points in the future and the past, we can get people talking and learning about themselves, and I have already tested with some high schoolers that this is an interesting concept. We should allow machines and humans to each provide what they are best at: logic and nuance, respectively. Machines would merely be objective observers and note-takers, transparently informing students about their learning without abstraction or subjective terms (think along the lines of telling a student how many opinions compared to questions they have formed in a given week, as a simple ratio, rather than stating that they are “opinionated”, which is a culturally loaded word), and humans would converse, debate, and guide.
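A minimal sketch of that kind of plain, non-judgmental weekly summary, assuming posts have already been classified (for instance by something like the classify_post sketch above):

```python
# Sketch of a plain weekly summary: report a raw opinions-to-questions count
# instead of a culturally loaded label like "opinionated".
# Assumes each post has already been classified as 'opinion', 'question', or 'fact'.
from collections import Counter


def weekly_summary(classified_posts):
    """classified_posts: list of labels such as 'opinion', 'question', 'fact'."""
    counts = Counter(classified_posts)
    opinions, questions = counts["opinion"], counts["question"]
    return f"This week you posted {opinions} opinions and asked {questions} questions."


print(weekly_summary(["opinion", "question", "fact", "opinion", "opinion", "question"]))
# This week you posted 3 opinions and asked 2 questions.
```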
For now, I am engaged with designing the screens of an alpha version of this speculative learning tool. I want to be able to provide an idea of what it would look like to use it as a student, both to contribute and to check one’s own analyses, and as a teacher, both to construct the constraints of a speculation and to examine the students’ profiles.
This is a progress update entry in my 7 week final thesis project at CIID. You can read all the other entries here.