What makes a struggling online learner struggle?

  • lack of attention?
  • lack of retention?
  • lack of motivation?
  • lack of language comprehension?
  • fear of making mistakes?
  • lack of prior knowledge?

Are these always the causes of learners struggling to learn online, or are they sometimes the effects of something else, something for which we never blame anyone, least of all ourselves?

Here are some further questions for those of us who develop and provide online learning resources, about how we address (or rather, fail to address) the same issues:

  • what causes us to fail to notice struggling?
  • do we tend to pick it up too late?
  • do we handle it badly when we detect it?
  • how can we design systems which detect struggling earlier?
  • how can we design systems which cope better with strugglers?

Here are some hypotheses about struggling learners and online teaching (they’re essentially just guesses, in the form of claims about struggling, aimed at identifying what needs to be checked out):

  • struggling is addressed much more effectively when observation of (and interaction with) the learner is taking place continuously throughout the learner’s online teaching and learning experience (a minimal sketch of such continuous observation follows this list)
  • struggling is rarely completely eliminated by:
    • just ‘improving’ the course materials
    • breaking the learning exercises into ever smaller pieces
    • testing ever more intensively, using yet more ‘prepared’ exercises
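
To make the first hypothesis slightly more concrete: here is a minimal sketch, in Python, of what ‘continuous observation’ might look like, recording fine-grained interaction events rather than only end-of-exercise scores. The LearningSession structure and all the event names are illustrative assumptions, not a description of any existing system.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LearningSession:
    """Record fine-grained interaction events continuously,
    rather than only end-of-exercise outcomes."""
    learner_id: str
    events: list[tuple[float, str, str]] = field(default_factory=list)

    def observe(self, kind: str, detail: str = "") -> None:
        """Append a timestamped event to the session trail."""
        self.events.append((time.time(), kind, detail))

session = LearningSession("learner-042")
session.observe("exercise_opened", "fractions-03")
session.observe("hint_requested")                 # a weak struggle signal
session.observe("answer_changed", "3/4 -> 2/4")   # hesitation, perhaps
session.observe("exercise_completed", "passed")
# Even a 'passed' exercise leaves a trail which later analysis (or a
# human attendant) can mine for early signs of confusion.
```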

A failure of (artificial?) intelligence

I believe that answering the challenges which struggling poses may require the introduction of ‘questioning processes’: interactions which engage the learner, which have the characteristics of ‘natural’ conversations, and which discuss the learner’s learning experiences. Such processes may be at, or even beyond, the limits of our current AI capabilities.

The learner needs to be encouraged, in imaginative and context-sensitive ways, to ask questions (drawing on a curiosity which the teaching system itself needs to stimulate and inspire) before, during and after each learning step, and to ask such questions even when they believe they have understood what the course materials are meant to be teaching them.

This ‘encouragement of question asking’ is aimed at increasing the likelihood that the system will detect the kinds of misunderstandings and confusion which will lead to the learner getting ‘stuck’ at a later stage.

Similarly, the teaching system itself needs to be able to ask context-sensitive questions based upon its own real-time ‘observations’ regarding ‘how the learner responded’ to the exercises as the learner completes them.
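
As a very rough illustration of what such observation-driven questioning might look like, here is a minimal rule-based sketch in Python. Everything in it, including the observation fields, the thresholds and the question wordings, is a hypothetical assumption; a real system would need far richer dialogue generation than hard-coded rules:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExerciseObservation:
    """Hypothetical record of how a learner responded to one exercise."""
    exercise_id: str
    seconds_taken: float
    attempts: int
    answer_changes: int   # how often the learner revised their answer
    passed: bool

def follow_up_question(obs: ExerciseObservation,
                       typical_seconds: float) -> Optional[str]:
    """Choose a context-sensitive question from one observation."""
    if obs.passed and obs.seconds_taken > 2 * typical_seconds:
        # Correct, but unusually slow: probe for hidden confusion.
        return ("You got there in the end. Which part of this "
                "exercise slowed you down the most?")
    if obs.passed and obs.answer_changes >= 3:
        # Correct only after much revision: the answer may be a guess.
        return ("You changed your answer a few times. Can you say, in "
                "your own words, why the final one is right?")
    if not obs.passed and obs.attempts == 1:
        return ("Before you retry: what do you think the exercise was "
                "asking you to do?")
    return None  # no evidence of struggle worth interrupting for
```

Note that two of the three rules fire on exercises the learner passed; as argued above, questioning only the failures is exactly what lets confusion go undetected.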

Problems with interpreting the learner’s behaviour

The more ‘open’ and ‘free-form’ the teaching exercises allow the learner’s responses to be, the more challenging the job of ‘interpreting the learner’s behaviour’ becomes; but addressing this ‘behaviour interpretation requirement’ is likely to be essential in detecting and addressing struggling.

Whereas the symptoms of struggling may be easy to detect (learner dropout, task-completion failure, or simply low test scores), the causes which underlie those failures are often much harder to determine in an automated teaching environment.
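
Those easy-to-detect symptoms can be flagged from the coarse data a typical platform already has. The sketch below (with invented field names and thresholds) makes the point: nothing in such a record reveals why the learner is failing.

```python
from dataclasses import dataclass

@dataclass
class LearnerRecord:
    """Coarse signals most platforms already collect."""
    days_since_last_login: int
    tasks_attempted: int
    tasks_completed: int
    recent_test_scores: list[float]   # most recent first, each in 0..1

def struggle_symptoms(r: LearnerRecord) -> list[str]:
    """Flag the easy-to-detect symptoms; the underlying causes
    (e.g. an earlier comprehension failure) are invisible here."""
    symptoms = []
    if r.days_since_last_login > 14:
        symptoms.append("possible dropout")
    if r.tasks_attempted > 0 and r.tasks_completed / r.tasks_attempted < 0.5:
        symptoms.append("task-completion failure")
    recent = r.recent_test_scores[:3]
    if recent and sum(recent) / len(recent) < 0.4:
        symptoms.append("low test scores")
    return symptoms
```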

In ‘blended’ solutions, where a combination of automated and human-attended teaching is involved, the system can be designed so that the human teacher is there to detect and deal with learners who struggle with the automated course materials.

Even blended learning can still leave strugglers struggling

But even in the blended case, it’s still quite possible that the student’s struggles will be picked up later than necessary, especially if the only indications of struggle that are detected (and to which the human attendant is alerted) are ‘task completion failure’ or low ‘end of exercise’ learning performance evaluation scores.

Often, in the middle of a relatively lengthy human-attended but otherwise-automated teaching process, something like a task-completion failure can indicate struggle and trigger a remedial response from the human attendant. But the struggle may have been caused by an ‘undetected comprehension failure’ which the learner experienced (without being aware of it) during an exercise ‘successfully completed’ much earlier in the teaching process, so the underlying problem may have gone unnoticed for many lessons.
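
One way a system (or an attendant’s tool) might narrow down where such an undetected failure occurred is to walk back through the prerequisites of the concept that finally failed, and list the earlier exercises that were ‘passed’ on task completion alone, without any questioning. The graph and the concept names below are invented for illustration:

```python
# Hypothetical prerequisite graph: concept -> concepts it depends on.
PREREQS: dict[str, list[str]] = {
    "long_division": ["multiplication", "subtraction"],
    "multiplication": ["addition"],
    "subtraction": ["addition"],
    "addition": [],
}

def suspect_concepts(failed_concept: str,
                     shallowly_verified: set[str]) -> list[str]:
    """List prerequisites of a late failure that were 'passed'
    without any questioning: candidate sites of the undetected
    comprehension failure described above."""
    suspects, stack, seen = [], [failed_concept], set()
    while stack:
        concept = stack.pop()
        for prereq in PREREQS.get(concept, []):
            if prereq not in seen:
                seen.add(prereq)
                if prereq in shallowly_verified:
                    suspects.append(prereq)
                stack.append(prereq)
    return suspects

# A learner fails 'long_division' in lesson 12; every earlier exercise
# was marked complete on task completion alone:
print(suspect_concepts("long_division",
                       {"multiplication", "subtraction", "addition"}))
# -> ['multiplication', 'subtraction', 'addition']
```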

Even in human-attended blended-learning scenarios, if such a ‘late-detected’ comprehension failure had instead been caught much earlier, a very significant amount of time and effort could have been saved: the time and effort the learner must now spend redoing all the preceding lessons once the struggle-causing problem has finally been identified and overcome.

In practice, the outcome of a serious ‘late detection of comprehension failure’ is often worse than just a redo: it can cause learners to convince themselves that they will be unable to complete the course, and to drop out before the human attendant can even attempt to remediate the problem.

So, what can we do about this?

Is it time to start exploring whether we can structure automated teaching resources so that they are entirely permeated by ‘discursive processes’ (rather than offering nothing more than the traditional, mostly struggle-insensitive ‘task-completion-based evaluation’, ‘test questions’ and ‘hints’), so that strugglers are identified earlier and helped more readily and effectively?

As we deploy more and more online learning (especially as many online resources are made freely available), and as more and more people are taught many subjects almost exclusively online (often because there is no alternative), the number of ‘online learning failures’ seems doomed to escalate dramatically unless we change the way we cater for strugglers.

Even when online learning strugglers are successfully identified using traditional methods such as task-completion failures and test scores, remediation of any sort, especially human-attended remediation, is rarely, if ever, available at a rate affordable to those who could only ever make use of free online teaching resources.

What to do if AI fails to provide the answer

Even if we ultimately find that AI cannot help us with this problem in the short or medium term, we probably ought to consider how we might construct blended learning systems which give human attendants tools that help them ‘act as if they were the AI’. Such tools would help the attendant ‘ask the right kinds of questions at the right time’, and would equip them to use the answers they receive from learners to determine what has gone wrong in the learning process and how to try to put it right, in real time, while the learner is struggling; or better still (by asking the kinds of questions which can uncover ‘unexpressed confusion’) before the learner even starts to struggle.
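
A minimal sketch of what such an attendant-facing tool might look like follows, building on the kinds of struggle signals sketched earlier; all the signal names and question wordings here are illustrative assumptions:

```python
# Map detected struggle signals to questions an attendant might ask.
SUGGESTED_QUESTIONS = {
    "slow_but_correct": [
        "Which step took you the longest, and why?",
        "Was anything in the wording of the task unclear?",
    ],
    "repeated_failure": [
        "Can you describe the task in your own words?",
        "Which earlier lesson does this one build on, do you think?",
    ],
    "long_inactivity": [
        "Where in the course did things last feel comfortable?",
        "Is anything outside the course making it hard to continue?",
    ],
}

def prompt_attendant(learner_id: str, signal: str) -> None:
    """Surface questions for the human attendant to ask *now*,
    while the learner is (or is about to start) struggling."""
    print(f"Learner {learner_id}: signal '{signal}' detected. Try asking:")
    for question in SUGGESTED_QUESTIONS.get(signal, ["an open question"]):
        print(f"  - {question}")

prompt_attendant("learner-042", "repeated_failure")
```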

To struggle is human. To fail to humanise learning software is to fail in our duty to everyone except those who never drop out of our online courses, never fail to complete an exercise and easily earn passing grades; it is to fail that significant but neglected proportion of humanity who sometimes struggle to learn as easily as others, but who could succeed if given a way to voice their difficulties, to be listened to, and to be helped by a system designed to respond to their needs.