The ‘teach yourself to program’ site is just not equipped to have an intelligent conversation about any problems you’re having as you go along. What impact is this having on the dropout rate?
Here are some things that Codecademy doesn’t do but should be able to do (I’ll explain below why this is a serious problem).
A Codecademy online course doesn’t ask you what you thought about:
- the last instruction it gave you
- how you responded to it
- how it responded to what you did
It doesn’t provide the means to have a two-way conversation about:
- what you did in response to the last instruction
- why you did it
- why you did it that particular way
It doesn’t encourage you to ask it questions about what’s happening in your head as you try (or more particularly, if you fail) to understand what it’s trying to teach you.
It doesn’t give you the impression that it would be able to answer questions about what you did or about how it responded, as you go along.
None of these shortcomings is likely to pose a problem for anyone who discovers, when they start using Codecademy, that programming turns out to be something they can learn far more easily than they expected.
These ‘unexpectedly easy programming learners’ are likely to be the very same people who, had they tried to learn to program from a book, would probably also have surprised themselves at just how quickly, easily and effectively they managed to teach themselves.
The question that recognising Codecademy’s limitations poses, at a moment when the desirability and do-ability of teaching people to program is suddenly receiving unprecedented public interest, is whether the tools we are using to address this newly prioritised educational aim are the right tools for the job.
Codecademy’s teaching tools seem to be unaware of the last 40 years’ worth of research into how our minds work when we struggle to learn anything complicated, or of how we can deploy the fruits of that research to help strugglers to overcome those difficulties.
This may not matter to those who sail almost effortlessly through the Codecademy course material, because they would probably not have had any serious problems being taught programming in any other way.
This category of ‘unexpectedly easy programming learners’ is probably extremely well catered for by Codecademy.
However, the assumption that everyone who drops out of Codecademy does so because they are genuinely unsuited to learning to program is highly presumptuous, because it is predicated on the belief that Codecademy’s course material is just as well suited to strugglers as it is to easy learners.
This presumption underestimates the extent to which pursuing a better insight into the difficulties of these Codecademy strugglers (which is what the questions at the top of this article would be aimed at doing) could turn a significant proportion of those strugglers into excellent programmers.
Don’t Codecademy’s vast (and burgeoning) number of users (which will inevitably include a significant proportion of potential strugglers) deserve the benefit of the tremendous advances in computer science that have been made in the many decades since we first started trying to use computers for education?
And since users are now being asked to use Codecademy to develop and provide new courses for teaching additional programming languages and tools, it is painfully discouraging that the courses they design will be founded on such a rudimentary teaching interface with such minimal responsiveness.
If all that happens in a training interaction is ‘succeed, fail, or ask for a hint’ (as is the case in Codecademy), then the assumptions about the precise nature of the insight gained by the learner in the course of the interaction are almost entirely tacit and untested.
This state of affairs, whilst obviously a potential (and preventable) ‘back away and leave’ issue for those who fail, is also a serious ‘dereliction of duty’ towards those who succeed: ‘harvesting experience at the point of use’ would provide invaluable feedback about such things as implicit misconceptions, many of which are otherwise likely to remain concealed from both the learner and the course developer unless or until some public-spirited user feels inclined to go to the trouble of bringing them to the attention of the Codecademy developers.
Codecademy does make such things as Q &amp; A available, but not as an integral ‘context-sensitive’ interactive feature of the online exercises.
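To make the contrast concrete, here is a minimal sketch (in Python, and entirely hypothetical; it is not Codecademy’s code or architecture) of the difference between an exercise loop that records only ‘succeed, fail, or ask for a hint’ and one that also harvests, at the point of use, the learner’s own account of what they were trying to do and why:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Attempt:
    """One attempt at an exercise step."""
    code: str                       # what the learner typed
    passed: bool                    # all a 'succeed/fail/hint' loop ever records
    hint_requested: bool = False
    # Extra, reflective data a two-way conversation could capture:
    learner_intent: str = ""        # "what were you trying to do?"
    learner_reasoning: str = ""     # "why did you do it that particular way?"
    reaction_to_feedback: str = ""  # "what did you make of the response you got?"


@dataclass
class ExerciseLog:
    """All attempts a learner made at one exercise."""
    exercise_id: str
    attempts: List[Attempt] = field(default_factory=list)

    def record(self, attempt: Attempt) -> None:
        self.attempts.append(attempt)

    def possible_misconceptions(self) -> List[str]:
        # Surface failed attempts where the learner's stated intent is known,
        # i.e. exactly the feedback that stays invisible when only pass/fail
        # is stored.
        return [
            a.learner_intent
            for a in self.attempts
            if not a.passed and a.learner_intent
        ]


if __name__ == "__main__":
    log = ExerciseLog("variables-01")
    log.record(Attempt(
        code='confirm("I am ready")',
        passed=False,
        learner_intent="I thought confirm() would print the message to the page",
        learner_reasoning="The previous step used confirm(), so I reused it here",
    ))
    print(log.possible_misconceptions())
```

Even something this crude would give the course developer a stream of candidate misconceptions to investigate, rather than a bare pass/fail count.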
The assumptions implicit in Codecademy’s ‘instruction-dominated learning’ approach are that ‘subsequent exercises will weed out any learning shortfalls’ and that ‘successfully demonstrated competence in exercises at later stages of the course implicitly validates the effectiveness of the teaching method’.
Such assumptions, in the case of something as complex as programming, are rarely the self-fulfilling prophecies that they seem to be, because course dropouts end up being inadequately analysed (and potentially drop out unnecessarily) and ‘successes’ inevitably conceal the unrecognised propagation of countless bad programming habits and needlessly perpetuated misunderstandings.
A concern which is often raised when the ‘interaction shortfall’ issue arises is this:
“Won’t any additional ‘conversation’ (which might involve the teaching system asking the user additional questions beyond the minimum necessary for the learner to complete a coursework exercise, as well as providing the facility and encouragement for the learner to ask questions) ‘get in the way’ and slow the learning process down?”
The answer is simple:
In an insight-impoverished teaching experience, the inexperienced user is in fact usually ‘going too fast’. Strugglers quickly discover that they ‘get stuck/feel confused/can’t cope’, give up, and drop out; those who do not feel that they are struggling reach apparent ‘success’ (in terms of completing the exercise), but with a far less well-constructed mental model of the methods they have just ‘learned’ than they should or could have had.
A problem inherent in teaching programming is this:
Experienced programmers, if they ‘test out the course material’, will almost inevitably have an inappropriate ‘intuitive response’ to interacting with very basic introductory course material, because the easier the learning process appears to them, the more they will instinctively feel that the exercise material ‘insults their intelligence’.
As far as an experienced programmer is concerned, any attempt they make to ‘try out beginners’ learning materials’ will feel to them as if the authors are already ‘making it too easy’, i.e., ‘dumbing it down’.
For them this will typically consist of the authors ‘unnecessarily spelling out the obvious’ and ‘going far too slow and being excessively drawn-out’.
My own theory is that the better someone is as a programmer, the more subliminally/unconsciously ‘selective’ they are as a programming teacher (or programming course material developer), in that they let the ‘suitability of the student’ be determined by the student’s ability to cope with (unintentionally) cryptic course material.
I personally put at least some of this predisposition down to the fact that ‘being able to make sense of another programmer’s poorly written code’ is often seen as an intrinsic part of the skill-set of a programmer.
In other words, good programmers are only good at teaching other ‘natural’ programmers, which leads to the (in my view highly questionable) assumption that only natural (i.e., quick and easy to teach) programmers are ever suitable to be taught programming.
In the light of such a predisposition, it would be surprising if any experienced programmer reviewing the Codecademy introductory course material suspected that it was ‘too hard for beginners’, or ever saw the need to make learning to program ‘even easier’. As far as they are concerned, the level of ‘spoon-feeding’ evident in Codecademy already makes learning to program far easier than their own original learning experience; if anything, they would be much more concerned that Codecademy might be ‘lowering the bar too far’, i.e., potentially ‘letting in those who are not suitable (i.e., bright enough) to be accepted as programmers’.
It seems evident to me that Codecademy (probably unwittingly and certainly unintentionally) perpetuates this assumption by failing to probe deeply, systematically, or comprehensively enough into the learning experience that it is trying to deliver, as it is delivering it.