Technologists love to think they are innovators. This is certainly true in education tech, where you can hear breathless claims today about how technology is going to transform young people’s experiences of school.
A typical pitch is that tech will improve learning by cleverly giving pupils a personalised experience, ensuring learners get precisely the kind of extra guidance or stretching challenge they need. Instead of pupils receiving instruction only from their teachers or from texts, they can be introduced to concepts through engaging video clips and other media. Meanwhile educators can be freed from mundane tasks like test-marking and can use that extra time to work more professionally with students, guided by advanced insights gleaned by the technology from the learners’ actions.
I think that pitch is actually OK as a summary of some of the helpful things edtech can do.
But is it new? Oh dear, no. It’s more than a hundred years old.
As Audrey Watters makes clear in her terrific new book Teaching Machines, many of the edtech ideas being promoted today were also developed in the 1920s and then in the 1950s. Indeed, the history goes back even further, even before 1886 when a patent was filed for a machine to teach spelling.
Many modern pitches focus on how the tech offers learners a “personalised” or “individualised” learning experience because it is “adaptive”. Yet Watters has dug out essays and letters that show how edtech innovators were promoting precisely the same benefit far earlier, such as Sidney Pressey describing his Automatic Teacher in 1926. She writes:
Unlike the “mass education” of the radio or the film projector, Pressey claimed his Automatic Teacher would foster a more individualized classroom. He recognized that “some sentimentalists” would resist “education by machine.” But he insisted that machines would actually free the teacher “from the mechanical tasks of her profession — the burden of paper work and routine drill — so that she may be a real teacher, not largely a clerical worker.”
The first mechanical teaching machines were wooden or metal boxes with a small window to see a physical reel of content, and then became a bit more like typewriters. Yet even then they were being designed so learners could progress depending on how correctly or incorrectly they had answered questions.
A high school student who took part in a late 1950s pilot of one such machine is quoted summarising the benefits as: “The eggheads don’t get slowed up; the clods don’t get showed up.”
I had joked recently to a colleague that the only way to make print textbooks adaptive was if they were structured like the “Choose Your Own Adventure” series. What I’d not realised until reading Teaching Machines was that precisely such textbooks had existed in the 1950s — Norman Crowder’s TutorTexts — and that those “scrambled textbooks” seem to have inspired the later fictional versions, rather than the other way round.
Many other ideas used in learning adaptivity today were also clearly being developed in the first half of the 20th century, including offering wrong answers designed to uncover learners’ misconceptions then following up with instruction around that misunderstanding.
Even more advanced forms of adaptivity — more in line with what we now expect from AI-based systems — were being predicted in the mid-20th century by Simon Ramo, vice-president at an aeronautics manufacturer. Watters quotes Ramo’s 1957 essay for Engineering and Science in which he wrote that teaching machines should be:
“prepared to take a single principle and go over it time after time if necessary, altering the presentation perhaps with additional detail, perhaps trying another and still another way of looking at it, hoping to succeed in obtaining from the student answers that will indicate that the principle is reasonably well understood before it goes on to the next one.”
Parallels beyond personalisation
The parallels with the past go beyond education technologists proclaiming their systems offer personalised learning. Further similarities with the modern day include the excitement around using video clips for instruction. The inventor Thomas Edison was predicting in 1913 that cinema would replace teacher-led instruction, arguably making him one of the first proponents of what would now be called “flipped learning”.
There are also early examples of the annoying habit some technologists still have of blaming any lack of uptake of their tech by schools on teachers being too conservative (which was as wrong and crap an excuse then as it is now).
An area I found personally fascinating was the past debate over how much the construction of teaching machines should be separated, or not, from the work of those authoring the content. I should note I have an angle on this as I’m head of product for a company that owns a group of European education publishers.
Some edtech companies seem to believe that if you create a learning platform with a cool enough user experience (UX) then the content-makers should flock to you and fit their material to your system. If you build it, they will come.
But I have come to join those who believe that truly great learning experience design (LXD) is created in the space that overlaps between the UX of the platform and the instructional design. That doesn’t mean education platforms have to be locked to a very particular version of instruction, but it does mean that if we are to create new, better forms of learning experiences the content and the platform should be developed in harmony, rather than isolation.
So I found it fascinating to read how some of the early inventors came to a similar conclusion.
The teaching-machines innovator B.F. Skinner ran into problems creating his first teaching machines, both with a typewriter company and with IBM, so in 1958 he began to consider working with the publishers Harcourt Brace.
He was encouraged to make this move by R. E. Zenner, the vice president of the Union Thermo-Electric Corporation, who advised him: “It helps to know that someone is willing to make the blades for the razor.”
Although Skinner also ran into some problems working with publishers, he would continue to recommend it for pragmatic as well as educational reasons. “It seems to me that it would be very difficult to develop an adequate distributing organization which could compete with a well established publisher,” he wrote a year later.
The character who emerges as the closest to a hero in Teaching Machines turns out to be Susan Meyer Markle, an under-appreciated member of Skinner’s team. She actually got out into schools, worked with teachers, and created some of the first maths teaching programmes herself, pioneering digital instructional design in the process. Her work directly connected the practicalities of teaching a specific subject, and presenting the correct kind of content, to the design of the devices.
Cause for pessimism or optimism?
Teaching Machines is a necessary cold bucket of water in the face for anyone in edtech who believes all they touch is ground-breaking. It forces us to recognise what has happened in the past, and to learn from it.
Personally I do not find that dispiriting at all. There are plenty of good ideas that have taken time before they could be realised properly and at scale. The fact that Porsche designed and constructed an electric car back in 1898 does not undermine the recent successes that Tesla and others have had creating them and utterly transforming the car market.
It could be argued that AI may do for learning technology what the lithium-ion battery has done for electric cars. However, that parallel also requires us to acknowledge that such components come with negatives we should not ignore.
For Watters’ book is a warning as well as a history lesson. She has for a long time been someone I have described as “edtech’s conscience”. Whenever someone new to education technology asks me what I’d recommend they read, I name two books: Class Clowns by Jonathan A. Knee, to help them understand how edtech can fail commercially, and Watters’ The Monsters of Education Technology, to help them understand how edtech can fail ethically.
In Teaching Machines she suggests we should question, and feel uneasy about, why so much modern edtech appears to be a direct descendant of machines like Skinner’s.
“While autoinstructional technology may prove invaluable for improving the efficiency of factual and skill-type learning, we must appreciate the limitations as well as the potentialities of these devices,” she writes.
Today, as then, we can end up designing learning technology around what machines can most easily measure. Hence the continued dominance of multiple-choice questions in many systems.
Our eagerness to guide learners to precisely the right next learning experience may be because we have a wonderful intention — to give any child something like the personal tutoring only the very richest families could afford in past centuries. But if we are not careful we could inadvertently treat learners like scientists conditioning mice to work through a maze.
Watters underlines that such a behaviourist approach was precisely Skinner’s background; before focusing on how children could learn he had previously achieved fame through such quirky experiments as training pigeons to play table tennis. Behaviourism, she implies, is baked into the DNA of today’s personalised teaching machines, and it still dominates over approaches that open up possibilities for learners to have more agency (such as Seymour Papert’s Logo system with its programmable turtle).
Her warning is a timely one. There is a great deal that better edtech will do to improve the experience of learners and educators, and that will certainly be assisted by AI. However, those of us working on learning technology must recognise the limitations of education-by-algorithm and strive to preserve students’ autonomy, creativity, and dignity.
Teaching Machines: The History of Personalized Learning is published by MIT Press.