The Tricky Task of Figuring Out What Makes a MOOC Successful

Why traditional metrics like completion rates aren't a good way to evaluate online courses


With a few keystrokes, you can register for a HarvardX MOOC on Computer Science, Genomics, Justice, or China. Hundreds of thousands of people have done so, and in a report that we and our coauthors released this week, we show that only about 5 percent of these registrants go on to earn a certificate of completion in these courses. We could have titled the report: “MOOCs have low completion rates.”

Completion rates in courses, and graduation rates in colleges, have long been important metrics for measuring college success. If students invest time and money into earning college credit and then fail to complete a course, this represents an implicit breach of a commitment made by the students, instructor, and institution alike. If 95 percent of students who enrolled in a residential college course dropped out or failed, that course would rightly be considered a disaster.

After digging deeper into the data, however, we decided that completion rates are at best an incomplete measure, a position that is increasingly shared by many others. We would argue further: at worst, completion rates are a measure that threatens the goals of educational access that motivated the creation of MOOCs.

Our data show that many who register for HarvardX courses are engaging substantially with courses without earning a certificate. In these courses, “dropping out” is not a breach of expectations but the natural result of an open, free, and asynchronous registration process, in which students get as much out of a course as they wish, and registering for a course does not imply a commitment to completing it.

Frames of reference change our interpretation of statistics. A 5 percent completion rate is low for a conventional college course but high for other forms of media with which MOOCs have much in common. One of the first HarvardX courses, JusticeX, was originally produced as a PBS series by WGBH in Boston. Twelve video lectures by Professor Michael Sandel from that series were posted on YouTube in September 2009. The first video has nearly 5 million views. The second has 1.2 million. By the fifth video, views decline to about 200,000 per video. Rather than decry this “5 percent completion rate” as a crisis in public broadcasting, we find it remarkable that Michael Sandel can post a 45-minute video lecture on moral reasoning and get 5 million views. Anyone who watched one video or all twelve is a little wiser for the effort.

But completion rates may not only miss the point; they may also undermine foundational principles of open online learning. For example, consider the impact that the Colbert Bump had on HarvardX this past summer.

On July 24, 2013, Stephen Colbert, of Comedy Central’s Colbert Report, welcomed Anant Agarwal, the president of edX, a MOOC platform that hosts courses from HarvardX and MITx.  Overnight, edX certification rates for daily registrants in Harvard MOOCs dropped noticeably, from a five-day average of 3.2 percent to an average of 2.5 percent.

How do we know that the Colbert Bump was responsible?  Daily registrations in HarvardX MOOCs more than tripled that summer night, from 406 on Wednesday to 1,356 on Thursday.  The number of certificates earned by these students doubled from an average of 12 to an average of 24 per day.

By tripling registration rates but only doubling certification rates, Stephen Colbert single-handedly lowered the completion rate for all open HarvardX courses. With a flood of curious browsers from Colbert Nation, hundreds of students explored our courses, and dozens of students ultimately completed them.
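The arithmetic behind this is simple, and a small sketch makes it concrete. Using the daily figures quoted above (registrations rising from 406 to 1,356, certificates earned by those cohorts rising from an average of 12 to 24 per day), a completion rate computed as certificates per registrant falls even though strictly more people registered and more people certified:

```python
def completion_rate(certificates: int, registrants: int) -> float:
    """Certificates earned per registrant, as a percentage."""
    return 100.0 * certificates / registrants

# Daily figures from the Colbert Bump, as reported above.
before = completion_rate(12, 406)     # pre-Colbert Wednesday
after = completion_rate(24, 1356)     # post-Colbert Thursday

print(f"Before: {before:.1f}%")       # ~3.0%
print(f"After:  {after:.1f}%")        # ~1.8%

# Registrations tripled while certificates only doubled, so the
# rate fell -- even though more people learned and certified.
assert after < before
```

The point of the sketch is that a ratio can move in the opposite direction from both of its parts: any event that grows the denominator (curious registrants) faster than the numerator (completers) drives the "success" metric down while total learning goes up.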

Should HarvardX instructors be disappointed that Stephen Colbert tripled registration and doubled certification? They might be, if they believed certification rates had any relevance to open online education. Instead, the Colbert Bump reveals the truthiness of MOOC certification rates.

Certification rates aren’t necessarily false, but they obscure a larger story: a story of the wide variety of learning practices emerging in open online learning environments.

And why were certification rates so low in the summer to begin with? In part, because during the summer, nearly half of registrants were joining courses that were already closed for certification.

That’s right: you can sign up for a HarvardX course after it ends. You’re a dropout the second you register.

HarvardX could boost its certification rate by closing those courses to new registrants or by restricting enrollment to those most likely to complete courses. Instead, it keeps courses open to maximize the number of students who learn something new.

How should we evaluate MOOCs? That depends on what we hope they will accomplish, and here many distinguishable goals blur together: from increasing educational access to disrupting higher education, from screening for talent worldwide to advancing pedagogy in residential classrooms.

We don’t have the answers yet, but as a start, our research points to the necessity of metrics that take initial student intentions and educational backgrounds into account. Nearly all traditional college students who enroll in a course plan to finish it; many students registering for a MOOC don’t. Traditional college students almost never enroll in courses where they have already mastered most of the material; it’s too expensive. But many MOOC enrollees join courses where they are already extremely familiar with the topic: to certify their skills, to test themselves against assessments, or to study a specific portion of a course.

Some MOOC students are even re-enrolling in courses they’ve passed. More than 400 students who earned a certificate in The Ancient Greek Hero in the spring of 2013 enrolled in the same course again in the fall of 2013. Rules at places like Harvard and MIT prevent students from re-enrolling in courses they’ve already passed, even in topics like ancient Greek literature that can reward a lifetime of study.

As this research effort continues, our hope is that our frames of reference for MOOCs can change.  The story of MOOCs is not going to be told with conventional statistics borrowed from brick-and-mortar classroom models. Rather, our research describes an emerging learning ecosystem, one where enrollment can be casual and nonbinding, learning happens asynchronously, and registrants come from all countries in the world, with diverse intentions and patterns of learning. The metrics we choose should respect their intentions and encourage their learning.