Learning Design & Formative Evaluation. OLDSMOOC W7

This blog entry is part of my learning diary for OLDSMOOC.

Learning Design & Formative Evaluation. This is the title of a paper by Thomas Reeves and Yishay Mor for week 7 of the Open University’s OLDS MOOC on 21st Century Learning Design, available (at the time of writing, at least) at http://tinyurl.com/b3oo7pt .

My position is that I don’t design courses but help others to do so, and I often need to evaluate existing courses.  Sometimes these courses exist on my VLE and have been left by visiting lecturers or academics at my university who have moved on.  Other courses are being run by academics who seek guidance or advice about how to improve them.  99% of the courses I see have already been ‘designed’ before I have any involvement with them.

My initial response is normally to review the course and produce a report, focussing on:

  • usability,
  • accessibility, 
  • navigation,
  • how it displays on different devices,
  • communication (tutor to student, and student to student),
  • copyright,
  • pacing,
  • time on task,
  • learning outcomes and
  • assessment.

Recently I have been looking into a number of Quality Assurance checklists to see whether it would be better to use one of these, and I am curator of a ScoopIt site called Quality Assurance in Online Courses. However, having read a blog post by David Jones entitled Compliance Cultures and Transforming the Quality of e-Learning, I am wary of imposing a checklist on academics.

Some of the Quality Assurance documents I have been looking at recently.

So, week 7 of OLDS MOOC appears to be just what I am looking for – an investigation into how to evaluate online courses. However, OLDS MOOC is about learning design and talks about learning designers, whereas I work with academics.  I know that in week one, when we tried to define what learning design is, we justified the use of the term by saying that only by using good general design principles could we order a haphazard collection of materials and activities into a meaningful course of study. Nevertheless, the academics I work with are more concerned with their research and their day-to-day teaching – the delivery of the courses – than with the underlying design of the course.  They might argue that ‘learning design’ is all very interesting but they just do not have the time to engage with it.  They are flying by the seat of their pants and hope that my e-learning team can carry out some maintenance between flights so that all will be well the next time they take off.

The above-mentioned paper deals with how one can conduct a formative evaluation of an online course (actually this is my reading of it – the authors only talk about ‘learning design’) and suggests a number of ways to collect information in order to improve the course (design).  This is already slightly at odds with my normal practice, which is to do a summative evaluation. However, I can see that a formative approach would be preferable. What I also need to see is that it is practical.

This blog post then is my initial reaction to the Reeves and Mor paper.

Criteria for formative evaluation.

Reeves and Mor say that no universal criteria have been established for the formative evaluation of learning designs, but that some of the most commonly considered are:

  • functionality
  • usability
  • appeal
  • accessibility
  • effectiveness.

The inclusion of appeal (“how much learners like it and why”) does not normally appear in the checklists I have been considering, but it is an obvious question to ask, and indeed we do include mid-point and end-point module evaluations in all our online courses. However, I must remember to look at these and see what they address, as they were brought over from our face-to-face courses.

Collecting Data Systematically.

“The following formative evaluation methods are essential…”

  • peer review
  • expert review
  • learner review
  • usability testing
  • alpha, beta, field testing of prototype.

What strikes me here is the need to sell the concept – sell the benefits – of peer review and expert subject-matter review to those academics delivering the courses.  I can supply expert (web design, usability testing) evaluation, but there is a need to engage the academics systematically.

The paper also supplies examples of a number of evaluation tools.  One of these caught my eye as I hadn’t seen it before: a set of instructions for conducting user observations, which appears to be a well-thought-out rubric.

This has given me something to think about and I might explore the possibility of using it with IT students on one of our online courses.  (The IT students are not students on the course but might be interested in being part of the testing).
