The final session of the day is with Reuben Tozman of edCentre Training Inc. He is talking about why learning professionals should think of their work as science, then focus more on data as they design their learning initiatives…
In the learning world, we often don’t measure the effectiveness of our “learning”. Most of the people present today measure “participant satisfaction” for a specific training module or, at best, the knowledge those participants acquired, or can remember in a test. Some learning people go further and evaluate at Kirkpatrick level 4 to see if business performance has actually improved. But according to Tozman, we very rarely evaluate whether it was our “learning” that made the change in performance and, if so, which part of it and how. If we could get that far with evaluating the “learning” we deliver, we could strip away the parts that have no impact (finding the minimum effective dose of learning) and, more importantly, change the right things to make it work and ensure the performance results we seek.
Why aren’t we doing this already?
According to Tozman, part of the reason we are not doing this is that learning people do not always see themselves as “scientists” in the workplace. They don’t consider what they are doing as “experiments”, and they don’t have clear data models in mind when developing “learning”.
We tend to see ourselves as final-solution providers who drop a “learning solution” into the world and assume it will just work. We are expected to bring solutions rather than experiments. Half of the time we don’t even check whether performance improved, and the other half of the time we don’t change anything even when performance stays the same. We just “failed”.
Tozman suggests that we should change our approach to one where we, the learning professional, do some real science: State the problem, form a hypothesis, create an experiment to test the hypothesis, measure the experiment results and form conclusions about the hypothesis. And if we prove the hypothesis wrong, we move onto testing the next one.
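The loop Tozman describes can be sketched in a few lines of Python. Everything below is invented for illustration (the baseline pass rate, the hypothesis names and their measured outcomes are assumptions, not real data); the point is only the shape of the cycle: state a baseline, run one experiment per hypothesis, measure, and keep only the hypotheses the data supports.

```python
# A minimal sketch of the "learning as science" cycle:
# state the problem, hypothesise, experiment, measure, conclude,
# and move on to the next hypothesis when one fails.

def run_experiment(intervention, measure, baseline):
    """Apply one candidate intervention and check whether the
    measured performance beats the baseline."""
    result = measure(intervention)
    return result > baseline

# Hypothetical numbers: the baseline quiz pass rate is 60%; each
# candidate intervention carries a simulated measured outcome.
baseline = 0.60
hypotheses = [
    {"name": "longer videos",   "measured": 0.58},
    {"name": "chunked content", "measured": 0.71},
]

supported = [
    h["name"] for h in hypotheses
    if run_experiment(h, measure=lambda h: h["measured"], baseline=baseline)
]
print(supported)  # ['chunked content'] — only hypotheses that improved performance survive
```

A disproved hypothesis is not a failure here: it is the signal to move on to testing the next one.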
To achieve that kind of scientific approach, we have to be able to design learning with data in mind.
What exactly do we mean by learning science?
If an experiment is going to effectively measure against a specific hypothesis, it needs a clearly defined data model with measurable data points.
For example, imagine the following: our problem is that new hires leave quickly and report low satisfaction. Our hypothesis is that people who are engaged with the company’s vision and values stay longer and are more satisfied. Our experiment is a redesigned orientation program, built so that we can measure that engagement.
What does it mean to “design for data”?
In the experiment above, the “data model” gives us our definition of “engagement”: “People who are engaged proactively seek out information about company vision and values”. The “data points” we will measure might be “types of content chosen”, “time spent looking at that content”, “number of outbound links clicked from within one particular chunk of content”, etc.
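As a rough sketch of what capturing those data points might look like, here is a small Python aggregation over a hypothetical event log. The field names, learner names and numbers are all assumptions made for illustration, not part of any real system:

```python
from collections import defaultdict

# Hypothetical event log: one row per content interaction during orientation.
events = [
    {"learner": "ana", "content": "vision-intro",     "seconds": 120, "outbound_clicks": 3},
    {"learner": "ana", "content": "values-deep-dive", "seconds": 300, "outbound_clicks": 5},
    {"learner": "bo",  "content": "vision-intro",     "seconds": 15,  "outbound_clicks": 0},
]

def engagement_data_points(events):
    """Aggregate the measurable data points per learner: which content
    they chose, total time on it, and outbound links clicked."""
    totals = defaultdict(lambda: {"seconds": 0, "outbound_clicks": 0, "contents": set()})
    for e in events:
        t = totals[e["learner"]]
        t["seconds"] += e["seconds"]
        t["outbound_clicks"] += e["outbound_clicks"]
        t["contents"].add(e["content"])
    return totals

points = engagement_data_points(events)
print(points["ana"]["seconds"])  # 420 — total time "ana" spent on vision/values content
```

Once the data points exist per learner, they can be compared against the retention and satisfaction outcomes the hypothesis predicts.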
When we re-create the orientation program, we might break all the possible content on company vision and values down into chunks and give learners the chance to self-orient through the options (if they want to). What we are hoping to create is an effective experiment that tests our hypothesis. If we can watch what learners do and the hypothesis holds, then we can act on it and eventually see better bottom-line performance results (better retention and more satisfaction).
How will this help to create better learning?
If we do all this, we will firstly know that we are working on the right things, because we took the time to validate our hypotheses about the cause of poor performance. Secondly, we will be able to design something that we know is effective enough to cause the desired positive change in performance (in this case, actually improving our people’s interest in company vision and values). We will use the same data-driven, scientific approach to design learning initiatives with lots of measurable data points, so that afterwards we can make associations between what we did and how it impacted bottom-line performance.
This is a different approach from the traditional design process. It will create real performance improvement, and we will be able to say with confidence that what we did had an impact.
If learning people get into the habit of building small, measurable data points into learning that correspond to well-thought-out hypotheses, we will be able to collect more and more data showing the link between what people learnt and how it impacted performance. Using tools like the “Tin Can API” (now known as the Experience API, or xAPI), we will be able to collect and analyse many chunks of data from different systems and draw sound conclusions about the link between learning and performance… leading to real improvement.
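For illustration, here is roughly what a single xAPI (“Tin Can”) statement looks like: an actor–verb–object record of one learning event, which a learning record store can then aggregate across systems. The actor/verb/object/result structure follows the xAPI specification (and the “experienced” verb is one of the standard ADL verbs), but the person, email address, activity id and duration below are made up:

```python
import json

# One hypothetical xAPI statement: "Ana experienced the company-vision
# content for five minutes" — one of the data points from the
# orientation experiment, expressed in a shareable, analysable form.
statement = {
    "actor": {
        "name": "Ana Example",
        "mbox": "mailto:ana@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.com/orientation/company-vision",
        "definition": {"name": {"en-US": "Company vision overview"}},
    },
    "result": {
        "duration": "PT5M",  # time spent, as an ISO 8601 duration
    },
}

print(json.dumps(statement, indent=2))
```

Because every system emits the same actor–verb–object shape, statements from an LMS, an intranet and a social platform can be pooled and analysed together.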