The final session of the day is with Reuben Tozman of edCentre Training Inc. He is talking about why learning professionals should think of their work as science, then focus more on data as they design their learning initiatives…
In the learning world, we often don’t measure the effectiveness of our “learning”. Most of the people present today measure “participant satisfaction” for a specific training module or, at best, the knowledge those participants acquired, or can remember in a test. Some learning people go further and evaluate (at Kirkpatrick level 4) whether business performance has actually improved. But according to Tozman, very rarely do we evaluate whether it was our “learning” that made the change in performance and, if so, which part and how. If we could get that far with evaluating the “learning” delivered, we could find the minimum effective dose of learning (strip away what doesn’t have impact) and, more importantly, change the right things to make it work and ensure the performance results we seek.
Why aren’t we doing this already?
According to Tozman, part of the reason we are not doing this is that learning people do not always see themselves as “scientists” in the workplace. They don’t consider what they are doing as “experiments” and they don’t have clear data-models in mind when developing “learning”.
We tend to see ourselves as final-solution providers who dump a “learning solution” into the world and assume it will just work. It’s like we are expected to bring solutions rather than experiments. Half of the time we don’t even look to see if performance improved, and the other half of the time we don’t change anything even when performance stays the same. We just “failed”.
Tozman suggests that we should change our approach to one where we, the learning professional, do some real science: State the problem, form a hypothesis, create an experiment to test the hypothesis, measure the experiment results and form conclusions about the hypothesis. And if we prove the hypothesis wrong, we move onto testing the next one.
To achieve that kind of scientific approach, we have to be able to design learning with data in mind.
What exactly do we mean by learning science?
If an experiment is going to measure effectively against a specific hypothesis, it needs a clearly defined data model with measurable data points.
For example, imagine the following: new hires seem disengaged, and retention and satisfaction are lower than we would like. Our hypothesis is that people who are engaged proactively seek out information about company vision and values. Our experiment will be a redesigned orientation program that lets us observe whether they actually do.
What does it mean to “design for data”?
In the experiment above, the “data model” gives us our definition of “engagement”: “People who are engaged proactively seek out information about company vision and values”. The “data points” we measure might be “types of content chosen”, “time spent looking at that content”, “number of outbound links clicked from within one particular chunk of content”, etc.
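As a quick sketch (the record fields and thresholds are mine, not Tozman’s), that data model can be written down as a structure pairing the hypothesis with the data points we commit to measuring:

```python
# A minimal, hypothetical "design for data" model for the engagement
# experiment: the hypothesis plus the data points we will measure.

HYPOTHESIS = ("People who are engaged proactively seek out information "
              "about company vision and values")

# One record per learner session in the redesigned orientation programme.
session = {
    "learner_id": "anna.k",
    "content_chosen": ["company-vision", "values-in-practice"],  # types of content chosen
    "seconds_on_content": 340,       # time spent looking at that content
    "outbound_links_clicked": 4,     # links followed from within one chunk
}

def proactively_sought_vision_content(s, min_seconds=120, min_links=1):
    """Crude, illustrative test of the hypothesis for one session:
    did this learner voluntarily spend time on vision/values content?
    The thresholds are invented for illustration."""
    looked_at_vision = any("vision" in c or "values" in c
                           for c in s["content_chosen"])
    return (looked_at_vision
            and s["seconds_on_content"] >= min_seconds
            and s["outbound_links_clicked"] >= min_links)

print(proactively_sought_vision_content(session))  # True for this sample record
```

The point is not the particular thresholds but that each data point is defined up front, so the experiment can be evaluated against the hypothesis afterwards.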
When we re-create the orientation program, we might chunk down all the possible content on company vision and values and give learners the chance to self-orientate through the options (if they want to). What we hope to create is an effective experiment that proves our hypothesis true or false. If we can watch what they do and prove the hypothesis true, then we can act on it and eventually see better bottom-line performance results (better retention and more satisfaction).
How will this help to create better learning?
If we do all this, we will firstly be able to know that we are working on the right things (because we took the time to validate our hypotheses about the cause of poor performance) and we will be able to design something that we know is effective enough to cause a positive desired change in performance (in this case, actually improving our people’s interest in company vision and values). We will use the same data-driven scientific approach to design learning initiatives with lots of measurable data points, so that afterwards we can make associations between what we did and how this impacted bottom-line performance improvement.
This is a different approach to the traditional design process. It will create real performance improvement and we will be able to confidently say that what we did had an impact.
If learning people get in the habit of creating small measurable data-points in learning that correspond to well thought out hypotheses, we will be able to start collecting more and more data to show the link between what people learnt and how it impacts performance. Using tools like “Tin Can API” we will be able to collect and analyse lots of chunks of data from different systems and draw effective conclusions about the link between learning and performance… leading to real improvement.
How the Tin Can API could revolutionise the link between learning and performance, according to Tim Martin
Tim Martin has been working with SCORM for years, listening to people’s experience and problems and thinking about its limitations and future. Given his experience as a key player in Project Tin Can, Tim is here today to advocate the values of Tin Can, share a few concrete project examples and show us how the future of Tin Can is going to be awesome…
First things first: What is Tin Can?
Tin Can is the answer to SCORM’s problems.
SCORM is a two-party system consisting of an LMS and some content, with standards about how it all fits together and how it works. SCORM can report in a simple way on the formal learning activities a formal learner undertakes. For example, it can tell us how many people followed a particular learning module. That’s it.
What is wrong with SCORM?
SCORM is limited because it can only tell us how or when one particular learner logged into an LMS to take a prescribed piece of training in an active browser session. Read that sentence back and you will see it is fully loaded with all of SCORM’s problems. That is not how we learn, and it is not how we as organisational L+D people want learners to learn…
With all the hype around 70:20:10 and the non-formal learning that takes place in the organisation, it seems clear that the majority of what people learn doesn’t come from classical training or formal learning solutions like the e-modules or video that SCORM has been measuring. The majority of learning is not coming from one person (alone) logged into one specific LMS (if any) to follow a prescribed event (e.g. training) at one specific moment in time. People are getting a lot of content from a lot of different places, sharing a lot of ideas and definitely learning in a less formal way.
And many L+D people today don’t want to oblige people to login to one particular LMS system to control their learning in a formal way. Martin cites the example of Google who told him “We don’t want an LMS. We don’t want people to have to do specific controlled things in a specific controlled way. We just want them to go out and learn.” But Google also wants to be able to see what is learnt and how it impacts performance. Enter Tin Can API…
How does Tin Can work?
The Tin Can API is a shared language for systems to talk to each other about the things people do. It consists of an “activity provider” (whatever system it might be) reporting what people did (whatever it was) and an LRS (Learning Record Store) that listens and records. It does this with a simple actor-verb-object approach that records every activity and puts it in the LRS.
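A statement in that actor-verb-object language is just a small JSON document. Here is a sketch (the person, email and activity URL are invented; the verb IRI is one of the standard ADL verbs):

```python
import json

# A minimal Tin Can (xAPI) statement: actor - verb - object.
# "Anna experienced the 'Company Vision' page." (names/URLs are illustrative)
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Anna K",
        "mbox": "mailto:anna.k@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/orientation/company-vision",
        "definition": {"name": {"en-US": "Company Vision"}},
    },
}

# The activity provider sends this JSON to the LRS, which simply records it.
print(json.dumps(statement, indent=2))
```

Because the shape is so simple, any system that can emit a bit of JSON can become an activity provider.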
This modern, web-service-based system makes it easy for different systems to feed in information. There is already a list of systems that have adopted Tin Can as their standard. Theoretically, the Tin Can API can capture everything that is going on, then correlate those activities, run analysis and give insights about what is happening, across different systems.
The “activity provider” will report on (learning) activities across a variety of systems, which will then be stored in the LRS. This information can then be compared to data about performance from other non-learning systems. The LRS will be searchable (“bigdatable”) and could be used to draw all sorts of conclusions about learning and performance.
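As a sketch of what “searchable” means in practice: the xAPI specification exposes the stored statements over a REST endpoint that can be filtered by agent, verb or time window. The LRS host and the agent below are placeholders:

```python
import json
from urllib.parse import urlencode

# Sketch of an LRS query (the endpoint URL and agent are placeholders).
# The xAPI spec lets a GET on the statements resource be filtered by
# agent, verb IRI and a "since" timestamp.
LRS = "https://lrs.example.com/xapi/statements"

params = {
    "agent": json.dumps({"mbox": "mailto:anna.k@example.com"}),
    "verb": "http://adlnet.gov/expapi/verbs/experienced",
    "since": "2013-01-01T00:00:00Z",
}

query_url = LRS + "?" + urlencode(params)
# An HTTP GET on query_url (with the LRS's credentials and the
# X-Experience-API-Version header) returns the matching statements as JSON.
print(query_url)
```

The same records could then be joined against performance data pulled from non-learning systems.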
SCORM can only tell us a little about learning activities, mostly completion rates and sometimes test results (e.g. Tim followed training module X). Tin Can goes much further, allowing us to capture almost anything at any level. Martin gives an example: where a SCORM system can (only) tell us that 6 learners completed a CPR module with an average score of 68%, Tin Can will be able to tell us how many times one learner compressed the CPR test dummy during the simulation, where he put his hands and the impact that had on the resuscitation. It will produce a massive amount of (big) data and analyse everything, looking for trends and giving full reporting on the correlations between different learning activities/results and, eventually, performance.
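At statement level, that extra detail rides along in the optional result field and in extensions. A sketch of what a richer CPR statement might look like (the extension IRIs and values here are invented for illustration):

```python
# Sketch: fine-grained simulation detail carried on an xAPI statement via
# "result" and extensions. The extension IRIs and values are invented.
cpr_statement = {
    "actor": {"mbox": "mailto:tim@example.com", "name": "Tim"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/simulations/cpr",
               "objectType": "Activity"},
    "result": {
        "score": {"scaled": 0.68},       # the old SCORM-style test score
        "extensions": {                  # the detail SCORM could never carry
            "http://example.com/xapi/compressions": 112,
            "http://example.com/xapi/hand-position": "centre-of-chest",
        },
    },
}

print(cpr_statement["result"]["extensions"]
      ["http://example.com/xapi/compressions"])
```

Extensions are keyed by IRIs, so any simulator or device can define its own measurements without breaking the shared language.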
But it goes SO much further than this still rather formal learning reporting…
It may be awesome, but give me a practical example of this awesomeness please…
Imagine the following: Google employees pick up content from across a variety of systems. They search, they consume and then they share content on platforms like LinkedIn, Yammer (or whatever Googley thing Googlers use). Let’s pretend they are sales people. They then go out into the sales world and make sales (or don’t).
Tin Can will allow the Google L+D people to run analysis at a very detailed level on all the different (learning) content that was picked up by all the different people. Add into the mix reporting on who searched and shared what, how, where and when. Who liked something they read or retweeted it. Tin Can will then allow us to correlate all that information with sales performance activities and data (again from different systems) in order to draw conclusions about the acquisition of knowledge and skills and the impact on sales.
Example: Do people who learnt how to ask specific questions in a sales meeting close more deals? Do people who called their prospects back within 2 weeks of meeting them close more sales than those who didn’t? What keywords are top sales people searching in their browsers? Is there a correlation between the number or type of shares on social media platforms and the sales closed? If so, what?
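Once the learning statements and the sales figures sit side by side, the analysis itself can be plain statistics. A toy sketch (all the numbers are invented) computing a Pearson correlation between sharing activity from the LRS and deals closed from the CRM:

```python
from math import sqrt

# Toy data, per sales rep: content shares (from the LRS) and deals
# closed (from the CRM). All figures are invented for illustration.
shares = [2, 5, 1, 8, 4, 7]
deals  = [1, 4, 1, 6, 3, 5]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(shares, deals)
print(round(r, 2))  # close to 1.0 here: sharing and closing move together
```

Correlation is not causation, of course, which is exactly why Tozman’s hypothesis-and-experiment discipline from the morning session matters before anyone acts on numbers like these.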
The possibilities for data collection and analysis with Tin Can are endless, given the simplicity of the way in which the “activity providers” report on what is being done (see below…). With such information, learning people (and managers) will be able to focus more on the learning the organisation needs to bring the results it is missing.
Personally, I find this very exciting (others more cynical might imagine the scary dark-side applications of such systems). I already wrote about “Big Data for Learning in a Call-Centre” but didn’t realise the standards were there. Even though Tim Martin has repeated several times today that it’s not all there already and that we need to move slowly, it is clear to me that this will go very far…
Thanks for reading