Evaluating informal learning or focussing on what counts?
Session W112 is about to kick off with Saul Carliner, Associate Professor at Concordia University (Montreal). On the last day of the conference, it is good to see so many smiling learning geeks still ready to soak up some information. And the big surprise already is that Saul is actually able to engage the audience… even though he is an academic 🙂 Let’s go: “Evaluating Informal Learning”…
For several years now, the learning world has been talking about the importance of “informal” learning. It has always been around and has always been important, but we recognise more and more today that most learning happens in an informal way. The question is: if learning is happening all by itself, without any control or design by learning professionals, (how) can we evaluate its effectiveness?
At the start of the session, I had a chance to let the speaker know what my own personal question was for today: “Should we even bother trying to evaluate informal learning?” More on this later…
First: Let’s be sure we know what we are talking about. What is “informal learning”?
Defining it by saying it’s not formal learning is not good enough. According to our speaker, we need to be specific about our definition. My own approach to defining different types of learning has always been quite straightforward. There are 3 types of learning:
- In types 1 and 2, the learner intentionally seeks out learning, in any way, sometimes through training
- In type 3, the learning is unintentional: without any conscious effort or action from the learner, something is learnt.
- I call “1” formal learning
- I call “2” non-formal learning
- I call “3” informal learning
…today, our speaker refers to “2” and “3” together as “informal” learning and that is the subject of the session.
According to Hodkinson, Malcolm and Wihak, “informal” learning is about the following 5 aspects:
- The process – how the learning happened, i.e. not in training
- The learning location – where it happened, i.e. not in the training room
- The purpose of the activity itself – either the learner took action in a non-formal way (“2”) or the learning was a secondary by-product of some other activity
- The content – the type of content and platform was something other than formal/training
- Consciousness – the learner may or may not have known that she was learning; “HR” wasn’t consciously controlling it
Traditionally, how do we evaluate learning in organisations? Can we do this for “informal” learning?
The framework most learning professionals have been using for some time is Kirkpatrick’s four-level evaluation model. In this model we look at how people react to the learning (Level 1), what knowledge, skills or attitudes they actually acquire (Level 2), how they behave after the learning (Level 3) and the impact on business results, in terms of key business drivers (Level 4).
When it comes to “informal” learning, some levels of this evaluation system are not so easy to achieve:
- As an example, suppose you want to evaluate the satisfaction of on-the-job training (L1). Several problems may arise. Was the OTJ training announced to the people who would evaluate it? Did they know when it would take place? How many people were being trained on the job? If it is only one person, you can’t do a good statistical analysis of the results achieved in order to update and improve the approach.
- In another example, we discussed the difficulty of assessing the learning taken from an online “help” system. Somebody has clicked on a page to read some “help” information, but who? Why? What did they think of the information? Could they use it? Did work approach and results improve?
What can we do to evaluate this type of learning?
Who is already evaluating “informal” learning in line with Kirkpatrick’s levels and how?
Saul Carliner shared examples of some different organisations or professions we might not already have thought about and how they achieve evaluation of “informal” learning. These use Kirkpatrick’s levels to varying degrees. You might be able to use the same approaches:
- Museums use interview-based techniques to find out from visitors what they thought of the museum and what they learnt. They leave visitor books where people can write down their comments when they want to. Museums with touch-screen information and interactive presentation systems can see who clicked on what, when and how long between the first click and second (which could suggest the amount of time spent absorbing information).
- Marketeers have been struggling with the problem of evaluating campaign success for years. The marketing mix consists of direct and indirect marketing, various platforms, multiple customer types and moments in time. Marketeers measure sales during an advertising campaign, and brand recognition and brand loyalty are regularly measured before and after campaigns.
- Web-designers and web-masters have built various functions into their sites in order to achieve effective web analytics. It is possible to measure all sorts of different metrics to evaluate user behaviour.
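The museum and web-analytics ideas above can be sketched in a few lines of code. This is a minimal, purely illustrative sketch (the visitor IDs, page names and timestamps are made up, not from any real analytics system): counting which pages were clicked, and using the gap between a visitor’s first and second click as a rough proxy for how long they spent absorbing the first piece of content.

```python
# Minimal sketch with hypothetical data: simple "informal learning" analytics
# of the kind a museum kiosk or web-analytics tool might collect.
from collections import Counter, defaultdict

# Each click: (visitor_id, page, timestamp in seconds) - made-up sample data
clicks = [
    ("v1", "intro", 0), ("v1", "exhibit-a", 95), ("v1", "exhibit-b", 140),
    ("v2", "intro", 10), ("v2", "exhibit-b", 30),
]

# Which pages were clicked, and how often (page popularity)
views = Counter(page for _, page, _ in clicks)

# Group click times per visitor
by_visitor = defaultdict(list)
for visitor, page, ts in clicks:
    by_visitor[visitor].append(ts)

# Time between first and second click: a crude proxy for time spent
# absorbing the first piece of content.
dwell = {v: ts[1] - ts[0] for v, ts in by_visitor.items() if len(ts) >= 2}

print(views)  # e.g. Counter({'intro': 2, 'exhibit-b': 2, 'exhibit-a': 1})
print(dwell)  # e.g. {'v1': 95, 'v2': 20}
```

Of course, as the session pointed out, metrics like these only tell us what people clicked, not what they actually learnt, which is exactly the evaluation gap being discussed.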
Some ideas of how to evaluate “informal” learning in the workplace
Saul Carliner suggested some simple ideas for evaluating “informal” learning…
Suppose we have an employee seeking a promotion. She joined an IT consulting company after getting a degree in web development 6 years ago, and now wants to show what consulting skills and knowledge she has acquired in those 6 years in order to get a promotion. But being a billable consultant, she hasn’t had the opportunity for any formal training since her induction into the company, and nobody knows what she “did”; nothing is in the LMS. How can we evaluate her learning?
What would you do?
- By interviewing and coaching your people, you can find out what they know. Clever interview techniques like STAR and intentional coaching methods like GROW can help us assess what our IT consultant has learnt and done since joining the company.
- Another approach you could use to see how people are learning is to put something in place to help your people create a work-portfolio that shows their development over time. Artists and musicians have been doing this for years: They collect drawings and track-lists that show what they have done and that indicate the acquisition and implementation of different knowledge, skills, attitude, behaviour and results.
- As a side-note, my children’s school (Steiner) has a pedagogy which seems to outsiders far more informal than classical school environments. The standards are there, but we don’t see them so easily. Bearing in mind that the kids are able to create their own learning experience and do what they want in some disciplines, it could be difficult to assess their learning. What does Steiner do? They simply collect a portfolio throughout the year that represents the activities the children have done and the results these activities have given.
But I think that what is going on in this discussion is in fact an example of something far more important and disturbing (dramatic music)….
What struck me is that some of these methods are not new at all: interviews, coaching and assessments (for example) have been used for years for all types of formal learning requirements. These can be used to evaluate “informal” learning as well. But:
I have the feeling that people “worrying” about evaluating “informal” learning have been thinking only about the learning process and not so much the results. As learning and development professionals, many like to show the value of their work, as if they have to defend the learning they designed and delivered. But as we start to recognise that much of the learning process is not a result of designed and delivered formal learning work, things might get a bit scary for those same L+D professionals. How will I show what I have done (read: controlled) and how will I prove my worth? As we saw above (OTJ training and response to “help” pages) it’s difficult to get a good idea of how people respond to “informal” learning. My feeling in today’s session is that this bothers some people in the L+D profession.
In session W112 of the ASTD 2013 conference, we used the word “learning” interchangeably between two different meanings: “what was done” and “what people are competent at”. If we focus on what was done to learn, we have an evaluation problem. If we focus on competence (and business results), there is no problem.
So in fact the bottom line for me here = who cares how people learnt? What matters is what they do and the business results we get. Forget your happy sheets and forget testing the acquisition of knowledge, skills and attitude. Look at how people behave and the results they are getting, and support that. As a profession, we need to stop getting caught up looking for ways to do what we always did in the past. It doesn’t work today. Not because things have changed, but because we are growing as a profession. We recognise that learners can (and do!) do things by themselves, and we need to support them by creating a culture and environment that is open to all learning types, that supports sharing, and that intentionally captures the output of “informal” learning for the benefit of others.
“2013 is about forgetting learning management processes and control and focusing on the user experience and business outcomes”