Blog Archives

Evaluating informal learning or focussing on what counts?

Session W112 is about to kick off with Saul Carliner, Associate Professor at Concordia University (Montreal). On the last day of the conference, it is good to see so many smiley learning geeks still ready to soak up some information. And the big surprise already is that Saul is actually able to engage the audience… even though he is an academic 🙂 Let’s go: “Evaluating Informal Learning”…

 

For several years now, the learning world has been talking about the importance of “informal” learning. It has always been around and has always been important, but we recognise more and more today that most learning happens in an informal way. The question is: If learning is happening all by itself, without any control and design by learning professionals, (how) can we evaluate its effectiveness?

At the start of the session, I had a chance to let the speaker know what my own personal question for today was: “Should we even bother trying to evaluate informal learning?” More on this later…

First: Let’s be sure we know what we are talking about. What is “informal learning”?

Defining it by saying it’s not formal learning is not good enough. According to our speaker, we need to be specific about our definition. My own approach to defining different types of learning has always been quite straightforward. There are 3 types of learning:

(Image: the 3 types of learning, numbered 1 to 3)

  • In 1 and 2, the learner intentionally seeks out learning, in whatever form, sometimes training
  • In the 3rd example, the learning is non-intentional: Without a conscious effort and action from the learner, something is learnt.
  • I call “1” formal learning
  • I call “2” non-formal learning
  • I call “3” informal learning

…today, our speaker refers to “2” and “3” together as “informal” learning and that is the subject of the session.

 

According to Hodkinson, Malcolm and Wihak, “informal” learning is about the following 5 aspects:

  • The process – how learning happened, i.e. not in training
  • The learning location – where it happened, i.e. not the training room
  • The purpose of the activity itself – either the learner took action in a non-formal way (“2”) or the learning was a secondary by-product of some other activity
  • The content – the type of content and platform was something other than formal/training
  • Consciousness – the learner may or may not have known that she was learning; “HR” wasn’t consciously controlling it

Traditionally, how do we evaluate learning in organisations? Can we do this for “informal” learning?

The framework most learning professionals have been using for some time is Kirkpatrick’s 4-level evaluation model. In this model we look at how people react to learning, which knowledge, skills or attitudes they actually acquire, how they behave after the learning and the impact on business results (in terms of key business drivers).

 

When it comes to “informal” learning, some levels of this evaluation system are not so easy to achieve:

  • As an example, suppose you want to evaluate satisfaction with on-the-job training (Level 1). Several problems may arise. Was the OTJ training announced to the people who would evaluate it? Did they know when it would take place? How many people were being trained on the job? If it is only one person, you can’t do a good statistical analysis of the results achieved in order to update and improve the approach.
  • In another example, we discussed the difficulty of assessing the learning taken from an online “help” system. Somebody has clicked on a page to read some “help” information, but who? Why? What did they think of the information? Could they use it? Did their approach to work and their results improve?

 

What can we do to evaluate this type of learning?

Who is already evaluating “informal” learning in line with Kirkpatrick’s levels and how?

Saul Carliner shared examples from organisations and professions we might not have thought about, and how they evaluate “informal” learning. These use Kirkpatrick’s levels to varying degrees. You might be able to use the same approaches:

  • Museums use interview-based techniques to find out from visitors what they thought of the museum and what they learnt. They leave visitor books where people can write down their comments when they want to. Museums with touch-screen information and interactive presentation systems can see who clicked on what and when, and how long passed between one click and the next (which could suggest the amount of time spent absorbing information).
  • Marketeers have been struggling with the problem of evaluating their campaign success for years. The marketing blend consists of direct and indirect marketing, various platforms, multiple customer types and moments in time. Marketeers measure sales during an advertising campaign.
    Brand-recognition and brand-loyalty are regularly measured before and after campaigns.
  • Web designers and webmasters have built various functions into their sites in order to achieve effective web analytics. It is possible to measure all sorts of different metrics to evaluate user behaviour (see the sketch below).
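To make the “time between clicks” idea concrete, here is a minimal Python sketch. It assumes a hypothetical click log of (visitor, page, timestamp) records – the data and names are illustrative, not from any real analytics tool – and uses the gap between consecutive clicks as a rough proxy for the time spent on each piece of content.

```python
from datetime import datetime

# Hypothetical click log: (visitor_id, page, timestamp) – illustrative data only
clicks = [
    ("v1", "help/search-tips", datetime(2013, 5, 22, 10, 0, 5)),
    ("v1", "help/filters", datetime(2013, 5, 22, 10, 2, 40)),
    ("v2", "help/search-tips", datetime(2013, 5, 22, 10, 1, 0)),
    ("v2", "help/export", datetime(2013, 5, 22, 10, 1, 20)),
]

def dwell_times(log):
    """Time between consecutive clicks per visitor: a rough proxy for
    how long each visitor spent absorbing each piece of content."""
    per_visitor = {}
    for visitor, page, ts in sorted(log, key=lambda c: (c[0], c[2])):
        per_visitor.setdefault(visitor, []).append((page, ts))
    gaps = []
    for visitor, visits in per_visitor.items():
        for (page, start), (_, end) in zip(visits, visits[1:]):
            gaps.append((visitor, page, (end - start).total_seconds()))
    return gaps

for visitor, page, seconds in dwell_times(clicks):
    print(f"{visitor} spent ~{seconds:.0f}s on {page}")
```

Of course, a long gap might also mean a coffee break – like everything else in evaluating “informal” learning, it suggests rather than proves.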

Some ideas of how to evaluate “informal” learning in the workplace

Saul Carliner suggested some simple ideas for evaluating “informal” learning…

Suppose we have an employee seeking a promotion. She joined an IT consulting company after getting a degree in web development 6 years ago and now wants to show what consulting skills and knowledge she has acquired in those 6 years, in order to get a promotion. But being a billable consultant, she hasn’t had the opportunity for any formal training since her induction to the company and nobody knows what she “did”; nothing is in the LMS. How can we evaluate her learning?

 

What would you do?

  • By doing interviews and coaching your people, you can find out what they know. Clever interview techniques like STAR and intentional coaching methods like GROW can help us assess what our IT consultant has learnt and done since joining the company.
  • Another approach you could use to see how people are learning is to put something in place to help your people create a work-portfolio that shows their development over time. Artists and musicians have been doing this for years: They collect drawings and track-lists that show what they have done and that indicate the acquisition and implementation of different knowledge, skills, attitude, behaviour and results.
  • As a side-note, my children’s school (Steiner) has a pedagogy which seems to outsiders far more informal than classical school environments. The standards are there, but we don’t see them so easily. Bearing in mind that the kids are able to create their own learning experience and do what they want in some disciplines, it could be difficult to assess their learning. What does Steiner do? They simply collect a portfolio throughout the year that represents the activities the children have done and the results these activities have given.

But I think that what is going on in this discussion is in fact an example of something far more important and disturbing (dramatic music)….

What struck me is that some of these methods are not new at all: Interview, coaching and assessments (for example) have been going on for years for all types of formal learning requirements. These can be used to evaluate “informal” learning as well. But:

I have the feeling that people “worrying” about evaluating “informal” learning have been thinking only about the learning process and not so much about the results. As learning and development professionals, many of us like to show the value of our work, as if we have to defend the learning we designed and delivered. But as we start to recognise that much of the learning process is not a result of designed and delivered formal learning work, things might get a bit scary for those same L+D professionals. How will I show what I have done (read: controlled) and how will I prove my worth? As we saw above (OTJ training and responses to “help” pages), it’s difficult to get a good idea of how people respond to “informal” learning. My feeling in today’s session is that this bothers some people in the L+D profession.

 

Also:

In session W112 of the ASTD2013 conference, we used the word “learning” interchangeably to mean two different things: “what was done” and “what people are competent for”. If we focus on what was done to learn, we have an evaluation problem. If we focus on competence (and business results) there is no problem.

 

So in fact the bottom line for me here = Who cares how people learnt? What matters is what they do and the business results we get. Forget your happy sheets and forget testing acquisition of knowledge, skills and attitude. Look at how people behave and the results they are getting and support that. As a profession, we need to stop getting caught up looking for ways to do what we always did in the past. It doesn’t work today. Not because things have changed, but because we are growing as a profession. We recognise that learners can (and do!) do things by themselves, and we need to support them by creating a culture and environment that is open to all types of learning, that supports sharing and that intentionally captures the output of “informal” learning for the benefit of others.

 

As I discussed with my new-found friend @JD_Dillon today, whilst pretending not to stalk Karl Kapp:

 

“2013 is about forgetting learning management processes and control and focusing on the user experience and business outcomes”

Don’t forget to assess results (Evaluating training, part 5)

This blog page is part 5 of a 5 part blog series on evaluating training. Follow this link to find the mother page (page 1).

 

Finally, don’t forget to assess results

We do learning for a reason. It’s not enough to say “let’s do a training”: people always invest time, money and effort for a reason. If you can’t show them the return on investment in terms of concrete business results, forget about it.

 

The question of HOW to do this has been around for ages in the learning world. My opinion is that we should not stress too much about it:

  • Be clear from the outset what we are trying to achieve (see other blog post on “learning design questions”)
  • Agree what measurables (fluffy or precise) we are looking to improve in terms of results (profit, sales generated, number of difficult conflict situations)
  • Measure them at an agreed point in time before and after learning
  • Correlate results and draw conclusions

It’s the last part that tends to bother people, as they worry that their conclusions are not really conclusive… But who cares? If we create a learning initiative because we want better results and then we HAVE better results, don’t stress!
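For anyone who likes to see the “measure before and after” step written down, here is a minimal Python sketch. The metrics and numbers are purely illustrative assumptions, not real data – the point is simply to compare the agreed measurables at the two agreed moments in time.

```python
# Hypothetical "before" and "after" measurements of the agreed measurables –
# illustrative numbers only, not real results.
before = {"sales_per_month": 42, "escalated_conflicts": 9}
after = {"sales_per_month": 51, "escalated_conflicts": 5}

for metric, baseline in before.items():
    change = after[metric] - baseline
    pct = (change / baseline) * 100 if baseline else float("nan")
    print(f"{metric}: {baseline} -> {after[metric]} ({pct:+.0f}%)")
```

Whether the change was caused by the learning initiative is exactly the “conclusions are not really conclusive” worry above – and, as I said, don’t stress about it.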

 

Hope this was interesting (longest blog series yet?)

Re-read the other posts if you want to…

 

@dan_steer


Join me on Twitter

Visit www.infinitelearning.be

 

Assessing behaviour (Evaluating training part 4)

This blog page is part 4 of a 5 part blog series on evaluating training. Follow this link to find the mother page (page 1).

 

If you want to assess behaviour, you need to observe and talk to different people

Kirkpatrick’s 3rd level of evaluation is about behaviour: What is the learner DOING after learning?

I think the best way to assess this is to observe the learner in action, but you can also ask the learner (much later after training) and ask other people (mostly a stakeholder or manager, but could be a 360° evaluation).

 

In order to do a good job of assessing behaviour vs. learning, you need to do 3 key things (a minimal sketch follows this list):

  • Have an agreed set of “observables” and numerical criteria to measure
  • Take a base-measure of how the learner behaves BEFORE the learning initiative
  • Measure again afterwards
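As a minimal sketch of those 3 steps, assuming a hypothetical observation checklist scored 0–5 by an observer (the observables and scores below are illustrative, not from the post):

```python
# Hypothetical observation checklist, scored 0-5 before and after the
# learning initiative – illustrative observables and scores only.
observables = ["asks open questions", "summarises the other person's need", "proposes next steps"]

baseline = {"asks open questions": 2, "summarises the other person's need": 1, "proposes next steps": 3}
follow_up = {"asks open questions": 4, "summarises the other person's need": 3, "proposes next steps": 3}

for item in observables:
    delta = follow_up[item] - baseline[item]
    print(f"{item}: {baseline[item]} -> {follow_up[item]} ({delta:+d})")
```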

 

Ethical questions arise as to whether or not you should tell the learner when you are doing the assessment. I’ll stick my neck out here and answer “NO” – most people tend to put in more effort when they know they are under the spotlight and I also want to assess attitude when doing Level 3 assessments.

 

This blog series is split into 5 parts. Choose one of these links to read more…

 

@dan_steer


Join me on Twitter

Visit www.infinitelearning.be

 

Evaluating what people learnt (Evaluating training, part 3)

This blog page is part 3 of a 5 part blog series on evaluating training. Follow this link to find the mother page (page 1).

 

If you want to assess learning, you need to test competence

The way I define competence gives an immediate idea on how to measure it:

  • “Having the necessary knowledge, skills, attitude and resources to achieve (business) results”

 

This means that assessing competence will require:

  • Knowledge assessment, using tests for example
  • Skill testing, either in a controlled environment or on the job (I prefer the latter)
  • Attitude assessment, which would be mostly done by observing behaviour and having conversations with people

We don’t talk about assessing resources here… that is only included in the definition to note that people cannot be expected to DO things if they don’t have the resources (unless the competence is proactivity 🙂)

 

This blog series is split into 5 parts. Choose one of these links to read more…

 

@dan_steer


Join me on Twitter

Visit www.infinitelearning.be

 

Happy sheets (Evaluating training, part 2)

This blog page is part 2 of a 5 part blog series on evaluating training. Follow this link to find the mother page (page 1).

 

If you are talking about level 1 evaluations (“happy sheets”), these are my current favourite questions:

OPEN questions:

  • What is your opinion of the training?
  • What did you learn?
  • What will you do differently in the future?

 

Some people will go further on each of these questions, asking things like:

  • What did you find good? What did you find bad? What do you think of the duration? What did you think of the trainer etc etc…

If you are planning to create reports on these elements to compare different learning providers and track progress in trainer-performance, these questions can be interesting.

Personally, I use happy-sheets to see how I can improve in my own work as a trainer, so I want to reduce admin and increase useful feedback. I just really want to know whatever THEY want to tell me… so I leave it quite open.

 

My current favourite CLOSED questions are:

  • Was this added-value for you?
  • Would you recommend it to others?

Short and sweet – I don’t like to measure things on scales anymore. Let’s cut the crap and get to the heart of it. Thanks @Gosse_C from KPMG Belgium for this idea some years ago…

 

What about 1 to 5 and 1 to 4 scales?

Some people want to know whether you should use a 5 point scale or a 4 point scale. Tough one..

  • The first response is generally that a 4 point scale obliges people not to “sit on the fence” and to show their real preference. As a Learning + Development Manager in the past, I used a 5 point scale and can’t really say that “everyone scoring 3” happened a lot… so for me, this is a theoretical question rather than a practical one. As a side note, I told my team of trainers that 3 was not acceptable anyway – we wanted 4s and 5s!
  • Let’s assume we did use a 4 point scale – does it work? In my experience as a trainer, I didn’t see anything below “good” and “really good” in the answers. Is this simply because I’m so good? 🙂 I’m not convinced… SOMETIMES what I saw was someone scoring “good” (3) but adding lots of negative comments. For me, this meant that they just didn’t dare to put “bad”, but really their perception was bad…

…so you need to be careful that your scores represent reality… which is why I don’t use them and prefer only the OPEN and CLOSED questions noted above.

 

Now, what about those other levels of evaluation? Learning, Behaviour and Results?

Asking participants what they think about these things is good, but not enough!

 

This blog series is split into 5 parts. Choose one of these links to read more…

 

@dan_steer


Join me on Twitter

Visit www.infinitelearning.be

 

What questions should you ask to assess training? Or: What is the best way to evaluate training?

(This blog post is page 1 of 5 …scroll to the bottom for links to the other pages)

Did I really just dare to answer this question? After years of debate? Yes, I did! And why not… maybe my opinion is worth something to someone…

I saw this question in a recent LinkedIn discussion from the ASTD group, raised by Kim Schweitzer. Again, there is SO much to say! Actually, the question was about “feedback from the audience”, but I adapted it slightly to talk about other things…

There are SO many questions that can be asked and approaches that can be taken to evaluating training – I’ve seen a lot as both a Learning+Development Manager and a freelance trainer. And the conversation goes on… so I will not try to play the expert here, but just outline what I think are the key issues.

 

Let’s start at the beginning…

What is key is to first be clear on WHAT you want to assess: Satisfaction? Learning? Behaviour? Competence? Return on Investment?

..then you need to ask WHEN you will do this

..and then: What will you DO with all this information?

 

Regarding WHAT you want to evaluate, consider Kirkpatrick’s 4 levels of evaluation:

  1. How did they react to training?
  2. What did they learn?
  3. What do they do differently?
  4. What are the results?

…see this link for more information

 

It’s my opinion that only level 1 can truly be assessed with a satisfaction form (happy sheet): How did they react?

We might say that we can assess levels 2, 3 and 4 with a happy-sheet, but I disagree. You can only assess what they SAY they learnt, did and achieved (which is perhaps also worth asking, by the way).

 

The rest of this blog is split into 4 parts. Choose one of these links to read more…

 


Join me on Twitter

Visit www.infinitelearning.be

10 things you can learn from David Brent about running performance evaluations

During leadership training today, we watched some of the BBC series “The Office” and evaluated the boss’s approach to dealing with Performance Evaluation Meetings.

To see David Brent in action, check out part of the episode in question (Series 2, Episode 2) here

 

There are many different performance evaluation processes and these are not discussed here. Assuming that you, like many corporate employees, are running “classical performance evaluation moments”, read on…

 

Based on our evaluation of David Brent’s work (good and bad), we created a non-exhaustive list of 10 best practices for dealing well with performance evaluations:

  • Explain the purpose of the meeting and have a meeting structure
  • …it is my opinion that one should deal first with the past, then the present, then the future
  • Focus on the employee being reviewed
  • It’s OK to have a 2-way conversation and to include bottom-up evaluation, but it’s not OK for the reviewer to be self-centred and egoistic
  • Listen well to your employees – give them a chance to express things about motivation, performance, future plans etc..
  • Give constructive feedback, not just encouragement
  • Use a blend of hard fact-driven measures and subjective observation based measures
  • Discuss results and relationships, motivation and performance, competence and behaviour
  • Don’t make career promises you can’t keep… and be careful when you discuss potential evolution to ensure it’s not understood as a promise
  • Take time to align vision, values and objectives
  • Be calm and patient

 


Follow me on Twitter

Visit www.infinitelearning.be