Blog Archives

ASTDTK14: Experimenting and Engaging to Create Effective Learning

As the days distance me from Las Vegas and the ASTD Techknowledge Conference, the eternal presenter in me is looking for the message, the one big takeaway, the answer to the 3 most important questions: “What is the point? What do you want from me? What’s in it for me?”

My answer today is that learning effectiveness is all about experimenting with learning initiatives and engaging the learner…

 

Both innovation and real knowledge come from experimentation

In the opening keynote at TK14, Jeff Dyer told us that one of the keys to innovation is experimentation: We have to try new things if we want to get new results. If, as Donald H Taylor told us in Brussels last October, “the goal of learning is to be … agile enough to keep up with an ever changing environment”, then we need to stop throwing traditional training solutions at our business problems and approach things differently: Using open “what if?” questions and associative thinking, we must create hypotheses for the causes of business problems (and their solutions) and then set about designing new learning experiments that can test the validity of those hypotheses and lead to effective results. This approach to dealing with problems is key to any science or research process. But the learning function is not often seen as science and research…

Reuben Tozman said we must start by thinking about business in the same terms as our customers … and then define data models that tie behaviour, processes and learning activities to bottom-line results. Based on those models, we can create data-driven learning initiatives that can truly assess the situation and improve it. Too much of what we do in L&D (particularly training) is either unmeasurable or unmeasured. At best, we can only say how people reacted to a training, but we cannot say that performance issue “X” is due to reasons “A”, “B” or “C”, or that “A”, “B” or “C” can be resolved by specific (and effectively measured) learning initiatives “1”, “2” or “3”. While the rest of the business reports on almost everything, learning stumbles along on hope and faith.

To help us out, things are changing in the world of learning measurement. The traditional LMS and its “who followed what training” statistics will be replaced with advanced learning record stores, using experience APIs like Tin Can (xAPI), that could link pretty much any learning or performance activity to a data model that provides real insight to the learning profession.
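
To make that less abstract, here is a minimal sketch of what recording a single activity with such an API could look like; the learning record store endpoint, credentials and learner details are placeholders I have invented, not a real system.

```python
# A minimal sketch of recording one learning activity as a Tin Can (xAPI)
# statement and posting it to a learning record store. The endpoint,
# credentials and learner details below are placeholders, not a real system.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # hypothetical LRS

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Alice Example",                    # hypothetical learner
        "mbox": "mailto:alice@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.com/activities/onboarding-module-3",
        "definition": {"name": {"en-US": "Onboarding module 3"}},
    },
    "result": {"completion": True, "duration": "PT12M30S"},  # ISO 8601 duration
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),  # placeholder credentials
)
response.raise_for_status()
```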

And so my first conclusion is as follows: Know what makes the business run, be open to something new and be able to design data-driven learning experiments to assess effectiveness and really improve performance.

 

When it comes to creating something new, think “engagement”

Technology conferences tend to focus on new approaches to learning; TK14 was no exception. Starting with quite basic “enhancement strategies and tools” like QR codes for training, video learning initiatives and social media for formal learning, and moving past transmedia storytelling to more granular MOOC-based learning strategies or attempts to gamify the learning experience, the common thread running through it all was “engagement”.

Amy Jo Martin kicked off TK14 day 2 with a message about engagement and sentiment: “What connects people to you is not what you do, but why you do it”.* Extrapolating, I thought about why learners engage with other learners, materials or specific formal initiatives: They do it because they want to improve, to find solutions, to get good at something and because they “dig” it. In all our efforts to support this, we need to keep that basic engagement alive.

* This week, the London Learning Technologies Conference was opened by Brian Solis, known for his message about “the secret ingredient to engagement: empathy” and the importance of the user experience.

Jane Bozarth and Mark Oehlert said that learning communities exist everywhere and our job is not to convince people of their value, but rather to convince them to see the value of “formalising” community activities at work using specific platforms (like Yammer or LinkedIn) and more open sharing or learning narration. If we start small, think big and move fast (Oehlert – video) with community activities, we can create a river of information flow that has real value for the organisation.

What really stood out for me (and kept me awake at night!) was the unique and numerous possibilities of mobile, as outlined by Chad Udell. Coming to Vegas as a mobile learning cynic, I was thinking only of more boring e-learning delivered on small screens. Leaving, I am convinced that since more and more people love to play with their phones, and phones can do more and more things, there are real opportunities to engage and create learning effectiveness. Bring on the mobile revolution!

What did I miss at TK14 on “engagement”? Augmented Reality. I am running my own experiments with Aurasma for training, orientation exercises and onboarding experiences and I know that David Kelly shared his experience with Google Glass at LT14uk. I am sure that in the future such tools will allow us to shorten the distance between the learner’s own reality and more layers of knowledge, skills and future enhanced performance. Fingers crossed for ASTD ICE 2014 in May…

Either way, my second conclusion is simple: Let’s find better ways to make the learning experience awesome, natural and effective.

 

Experimenting and engaging – that is the message for me from ASTD TK14.

 

See you next time!

D

 

 


Reuben Tozman on Learning Scientists and Designing For Effective Data Collection

The final session of the day is with Reuben Tozman of edCetra Training Inc. He is talking about why learning professionals should think of their work as science, then focus more on data as they design their learning initiatives…

The pitch

In the learning world, we often don’t measure the effectiveness of our “learning”. Most of the people present today measure “participant satisfaction” for a specific training module or, at best, the knowledge those participants acquired, or can remember in a test. Some learning people will go further and evaluate (at Kirkpatrick level 4) to see if business performance has actually improved. But according to Tozman, we very rarely evaluate whether it was our “learning” that made the change in performance and, if so, which part and how. If we could get that far with evaluation of the “learning” delivered, we could find the minimum effective dose of learning (strip away what doesn’t have impact) and, more importantly, change the right things to make it work and ensure the performance results we seek.

Why aren’t we doing this already?

According to Tozman, part of the reason we are not doing this is that learning people do not always see themselves as “scientists” in the workplace. They don’t consider what they are doing as “experiments” and they don’t have clear data models in mind when developing “learning”.

We tend to see ourselves as final solution providers who dump a “learning solution” into the world, assuming it will just work. It’s like we are expected to bring solutions, rather than experiments. Half the time we don’t even look to see if performance improved; the other half of the time, we don’t change anything even when performance stays the same. We just “failed”.

Tozman suggests that we should change our approach to one where we, the learning professionals, do some real science: State the problem, form a hypothesis, create an experiment to test the hypothesis, measure the experiment results and form conclusions about the hypothesis. And if we prove the hypothesis wrong, we move on to testing the next one.
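
As a rough illustration of that loop (my own sketch, not something Tozman presented), the whole cycle fits in a few lines; the names and structure are assumptions:

```python
# A lightweight, illustrative model of the state-hypothesis-experiment-conclude
# loop described above; the class and function names are assumptions made for
# this sketch, not a framework Tozman presented.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class LearningExperiment:
    problem: str                                       # the stated business problem
    hypothesis: str                                    # suspected cause of the problem
    run: Callable[[], Dict[str, float]]                # runs the experiment, returns measurements
    supports_hypothesis: Callable[[Dict[str, float]], bool]  # interprets the measurements


def test(experiment: LearningExperiment) -> bool:
    """Run one experiment and conclude whether the hypothesis held up."""
    results = experiment.run()
    confirmed = experiment.supports_hypothesis(results)
    verdict = "supported" if confirmed else "rejected; test the next hypothesis"
    print(f"{experiment.hypothesis!r}: {verdict} {results}")
    return confirmed
```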

To achieve that kind of scientific approach, we have to be able to design learning with data in mind.

What exactly do we mean by learning science?

If an experiment is going to effectively measure against a specific hypothesis, it needs to have a clearly defined data model, with measurable data points.

For example, imagine the following:

  • There is a problem with engagement, as shown by lack of retention and poor employee satisfaction
  • Hypothesis: People are not interested in the company vision and values
  • Experiment: Re-create the orientation programme to allow (but not oblige) participants to seek out for themselves more information about company vision and values
  • Run the experiment and measure the results to see whether or not people are interested in the company vision and values
  • Look at the results and conclude if the hypothesis is true
  • If it is, create something to improve the interest in vision and values; if it is not (and we are satisfied with the experiment) test the next hypothesis
What does it mean to “design for data”?

In the experiment above, the “data model” gives us our definition of “engagement”: “People who are engaged proactively seek out information about company vision and values”. The “data points” we will measure might be “types of content chosen”, “time spent looking at that content”, “number of outbound links clicked from within one particular chunk of content” etc…
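
To make those data points more tangible, here is a small, hypothetical sketch that turns raw click-stream events from such an orientation programme into the three measures just named; the event format and field names are invented for illustration:

```python
# A hypothetical sketch of turning raw click-stream events from the redesigned
# orientation programme into the three data points named above. The event
# format and field names are invented for illustration.
from collections import Counter

events = [
    {"learner": "alice", "content": "vision-video", "type": "video",
     "seconds_viewed": 140, "outbound_clicks": 2},
    {"learner": "alice", "content": "values-article", "type": "article",
     "seconds_viewed": 65, "outbound_clicks": 0},
    {"learner": "bob", "content": "vision-video", "type": "video",
     "seconds_viewed": 15, "outbound_clicks": 0},
]


def data_points(events):
    """Summarise types of content chosen, time spent and outbound clicks per learner."""
    summary = {}
    for e in events:
        s = summary.setdefault(e["learner"], {
            "types_chosen": Counter(),
            "seconds_viewed": 0,
            "outbound_clicks": 0,
        })
        s["types_chosen"][e["type"]] += 1
        s["seconds_viewed"] += e["seconds_viewed"]
        s["outbound_clicks"] += e["outbound_clicks"]
    return summary


print(data_points(events))
```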

When we re-create the orientation programme, we might chunk down all the possible parts on company vision and values and allow learners the chance to self-orientate through the possible options (if they want to). What we are hoping to create is an effective experiment to prove our hypothesis true or false. If we can watch what they do and prove our hypothesis true, then we can do something about it and eventually see better bottom-line performance results (better retention and more satisfaction).

How will this help to create better learning?

If we do all this, we will firstly know that we are working on the right things (because we took the time to validate our hypotheses about the cause of poor performance), and we will be able to design something that we know is effective enough to cause the desired change in performance (in this case, actually improving our people’s interest in company vision and values). We will use the same data-driven, scientific approach to design learning initiatives with lots of measurable data points, so that afterwards we can make associations between what we did and how it impacted bottom-line performance.

This is a different approach to the traditional design process. It will create real performance improvement and we will be able to confidently say that what we did had an impact.

Finally…

If learning people get into the habit of creating small, measurable data points in learning that correspond to well-thought-out hypotheses, we will be able to start collecting more and more data to show the link between what people learnt and how it impacts performance. Using tools like the Tin Can API, we will be able to collect and analyse lots of chunks of data from different systems and draw effective conclusions about the link between learning and performance… leading to real improvement.
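
As a closing, hedged sketch of what that might look like, the snippet below pulls statements back out of a hypothetical LRS and lays the activity counts next to a made-up retention flag from another system; every endpoint, credential and data value is a placeholder.

```python
# A sketch of pulling statements back out of an LRS via the xAPI statements
# resource and laying them next to a performance measure from another system.
# The endpoint, credentials, verb filter and retention data are placeholders.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # hypothetical LRS

resp = requests.get(
    LRS_ENDPOINT,
    params={
        "verb": "http://adlnet.gov/expapi/verbs/experienced",
        "since": "2014-01-01T00:00:00Z",
    },
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),  # placeholder credentials
)
resp.raise_for_status()
statements = resp.json()["statements"]

# Count learning activity per learner...
activity = {}
for st in statements:
    learner = st["actor"].get("mbox", "unknown")
    activity[learner] = activity.get(learner, 0) + 1

# ...and place it next to a bottom-line measure (here a made-up 90-day
# retention flag) to look for an association between learning and performance.
retained = {"mailto:alice@example.com": True, "mailto:bob@example.com": False}
for learner, count in activity.items():
    status = "retained" if retained.get(learner) else "left"
    print(f"{learner}: {count} statements, {status}")
```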