ASTDTK14: Experimenting and Engaging to Create Effective Learning

As the days distance me from Las Vegas and the ASTD TechKnowledge Conference, the eternal presenter in me is looking for the message, the one big takeaway, the answer to the three most important questions: “What is the point? What do you want from me? What’s in it for me?”

My answer today is that learning effectiveness is all about experimenting with learning initiatives and engaging the learner…

 

Both innovation and real knowledge come from experimentation

In the opening keynote at TK14, Jeff Dyer told us that one of the keys to innovation is experimentation: We have to try new things if we want to get new results. If, as Donald H Taylor told us in Brussels last October, “the goal of learning is to be … agile enough to keep up with an ever-changing environment”, then we need to stop throwing traditional training solutions at our business problems and approach things differently: Using open “what if?” questions and associative thinking, we must create hypotheses about the causes of business problems (and their solutions) and then set about designing new learning experiments that can test the validity of those hypotheses and lead to effective results. This approach to dealing with problems is key to any science or research process. But the learning function is not often seen as science and research…

Reuben Tozman said we must start by thinking about business in the same terms as our customers … and then define data models that tie behaviour, processes and learning activities to bottom-line results. Based on those models, we can create data-driven learning initiatives that can truly assess the situation and improve it. Too much of what we do in L+D (particularly training) is either unmeasurable or unmeasured. At best, we can only say how people reacted to a training course, but we cannot say that performance issue “X” is due to reasons “A”, “B” or “C”, or that “A”, “B” or “C” can be resolved by specific (and effectively measured) learning initiatives “1”, “2” or “3”. While the rest of the business reports on almost everything, learning stumbles along on hope and faith.

To help us out, things are changing in the world of learning measurement. The traditional LMS and its “who followed what training” statistics will be replaced by learning record stores using experience APIs like Tin Can, which could link pretty much any learning or performance activity to a data model that provides real insight for the learning profession.

And so my first conclusion is as follows: Know what makes the business run, be open to something new and be able to design data-driven learning experiments to assess effectiveness and really improve performance.

 

When it comes to creating something new, think “engagement”

Technology conferences tend to focus on new approaches to learning; TK14 was no exception. Starting with quite basic “enhancement strategies and tools” like QR codes for training, video learning initiatives and social media for formal learning, moving past transmedia storytelling to more granular MOOC-based learning strategies and attempts to gamify the learning experience, the common thread running through it all was “engagement”.

Amy Jo Martin kicked off TK14 day 2 with a message about engagement and sentiment: “What connects people to you is not what you do, but why you do it”. * Extrapolating, I thought about why learners engage with other learners, materials or specific formal initiatives: They do it because they want to improve, to find solutions, to get good at something and because they “dig” it. In all our efforts to support this, we need to keep that basic engagement alive.

* This week, the London Learning Technologies Conference was opened by Brian Solis, known for his message about “the secret ingredient to engagement: empathy” and the importance of the user experience.

Jane Bozarth and Mark Oehlert said that learning communities exist everywhere and our job is not to convince people of their value, but rather to convince them to see the value of “formalising” community activities at work using specific platforms (like Yammer or LinkedIn) and more open sharing or learning narration. If we start small, think big and move fast (Oehlert – video) with community activities, we can create a river of information flow that has real value for the organisation.

What really stood out for me (and kept me awake at night!) were the unique and numerous possibilities of mobile, as outlined by Chad Udell. Coming to Vegas as a mobile learning cynic, I was thinking only of more boring e-learning delivered on small screens. Leaving, I am convinced that since more and more people love to play with their phones, and phones can do more and more things, there are real opportunities to engage and create learning effectiveness. Bring on the mobile revolution!

What did I miss at TK14 on “engagement”? Augmented Reality. I am running my own experiments with Aurasma for training, orientation exercises and onboarding experiences and I know that David Kelly shared his experience with Google Glass at LT14uk. I am sure that in the future such tools will allow us to shorten the distance between the learner’s own reality and more layers of knowledge, skills and future enhanced performance. Fingers crossed for ASTD ICE 2014 in May…

Either way, my second conclusion is simple: Let’s find better ways to make the learning experience awesome, natural and effective.

 

Experimenting and engaging – that is the message for me from ASTD TK14.

 

See you next time!

D

 

 

Reuben Tozman on Learning Scientists and Designing For Effective Data Collection

The final session of the day is with Reuben Tozman of edCentre Training Inc. He is talking about why learning professionals should think of their work as science, then focus more on data as they design their learning initiatives…

The pitch

In the learning world, we often don’t measure the effectiveness of our “learning”. Most of the people present today measure “participant satisfaction” for a specific training module or, at best, the knowledge those participants acquired, or can remember in a test. Some learning people will go further and evaluate (at level 4) to see if business performance has actually improved. But according to Tozman, very rarely do we actually evaluate if it was our “learning” that made the change in performance and if so, which part and how. If we could get that far with evaluation of the “learning” delivered, we could improve the minimum effective dose of learning (strip away what doesn’t have impact) and (more importantly) change the right things to make it work and ensure the performance results we seek.

Why aren’t we doing this already?

According to Tozman, part of the reason we are not doing this is that learning people do not always see themselves as “scientists” in the workplace. They don’t consider what they are doing as “experiments” and they don’t have clear data-models in mind when developing “learning”.

We tend to see ourselves as final solution providers that dump a “learning solution” into the world assuming it will just work. It’s like we are expected to bring solutions, rather than experiments. Half of the time we don’t even look to see if performance improved and the other half of the time, we don’t change anything even when the performance stays the same. We just “failed”.

Tozman suggests that we should change our approach to one where we, the learning professional, do some real science: State the problem, form a hypothesis, create an experiment to test the hypothesis, measure the experiment results and form conclusions about the hypothesis. And if we prove the hypothesis wrong, we move onto testing the next one.

To achieve that kind of scientific approach, we have to be able to design learning with data in mind.

What exactly do we mean by learning science?

If an experiment is going to effectively measure against a specific hypothesis, it needs to have a clearly defined data model, with measurable data points.

For example, imagine the following:

  • There is a problem with engagement, as shown by lack of retention and poor employee satisfaction
  • Hypothesis: People are not interested in the company vision and values
  • Experiment: Re-create the orientation programme to allow (but not oblige) participants to seek out for themselves more information about company vision and values
  • Run the experiment and measure the results to see whether or not people are interested in the company vision and values
  • Look at the results and conclude if the hypothesis is true
  • If it is, create something to improve the interest in vision and values; if it is not (and we are satisfied with the experiment) test the next hypothesis
What does it mean to “design for data”?

In the experiment above, the “data model” gives us our definition of “engagement”: “People who are engaged proactively seek out information about company vision and values”. The “data points” we will measure might be “types of content chosen”, “time spent looking at that content”, “number of outbound links clicked from within one particular chunk of content”, etc.

When we re-create the orientation programme, we might chunk down all the possible parts on company vision and values and allow learners the chance to self-orientate through the possible options (if they want to). What we are hoping to create is an effective experiment to prove our hypothesis true or false. If we can watch what they do and prove our hypothesis true, then we can do something about it and eventually see better bottom-line performance results (better retention and more satisfaction).
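
To make that less abstract, here is a minimal sketch in Python of what such a data model and its data points might look like in practice. The class names, content labels and scoring rule are all hypothetical, invented only to show how the hypothesis and the measurable data points can live inside the same design.

```python
from dataclasses import dataclass, field

# Hypothesis under test: "People are not interested in the company vision and values."
# Data model (illustrative): engagement means proactively seeking out
# vision-and-values content during the new orientation programme.

@dataclass
class ContentEvent:
    learner_id: str
    content_type: str           # e.g. "vision-video", "values-article" (hypothetical labels)
    seconds_spent: int
    outbound_links_clicked: int

@dataclass
class ExperimentLog:
    events: list = field(default_factory=list)

    def record(self, event: ContentEvent) -> None:
        self.events.append(event)

    def engagement_score(self, learner_id: str) -> float:
        """Roll the raw data points up into one crude engagement number per learner."""
        mine = [e for e in self.events if e.learner_id == learner_id]
        if not mine:
            return 0.0
        return sum(e.seconds_spent + 30 * e.outbound_links_clicked for e in mine) / len(mine)

# Usage: record what learners actually chose to do, then look at the scores
# to decide whether the hypothesis ("they are not interested") holds.
log = ExperimentLog()
log.record(ContentEvent("learner-001", "vision-video", seconds_spent=240, outbound_links_clicked=3))
log.record(ContentEvent("learner-002", "values-article", seconds_spent=15, outbound_links_clicked=0))
print({lid: log.engagement_score(lid) for lid in ("learner-001", "learner-002")})
```

The point is not the code itself, but that the data points (“types of content chosen”, “time spent”, “links clicked”) are designed in before the orientation programme is built, so the experiment can actually answer the hypothesis.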

How will this help to create better learning?

If we do all this, we will firstly know that we are working on the right things (because we took the time to validate our hypotheses about the cause of poor performance). We will also be able to design something that we know is effective enough to cause the desired positive change in performance (in this case, actually improving our people’s interest in company vision and values). We will use the same data-driven, scientific approach to design learning initiatives with lots of measurable data points, so that afterwards we can make associations between what we did and how it impacted bottom-line performance.

This is a different approach to the traditional design process. It will create real performance improvement and we will be able to confidently say that what we did had an impact.

Finally…

If learning people get into the habit of creating small, measurable data points in learning that correspond to well-thought-out hypotheses, we will be able to start collecting more and more data to show the link between what people learnt and how it impacts performance. Using tools like the Tin Can API, we will be able to collect and analyse lots of chunks of data from different systems and draw effective conclusions about the link between learning and performance… leading to real improvement.

How the Tin Can API could revolutionise the link between learning and performance, according to Tim Martin

Tim Martin has been working with SCORM for years, listening to people’s experiences and problems and thinking about its limitations and future. Given his experience as a key player in Project Tin Can, Tim is here today to advocate the value of Tin Can, share a few concrete project examples and show us how the future of Tin Can is going to be awesome…

First things first: What is Tin Can?

Tin Can is the answer to SCORM’s problems.

SCORM is a two-party system consisting of an LMS and some content, with standards about how it all fits together and how it works. SCORM is able to report in a simple way on the formal learning activities a formal learner undertakes: for example, it can tell us how many people followed a particular learning module. That’s it.

What is wrong with SCORM?

SCORM is limited because it can only tell us how or when one particular learner logged into an LMS to take a prescribed piece of training in an active browser session. If you read that last sentence back, you will see that it is fully loaded with all the problems of SCORM. That is not how we learn, and that is not how we as organisational L+D people want learners to learn…

With all the hype around 70:20:10 and the non-formal learning that takes place in organisations, it seems clear that the majority of what people learn doesn’t come from classical training or formal learning solutions like the e-modules or video that SCORM has been measuring. The majority of learning does not come from one person (alone) logging into one specific LMS (if any) to follow a prescribed event (e.g. training) at one specific moment in time. People are getting a lot of content from a lot of different places, they are sharing a lot of ideas and they are definitely learning in a less formal way.

And many L+D people today don’t want to oblige people to log in to one particular LMS to control their learning in a formal way. Martin cites the example of Google, who told him: “We don’t want an LMS. We don’t want people to have to do specific controlled things in a specific controlled way. We just want them to go out and learn.” But Google also wants to be able to see what is learnt and how it impacts performance. Enter the Tin Can API…

How does Tin Can work?

The Tin Can API is a shared language that lets systems talk to each other about the things people do. It consists of an “activity provider” (whatever system that might be) telling what people did (whatever it was) and an LRS (learning record store) that listens and records. It does this with a simple noun-verb-object approach that records all activities and puts them in the LRS.
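
As an illustration only, a “noun-verb-object” statement is just a small chunk of structured data, and an activity provider hands it to the LRS with a plain web-service call. In the Python sketch below, the LRS address, credentials, learner and activity IDs are all made up:

```python
import requests  # third-party HTTP client

# Illustrative values only: replace with your own LRS endpoint and credentials.
LRS = "https://lrs.example.com/xapi/"
AUTH = ("lrs_user", "lrs_password")

# One "noun-verb-object" statement: "Maria completed the vision & values module."
statement = {
    "actor": {"name": "Maria", "mbox": "mailto:maria@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/orientation/vision-and-values",
        "definition": {"name": {"en-US": "Company vision and values"}},
    },
}

# The activity provider simply pushes the statement to the LRS over HTTP;
# the LRS stores it and answers with the statement's id.
resp = requests.post(
    LRS + "statements",
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.1"},
)
resp.raise_for_status()
print("Stored statement id(s):", resp.json())
```

Because the statement is just data and the transport is just a web service, any system that can make an HTTP call can become an “activity provider”.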

This modern, web-service-based approach easily allows different systems to collect information. Here is a list of systems that have already adopted Tin Can as their standard. Theoretically, the Tin Can API can capture everything that is going on, then correlate those activities, run analysis and give insights about what is happening, across different systems.

The “activity provider” will report on (learning) activities across a variety of systems, which will then be stored in the LRS. This information can then be compared to data about performance from other, non-learning systems. The LRS will be searchable (“bigdatable”) and could be used to draw all sorts of conclusions about learning and performance.

SCORM can only tell us a little bit about learning activities, mostly about completion rates, sometimes about test results (e.g. Tim followed training module X). Tin Can will go much further, allowing us to capture almost anything at any level. Martin gives an example: where a SCORM system can (only) tell us that 6 learners completed a CPR module and scored an average of 68%, Tin Can will be able to tell us how many times one learner compressed the CPR test dummy during the simulation, where he put his hands and the impact that had on the resuscitation. It will be able to produce a massive amount of (big) data and analyse everything, looking for trends and giving full reporting on the correlations between different learning activities/results and, eventually, performance.
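
To give a feel for how granular that could get, here is what one such statement might look like, following the same shape as the sketch above. The verb and extension identifiers are invented for illustration, not taken from any existing xAPI profile:

```python
# One compression cycle on the test dummy, reported as a single statement.
# The verb and extension IRIs below are hypothetical.
compression_statement = {
    "actor": {"name": "Tim", "mbox": "mailto:tim@example.com"},
    "verb": {
        "id": "http://example.com/xapi/verbs/performed",
        "display": {"en-US": "performed"},
    },
    "object": {
        "id": "http://example.com/simulations/cpr/compression-cycle",
        "definition": {"name": {"en-US": "CPR chest compression cycle"}},
    },
    "result": {
        "extensions": {
            "http://example.com/xapi/ext/compressions": 30,
            "http://example.com/xapi/ext/hand-position": "centre-of-chest",
            "http://example.com/xapi/ext/mean-depth-cm": 5.2,
        }
    },
}
```

Thousands of statements like this, from every learner and every run of the simulation, are what turn “completion and average score” reporting into something you can actually analyse.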

But it goes SO much further than this still-formal learning reporting…

It may be awesome, but give me a practical example of this awesomeness please…

Imagine the following: Google employees pick up content from across a variety of systems. They search, they consume and then they share content on platforms like LinkedIn or Yammer (or whatever Googley thing Googlers use). Let’s pretend they are sales people. They then go out into the sales world and make sales (or don’t).

Tin Can will allow the Google L+D people to run analysis at a very detailed level on all the different (learning) content that was picked up by all the different people. Add into the mix reporting on who searched for and shared what, how, where and when, and who liked or retweeted something they read. Tin Can will then allow us to correlate all that information with sales performance activities and data (again from different systems) in order to draw conclusions about the acquisition of knowledge and skills and the impact on sales.

For example: Do people who learnt how to ask specific questions in a sales meeting close more deals? Do people who called back their prospects within 2 weeks of meeting them close more sales than those who didn’t? What keywords are top sales people searching in their browsers? Is there a correlation between the number or type of shares on social media platforms and the sales closed? If so, what?
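
As a toy illustration of that last question (every number and column name below is invented), the analysis itself can start as simply as joining activity counts pulled from the LRS to results pulled from the CRM and looking at the correlations:

```python
import pandas as pd

# Invented data standing in for two different systems:
# activity counts from the LRS, deals closed from the CRM.
learning = pd.DataFrame({
    "seller": ["ana", "ben", "chloe", "dev"],
    "questioning_content_viewed": [12, 3, 9, 1],
    "items_shared_on_social": [5, 0, 7, 2],
})
sales = pd.DataFrame({
    "seller": ["ana", "ben", "chloe", "dev"],
    "deals_closed": [14, 6, 11, 4],
})

merged = learning.merge(sales, on="seller")

# A first, naive look: which learning data points move together with deals closed?
print(merged.drop(columns="seller").corr()["deals_closed"])
```

Correlation is obviously not causation, and a real study would need far more care, but this is exactly the kind of question the data finally lets us ask.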

The possibilities for data collection and analysis with Tin Can are endless, given the simplicity of the way in which the “activity providers” report on what is being done (see below…). With such information, learning people (and managers) will be able to focus more on the learning the organisation needs to bring the results it is missing.

Personally, I find this very exciting (others more cynical might imagine the scary dark-side applications of such systems). I already wrote about “Big Data for Learning in a Call-Centre” but didn’t realise the standards were there. Even though Tim Martin has repeated several times today that it’s not all there already and that we need to move slowly, it is clear to me that this will go very far…

Thanks for reading
D