Crowdsourcing learning design with Larry Israelite

For the final session of ATD2015 day 1, session SU401 addressed the topic of “crowdsourcing”. According to our speaker Larry Israelite, the learning world has a problem: often, we are not in the right place, nor fast enough, to respond to business needs. Crowdsourcing can help. “But”, he hears us say, “we already do that. We have subject matter experts who help us.” A good start, but an SME is not a crowd. So how is the concept of crowdsourcing applicable to what we do in learning? Read on…

Who is it good for?

The man sitting next to me works for the U.S. Federal Government’s procurement department. He needs to be sure that people around the world and across departments comply with rules and regulations, follow processes and do a good job. He gets a request for a formal learning programme. He makes it. He facilitates it. Now he wants to know if people learnt something. It’s time to test them.

According to speaker Larry Israelite, our learning designer will have to book a meeting with a (busy) subject matter expert in order to create a test. And after his first design effort, he will no doubt go back to that busy person to correct and refine the test. If he instead asked the crowd to make the test for him, he would arrive at a perfectly acceptable test much faster.

How does it work?

In 1906, the British statistician Francis Galton observed during a country fair that the average answer of a crowd of about 800 people guessing the weight of an ox was correct to within 1% of the actual weight. He proposed that, provided the size of the crowd reached a critical mass, this would always be the case: the crowd is smarter than the sum of its parts. And he was right.
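The principle is easy to sketch in code: many noisy, roughly independent guesses average out to something far more accurate than most individuals manage alone. Here is a minimal Python simulation of that idea (the true weight, the spread of the guesses and the crowd size are my own illustrative assumptions, not figures from the talk):

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TRUE_WEIGHT = 1198  # pounds; the figure usually cited for Galton's ox

# Each individual guess is noisy: roughly unbiased, but with a large spread.
guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(800)]

# The crowd's answer is simply the average of all individual guesses.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error_pct = abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT * 100

# For comparison: the error of the single worst guesser in the crowd.
worst_individual_error_pct = max(
    abs(g - TRUE_WEIGHT) / TRUE_WEIGHT * 100 for g in guesses
)

print(f"crowd estimate: {crowd_estimate:.0f} lb (error {crowd_error_pct:.2f}%)")
print(f"worst individual error: {worst_individual_error_pct:.1f}%")
```

The statistical reason is simply that independent errors cancel: the spread of the average shrinks roughly with the square root of the crowd size, which is why a critical mass of guessers matters.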

To see the principle in action, our speaker asked us to make a test together using an online tool from Smarterer. The 200-odd attendees created quiz questions on “the 80s” in different categories (trends, movies, hair bands). Then we corrected the test questions that other people had written: Are the answer options correct? Is the question clear? Are there issues with the answers cited as correct? Etc. Within about 5 minutes, we had a test of 300 questions, signed off by over 100 people.

My first reaction was: This is awesome! Crowdsourcing is brilliant. Where is the app for this? I want it!

Then I thought a little more…

Firstly, what about skills?
My neighbour made a knowledge-based training programme. For him, it might be interesting to test that knowledge. But how much do I really care about knowledge testing? How can I get the crowd involved in skills-assessment?

Actually, do I even care about testing at all?
If I slow down a bit and think about the final result I want from my learning initiative, it is hardly ever (never?) really about people passing a test. What I want is for people to do what they are supposed to do, to get the business results I need. Provided they are consistently doing that, do I really care what they learnt or how? Wouldn’t it be better to put the crowd’s wisdom and resources into supporting actual performance in the real workplace?

And finally: Am I really sold on the wisdom of the crowd? 
In Francis Galton’s original crowdsourcing experiment, the participants were “country folk” living in an era of agriculture and farming. They might have known a thing or two about oxen. And they could see the ox: a physical thing that could be weighed up against real facts.

But today, we were talking about pop culture, movies and random 80s opinions. There was no ox in the room, and the questions did not concern physical, factual attributes. Yes, we made a test together and yes, we agreed on the questions and answers. But were we right? And if we are not yet sure and this has to be checked, then didn’t we just defeat the whole (speed) purpose of crowdsourcing the test creation in the first place (instead of just asking an “80s SME”)?

I suppose, therefore, that this question of crowdsourcing expertise and testing is not about speed and test answers at all, but about trust and control. Can our U.S. Federal Government learning designer put his faith in the crowd of government employees to make his test? Or will he feel the continued need to control and verify everything with someone who knows best?

To be or not to be, THAT is the question.

Should I ask the crowd?

What is the point of Jelly? My first experience…

With the arrival of Biz Stone’s new app Jelly, people are starting to comment on the user experience and aim of the app. So what exactly IS the point of being able to answer a bunch of questions from other people? I remember asking myself the same question when this feature was on Facebook years ago… Here are my thoughts on the Jelly platform.


There is a lot of potential to get addicted and lost answering questions for no real reason

My first experience was much like with any other new platform: I go on, browse, and get lost in answering questions and making updates. To be honest, it was a little addictive, but I quickly started to wonder how long I would remain interested and whether it was actually useful. First response: it’s not. Why would I care to answer random questions from random people about random topics?

When I posted a few questions myself, I wasn’t much more inspired. As with Vine and Twitter before it*, I felt like I was forcing myself to come up with something clever to say. Like the people taking pictures of their cat and asking “What animal is this?”, it seems anything is deemed jelly-worthy. I remain skeptical. But…


* of course, I love these tools today and keep using them for very valuable things


If I can afford to wait for answers, it might add value to classic Google searches

According to crowdsourcing theory, if enough people answer a question, the average of their answers is probably going to be right. This was first suggested by Francis Galton, who asked the crowd to judge the weight of an ox at a country fair in Plymouth, England. The average answer was remarkably close to reality, even though people could only judge by sight. *

In “The Wisdom of Crowds”, James Surowiecki discusses many other applications of the crowd’s impact. Perhaps Biz Stone was inspired by these in creating Jelly? Getting valuable input from a group of people is the core pitch of Jelly.

Personally, I am going to continue my Jelly testing to see what comes of it…


* I am testing that theory on Jelly (and Facebook and Twitter) as we speak with a picture of a jar of stones. We’ll see….


Unless I want information about a specific image (in front of me) I will have to be visually clever to ask good questions

The trouble I see with Jelly is that unless I am actually in front of an interesting but confusing visual stimulus, getting the most out of Jelly will require a lot of visual creativity. In a museum, I could take a picture of a painting and ask questions. But if I really want some wisdom on other topics, why can’t I just ask? Why do I need a picture?


I can’t choose who I ask questions to

As far as I can see, the questions I post to Jelly are thrown out into the world in a very random way. (It isn’t clear to me yet whether all Jelly users see my questions, or only the people I know.) From a learning point of view, this is not very interesting to me. Although Surowiecki’s book suggests that it isn’t necessarily wise to seek out answers from experts, I would like to be able to address my questions to specific communities to elicit expert answers. Maybe I should stick with LinkedIn groups…


It’s not very searchable (yet)

At the moment, I can’t search within Jelly. If I want to find out what the crowd thinks about a specific topic, I am stuck. This seems a shame to me. Wouldn’t it be nice to search (or ask) questions (and answers) on specific topics or categories?


The marketeers are going to love it

General Electric has already jumped on the Jelly app to position its brand within the guise of a question. This will surely continue as a trend, and I suspect that in between random questions from well-meaning users, there will be a lot of product placement. I hope it doesn’t go too far…

My Twitter discussion with @zmccune has given me more insight on this. He suggests that the app could be used for focus groups and quick, valuable mobile surveys.

But what are the other applications? Can we use this in the learning world?


Questions to Jelly

As with many such platforms, I have a lot of questions as I start:

  • Who decides which questions I am offered to answer, and in which order?
  • Who is seeing my questions?
  • Are you keeping all my answers and opinions to be used against me or given to the sales guys?
  • What is coming next?


If you’ve tried Jelly, I’d love to hear your comments. If you haven’t – get on there and waste a bit of your life. You never know what you might find…

Thanks for reading