Ger Driessen’s Vision of Big Data for Learning and Performance Support

The man from next door (OK, The Netherlands), Ger Driessen, kicks off ASTD2014 session SU210 by telling us that we won’t leave with concrete, ready-to-implement tips today; what he does want is for us to be ready for the future of Big Learning Data…

 

What is big data?

Obviously it’s big, says Driessen. But when we talk about big data today, we mean something specific. It’s big, it’s second-hand, it’s messy and it’s all about correlation.

In 1439, Gutenberg introduced the printing press. In less than 100 years, more than 8 million books had been printed. (More than in the previous 12 centuries!) This number kept growing at a ridiculous rate until the year 2000, when digital content started to take its place. Today, analogue data makes up less than 7% of the total, compared to 93% digital. To be more precise, 1200 exabytes. This number is enormous! Translate it into books, and you could cover the USA 52 times. Burn it onto CD-ROMs and pile them up and the stack would reach from the Earth to the moon 5 times! So, big data is BIG!

The data we have is also very messy and second-hand. As an example, when the USA used to compile pricing information into a nice tidy report, it cost $250 million to collect the data from many, many offices. It was a big job. And an inefficient one: between the time the information was collected and the time it was all put into a report, the data was already old and out of date. With big-data potential, this will be a problem of the past…

Finally, Driessen underlines that when we talk about big data, we are not thinking in old-fashioned ways about causation, but rather concentrating on correlation and trends. If we can capture trends, we may have useful input for various applications. Like learning.

 

Data is available and applied everywhere

Data can be collected from reports, the Internet, tablets and smartphones, GPS and location sensors, wearable technology and pretty much everything! What began as an internet for sharing between computers became the internet of things, and now the internet of everything. In the future, we will hook up to the “internet of brains”.

The data collected is being used by Google to find out about flu trends in the USA, by Obama in his election campaigns, and by Netflix to feed audience reactions into the plots and scripts of future episodes. Think of an application and you can probably use big data to bring results.

 

So, what about learning and big data?

Driessen starts by underlining that in the last few years, the learning focus with big data has been on the evaluation of learning, with a large emphasis on level 1 and level 2 evaluation (Kirkpatrick’s “reaction” and “learning” levels). But he says that other examples are far more interesting, because they feed into learning activities, rather than pulling conclusions out of (about) learning that has already taken place.

The first interesting example shared by Driessen is from Bank of America. Faced with a productivity problem in their call-centre, they were thinking about giving their people some training. But first they decided to run some people analytics. Using wearable technology, they tracked the movements of their staff to look for trends at work. They quickly realised that most of the staff had extremely limited social contact at work. With the hypothesis that social contact might lead to better sharing and learning (venting, discussing), they decided not to focus on training, but simply to change the shift pattern in the call-centre to get people more in contact with each other. Result? Better productivity!
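The core of this kind of people analytics is simple correlation. Just to illustrate the shape of it, here is a minimal sketch with invented numbers (not Bank of America’s actual data), checking whether minutes of social contact track with calls handled:

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

# Invented people-analytics data: minutes of social contact per shift
# vs. calls handled per shift, for six agents. A strongly positive r
# would support the "more contact, better performance" hypothesis.
contact_minutes = [5, 12, 8, 20, 15, 3]
calls_handled = [28, 35, 30, 41, 38, 25]

print(f"r = {correlation(contact_minutes, calls_handled):.2f}")  # r close to 1
```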

When it’s not people analytics, companies are using predictive analytics to look at what is currently happening (online) and make predictions. Facebook knows what you and your friends are looking at (and liking) and pushes advertising to you that is likely to be interesting. Could the same kind of predictive analysis proactively help people improve performance at work?

 

What kind of data could we collect to feed into learning + performance support?

According to Driessen, it will be very easy in the future to use devices to collect interesting data on position metrics, biometrics, use of tools and hardware, social media usage etc. We will be able to track what people are doing and provide proactive input to help them perform better. Although it might be a bit early today, the future is coming…

 


Thanks for reading!

D

 

Big Data for Learning in a Call-Centre

Whilst researching for a conference speech I will soon give for a Belgian government organisation on new learning trends, I have been checking out some of the ideas and literature around Big Data. It’s a hot buzzword with a lot of applications in the world of marketing and sales, but I am wondering about its application to learning. I don’t know yet what is truly possible today, but I wanted to share an idea that came to me of how Big Data could help learning and performance improvement in a specific environment: call centres…

 

When I was Training and Development Manager for Sitel in Belgium (2002-2006) I would regularly meet with my colleague Peter to discuss learning needs. Peter was the head of the quality department. If you’ve ever called a call-centre before, you know those guys exist. They are the ones listening to your calls that may be recorded for quality and training purposes.

At the time, there were around 15 quality monitors for something like 600 call agents. In order to “assure quality” and “assess learning needs”, Peter’s team would spend half of the day listening to calls and assessing quality against a check-list of standards. The other half of the day would be spent side-by-side helping the call agents with whatever issues they had.

Suppose a call lasts 3 minutes and the after-call assessment/admin takes a quality monitor another 3 minutes. One call treated in 6 minutes; 10 in 60 minutes. That means that in every half-day, 1 QM would hear 40 calls, and 15 QMs would hear 600 calls. If we had 600 call agents each taking only 4 calls an hour, that’s nearly 10,000 calls in a half-day. Of those 10,000 calls, 600 are being heard by the QM team. That’s about 6%. Heard and helped.
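For the curious, here is the same arithmetic as a tiny Python sketch (assuming a “half-day” means 4 hours of monitoring):

```python
# The QM coverage arithmetic from above, assuming a half-day = 4 hours.
HALF_DAY_MIN = 4 * 60        # minutes of monitoring in a half-day
CALL_MIN = 3                 # average call length
ADMIN_MIN = 3                # after-call assessment/admin time
QMS = 15                     # quality monitors
AGENTS = 600                 # call agents
CALLS_PER_AGENT_HOUR = 4     # conservative call rate per agent

calls_per_qm = HALF_DAY_MIN // (CALL_MIN + ADMIN_MIN)  # 240 / 6 = 40
calls_heard = QMS * calls_per_qm                       # 15 * 40 = 600
calls_taken = AGENTS * CALLS_PER_AGENT_HOUR * 4        # 9600 per half-day

print(f"{calls_heard} of {calls_taken} calls heard: "
      f"{100 * calls_heard / calls_taken:.1f}%")       # 6.2%
```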

 

What could Big Data principles do to help here?

Imagine that instead of a Quality Monitor listening to only 6% of calls, we had a voice and speech recognition tool listening to every call. Programmes within the QM analysis software would recognise key words or phrases, questions or objections, and analyse their frequency and position in the call, along with changes in voice frequency or volume and many other data points.

These data would then be laid out against call-times, frequency of calls and all other previous customer data, time of day, absenteeism in the call-centre, seasonal information and any other data about the employment of the call-agent or his team members… With all the data collected, the machine would run queries, assessing trends. The Quality Monitor would then pull out his report and analyse further, perhaps dipping into more specific, targeted and useful moments of a call-recording in order to bring the all-important human ear and evaluation to the data already provided.
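To make that a little more tangible, here is a minimal Python sketch of the trend-query step. Everything in it is hypothetical: the CallRecord fields and the sample calls are stand-ins for whatever the speech-recognition and CRM systems would actually produce.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record combining speech-recognition output with call metadata.
@dataclass
class CallRecord:
    agent_id: str
    hour_of_day: int
    duration_min: float
    keywords: list[str]   # phrases flagged by the speech-recognition tool
    resolved: bool        # did the call end with the issue resolved?

def keyword_trends(calls: list[CallRecord]) -> dict[str, float]:
    """For each flagged keyword, the share of calls containing it that
    ended unresolved: a crude signal of where a QM should listen first."""
    seen, unresolved = Counter(), Counter()
    for call in calls:
        for kw in set(call.keywords):
            seen[kw] += 1
            if not call.resolved:
                unresolved[kw] += 1
    return {kw: unresolved[kw] / seen[kw] for kw in seen}

# Invented sample data: "cancel contract" appears only in unresolved calls,
# so those recordings would be flagged for human review first.
calls = [
    CallRecord("a1", 9, 4.2, ["billing", "cancel contract"], False),
    CallRecord("a2", 9, 2.8, ["billing"], True),
    CallRecord("a1", 10, 6.1, ["cancel contract"], False),
]
for kw, rate in sorted(keyword_trends(calls).items(), key=lambda x: -x[1]):
    print(f"{kw}: {rate:.0%} of calls unresolved")
```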

In some cases, the machine would recommend specific learning points all by itself. It might, for example, instruct sales agents to use keywords X, Y, Z in sales calls concerning ______ in order to close more sales. It could even provide predictions about staffing and potential quality problems for future promotions or services offered by the company.
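As a toy illustration of that kind of machine recommendation, the sketch below computes a simple per-keyword “uplift”: how much more often a sale closes when the keyword is used. The keywords and outcomes are invented, and this measures correlation only; judging causation stays with the humans.

```python
# Hypothetical sketch: which keywords are associated with closed sales?
# Uplift = close rate when the keyword is used minus close rate when it is not.

def keyword_uplift(calls: list[tuple[set[str], bool]]) -> dict[str, float]:
    """calls: list of (keywords used in the call, sale closed?)."""
    uplift = {}
    all_keywords = set().union(*(kws for kws, _ in calls))
    for kw in all_keywords:
        with_kw = [closed for kws, closed in calls if kw in kws]
        without = [closed for kws, closed in calls if kw not in kws]
        if with_kw and without:
            uplift[kw] = sum(with_kw) / len(with_kw) - sum(without) / len(without)
    return uplift

# Invented data: "free trial" only appears in calls that closed.
calls = [
    ({"free trial", "upgrade"}, True),
    ({"free trial"}, True),
    ({"upgrade"}, False),
    (set(), False),
]
print(keyword_uplift(calls))  # {"free trial": 1.0, "upgrade": 0.0}
```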

In many cases, the quality monitor would be able to spend more time working with the people who need on-the-job training and less time listening to the generic call moments that bring no added value to performance improvement. We should not imagine that the work of the QM would become redundant. Absolutely not: those people will be required to make (emotionally) intelligent evaluations that the machine cannot, and to analyse the data collected in further and more creative ways.

But in all cases, it is clear to me that such voice-recognition software and Big Data computing power, alongside good statistical analysis and human evaluation, would in this example create improved efficiency and could have a massive impact on learning.

 

Larger and more diverse sets of well-collected and organised data, better needs analysis with clearer trends, and more time to focus on understanding and improvement. Big data for learning.

What do you think? Where could your organisation innovate its learning needs analysis if all the available data could be efficiently captured and quickly organised and treated?

 

Thanks for reading
@dan_steer