Tuesday, January 16, 2018

AI just outperformed humans at reading, potentially putting millions of customer service jobs at risk of automation. Could it do the same in learning?

Something momentous just happened. An AI programme from Alibaba can now, for the first time, read a text and understand it better than humans. On the accompanying chart, the machine's purple line has just crossed the humans' red line, and the implications are huge.
Think through the consequences here, as this software, using NLP and machine learning, gets better and better. The aim is to provide answers to questions. This is exactly what millions of people do in jobs around the world: customer service in call centres, doctors with patients, anywhere people reply to queries... any interaction where language and its interpretation matter.
Health warning
First, we must be careful with these results, as they depend on two things: 1) the nature of the text; 2) what we mean by 'reading'. Such approaches often work well with factual texts but not with more complex and subtle texts, such as fiction, where the language is difficult to parse and understand, and where there is a huge amount of 'reading between the lines'. Think about how difficult it is to understand even that last sentence. Nevertheless, this is a breakthrough.
The Test
It is the first time a machine has outdone a real person in such a contest. They used the Stanford Question Answering Dataset (SQuAD) to assess reading comprehension. The test is to provide exact answers to more than 100,000 questions. As an open test environment, you can try it yourself, which makes the evidence and results transparent. Alibaba's model is a Hierarchical Attention Network, a neural network that reads down through paragraphs to sentences to words, identifying potential answers and their probabilities. Alibaba has already used this technology in its customer service chatbot, Dian Xiaomi, which serves an average of 3.5 million customers a day on the Taobao and Tmall platforms. (10 uses for chatbots in learning).
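Alibaba's production model is not public code here, but the hierarchical attention idea can be sketched in a few lines: attend over words to build sentence vectors, then attend over sentences to score where an answer probably sits. Everything below (dimensions, the random stand-in 'embeddings', the single-query attention) is illustrative, not the actual model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(vectors, query):
    """Weight a set of vectors by similarity to a query, then pool them."""
    weights = softmax(vectors @ query)
    return weights @ vectors, weights

# Toy setup: random vectors stand in for learned word embeddings.
rng = np.random.default_rng(0)
dim = 8
question = rng.normal(size=dim)
# A paragraph of 3 sentences, 5 words each, one vector per word.
paragraph = [rng.normal(size=(5, dim)) for _ in range(3)]

# Word level: attend over each sentence's words to get sentence vectors.
sentence_vecs = np.stack([attend(words, question)[0] for words in paragraph])
# Sentence level: attend over sentences to get a paragraph vector and a
# probability distribution over sentences -- candidate answer locations.
para_vec, sent_weights = attend(sentence_vecs, question)
best_sentence = int(sent_weights.argmax())
```

The hierarchy is the point: the same attention step is reused at each level, and the sentence-level weights are exactly the 'probabilities over potential answers' the paragraph describes.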
Indeed, one area that is likely to benefit hugely from these advances is education and training. The Stanford dataset does have questions that are logically complex and, in terms of domain, quite obscure, but one should see this development as great at knowledge, not yet effective with questions beyond this. That's fine, as there is much that can be achieved in learning. We have been using this AI approach to create online learning content, in minutes not months, through WildFire. Using a similar approach, we identify the main learning points in any document, PowerPoint or video, and build online learning courses quickly, with an approach based on recent cognitive psychology that focuses on retention. In addition, we add curated content.
This online learning is very different from the graphics-plus-multiple-choice paradigm. Rather than rely on weak 'select from a list' MCQs (see critique here), we get learners to enter their answers in context. It focuses on open input and the retention techniques outlined by Roediger and McDaniel in Make It Stick.
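As a sketch of the open-input idea (the keyword heuristic and function names here are my own illustration, not WildFire's actual method): pick salient terms in a sentence, blank them, and check a typed answer rather than offering a list to select from.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "that", "it"}

def make_cloze(sentence, n_blanks=1):
    """Blank the longest non-stopword terms -- a crude stand-in for real
    learning-point extraction -- and return (cloze_text, answers)."""
    words = re.findall(r"[A-Za-z]+", sentence)
    candidates = sorted(
        (w for w in words if w.lower() not in STOPWORDS),
        key=len, reverse=True,
    )
    answers = candidates[:n_blanks]
    cloze = sentence
    for a in answers:
        cloze = re.sub(rf"\b{a}\b", "_" * len(a), cloze, count=1)
    return cloze, answers

def check_answer(given, expected):
    """Open input: compare the typed answer case-insensitively."""
    return given.strip().lower() == expected.lower()

cloze, answers = make_cloze("Retrieval practice strengthens long-term retention.")
```

The retrieval effort of typing the answer, rather than recognising it in a list, is precisely the Roediger and McDaniel point about retention.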
To give you some idea of the sheer speed of this process, we recently completed 158 modules for a global company, literally in days, without a single face-to-face meeting with the project manager. The content was then loaded onto their LMS and is ready to roll. It was good content, they are very happy with the results, and it helped them win a recent major award.
Pain relief
An interesting outcome of this approach to creating content was the lack of heat generated during production. There was no SME/designer friction, as that step was automated. That's one of the reasons we didn't need a single face-to-face meeting: it allowed us to focus on getting it done and on quality control.
Organisations have been using this AI-created content as pre-training for face-to-face courses: auditor training in finance, product knowledge and GMP in manufacturing, health and safety, everything from nurse training to clinical guidelines in the NHS, and apprenticeships in a global hospitality company. All sorts of education and training in all sorts of contexts.

The breakthrough saw Microsoft and Baidu perform similarly, showing that the new AI war is between China and the US. That's a shame, but we still have some edge here in Europe and the UK, if we could only overcome our tendency to see AI as a dystopian entity and start to use this technology for social good, rather than being obsessed with ill-informed critiques. If we don't, they will. These AI techniques have already hit the learning market, automating the production of learning in that huge motherlode of education and training: 101 courses and topics such as compliance, process, procedures, product knowledge and so on. Beyond this, AI-driven curation, which we use to supplement the core courses, is also possible. If you want to see how AI and WildFire can help you create content quickly, at much lower cost, and increase retention, drop us a line and we'll arrange a demo.


Monday, January 08, 2018

Superfast AI creation of online learning in manufacturing - fast, cheap, effective

We clearly have a productivity problem in manufacturing, partly due to a lack of training and skills. As manufacturing becomes more complex and automated, it needs skills beyond the traditionally repetitive jobs that are being replaced. Could AI help solve this problem? AI may lead to a loss of jobs, but we're showing that it can also help train people for the jobs that remain, and for the new jobs being created, to increase productivity. We've been creating online learning quickly and at low cost through WildFire.
Productivity puzzle
The manufacturing sector continues to struggle with productivity, despite growing levels of economic activity. Manufacturing productivity actually fell by 0.2 per cent in the third quarter of 2016, compared to 0.3 per cent growth in services. Many attribute this, at least partially, to low skills and training. As productivity growth seems to have stalled, technology offers a reboot, both in process and in learning. Typically, 'basic goods' manufacturing has been stuck with rather basic use of technology, in stark contrast to 'advanced manufacturing', which has been eager to adopt advanced technology. Both, however, have been tardy in using technology to get knowledge and skills to their staff, falling far behind finance, healthcare, hospitality and other sectors. Understandably, learning in manufacturing has been largely classroom-based and learning by doing. Yet, as manufacturing becomes more complex, knowledge and skills have become ever more important.
One immediate way to increase productivity is through online learning. This has a double dividend: it saves costs (travel, rooms, equipment and trainers) and increases productivity through better knowledge and skills. With access to mobile technology, learning can be delivered to a distributed audience, even on the shop floor. Shift workers can also access training in downtime and in gaps in production.
Manufacturing is often thought of as a sector not much involved in online learning. Several factors are at work here:
1. Lots of SMEs without large training budgets
2. Less likely to have an LMS to deliver content
3. Less likely to have L&D staff aware of online learning
4. Less access to devices for online learning
5. A practical environment where factory-floor training is more prevalent
To make online learning work, there needs to be more awareness of why it can help as well as how it can be done.
What we did
First, we focused on basic, generic training needs and produced dozens of modules on:
1. Manual handling
2. Health and safety
3. Good Manufacturing Practice (GMP)
4. The language of manufacturing
5. Gas cylinders
6. Product knowledge
These are largely knowledge-based modules that underpin practical training in the lab, workshop or on the factory floor. Bringing everyone up to a common standard really helps when it comes to practical, vocational training. You really should understand the science of gas storage and use if you handle dangerous gases and want to weld safely. We trained everyone from apprentices and administration staff to salespeople.
To this end, we produced modules quickly and cheaply using WildFire, an AI service that takes any document, PowerPoint or video and creates online learning in minutes, not months. We have done this successfully in finance and healthcare, but manufacturing posed different challenges.
1. Much of the training is text-heavy, drawn from manuals without any sophisticated use of images. We solved that with quick, low-cost photo-shoots, literally shooting to a shot list, as the online modules had already been created.
2. In not one case did we find an LMS (Learning Management System), so we had to deliver from the WildFire server. This actually had one great advantage: it freed us from the limitations of SCORM, so we could gather oodles of data for monitoring and analysis.
3. The learning is available at any time, 24/7, so learners can train in downtime.
4. It means consistency.
5. We could deliver to any device, especially mobile, which helped.
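Delivering from our own server rather than a SCORM LMS is what opened up richer data. A minimal sketch of what such logging might look like, with hypothetical field names (this is not WildFire's actual schema):

```python
import datetime
import json

# A hypothetical learner-event record. The field names are illustrative.
# Free of SCORM's fixed data model, a server can log whatever it needs
# as simple timestamped JSON events for later monitoring and analysis.
def log_event(learner_id, module, verb, detail):
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "learner": learner_id,
        "module": module,
        "verb": verb,      # e.g. "answered", "completed"
        "detail": detail,  # e.g. which question, right or wrong
    })

event = log_event("learner-042", "manual-handling-01", "answered",
                  {"question": 3, "correct": True})
```

Records like this can be aggregated per module or per cohort, which is exactly the monitoring and analysis a fixed SCORM data model makes awkward.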
We are still delivering and analysing the results. Sure, there have been issues, especially given the absence of L&D staff in the target organisations, but when it works, it works beautifully. If we are to take productivity seriously in the UK, we must realise that this means better training and therefore better performance. Wouldn't it be wonderful if AI helped increase productivity through online learning, so that people can skill themselves into relevant employment? AI may automate parts of roles, but it can also be used to skill people for newly created roles. If you want to find out more, please inquire here.


Sunday, January 07, 2018

Astonishing fake in education and training - the graph that never was

I have seen this in presentations by the CEO of a large online learning company, the Vice-Chancellor of a university, Deloitte's Bersin, and in innumerable keynotes and talks over many years. It's a sure sign that the speaker has no real background in learning theory and is basically winging it. Still a staple in education and training, especially in 'train the trainer' and teaching courses, a quick glance is enough to arouse suspicion.
Dale’s cone

The whole mess has its origins in a book by Edgar Dale, way back in 1946. There he listed things from the most abstract to the most concrete: verbal symbols, visual symbols, still pictures, radio recordings, motion pictures, exhibits, field trips, demonstrations, dramatic participation, contrived experiences and direct, purposeful experience. In the second edition (1954) he added dramatised experiences through television, and in the third edition, heavily influenced by Bruner (1966), he added the enactive, iconic and symbolic modes.
But let’s not blame Dale. He admitted that it was NOT based on any research, merely a simple, intuitive model, and he did NOT include any numbers. It was, in fact, simply a graduated model to show the concreteness of different audio-visual media. Dale warned against taking it too seriously as a ranked or hierarchical order. Which is exactly what everyone did. He actually listed the misconceptions in his 1969 third edition (pp. 128-134). So the first act of fakery was to take a simple model, ignore its original purpose and its author’s warnings, and use it for other ends.
Add fake numbers
First up, why would anyone with a modicum of sense believe a graph with such rounded numbers? Any study that produces a series of results bang on units of ten should seem highly suspicious to someone with even the most basic knowledge of statistics. The answer, of course, is that people are gullible, especially to messages that appeal to their intuitive beliefs, no matter how wrong; the graph almost induces confirmation bias. In any case, these numbers are senseless unless you define what you mean by 'learning' and the nature of the content. Of course, there was no measurement – the numbers were made up.
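A back-of-envelope check makes the point. Taking the commonly circulated version of the figures (10, 20, 30, 50, 70 and 90 per cent – the exact variant differs between presentations), and assuming genuine measurements could land on any whole percentage, the odds of every one falling exactly on a multiple of ten are tiny:

```python
# Claimed retention rates from the commonly circulated fake graph.
claimed = [10, 20, 30, 50, 70, 90]
assert all(c % 10 == 0 for c in claimed)

# If each figure were a real measurement, roughly uniform over whole
# percentages, each has about a 1-in-10 chance of landing on a multiple
# of ten, so all six together:
p = (1 / 10) ** len(claimed)
print(f"~1 in {round(1 / p):,}")  # ~1 in 1,000,000
```

A crude model, of course, but that is the intuition behind being suspicious of results 'bang on units of ten'.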
Add Fake Author
At this point the graph has quite simply been sexed up by adding a seemingly genuine citation to an academic and a journal. The cited paper is real, about self-generated explanations, but has nothing to do with the fake histogram. The lead author of the cited study, Dr. Chi of the University of Pittsburgh, a leading expert on ‘expertise’, when contacted by Will Thalheimer, who uncovered the deception, said, "I don't recognize this graph at all. So the citation is definitely wrong; since it's not my graph." Serious-looking histograms can seem scientific, especially when supported by bogus academic credentials.
Add new categories
The fourth bit of fakery was to add ‘teaching others’ to the end, topping it up to, you guessed it, 90%. You can see what’s happening here: flatter teachers and teaching, and they’ll believe anything. They also added a ‘Lecture’ category at the front – and, curiously, CD-ROM! In fact, the histogram has appeared in many different forms, simply altered to suit the presenter's point in a book or course. This example is from Josh Bersin’s book on blended learning (Bersin was later bought by Deloitte); it is easy to see how the meme gets transmitted when consultants tout it around in published books. What happens here is that Dale's original concept is turned from a description of media into a prescription of methods.
The Coloured Pyramid
The fifth bit of fakery was to sex it up with colour and shape, going back to Dale’s pyramid but with the fake numbers and new categories added. It is a cunning switch, making it look like that other caricature of human nature, Maslow’s hierarchy of needs. It suffers from the same simplistic idiocy as Maslow’s pyramid – that complex and very different things lie in a linear sequence, one after the other. It is essentially a series of category mistakes, as it takes very different things and assumes they all have the same output – learning. In fact, learning is a complex thing, not a single output. A good lecture may be highly motivating, there are semantic tasks well suited to reading and reflection, discussion groups may be useless when struggling with deep and complex semantic problems, and so on. Of course, the coloured pyramid makes it look more vivid and real, all too easy to slot into one of those awful 'train the trainer' or 'teacher training' courses that love simplistic bromides.
What’s damning is that this image, and variations of the data, have been circulating in thousands of PowerPoints, articles and books since the 60s. Investigations by Kinnamon (2002) found dozens of references to these numbers in reports and promotional material; Michael Molenda (2003) did a similar job. Their investigations found that the percentages have even been modified to suit the presenter’s needs. This is a sorry tale of how a simple model, published with plenty of original caveats, can morph into a meme that lies about the author and the numbers, adds categories, and is uncritically adopted by educators and trainers.
Much of this comes from the wonderful Will Thalheimer’s original work. I wanted to give it an extra, more structured spin on the development of the fakery.
Bruner, J. (1966). Toward a Theory of Instruction
Dale, E. (1946). Audiovisual Methods in Teaching (1st ed.)
Dale, E. (1954). Audiovisual Methods in Teaching (2nd ed.)
Dale, E. (1969). Audiovisual Methods in Teaching (3rd ed.)
Kinnamon, J. C. (2002). Personal communication, October 25
Kovalchick, A. and Dawson, K. (2004). Education and Technology
Molenda, M. H. (2003). Personal communications
Thalheimer, W. Blog post
