AI in L&D: The Immaturity Model
October 30, 2024

Adoption rates for AI in L&D are growing. But what does that mean?

The adoption of AI for L&D takes many different forms. Our research shows that the greatest area of use is improving the internal efficiency of the L&D department. In our survey of 420 practitioners from over 50 countries, when asked about the benefits of using AI in L&D, respondents focused on the speed of content production rather than its quality; content and efficiency dominated the top of the table of responses.

The qualitative survey responses, however, paint a more nuanced picture. Compared with last year, respondents increasingly use terms around skills and data to describe their work. In other words, while a core of respondents see AI as a way of improving efficiency in their current content production work, a smaller group sees it as a way of extending L&D's current role.

Beyond this, the case studies in the report show a further, different approach: using AI in a range of sophisticated ways. Many of them take skills as their starting point and go on to exciting uses of AI that extend well beyond improving the efficiency of the L&D department. Notably, some of the case studies started more than two years ago, before the launch of ChatGPT excited general interest in AI.

When exploring data from the first two reports in April 2024, Eglė created the Complexity Scale to explain the wide and varied range of uses of AI. The Complexity Scale aims to answer the key question of AI adoption in L&D: why does L&D seem to use AI largely for 'simple' administrative, learning design and content tasks rather than more impactful and 'sophisticated' uses? The answer is that the 'simple' uses can be implemented by an individual within L&D using a free account, effectively bypassing the bureaucracy and politics of new technology adoption.
However, each step towards greater sophistication requires L&D to have extra skills, technology or relationships to implement it.

This Complexity Scale gave L&D practitioners a way to understand the vast range of ways of using AI in L&D and what was holding them back. It was met with an enthusiastic response, with organisations keen to pinpoint their position on it.

We believe, however, that the presentation of the scale needs refining. The shaded background suggests the scale is smoothly graduated: that after starting with a simple use of AI, one will inevitably move to more sophisticated uses. This was not the intention of the original design of the scale, and we believe it to be misleading. Indeed, the evidence of the survey and the case studies suggests there is no smooth and guaranteed transition.

To be clear: maturity models work when mastering a particular domain. An expert rally driver will begin by passing their regular driving test and progress through various levels of complexity before competing at the highest level. The same applies to learning how to use AI to, for example, create learning content. People will begin at a basic level and progress to sophisticated tools and output. It is not automatic. It requires effort. But with enough applied effort, progress happens.

AI as a whole, however, is not a single domain, like car driving or content creation. It is a fundamental empowering technology, like electricity. When we talk of the adoption of AI in L&D, we are really talking about a series of different adoptions: AI's use in content production, in creating skills taxonomies, in analysing behaviour patterns, and more. What they have in common is the use of powerful software on large data sets with massive processing power. Like electricity, then, AI can be used for many things, and expertise in one area does not necessarily lead to expertise in another.
An arc welder and a radiologist both use electrical equipment, but neither is qualified to do the other's job.

This leads us to propose that the Complexity Scale is best considered not as a smoothly graduated spectrum but rather as a set of different domains of use. The domains (from Administrative tasks to Skills management) can be grouped into clusters. We have identified three distinct ways in which L&D uses AI:

- Internal efficiency – doing the current work of L&D faster and/or cheaper
- Point solutions – localised solutions to individual learning or performance problems
- Business integration – working with the entire business to support performance

Is this still a maturity model by another name? No. There is no suggestion of automatic progression across these different clusters of uses. We can find no example of an organisation that began by experimenting with ChatGPT to create content faster and went on to an integrated skills solution.

So, if using AI well in one area does not make you a sophisticated user elsewhere, how can you become one? The answer is that progress is determined by key enablers already present in your organisation, in particular the culture and leadership of the L&D department and the organisation's readiness to mandate and support change.

Organisations using AI to go beyond boosting internal L&D efficiencies have what we call 'Open' L&D departments. These are open to change, open to challenging and to being challenged. Some of their characteristics are:

- Leadership ready to challenge norms inside and outside the department
- Leadership comfortable making decisions with imperfect information
- An operating environment that ties L&D into the business
- A focus on business outcomes (e.g. via performance consulting)
- Seeing content production as a means to an end, not an end in itself

However, it is not enough to be an 'Open' L&D department to use AI in a sophisticated way.
For that to happen, the organisation itself must also be ready to change, in part or in whole. What determines whether the organisation is ready and able to change, particularly whether it is ready to adopt new technology that may alter how it works?

For AI adoption to succeed, an organisation needs both a culture that supports innovation and a mandate from the top to adopt AI. This is evident from the large organisations in our case studies that are adopting sophisticated uses of AI. Ericsson has been on a journey to becoming a skills-based organisation for several years and has not yet reached the end. This level of commitment can only be achieved with a mandate for change from the top and a culture ready to support that mandate. Ericsson is undoubtedly at the sophisticated end of our spectrum, carrying out work in the Business integration cluster.

Not all organisations will reach this level of company-wide commitment to change. But if it is impossible to do everything, it is always possible to do something. This is where L&D departments can adopt AI solutions that do not spread across the organisation but deliver value in one part of it. We call them Point solutions. Point solutions may not revolutionise how L&D works with the business, but they can offer marked improvements to how L&D supports employees' learning and performance: helping them navigate the content or development opportunities in the organisation, self-direct their learning, and practise key skills.

This brings us to our Immaturity Model. There are no arrows on it, no suggestion of an automatic progression from simple to sophisticated. Progress is not determined solely by the L&D department but by what it does in the context of the organisation.

We should always ask whether any new model matters. Does it matter how L&D practitioners use AI? It is comparatively easy to use generative AI tools to create learning content.
It makes work easier and more productive and cuts significant costs, so why not simply focus there? As an individual, this may make work more rewarding and valuable. For L&D departments, however, the answer is different. Any department that focuses most of its effort on using AI purely internally, and in particular on producing more content faster, will likely face a difficult conversation within the next budget cycle. In the words of one of our case study partners: "If you can do things fast and cheap, why should we give you more money?"

This is the danger of allowing the department to tie its identity to content. As generative AI makes the value of content plummet, so will the perceived value of L&D.

We have presented the Immaturity Model dispassionately here, and in our report. But as we dwelt on the data, the case studies and the other two reports we have written in the past 11 months, we could not help but think that AI presents L&D with a once-in-a-generation opportunity to have the impact and reach it has always wanted.

When a similar opportunity was presented at the end of the 1990s, the training function (as it was then) could have broadened its scope using the power of the new World Wide Web. Instead, the response was to create click-next elearning. If we similarly fail to seize this opportunity, then the Model will be not a roadmap for growth but a description of how, faced with opportunity, L&D chose once more to remain with less influence, having less impact, in immaturity.

This article was adapted from AI in L&D: Intention and reality, by Donald H Taylor and Eglė Vinauskaitė, October 2024. https://donaldhtaylor.co.uk/research_base/focus03-intention-and-reality/

Provided for OEB Global 2024 by Donald H Taylor.