We were lucky enough to have the chance to interview Sarah Burnett, Tech Evangelist at KYP.ai and renowned tech analyst, as well as the author of ‘The Autonomous Enterprise – Powered by AI’, published by BCS, the Chartered Institute for IT.
As part of National Customer Service Week in October, we’ve been investigating the future of artificial intelligence (AI) and what this means for the customer service industry. In our interview with Sarah, we discussed everything from generative AI and human performance to knowledge work, privacy and data concerns, and the ethical use of AI.
View the full interview below or download the transcript here.
- McKinsey recently released research that shows that generative AI will perform at a median level of human performance by the end of this decade – do you agree? Or do you think it could be even sooner?
Do I think it’s possible? I think it’s possible across some capabilities. I’m sure generative AI, as McKinsey has predicted, will equal, if not overtake, human capability in areas like creating art from descriptions, writing content, coding, and lots of other areas. But I’m not entirely sure about all of it. I saw their research and realised that they also predict AI will match human capability in areas like emotion detection and social interaction. I’m a bit doubtful about that – and I’ll tell you why. It’s because I think the way that AI is being trained for emotion detection doesn’t give it enough context.
For example, when it comes to detecting human emotions, it’s often based on facial expressions from images, and I think that’s very narrow. You could be frowning because you’re concentrating, or you could be frowning because you’re unhappy or angry, you know.
So I believe there are some limitations there that need to be overcome, and unfortunately it’s already being used for emotion detection in lots of different software. For example, some HR organisations are using it to detect whether a person is enthusiastic or not. That kind of thing is really scary.
- What about the automation of knowledge work? Can we expect this to improve in the next decade or so? (In your book, you state that automation turns traditional knowledge work inside out.)
I believe that as more processes are automated, the way that we work will change. In an organisation today, people sit working on their computers, effectively in pools of teams grouped around similar kinds of functions. I think that will change. It will be more the robots – the solutions, the automations – doing the work, and the people supervising those automations. That will definitely happen; I’m absolutely certain about that.
As to how much of each process will get automated, that very much depends on the type of process and the capabilities of AI as it develops faster and faster. But we are already able to automate things that we couldn’t automate last year, because ChatGPT came along and enabled us to automate content creation, coding, and web development.
There were solutions before that, but they were very niche: they weren’t quite as capable, and they weren’t able to do it so easily, within seconds. It’s phenomenal. But what I see at the moment is more human augmentation. So you might use ChatGPT to start you off on some piece of content. How far that will go, and how quickly it will become end-to-end, remains to be seen.
I would advise people to always check the output of whatever’s produced – take it as a starting point rather than as a complete solution. But there are areas where it’s actually delivering significant benefits. There’s one example – one of my colleagues wrote a blog about it – where a pharma team doing pricing calculations has automated those calculations with generative AI: they have the pricing template and generative AI does the calculations. Over a week, I believe they’re saving more than an FTE in terms of the time it used to take.
So that’s significant, and it can’t be ignored by any business. It’s got to be taken into account across all kinds of processes. The other aspect is this: generative AI on its own allows you to automate a lot more than before, but generative AI in combination with other technologies is a whole other chapter in this development.
I’m talking to intelligent automation vendors who tell me they’re going to integrate it with their solutions, so they’ll be able to provide better search and conversational interfaces, among other things. Some of them are telling me it’s going to make the training of their AI a lot easier as well. So we’re going to see a wave of development thanks to the availability of generative AI.
- From your perspective, which industries stand to gain the most from AI?
I think there are some industries that were perhaps lagging and will now be able to catch up. I’m thinking of some of the professional services, legal firms and so on. I think they’ll be able to do a lot more than before. I’m also hearing that consultancies are using generative AI to help them develop bids for contracts, so it’s speeding up that side of their business as well. I believe legal and professional services will gain a lot from generative AI.
I think global business services organisations will benefit hugely, because they have their procedures and processes for things like accounting or technical support, or whatever it is they do for their customers. And then suddenly they can add all these other things and improve what they’re doing, do it faster and better, maybe even change and offer new options for pricing – that kind of thing.
But there are other areas too. I think any of the horizontal functions, like HR and, in particular, customer service, are going to be revolutionised.
- What AI or automated activities will deliver the most value for organisations?
Well, honestly, I think it will vary a lot. So there are the horizontal functions that I mentioned like customer support, customer service, HR, those kinds of things. But at the end of the day, it depends on how complex your processes are and what you know about them. And this is where I’m afraid I have to do a bit of a plug for KYP.ai because it gives you the intelligence that you need in order to choose. So why wouldn’t you do it in a data-driven fashion, you know?
We have the ability to get visibility into how things are being done, to optimise them, to choose what could be automated, to get recommendations for which technology to use to automate the identified process, and then to build the business case for it as well.
It will change decision-making significantly, because by using these technologies together with other intelligent automation solutions, you can significantly increase the return on investment. Value generation is now reaching a different scale of potential and opportunity, I think.
The model in my book talks about the fact that there’s going to have to be supervision. You’ve got to think about where the knowledge is going to be held. How are you going to maintain the knowledge of your processes, your formulas, all the things that you do as a business – everything that makes up your business? Are you going to trust all of that to AI, or are you going to make sure there are generations of employees who can continue the business, with proper succession planning? All of that is really important.
- What are your thoughts on privacy risks and data breaches because of the increased use of AI? What safeguards are needed to ensure the responsible use of AI, in your opinion?
There are risks, and we can’t deny the fact that AI needs an awful lot of data to be trained and validated in the first place. How is that data managed? Where is it? Where is it coming from? How is it sourced? Those are all really important questions, and I think as an industry we need to answer them and be transparent about how we do things, so that we can win trust.
There have been far too many tools and apps that harvest information without our knowledge, and that kind of thing needs to be stopped. This is where I think industry and governments need to work together to produce guidelines and frameworks that shape development and the way AI is used. It’s good to see some governments starting to act and take steps.
The risk is that we limit development. We don’t want to limit development, so it’s about finding the right balance between managing AI and allowing it to develop.
I think it starts with data and the way that you source it: ensuring it’s ethically sourced, and using third-party tools where appropriate – there are a lot of third-party tools out there that validate a model for truthfulness, fairness, and all these kinds of things. I hear they’re quite difficult to use, though. They’re quite time-consuming to implement and take a lot of effort, particularly as a lot of AI companies are startups: they don’t have the manual capability or resources to use some of these technologies. So we need to come up with better solutions, and I think that will happen. Over time, all these things will be developed.
And we will have guidelines. I’m really encouraged when I see vendors talk about how ethically they develop things, and I think that kind of thing will become really quite important as time goes by.