AI: Challenges and opportunities for membership bodies
Conference Roundup 2023 - In the first of our series on AI, David D'Souza and Will Bryson discussed the impact of AI on the membership sector, as well as how to start exploring its potential.
Chair: Marcia Philbin, Chief Executive of the Faculty of Pharmaceutical Medicine.
Will Bryson, Senior Associate at Bird & Bird. As a commercial lawyer, his practice focuses on tech transactions. Will looks at emerging technologies, with a particular focus over the last five years on artificial intelligence.
David D’Souza, Membership Director at the CIPD - the professional body for HR and people development. Over the last 10 years, David has been looking at the impact of automation on the workplace, its implications for the future of work, and what it means for the CIPD as a professional body.
As AI continues to advance, what do you see as its most transformative applications in the professional membership sector?
In my sector, there are lots of exciting opportunities. There's massive opportunity for the drudgery, the boring bits of our work to be done by machines. If I can stop having to review countless identical contracts, looking for the same issues and have that taken out for me, that would be brilliant.
If the bulk of the drafting, the proofreading, can be done by prompting an AI to draft the contract for me, then I can provide the strategic input - the bits which need a more holistic understanding of the client and the industry. At the moment, and probably for a while, only a human could do that.
In medicine, life-saving advances can be made. In drug development, being able to shape proteins and target drugs in ways you just couldn't imagine without this technology could be transformative.
If you could have a tailored therapist on your phone, at a lower cost, everyone could have one. Something to talk to, something that learns from you and about you on a very personalised level - that could be transformative.
Ultimately, in any profession, the benefit is taking what is time-consuming and making it more efficient, or taking something that the human does at the moment and doing it better. I don't think it means humans go away.
What are the implications for the membership sector?
One is on the broader economy - if AI means other people don’t have jobs, or if there's economic constraint, as we've seen, that obviously impacts people's ability to pay. And that's what keeps the world – and our organisations – going round.
The second is on the nature of the work that people in the professions you're supporting may be doing, and the impact that it has directly on them. And the final one is on the inner workings of your own organisations.
The last six months have seen massive changes in the technology. There’s barely any aspect of what a traditional professional body or membership body might do that won't be impacted: from content creation to the running of community, to administration, to fielding queries from people, through to branding.
The technology is as easy to pick up and use as Google or Facebook was back in the day. Go away from here, test it, play with it. And I think you'll have a profoundly different view of the implications not just within your organisation, but also for your customers or members more broadly.
How can AI help small or medium sized associations with limited resources?
It's brilliant for that. Think about some of the work you might currently need to outsource, or the work you can't get to because of time constraints - everything from content generation to analysis - it will let you do that at pace. For the moment, we need people in the loop in terms of decision-making, but probably in far fewer places than you may anticipate or expect.
Profound change is coming, but there are opportunities wrapped into this. We've got technology that suddenly has incredible scope in terms of its application, what we need to do is make sure that we're taking the right opportunities with that technology, to support and enhance people where we can, but also to run organisations efficiently.
For smaller organisations, don’t think of this as technology, but as someone described it to me: “it's like having an army of quite good people at your disposal, instantly.” It doesn't mean you don't need to check their work, it doesn't mean they'll always be right. But it does mean you get an instant output that's probably 80% of where you need to be.
I would echo that. Clients across the sectors are coming to us wanting to use AI, but concerned about putting their data in, and looking for some guardrails. I would encourage anyone to use it, but think about the guardrails. Depending on how you're using it and the instance (public vs private), you can probably be bolder than you think with what you put into the system.
We as a law firm put our own guardrails in place on what we can do. We're also running a hackathon. We've granted access so anyone can go away and see how we can use it internally for whatever purpose we need: what can we do to drive efficiencies, get things moving?
What impact will AI have on our professional qualifications in terms of maintaining quality and standards?
The skills you test for in an exam will change as the jobs change, and exams will need to ensure a certain literacy with the benefits and risks of using these systems, so they are used responsibly.
Can you make the system do what you need it to? Can you spot any issues? It's so easy to think that thought has gone into an output you get from a system, that it has understood what the question means. It hasn't. It's just running statistical analysis on the text you put in, to see what comes next.
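That "what comes next" point can be illustrated at miniature scale. The sketch below is a toy bigram model - not how production LLMs actually work internally, and the corpus and function names are invented for illustration - but it shows the same underlying idea: the continuation is chosen from observed statistics, with no understanding of the question involved.

```python
# Toy next-word predictor: pure frequency statistics, no comprehension.
from collections import Counter, defaultdict

corpus = "the contract is signed the contract is reviewed the contract is signed".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most common continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict_next("contract"))  # "is" - the only word ever seen after it
print(predict_next("is"))        # "signed" - seen twice, vs "reviewed" once
```

A real model works over tokens with billions of learned parameters rather than raw counts, but the output is still a prediction of likely continuations, which is why plausible-sounding answers can contain errors.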
I think there are two things happening here. The more fundamental question is: what are we assessing for, and why? You wouldn't test an accountant now to make sure they could do their job without Excel. Systems move on, and it’s people's capability to make use of them to produce an output that matters most. So the first challenge for us all is to consider the future of the profession with people utilising this technology as part of their toolkit.
Then you've got the second, urgent, challenge: how you assess whether people are using this technology [in exams]. They're using it increasingly intelligently, not to get perfect answers, but to get answers that look plausible. The problem is that the technology that checks their answers for plagiarism quite often falsely flags work written by a person as machine-generated.
Where do I go to source AI, or find the right advice?
Most of the big headline AIs are freely available - ChatGPT, DALL-E, Stability AI's Stable Diffusion, Midjourney. Just go online and use the basic tier, though even "free" comes with a cost. For example, part of the cost of using the free version of ChatGPT is that it will reuse your data unless you tell it not to. Also, the free version is not the latest model (GPT-4), so you're not getting the full functionality.
If you want to get to the next level, the question is: what are you using it for? What particular tool do you want? OpenAI has a premium version, or if you have an enterprise account with Microsoft, just go to them.
If you want something more specific, tailored and trained, who do you go to for that? A lot of IT providers and consultancies are getting their ducks in a row as to what product they offer, working out what they can and can’t do. It’s all very nascent.
So if you have access to GPT-4 through Microsoft, a provider might offer to tune it to your business. They could take your dataset and fine-tune the model on it, adjusting the weightings to better fit your data, in a way that doesn’t involve your data being shared. Any solution will be quite specific to what you want to do with the system.
What advice would you give organisations on the need to upskill the workforce in order to integrate AI into their workflows?
My advice to organisations at the moment would be to enable and embolden people to experiment within boundaries. There are data concerns, you don't want to just have people loading up random spreadsheets of customer data into it. But equally, if people aren't getting to use it, then that's a problem. So create a safe environment for them, maybe get groups working through it together. Get them testing the boundaries, and then looking at their own workflows and asking, “how could that help us? How could that move us along?”
The great thing about this technology is there's no barrier to entry. If you can type Bing or ChatGPT or Bard into Google, you are immediately in a position to use this technology, for free for the most part. And given that, the only thing holding people back is fear. So if you think about where your organisation might need to be in five years’ time, stealing a march on it now and not falling behind is absolutely the thing. I've said to my team, “if you're not getting up-skilled on this, the labour market is leaving you behind.” It's that simple.
There's an incredible opportunity for people at the start of their career to use and acquaint themselves with this technology and push the boundaries of it faster than others may be inclined to. If you've got a generation coming up that can see the angles better than we can, we should be encouraging those people to experiment and find the shortcuts in our organisations, to serve our members faster, to get them better answers to their questions. If we don't test the boundaries, someone else will.
So as a mini challenge to you: if after this you're going to use the technology for the first time, ask it to set up a competitor to you - what are the first 10 things you’d need to do? That freshens up the thinking.
Given the rapid developments in AI over the past six months, and the huge interest in the topic from our members, Memcom will be launching a dedicated AI network in the coming weeks. If you’d like to get involved, contact [email protected].