The inadvertent tyranny of AI use bureaucracy

When something goes wrong with AI, who is to blame? I can clearly picture future complaints or disciplinary hearings routinely struggling to identify liability in cases where something goes wrong because of an AI system used by the counsellor. We can of course blame the counsellor. I suspect the counsellor may prefer to blame their employer, or perhaps an AI company. The employer and AI companies will no doubt prefer to blame the lack of legislation, or more simply just blame the counsellor. Could we blame the client? Well, perhaps, if we just let them sign their confidentiality away in a tick-box waiver. With enough preparation in contracting, perhaps some counsellors could protect themselves with the equivalent of a ‘get out of jail free’ card.

The noble counsellors among us will likely come down in favour of counsellors being responsible for AI use in the counselling room. After all, if we introduce it, we should assume we are in a position of power over how AI is used in the relationship. But is it right that counsellors are solely responsible? I’d imagine not. The profession as a whole has to be able to take its share of the responsibility. Did this counsellor get any training on the topic? Were they ever even expected to?

To me, the answer is fairly simple. We shouldn’t let new technology boggle us away from essential ethical thinking, so here is a quick analogy to demonstrate why liability should probably be shared. If I ran a delivery service where I hired HGV drivers to transport pallets of produce around the country, I think it is a perfectly fair expectation for me to check that the drivers I hire have a driving licence. Now let’s imagine there is a crash and harm is done. Who is liable? Of course, the driver is on one level. But if I’m the employer and I did not check that the driver’s licence was valid, that they had insurance and that they’d had enough sleep, I would argue I am also partly liable for the crash. Those who did not take reasonable action to prevent the crash are in some way liable. If you are a counsellor reading this and you use AI with your clients, please consider that you may have more in common with an HGV driver who works without a licence. We live in a time when perhaps only a few people have concluded that we need a system which produces counsellors who use AI safely.

And why would so many counsellors use AI (probably over 10% of counsellors will use AI fairly regularly)? Well, they might not even realise it could be a problem. A tall enough child could drive a car without knowing that it is dangerous or illegal. In AI, we don’t have regulations or laws which specifically govern its use. If we’re lucky, we may have voluntary ‘policies’ by which we can abide.

In another post, I will probably cover more fully the topic of ‘what harm could an AI do to a client?’

Achieving enough transparency to ensure that AI use is safe may inadvertently make its use more unpleasant or bureaucratic, or may dissuade counsellors from using it when it could be beneficial. The more prevalent an organisation’s use of AI in individual cases, the more it should be able to demonstrate how and why it is using AI ethically. Doing so may involve a series of administrative processes, such as an AI risk register, maintaining a list of task-utility thresholds, adding and adjusting labels, seeking consent at each usage, environmental impact assessments, liability assessments, strategic planning, AI policy processes and so on. It may require further external validation or scoring of trustworthiness based on a history of trustworthy conduct (note to reader: a bit like insurers determine the cost of insurance based on a quantitative appraisal of prior risk factors).

Perhaps better tools will be developed to establish trustworthiness with AI in counselling contexts, but at this stage, October 2024, it feels like using AI ethically might actually be quite hard work, and where there is hard work there is the desire for shortcuts. Counsellors may simply decide not to disclose their AI usage. To prevent this, we should consider a duty to disclose AI use as a core responsibility of the counselling profession.