By Nial Vivian
On 1 November 2017, MoneySavingExpert (MSE) published a report into the ability of ombud schemes to fulfil their purpose. It concluded that ombud schemes are perceived as biased by many users, that compliance with ombuds’ processes and decisions is poor, and that a tightening of the regulations around the use of the title ‘ombudsman’, alongside a strengthening of the powers available to schemes, is necessary.
This blog post attempts to place the MSE report within the existing body of research on ombud schemes and consumer ADR, and to assess what it adds to the wider discourse.
The state of play
On 1 November 2017, MSE released ‘Sharper Teeth: The Consumer Need for Ombudsman Reform’, a self-described ‘damning’ report demanding an overhaul of the ‘farcical’ ombud system.[1] The authors recommend that all ombud schemes be given a statutory foundation, that schemes and the individuals in senior roles within them be subject to further oversight to ensure accessibility and perceptions of fairness, and that the 8-week period that users are usually asked to wait before submitting their complaint to an ombud be shortened.
Among the report’s findings are:
- 60% of those surveyed thought the ombud scheme used was biased against them;
- more than 50% of respondents for all but one of the ombud schemes felt that the decision made was unfair;
- 53% of respondents stated that they were put off using an ombud again by their experience; and
- a substantial proportion of decisions made by ombud schemes (between 13% and 48%, depending on the scheme used by those surveyed) are not complied with by participating organisations.
On the surface, some of the findings tread new ground and might sound some alarm bells.
Questions arise around the reliability of the data collected: respondent numbers for each scheme are uneven, and the authors themselves admit that the ‘opt-in’ nature of the survey they used is likely to have produced a sample disproportionately weighted towards disgruntled users. This doesn’t mean the findings should be ignored, however. Taken in context with contemporary research, what do MSE’s findings tell us about the state of play for ombud schemes in the UK?
Objectivity is hard to come by
The MSE report raises an alarming statistic: on average, 60% of respondents felt that the ombud scheme they used was biased towards the other party.
Perceptions of impartiality, independence, and even-handedness are critical to the legitimacy of schemes, but a factor which is perhaps neglected in the report is how these perceptions are formed. Research by Naomi Creutzfeldt, ‘Trusting the middle-man: Impact and legitimacy of ombudsmen in Europe’, has linked users’ views on impartiality and fairness to the outcome of the complaint, and found that negative substantive outcomes have a negative impact on the perceived fairness of the procedure employed.[2]
This finding was reinforced in our report, ‘Confusion, Gaps, and Overlaps’, which, like Creutzfeldt’s work, also explored how the failure of ombud schemes to communicate their purpose and remit effectively before making a decision can negatively impact perceptions of fairness and impartiality. This builds on some of what Sharon Gilad discussed in her research on the Financial Ombudsman Service,[3] in that good interactional justice and ‘expectations management’ can have a positive impact on a user’s overall perception of justice, including their perception of the substantive outcome.
It is worth noting that the MSE report doesn’t distinguish between schemes that oversee primarily public bodies and those that oversee primarily private-sector organisations. Creutzfeldt’s research shows that perceptions amongst users of private and public sector schemes can be radically different,[4] with consumer ombud schemes showing far higher levels of satisfaction. She reports that this is often down to radically different expectations of what private and public-sector ombuds can provide, and points to the lengthy journey to an ombud scheme, and to dissatisfaction with preceding complaints processes, as the likely culprits of heightened consumer emotions and therefore expectations.
Perceptions of fairness are fluid and subjective, dependent upon the overall outcome of the complaint and the ability of the individual complaint handler and the scheme as a whole to project an image of fairness, rooted in their service standards, to the user. The vast array of different service standards in place across the various schemes, however, only makes it more difficult for users to form concrete expectations of an ombud’s processes or decision-making standards, and is one way of understanding some of this dissatisfaction. It is important to note the concerns around bureaucratic manipulation that Gilad acknowledges might arise from her work, however.[5] She states that expectations management might be seen by some as actively legitimising professionals’ determinations of which complaints are justified and which are excessive. As such, in light of the consumer perceptions highlighted, a legitimate question remains around the actual substantive fairness of the decisions made.
A hard road to nowhere?
The MSE report found that 53% of complainants would not want to use an ombud scheme again, citing difficulty with access and overall poor experience as reasons. This finding is reinforced in our report, ‘Confusion, Gaps, and Overlaps’, where the complexity of the consumer ADR landscape was highlighted as a barrier for complainants. The liberalisation of the market for consumer ADR has only compounded the difficulty consumers face in accessing justice, and accentuated the disconfirmation of expectations that users are likely to feel when they discover that businesses are not obliged to play ball. Wider dissatisfaction with the ombuds ‘brand’ is exacerbated by media reports that courts can come to polar opposite decisions in certain circumstances, challenge the methods used by ombuds, or reject their findings.
Going beyond allegations of bias and looking at wider user experience of schemes, it remains that a not insignificant number of users seem to feel that decisions made are unsatisfactory,[6] that there might be a disconnect between an ADR scheme’s perception of a consumer’s satisfaction with the outcome and their actual satisfaction,[7] that users feel ‘badgered’ into accepting decisions,[8] and that they can be left unclear as to whether they can challenge the decision made.[9] The different organisations examined by these three pieces of research, coupled with the incidental nature of users’ experiences of schemes, mean that any conclusions drawn should be suitably measured, but they certainly indicate that more scrutiny is required.
What also raises concerns is the MSE report’s indication that organisations under jurisdiction may not comply with decisions as often as ombuds suggest; these findings conflict with the compliance levels reported by the ombuds themselves. Last year, Ombudsman Services: Energy reported 89.1% compliance with its decisions, the Local Government and Social Care Ombudsman said that 96% of its decisions were complied with, and the Property Ombudsman reported almost 100% compliance with its decisions across the areas it covers. The MSE report, however, indicates that around 15% of Ombudsman Services: Energy’s decisions are not implemented, that almost 50% of the Local Government and Social Care Ombudsman’s decisions are not complied with, and that nearly 20% of The Property Ombudsman’s decisions are not put into place.[10] This could be explained away by citing the uneven sample used by MSE, but it remains that if bodies under jurisdiction are able to walk away from recommendations or decisions made by ombud schemes, this will be damaging to the wider brand and injurious to the users of the scheme.
This disparity might also indicate that ombud schemes are overly optimistic in their reporting of compliance with decisions, or that the perceptions of compliance held by users and by the schemes themselves differ somewhat. At the very least, further research into, and wider reporting of, these figures and the definitions of compliance used could well be warranted. Measures for ensuring decisions are complied with, however, seem necessary if the underlying purpose of ombuds, to provide redress and correct maladministration, is not to be undermined.
Further to this, even if some schemes are able to maintain very high levels of compliance, the overall legitimacy of the ombuds brand will be damaged if there are only limited or patchwork arrangements for challenging decisions, or if bodies under jurisdiction are in some circumstances able simply to opt out of a scheme. This damage could lead to more bodies under jurisdiction seeing fit to ignore decisions that they don’t like, with serious results for users of ombud schemes.
Conclusions
Having considered the MSE report alongside some of the existing research, this post agrees with many of the recommendations the report puts forward, at least in terms of general direction, whilst not calling for exactly the same measures to be put into place. Even though this post questions the validity of some of the conclusions that can be drawn from the data presented, the call for further scrutiny in certain areas, and for strengthened powers in others, is welcomed and expanded upon below.
In addressing solely ombud schemes, the MSE report suggests that discrepancies in user experience between schemes could in part stem from whether the scheme itself has a statutory basis, which in turn dictates whether membership of the scheme is compulsory and shapes the scheme’s ability to ensure that decisions are complied with. The ‘Confusion, Gaps, and Overlaps’ report, looking at consumer ADR in the round, finds that schemes operating in regulated sectors perform better in terms of user experience than those operating in non-regulated sectors. Both agree that the major variances in user satisfaction stem from differences in the standards to which schemes are held, and that strengthened oversight is required in one way or another. In the consumer sector, too, patchwork coverage and voluntary membership look to be damaging perceptions of ADR as a whole, and through this perhaps the ombuds brand. Mandatory membership of a single scheme for each sector, as both the MSE report and ‘Confusion, Gaps, and Overlaps’ recommend, alongside strengthened oversight, appears to be sorely needed.
Reports of poor compliance, and dissatisfaction with the substantive outcome of upheld complaints, also indicate a need for further, and strengthened, oversight of ombud schemes. Whether user dissatisfaction stems from service users holding unreasonable expectations of outcome, or whether ‘ombudsbrand’ justice is indeed the second-rate justice that some fear it to be, will not be easy to determine before someone more concretely defines what ‘ombudsbrand’ justice is in the first place. The ability of individuals to exercise their rights, as always, relies on their knowledge of them. For this to happen, clear service standards, including decision-making procedures, need to be publicly displayed by ombuds and properly explained to users. The Ombudsman Association has produced a service standards framework in response, which members are currently implementing, but this may only address part of the problem: ombud schemes also need to be held accountable to these standards. To avoid losing the unique abilities of ADR procedures to provide creative remedies and to adopt a formality of approach suited to the dispute at hand, ombud schemes should not be measured against the standards of the courts, or they risk becoming merely a route to justice ‘in the shadow’ of them.
As such, this post suggests that a further measure be considered. When an individual is a party to a dispute, objectivity becomes difficult to obtain, especially at a time when heightened emotions are perfectly reasonable, and this will always affect consumer perceptions of fairness in ways that make their views of dispute resolution procedures less than impartial. What might protect the authority of an ombud, engender greater public confidence in the decisions made by adding a layer of assurance, and guard against any concerns about bureaucratic manipulation of users’ expectations of justice, is an arena in which substantive decisions could be reviewed in specific cases and the results published, with remedial action taken where necessary. Determining who should watch the ‘watchers’ may be hard enough, but determining what standard they should be measured against is perhaps harder still.
About the author:
Nial Vivian is a Lecturer in Dispute Resolution at Queen Margaret University, and is also completing a dissertation for an MSc in Dispute Resolution. Prior to this role, he was a complaint handler across various private-sector organisations and ombud schemes.
[1] Lewis, M., Barnes, W. and Good, K. (2017). Sharper Teeth: The Consumer Need for Ombudsman Reform [online]. [viewed 21 November 2017]. Available at: https://www.moneysavingexpert.com/news/protect/2017/11/mse-tells-mps-of-need-for-urgent-reform-to-ombudsman-farce
[2] Creutzfeldt, N. (2016). Trusting the middle-man: Impact and legitimacy of ombudsmen in Europe. [online]. [viewed 21 November 2017]. Available at: https://www.law.ox.ac.uk/sites/files/oxlaw/ombuds_project_report_nc_2.pdf
[3] Gilad, S. (2008). Accountability or Expectations Management? The Role of the Ombudsman in Financial Regulation. Law & Policy. Vol 30, no 2. Pp. 227-253.
[4] Creutzfeldt, N. (2016).
[5] Gilad, S. (2008).
[6] Creutzfeldt, N. (2016).
[7] Gill, C. et al. (2017).
[8] Ibid.
[9] Ibid.
[10] Lewis, M. et al. (2017).