There’s been much in the Irish media this week about a new report on sex workers’ clients, based on research conducted in five European countries (Ireland, Finland, Bulgaria, Cyprus and Lithuania). Unfortunately, if unsurprisingly, the media coverage has ranged from bad to abysmal. For a prime example of the latter, see this piece in the Irish Examiner, which starts off with the claim that “Ireland’s sex trafficking trade is worth an estimated €250m a year, a new study shows” – despite the fact that no such claim is made anywhere in the report. The journalist appears to have mistaken a made-up stat cited by a speaker at the report’s launch for an actual research finding, which I suppose is an easy enough error to make when you just repeat things NGOs tell you without ever cracking open a report yourself.
As for most of the other coverage, in general its worst sin is making the report out to be somehow shocking or revealing or breaking new ground, when it actually tells us very little – at least where Irish clients are concerned. By the time I finished reading it, my reaction was such a big fat “meh” that I actually wondered whether I should write this piece at all, and risk drawing more attention to something deserving so little.
Having decided (with some reservations) to proceed, I’ll start at the beginning. The report is an effort by the Immigrant Council of Ireland, the NGO behind the Turn Off the Red Light campaign, in partnership with like-minded groups from the other countries mentioned above. There is no attempt to hide the report’s agenda; Chapter 1.1 openly identifies it as part of an overall project that aims to
Reduce demand for the purchase of sexual services
and while this clearly gives the authors an incentive to find data that presents clients in the worst possible light, I don’t think they’ve actually achieved this – at least not when it comes to the Irish clients (whose responses I will limit this post to). The main reason for this is that their sample is so small as to be virtually meaningless: only 58 Irish clients took part in the survey, which was conducted entirely by means of an online questionnaire. (They actually did attempt to do face-to-face interviews but, as this excerpt relates, failed in almost comical fashion.) You’d need a pretty small population size for 58 to be remotely adequate to tell us anything about clients as a class – and the authors can hardly claim both that the client population is small enough for this sample to be meaningful and that it spends €250m a year. After all, if the respondents amounted to even 10% of the sex-buying population of Ireland, that would still require them to pay an average of €431,034 per year for sex – something clearly impossible at the income levels reported (nearly three-quarters earn less than €40,000 per annum, and only 13% earn above €60,000). Add this to the finding that nearly half of respondents had paid for sex either “just once” or only “a few times”, and clearly the Irish sex industry is either a hell of a lot less lucrative than TORL advocates make it out to be, or those profits are coming from far too many clients for this sample size to be sufficient. They can’t have it both ways.
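For anyone who wants to check that arithmetic, here it is as a quick sketch. The 10% share is my hypothetical, not anything the report claims; the €250m figure is the one cited at the launch:

```python
# Back-of-the-envelope check on the €250m claim versus the sample size.
claimed_market = 250_000_000  # the €250m/year figure cited at the report's launch
sample = 58                   # Irish respondents in the survey
assumed_share = 0.10          # hypothetical: suppose respondents are 10% of all clients

population = sample / assumed_share           # implied client population: ~580
spend_per_client = claimed_market / population
print(round(spend_per_client))                # 431034 – per client, per year
```

Move the assumed share up or down and the per-client figure scales with it, but at any share small enough for 58 respondents to be informative, the implied spend is far beyond the incomes the respondents actually reported.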
While there are some disclaimers about the inferences that can safely be drawn from the report, they are both too little and too late. The “Research methodology” section (Chapter 1.2) explains that non-probability sampling was used, but suggests the only weakness of this approach is that it cannot
determine the percentage of the respective populations who had purchased sex
Nowhere does it explain that non-probability sampling cannot, by its nature, ensure a representative sample; in fact, at several points the report uses language that seems to assume the respondents are representative of Irish clients as a whole. Near the end, in Appendix 3, it concedes the risk of self-selection bias – where the sample is skewed by characteristics shared among those who choose to participate in it – but then suggests this concern is unwarranted on the basis of
similarities between those who participated in this research and those who engaged in previous similar studies
Which previous similar studies they mean is unclear; looking through the report’s bibliography, I can’t find any previous research on clients in Ireland. This is a strange omission, in a report that everywhere else carefully references all the research it refers to.
To be clear, the report isn’t devalued by the use of non-probability sampling. Sometimes there isn’t any other way to study a particular group, and the information you get may still be useful even if it can’t be extrapolated to the group as a whole. For example, five Irish respondents said “a bar” was the location where they found the last person they paid for sex with; this is notable for indicating the need to study the poorly-researched phenomenon of bar-based prostitution, even if it can’t tell us what percentage of the industry that sector comprises. But in a report aimed at a non-academic audience, it’s important to make these limitations clear, and I don’t think this report does so adequately. Public pronouncements by the NGOs behind it certainly have not – like this article from the Immigrant Council, which repeatedly equates “clients who completed this survey” with “men who pay for sex”.
The report also examines the meaning of “demand” in the relevant international law instruments, which require member states to reduce the demand that fuels human trafficking. The purpose of this chapter is to argue that “demand” in this context should be interpreted as demand for paid sex rather than demand for paid sex from a trafficked person. Obviously I disagree with them on this point: the current interpretation is in line with the requirement to reduce demand in non-sex sectors, and this is how it should be. Nobody suggests we need to reduce demand for agricultural workers just because some of them meet the indicators of trafficking.
Beyond that, though, I think there is much to criticise in the way the “demand” argument is made. Exploring the understanding of that term in academic research, the report relies heavily on the work of Bridget Anderson and Julia O’Connell Davidson, which is absolutely essential reading. Unfortunately, it elides one of their central arguments: that sex work and trafficking are not purely demand-led, and that supply itself may create the demand. Here’s a direct quote from the Anderson and O’Connell Davidson article setting out this position, which is entirely contrary to the impression of it given by the report:
“There is certainly demand for cheap and vulnerable sex workers, but it is by no means clear that this kind of demand acts as a stimulus for trafficking. It could equally be that a supply of cheap workers stimulates demand.”
There are a number of other, similar sleights-of-hand in this study. It cites a 2013 report by a Council of Europe anti-trafficking body, GRETA, in a manner that would lead the reader to believe – wrongly – that GRETA endorsed the Oireachtas Justice Committee’s anti-sex work proposals. It mentions that the Finnish Ministry for Justice recommended criminalising payment for sex, but fails to mention that the Finnish Government rejected that recommendation (though in fairness, that was a very recent decision). And it acknowledges that the failure to recruit more Irish clients may have had something to do with the
ongoing, very public discussion on the future of prostitution legislation in Ireland
but conveniently omits the fact that there is an ongoing, very aggressive campaign to make the research subjects into criminals – a campaign led by the authors of the study themselves. It seems to me that the interests of full disclosure required at least some mention of this.
But the study’s biggest flaw is the way it deals with the question of potentially trafficked or exploited sex workers. The online survey, which is reproduced in full in an Appendix, asks the question:
16. Have you ever changed your mind and walked away because the person seemed:
and a list of options follows, including “scared”, “controlled”, “unwilling”, “unhappy” and “too young”. “Trafficked” is not one of the options, but we are told in the Appendix that the options were chosen because they are
physical manifestations of exploitation [and] indicators of trafficking
In other words, a client who admits to walking away from an appointment because the escort seemed “unhappy” is assumed to have walked away because he believed she was trafficked! Quite plainly, this is absurd.
But it gets worse, because in the main body of the report, the question itself is completely rephrased to reach the finding the authors want to reach. Instead of reporting Question 16 as it’s actually worded, it reports it as if a significantly different question had been asked:
Around one-quarter of Irish buyers said they had encountered sellers they believed were being exploited.
This leaves no room for doubt: a client who might have ticked the box for “unhappy” because he’d walked out of an appointment with an independent escort who was in a bad mood would now be recorded as having encountered a sex worker who he believed was exploited or trafficked. This is not a conclusion that follows logically from the research question. It is a gigantic leap that undermines whatever credibility this survey might otherwise have had.
Next, the survey asks:
17. Have you ever considered reporting your suspicions that someone was being trafficked or controlled?
The only options given are “No” or “Yes”. There is no “Not applicable”. This is a classic “Have you stopped beating your wife?” type of question: there is no way to answer it without allowing an unpleasant conclusion to be drawn. Though it was possible to skip the question entirely, and about a third of respondents did, it’s not clear whether they were explicitly told they could; so we can’t rule out the possibility that some who would have selected “n/a” picked the next best option instead.
If the survey was designed so that Question 17 only popped up once Question 16 was answered affirmatively, this wouldn’t be a problem. But there’s no indication that it was. The sequential numbering (rather than as, say, Q.16 and Q.16a) suggests that it wasn’t. The text of the report also suggests that it wasn’t, and that Q.17 was asked of all respondents:
Buyers were also asked whether they had ever reported suspicions that someone was being exploited or controlled.
This is where it becomes really important to distinguish the actual findings from the spin. In the Immigrant Council article linked to above, their spokesperson writes:
“As well as profiling buyers the Immigrant Council of Ireland examined if the men ever came across women they believed were being controlled by pimps, were frightened or were trafficked. The results are startling, with over one in four admitting they had come across women and girls they believed were in such situations. A significantly lesser number considered to report this to the authorities, dispelling the myth that buyers are helpful is [sic] tackling human trafficking.”
“A significantly lesser number”? The report found that around a quarter of respondents had ticked one of those so-called trafficking indicator boxes. In a sample of 58, that’s 14.5. The article above says “over one in four”, so we’ll round up to 15. It also found that 21% of respondents had considered reporting such a situation to the authorities. In the same sample, that’s 12. The difference between 15 and 12 in a sample of 58 may or may not be statistically significant (I’ll let someone else do the math), but it is hardly significant in layperson’s terms. The Immigrant Council’s use of that word in that article seems designed to mislead. And of course, when you consider the rephrasing of Q.16 (so that some of those 15 who walked away may not have done so because they thought the sex worker was being exploited), the difference could actually be even smaller.
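Since I invited someone else to do the math: here is one rough way to do it, a two-proportion z-test on the counts inferred above (15 and 12 “yes” answers out of 58). This is a sketch under my own reading of the figures, not anything from the report:

```python
import math

# Two-proportion z-test: did significantly fewer respondents consider
# reporting (12/58) than ticked an "indicator" box (15/58)?
n = 58
x1, x2 = 15, 12
p1, p2 = x1 / n, x2 / n

pooled = (x1 + x2) / (2 * n)                      # pooled proportion
se = math.sqrt(pooled * (1 - pooled) * (2 / n))   # standard error of the difference
z = (p1 - p2) / se

# Two-sided p-value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), round(p_value, 2))  # p-value nowhere near any conventional threshold
```

On these numbers the difference is nowhere close to statistical significance, which rather reinforces the point: calling it “significantly lesser” is spin, not statistics.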
It is shameful how readily the Irish media allow themselves to be used as a vehicle for what can only be described as propaganda masquerading as research.
Another part of the survey that has drawn attention is a question asking clients what would deter them from paying for sex. Interestingly (though again bearing in mind the non-representative nature of the sample), “a bad experience or a disease” ranks first. Criminal penalties and the publication of their photo are also ranked highly. Predictably, this is treated as “evidence” that these measures would be successful in ending demand.
The problem with questions like this is that the answers are necessarily speculative, and human beings do not always behave as they expect themselves to. How people say they would react to the abstract hypothetical possibility of something happening, and how they actually do react when that something finally occurs, may not line up as neatly as the authors want us to think they will. The report fails to consider the phenomenon that criminologists call “initial deterrence decay”, whereby the effectiveness of a measure drops significantly after first appearing successful, as those who were originally deterred by it learn not to fear the penalties or find ways to get around them.
There are also some issues of concern with how the study was conducted. We are assured:
At all times, the research teams were aware of the ethical sensitivity of the issues being looked at.
However, there is no indication that any institutional ethical approval was sought or given. We are told that “training” and “guidelines” were given for the face-to-face interviews in two of the countries and for the handling of research data, but it is not clear whether full disclosure was made to any of the respondents about the nature and purpose of the study – a key ethical consideration when working with human research subjects.
A few other things struck me while reading the report, but I’ll leave it at this for now. One final comment: since the report’s real purpose is to advocate for the Swedish model of criminalising sex workers’ clients, it would be interesting to see a similar study carried out in Sweden. Presumably, if the authors accept these findings as authentic, they would have to do the same for an equivalent study of Swedish clients. I suspect the answers might surprise them.