- promoting a clearer understanding of men's experience -


MENZ Issues: news and discussion about New Zealand men, fathers, family law, divorce, courts, protests, gender politics, and male health.

Sun 2nd November 2014

Quality of the Decisions made in Preparing the Domestic Violence Act

Filed under: Domestic Violence,Gender Politics,Law & Courts,Sex Abuse / CYF — MurrayBacon @ 8:54 am

It is important to look back and see the quality of the decisions made in preparing the Domestic Violence Act.
Those who don’t know history are forever doomed to keep repeating it…
(Think Big, the DV Act, the Building Act: these alone total over $100 billion of wasted opportunity and social self-harm.)

The act was prepared from off-the-cuff suggestions made by Sir Ron Davison after he had reported on the Bristol murder-suicide. His report was based on reading a single familycaught$ file, without looking at the relevant medical records, without taking any advice from medical people about the mental health issues involved, and without taking any advice from people with criminology or sociology training. He accepted the familycaught$ file as gospel, without any checking; that is standard legal practice, but it is certainly not sociological research practice, and it is quite against common sense.

The largest single lesson is that legal practice does not necessarily confer criminological skills. In fact, where legal workers believe themselves to be skilled criminologists, just without the training, they are socially very dangerous. At no point was manipulation of the familycaught$ considered as a possibility. Such an omission seems naive and unprofessional in an experienced legal worker. The flow-on effects onto all parties, and in particular onto children, were given no thought at all.

Careful reading of the prior research shows that the NZ Domestic Violence Act was passed quite against the lessons provided by the police arrest studies.


From Wikipedia: Minneapolis Domestic Violence Experiment

From USA Police Foundation:
Minneapolis Domestic Violence Experiment By Lawrence W. Sherman and Richard A. Berk

The following article is not easy to read as plain text below. It can be read with large-print headings in the 2008 Submission; see pages 16 to 23.

International Research 1974 to 1995

Research to 1992 showed a small positive effect of mandatory arrest in reducing total violence, slightly larger than the effect of counselling.

Closer examination of these studies by 1995 showed that whilst there was a positive total effect on average, mandatory arrest actually tended to increase violence in a small but significant fraction of cases. These tended to be the cases involving the most serious violence, and thus the cases where serious injury or death were most likely to occur. This underlined that unthinking mandatory responses tended to exacerbate problems and lead to worse violence. (Mandatory outcomes also increase the potential for complainants to use the DV system to abuse their ex-partner, by subjecting them to sanctions that have not been judged by a competent process to be relevant and appropriate. This is a classic breach of natural justice.)
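The point that a positive average effect can coexist with harm in a high-severity minority is easy to illustrate with arithmetic. The subgroup shares and effect sizes below are invented for illustration only; they are not figures from the arrest studies:

```python
# Illustrative only: hypothetical numbers showing how a mandatory policy can
# reduce violence on average while increasing it in a high-severity minority.
majority_share, majority_change = 0.90, -0.10   # 90% of cases: violence falls 10%
minority_share, minority_change = 0.10, +0.40   # 10% of cases: violence rises 40%

# Weighted average across the two subgroups.
average_change = (majority_share * majority_change
                  + minority_share * minority_change)

print(f"average change: {average_change:+.0%}")  # -5%: a net fall overall
```

The headline average looks like an improvement, yet the cases made worse are exactly the high-severity ones, which is the pattern the 1995 re-examination identified.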

This research underlined the importance of appropriate responses to incidents, based on carefully weighing all of the evidence and wisely choosing the appropriate response.

The following extract shows that arrest after domestic violence results in a small reduction in recidivism. (This is not saying that automatic prosecution and conviction without fair trial reduces recidivism.) However, arrest has a somewhat greater positive effect for perpetrators who are working, and a significant negative effect, i.e. increased violence, for perpetrators who are unemployed. As there is some correlation between unemployment and predisposition to violence, the unemployed are typically the very people whom we most need to persuade to use less violence.

A more recent perspective, arguing for appropriate and wise responses, is given by Sotirios Sarantakos in “DV Policies: where did we go wrong” and “Male DV Victims”. These two papers are given in the Appendix to this submission.

Thus a wider conclusion should be that the response to the incident should be appropriate, proportionate and not an over-reaction.

From the book: Domestic Violence Program Evaluation
Chapter 04 What Are the Lessons of the Police Arrest Studies?
Joel H. Garner and Christopher D. Maxwell

Below is a short extract from Chapter 4. (The complete Chapter 4 is included in the Appendices.)

In reviewing what is known about the effectiveness of treatment or prevention programs in the area of domestic violence, the National Academy of Sciences (Chalk & King, 1998) surveyed over 2,000 studies published between 1980 and 1996. Of these studies, the Academy identified only 114 that (1) involved an intervention designed to treat some aspect of child maltreatment, domestic violence or elder abuse, (2) used an experimental or quasi-experimental design, and (3) measured and used violence as an outcome measure. Among the roughly six percent of the published studies of sufficient methodological value to warrant consideration by the Academy were seven studies that tested the deterrent effectiveness of the police making an arrest (or
issuing an arrest warrant) for misdemeanor assaults against a spouse or intimate partner. These are the “police arrest studies” reviewed in this paper.

The first of these seven studies, the Minneapolis domestic violence experiment (Sherman & Berk, 1984a), is among the most visible (Sherman & Cohn, 1989) and highly cited research articles in criminology (Cohn & Farrington, 1996). That experiment found that when suspects in misdemeanor spouse assault incidents were not arrested, the prevalence of officially recorded re-offending within six months was 21%; this rate was 50% higher than the 14% re-offending rate of similarly situated suspects who were arrested.
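The arithmetic behind the "50% higher" headline can be checked directly from the two prevalence figures quoted above:

```python
# Re-offending prevalence reported for the Minneapolis experiment:
# 21% for suspects who were not arrested, 14% for those arrested.
not_arrested = 0.21
arrested = 0.14

# Relative excess: how much higher the non-arrest rate is than the arrest rate.
relative_excess = (not_arrested - arrested) / arrested

print(f"{relative_excess:.0%}")  # 0.07 / 0.14 = 50% higher
```

Note that the same seven-point absolute difference can be framed as "50% higher" or "a 7 percentage point reduction"; the choice of framing matters in how the result is reported.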

In 1974, Lipton, Martinson, and Wilks (1975) reviewed the published research on the effectiveness of rehabilitative treatments and concluded that “nothing worked.” Their review was limited to treatments implemented in a correctional setting and did not include law enforcement programs like police family crisis interventions but, as a result of their very negative assessment, the ideological underpinnings for all treatment programs were shattered.

In 1979, a panel of the National Academy of Sciences (Sechrest, White, & Brown, 1979) concurred with Martinson’s substantive assessment and added detailed critiques of the methodological weakness of much of the published research on rehabilitation. The Academy’s methodological critiques asserted that much of the prior criminological research had used unstandardized measures of recidivism, rarely had even roughly equivalent treatment and control groups, did not control for different times at risk, and failed to measure the delivery of treatment and control conditions.

In another highly controversial arena, Isaac Ehrlich’s econometric assessment supporting the deterrent effects of criminal sanctions was included in the U.S. Department of Justice’s amicus curiae brief supporting the constitutionality of the death penalty (Bork, 1974). The resulting substantive and methodological disputes over the value of criminal justice sanctions as an effective crime control strategy were addressed in a separate report by the National Academy of Sciences (Blumstein, Cohen, & Nagin, 1978). Among other issues, this Academy’s deterrence report emphasized the value of experimental designs as a means to assess the impact of changes in levels of criminal sanctions (Zimring, 1978).

These highly visible public debates over the relative effectiveness of rehabilitation and of deterrence, and the Academy’s repeated critiques of the methodological weaknesses of prior research provided support for the use of stronger research designs in Federally supported research at the National Institute of Justice.

In 1980, the new Director of Research at the Police Foundation, Lawrence W. Sherman, submitted a proposal to the Crime Control Theory Program that called for a rigorous test of deterrence theory; the idea was to use an experimental design to assess the deterrent effect of arrest on the crime of spouse assault. The rest is history.

The Minneapolis Domestic Violence Experiment 1984
The basic history of the Minneapolis Domestic Violence Experiment is an often-told story. The Minneapolis police department agreed to implement an experimental design in which one of three alternative responses to incidents of misdemeanor domestic violence (arrest, separation, or counseling) would be determined on an equal probability basis. Sherman and his colleagues collected and analyzed data from the experimental incidents, from official police records of the subsequent criminal behavior of the suspects, and from interviews with victims. The findings of this study were reported in a Police Foundation Report (Sherman & Berk, 1984a), in the New York Times Science Section (Boffey, 1983), in many electronic and print media (Sherman & Cohn, 1989) and in several peer-reviewed scientific journals (Berk & Sherman, 1988; Sherman & Berk, 1984b).
Much has been made of the methodological rigor of the Minneapolis design but two other comparisons with the prior research on police family crisis intervention programs are, we think, instructive. First, Sherman and Berk’s study made victim safety, not police officer safety, the sole measure of success for alternative police responses to domestic violence. Following the Minneapolis experiment, victim safety is certainly the paramount and perhaps the only criterion for assessing the effectiveness of alternative police responses to domestic violence. Second, both reforms were based on research, were supported by NIJ, generated widely distributed reports, and received favourable media coverage.
Gartin (1991, p. 253) reports that, despite considerable missing data problems, the “analyses reported by Sherman and Berk (1984a) are reproducible” but that the weight of the evidence “seems to indicate that there was not as much of a specific deterrent effect for arrest” as the results from the original reports seemed to suggest.
The Minneapolis experiment is not above criticism. However, the rarely noted but actual exclusion of more than 5% of the experimental cases could as easily have compromised the rigor of this experiment as the often-noted speculation that officers who volunteered to conduct the research and helped design its protocols might have imperfectly implemented the random assignment. There is another lesson from the Minneapolis experiment. An earlier reanalysis of the Minneapolis data may have provided more reasonable expectations about how effective arrest alone would be as a treatment for reducing domestic violence. Such a reanalysis, however, requires the kind of hard work and scholarship that few commentators seem prepared to contribute, prior to publishing critical assessments of other people’s scientific products.

The Decision to Replicate
The importance of the Minneapolis experiment stems from its test of theory, its rigorous experimental design, its visibility in the popular press, its apparent impact on policy and the fact that it was replicated. Support for replication was widespread. The original authors urged replication (Sherman & Berk, 1984b). Early praise for the study’s design among criminological scholars was tempered by a preference for replication (Boffey, 1983; Lempert, 1984).
The decision to replicate the Minneapolis experiment turned out to be easier than the decisions on how to replicate. What aspects of the Minneapolis study should be copied and what aspects should be changed? How many new sites should be implemented and how would NIJ select the departments and the researchers to implement the replications in those sites? Perhaps the most important question was, would any police department other than Minneapolis agree to randomly assigning treatments to suspects? At the time, there were few scientific or administrative examples to guide this process.
The ultimate resolution of these issues was the initiation of six new experiments, one that began in 1985 (Omaha) and five additional sites initiated in 1986. NIJ required that each replication involve experimental comparisons of alternative police responses to misdemeanor spouse assault incidents and measure victim safety using both official police records and victim interviews (NIJ, 1985). Other aspects of the design were left to the preferences of the local teams of researchers and implementing police agencies. Seventeen law enforcement agencies competed to be part of the replication program even though this program, unlike the NIJ Police Family Crisis Intervention programs of a decade earlier, did not provide additional financial resources to the department or to participating officers. The replication effort was a research program, not a demonstration program, and there were no Federal subsidies to the participating departments.
The main lesson of the events from 1983, when the Minneapolis results were initially released, to 1986 is that it was actually possible to replicate the design of the Minneapolis experiment but that this effort was neither instantaneous nor easy. In fact, the program’s design imposed a number of administrative burdens on the participating departments and none of the police arrest studies would have been possible without the willingness of law enforcement agencies throughout the country to participate in rigorous research examining their own behavior on an issue of considerable public controversy. Like Minneapolis, these departments had risen to Wilson’s challenge to gather systematic and empirical evidence of the consequences of their actions on the victims of domestic violence.

The Omaha Experiments 1990
There were two police arrest experiments implemented in Omaha, Nebraska between 1986 and 1989. One of these experiments (Dunford, Huizinga, & Elliot, 1990) closely copied the design of the Minneapolis Experiment: it involved the random assignment of arrest, separation and counseling in misdemeanor domestic violence incidents. The second experiment (Dunford, 1990), implemented simultaneously with the first, involved the random assignment of an arrest warrant in misdemeanor domestic violence incidents when the offender was not present when the police arrived. The Omaha studies found (and later studies confirmed) that when probable cause existed to make an arrest, the offender was absent more than 40% of the time. The first, and perhaps most important, lesson of the Omaha experiments is that police practices can be no better than 60% effective if they are limited to treating offenders who wait for the police to arrive. Using a variety of measures, Dunford (1990) found that warrants were consistently associated with less re-offending and that in several but not all of their measures, these comparisons exceeded the traditional tests of statistical significance. Based on the partial support from the statistical tests and the consistent direction of the effects of using warrants, Dunford (1990) suggested that the use of warrants deserved further investigation.
The substantive conclusions of the Omaha offender-present experiment did not confirm the original Minneapolis findings published by Sherman and Berk (1984a). In the Omaha offender-present experiment, Dunford and his colleagues reported that arrested offenders were more likely to re-offend based on official police records and less likely to re-offend based on victim interviews. Neither of the Omaha results, however, were sufficiently large to be statistically significant and Dunford et al. (1990), concluded that arrest “neither helped nor hurt victims in terms of subsequent conflict” (p. 204).
What lessons are to be drawn from the Minneapolis and Omaha results? The results are different but the experiments, while similar, were not conducted using the same measures or methods. For instance, in the victim interviews in Minneapolis, both violent acts and threats of violence were counted as failures and half of the re-offending instances involved threats only. In Omaha, only actual violence with injury to the victim was included in the measure of re-offending. Despite the more restrictive definition of new violence in the Omaha study, the proportion of victims that reported new violence in Omaha was over 40%; in the Minneapolis study the level of new violence reported in victim interviews was about 26%. In Omaha, Dunford and his colleagues compared treatments as randomly assigned and did not use statistical corrections for the misapplication of treatments. There are numerous other methodological differences between the two studies and it is difficult, if not impossible, from these two published works to determine whether the nature of police responses to domestic violence was different in Minneapolis and Omaha or whether some or all of the methodological differences generated the diverse results.
The publication of diverse findings is a common practice in social research but it can be disconcerting to policy makers who are trying to inform, if not base, policy on research findings. While there are methodological improvements in the Omaha offender-present study (notably researcher rather than police officer control of randomization, and a much higher proportion of victims interviewed), both studies approach the standards for research advocated by the National Academy of Sciences. A major lesson of the Minneapolis and Omaha studies is that rarely will one social experiment, no matter how well designed and implemented, tell us very much and a second experiment, even one designed as a replication, does not add that much more knowledge. This would be true if the Omaha results were exactly the same as the Minneapolis results, but the disparate results emphasize the weakness of a scientific literature or a public policy based on one or two studies. In its wisdom, the management of NIJ had foreseen the limitations of just two police arrest studies and had found the funds and the will to initiate six replications.
The Omaha experiments reported on the prevalence of re-offending, the frequency of re-offending and the time to first new offense. The original publications on the Minneapolis experiment (Sherman & Berk, 1984a, 1984b) had reported only on the prevalence of re-offending. A 1986 National Academy of Sciences report (Blumstein, Cohen, Roth, & Visher, 1986) had encouraged the use of these alternative dimensions of criminal careers and victimization and the Omaha and other police arrest studies adopted the use of these alternative measures. In addition, Berk and Sherman (1988) reanalyzed the Minneapolis data using a survival model and continued to find statistically significant deterrent effects. Dunford and his colleagues reported that in both official records and in victim interviews some victims reported multiple new offenses and that the total number of new offenses was higher for arrested suspects than for suspects not arrested. Neither of these effects was statistically significant. In their analysis of the time to first failure, they found effects in the direction of deterrence in the victim interviews but in the other direction in the official records; neither finding was statistically significant. The lesson here is that arrest could decrease the proportion of suspects with new offenses but increase the total number of new offenses against a smaller number of victims.
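The distinction between prevalence (how many suspects re-offend) and frequency (how many new offenses occur in total) can be made concrete with invented numbers; the figures below are illustrative only, not drawn from the studies:

```python
# Hypothetical: arrest lowers the share of suspects who re-offend (prevalence)
# yet raises the total number of new offenses (frequency), because the
# remaining re-offenders offend more often. Per 100 suspects in each group:

# No-arrest group: 21 re-offenders, averaging 2 new offenses each.
no_arrest_reoffenders, no_arrest_rate = 21, 2.0
# Arrest group: 14 re-offenders, averaging 4 new offenses each.
arrest_reoffenders, arrest_rate = 14, 4.0

no_arrest_total = no_arrest_reoffenders * no_arrest_rate   # 42 offenses
arrest_total = arrest_reoffenders * arrest_rate            # 56 offenses

print(arrest_reoffenders < no_arrest_reoffenders)  # True: prevalence falls
print(arrest_total > no_arrest_total)              # True: total offenses rise
```

Which of the two measures matters more is a policy judgment, which is exactly why the choice of outcome measure in these studies is so consequential.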
The use of alternative measures and data sources means that there are not just one or two but many effects from each of the police arrest studies and a serious evaluation of the effectiveness of arrest requires a clear specification of which effects are important and which are not.
Unfortunately, our theories of deterrence and our understanding of how arrest and other treatments might improve the safety of women are not sufficiently well developed to specify exactly which measure or methods are the best tests of effectiveness. This is not simply a methodological issue but a central concern for individuals concerned with policy and for individuals concerned with testing theory. For the purposes of this paper, we have generally limited our discussion to the prevalence of re-offending but our choice is based on the need for parsimony and does not reflect theoretical or policy preference.

The Charlotte Experiment 1992
The Charlotte experiment (Hirschel & Hutchison, 1992; Hirschel, Hutchison, & Dean, 1992) followed the Minneapolis and Omaha models of testing three police actions (arrest, separation and counseling) and used official records and victim interviews to assess re-offending among randomly assigned treatments. Omaha and Minneapolis, however, were mid-sized Midwestern cities with relatively low crime and low unemployment. The racial composition of the Minneapolis sample was predominantly White (57%) or Native American (18%). In Omaha, the sample was about 50% White and 50% African-American. Charlotte is a southern city with relatively high crime, high unemployment and the experiment there had a relatively large (70%) minority population. The evidence from Minneapolis and Omaha may be inadequate to address the effectiveness of alternative police responses in this very different context.
The published results of the Charlotte experiment were similar to those obtained in Omaha: in the official records, arrest was associated with increased re-offending and in the victim interviews, arrest was associated with reduced re-offending. In Charlotte, as in Omaha, neither of these effects was statistically significant and Hirschel and his colleagues argued that their experiment provides “no evidence that arrest is a more effective deterrent to subsequent assault” (Hirschel et al., 1992, p. 29). There are, however, two possible interpretations of the results obtained in Charlotte and in Omaha. One interpretation is that there is, in fact, no difference between arrest and other treatments. The second interpretation is that the research designs used in these studies are not capable of detecting differences that do exist. Despite the experimental design, the Omaha study had only 330 experimental cases (and 242 interviews), so the Omaha design is unlikely to be able to detect effects as big as those found in the Minneapolis study. The 686 experimental cases (and 338 interviews) in the Charlotte study meant that the analysis of official records was powerful enough to detect the kinds of effects reported in the official records in Minneapolis but not the effects reported in the 338 victim interviews.
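The "weak design" interpretation is a statistical power question, and a rough calculation makes it concrete. The sketch below uses the normal approximation for a two-sided two-proportion test at the Minneapolis effect size (21% vs 14%); the per-group sample sizes are rough even splits of the 330 and 686 case totals, not the studies' actual cell counts:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p1, p2, n, alpha_z=1.959963984540054):
    """Approximate power of a two-sided two-sample z-test for proportions,
    with n subjects per group (normal approximation, 5% significance)."""
    p_bar = (p1 + p2) / 2
    se_null = math.sqrt(2 * p_bar * (1 - p_bar) / n)
    se_alt = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return norm_cdf((abs(p1 - p2) - alpha_z * se_null) / se_alt)

# Minneapolis-sized effect: 21% vs 14% re-offending. The per-group n values
# are illustrative splits of the Omaha and Charlotte totals.
for n in (110, 230):
    print(n, round(power_two_proportions(0.21, 0.14, n), 2))
```

Under these assumptions, both designs fall well short of the conventional 80% power benchmark, so a null result in Omaha or Charlotte is consistent with a real Minneapolis-sized effect going undetected.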
The results of the Minneapolis, Omaha and Charlotte studies agree on one point: there is no large or even medium sized deterrent effect for arrest. The Minneapolis results suggest that there is a small to medium sized effect; the Omaha and Charlotte studies did not find even small effects but their designs are generally not strong enough to detect modest or small effects (Cohen, 1988; Garner et al., 1995). The main lesson is this: three relatively small studies are not sufficient to answer the two central issues of this research: does arrest deter spouse assault, and, if it does, by how much?

The Milwaukee Experiment 1992
In Milwaukee, teams of researchers and police managers, in cooperation with local domestic violence service providers, designed and implemented an experiment that obtained 1,200 experimental cases and interviews with 921 victims (Sherman, 1992; Sherman et al., 1991; Sherman et al., 1992). The results of this experiment were consistent with the results found in Omaha and Charlotte: there was no statistically significant difference in the re-offending rates in official records and in victim interviews based on whether the suspect was arrested or not. In Milwaukee, arrested suspects had higher rates of re-offending in both the victim interviews and the official records. Because of the random assignment of treatments and the larger sample size, there is no confusion in the Milwaukee study between non-existent effects and weak designs. In fact, the statistical power of the Milwaukee study was sufficient to detect even small effects but no such effects were found.
The design of the Milwaukee experiment involved some innovative approaches to better understand the effectiveness of alternative police responses to domestic violence. First, in order to assess the underlying mechanism of how arrest might deter future violence, this experiment examined differences between on-scene arrest with a short period of incarceration and on-scene arrest with a longer period of incarceration. Using official police records and victim interviews, the study found no statistically significant differences between these two arrest treatments. Second, the Milwaukee study used a third measure of re-offending: records of police calls to the local shelter. Using this measure, the Milwaukee study found statistically significant results showing arrest associated with higher rates of re-offending (Sherman et al., 1991). While the uniqueness of this measure makes direct comparison of these results with the results from the other police arrest studies difficult, the evidence obtained from the shelter data clearly does not support the notion that arrest deters subsequent violence. Third, the Milwaukee design called for interviewing some of the arrested suspects immediately after they were arrested. While the nature of these interviews limits their utility, the idea of suspect interviews is important. In fact, deterrence theory (Maxwell, 1998; Zimring & Hawkins, 1971) posits changes in suspect behaviour but the design of the police arrest studies was to interview victims.

The Experiments in Metro-Dade 1992
The experiment in alternative police responses to domestic violence in Dade County (Pate et al., 1991) found statistically significant deterrent effects for arrest when re-offending is measured by victim interviews; the official records also showed arrest to be associated with decreased re-offending but the effect was not statistically significant. This was the first confirmation of the statistically significant effects observed in Minneapolis and increased the likelihood that there is a deterrent effect for arrest. With the addition of the Dade findings, we can observe that, using victim interviews, four of the five experiments had found effects in the direction of deterrence; in two of these experiments, the effects were statistically significant. Using official records, two of the five experiments had found effects in the direction of escalation and in only one experiment (Minneapolis) were these effects statistically significant. Minneapolis had established the importance of measuring the safety of victims; the emerging pattern suggests the importance of how victimization is measured, by victim interview or by police records.
There were two experiments implemented in Dade. The first was the replication of the Minneapolis experiment with just two treatments, arrest and no arrest. The second experiment used the same incidents as the first but randomly assigned half the cases to a program of follow-up services that was already in place in Dade County. This second experiment was larger and more rigorous than the Minneapolis, Omaha and Charlotte experiments and just as rigorous as the replication experiment in Dade County. Pate et al. (1991) report that there were no differences in the official records and in the victim interviews between those victims who had been given the follow-up police services treatment and those who had not. The statistical power of this experiment was sufficient to warrant the conclusion that these services did not protect the victims of domestic violence. The results of this second experiment were never published and have received no attention in the voluminous literature of alternative police responses to domestic violence. The study was not even mentioned in either of the recent National Academy of Sciences reports (Chalk & King, 1998; Crowell & Burgess, 1996), despite the fact that it meets all of the Academy’s criteria for research quality. Given the extensive interest in post arrest follow-up services for victims of domestic violence, continued inattention to the nature and results of the one true experiment on the limited ability of these services to actually help victims ignores the best available evidence and may put the safety and lives of women at unnecessary risk.

The Colorado Springs Experiment 1992
In the largest police arrest study ever conducted, the Colorado Springs Police Department (Berk, Campbell, Klap, & Western, 1992a; Black et al., 1991) randomly assigned 1,660 domestic violence incidents to four treatment groups (arrest, separation, on-scene counselling and post-incident counselling). The results of this experiment in many ways mirror the results reported in Dade County: a statistically significant deterrent effect existed when re-offending is defined using victim interviews, but the deterrent effect found in the official records was not statistically significant. The results of the Dade and Colorado Springs experiments breathed new life into the diverse findings from the police arrest studies but they did not resolve whether the weight of the available evidence favoured or opposed the deterrence argument.
The size of the Colorado Springs experiment strengthened its design but it also created numerous implementation problems for the Colorado Springs Police Department. The study’s design called for interviewing all of the victims shortly after the experimental incident and at about six months after the experimental incident. Had they accomplished those goals they would have completed 3,320 interviews. In addition, the Colorado Springs study attempted to interview three fourths of the victims by phone on a biweekly schedule for up to three months. Had they accomplished that goal they would have completed another 6,225 interviews for a total of 9,545 interviews. They actually interviewed 1,350 or 84% of the victims at least once and completed a total of 6,032 interviews. The extensive interviewing, however, raises another question: did the attention and surveillance involved in the interviewing process contribute to or detract from the safety of the victims? This issue is relevant to all of the police arrest studies where the assigned treatment was not just arrest but arrest with follow-up interviews; however, the interview-intensive study in Colorado Springs highlights the importance of this design feature. Ironically, prior to Maxwell (Maxwell, 1998), there were no published results based on the victim interviews from Colorado Springs.

The Atlanta Experiment
There was a seventh police arrest study initiated in the Atlanta Police Department but, as of 1999, this project has not produced a final report to NIJ or published any findings from this research and it is unlikely that it ever will. Given the conflicting findings from the other six experiments, the evidence from Atlanta could have contributed much to the issue of the effectiveness of arrest as a response to spouse assault. Implementation failures happen, but the fact that this project did not produce an accounting of why the study was not completed means that we learned next to nothing from this $750,000 investment. The failure of the Atlanta project, however, highlights the accomplishments of the other studies: despite innumerable obstacles, eight police arrest studies were competently and, in some aspects, expertly implemented in six jurisdictions.

Summarizing the Site Specific Results
The existence of diverse findings from the police arrest studies raises the central issue of this paper: how can the information in these studies best be understood? Since the publication of reports and articles on the design, implementation and findings of the six police arrest studies, several assessments of the meaning and lessons of these experiments have been produced. Four of these prior assessments warrant note.
A very different review and assessment of the police arrest studies was published in three companion articles (see: Berk, Campbell, Klap, & Western, 1992b; Pate & Hamilton, 1992; Sherman, Smith, Schmidt, & Rogan, 1992). These assessments analyzed the raw data from four (Omaha, Milwaukee, Colorado Springs and Dade County) of the six police arrest studies and found that arrest deterred employed suspects but did not deter unemployed suspects.
We argue that the effect of arrest was real but modest: reductions in subsequent aggression varied from 4% to 30%, depending upon the source of the data (official records or victim interviews) and the measure of re-offending (prevalence, frequency or time to failure) employed (Maxwell et al., forthcoming, 2000). We call these effects modest for several reasons. First, in three of the five tests, the effects did not reach statistical significance. Second, other effects were much larger than those for arrest; for instance, the suspect’s age and prior criminal history were associated with increases in re-offending of 50% to 330%. Third, regardless of site, outcome measures, or treatment delivered, most suspects did not re-offend. Consistent with other studies (Langan & Innes, 1986), the police arrest studies found consistent desistance from re-offending once the police had been called; our finding is that arrested suspects desisted at higher rates than suspects who were not arrested. Lastly, we determined that the effect of arrest was modest because, even among the arrested cases, a substantial proportion of victims (on the order of 30%) reported at least one new offense, and those who were re-victimized reported an annual average of more than five new incidents of aggression by their partner. However consistent the deterrent effect of arrest may be in our analysis, it is clearly not a panacea for the victims of domestic violence.

The police arrest studies command a unique place in criminology and in our understanding of alternative police responses to domestic violence. Beginning with the Minneapolis experiment, they changed the nature of public debate from the safety of police officers to the safety of victims and demonstrated how good research could contribute to the policies and practices of the police. These studies heralded the use of higher methodological standards for criminological research and continue to inform a central theoretical debate in criminology over the deterrent effects of legal sanctions.
These qualities are rare (to non-existent) in criminological research in general and in most investigations into the nature of domestic violence in particular. Few studies can match the methodological rigor, implementation fidelity, theoretical contribution or impact on policy of any of these studies; as a group they may be unsurpassed by any other multi-site collaborative effort in social research on crime and justice. Despite these qualities, it is unlikely that another police arrest study will ever be conducted. The policy debate on alternative police responses to domestic violence is no longer about alternatives to arrest but alternatives to what the police and other agencies should do after an arrest. Random assignment between arrest and other treatments was ethically appropriate only when policymakers agreed that they had insufficient evidence to choose among them. The police arrest studies took advantage of that unusual historical moment and experimented with the lives of over 10,000 victims and suspects (and their families). As a result, we now know far more about the nature of domestic violence and the ability of arrest to improve the safety of victims. Although the size of the deterrent effect of arrest is modest, the empirical and political support for arrest is unlikely to evaporate sufficiently to warrant new tests like the Minneapolis and replication experiments. There may be additional reviews of this research and even more reanalyses of its data, but this research program is finished collecting data and implementing experiments.
The police arrest studies were, to say the least, imperfect. Sites were selected based on the willingness of police agencies to participate, not as a representative sample. Victim interviews were preferred over suspect interviews. The measures of failure did not include psychological, employment, or quality-of-life indicators that may be relevant to an assessment of the overall effectiveness of arrest. The experiments did not standardize the delivery of treatments within or between sites and obtained few common measures of what the alternative police responses to domestic violence actually involved. Neither the official-records nor the victim-interview data collection was always systematic, complete or accurate. The data that were collected and archived do not permit the complete set of originally contemplated multi-site analyses and, of course, the findings and data from Atlanta were never published. Future research would do well to build on the strengths of the police arrest studies and, where possible, avoid their design and implementation limitations.

One Response to “Quality of the Decisions made in Preparing the Domestic Violence Act”

  1. Alastair says:

    Davison did not use the Court file alone; Christine Bristol (scarcely impartial) and Neville Robertson (a well-known female apologist from Waikato University) also made submissions.

    If that was not bias enough, Davison deviated from his terms of reference: the inquiry was to find out what happened, NOT to advocate a solution.

