Friday, November 11, 2016


By Richard Wexler, Executive Director, National Coalition for Child Protection Reform
November, 2016

A pdf version of this report is available here


Introduction


In the dystopian science fiction movie Minority Report, people are arrested and jailed based on the predictions of three psychics in a bathtub that at some point in the future they will commit a crime. The film was meant to be a cautionary tale.

But in the field of child welfare, there are many who seem to be responding to Minority Report the way Rick Santorum would respond if he read The Handmaid’s Tale – seeing it as a blueprint instead of a warning.

No, they are not proposing to rely on psychics in a bathtub.  Instead they’re proposing something even less reliable: using “predictive analytics” to decide when to tear apart a family and consign the children to the chaos of foster care.  Giant software firms claim they can use demographic information and other “data points” to create algorithms that predict who will commit a crime, or who will neglect their children.

"Big Data" told us she would be president. (Photo by Gage Skidmore)
It’s the same approach used so brilliantly by organizations such as FiveThirtyEight and The New York Times to predict the outcome of the 2016 Presidential Election.


But, as a Times analysis points out, there is reason for concern about predictive analytics that goes far beyond that one “yuuuge” failure. And those concerns should extend to child welfare.

● Predictive analytics already has gone terribly wrong in criminal justice, falsely flagging Black defendants as future criminals and underestimating risk if the defendant was white.

● In child welfare, a New Zealand experiment in predictive analytics touted as a great success wrongly predicted child abuse more than half the time.

● In Los Angeles County, another experiment was hailed as a huge success in spite of a “false positive” rate of more than 95 percent.  And that experiment was conducted by the private, for-profit software company that wants to sell its wares to the county.

None of this has curbed the enthusiasm of advocates who have made predictive analytics the latest fad in child welfare. The campaign is led largely, though not exclusively, by the field’s worst extremists – those who have been most fanatical about advocating a massive increase in the number of children torn from everyone they know and love and consigned to the chaos of foster care – and also by those most deeply “in denial” when it comes to the problem of racial bias in child welfare.

Some predictive analytics boosters don’t even know the meaning of the most basic data they cite to support their case.

Others – leading researchers in the field – have even argued that “prenatal risk assessments could be used to identify children at risk of maltreatment while still in the womb.” Though these researchers argue that such targeting should be used in order to provide help to the mothers, that’s not how child welfare works in the real world.

“Yes, it’s Big Brother,” said another predictive analytics enthusiast. “But we have to have our eyes open to the potential of this model.”

Predictive analytics is a fad that presents serious new dangers to children in impoverished families, especially children of color.  That’s because predictive analytics does not eliminate the racial and class biases that permeate child welfare; it magnifies them.  Predictive analytics amounts to data-nuking impoverished families. It is computerized racial profiling.

Indeed, when Child Trends, an organization that specializes in analyzing data on children’s issues, published its 2015 list of Top Five Myths About Child Maltreatment, #1 was: “We can predict which children will be maltreated based on risk factors.”

Child Trends explains:

Risk factors associated with child maltreatment include extreme poverty, family unemployment, caregiver substance abuse, lack of understanding of child development, and neighborhood violence. However, each of these only weakly predicts the likelihood of maltreatment.

For example, although maltreatment is more common among families living in poverty than among other families, the majority of parents with low incomes do not maltreat their children. When risk factors are present, protective factors can mitigate the likelihood of maltreatment. Such protective factors include parental social connections, knowledge of parenting and child development, concrete support in times of need, and children’s social-emotional competence.

Because maltreatment is so difficult to predict, prevention approaches that strengthen protective factors among at-risk families broadly—even if the risk is low—are likely to be most effective in reducing maltreatment.

The stakes


          As is always the case when advocates of finding new ways to interfere in the lives of impoverished families try to justify trampling civil liberties, they misrepresent the “balance of harms.”  They claim that investigating families based on the potential for what is known in Minority Report as “future crime” is a mere inconvenience – no harm done if we intervene and there’s no problem, they say.  But if they don’t intervene, something terrible may happen to a child.

          But a child abuse investigation is not a benign act.  It can undermine the fabric of family life, creating years of insecurity for a child and leaving severe emotional scars. The trauma is compounded if, as often happens, the investigation includes a strip-search of a child by a caseworker looking for bruises. If anyone else did that it would be, in itself, sexual abuse.

          And, of course, the trauma is vastly increased if the investigation is compounded by forcing the child into foster care.

When we think of child abuse, the first images that come to mind are of children brutally beaten, tortured and murdered.  But the typical cases that dominate the caseloads of child welfare workers are nothing like the horror stories. Far more common are cases in which family poverty has been confused with “neglect.” Other cases fall between the extremes.

So it’s no wonder that two massive studies involving more than 15,000 typical cases found that children left in their own homes fared better even than comparably-maltreated children placed in foster care.  A third, smaller study, using different methodology, reached the same conclusion.

· When a child is needlessly thrown into foster care, he loses not only mom and dad but often brothers, sisters, aunts, uncles, grandparents, teachers, friends and classmates.  He is cut loose from everyone loving and familiar.  For a young enough child it’s an experience akin to a kidnapping.  Other children feel they must have done something terribly wrong and now they are being punished.  The emotional trauma can last a lifetime. 

· That harm occurs even when the foster home is a good one.  The majority are.  But the rate of abuse in foster care is far higher than generally realized and far higher than in the general population.  Multiple studies have found abuse in one-quarter to one-third of foster homes.  The rate of abuse in group homes and institutions is even worse.

· But even that isn’t the worst of it.  The more that workers are overwhelmed with false allegations, trivial cases and children who don’t need to be in foster care, the less time they have to find children in real danger.  So they make even more mistakes in all directions. 

None of this means no child ever should be taken from her or his parents.  But foster care is an extremely toxic intervention that should be used sparingly and in small doses.  Predictive analytics almost guarantees a big increase in the dose - that’s why the biggest champions of predictive analytics often are also the biggest supporters of throwing more children into foster care.[1]

To understand why predictive analytics does so much harm, one need only look at what has happened in criminal justice – and what the early evidence is revealing in child welfare itself.

The bias already in the system


          As is noted above, the overwhelming majority of cases that come to the attention of child protective services workers are nothing like the images that come to mind when we hear the words “child abuse.”  On the contrary, typically, they involve neglect. And neglect often is a euphemism for poverty.

          What is “neglect”?  In Illinois, it's failure to provide "the proper or necessary support ... for a child's well-being." In Mississippi, it's when a child is "without proper care, custody, supervision, or support." In South Dakota and Colorado, it's when a child's "environment is injurious to his welfare." 

Three studies have found that 30 percent of foster children could be home right now if their parents just had adequate housing
          With definitions that broad, neglect can include very serious harm. Deliberately starving a child is “neglect.” But so is running out of food stamps at the end of the month. Locking a child in a closet for weeks at a time is neglect.  But so is leaving a child home alone because the babysitter didn’t show and, if the parent misses work, she’ll be fired.  Three studies have found that 30 percent of America’s foster children could be home with their own parents right now, if those parents just had adequate housing.

          The biggest single problem in American child welfare is the confusion of poverty with neglect.  For reasons discussed below, predictive analytics worsens the confusion.

          The class bias is compounded by racial bias.  Obviously, a system that confuses poverty with neglect will do disproportionate harm to children of color, since they are more likely to be poor. But study after study has found racial bias over and above the class bias.

          This should come as no surprise.  After all, racial bias permeates every other facet of American life.  What makes child welfare unusual is the extent to which the field is, to use one of its own favorite terms, “in denial” about this basic truth.  And, as is discussed in more detail below, that’s one reason why using predictive analytics in child welfare is likely to be even more harmful, and even more prone to abuse, than its use in criminal justice.

          Predictive analytics does nothing to counteract these biases.  On the contrary, predictive analytics makes these biases worse.

The criminal justice experience


Eric Holder warned of the dangers of predictive analytics
A 2016 story from the nonprofit in-depth journalism site ProPublica quotes a warning about predictive analytics issued in 2014 by then-Attorney General Eric Holder to the U.S. Sentencing Commission.

Although these measures were created with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice. They may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.

ProPublica found that Holder was right.
ProPublica looked at 7,000 cases in Broward County, Fla., which uses a secret algorithm created by a for-profit company to assign risk scores to people arrested in that county, much as Los Angeles County plans to use a secret algorithm from a for-profit company to apply predictive analytics to its child abuse investigations.
According to the story, when it came to predicting violent crime, the algorithm did a lousy job in general – four times out of five, people the algorithm said would commit a violent crime within two years did not.
In addition, according to the story:
The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.  White defendants were mislabeled as low risk more often than black defendants.

The company that came up with the algorithm disputes the findings, saying its own analysis of the data found no racial disparities.
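Checks of this kind do not require access to the vendor’s secret formula. Given a file of risk scores and outcomes, the basic audit ProPublica performed – comparing false positive rates across racial groups – comes down to a few lines of code. The sketch below is only an illustration: the file name and column names are hypothetical, not ProPublica’s actual data.

```python
# A minimal sketch of a group-by-group false positive audit.
# The file name and column names are hypothetical, not ProPublica's schema.
import csv
from collections import defaultdict

def false_positive_rates(rows):
    """For each group, the share of people who did NOT reoffend
    but were nevertheless flagged as high risk."""
    did_not_reoffend = defaultdict(int)
    flagged_anyway = defaultdict(int)
    for row in rows:
        if row["reoffended"] == "0":              # no new offense in the follow-up period
            did_not_reoffend[row["group"]] += 1
            if row["flagged_high_risk"] == "1":   # but the algorithm called them high risk
                flagged_anyway[row["group"]] += 1
    return {g: flagged_anyway[g] / did_not_reoffend[g] for g in did_not_reoffend}

with open("risk_scores.csv", newline="") as f:    # hypothetical data file
    rates = false_positive_rates(csv.DictReader(f))

for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.1%} of people who never reoffended were labeled high risk")
```

If those rates differ sharply by race, the tool is biased in exactly the way ProPublica described – whatever the company says about the formula itself.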

Poverty is equated with risk


Since the algorithm itself is secret, we can’t be sure why the results came out racially biased.
But Prof. Sonja Starr of the University of Michigan Law School has written that the factors used to create these sorts of algorithms typically include “unemployment, marital status, age, education, finances, neighborhood, and family background, including family members’ criminal history.”

Or as Prof. Starr put it to Bloomberg Technology: “Every mark of poverty serves as a risk factor.”

Similarly, the algorithm Los Angeles plans to use for child abuse investigations includes risk factors such as whether the child has been taken often to an emergency room or whether the child often changes schools, both factors closely correlated with poverty.  Perhaps that helps explain why, when the Los Angeles model predicted a catastrophe in a family, the prediction was wrong 95 percent of the time.
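To see how poverty-linked “risk factors” translate into high scores, consider a toy version of such a scoring formula. Everything in this sketch – the features, the weights, the two hypothetical families – is invented; the actual Los Angeles algorithm is secret. But the mechanics are the point: when the inputs are proxies for poverty, the output is largely a measure of poverty.

```python
# Illustrative only: a toy linear risk score built from the kinds of proxies
# described above. The features, weights, and families are all invented.
WEIGHTS = {
    "emergency_room_visits": 0.8,   # a proxy closely correlated with poverty
    "school_changes": 0.6,          # likewise correlated with unstable housing
    "prior_reports": 1.5,
}

def risk_score(family):
    """Weighted sum of whatever 'risk factors' a family racks up."""
    return sum(WEIGHTS[feature] * family.get(feature, 0) for feature in WEIGHTS)

# Two hypothetical families with equally safe children but very different incomes:
stable_income = {"emergency_room_visits": 1, "school_changes": 0, "prior_reports": 0}
deep_poverty  = {"emergency_room_visits": 4, "school_changes": 3, "prior_reports": 1}

print("family with a stable income:", risk_score(stable_income))
print("family in deep poverty:     ", risk_score(deep_poverty))
# The poorer family scores roughly eight times higher, not because its children
# are less safe but because every input doubles as a measure of poverty.
```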

There is a similar problem when it comes to the use of “criminal history.”

As The Marshall Project and the website FiveThirtyEight explain:

Heavy policing in some neighborhoods … makes low-income and nonwhite residents more likely to be arrested, whether or not they’ve committed more or worse crimes. … Even using convictions is potentially problematic; blacks are more likely than whites to be convicted of marijuana possession, for example, even though they use the drug at rates equivalent to whites.

The same, of course, is true when it comes to “reports” alleging child abuse – some communities are much more heavily “policed” by child protective services.

So just as predictive analytics in criminal justice puts black defendants at greater risk of prolonged sentences, predictive analytics in child welfare puts black children at greater risk of being sentenced to needless foster care – with all of the attendant harms noted earlier in terms of abuse in foster care itself and other rotten outcomes.

Predictive analytics as computerized racial profiling



The parallels to child welfare don’t end there.
● In criminal justice, the use of predictive analytics is far outrunning objective evaluation. ProPublica found that evaluations were rare and often done by the people who developed the software. ProPublica had to do its own test for racial bias because, it seems, no one else had bothered. Similarly, Los Angeles is moving ahead with predictive analytics in child welfare based on tests and evaluations run by the software company that wants to sell its product to the county.

● Predictive analytics originally was sold in criminal justice as a benevolent intervention – meant to help agencies custom-tailor rehabilitation and supportive services to the needs of high-risk defendants and reduce incarceration. But it’s quickly metastasizing into use at all stages of the criminal justice process, including, most ominously, sentencing.  Child welfare will be even less likely to keep the use, and abuse, of predictive analytics under control.

That’s partly because at least in criminal justice, there is a vibrant community of progressives and civil libertarians on guard against abuse. But too often, in child welfare, if you want to get a liberal to renounce everything he claims to believe in about civil liberties and due process, just whisper the words “child abuse” in his ear.
This can be seen most clearly through another comparison to criminal justice.

Predictive analytics: The stop-and-frisk of child welfare


In 2016, The Daily Show did a superb analysis of “stop-and-frisk” – the policing tactic pioneered in New York City under former Mayor Rudy Giuliani and struck down by a judge who branded it “indirect racial profiling.”

In the clip, available here and embedded below, Trevor Noah goes through the problems with stop-and-frisk one after the other:
● The rate of false positives – innocent people stopped and frisked – is staggering.

● Though the name suggests a gentle, benign process, the reality is a deeply frightening, humiliating experience for those who must undergo it.

● It is racially biased.

● Defenders say it’s not biased; it’s based on applying a series of risk factors said to be associated with criminal behavior.

● It backfires by sowing so much fear and distrust in poor communities of color that it undermines law enforcement and compromises safety.

But backers of stop-and-frisk – overwhelmingly white and middle class – say they know better than people who actually live in communities of color. Former House Speaker Newt Gingrich put it this way:
Too many liberals start to sound like Newt Gingrich when you whisper the words "child abuse" in their ears. (Photo by Gage Skidmore)

You run into liberals who would rather see people killed than have the kind of aggressive policing … And a lot of the people whose lives were saved because of policing in neighborhoods that needed it the most, were minority Americans.

But what else would you expect from right-wing Republicans like Gingrich, or Giuliani or Donald Trump? Liberals would never tolerate such a harmful, racially biased intrusion on civil liberties.
Or would they?
As you watch the clip, try this: Whenever Trevor Noah says “crime” or “criminal” substitute “child abuse” or “child abuser.”  And whenever he says stop-and-frisk, substitute  “predictive analytics.”



As with stop-and-frisk, predictive analytics puts a pseudo-scientific veneer on indirect racial profiling.  ProPublica proved it. And as with stop-and-frisk, predictive analytics leads to an enormous number of false positives, guaranteeing that many more innocent families will be swept into the system.

If anything, the collateral damage of predictive analytics can be worse than that of stop-and-frisk. With stop-and-frisk, a child may see his father thrown up against a wall and roughed up, but at least when it’s over the child still will have his father.
Yet, as has been the case so many times before, the Left has failed to mobilize to counter a threat to civil liberties disguised as fighting child abuse.

Shouldn’t analytics proponents know what their own data mean?


          As is common in child welfare, some backers of predictive analytics say, in effect, if you dare to disagree with me you don’t care if children are hurt.

Case in point: Joshua New, a “policy analyst” for something called The Center for Data Innovation.  Exactly what that is remains unclear, since I can find no listing of board, staff or funders on the group’s website.
New accuses those of us who disagree with him not simply of opposing predictive analytics, but “sabotaging” it, a word that conjures up images of luddites from the Vast Family Preservation Conspiracy sneaking into offices to destroy computers.  This proponent of Big Data offers no data at all to support his claim of sabotage.
Then, he alleges that those of us who disagree with him are “more fearful of data than they are concerned about the welfare of children.”
No; we are fearful of people who harm the welfare of children by pushing the use of Big Data, even when they don’t know what the most fundamental statistics actually mean.
In the second sentence of his column, New misunderstands the first statistic he cites.  He writes: “Consider that in 2014, 702,000 children were abused or neglected in the United States …”
But that’s not true. Rather, the 702,000 figure represents the number of children involved in cases where a caseworker, typically acting on her or his own authority, decided there is slightly more evidence than not that maltreatment took place and checked a box on a form to that effect.

For purposes of this particular statistic, there is no court hearing beforehand, no judge weighing all sides, no chance for the accused to defend themselves.
I am aware of only one study that attempted to second-guess these caseworker decisions. It was done as part of the federal government’s second “National Incidence Study” of child abuse and neglect.  Those data show that caseworkers were two to six times more likely to wrongly substantiate maltreatment than to wrongly label a case “unfounded.”[2]
I don’t blame the federal government for compiling the data.  I don’t blame the computers that crunched the numbers.  My problem is with how the human being – New – misinterpreted the numbers in a way favorable to his point of view.
I’m not saying he did it on purpose (after all, he’s only human).  But it also is less than reassuring to see him cite the supposed success of the Los Angeles experiment without mentioning that pesky 95 percent false positive rate.

Other reasons the risk is greater in child welfare


          The failure of many on the Left to stand for civil liberties when the issue is child abuse has created a whole series of other problems. They add up to still more reasons for concern about adding predictive analytics to the mix:

At least in criminal justice, every accused is entitled to a lawyer – though not necessarily an effective one. At least in criminal justice conviction requires proof beyond a reasonable doubt. At least in criminal justice, the records and the trial are public. At least in criminal justice, almost everyone now admits that racial bias is a problem, even if they disagree about how much of a problem.  And at least in criminal justice, the leader of one of the largest and most important law enforcement organizations, the International Association of Chiefs of Police, issued a public apology to communities of color for the actions of police departments.
In contrast, none of these protections is universal – and most never apply at all – in cases where the stakes often are higher: cases in which a child protective services agency decides to throw a child into foster care.
The right to counsel, and whether hearings are open or closed, vary from state to state. In every state, child protective services can hide almost every mistake behind “confidentiality” laws. Homes can be searched and children can be strip-searched – and seized – without a warrant.
The standard of proof for a court to rubber-stamp removal of a child is only “preponderance of the evidence,” the same standard used to determine which insurance company pays for a fender-bender. 

And not only has there been no apology to the African American community for the ongoing racial bias that permeates child welfare, there is an entire coterie in child welfare insisting that people in the field are so special, so superior to the rest of us, that racial bias isn’t even an issue. Stripped of all the blather and euphemism, their position boils down to this: Of course there used to be racism in America, and that made African-Americans and Native Americans bad parents, so we have to take away their children. As noted earlier, common sense, and abundant research, say otherwise.

Indeed, imagine for a moment the uproar that would follow if a prominent figure in law enforcement, asked which states were doing the best job curbing crime, replied that it’s complicated “but I will tell you the states that do the best overall are the ones that have smaller, whiter populations.”

If a police official said that, he might have to resign, and probably would be effectively, and rightly, exiled from debates over criminal justice.

Michael Petit
But now consider what happened when Michael Petit, then running a group calling itself “Every Child Matters,” which itself has a record of misusing data, was asked at a congressional hearing “what states have had the best system in place to predict and deal with and prevent [child abuse]?”

Petit said it was a complicated question, “but I will tell you the states that do the best overall are the ones that have smaller, whiter populations.”

Far from being asked to apologize or ignored from then on in the child welfare debate, Petit was named to a national commission to study child abuse and neglect fatalities, where he was the most outspoken advocate for taking away more children – and for predictive analytics.
For all these reasons predictive analytics probably is even more dangerous in child welfare than in criminal justice.

Research specific to child welfare

But we don’t need to draw analogies to criminal justice to see the failure of predictive analytics.  Consider what a researcher found in New Zealand, a world leader in trying to apply predictive analytics to child welfare:

 “New Zealand Crunches Big Data to Prevent Child Abuse,” declared a  headline on a 2015 story about predictive analytics.

The story quotes Paula Bennett, New Zealand’s former minister of social development, declaring at a conference: “We now have a golden opportunity in the social sector to use data analytics to transform the lives of vulnerable children.”
If implemented, the story enthuses, it would be a “world first.”
All this apparently was based on two studies that, it now turns out, used methodology so flawed that it’s depressing to think that things ever got this far.  That’s revealed in a detailed analysis by Professor Emily Keddell of the University of Otago.

The studies that supposedly proved the value of predictive analytics attempted to predict which children would turn out to be the subjects of “substantiated” reports of child maltreatment.
Among the children identified by the software as being at the very highest risk, between 32 and 48 percent were, in fact, “substantiated” victims of child abuse. But that means more than half to more than two-thirds were false positives.
Think about that for a moment. A computer tells a caseworker that he or she is about to investigate a case in which the children are at the very highest level of risk.  What caseworker is going to defy the computer and leave these children in their homes, even though the computer is wrong more than half the time?
But there’s an even bigger problem. Keddell concludes that “child abuse” is so ill-defined and so subjective, and caseworker decisions are so subject to bias, that “substantiation” is an unreliable measure of the predictive power of an algorithm. She writes:
How accurately the substantiation decision represents true incidence is … crucial to the effectiveness of the model. If substantiation is not consistent, or does not represent incidence, then identifying an algorithm to predict it will produce a skewed vision …

Turns out, it is not consistent, it does not represent incidence, and the vision is skewed. Keddell writes:
Substantiation data as a reflection of incidence have long been criticized by researchers in the child protection field … The primary problem is that many cases go [unreported], while some populations are subject to hypersurveillance, so that even minor incidents of abuse are identified and reported in some groups.

That problem may be compounded, Keddell says, by racial and class bias, whether a poor neighborhood is surrounded by wealthier neighborhoods (substantiation is more likely in such neighborhoods), and even the culture in a given child protective services office.
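Keddell’s point about hypersurveillance is easy to demonstrate with a toy simulation. In the sketch below – every number in it is invented – two neighborhoods have identical true rates of maltreatment, but one is watched far more closely, so far more of its incidents become “substantiated.” An algorithm trained on the substantiation label will learn that the heavily watched neighborhood is “riskier,” even though actual incidence is the same.

```python
# Toy illustration of label bias from uneven surveillance; every number is invented.
import random
random.seed(0)

TRUE_INCIDENCE = 0.05                 # identical real rate of maltreatment in both places
DETECTION_RATE = {"heavily watched neighborhood": 0.60,
                  "lightly watched neighborhood": 0.15}

def simulate(n_families=20_000):
    results = {hood: {"families": 0, "substantiated": 0} for hood in DETECTION_RATE}
    for _ in range(n_families):
        hood = random.choice(list(DETECTION_RATE))
        results[hood]["families"] += 1
        maltreatment_occurs = random.random() < TRUE_INCIDENCE
        if maltreatment_occurs and random.random() < DETECTION_RATE[hood]:
            results[hood]["substantiated"] += 1
    return results

for hood, counts in simulate().items():
    label_rate = counts["substantiated"] / counts["families"]
    print(f"{hood}: true incidence {TRUE_INCIDENCE:.0%}, "
          f"substantiation rate {label_rate:.1%}")
# The "label" a model learns from differs several-fold between the neighborhoods,
# even though the underlying behavior does not.
```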

Predictive analytics becomes self-fulfilling prophecy


Algorithms don’t counter these biases, they magnify them.
Having a previous report of maltreatment typically increases the risk score. If it’s “substantiated,” the risk score is likely to be even higher. So then, when another report comes in, the caseworker, not about to overrule the computer, substantiates it again, making this family an even higher “risk” the next time. At that point, it doesn’t take a computer to tell you the children are almost certainly headed to foster care.
Writes Keddell:
“prior substantiation may also make practitioners more risk averse, as it is likely to heighten perceptions of future risk to the child, as well as of the practitioner’s own liability, and lead to a substantiation decision being made.”

So predictive analytics becomes a self-fulfilling prophecy.
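Here is a minimal sketch of that loop. The two probabilities are made up for illustration – they are not drawn from any real tool – but the ratchet is the point: each substantiation raises the risk score, and a higher score makes the next report more likely to be substantiated.

```python
# Toy model of the self-fulfilling-prophecy loop; both numbers are assumptions.
BASE_CHANCE = 0.30   # assumed chance a report is substantiated for a family with no history
SCORE_BUMP = 0.20    # assumed increase per prior substantiation, fed in through the risk score

def chance_of_substantiation(prior_substantiations):
    return min(0.95, BASE_CHANCE + SCORE_BUMP * prior_substantiations)

for priors in range(5):
    print(f"{priors} prior substantiation(s): "
          f"{chance_of_substantiation(priors):.0%} chance the next report is substantiated")
# 0 priors: 30%; 1 prior: 50%; 3 priors: 90%.  Each substantiation feeds the score
# that produces the next one, regardless of what actually happened in the home.
```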
Keddell also highlights the problems when even accurate data are misused by fallible human beings:
Several researchers note the tendency for individualised risk scores to be utilised in negative ways in practice, where actuarial approaches are prioritized over professional judgement. While statistical modellers may understand the tentative nature of statistical prediction or correlation … practitioners tend to treat statistical data, especially when stripped of its explanatory variables, as solid knowledge, which can function as a received truth.

But it turns out there may be one area where predictive analytics can be helpful. Keddell cites two studies in which variations on analytics were used to detect caseworker bias. In one, the researchers could predict which workers were more likely to recommend removing children based on questionnaires assessing the caseworkers’ personal values.
In another, the decisions could be predicted by which income level was described in hypothetical scenarios. A study using similar methodology uncovered racial bias.

So how about channeling all that energy now going into new ways to data-nuke the poor into something much more useful: algorithms to detect the racial and class biases among child welfare staff? Then we could teach those staff to recognize and overcome those biases, and protect children from biased decisions by “high risk” caseworkers.

The reality on the ground

          To see how the brave new world of predictive analytics likely would play out in an actual American case, let’s take a trip into the very near future and consider a hypothetical case.
Child Protective Services has just received a child maltreatment report concerning a father of five. With a few keystrokes, CPS workers find out the following about him:
He’s married, but the family lives in deep poverty. He has a criminal record, a misdemeanor conviction. He and his wife also had the children taken away from them; they were returned after six months.
These data immediately are entered into a computer programmed with the latest predictive analytics software. And quicker than you can say “danger, Will Robinson!” the computer warns CPS that this guy is high-risk.
When the caseworker gets to the home, she knows the risk score is high, so if she leaves those children at home and something goes wrong, she’ll have even more than usual to answer for.
That means the allegations in the actual report – and whether or not those allegations are true – barely matter. In this new, modern age of “pre-crime,” making determinations based on what actually may have happened is passé.  Instead we make decisions based on what the computer says might happen.  So those children are likely to be taken away, again.

So, now let’s return to the present and meet the family at the center of the actual case, from Houston, Texas, on which this hypothetical is based. Follow this link or watch the video below:




         Notice how the case involves no accusation of abuse.  The children were not beaten, or tortured. They were not left home alone.  They were not locked in a closet.  They were not starved.  The children were taken because their father was panhandling while they were nearby – with their mother. Period.
In the hypothetical, I changed two things about this story. First, the story mentions no criminal charges, and, in fact, panhandling is illegal only in some parts of Houston. But predictive analytics tends not to factor in Anatole France’s famous observation that “the law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal bread.”
So had there been a criminal conviction, or even a charge, regardless of the circumstances, it almost certainly would have added to the risk score.  But even without adding a criminal conviction to the hypothetical, we’re still talking about a family which not only had a previous report of child neglect “substantiated” but already had the children taken away.
And second, I’m assuming the father, Anthony Dennison, and his wife actually will get their children back. In fact, there’s no telling what will happen, and the family is under the impression that CPS is pursuing termination of parental rights.
What we do know is that in the brave new world of predictive analytics, if Dennison’s children ever are returned, and if Dennison ever is reported again, the children are likely to be removed again. And, what with it then being the second time and all, they’re more likely to stay removed forever.
For now, the parents don’t know where their children are. But given that this is Texas foster care we’re talking about, odds are it’s nowhere good.

The child welfare response: We can control our nukes


          Although the strongest pressure for predictive analytics comes from those who also want to see far more children taken away, there are some reformers running child welfare systems who also want to use it.

          Their argument boils down to this: We understand the enormous power of Big Data, and you know we don’t want to take away lots more kids.  So you can trust us. We’ll only use our enormous power for good!

          There are a number of variations on the theme, the most prominent being: We’ll only use the data to target help to families, we won’t use it to decide whether to remove the children. And then there are those, like Joshua New, who paraphrase the National Rifle Association: Computers don’t decide whether to take away children, they say, people do!  Human beings are free to override any conclusions reached by an algorithm.

          But that’s not how it’s going to work in real life.

          For starters, let’s return to that case in Houston cited above:

When child protective services in Houston encountered the Dennison family, they did not offer emergency cash assistance. They did not offer assistance to Dennison to find another job, or train for a new one.
They took the children and ran. Just as Houston CPS did in another case, where they rushed to confuse poverty with neglect.

An algorithm won’t make these decisions any better.  It will just make it easier to take the child and run.
          But, say some reformers, we’re more enlightened than the people in Houston, we wouldn’t misuse analytics in a case like this. Maybe not. But child welfare agency chiefs don’t tend to stay on those jobs for very long. To those reformers I would respond: What about your successor?  And her or his successor? 

          And, as noted earlier, what caseworker will leave a child in a home rated by a computer as high risk, knowing that if something goes wrong, she’ll be all over the local media as the caseworker who ignored “science” and “allowed” a tragedy to occur?  Of course she won’t leave the children in that home.  So children will be taken in many, many cases where the algorithm got it wrong and produced a “false positive.”

         
Big Data is like nuclear power at best, nuclear weapons at worst
Big Data is like nuclear power at best, nuclear weapons at worst. When only the smartest, most dedicated people are involved in every step of the process, from building the power plants, to installing safety features, to running them, nuclear power might be a safe source of electricity. But put so much power in the hands of typical, fallible human beings and you get Three Mile Island, Chernobyl and Fukushima. Put it in the hands of the malevolent and you get the North Korean nuclear weapons program.

          Few in child welfare are truly malevolent.  But there are lots and lots of typically fallible human beings.  And nothing in the history of child welfare indicates that it can responsibly handle the nuclear weapon of Big Data.  That’s why predictive analytics actually amounts to data-nuking poor families.

Efforts to abuse analytics already are underway


          There’s no need to speculate about whether America’s child welfare bureaucrats would misuse predictive analytics – some already are trying.

Misusing analytics was, in fact, the primary recommendation of a group that was known as the Commission to Eliminate Child Abuse and Neglect Fatalities (CECANF).  The commission was the brainchild of Michael Petit – the same Michael Petit whose appalling comments on race are noted above.  The recommendation to misuse analytics also was Petit’s idea.
As I noted in the trade journal Youth Today, the commission, created by an act of Congress, was chaotic, angry, dysfunctional and secretive. It made decisions based on newspaper horror stories.  One of only two African American commissioners was treated with appalling disrespect as the Commission rushed to complete deliberations.  (That commissioner, Patricia Martin, presiding judge of the Child Protection Division of the Circuit Court of Cook County, Illinois, wrote a scathing dissent.)  In other words, the Commission to Eliminate Child Abuse and Neglect Fatalities didn’t study the child welfare system; it recreated the child welfare system.
The Commission used predictive analytics to justify what some commissioners called a “surge,” in which states would use “multi-disciplinary teams” to reopen huge numbers of cases – but only cases in which children were allowed to remain in their own homes.  There would be no checking on children in foster care to see if they really need to be there. (Other commissioners, with no sense of irony, referred to this idea as an “accelerant.” Both terms were dropped after commission PR staff were instructed to find something more palatable.)
The surge/accelerant calls for demanding that states look at every child abuse death for the past five years, try to find common risk factors, and then reopen any case where even one such risk factor may be present.
The flaw in the logic is one for which we all should be grateful. Though each is among the worst imaginable tragedies, the chances of any parent killing a child are infinitesimal. The chances of a parent with a given “risk factor” killing a child are ever-so-slightly less infinitesimal.
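To make that concrete, here is a rough calculation under invented and deliberately generous assumptions about a fatality-prediction tool. Even a screen with error rates far better than anything the field has produced would sweep in hundreds of families for every death it correctly anticipated.

```python
# Rough base-rate arithmetic; the inputs are illustrative assumptions, not official figures.
children = 70_000_000          # rough order of magnitude of U.S. children
fatalities_per_year = 1_700    # rough order of magnitude of maltreatment deaths

sensitivity = 0.99             # assume the tool catches 99% of future fatalities
false_alarm_rate = 0.01        # assume it wrongly flags only 1% of all other families

true_flags = fatalities_per_year * sensitivity
false_flags = (children - fatalities_per_year) * false_alarm_rate
flagged_per_true_case = (true_flags + false_flags) / true_flags

print(f"Base rate: roughly 1 child in {round(children / fatalities_per_year):,}")
print(f"Families flagged per fatality correctly anticipated: about {round(flagged_per_true_case):,}")
# Even with these implausibly good error rates, more than 400 families would be
# swept into the system for every tragedy the tool actually foresaw.
```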
It should not have been necessary for the federal Department of Health and Human Services to point out something so obvious, but, since they were mandated by Congress to respond to the Commission’s report, they did so. Here’s what they said:
States frequently have significant year-to-year swings in the number and rate of fatalities. In small states, a single incident rather than a systemic issue can dramatically affect annual statistics. In addition, in small states an analysis of data from the past five years…would include too few cases to draw definitive conclusions.
The surge isn’t the only area where the Commission misused analytics.  It also used analytics as the basis for a recommendation that would prohibit child protective hotlines from screening out any call involving a child under age 3, and another that would bar screening out any case in which someone calls more than once.
The hotline recommendations alone, if implemented, probably would increase the number of cases investigated every year by nearly 40 percent. (For NCCPR’s full analysis of the Commission report, see our full rebuttal here.)
The Commission recommendations add up to a regime of domestic spying that would make the NSA blush. And even if you’re not fazed by the enormous harm inflicted on children by needless investigations and needless foster care, consider what a 40 percent increase in caseloads would do to the quality of investigations.  Workers would have far less time for any case – and more children in real danger would be missed.
All because the commission fell in love with the concept of predictive analytics.
CECANF was the Keystone Kops of commissions
These posts to the NCCPR Child Welfare Blog describe in detail the chaos and incompetence that characterized the deliberations of what turned out to be the Keystone Kops of commissions (not to mention the time and money they wasted writing blog posts that could be turned into Mad Libs). And these are supposedly national leaders in the field. Do we really want to give people like this, running secret systems with no real due process protections and no real accountability, the nuclear weapon of predictive analytics?

What really happened in Tampa

In the face of all the evidence of the dangers of predictive analytics, both from what we can learn from criminal justice and from the New Zealand study specific to child welfare, proponents can cite only one real-world example where, they claim, analytics worked. That’s why they fall back on it over and over.
The Commission, and most other proponents, make their case for “predictive analytics” based almost entirely on only one real-world application in the child welfare field, in Hillsborough County (metropolitan Tampa), Florida.  But there is no real evidence that alleged improvements there had anything to do with predictive analytics.

In the Florida system, almost all child welfare services are privatized.  Regional “lead agencies” oversee both foster care and in-home supervision.

Between 2009 and 2012 there was what newspapers love to call a “spate” of child abuse deaths in the county – nine in all.  The state terminated the contract of the “lead agency” and replaced it with one that had a particularly good reputation.  That agency adopted a predictive analytics tool called “Rapid Safety Feedback” (RSF).  Since then, it has been repeatedly claimed, there have been no child abuse deaths.  (This is not quite accurate, but to find that out it’s necessary to look at endnote 32 in the Commission report.)         

But whatever the exact figure, if there’s a reduction RSF must have caused it and everyone should rush to embrace it, right?  That’s what the Commission seems to think.

But correlation is not causation.

For starters, we have yet to see an account of the supposed miracle in Tampa that tells us how many child abuse deaths there were in Hillsborough County in the years before the “spate.” Presumably, since the 2009 to 2012 events raised such an alarm, there must have been few or none in the preceding four years. 

In addition, determining whether a death is due to child maltreatment is not as easy as it may seem. As the Commission report itself explains well on page 77, it’s actually as subjective as almost everything else in child welfare. 

For example, suppose early one Sunday morning, while Mom and Dad are asleep, a small child manages to let himself out of the house, wanders to some nearby water and drowns.  Was that an accident or neglect?  Given the biases that permeate child welfare, odds are if it was a backyard pool in a McMansion it’s going to be labeled an accident; if it was a pond near a trailer park, it probably will be labeled neglect.

The picture is further muddied by the peculiar politics of Florida.  In that state, policy and guidance concerning what kinds of deaths to label as maltreatment have changed several times in recent years, making it even harder to make a true comparison.

And there was another change in Tampa: A lot of additional caseworkers were hired.  But unlike in so many other cases where this happens in the wake of high-profile tragedies, the new lead agency and the state Department of Children and Families worked hard – at first – to ensure there was no foster-care panic – no sudden surge in child removals in that region.  So the new workers actually had time to do their jobs, instead of drowning in new cases.  (Sadly this did not last.  Driven largely by grossly inaccurate news coverage, a statewide foster-care panic has sharply increased entries into care.  So the record concerning deaths may not last either.  Indeed, statewide, deaths of children “known to the system” have increased.)

The Commission held an entire hearing in Tampa – but chose to ignore a crucial warning from a key witness, Prof. Emily Putnam-Hornstein of the University of Southern California School of Social Work, who said:

“[W]e would be mistaken to think about predictive risk modeling, or predictive analytics, as a tool we would want to employ with that end outcome specifically being [preventing] a near fatality or a fatality, because … I don’t think we will ever have the data or be able to predict with an accuracy that any of us would feel comfortable with and intervene differently on that basis.”

Of course you won’t find this in the Commission report – only in Judge Martin’s dissent.

Finally, there is one other possible reason for what happened in Florida: Dumb luck.  Even a story in the Chronicle of Social Change, an online publication that has been cheerleader-in-chief for predictive analytics, had to acknowledge that “given the rarity of these events, a lapse in child deaths could be as much anomaly as anything else.”

Indeed, the director of quality assurance for the new lead agency told the Chronicle: “I never try to claim causality.”

No one else should either.


Lessons from the elections of 2016


 In 2016, predictive analytics told us Hillary Clinton would be president.  Predictive analytics told us the Democrats would take control of the Senate.  And The New York Times says there are lessons from that – lessons that go far beyond election forecasting.  According to the Times:

It was a rough night for number crunchers. And for the faith that people in every field … have increasingly placed in the power of data. [Emphasis added]
 [The election results undercut] the belief that analyzing reams of data can accurately predict events. Voters demonstrated how much predictive analytics, and election forecasting in particular, remains a young science …
  [D]ata science is a technology advance with trade-offs. It can see things as never before, but also can be a blunt instrument, missing context and nuance. … But only occasionally — as with Tuesday’s election results — do consumers get a glimpse of how these formulas work and the extent to which they can go wrong. … The danger, data experts say, lies in trusting the data analysis too much without grasping its limitations and the potentially flawed assumptions of the people who build predictive models.

As we’ve seen, flawed assumptions, built into the models, were the root of the rampant racial bias and epidemic of false positives that ProPublica found when analytics is used in criminal justice. And as we’ve seen, Prof. Emily Keddell found much the same when she examined bias and false positives specific to predictive analytics in child welfare.

The Times story also includes a lesson for those who insist they can control how analytics is used – those who say they’ll only use it to target prevention – not to decide when to tear apart families:

Two years ago, the Samaritans, a suicide-prevention group in Britain, developed a free app to notify people whenever someone they followed on Twitter posted potentially suicidal phrases like “hate myself” or “tired of being alone.” The group quickly removed the app after complaints from people who warned that it could be misused to harass users at their most vulnerable moments.


Conclusion – Big Data is watching you

          If you really want to see the world envisioned by proponents of predictive analytics, forget the bland reassurances of proponents in child welfare.  Just look at how those pushing the product market it to other businesses:



So, what do we have here?  A bunch of data analysts, presumably working for a firm that sells sporting goods, are spying on a woman’s recreational habits. They have amassed so much data and their algorithms are so wonderful that it’s like having a camera watching her 24/7. Not only do they know her preferences, they know exactly why she prefers one sport over another and exactly what she’ll do next.

In other words, they’re stalking her.

But this is not presented as a warning of the dangers of predictive analytics. On the contrary, virtual stalking is what they’re selling.

That’s because the commercial is not aimed at consumers – such as the woman being stalked. The target audience is potential stalkers; in this case people who want to sell her stuff.

The maker of the stalking – er, analytics – software, and maker of the commercial, is SAP – described in a story by the analytics cheerleaders at the Chronicle as one of the “market leaders” in predictive analytics and a potential competitor in the child welfare market.

Unlike the bland reassurances given when people raise concerns about predictive analytics, the commercial reveals the real mindset of some of the human beings pushing Big Data.

Apparently, no one at SAP was creeped out by the ad’s Orwellian overtones. The slogan might as well have been “Big Data Is Watching You.” That alone ought to be enough to make anyone think twice about turning these companies loose in the child welfare field.

And it ought to make anyone think twice about giving this kind of power to secretive, unaccountable child welfare bureaucracies that have almost unlimited power to take away people’s children.





[1] In addition to Michael Petit, whose role in the debate is discussed in the text, big boosters of predictive analytics include Richard Gelles, a strong proponent of taking away more children and expanding the use of orphanages, Elizabeth Bartholet, whose own proposals for changing the system would require the removal of millions more children from their homes, and Daniel Heimpel, who assists Bartholet with her research, and regularly uses the so-called Chronicle of Social Change he publishes to attack family preservation efforts and promote predictive analytics. (I also am a regular blogger for the Chronicle - It’s the child welfare equivalent of being the token liberal at Fox News.  Much of this publication is drawn from columns originally published there.)             

[2] Study Findings: Study of National Incidence and Prevalence of Child Abuse and Neglect: 1988 (Washington: U.S. Dept. of Health and Human Services, National Center on Child Abuse and Neglect, 1988), Chapter 6, Page 5.