The gap between insight and intervention

I have discovered, over the years, that I’m a spectrum thinker. On whiteboards and bar tables and with wild air gestures, I always seem to be explaining how there are two opposing endpoints and why I’m only interested in this or that part of the area between them.

So it is perhaps unsurprising that over the past few months of evangelizing the idea of a Chief Behavioral Officer and talking to companies both large and small about how psychology fits in their business, I’ve started to see a spectrum in how application is happening.

On one end is Insight. The function typically lives in data science/analytics/whatever the heck we are calling it these days and reports to the head of that area. The primary inputs tend to be variables that are already instrumented, and the primary output is typically some sort of report that indicates a potential surface area for change, with some more ambitious companies also including a few recommendations for high-level potential intervention. This report is handed off to whoever controls the variable itself (marketing, product, ops, etc.), while the Behavioral Scientist returns to the data puddle to investigate something new.

This is where the bulk of the job openings are at the moment: Allstate, Amazon, Facebook, you name it. In some ways, the descriptions often sound like a hybrid of user research and data science, with the goal expressed as “We want to understand our users’ behavior, particularly where it is irrational”. Understand is the key word; this role is about the why of human behavior. Certainly there is an implicit belief that the understanding will lead to better behavior change, but the actual change lives elsewhere, with whoever owns the lever that may need pulling.

Contrast that with the other end of the spectrum, Intervention. This function is focused on the actual changing of human behavior and seems to live in Strategy/Innovation/Global Services. While this role may touch data, it doesn’t seem to have analysis at the core of its function (think SPSS instead of R), and if paired with a solid data team, it may not actually be doing much data work at all. The output isn’t a report but rather an intervention that has been experimented with and iterated on until it can be shown to reliably change a behavior and is ready for scale.

As with Insight, there is still a handoff at the scaling point, where the intervention passes to the relevant team for ongoing ownership. But relatively speaking, the Intervention function is picking up the ball later (after an insight exists) and carrying it farther (until a scalable intervention exists).

I’ve seen comparatively few roles here, in part because Insight already fits into existing structures (Data Science reports on trends, someone else pulls the lever), whereas Intervention requires creating a new step in between. But I believe this is a little like the recent pseudo-bifurcation of data science into data science as analytics (BI, Insight teams, etc.) and data science as product (machine learning, AI, etc.). A conversation with an insurance company recruiter sticks in my mind: “We’d love to be doing intervention, we just don’t think we are there yet, so we’re starting with insights.”

Is this spectrum rigorous? Absolutely not, and every company is thinking about it differently. There is no science here, only an attempt to pattern-match the signal out of the noise. But I think that in order for behavioral science to catch up to data science in terms of corporate understanding, it behooves us to develop a common vernacular. Executives need terms they can buy into, and recruiters need roles they can recruit for.

One potential option is to recognize the commonality of the two roles by keeping a single title, Behavioral Scientist, but emphasizing differing job requirements and responsibilities. Speaking very broadly, I’ve seen more postings use “behavioral economics” as a requirement when they are looking for Insight and “behavioral design” when looking for Intervention, although both of those terms are about the modification of other fields to incorporate psychology rather than putting psychology at the center.

In my conception of CBO, both insight and intervention are needed. I’ll admit that I’m biased toward the intervention side, since my expertise is mostly in the building of things, but look at Bing in the Classroom: there were initial insights (“School search volume is lower than expected”, “Curiosity is not the root cause”) that allowed for the intervention. Ditto GetRaised (“Women are significantly underpaid”, “Women are less likely to ask for raises and less likely to get them when they do”).

But as with data science, we must resist the urge to simply relegate behavioral scientists to insight functions. There is a natural tendency to look at the black box of human behavior and long for understanding. But in reality, business is driven by the ability to change behavior, so not applying science directly to intervention design seems foolhardy. Regardless of which is more needed, however, the prediction of behavior and the modification of behavior are related but not the same, and should not be painted with a single brush.

Side note: For years, I resisted calling myself a feminist. Typical arguments about humanism and striving for equality not being gendered and blah blah blah. And now it is in my damn Twitter bio. Similarly, for years I’ve resisted the term behavioral design. Science is so important to me that it is hard to leave it out. And yet as people increasingly use behavioral design to differentiate from behavioral economics, it may be something to consider. I’m not yet convinced enough to start applying the term to myself…but I’m tempted. Particularly because I distinctly don’t want to spend the rest of my life predicting behavior; I want to create it.

an N of 1: in statistics, a sample size of 1 has almost no validity. in life, this is less true.