I believe that behavioral science, correctly applied, can change the world. But, as with any emerging discipline, there is a period of self-definition in which people fight (with varying amounts of actual animosity) about who can claim what title and where the borders of the field are.
Personally, I’ve largely been uninterested in the debate about who can and cannot call themselves a behavioral scientist (though to be clear, as a non-PhD, it benefits me not to start drawing lines). But that’s different from what it actually means to be doing behavioral science; as the name of the field suggests, it is the behaviors that should concern us. So I have become increasingly interested in how we might break down the various components of behavioral science into smaller units of work that could be credibly offered independently, while firmly maintaining the integrity of the behavioral science process as a whole.
To begin, let’s be clear that I am actually talking about applied behavioral science, which is explicitly concerned with changing behavior. This is distinct from academic behavioral sciences (like social psychology, behavioral economics, etc.), which further our understanding of the basic principles that underlie human behavior. That doesn’t mean academic folks don’t care about change or that applied folks don’t care about knowledge, just that each prioritizes one over the other. In my case, as an applied behavioral scientist, that means that while I still sometimes publish peer-reviewed papers, my primary work is changing the behaviors of populations.
My simple definition of applied behavioral science has always been “behavior as an outcome, science as a process,” which has the benefit of being easy to explain to people without exposure to the discipline and sounding pithy when you say it in a presentation. But if you’re trying to buy behavioral science services, or understand how you might begin to build them internally, that definition isn’t terribly useful.
To make it more practical, I propose a four-stage model below that balances an understanding that each part is essential with the need to break it down into units of work that can be spread across internal teams and external vendors when necessary. But be warned: each handoff increases the potential for loss, particularly when there is an incomplete understanding of the adjoining stages. A tightly integrated process managed by people who understand the end-to-end process will always have the greatest likelihood of creating meaningful behavior change; that we can name the parts should not detract from the need for a whole.
- Strategy: the defining of a desired behavioral outcome, with population, motivation, limitations, behavior, and measurement all clearly demarcated. Plain version: figure out what “works” and “worth doing” mean in behavioral terms by collaborating with stakeholders.
- Insights: the discovery of observations about the pressures that create current behaviors, both quantitative and qualitative. Plain version: figure out why people would want to do the behavior and why they aren’t already by talking to them individually and observing their behavior at scale.
- Design: the design of proposed interventions, based on behavioral insights, that may create the pre-defined behavioral outcome. Plain version: design products, processes, etc. to make the behavior more likely.
- Evaluation: the piloting (often but not always using randomized controlled trials) of behavioral interventions to evaluate to what extent they modify the existing rates of the pre-defined behavioral outcomes. Plain version: figure out whether the products, processes, etc. actually change the pressure, make the behavior more likely, and do so at a magnitude that is attractive for scaling.
- Behavioral Science: combining all four of those processes. Plain version: behavior as an outcome, science as a process.
Step 1: Behavioral Strategy
Because the process is linear and each step requires that the previous step be complete (though not necessarily done by the same person), we need to start by defining the behavioral outcome we want to achieve. In the latest version of the Intervention Design Process (or IDP; the applied system I propose in my book), we do that using a behavioral statement: When [population] wants to [motivation], and they [limitations], they will [behavior] (as measured by [data]). Arriving at that statement is deceptively hard work and requires running a disciplined process with stakeholders to define each of those variables. But done correctly, it paints a picture of the world we want to create when our interventions are working.
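To make the template concrete, here is a minimal sketch that fills in the behavioral statement programmatically. The gym scenario and every value below are invented for illustration; they are not an example from the IDP itself.

```python
# Hypothetical illustration of the behavioral statement template.
# The gym example and all field values are invented, not from the IDP.
TEMPLATE = (
    "When {population} wants to {motivation}, and they {limitations}, "
    "they will {behavior} (as measured by {data})."
)

statement = TEMPLATE.format(
    population="a new gym member",
    motivation="feel healthier",
    limitations="have limited free time on weekdays",
    behavior="attend at least one class per week",
    data="badge swipes at class check-in",
)
print(statement)
```

Forcing stakeholders to supply every field, rather than leaving any blank, is what makes the later stages measurable.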
Given that the process prioritizes what we want the result to be rather than the interventions that actually create the result, my proposed term is behavioral strategy. While it doesn’t have to include a cost/benefit ratio that defines how much an intervention can cost relative to the impact it has, knowing this can certainly shape the rest of the process and allow stakeholders to more clearly understand the actual stakes.
Inside a company, both Strategy and Product teams try to answer this question regularly, although they often express it in imprecise, non-behavioral terms that create misalignment later. Externally, a strategy firm like McKinsey could likely spin up a unit to do this work reasonably well, but, like internal teams, they currently tend not to focus specifically on behaviors and don’t offer this as a service today.
Step 2: Behavioral Insights
The next step in the IDP is measuring the distance from the world we want by understanding the pressures that create the world of behavior we have today. Insights can be both quantitative and qualitative, so collectively I propose the term behavioral insights, split as needed into qualitative behavioral insights and quantitative behavioral insights, since there are specialists who concentrate on one approach or the other.
Existing user researchers and data scientists frequently do this work (Spotify has Quantitative User Research, for example), and as long as they’re doing the work with an explicit emphasis on generating insights to change behavior, these teams could slot in here. If you wanted to buy it as a service, IPSOS’ behavioral science team seems to do behavioral insights as a specialized form of market research that focuses on behavior and other agencies may be able to provide insights if specifically pointed toward behavioral outcomes.
Step 3: Behavioral Design
Having mapped the behavior we want and understood why it doesn’t yet occur, in the IDP we next get into pressure mapping and intervention design. There are lots of ways to create behavioral interventions that don’t use pressure mapping, like design thinking, but ultimately we are always trying to generate proposed interventions that may change behavior. I say “proposed” and “may” because while we have supporting evidence (the design process is based on the behavioral insights we defined above), we haven’t actually tested whether the interventions create the behavior.
Design and Product departments do this within companies today, although often lacking the behavioral focus, so it seems appropriate to call this behavioral design. And an agency like Fjord could potentially do this externally, so long as they are given an articulated behavior outcome and the relevant behavioral insights (neither of which they are likely to create themselves).
Step 4: Behavioral Evaluation
Finally, we have the evaluation of the proposed interventions, to see to what degree they actually create the outcome articulated in the behavioral strategy. While this is called impact evaluation in the non-profit world, behavioral testing builds on the more widely understood experimentation that is used in most for-profit companies. The theoretical gold standard is a randomized controlled trial, in which participants exposed to the intervention are compared against a control group, but that may not always be feasible; remember, in applied behavioral science, we only need to be as right as the cost/benefit ratio dictates. In a perfect world, doing this process also results in the observation of additional behavioral insights (because trying to change a system often reveals underlying truths about it) but I don’t think we should try to make this a specific requirement of this process.
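The core of that comparison can be sketched in a few lines: compare the rate of the target behavior in the treatment group against the control group and ask whether the lift is larger than chance would explain. The helper function and all the numbers below are hypothetical, for illustration only; in practice you would also weigh the lift against the cost/benefit ratio from the behavioral strategy.

```python
# A minimal sketch of evaluating an RCT-style pilot with a two-proportion
# z-test. All sample sizes and counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(success_t, n_t, success_c, n_c):
    """Return (lift, z, two-sided p-value) for treatment vs. control rates."""
    p_t, p_c = success_t / n_t, success_c / n_c
    # Pooled rate under the null hypothesis that both groups behave alike.
    p_pool = (success_t + success_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # Two-sided p-value from the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_t - p_c, z, p_value

# Hypothetical pilot: 540 of 4,000 treated users performed the target
# behavior, versus 450 of 4,000 users in the control group.
lift, z, p = two_proportion_z(540, 4000, 450, 4000)
print(f"lift={lift:.4f}, z={z:.2f}, p={p:.4f}")
```

Even a statistically significant lift only matters if its magnitude clears the bar the behavioral strategy set for scaling.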
Very few companies actually run rigorous pilots today, although it does happen in some Product and Data Science organizations (and Marketing loves non-theory-driven RCTs in the form of A/B tests), so this is probably the largest potential growth area for behavioral science as a whole. In the non-profit sector (where impact evaluation is sometimes built into grants), an agency like Social Impact will run an RCT on your interventions for you, if you’re careful to make sure they define “impact” in behavioral terms.
Combined: Behavioral Science
To me, behavioral science requires the combination of all of the above. If you can’t define a behavioral outcome (AKA don’t do behavioral strategy), then you miss out on the whole point of “behavioral” in this discussion; you can’t run a scientific process if you can’t measure what works, and you don’t know what “works” means if you don’t define it.
Similarly, you could run pilots and measure behavioral outcomes but without behavioral insights, that’s not science: your interventions aren’t necessarily designed based on replicable understandings (my favorite example of this is Marissa Mayer testing 41 shades of blue at Google; because there was no theory behind the iterations, you could only know what worked in that limited moment but not why) and so if they don’t work, you’re not actually any closer to something that does. It is only when all four processes come together that you truly get to both of the words in the term behavioral science…and neatly arrive back at “behavior as an outcome, science as a process.”
Some people who currently offer behavioral science services are going to hate this taxonomy, because it threatens their identity, both personally and professionally. And I understand that feeling: removing ambiguity can feel like a loss, when clarity reveals you’re only covering some of the territory. And not offering some services isn’t always by choice; for example, I’ve often heard consultants complain that they can’t sell behavioral impact evaluation to clients because they already “know” it will work after the behavioral design phase.
But the purpose of this guide is arriving at a shared understanding of applied behavioral science and its components, and part of that is recognizing that no one piece of the field is better than any other. There is no shame in only doing part of it, as long as we clearly explain the other parts and push the importance of doing the full process. By creating areas of intersection and smooth handoffs, we can better allow for specialization and move the world incrementally closer to behavior as an outcome, science as a process. And that’s work worth doing, in any form.
Side Note: My belief in this model is why I’ve decided to join frog as the Executive Director of Behavioral Science. My role is two-fold: help my fellow frogs apply behavioral strategy, insights, design, and impact evaluation in their projects and help our clients build their own applied behavioral science capabilities. While I’ve worked hard to evangelize the field broadly in my previous roles (including writing Start At The End, which was as close to a handbook as I can get, and doing 30+ talks a year), ultimately my career to date has been about creating a long series of behavioral interventions that accomplished internal business goals. In contrast, at frog I’ll be focusing specifically on behavioral science as a process, both internally and externally. As we see more senior behavioral scientists within large companies, we have the opportunity to leverage existing cross-disciplinary expertise to further support that work. And frog, particularly as part of Capgemini Invent, is the right place to do that. The agencies I mention in the examples can all learn to do parts of the behavioral science process. But because they typically do only their siloed step, they think of their stage’s deliverable in isolation. At frog, because we can and have done the full cycle, we know each step is just a milestone, so we can take a more holistic view and plan our work to naturally connect to the next necessary step. And through our Org Activation practice, we can teach organizations alongside projects to help them grow their own capabilities. Behavioral science doesn’t belong to frog – it belongs to everyone. And it is with that belief firmly in mind that we look forward to growing this discipline together.