When I wrote Start At The End, my goal was to help folks apply behavioral science in their everyday lives. And while it sold well and got good reviews, it was also an artifact fixed in time – the downside of books is that they can’t grow with you.

So in 2023, with the help of Lorraine Minister, I launched a full course with videos and practice exercises that built on the work of the book but updated it with my latest thinking. And after two years, we’ve earned enough to pay back our original development cost. And that means, as of today, the course is now available for free.

I recently gave a talk at GAABS about citizenship in applied behavioral science. Not how we serve the world (although that is important) but how we show up in our community, to advance the practice of applied behavioral science generally. My argument was essentially that we all need to do three things: learn (by advancing the field through scholarship), earn (by making the field financially viable), and serve (by giving back more than we take from the community).

But it can’t just be talk: if I believe it, I have to do it. And so, on my birthday, I decided that the best gift I could get would be to give something away. This course, which I so loved creating, is it.

Enjoy at https://mattwallaert.com/free-course/

Technology has always been at its best when it allows us to selectively reject it, to choose our own hardship.

I don’t really use AI for much, but periodically it comes in handy. And then, just when I’m not paying attention, it punches me right in the gut.

Because I’m doing a workshop on pressure maps, the organizers asked me to generate a sample. So I decided to make a pressure map about the promoting and inhibiting pressures for using a pressure map (meta, I know). And because it is 8:30pm the night before and I’m not sleeping much these days, I needed a thought partner.

Enter ChatGPT. I wrote a short prompt explaining what a pressure map is and what I wanted and sure enough, it spat back a very comprehensive list of pressures. But as I was congratulating myself on such good prompt engineering, I noticed it had appended the following:

“Want to contrast pressure maps with other tools (e.g., COM-B, Fogg, behavior over time graphs)? Or identify behavioral nudges that increase likelihood someone actually makes one?”

Paging Jared Peterson; this thing is trying to make a framework competition! And it clearly wasn’t using my prompt; it already knew all about pressure maps. So I asked it who invented the pressure map.

“The concept of the pressure map, distinguishing between promoting and inhibiting pressures influencing behavior, was developed by behavioral scientist Matt Wallaert. He introduced this tool as a core component of his Intervention Design Process (IDP), detailed in his book Start at the End. The pressure map serves to identify and visualize the factors that encourage or discourage a particular behavior, aiding in the design of effective interventions.”

Ruh roh. Clearly someone fed this thing my book. So I threw it a curve ball: is pressure mapping similar to Lewinian field theory?

“Yes—very similar. In fact, Wallaert’s pressure map is explicitly inspired by Kurt Lewin’s field theory. Lewin’s field theory (1940s) conceptualized behavior (B) as a function of the person (P) and their environment (E): B = f(P, E). Within this “field,” driving forces promote movement toward a goal and restraining forces oppose it. Lewin used this to explain both action and stasis (e.g., why people stay in undesired states). Wallaert’s promoting and inhibiting pressures directly parallel Lewin’s driving and restraining forces. The key differences:

 <insert very smart chart here>

In short: pressure maps operationalize Lewinian dynamics for intervention design. Wallaert takes the theory and makes it actionable.”

Well…shit.

I’m not scared of AI. People still like to do workshops with me, because my jokes are better. I’m happy that someone can have a cogent discussion with a computer about applied behavioral science.

But as Joanna Maciejewska said, “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” I like thinking! I have a high need for cognition! That ChatGPT can save time and cognitive effort for someone else doesn’t mean I want it to do it for me in my field of expertise.

But that’s the magic. Your laundry is my PhD thesis and vice versa. We all like thinking but we don’t all like thinking about the same things. The promise of AI isn’t that it automates everything but that it has the potential to automate everything. So that we can then choose what we don’t want to automate, what we want only to enhance or even do completely on our own.

It is like really great camping gear. Some people love glamping. Some people want to kill their own food with a spear they chipped out of squirrel bones. Technology has always been at its best when it allows us to selectively reject it, to choose our own hardship.

And so the most important feature of AI is always, always going to be the ability to turn it off. To be human is to struggle; utopia is just choosing what those struggles are going to be.

Lower-wage workers are using lower-quality digital tools for no good reason.

Recently, we started looking for a new applicant tracking system at Oceans and so I queried my network. Everyone seemed to be recommending the same trendy, venture-backed ATS startup, so I arranged for a sales call. And it was all going swimmingly until we got to the worst part of every SaaS conversation: pricing.

The startup charges per seat, with a minimum seat count of 10% of total headcount. And that is every employee, not just recruiters: interns, part-timers, doesn’t matter.

If you’re a large tech company and the average comp of your employees is $200k+, the per-seat fee itself isn’t staggering; at those salary levels, paying a small percentage to manage recruiting is no big deal. 

But for most normal businesses, the cost of that system becomes prohibitive. I cannot justify paying a startup large amounts of money that could be put directly into the pocket of an employee. Outside of high-salary ecosystems like the tech bubble, the aggregate cost of SaaS products has become so exorbitant that it now competes directly with the wages of the people who use them.
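
To make the math concrete, here’s a minimal sketch of the dynamic. Every number and name in it is invented for illustration; this is not the vendor’s actual pricing, just the per-seat-with-a-headcount-minimum structure described above:

```python
# Hypothetical illustration: all figures are made up, not any vendor's real pricing.
def ats_cost_share(headcount: int, avg_salary: float, price_per_seat: float,
                   min_seat_fraction: float = 0.10) -> float:
    """Return the annual ATS bill as a share of total payroll, given per-seat
    pricing with a minimum seat count of 10% of headcount."""
    seats = max(1, round(headcount * min_seat_fraction))
    annual_bill = seats * price_per_seat
    payroll = headcount * avg_salary
    return annual_bill / payroll

# Same per-seat price, very different burden relative to wages:
print(f"{ats_cost_share(1000, 200_000, 12_000):.1%}")  # big tech (~$200k comp): 0.6% of payroll
print(f"{ats_cost_share(1000, 30_000, 12_000):.1%}")   # services firm (~$30k comp): 4.0% of payroll
```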

The net effect of this pricing model is a new form of workplace wealth inequity: the SaaS gap. Just as lower-wage physical workers have to make do with inferior tools, lower-wage digital workers are now stuck with inferior software.

This is an opportunity for smart SaaS companies. The ATS startup could easily have captured my business by adjusting its pricing to the relative wages of my workers; I’m fine with the fee being 2% of wages at both big tech and my smaller services company. And so is big tech – they don’t care and there is no risk of cannibalization there.

Wage-adjusted pricing can still be profitable. The whole point of SaaS is its relatively high margins. With physical workers, better tools typically have very real production costs that cannot be avoided on a per-tool basis. With digital workers, better tools generally mean de minimis additions in server and support costs plus an upfront sales and integration cost, all of which pale in comparison to the sunk cost of developing a SaaS product.

It does require some internal adjustments. If your sales force is being compensated as a percentage of deal value, they’re likely to ignore lower-wage employers, and many SaaS companies run all deals through a one-size-fits-all sales process. But this is where self-serve options and salary-based integration specialists shine. For my 2% of worker wages, I expect a high-quality fixed product but am willing to take on much of the variable burden myself: I know what I need, who has it, and how to configure it. I would gladly put money back in the pocket of our Divers and take on the integration burden myself; I just need to find a SaaS provider smart enough to take that business. In the words of the best clients: please, take my money.

While I’ve tried to keep this in business terms, it is impossible to ignore that there is also a moral angle here. To me, it is unconscionable to prevent lower-wage populations from accessing higher-quality tooling simply because you are too lazy to set up smart business processes. That decision has ripples: lower-wage workers use lower-quality SaaS and thus never get trained on the higher-quality tools that would allow them to rise in the workforce. They expend time and energy needlessly, burn out faster, and generally live worse worklives. If equity is part of your mission, you need to get on this.

Consider productive uncertainty as a more mission-aligned approach.

In most sports, you always know where you stand. The score is continuously updated, so whether it is a touchdown or a goal or a basket, you know the balance between you and your opponent at all times. And you often make strategic decisions with this knowledge: slow the pace down here, speed it up there.

Boxing is a notable exception. Knockouts aside, it is scored on a round-by-round basis, but you don’t know the score until after the bout has concluded. And because you only have a general sense of how you are doing, most boxers will continue to actively fight until the last bell, in case their naive understanding doesn’t match what is on the scorecards.

With the focus on OKRs and metrics, modern business tends to feel like most sports. You have a target and at least somewhat continuous measurement, so you can effectively judge how hard to lean in given your progress and the time left in the period (a sprint, a quarter, a fiscal year, whatever). 

This visibility is typically championed by managers, who then have a better understanding of velocity and can decide where to make investments in order to meet expectations. This aggregates all the way up the hierarchy to the CEO, who can balance the progress against the stock market and competitors.

But what if some parts of work need to be more like boxing? Instead of working toward a target, you simply continue to fight as actively as you can for the best possible outcome.

To say “your sales quota is $10k” is to make it more likely that each of your salespeople will land around that number, which is great for predictability but not for maximization. Whereas “we measure using deal value; fight for every deal” will create greater variability but likely better total outcomes; the productive uncertainty keeps people focused on continuous incremental gains.

This is more about targets than it is about OKRs. Knowing what you are trying to accomplish is certainly in line with a boxing mentality; matches are scored on clear criteria that every boxer understands. But rather than setting a specific target, you use your understanding of the criteria to fight for every point.

Some workplaces already function this way and reveal some of the risks. Emergency rooms, for example, operate on the assumption that every person is worth helping, not on the notion that you simply have to help more than the target for the day. And because they fight for each life, ER staff have one of the highest burnout rates and worst worklife balances of any team in a hospital. Elite boxers fight only once or twice a year for good reasons.

But there are other structural ways to combat burnout: boxing has a limited number of timed rounds precisely to prevent injury. And so while productive uncertainty isn’t a fit for every team, it is worthwhile to question where targets make sense versus simply establishing a system for scoring and encouraging a point-by-point mentality.

And this matters, especially in mission-aligned businesses. Sales quotas aren’t vital to the advancement of the world but saving lives is. At Oceans, every sale is a job opportunity for someone who would otherwise be forced to leave their home and those they love; the target is not “100 net new accounts next quarter” – it is as many as we can possibly win. I don’t want “less than 5% regrettable attrition” – I want 0%. Because that is as mission-aligned as we can possibly be.

The process is easy: replace the word “AI” (or whatever they are using) with “calculator” and see what it does to the comment.

And the reason it works is simple. We already know that the introduction of the calculator did not destroy humanity. It didn’t reduce the number of jobs, or make people less smart, or create any of the cataclysmic outcomes that skeptics at the time claimed it would.

The test also works for positive comments. We know that calculators didn’t propel us into a massive global wave of prosperity, where we all sit around making art and exploring the universe.

So if the argument being made against AI is the same as the one being made about the calculator, it is likely to be specious and can safely be ignored. This is a handy heuristic that allows you to focus on thoughtful, nuanced critiques of AI that are actually important.
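
If you want to run the test mechanically, here’s a minimal sketch. The function name and the term list are just illustrative assumptions; swap in whatever term the comment actually uses:

```python
import re

def calculator_test(text: str, terms=("Generative AI", "GenAI", "AI")) -> str:
    """Replace AI-related terms with 'a calculator' to see whether the argument still holds."""
    for term in terms:
        # Whole-word, case-sensitive replacement; longer terms go first so "AI" doesn't clobber them.
        text = re.sub(rf"\b{re.escape(term)}\b", "a calculator", text)
    return text

print(calculator_test("Higher confidence in GenAI is associated with less critical thinking."))
# -> "Higher confidence in a calculator is associated with less critical thinking."
```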

Here’s an example of the calculator test run on the abstract from a recent (ridiculous, poorly run, self-report) paper about AI and critical thinking. The only change is a find/replace of “GenAI” with “a calculator”:

“The rise of Generative AI (a calculator) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using a calculator, and 2) when and why a calculator affects their effort to do so. Participants shared 936 first-hand examples of using a calculator in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in a calculator are predictive of whether critical thinking is enacted and the effort of doing so in a calculator-assisted tasks. Specifically, higher confidence in a calculator is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, a calculator shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing a calculator tools for knowledge work.”

See how it sounds both cogent and ridiculous at the same time? That is a sign that this paper can be safely ignored. It reads like a debate from the 70s, where some well-meaning alarmist predicts the death of critical thinking by arguing that people’s degree of trust in a calculator is inversely proportional to their ability to think critically.

The argument is internally consistent and hangs together, and yet we know it to be false: calculator usage doesn’t meaningfully reduce critical thinking or turn us into information verification machines. Instead, calculators freed up a tremendous number of very smart people to do very smart things instead.

Want to try a paper worth paying attention to? Let’s take an abstract from one by Timnit Gebru.

“Rising concern for the societal implications of calculators has inspired a wave of academic and journalistic literature in which deployed calculators are audited for harm by investigators from outside the organizations deploying the calculators. However, it remains challenging for practitioners to identify the harmful repercussions of their own calculators prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source.

In this paper, we introduce a framework for algorithmic auditing that supports calculator development end-to-end, to be applied throughout the internal organization development life-cycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization’s values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale calculators by embedding a robust process to ensure audit integrity.”

Notice how this one sounds ridiculous? That’s because Gebru isn’t making a claim about calculators; she and her co-authors are saying something meaningful that is unique to artificial intelligence systems. And so you know this paper isn’t specious and should be read in full.

AI is important and so are many of the debates about how it affects our future. But there is a difference between AI being important and every comment made about AI being important. Learning how to filter arguments about AI is key, so TI-83 it and have some fun!

Job descriptions are always imperfect, so rather than pitch what is written, pitch what makes sense.

Recently, I’ve been helping out at a company where many of the senior leaders didn’t have updated job descriptions. This isn’t uncommon at high-growth startups, where hiring competent generalists and letting them loose on the things that need doing often works surprisingly well. 

Until it doesn’t. Growth means a constant balance between creeping scope and new hires, so at some point, a lack of defined swimlanes is a recipe for the Type 1/Type 2 errors of the workplace: too many things that no one owns and too many things that everyone thinks they own. Periodically refreshing job descriptions helps avoid both, while also allowing for employee growth and succession planning.

But writing good job descriptions is hard. Ideally, they spell out both the outcomes for which someone will be accountable and the levers they’ll pull to accomplish them, while somehow packaging that in an external-friendly format that is readable by people who aren’t familiar with the details of the business.

That’s why finding the hidden bullet point is so important. 

When interviewing for jobs, most of us fall victim to the tendency to “teach to the test” – you assume that the hiring process is a perfect assessment and then optimize for the highest possible score. This is due in large part to the education system’s emphasis on standardized testing, which rewards this type of behavior; you apply at work what you learned in school.

But by accepting that job descriptions are imperfect representations of what the work actually is, you can change your strategy. 

Rather than looking at the bullet points as a series of checkboxes, imagine them as brushstrokes, meant to give the impression of a scene without being photorealistic. Your job in the hiring process then becomes to place yourself in the scene by looking at the gestalt and convincing me that you fit.

Take this job at Oceans. I’d like to think that I did reasonably well at describing the role: I talk about the legs and arms of your T, what you’ll actually be doing, and how you’ll be assessed.

But this is a 600-word summary of someone’s entire worklife; there is no way I could possibly fully describe every detail. Ultimately, the bullet points are not prescriptive but rather descriptive; they are meant to give you an impression of the role overall, rather than a detailed checklist that you’ll wake up and follow.

And so the candidates who impress are the ones who find the hidden bullet point. You signal this by asking questions like “What about upselling across product offerings?” or “How will I be involved in hiring?” – these are key expansions that take the basic themes and extend them.

You then use those answers to demonstrate your fitness. “In other roles like this, I’ve…” or “My approach in situations like these is…” are the kind of phrases that show that you’re effectively pattern matching across larger experiences and that you fit in the scene.

This might be uncomfortable for some people, as it can feel like an overstep. But you want to work at the kind of place that welcomes this collaboration; places that hire people who are “teaching to the test” in interviews are generally the same places that fall victim to those Type 1/Type 2 workplace errors. Finding the hidden bullet point not only helps the right companies find you but helps you find the right workplaces that will support your growth and allow you to expand.

The more senior the role, the fewer the candidates, and thus the less it makes sense to restrict the pipeline.

Last week, a friend posted on LinkedIn about a role: $1.5m-$2.3m comp (base + equity), only hiring via referral. As someone who has publicly pledged never to accept more than $1m a year in comp, I suggested that the salary range was an immediate red flag. But it is the referral-only requirement that really bothers me.

Broadly speaking, workers are a combination of two things: skills (things they can do) and temperament (the way they do those things). Most companies require large numbers of lower-skill employees, with successively smaller numbers of higher-skill employees at each level above them. Temperament matters at all levels, because companies aren’t just giant skill machines and you actually need the gears to want to work together. 

The problem is that skills are relatively easy to assess through the hiring process but temperament is much harder; the short, artificial nature of interviews means you don’t really know much about how someone actually does the work until six months or more into working with them.

This is why employers often rely on referrals. Because of homophily (the tendency of like to attract like), it is generally true that if you enjoy working with a referrer, you’ll enjoy working with the referral.

Here is where it gets tricky. Because you will always have a larger supply of lower-skill talent (most people can do most things at the absolute bottom of the skill pyramid), the primary differentiating factor for lower-skill roles is temperament. Which means referrals should be far more important in low-skill roles.

But that is the precise opposite of what happens. Instead, higher-skill roles are much more likely to be referral-only, despite the fact that the relative rarity of high-skill, good-temperament candidates creates a much smaller hiring pool. Temperament is a much smaller differentiator when only a few people have the necessary skills.

In some ways, all of this is moot because in reality, no role should be referral-only. There is overwhelming evidence that relying on referrals alone increases systemic barriers related to gender, ethnicity, credentials, etc. Remember that homophily? It applies to more than just temperament. Referral-only recruiting is anti-science and pro-bias.

But if you insist on referral-only, it should be primarily used for lower-skill roles, where it is a better differentiator. This also avoids magnifying the bias problem by conflating it with higher-skill roles that typically come with higher compensation. If you’re using referral-only and then paying those candidates millions of dollars, you’re being sexist, racist, etc. at the highest possible scale of wealth-gap creation.

The death of shared experience is a fundamental paradigm shift…and it will absolutely wreck your customer service department.

At some point, every teen ends up in the same conversation: how do I know that the reality I am experiencing is the same one you are? The context changes (as does the amount of alcohol consumed) but the easy example is always color. I look at the sky and call it blue, and so do you, but that is just because we have both been taught to label the experience that way. How do I know that your blue is my blue?

The truth, as most adults come to realize, is that functionally it doesn’t matter. As long as we agree on the connection between the label and the experience, and both of those are stable, we can communicate with each other.

And that’s useful. One of my many jobs in college was working the IT helpdesk. And it was relatively easy to support people over the phone because the interface was predictable; I can tell you to go to the lower right and click the blue button, because I know what I see is what you see. Sometimes there are errors when people encode the information differently (Do you think the blue button is more of a purple?) but as long as you find a way to translate, it works.

One of the recent promises of AI optimists is fully personalized interfaces. They imagine a world that adapts to us to such a degree that there is no one standard way of engaging. This is often presented as being modular at first, where designers will create discrete blocks that rearrange depending on your needs, and then growing increasingly adaptive as the blocks become smaller and smaller.

But this disrupts one of the most fundamental assumptions that allows humans to collaborate: shared experience. In a future where I have literally no way of replicating the experience you’re having, how am I supposed to support you when you need help? How can we work together, when we literally exist in different worlds?

The easy, AI-optimistic answer is that I don’t need to, because the AI also does the supporting and collaborating. But even if we believe that one AI is going to support you with another AI, what happens when the AIs aren’t having the same experience? And how are humans helping you before the support AI gets trained?

Hyperpersonalization feels like a clear win; if all my experiences are tailored to fit me perfectly, how can that possibly be a bad thing? But if you take it to the logical conclusion, where the overlap in the Venn diagram of human experiences drifts to nothing…that has profound implications for humanity.

And for your customer service team. This isn’t just drunken teenage musing; shared experience is fundamental to how businesses currently operate. And so at the same time you’re investing in the latest technology to make hyperpersonalization possible, you have to also invest in the business infrastructure that makes it supportable.

Our tendency to confuse solutions with causes often traps us; when it comes to plugging leaks, the answer can’t just be bailing faster.

Take racism. We know that empathetic listening, coupled with strong challenges, can convert hardliners. And we have plenty of examples: Ann Atwater and C. P. Ellis. Matthew Stevenson and Derek Black. Radical kindness does seem to be one potential solution to abject hatred.

At the same time, plenty of well-meaning people have taken that solution and suggested the false corollary that a lack of kindness is what radicalizes young men in the first place. They spin tales of how young White men experience hardship that causes them to start down the road to racism and it feels intuitively true: if kindness is the solution, then a lack of kindness must be the cause.

But C.P. Ellis didn’t become a Ku Klux Klan leader because Black people were unkind to him; he lived in extreme poverty and parroted the beliefs of his KKK-leading father. Derek Black had almost no interaction with Black people and yet hated them intensely, encouraged by his Stormfront-founding father (I’m noticing a theme). Americans did not enslave Africans because they were wronged by them. The boat is not leaking for lack of bailing.

This is particularly relevant to those of us in tech right now. In the wake of Mark Zuckerberg calling for an increase in toxic masculinity in the workplace, there has been a fair amount of online handwringing about what can be done to combat the pervasive sexist behavior of men in tech. And far too many of the suggestions sound like “Well, if women were just nicer to men in the first place…”

No. No no no.

It isn’t just the -isms of the world; it is any behavior change. The crusade to reduce smoking before it killed us all started with anti-smoking ads; in 1967, the FCC applied the Fairness Doctrine to cigarette commercials, requiring stations that ran them to also air anti-smoking ads. But of course smoking continued: you can’t bail faster than the leaks.

It took us 30 years to figure out that instead of running matched anti-cigarette ads, we should just ban cigarette advertising to begin with; the Master Settlement Agreement didn’t happen until 1998.

So how can you prevent these false syllogisms when you’re designing interventions?

I often talk about the five behavioral archetypes: Always, Never, Sometimes, Started, Stopped. The false syllogisms above are really a conflation of Never and Stopped; if radical kindness and anti-smoking ads can cause someone to Stop, they can also cause them to Never. 

But that isn’t always true. Be deliberate and systematic in your approach to gathering insights, remember that Never and Stopped are not equivalents, and investigate the two behavioral states as clearly different to reveal where the pressures themselves diverge.

The thin line between ambition and moral bankruptcy isn’t just about what we do but what we allow from others. In the end, we are what we tolerate.

The fabulous Katie Scarpa recently opened a role for an Integration Specialist on her team at Oceans. And almost immediately, she sent me a very impressive resume to look at. The candidate’s claim to fame? Inventing time travel.

Oddly, he didn’t explicitly mention his invention. But the first job on his resume was “Integration Specialist at Oceans”, which he apparently has been doing since October of last year. And that’s a truly impressive feat, since we just posted for the first hire on this team a few weeks ago.

Now if it were me and I invented time travel, you better believe that would be my resume headline. But no, this humble candidate just slipped it in casually, like it was no big deal. “I’m already an Integration Specialist at Oceans, so you should…hire me to be an Integration Specialist at Oceans.”

It makes sense, until it doesn’t.

This is apparently now a common tactic: trying to trick algorithmic resume screening by telling it that you’re perfect for the job by pretending you already do it. Presumably, once you cheat your way past the system, the hiring manager is supposed to respect your hustle and want to meet you so you can shoot your shot.

It reminds me of Apple Cider Vinegar on Netflix (I haven’t watched it but my partner loves telling me about it). She was relating a scene in which the main character Belle bluffs her way into a publishing meeting by pretending to have an appointment. Belle confesses the lie, lies again and gets caught, but the publisher accepts her anyway and just suggests lies that will sell better to the public.

Bluffing your way into a meeting is the sort of thing you hear about in startup culture all the time; it is meant to be a sign of how determined and gritty you are. If the lie is discovered, it becomes a form of recommendation: look how hard I’ll work to make this startup succeed.

The liars aren’t all that interesting to me, because I understand the candidate who looks at the current job market and, desperate simply to be seen, fakes a resume to get past the screener. They probably tell themselves it isn’t a lie, that the hiring manager will know they are simply hustling, and that Shakespeare had it right: “My poverty, but not my will, consents.”

What interests me is the hiring manager who goes along with it. The investor or publisher who decides that lying is a virtue, not a liability. It isn’t Romeo and Juliet; it is Macbeth – being a lord but wanting to be king.

All of us are at least sometimes in positions of power. We choose where to spend our money, time, attention, and love. And there will always be people who don’t have enough of those things and so will do what they feel they must to get them. Which means that the limiting factor in the development of our culture is how we choose to allocate scarce resources, more than how we choose to pursue them.

Faking your resume to get past AI doesn’t work at Oceans because a human reads every application (we even say it at the top of our job postings!). And we don’t reward deceit, whether it comes from determination or desperation. Because celebrating growth and creating opportunity means being a good steward of those resources, even if it means losing out on hiring the inventor of time travel (he’d probably just get us stuck in a temporal paradox anyway). Sure, you might miss out on that rare apothecary whose willingness to hustle gets you more More MORE…but at the end of the day, you are what you accept. So accept better.