It has been a very long year here at Oceans. We doubled in size and put in place the structures that allow us to do it again in 2026, all while managing through a Chikungunya epidemic and a devastating cyclone. So, for the holidays and with the help of my Diver Abdul, I decided to send personalized video messages to my team – all 500+ of them.

The video itself is a bit unusual (I included the one for our CEO Ian below). Because of the strong cultural values around family in Sri Lanka, it is targeted not at the employee but at their loved ones, thanking them for the support they provide that makes it possible for our Divers to go deep with their Clients.

Each video used the name and pronouns of the employee but was otherwise similar in theme. Which, of course, meant that it would have been easy to just use generative AI. Because my personal recording setup never changes, I have thousands of hours of video of me in the same white shirt and blazer, with the same lighting, on the same black background. I already look a little bit like Max Headroom.

So I’m confident that technically, the AI-generated videos would be indistinguishable from what we actually made; I fully believe that AI can do a convincing impression of me, with enough variation to make videos that feel individualized.


But that entirely misses the point.

I am generally a fan of industrialization. I recognize that by moving from piecework to standardized forms of production, we have revolutionized what is available to the average person. And in a time when many people feel cripplingly lonely, it is tempting to want to scale empathy through that same process. After all, if it is indistinguishable from the real thing, it should have the same effect.

But knowledge changes the frame. It is why magicians don’t reveal their tricks: knowing how they work can ruin the feeling of wonder we get from seeing them performed. Technically identical, AI-generated videos simply do not have the same meaning as sitting down and taking the time to think about all 500+ people in my org as I record their message.

This is going to become even more important in the years to come. Just as carving a gift by hand is different from buying it from a store, the things that we create for others will increase in interpersonal value. Yes, there will be an AI avatar of me at some point that handles all sorts of things. But that only highlights how important getting the real me is. We are defined by what we choose to automate and what we choose to honor; the future is predicated on our ability to choose wisely.

As a social psychologist by training, I always loved the TV show “What Would You Do?”. Loosely based on social experiments, it uses a hidden camera format to show the varied reactions people have to ordinary situations based on environmental cues.

One of my favorite episodes from the very first season had a simple premise: would people inform strangers of minor embarrassments like food in their teeth, toilet paper on their shoes, or an unbuttoned shirt? And I love it because of how rarely people speak up; the vast majority of folks clearly notice the problem but do nothing about it.

This is top of mind because recently, I was reviewing video applications for an Integration Specialist role at Oceans. In one stage of the process, we ask people to record a short video with a simple prompt: share one of your most deeply held but controversial opinions about the workplace.

I like this prompt because it is both thoughtful and personal, mimicking the psychologically safe communication that Integration Specialists have to facilitate for our Divers and Clients. And we’ve had good success using it to find talent for the current team.

Given the epically bad job market (or how epically awesome it is to work at Oceans), we got several thousand applicants for the role. So even though this was a later funnel stage, we still had about a hundred videos to watch.

It was painfully obvious who was using AI rather than thinking through their own beliefs. Across those videos, I now know the various answers that the flavors of LLMs tend to give (four-day work week, quality over speed, performance is about alignment), down to the bullet points and supporting evidence. Some candidates were better or worse at presenting them, but the beliefs themselves? Computer-generated.

Since it is a later pipeline stage, we generally give feedback even to the candidates who don’t move on. But how do you tell someone they’ve got AI in their teeth?

From a macro perspective, it is important because it is likely that candidates will continue to miss out on roles if they keep going as they are. But for an employer brand, staying quiet is the safe bet; at least some of the time, people shoot the messenger, and we lose nothing if candidates continue to fail elsewhere.

Fortunately, morality isn’t game theory, and one of Oceans’ values is “Integrity is our Superpower”, so we are going to deliver the feedback. But I suspect that not everyone will make the same choice, and as we think about the divides that AI usage will create, it is useful to remember that many of them aren’t about AI at all; knowing which behaviors to do or not do is largely a function of the feedback you get from society, and that means all the sexist/racist/classist tropes apply. If we want a better and more equitable world, we have to be willing to tell people when using AI is hurting rather than helping.

But it plays one on TV.

In the mid-80s, Vicks launched cough syrup commercials with a simple hook: they got actors who played doctors on soap operas to endorse the benefits of the product. “I’m not a doctor but I play one on TV” became an instant notable quotable.

Recently, I got an email from a candidate objecting to Oceans’ employment contract. And because I take these things seriously, I sat down to see if there was anything we could do to improve it.

From the email’s first bullet point, it became clear what had happened: the candidate had loaded the contract into ChatGPT and then sent over the pasted output. To confirm, I tried it; ChatGPT gave me more or less the same arguments and even offered to produce a redlined version.

In some respects, I love this. Most folks don’t know how to access a good lawyer and having any legal advice can be better than none at all.

But the Vicks message was that listening to actors with no medical training is foolish; the whole point was that just because someone sounds like an expert doesn’t mean they are. The commercial was encouraging adults not to play doctor themselves but to use the medication that real doctors recommended for them.

When you give ChatGPT a legal document, by default it will give back an answer in the unique language that is legalese. And because it sounds like a lawyer (social psychologists would say the advice has face validity), it is easy to accept it as credible.

But ChatGPT is essentially an actor; it repeats the lines in a certain way that signals expertise but doesn’t genuinely understand them. Out of the eight points the candidate sent over, seven were clear misreadings of the contract that no actual lawyer would make; only one was a true disagreement in principle and relatively minor.

This may cause the candidate to reject a significant opportunity based on illusory concerns. I’m going to reach out to clarify but I am absolutely not a lawyer, don’t play one on TV, and won’t speak in legalese. Can being empathetic and direct beat out legalese? We’ll see.

It would be relatively easy for OpenAI to detect legal documents and refuse to engage with them, or to change tone and give advice that sounds more like your neighbor and less like an actual lawyer.

But in the current system, they have no incentive to do that: they benefit from the inferred expertise of their presentation. It makes their product look powerful and worth subscribing to.

And this is why we need to enforce the laws we have. If you induce someone to think you are offering qualified legal advice without being an actual lawyer, you’re both civilly and criminally liable; AI-generated advice is no different, except you’re holding the company and not the AI responsible. I’m not a particularly litigious person, but without economic and criminal damage, this won’t change.

So let’s see the lawsuits. Maybe ChatGPT will choose to self-represent in court.

A lot of hustle porn (a term I first heard from Alexis Ohanian Sr.) has the same bombastic quality as sexual porn; a recent article I saw on LinkedIn screamed that you absolutely should expect your employees to spend 20% of their time outside of work on professional growth activities, because otherwise they must be terrible performers (!?!).

That isn’t real. That is the movie version of an important, messy part of our lives. And just like porn can damage our understanding of reality, particularly for those who are new to sex, hustle porn can damage our understanding of what work should actually look and feel like, particularly for those who are new to the workplace.

As part of my work at Oceans, I travel to Sri Lanka for two weeks every two months. These are planned trips, with disciplined routines before and after that focus on non-work activities. And I sleep pretty well on planes, so the 27-hour flight isn’t normally too bad.

But this time I sat next to screaming babies on both connections (to be clear, I would also scream if no one explained why my head felt like it was going to explode), so on a leadership team call last night, I apparently took an unintended nap (the picture above, lovingly captured by my team from the recording).

We joke about it now (and of course there is a Slack emoji) but at the time, my team freaked out; they were worried that something was seriously wrong. And that is the right reaction: work kills people, directly and indirectly, every single day.

Fortunately, I’m fine and, after a good night’s sleep, ready to get back to work worth doing alongside the fabulous folks at Oceans. But it is important both that we are honest about moments like these and that we don’t turn them into a culture of celebration.

This is one moment, in one meeting.

It isn’t a well-lit, in-focus shot of someone sleeping under their desk to show how passionate they are so a VC will deign to grace them with their next round of funding.

It isn’t the culture of Oceans and something we believe every employee should do, every day. It isn’t even something we think our C-suite should be doing.

Yes, sometimes work causes you to hustle a little more than usual because of its elastic nature; you can’t perfectly plan for every demand curve. But that’s a bug, not a feature.

How do we get rid of hustle porn? By not creating or engaging in it, of course, but that’s easier said than done; like porn, it exists because we consume it. So we need a better plan.

Maybe we need MakeWorkNotHustlePorn; acknowledging that sex is fun to watch and then encouraging the creation and distribution of sex that focuses on unsimulated reality is an appealing blueprint for how we tackle this problem. LinkedIn can do a better job of flattening the demand curve, reducing the availability of sensationalist posts. Posters can focus on authenticity and steward an understanding of work that is grounded in reality. And most importantly, we can choose to engage with content that is authentic and not always attractive (hello, shiny bald head).

Because it really does matter. How we work in public influences how future generations will work in private and if we don’t design that experience deliberately, we run the very real risk of damaging their relationship with one of life’s great joys. I believe in work worth doing; I don’t believe in dying for it.

When I wrote Start At The End, my goal was to help folks apply behavioral science in their everyday lives. And while it sold well and got good reviews, it was also an artifact fixed in time – the downside of books is that they can’t grow with you.

So in 2023, with the help of Lorraine Minister, I launched a full course with videos and practice exercises that built on the work of the book but updated it with my latest thinking. And after two years, we’ve earned enough to pay back our original development cost. And that means, as of today, the course is now available for free.

I recently gave a talk at GAABS about citizenship in applied behavioral science. Not how we serve the world (although that is important) but how we show up in our community, to advance the practice of applied behavioral science generally. My argument was essentially that we all need to do three things: learn (by advancing the field through scholarship), earn (by making the field financially viable), and serve (by giving back more than we take from the community).

But it can’t just be talk: if I believe it, I have to do it. And so, on my birthday, I decided that the best gift I could get would be to give something away. This course, which I so loved creating, is it.

Enjoy at https://mattwallaert.com/free-course/

Technology has always been at its best when it allows us to selectively reject it, to choose our own hardship.

I don’t really use AI for much, but periodically it comes in handy. And then, just when I’m not paying attention, it punches me right in the gut.

Because I’m doing a workshop on pressure maps, the organizers asked me to generate a sample. So I decided to make a pressure map about the promoting and inhibiting pressures for using a pressure map (meta, I know). And because it is 8:30pm the night before and I’m not sleeping much these days, I needed a thought partner.

Enter ChatGPT. I wrote a short prompt explaining what a pressure map is and what I wanted and sure enough, it spat back a very comprehensive list of pressures. But as I was congratulating myself on such good prompt engineering, I noticed it had appended the following:

“Want to contrast pressure maps with other tools (e.g., COM-B, Fogg, behavior over time graphs)? Or identify behavioral nudges that increase likelihood someone actually makes one?”

Paging Jared Peterson; this thing is trying to start a framework competition! And it clearly wasn’t just using my prompt; it already knew all about pressure maps. So I asked it who invented the pressure map.

“The concept of the pressure map, distinguishing between promoting and inhibiting pressures influencing behavior, was developed by behavioral scientist Matt Wallaert. He introduced this tool as a core component of his Intervention Design Process (IDP), detailed in his book Start at the End. The pressure map serves to identify and visualize the factors that encourage or discourage a particular behavior, aiding in the design of effective interventions.”

Ruh roh. Clearly someone fed this thing my book. So I threw it a curve ball: is pressure mapping similar to Lewinian field theory?

“Yes—very similar. In fact, Wallaert’s pressure map is explicitly inspired by Kurt Lewin’s field theory. Lewin’s field theory (1940s) conceptualized behavior (B) as a function of the person (P) and their environment (E): B = f(P, E). Within this “field,” driving forces promote movement toward a goal and restraining forces oppose it. Lewin used this to explain both action and stasis (e.g., why people stay in undesired states). Wallaert’s promoting and inhibiting pressures directly parallel Lewin’s driving and restraining forces. The key differences:

 <insert very smart chart here>

In short: pressure maps operationalize Lewinian dynamics for intervention design. Wallaert takes the theory and makes it actionable.”

Well…shit.

I’m not scared of AI. People still like to do workshops with me, because my jokes are better. I’m happy that someone can have a cogent discussion with a computer about applied behavioral science.

But as Joanna Maciejewska said, “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” I like thinking! I have a high need for cognition! That ChatGPT can save time and cognitive effort for someone else doesn’t mean I want it to do it for me in my field of expertise.

But that’s the magic. Your laundry is my PhD thesis and vice versa. We all like thinking but we don’t all like thinking about the same things. The promise of AI isn’t that it automates everything but that it has the potential to automate everything, so that we can then choose what we don’t want to automate, what we want only to enhance, or what we want to do completely on our own.

It is like really great camping gear. Some people love glamping. Some people want to kill their own food with a spear they chipped out of squirrel bones. Technology has always been at its best when it allows us to selectively reject it, to choose our own hardship.

And so the most important feature of AI is always, always going to be the ability to turn it off. To be human is to struggle; utopia is just choosing what those struggles are going to be.

Lower-wage workers are using lower-quality digital tools for no good reason.

Recently, we started looking for a new applicant tracking system at Oceans and so I queried my network. Everyone seemed to be recommending the same trendy, venture-backed ATS startup, so I arranged for a sales call. And it was all going swimmingly until we got to the worst part of every SaaS conversation: pricing.

The startup charges per seat, with a minimum of 10% of the employee count. And that is every employee, not just recruiters: interns, part-timers, doesn’t matter. 

If you’re a large tech company and the average comp of your employees is $200k+, the per-seat fee itself isn’t staggering; at those salary levels, paying a small percentage to manage recruiting is no big deal. 

But for most normal businesses, the cost of that system becomes prohibitive. I cannot justify paying a startup large amounts of money that could be put directly into the pocket of an employee. Outside of high-salary ecosystems like the tech bubble, the aggregate cost of SaaS products has become so exorbitant that they now compete directly with the wages of the people who use them.

The net effect of this system of pricing is a new form of workplace wealth inequity: the SaaS gap. Just as lower-wage physical workers have to use inferior tools, lower-wage digital workers now have the same issue.

This is an opportunity for smart SaaS companies. The ATS could easily have captured my business by adjusting their pricing to the relative wages of my workers; I’m fine with it being 2% at both big tech and my smaller services company. And so is big tech – they don’t care and there is no risk of cannibalization there.
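
To make the gap concrete, here is a back-of-the-envelope sketch. The seat price, headcounts, and wages are made-up numbers (the post doesn’t name the vendor’s actual rates); the point is the shape of the comparison, not the specific figures.

```python
# Toy comparison of flat per-seat pricing vs. wage-adjusted pricing.
# All figures are illustrative assumptions, not any vendor's real rates.

SEAT_MINIMUM = 0.10      # seats required for at least 10% of total headcount
FLAT_SEAT_PRICE = 1_200  # assumed $100/month/seat
WAGE_RATE = 0.02         # the 2%-of-wages alternative proposed above

def seats(headcount: int) -> int:
    """Seats the vendor forces you to buy under the 10% minimum."""
    return max(1, int(headcount * SEAT_MINIMUM))

for name, headcount, avg_wage in [("big tech", 5_000, 200_000),
                                  ("services firm", 500, 15_000)]:
    flat = seats(headcount) * FLAT_SEAT_PRICE
    adjusted = seats(headcount) * avg_wage * WAGE_RATE
    payroll = headcount * avg_wage
    print(f"{name}: flat ${flat:,.0f} ({flat / payroll:.2%} of payroll) "
          f"vs. wage-adjusted ${adjusted:,.0f}")
```

Under those assumptions, the flat fee is a rounding error for big tech but the equivalent of four full-time salaries at the services firm; wage-adjusting flips it, charging big tech more while dropping the services firm’s bill to a single salary’s worth of software. The numbers are invented; the asymmetry is the point.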

Wage-adjusted pricing can still be profitable. The whole point of SaaS is its relatively high margins. With physical workers, better tools typically have very real production costs that cannot be avoided on a per-tool basis. With digital workers, better tools generally mean de minimis additions in server and support costs plus an upfront sales and integration cost, all of which pale in comparison to the sunk cost of developing a SaaS product.

It does require some internal adjustments. If your sales force is being compensated as a percentage of deal value, they’re likely to ignore lower-wage employers, and many SaaS companies run all deals through a one-size-fits-all sales process. But this is where self-serve options and salary-based integration specialists shine. For my 2% of worker wages, I expect a high-quality fixed product but am willing to take on much of the variable burden myself: I know what I need, who has it, and how to configure it. I would gladly put money back in the pocket of our Divers and take on the integration burden myself; I just need to find a SaaS provider smart enough to take that business. In the words of the best clients: please, take my money.

While I’ve tried to keep this in business terms, it is impossible to ignore that there is also a moral angle here. To me, it is unconscionable to prevent lower-wage populations from accessing higher-quality tooling simply because you are too lazy to set up smart business processes. That decision has ripples: lower-wage workers use lower-quality SaaS and thus never get trained on the higher-quality tools that would allow them to rise in the workforce. They expend time and energy needlessly, burn out faster, and generally live worse worklives. If equity is part of your mission, you need to get on this.

Consider productive uncertainty as a more mission-aligned approach.

In most sports, you always know where you stand. The score is continuously updated, so whether it is a touchdown or a goal or a basket, you know the balance between you and your opponent at all times. And you often make strategic decisions with this knowledge: slow the pace down here, speed it up there.

Boxing is a notable exception. Knockouts aside, it is scored on a round-by-round basis, but you don’t know the score until after the bout has concluded. And because you only have a general sense of how you are doing, most boxers will continue to actively fight until the last bell, in case their naive understanding doesn’t match what is on the scorecards.

With the focus on OKRs and metrics, modern business tends to feel like most sports. You have a target and at least somewhat continuous measurement, so you can effectively judge how hard to lean in given your progress and the time left in the period (a sprint, a quarter, a fiscal year, whatever). 

This visibility is typically championed by managers, who then have a better understanding of velocity and can decide where to make investments in order to meet expectations. This aggregates all the way up the hierarchy to the CEO, who can balance the progress against the stock market and competitors.

But what if some parts of working need to be more like boxing? Instead of achieving a target, you simply continue to fight as actively as you can to reach the best possible outcome.

To say “your sales quota is $10k” is to make it more likely that each of your salespeople will land around that number, which is great for predictability but not for maximization. Whereas “we measure using deal value; fight for every deal” will create greater variability but likely better total outcomes; the productive uncertainty keeps people focused on continuous incremental gains.
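
As a toy illustration of that claim, here is a tiny simulation. The capacity distribution and the “coast once you clear the quota” rule are my assumptions; they simply encode the anchoring argument above, so treat the output as a sketch of the shape, not evidence.

```python
import random

random.seed(42)

QUOTA = 10_000
REPS = 100

# Each rep has a latent capacity: the total deal value they could close
# this quarter if they fought for every deal. Assumed distribution, not data.
capacities = [max(0, random.gauss(12_000, 4_000)) for _ in range(REPS)]

# Anchoring assumption: with a quota, reps coast once they clear ~110% of it.
quota_world = sum(min(c, QUOTA * 1.1) for c in capacities)
boxing_world = sum(capacities)  # no target, fight for every deal

print(f"quota world:  ${quota_world:,.0f}")
print(f"boxing world: ${boxing_world:,.0f}")
```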

This is more about targets than it is about OKRs. Knowing what you are trying to accomplish is certainly in line with a boxing mentality; matches are scored on clear criteria that every boxer understands. But rather than setting a specific target, you use your understanding of the criteria to fight for every point.

Some workplaces already function this way and reveal some of the risks. Emergency rooms, for example, operate on the assumption that every person is worth helping, not on the notion that you simply have to help more than the target for the day. And because they fight for each life, ER staff have one of the highest burnout rates and worst worklife balances of any team in a hospital. Elite boxers fight only once or twice a year for good reasons.

But there are other structural ways to combat burnout: a boxing match is a limited number of timed rounds precisely to prevent injury. And so while productive uncertainty isn’t a fit for every team, it is worthwhile to question where targets make sense versus simply establishing a system for scoring and encouraging a point-by-point mentality.

And this matters, especially in mission-aligned businesses. Sales quotas aren’t vital to the advancement of the world but saving lives is. At Oceans, every sale is a job opportunity for someone who would otherwise be forced to leave their home and those they love; the target is not “100 net new accounts next quarter” – it is as many as we can possibly win. I don’t want “less than 5% regrettable attrition” – I want 0%. Because that is as mission-aligned as we can possibly be.

The process is easy: replace the word “AI” (or whatever they are using) with “calculator” and see what it does to the comment.

And the reason it works is simple. We already know that the introduction of the calculator did not destroy humanity. It didn’t reduce the number of jobs, or make people less smart, or create any of the cataclysmic outcomes that skeptics at the time claimed it would.

This works for positive comments too. We know that calculators didn’t propel us into a massive global wave of prosperity, where we all sit around making art and exploring the universe.

So if the argument being made against AI is the same as the one being made about the calculator, it is likely to be specious and can safely be ignored. This is a handy heuristic that allows you to focus on thoughtful, nuanced critiques of AI that are actually important.
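
Mechanically, the test is nothing more than a find/replace, so you can run it over a whole article in a few lines of Python. A minimal sketch follows; the term list is mine, not canonical, so extend it for whatever the piece you’re reading uses.

```python
import re

# Swap AI terms for "a calculator" and re-read the claim with fresh eyes.
# Order matters: longer terms first, so "Generative AI" isn't half-replaced.
AI_TERMS = ["Generative AI", "GenAI", "AI"]

def calculator_test(text: str) -> str:
    for term in AI_TERMS:
        text = re.sub(rf"\b{term}\b", "a calculator", text)
    return text

claim = "Higher confidence in GenAI is associated with less critical thinking."
print(calculator_test(claim))
# -> "Higher confidence in a calculator is associated with less critical thinking."
```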

Here’s an example of the calculator test run on the abstract from a recent (ridiculous, poorly run, self-report) paper about AI and critical thinking. The only change is a find/replace on “GenAI” and “a calculator”:

“The rise of Generative AI (a calculator) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using a calculator, and 2) when and why a calculator affects their effort to do so. Participants shared 936 first-hand examples of using a calculator in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in a calculator are predictive of whether critical thinking is enacted and the effort of doing so in a calculator-assisted tasks. Specifically, higher confidence in a calculator is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, a calculator shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing a calculator tools for knowledge work.”

See how it sounds both cogent and ridiculous at the same time? That is a sign that this paper can be safely ignored. It reads like a debate from the 70s, where some well-meaning alarmist predicts the death of critical thinking by arguing that people’s degree of trust in a calculator is proportional to their ability to think critically.


The argument is internally consistent and hangs together, and yet we know it to be false: calculator usage doesn’t meaningfully reduce critical thinking or turn us into information verification machines. Instead, calculators freed up a tremendous number of very smart people to do very smart things instead.

Want to try a paper worth paying attention to? Let’s take an abstract from one by Timnit Gebru.

“Rising concern for the societal implications of calculators has inspired a wave of academic and journalistic literature in which deployed calculators are audited for harm by investigators from outside the organizations deploying the calculators. However, it remains challenging for practitioners to identify the harmful repercussions of their own calculators prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source.

In this paper, we introduce a framework for algorithmic auditing that supports calculator development end-to-end, to be applied throughout the internal organization development life-cycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization’s values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale calculators by embedding a robust process to ensure audit integrity.”

Notice how this one sounds ridiculous in a different way? That’s because Gebru isn’t making a claim that applies to calculators at all; she and her co-authors are saying something meaningful that is unique to artificial intelligence systems. And so you know this paper isn’t specious and should be read in full.

AI is important and so are many of the debates about how it affects our future. But there is a difference between AI being important and every comment made about AI being important. Learning how to filter AI-based arguments is key, so TI-83 it and have some fun!

Job descriptions are always imperfect, so rather than pitch what is written, pitch what makes sense.

Recently, I’ve been helping out at a company where many of the senior leaders didn’t have updated job descriptions. This isn’t uncommon at high-growth startups, where hiring competent generalists and letting them loose on the things that need doing often works surprisingly well. 

Until it doesn’t. Growth means a constant balance between creeping scope and new hires, so at some point, a lack of defined swimlanes is a recipe for the workplace version of Type 1/Type 2 errors: too many things that no one owns and too many things that everyone thinks they own. Periodically refreshing job descriptions helps avoid both, while also allowing for employee growth and succession planning.

But writing good job descriptions is hard. Ideally, they spell out both the outcomes for which someone will be accountable and the levers they’ll pull to accomplish them, while somehow packaging that in an external-friendly format that is readable by people who aren’t familiar with the details of the business.

That’s why finding the hidden bullet point is so important. 

When interviewing for jobs, most of us fall victim to the tendency to “teach to the test” – you assume that the hiring process is a perfect assessment and then optimize for the highest possible score. This is due in large part to the education system’s emphasis on standardized testing, which rewards this type of behavior; you apply to work what you learn in school.

But by accepting that job descriptions are imperfect representations of what the work actually is, you can change your strategy. 

Rather than looking at the bullet points as a series of checkboxes, imagine them as brushstrokes, meant to give the impression of a scene without being photorealistic. Your job in the hiring process then becomes to place yourself in the scene by looking at the gestalt and convincing me that you fit.

Take this job at Oceans. I’d like to think that I did reasonably well at describing the role: I talk about the legs and arms of your T, what you’ll actually be doing, and how you’ll be assessed.

But this is a 600-word summary of someone’s entire worklife; there is no way I could possibly fully describe every detail. Ultimately, the bullet points are not prescriptive but rather descriptive; they are meant to give you an impression of the role overall, rather than a detailed checklist that you’ll wake up and follow.

And so the candidates who impress are the ones who find the hidden bullet point. You signal this by asking questions like “What about upselling across product offerings?” or “How will I be involved in hiring?” – these are key expansions that take the basic themes and extend them.

You then use those answers to demonstrate your fitness. “In other roles like this, I’ve…” or “My approach in situations like these is…” are the kind of phrases that show that you’re effectively pattern matching across larger experiences and that you fit in the scene.

This might be uncomfortable for some people, as it can feel like an overstep. But you want to work at the kind of place that welcomes this collaboration; places that hire people who are “teaching to the test” in interviews are generally the same places that fall victim to those Type 1/Type 2 workplace errors. Finding the hidden bullet point not only helps the right companies find you but helps you find the right workplaces that will support your growth and allow you to expand.