For the last several years, I have been making myself available for free, first-come-first-served meetings in the style of academic office hours. They’re 30 minutes, 1:1, virtual, and guided by the participant on topics ranging from career advice to applied behavioral science.  And they’re specifically designed to address the inequities inherent in gatekeeping culture.

I’m a big believer in the power of framing to shift how we think about our behaviors.  When we talk about volunteering our time to help people via warm intros, it sounds positive.  And it is; we could be spending that time on ourselves.  But at the same time, there is another frame: that meeting via warm intro is a form of active discrimination against those who don’t have the same social access.  Yes, we’re helping people get ahead, but we’re also often helping the people who are already ahead to get even further ahead.  Participants in warm-intro meetings are more likely to be white, male, educated, etc., because those are the people most likely to already have access to the social elite; rather than addressing inequity, we’re magnifying it.

To measure our ability to create equitable access through open office hours, in 2021 we released our first Diversity Report, a concentrated effort to make sure that this system is in fact serving a broad range of underrepresented people.  In our 2022 edition, we started doing trend analysis, which we’ve continued this year.

I use the plural repeatedly throughout this report.  That’s because making office hours happen is a team effort; even if I’m the one actually showing up, there is a tremendous amount of work from a number of people to make sure that we follow up on action items, share our learnings, and prepare this report every year.  In particular, Melanie Perera and Alaanah Sallay from Oceans and Lorraine Minister are instrumental in connecting job seekers with opportunities, sending out materials, and preparing the Mentor Minutes for social media. I am grateful for all they do and hope you take a moment to celebrate them.

Before looking at the results, a few quick notes on methodology.  To gather the data, we set up a Google Forms survey and then used Zapier to automatically email participants with a link after each meeting, along with an end-of-year followup reminding them of the survey.  In addition to asking for qualitative feedback to help us improve, we asked basic demographic questions about age, gender identity, sexual orientation, ethnicity, etc.  No questions were required; all were multiple choice, with options presented in random order and “Other” and “Prefer not to say” options included.

For 2023, we went from ~550 meetings last year to more like ~750 meetings this year. We received 140 survey responses, giving us a response rate of ~19%.

That is lower than last year, so we’ve made some changes for 2024 to try to make sure we’re getting more accurate data.  For example, we’re now using Zoom’s automatic followup feature to give people the survey immediately after the meeting.  Even with a lower response rate, however, we still have a significant sample, so we can make some reasonable assumptions that this data is representative of the larger population of participants.  You could always make an argument that some segments are more likely to respond; caveat emptor.
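
For intuition about what “reasonable” means at our sample size, here is a rough back-of-the-envelope margin of error in Python (a sketch using the normal approximation at n = 140, ignoring non-response bias entirely):

    import math

    # Rough 95% margin of error for a reported proportion at n = 140,
    # using the normal approximation; this ignores non-response bias.
    n = 140
    for p in (0.5, 0.25, 0.1):
        moe = 1.96 * math.sqrt(p * (1 - p) / n)
        print(f"p = {p:.0%}: ±{moe:.1%}")
    # p = 50%: ±8.3%
    # p = 25%: ±7.2%
    # p = 10%: ±5.0%

That ±5–8 point band is why most single-digit year-over-year shifts in the sections below should be read as noise rather than signal.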

I’m changing the format somewhat this year: most data has remained relatively flat from last year (I’ve included the change from the 2022 numbers in parentheses), so it is more appropriate to save the commentary for the end.  As always, I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

Age

The mean age of respondents was 36 (+2) and the median was 35 (+2).  The best comparison is probably the median age of the US working population, which is 42, so overall we’re skewed a little younger.  However, the standard deviation was around 9 (+0), with participants ranging from 20 to 60, so there was a good bit of variability.

Gender

Among respondents, 57% (+1) identified as women, 37% (-1) identified as men, and 6% (+1) identified as non-binary/genderqueer. This is a fairly large overrepresentation of women and potentially a large overrepresentation of non-binary/genderqueer people, although that number is harder to evaluate because of the correlation with age.

Sexual Orientation

78% (+0) of respondents identified as heterosexual, with 22% (+0) identifying as some form of LGBQ.  This is significantly different than the base rate of 93% and 7%, respectively.

Race and Ethnicity

53% (+0) of respondents identified as White (base rate 77%), 9% (+2) as Black or African American (base rate 13%), 23% (+4) as Asian (base rate 6%), 1% (+0) as Native American (base rate 2%), and 14% (-6) as More Than One Ethnicity (base rate 2%).  In addition, 17% (+5) identified as Hispanic or Latino/a/x (base rate 18%), with Mexican, Mexican American, or Chicano/a/x as the largest group.

Other

27% (+7) of respondents are first-generation Americans (base rate 14%), while 27% (+4) are first-generation college graduates (base rate 35%).  37% (+10) view themselves as underrepresented in their field, while 10% (+1) are living in poverty and 9% (+4) identify as disabled. 27% (-14) did not add any additional tagging. 73% (-1) are currently living in the United States.

Commentary and Commitments

Demographically, there were surprisingly few changes this year to individual categories; most were within the margin of error for our sample size.  The largest change actually came in the number of folks who identified with no underrepresented categories of any kind: 9.3% (-3.3).  None of gender identity, ethnicity, or sexual orientation meaningfully changed, which suggests that this year, more Straight White Males saw themselves as reflecting other underrepresented identities.

On the one hand, that could indeed be progress: we could be reaching a different audience than in previous years.  Or perhaps Straight White Men are simply coming to recognize a broader array of potential ways in which someone can face challenges.  The cynical view, however, is potentially quite worrisome: that the language of representation is being co-opted by those struggling to find any possible way to distance themselves from the trappings of privilege.

Regardless of how you choose to interpret it, it does present a compelling case for why intersectionality needs to become the default way of looking at representation.  Saying that 90.7% of office hours were devoted to underrepresented folks obscures the very real difference between facing one demographic underrepresentation and facing several.

We made two important commitments in our 2022 report, both of which we were able to honor.  The first was to repurpose more of the content generated in office hours for a wider audience.  It isn’t particularly efficient for me to say the same thing over and over again, when we could be using office hours for custom questions and content.  So we did more editing this year to distribute clips of the advice I repeated most often.

We also promised to increase our available tooling.  This year, for example, we refined our job tracking spreadsheets so we can more easily refer people for open positions and released self-paced courses on applied behavioral science, thanks in large part to Lorraine Minister’s efforts as our Head of Education.

For 2024, we’re going to continue to focus on scalability:

  • AI-identified content.  Using new tooling from a stealth partner, we’re now able to automatically identify the phrases and examples I use most often and clip them for sharing.  The shift from manual to automatic identification should significantly increase our ability to release in a timely fashion.
  • Structured education.  Our approach to office hours was born out of my experiences in academia and I believe that 1:1 conversations deliver the most benefit when they augment structured learning content.  So we’ll be introducing new guides and classes this year to help cover some of the basics and make our 1:1 time more efficient.

Side Note: It is eerie how many of these percentages were the same as last year, despite having a different sample size.  I went back and checked the data repeatedly, just because it felt so unusual that, for example, the percentage of White participants remained exactly identical.  It is a good reminder that things often don’t change as much as we think they do; even if they feel different day-to-day, the prevailing pressures that created the circumstance remain the same and so repeats are likely.  It also reminds me that I need to do more to put my finger on the scale to make sure that next year, things do change.

Also, CapGemini still hasn’t released diversity numbers. So that makes it two years in a row that I’ve done what they are unwilling to.

(This is a post primarily about the results of an experiment, but I did build a tool as part of it that you can use to gather behavioral feedback from former coworkers; you can go directly to WorkWithMeAgain.com to use it for free without reading about the journey or my data.)

In 2020, I started directly soliciting feedback from former colleagues in a one-question survey: would they seek (+2) or avoid (-2) working with me again? It was a deliberate attempt to use a behavioral measure to understand how different demographics experienced our time together in the workplace. The data was gathered via an anonymous form and always more than six months after working with me, to avoid a recency bias.

We spend more than a third of our life at work, often with other people. And the quality of those interactions is the single greatest predictor of our satisfaction with our work. Not money, not commute time or hours at the office, but who we spend our time with. That means that if we’re interested in making a better world, how we show up for other people is one of the single most important decisions we make.

The last time I collected this data in 2020, I discovered that while most people would seek out working with me again, white women had particularly polarized reactions to my behavior at work. While the differences were not statistically significant, both non-white and non-men coworkers had a lower desire to work with me again and their standard deviations were higher – while some truly enjoyed the experience, others found it actively aversive.
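
For anyone who wants to replicate this kind of analysis, the core computation is simple; here is a minimal sketch in Python (the groups and scores below are illustrative placeholders, not my actual survey data):

    import pandas as pd

    # Illustrative data only: score is the -2 (avoid) to +2 (seek) response.
    df = pd.DataFrame({
        "group": ["white men", "white men", "white women",
                  "white women", "non-white", "non-white"],
        "score": [2, 1, 2, -2, 1, -1],
    })

    # The mean captures overall desire to work together again; the
    # standard deviation captures how polarized the reactions are.
    print(df.groupby("group")["score"].agg(["mean", "std", "count"]))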

After reading through the qualitative feedback they left, I started making changes. And now, three years later, I can see the results of those changes: both non-white and non-men coworkers showed a 10% increase in their desire to work together.

Interestingly, there was also a slight improvement for coworkers who were either white or men; showing up better for underrepresented groups helped me show up better for everyone.

So the obvious question is: what changed?

One of the limitations of this simple survey is that I can’t say for sure; while people could leave me qualitative feedback, those comments are snapshots of momentary interactions rather than reflections on a longer relationship.

That said, looking at the comments in 2020 and in 2023, a few potential differences emerge. These are filtered through my personal experience, however, so should be taken with a grain of salt.

One trend that seems clear between the two sets of comments is how my role (and thus I) was perceived. Prior to 2020, the majority of the comments focus on my leadership and the notion of working for me, whereas the 2023 comments tend to center on my expertise and the idea of working with me. This is probably an inevitable shift that occurs somewhat because of tenure but is also a reflection point; am I my best self when I’m coaching more than managing? Should I seek out more coaching roles?

There is also a shift between direct judgment and framework establishment. Prior to 2020, many of the comments center on my ability to make executive decisions about what someone should or shouldn’t do – when those decisions were perceived as wrong, people were less likely to want to work with me again. The 2023 comments reflect a shift toward presenting frameworks by which decisions could be made, so that people felt more autonomy that aligned with their accountability.

There is another source of data that confirms this shift. In addition to the Work With Me Again survey, I have an anonymous form that people can use to give me feedback that appears in my email signature. Many of the comments post-2020 focus on my questioning and how working as a thought partner made people feel “inspired, not embarrassed” when they didn’t have all the answers.

Part of this may also be colored by how I left my most recent role. When I resigned from frog/CapGemini because of their refusal to release diversity data, it is possible that it increased underrepresented people’s willingness to work with me again. How someone leaves a role can also affect who is willing to follow them forward, which highlights how important clear messaging is when switching jobs. Despite all the noise about radical candor and transparency in the workplace, the simple fact is that most people just leave without ever really being clear why. And that’s something we need to change, both at an individual and company level.

I’m happy to see that I was able to close the gap in the last three years, but it doesn’t mean I should stop. And hopefully neither will you. If you’re committed to doing the same kind of work, the tools I used are publicly available at WorkWithMeAgain.com, where you can copy the Google Sheet that contains all the calculations and find instructions on how to launch the survey to your former coworkers. I am absolutely convinced that putting in the effort to gather meaningful behavioral feedback can be a key component in how we change our individual behaviors and thus the world.

Side Note: One of the things that is clear from the over 200 people that have taken the survey is that there are some people out there who feel deeply wronged by me. That’s potentially inevitable and part of being human, but it doesn’t have to be the end of the story: if an apology would help bring closure, I’m here.

For the last several years, I have been making myself available for free, first-come-first-served meetings that I call office hours. They’re 30-minute, 1:1, virtual meetings, guided by the participant, on topics ranging from career advice to applied behavioral science.  And they’re motivated by the belief that when we require introductions or other forms of social proof to gate access to our time, we replicate the existing systemic biases inherent in those social systems.

In 2021, we released our first Diversity Report, a concentrated effort to make sure that this system is in fact serving a broad range of underrepresented people.  In our 2022 edition, we provide updates on our commitments from last year, refresh the data, and make our commitments for next year.

I use the plural repeatedly throughout this report.  That’s because making office hours happen is a team effort; even if I’m the one actually showing up, there is a tremendous amount of work from a number of people to make sure that we follow up on action items, share our learnings, and prepare this report every year.  In particular, Melanie Perera from Oceans and Zsanelle Lalani are instrumental in connecting job seekers with opportunities, sending out materials, and preparing the Mentor Minutes for social media. I am grateful for all they do and hope you take a moment to celebrate them.

Before looking at the results, a few quick notes on methodology.  To gather the data, we set up a Google Forms survey and then used Zapier to automatically email participants with a link after each meeting, along with an end-of-year followup reminding them of the survey.  In addition to asking for qualitative feedback to help us improve, we asked basic demographic questions about age, gender identity, sexual orientation, ethnicity, etc.  No questions were required; all were multiple choice, with options presented in random order and “Other” and “Prefer not to say” options included.

For 2022, given immense changes in my personal life (including moving across the country and my co-parent’s cancer diagnosis), my ability to do office hours consistently was somewhat curtailed – we went from ~800 meetings last year to more like ~550 meetings this year. We received 150 survey responses, giving us a response rate of ~27%.

That is roughly the same as last year and generally quite high.  Typical survey response rates are less than 5%, so we can make some reasonable assumptions that this data is representative of the larger population of participants. But you could always make an argument that some segments are more likely to respond, so take it all with a grain of salt.

On to the 2022 data!  Each section has a two-paragraph format: first data, then interpretation.  There will be a separate section at the end for commentary and 2023 commitments. I’ve included the change from the 2021 numbers in parentheses so you can also see trends. I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

Age

The mean age of respondents was 34 (+0) and the median was 33 (+2).  The best comparison is probably the median age of the US working population, which is 42, so overall we’re skewed a little younger.  However, the standard deviation was around 9 (+0), with participants ranging from 17 to 69, so there was a good bit of variability.

I see this as an improvement from last year.  Not only did the range of ages go up, but so did the average and median age.  Why is that a success?  Because younger people have greater access to formal mentorship programs than older people do, a general upward trend in age means we’re serving the underserved.

Gender

Among respondents, 56% (+1) identified as women, 39% (-3) identified as men, and 5% (+2) identified as non-binary/genderqueer. This is a fairly large overrepresentation of women, who generally are less likely to participate in the workforce.  For non-binary people, the population estimate jumped from 1% to 5% this year, which is in line with our number; I expect this is a more accurate number for the population rather than a genuine increase in gender identification.

This is a mild improvement from last year. It isn’t that I don’t think men are deserving of help (especially since they may be underrepresented for other reasons) but since my office hours are an attempt to democratize access and systemic sexism is an issue, this is a positive trend.

Sexual Orientation

78% (+3) of respondents identified as heterosexual, with 22% (+7) identifying as some form of LGBQ.  This is significantly different than the base rate of 93% and 7%, respectively.

I am very publicly liberal and work in and around fields that are generally more liberal (tech/design/etc.), so these numbers aren’t entirely atypical in my larger community. That said, there are still significant biases present in our often heteronormative culture, so I’m generally happy to see overrepresentation here.

Race and Ethnicity

53% (+13) of respondents identified as White (base rate 77%), 7% (-8) as Black or African American (base rate 13%), 19% (-11) as Asian (base rate 6%), 1% (+1) as Native American (base rate 2%), and 20% (+5) as More Than One Ethnicity (base rate 2%).  In addition, 12% (-1) identified as Hispanic or Latino/a/x (base rate 18%), with Mexican, Mexican American, or Chicano/a/x as the largest group.

This is obviously disappointing; a 13-point jump in White is a lot, even if it is still significantly below the baseline. This increase, however, is correlated with an increase in meetings with people based outside the United States; 58% of international participants identified as White, compared to just 47% of US-based participants.

Other

20% (-3) of respondents are first-generation Americans (base rate 14%), while 23% (+5) are first-generation college graduates (base rate 35%).  27% (-13) view themselves as underrepresented in their field, while 9% are living in poverty and 5% identify as disabled. 41% (+3) did not add any additional tagging. 74% are currently living in the United States, while 26% live in one of eighteen other countries, with Canada, Germany, and the UK being the most represented.

Commentary and Commitments

In 2022, it was difficult for me to react to these findings because the only data I had to compare them with was the base rate. In 2023, I have last year’s data as a benchmark and generally come away with mixed feelings.

On the positive side, age, gender, and sexual orientation all showed year-over-year increases in representation.  While the gains were modest, they trended in the right direction and over a much larger sample size, giving us increasing confidence that we are serving those who need it most.

The biggest disappointment is obviously the increase in White-only participants.  Even with the growth in international audience, the purpose of my office hours is to reduce systemic bias and I simply can’t do that if I’m not meeting with those who face biases related to race and ethnicity.  While the rate of straight, cis White men who didn’t identify with any underrepresented categories remained the same this year at 13%, this is a place where I expect to see year-over-year improvements and so no change simply isn’t good enough.  It is on me to take action in 2023 to change this trajectory.

We made two important commitments in our 2021 report, both of which we were able to honor.  As promised, we began releasing clips from our office hours across multiple social media platforms and will continue to do that into 2023.  We also exceeded our $5K committed budget and spent closer to $12K supporting the needs of individuals who had challenges this year; while we have not yet set a budget for 2023, we will continue to push forward on that front.

There were a number of other operational changes to office hours this year, across the technology, processes, and team available.  For example, we introduced a tracker that allowed us to more easily view the applications of those who were looking for work and make referrals where appropriate; we assisted in 61 job searches in 2022.

I continue to believe that public, open office hours on a first-come, first-served basis can be a lever for reducing some forms of systemic bias.  Office hours are not just an opportunity for mentorship but a chance to deploy resources against real needs, whether that is using social capital to make a referral or financial capital to provide pilot funding and even essentials like food and interview clothing.

For 2023, we’re going to concentrate on the scalability of our impact by addressing two key shortcomings:

  • Reusable content.  Reviewing our Vowel recordings from this year, it is clear that I’m spending a tremendous amount of time answering very similar questions. We’re working on everything from written guides and short videos to a chatbot trained on the office hour transcripts to make sure that we can serve more people.  I don’t intend to spend less time doing office hours but hopefully these solutions allow us to concentrate that time on increasingly personal questions and specific situations.
  • Increased tooling.  While the ability to distribute Vowel recordings and shared notes has been a tremendous help to individual participants, process improvements like the job tracker have allowed us to more quickly provide asynchronous help in the moments when people need it most.  Rather than confining our impact to the office hours, we’re going to intensify our tooling, ranging from lightweight spreadsheets to custom web-based calculators, in order to create new scaffolding for people to help themselves.

I fundamentally believe that transparency helps drive accountability and accountability allows for autonomy.  My hope is to be able to offer an updated diversity report yearly for as long as I am able to continue doing office hours at this pace and with this team.  As I mentioned earlier, I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

Side Note: When I released the diversity report for 2021, I wrote “Sometimes, doing the right thing feels absolutely ridiculous”. In August of 2022, I left my global executive role at CapGemini because they refused to release basic diversity data. Putting myself out of work during a recession feels like one of those ridiculous things and I am acutely aware of the privilege that allowed me to do it.  But accountability matters.  Working toward change from within a company is important but when business leaders refuse outright to even consider change, it is time to go.  So I’m going to keep releasing these reports yearly and I hope CapGemini and others consider doing the same.

Humans are habituation machines.  Once something becomes true for us, our brain starts incorporating it into our reality through selective attention and a variety of other cognitive biases, such that it is hard to remember a time when it wasn’t true.

Take the internet. If you’re old enough, you might be able to dredge up some specific memories about a time before ubiquitous connectivity.  But even those memories are fairly selective and it is hard to really emotionally connect with them; the internet simply is in our present reality.

Diversity reports are another example.  20 years ago, it wasn’t ubiquitously true that every major company released a comprehensive report on the demographics of its workforce.  And yet now it would be surprising to find a major company that doesn’t.  Accountability allows autonomy and transparent data is the first step toward that accountability.

The link between accountability and autonomy isn’t just for big companies; it is a core building block of any service relationship.  Beginning several years ago, I have offered my time as a public service in the form of open office hours, which I wrote a guide to when I started.

But in order for me to offer that public service in an accountable way, I also need to be transparent.  This post is my attempt to do that, in what I hope to make a yearly practice, by releasing diversity statistics for my 2021 office hours.

First, a few quick notes on methodology.  To gather the data, we set up a Google Forms survey and then used Zapier to automatically email participants with a link after each meeting.  In addition to asking for qualitative feedback to help us improve, we asked basic demographic questions about age, gender identity, sexual orientation, ethnicity, etc.  No questions were required; all were multiple choice, with “Other” and “Prefer not to say” options included.

For 2021, I committed to two hours per day of office hours in 30-minute slots, or ~1K potential meetings.  While obviously I couldn’t always manage that, we did have a utilization rate of ~80%, so ~800 meetings in total.  Since we started collecting diversity data in November, we have only two months of participants to work with, or ~130 people.  We received 40 survey responses, giving us a response rate of ~30%.
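
For transparency, here is the capacity math behind those rough numbers as a quick Python sketch (the ~250 working days is my assumption for illustration; I didn’t track the exact number of available days):

    slots_per_day = (2 * 60) // 30            # two hours a day in 30-minute slots
    working_days = 250                        # assumed for illustration
    potential = slots_per_day * working_days  # ~1K potential meetings
    held = round(potential * 0.80)            # ~80% utilization, so ~800 held
    responses, participants = 40, 130         # two months of survey data
    print(potential, held, f"{responses / participants:.0%}")
    # 1000 800 31%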

Generally speaking, that’s a lot.  Typical survey response rates are less than 5%, so we can make some reasonable assumptions that this data is representative of the larger population of participants.  That said, you could always make an argument that some segments are more likely to respond, so take it all with a grain of salt.

On to the 2021 data!  Each section has a two paragraph format: data, then interpretation.  There will be a separate section at the end for commentary and 2022 commitments.  I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

[Pie chart: 2021 office hours participants by number of underrepresented identities, including gender, ethnicity, sexual preference, and other (e.g. first-gen college).  12.5% have no underrepresented identities, 17.5% have one, 35% have two, 25% have three, and 10% have four.]

Age

The mean age of respondents was 33.6 and the median was 31.  The best comparison is probably the median age of the US working population, which is 42, so overall we’re skewed a little younger.  However, the standard deviation was around 9, with participants ranging from 20 to 59, so there was a good bit of variability.

It is hard to interpret this in terms of representativeness.  I was 39 in this period of 2021, so there are a variety of reasons why people older than me might not have felt I could be supportive to them.  And younger people are probably more comfortable with the idea of digital open office hours generally; both might be factors.

Gender

Among respondents, 55% identified as women, 42% identified as men, and 3% identified as non-binary.  For women and men, these numbers are essentially the same as the workforce participation rates.  For non-binary, this is likely a bit higher than the base rate of less than 1%, although this is a very small sample size and the population-level data is unreliable.

Candidly, I was initially disappointed by these results.  Given that my office hours are an attempt to democratize access and systemic sexism is an issue, I had hoped to reach a group that was more heavily skewed.  This shows the danger of univariate thinking, however; as we continue to look at the other forms of diversity, a different picture emerges and so I’d like to withhold judgment for a bit.

Sexual Orientation

75% of respondents identified as heterosexual, with 20% identifying as bisexual and 5% preferring not to answer.  This is significantly different than the base rate of 94% and 6%, respectively.

I honestly don’t have a ready explanation for this.  Because participants skew younger and the proportion of the population that identifies as non-heterosexual also skews younger, it may simply be due to a mediating variable.  It could also be a network effect driven by homophily; my political stances tend to be relatively public, so it could be self-selection.  I simply don’t know.

Race and Ethnicity

40% of respondents identified as White (base rate 77%), 15% as Black or African American (13%), 30% as Asian (6%), and 15% as More Than One Ethnicity (2%).  In addition, 13% identified as Hispanic or Latino/a/x (18%), with Mexican, Mexican American, or Chicano/a/x as the largest group.

There is a lot to unpack here.  It is unclear why there is a massive overrepresentation of Asian people and people who viewed themselves as having a mixed ethnicity; all of the factors from sexual orientation could potentially be at play here.  There is certainly room for growth in other categories, although as with gender, it is hard to look at these results in isolation.

Other

23% of respondents are first-generation Americans (base rate 14%), while 18% are first-generation college graduates (base rate 35%).  40% view themselves as underrepresented in their field, while 38% did not add any additional tagging.

I was surprised by the base rate of first-generation college graduates, although I probably shouldn’t be: because almost all of the people I interact with in a professional context have degrees, it is easy to forget that higher education is far from ubiquitous.  I was also surprised by the overrepresentation of first-generation Americans; I can theorize as to why they might be more likely to be interested in office hours but have no proof.

Commentary and Commitments

As with any personal feedback, it is hard to know how to react to this data.  I have long believed that public, open office hours on a first-come, first-served basis could be a potential lever for reducing some forms of systemic bias.  If they remain only at the level of mentorship, office hours are unlikely to create real change: we have evidence that women are over-mentored and under-sponsored and there is reason to believe that is true of other underrepresented groups as well.  But to the extent that we are able to use them as a catalyst for sponsorship, where resources are expended to create new opportunities, they have power.

If the purpose of open office hours is to specifically focus on the underrepresented, then we’ve achieved some success: only 13% of participants were straight, cis white men who didn’t identify with any underrepresented categories.  But there are still clear places where there is much room for growth (like Black or African Americans, where we only achieved parity with the population).  The question becomes how to create that change.

For 2022, I’m going to concentrate on two key pressures: reducing suspicion (an inhibiting pressure) and increasing followup (a promoting pressure).

In a perfect world, everyone would know that office hours exist, decide for themselves if they are beneficial, and then take a slot that works in their schedule.  But we live in an imperfect world.  I’m frequently asked whether there is a fee and many people have expressed disbelief that someone would offer free support.  And these doubts were not evenly distributed; anecdotally, it was more often underrepresented participants who expressed the most suspicion.

To me, this is entirely logical.  We know underrepresented people are receiving the least help and are the most likely to be exploited.  So when faced with an opportunity for free support (from a cis white dude, no less), being cautious is a reasonable reaction.

Here is what I’m going to do about it:

  • Release recordings.  We use Vowel as a platform for office hours, so that participants can view the video, transcript, and notes after the call has been completed (plus, it has the handy live “percentage talked” counter that helps me to remember to shut up).  In 2022, we’re going to start releasing edited clips of office hours to help clarify what people can expect and they can see proof that it is a free service.  We’ll select clips likely to be useful to others, edit them to just my video and voice, and not use anything that mentions participant details.  In our pilots so far, underrepresented groups that were shown a clip of office hours were significantly more likely to subsequently sign up for a slot than those that didn’t see a clip.
  • Clarify cost (and the lack thereof).  Previously, I relied on the academic understanding of “office hours” as a term that indicated freely available support.  But we’ve now clarified the language on both the Get Support page and LinkedIn to be clear that these slots are available completely free.

We cannot simply reduce inhibiting pressures, however – we must also increase promoting pressures.  Our follow-up surveys are generally positive but I recognize that I don’t always follow through on commitments that I make in office hours, mostly out of inattention.  So here is what I’m going to do about it:

  • Add team review.  One of my team members will review each office hours recording and document any action items I’ve agreed to, following up with support and reminders as needed.  The hope is that we deliver on every commitment that I make; this will have the added benefit of making it more likely that we transcend mentorship to full sponsorship.
  • Create a followup budget.  Some followup items require money to accomplish.  In 2021, we did this on a one-off basis but that opens the door to inequitable distribution and also makes it hard for me to limit my commitment to a level I can sustain.  So this year, I’m setting aside an initial budget of $5K that the team can tap into directly, without approval from me, to take action on items that require financial support.

Finally, we’re adding a few more tweaks simply to improve our processes and make things generally more inclusive.

  • Taking a more holistic view.  For example, adding a “disabled” option to the self-identification question, as well as a question about country of residence to capture international participation.
  • Varying the times of office hours.  For most of the year, my office hours were during working hours for people in both PST and EST.  This might create barriers for some, so I’ve created a more flexible schedule designed to allow for a wider range of participation.

I fundamentally believe that transparency helps drive accountability and accountability allows for autonomy.  My hope is to be able to offer an updated diversity report yearly for as long as I am able to continue doing office hours at this pace and with this team.  As I mentioned earlier, I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

Side Note: Sometimes, doing the right thing feels absolutely ridiculous.  Pulling this report together took a few weeks and there were moments where I almost abandoned it; posting it could easily be seen as communal narcissism (which I willingly admit to being at times), so it was tempting to simply analyze the data and make the changes entirely privately. Talking about social justice action often feels like a Catch-22: do it and look performative, don’t do it and be complicit in the racist, sexist, classist status quo. So I often think of the extremity test: is the universe where nobody does a behavior better or worse than the one where everyone does?  In the case of diversity statistics, I’d far rather a world where everyone releases them than nobody does, so I posted mine in an effort to tip the scales in that direction.  Social pressure works – talking about what we do makes it incrementally more likely, on the whole, that other people also do it.  And if that feels (and is) ridiculous and results in a cascade of clown emojis…well, at least I was entertaining.

I believe that behavioral science, correctly applied, can change the world. But, as with any emerging discipline, there is a period of self-definition in which people fight (with varying amounts of actual animosity) about who can claim what title and where the borders of the field are. 

Personally, I’ve largely been uninterested in the debate about who can and cannot call themselves a behavioral scientist (though to be clear, as a non-PhD, it benefits me not to start drawing lines). But that’s different from what it actually means to be doing behavioral science; as the name of the field suggests, it is the behaviors that should concern us. So I have become increasingly interested in how we might break down the various components of behavioral science into smaller units of work that could be credibly offered independently, while firmly maintaining the integrity of the behavioral science process as a whole.

To begin, let’s be clear that I am actually talking about applied behavioral science, which is explicitly concerned with changing behavior. This is distinct from academic behavioral sciences (like social psychology, behavioral economics, etc.), which further our understanding of the basic principles that underlie human behavior. That doesn’t mean academic folks don’t care about change or that applied folks don’t care about knowledge, just that each prioritizes one over the other. In my case, as an applied behavioral scientist, that means that while I still sometimes publish peer-reviewed papers, my primary work is changing the behaviors of populations.

My simple definition of applied behavioral science has always been “behavior as an outcome, science as a process,” which has the benefit of being easy to explain to people without exposure to the discipline and sounding pithy when you say it in a presentation. But if you’re trying to buy behavioral science services, or understand how you might begin to build them internally, that definition isn’t terribly useful. 

To make it more practical, I propose a four-stage model below that balances an understanding that each part is essential with the need to break it down into units of work that can be spread across internal teams and external vendors when necessary. But be warned: each handoff increases the potential for loss, particularly when there is an incomplete understanding of the adjoining stages. A tightly integrated process managed by people who understand the end-to-end process will always have the greatest likelihood of creating meaningful behavior change; that we can name the parts should not detract from the need for a whole.

  • Strategy: the defining of a desired behavioral outcome, with population, motivation, limitations, behavior, and measurement all clearly demarcated. Plain version: figuring out what “works” and “worth doing” mean in behavioral terms by collaborating with stakeholders.
  • Insights: the discovery of observations about the pressures that create current behaviors, both quantitative and qualitative. Plain version: figure out why people would want to do the behavior and why they aren’t already by talking to them individually and observing their behavior at scale.
  • Design: the design of proposed interventions, based on behavioral insights, that may create the pre-defined behavioral outcome. Plain version: design products, processes, etc. to make the behavior more likely.
  • Evaluation: the piloting (often but not always using randomized controlled trials) of behavioral interventions to evaluate to what extent they modify the existing rates of the pre-defined behavioral outcomes. Plain version: figure out whether the products, processes, etc. actually change the pressure, make the behavior more likely, and do so at a magnitude that is attractive for scaling.
  • Behavioral Science: combining all four of those processes. Plain version: behavior as an outcome, science as a process.

Step 1: Behavioral Strategy

Because the process is linear and each step requires that the previous step was done (although not necessarily by the same person), we need to start by defining the behavioral outcome we want to achieve. In the latest version of the Intervention Design Process (or IDP; the applied system I propose in my book), we do that using a behavioral statement: When [population] wants to [motivation], and they [limitations], they will [behavior] (as measured by [data]). Arriving at that statement is deceptively hard work and requires running a disciplined process with stakeholders to define each of those variables. But done correctly, it paints a picture of the world we want to create when our interventions are working. 
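
To make the template concrete, here is a minimal sketch in Python that fills in the variables; the example values are my own hypothetical illustration, not a statement from an actual project:

    # The behavioral statement template, expressed as a format string.
    # Every example value below is hypothetical.
    TEMPLATE = (
        "When {population} wants to {motivation}, and they {limitations}, "
        "they will {behavior} (as measured by {data})."
    )

    statement = TEMPLATE.format(
        population="a new gym member",
        motivation="build a consistent exercise habit",
        limitations="have an unpredictable work schedule",
        behavior="book their next class before leaving the gym",
        data="same-day rebooking records",
    )
    print(statement)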

Given that the process prioritizes what we want the result to be rather than the interventions that actually create the result, my proposed term is behavioral strategy. While a behavioral strategy doesn’t have to include a cost/benefit ratio that defines how much an intervention can cost relative to the impact it has, knowing this can certainly shape the rest of the process and allows stakeholders to more clearly understand the actual stakes.

Inside a company, both Strategy and Product teams try to answer this question regularly, although they often express it in imprecise, non-behavioral terms that create misalignment later. Externally, a strategy firm like McKinsey could likely spin up a unit that did this work in a reasonable way, but like internal teams, they currently tend not to focus specifically on behaviors and don’t offer this as a service today.

Step 2: Behavioral Insights

The next step in the IDP is understanding the distance from the world we want by understanding the pressures that create the world of behavior we have today. Insights can be both quantitative and qualitative, so I propose behavioral insights as the collective term, splitting as needed into qualitative behavioral insights and quantitative behavioral insights, since there are specialists who concentrate on one approach or the other.

Existing user researchers and data scientists frequently do this work (Spotify has Quantitative User Research, for example), and as long as they’re doing the work with an explicit emphasis on generating insights to change behavior, these teams could slot in here. If you wanted to buy it as a service, IPSOS’ behavioral science team seems to do behavioral insights as a specialized form of market research that focuses on behavior and other agencies may be able to provide insights if specifically pointed toward behavioral outcomes.

Step 3: Behavioral Design

Having mapped the behavior we want and understanding why it doesn’t yet occur, in the IDP we next get into pressure mapping and intervention design. There are lots of ways to create behavioral interventions that don’t use pressure mapping, like design thinking, but ultimately we are always trying to generate proposed interventions that may change behavior. I say “proposed” and “may” because while we have supporting evidence (because the design process is based on the behavioral insights we defined above), we haven’t actually tested whether the interventions create the behavior. 

Design and Product departments do this within companies today, although often lacking the behavioral focus, so it seems appropriate to call this behavioral design. And an agency like Fjord could potentially do this externally, so long as they are given an articulated behavior outcome and the relevant behavioral insights (neither of which they are likely to create themselves).

Step 4: Behavioral Evaluation

Finally, we have the evaluation of the proposed interventions, to see to what degree they actually create the outcome articulated in the behavioral strategy. While this is called impact evaluation in the non-profit world, behavioral testing builds on the more widely understood experimentation that is used in most for-profit companies. The theoretical gold standard is a randomized controlled trial, in which participants exposed to the intervention are compared against a control group, but that may not always be feasible; remember, in applied behavioral science, we only need to be as right as the cost/benefit ratio dictates. In a perfect world, doing this process also results in the observation of additional behavioral insights (because trying to change a system often reveals underlying truths about it) but I don’t think we should try to make this a specific requirement of this process. 
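
As a sketch of the simplest version of that evaluation, here is a two-proportion comparison in Python between an intervention group and a control group (the counts are hypothetical, and a real pilot would also involve power analysis and pre-registration, which this omits):

    import math
    from scipy.stats import norm

    # Hypothetical pilot counts: how many people did the target behavior.
    treat_n, treat_did = 500, 90    # exposed to the intervention
    ctrl_n, ctrl_did = 500, 60      # control group

    p_treat, p_ctrl = treat_did / treat_n, ctrl_did / ctrl_n
    pooled = (treat_did + ctrl_did) / (treat_n + ctrl_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / treat_n + 1 / ctrl_n))
    z = (p_treat - p_ctrl) / se
    p_value = 2 * norm.sf(abs(z))   # two-sided z-test

    print(f"lift = {p_treat - p_ctrl:+.1%}, z = {z:.2f}, p = {p_value:.4f}")

Whether a lift like that is worth scaling is exactly the question the cost/benefit ratio from the behavioral strategy should answer.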

Very few companies actually run rigorous pilots today, although it does happen in some Product and Data Science organizations (and Marketing loves non-theory-driven RCTs in the form of A/B tests), so this is probably the largest potential growth area for behavioral science as a whole. In the non-profit sector (where impact evaluation is sometimes built into grants), an agency like Social Impact will do an RCT on your interventions for you, if you’re careful to make sure they translate “impact” in behavioral terms.

Combined: Behavioral Science

To me, behavioral science requires the combination of all of the above. If you can’t define a behavioral outcome (AKA don’t do behavioral strategy), then you miss out on the whole point of “behavioral” in this discussion; you can’t run a scientific process if you can’t measure what works, and you don’t know what “works” means if you don’t define it. 

Similarly, you could run pilots and measure behavioral outcomes but without behavioral insights, that’s not science: your interventions aren’t necessarily designed based on replicable understandings (my favorite example of this is Marissa Mayer testing 41 shades of blue at Google; because there was no theory behind the iterations, you could only know what worked in that limited moment but not why) and so if they don’t work, you’re not actually any closer to something that does. It is only when all four processes come together that you truly get to both of the words in the term behavioral science…and neatly arrive back at “behavior as an outcome, science as a process.”

Some people who currently offer behavioral science services are going to hate this taxonomy, because it threatens their identity, both personally and professionally. And I understand that feeling: removing ambiguity can feel like a loss, when clarity reveals you’re only covering some of the territory. And not offering some services isn’t always by choice; for example, I’ve often heard consultants complain that they can’t sell behavioral impact evaluation to clients because they already “know” it will work after the behavioral design phase.

But the purpose of this guide is arriving at a shared understanding of applied behavioral science and its components, and part of that is recognizing that no one piece of the field is better than any other. There is no shame in only doing part of it, as long as we clearly explain the other parts and push the importance of doing the full process. By creating areas of intersection and smooth handoffs, we can better allow for specialization and move the world incrementally closer to behavior as an outcome, science as a process. And that’s work worth doing, in any form.

Side Note: My belief in this model is why I’ve decided to join frog as the Executive Director of Behavioral Science. My role is two-fold: help my fellow frogs apply behavioral strategy, insights, design, and impact evaluation in their projects and help our clients build their own applied behavioral science capabilities. While I’ve worked hard to evangelize the field broadly in my previous roles (including writing Start At The End, which was as close to a handbook as I can get, and doing 30+ talks a year), ultimately my career to date has been about creating a long series of behavioral interventions that accomplished internal business goals. In contrast, at frog I’ll be focusing specifically on behavioral science as a process, both internally and externally. As we see more senior behavioral scientists within large companies, we have the opportunity to leverage existing cross-disciplinary expertise to further support that work. And frog, particularly as part of CapGemini Invent, is the right place to do that. The agencies I mention in the examples can all learn to do parts of the behavioral science process. But because they typically do only their siloed step, they think of their stage’s deliverable in isolation. At frog, because we can and have done the full cycle, we know each step is just a milestone, so we can take a more holistic view and plan our work to naturally connect to the next necessary step. And through our Org Activation practice, we can teach organizations alongside projects to help them grow their own capabilities. Behavioral science doesn’t belong to frog – it belongs to everyone. And it is with that belief firmly in mind that we look forward to growing this discipline together.

Jason Fried, the CEO of Basecamp, has been making some changes at the org and decided that they “deserve an announcement”. While worth reading in their entirety, the changes are geared around taking the challenges of leading a company and addressing them by promoting monoculture (under the veil of individualism; Jason would call it “being responsible for [only] ourselves”) as a solution. He quotes Aldous Huxley in his introduction: “We live together, we act on, and react to, one another; but always and in all circumstances we are by ourselves.”

The changes are a stark departure from his previously expressed views on diversity (I say “expressed” because social signaling around diversity is different from the articulation of specific work policies) and are antithetical to most existing science on behavior change toward positive outcomes within an org. And so, as with an earlier post responding to points made by Domm Holland about how to grow a team, I drafted this post to offer a counterpoint to some of Fried’s changes (I won’t call them recommendations, since in his individualistic frame they are made for Basecamp only, although then why bother publishing them and emphasizing how much you “give back to the community” by speaking and publishing on management topics?) by surfacing potential alternatives.

Before breaking down the changes point-by-point, I have to apologize. Because how we create should be evidence-based, I normally compare the outcomes associated with the author’s recommendations to a benchmark (for example, looking at Fast’s diversity compared to Google). In the case of Basecamp, because they have so few employees and most don’t identify themselves with a picture on either LinkedIn or Basecamp’s website, it is impossible for me to currently tell you much about the monoculture that results from Fried’s policies. Should diversity data later become available, I will update this post with it alongside a relevant benchmark.

It is also worth noting that diversity data may not even be an appropriate benchmark for these recommendations; Fried’s measure for the changes seems to actually be profitability (despite the Basecamp Jobs page, which explicitly states “diversity has deeper value beyond monetary”), although the relationship between diversity and profitability is well-established. Certainly Basecamp’s product has a diverse subscriber base, so in making these policies public, change could be measured not through attrition of employees but of users; if Basecamp became unprofitable because everyone unsubscribed in response to Fried’s policies, presumably he would see that as a failure.

Fried Point #1: No more societal and political discussions at Basecamp. Today’s social and political waters are especially choppy. Sensitivities are at 11, and every discussion remotely related to politics, advocacy, or society at large quickly spins away from pleasant. You shouldn’t have to wonder if staying out of it means you’re complicit, or wading into it means you’re a target. These are difficult enough waters to navigate in life, but significantly more so at work. It’s become too much. It’s a major distraction. It saps our energy, and redirects our dialog towards dark places. It’s not healthy, it hasn’t served us well. And we’re done with it at Basecamp.
Commentary: Certainly word choice matters: describing social justice conversations as “a major distraction” and the result of “sensitivity” while emphasizing the need for the workplace to be “pleasant” is a direct appeal to the desire for a monoculture. But it is difficult to interpret the policy itself. What does it mean to say that there will be no more societal or political discussions in a workplace? Does wearing a #BLM t-shirt on a Zoom call mean you’re fired? Even with nebulous consequences, the policy seems certain to reduce psychological safety, which Google previously found to be the greatest predictor of team success. It is difficult to imagine that scale items like “Members of this team are able to bring up problems and tough issues.” and “People on this team accept others who are different.” would be positively impacted by an explicit command not to talk about differences, especially when they’re potentially unpleasant to the rich white male in charge of the company.
Alternative Tip #1: Embrace differences that contribute to the psychological safety of the group by creating spaces and systems that allow for the discussion of all areas of impact, regardless of their relationship to the social power structure, while mindfully balancing short-term velocity with long-term value. Encourage and guide respectful, validating discussion.

Fried Point #2: No more paternalistic benefits. For years we’ve offered a fitness benefit, a wellness allowance, a farmer’s market share, and continuing education allowances. They felt good at the time, but we’ve had a change of heart. It’s none of our business what you do outside of work, and it’s not Basecamp’s place to encourage certain behaviors — regardless of good intention. By providing funds for certain things, we’re getting too deep into nudging people’s personal, individual choices. So we’ve ended these benefits, and, as compensation, paid every employee the full cash value of the benefits for this year. In addition, we recently introduced a 10% profit sharing plan to provide direct compensation that people can spend on whatever they’d like, privately, without company involvement or judgement.
Commentary: When social psychologist Daniel Kahneman won the Nobel Prize in Economics in 2002, it was for his work challenging a notion that many previous economic models relied on: homo economicus – the infinitely rational person. At this point, decades of research have shown that money is not psychologically fungible, despite Fried’s assertion that it should be, and that non-monetary incentives at work are a key component of both job satisfaction and performance. Some categories of benefits like continuing education have different tax treatments when not lumped into pay and the non-monetary benefits negotiated by workplaces typically have outsized impact on workers at the lower end of the pay spectrum, making pure cash payments specifically inequitable. Finally, even if Fried were correct in saying that Basecamp should not care what you do outside of work, it presupposes an artificial barrier that is empirically untrue: what you do outside of work directly affects what you do at work and vice versa.
Alternative Tip #2: Invest in greater quality-of-life benefits that have recognized impact across diverse populations. Be clear that behavior is behavior, in and out of the workplace, and that where there is a demonstrated relationship between the two, they will be considered together.

Fried Point #3: No more committees. For nearly all of our 21 year existence, we were proudly committee-free. No big working groups making big decisions, or putting forward formalized, groupthink recommendations. No bureaucracy. But recently, a few sprung up. No longer. We’re turning things back over to the person (or people) who were distinctly hired to make those decisions. The responsibility for DEI work returns to Andrea, our head of People Ops. The responsibility for negotiating use restrictions and moral quandaries returns to me and David. A long-standing group of managers called “Small Council” will disband — when we need advice or counsel we’ll ask individuals with direct relevant experience rather than a pre-defined group at large. Back to basics, back to individual responsibility, back to work.
Commentary: The pairing of accountability and autonomy at work is a necessary precursor to equity, both in celebrating success and in managing through failure. But committees are not the antithesis of individual accountability and autonomy. A mandate that counsel comes only when the accountable person seeks it, from sources of their choosing, presupposes that the person knows who can actually provide value, that they are willing to hear them (particularly dangerous when coupled with Fried Point #1, since nobody should be talking about anything outside their hypothetical swim lane in the first place), and that there is neither serendipitous nor contrarian value. This is simply not true. As repeatedly shown both in the academic literature and in applied studies by folks like Cloverpop, large groups of diverse people typically act as strong advisors to individual decision makers. As a small side note, it is also highly incongruent to talk about responsibility for DEI work (a process) and then advocate for individual responsibility around outcomes.
Alternative Tip #3: Maintain high individual accountability for clearly expressed outcomes, coupled with high autonomy on the process to reach them. Allow for a diversity of voices to inform (not dictate) that individual accountability and ensure appropriate forums for those diverse voices to be heard by the accountable individual.

Fried Point #4: No more lingering or dwelling on past decisions. We’ve become a bit too precious with decision making over the last few years. Either by wallowing in indecisiveness, worrying ourselves into overthinking things, taking on a defensive posture and assuming the worst outcome is the likely outcome, putting too much energy into something that only needed a quick fix, inadvertently derailing projects when casual suggestions are taken as essential imperatives, or rehashing decisions in different forums or mediums. It’s time to get back to making calls, explaining why once, and moving on.
Commentary: As with Fried Point #1, I don’t actually know what this means as a policy, and yet it seems meant to fix a litany of decision-making issues (many of which are the product of poor decision structures and a lack of the autonomy/accountability pairing suggested in Alternative Tip #3). While making quick decisions with short explanations certainly increases velocity, an important characteristic of progress, it actively degrades overall progress by reducing the probability that you’re heading in the right direction, especially as confirmation bias makes it increasingly difficult to see the error of a direction the more decisions you make in it.
Alternative Tip #4: Have clear decision making paradigms (including criteria like reversibility), with established review points (both during and after a decision) to balance velocity against accuracy.

Fried Point #5: No more 360 reviews. Employee performance reviews used to be straightforward. A meeting with your manager or team lead, direct feedback, and recommendations for improvement. Then a few years ago we made it hard. Worse, really. We introduced 360s, which required peers to provide feedback on peers. The problem is, peer feedback is often positive and reassuring, which is fun to read but not very useful. Assigning peer surveys started to feel like assigning busy work. Manager/employee feedback should be flowing pretty freely back and forth throughout the year. No need to add performative paperwork on top of that natural interaction. So we’re done with 360s, too.
Commentary: As with many of the Fried Points, he equates the result of a badly implemented system with the system itself. While there are legitimate, empirically-backed issues with some review processes that Fried identifies (including the tendency to be periodic rather than as-it-happens, with an emphasis on summary rather than specific feedback), that is not an inherent flaw of peer review specifically, which Fried singles out to target. Manager-only reviews have a long history of centralizing bias, especially since cis white men continue to disproportionately be the ones doing the reviewing. As with Points 3 and 4, getting diverse feedback from a larger group not only enlarges the actual performance space considered (literally the 360) but also mitigates the inherent bias that comes with a single dyad.
Alternative Tip #5: Ensure that feedback is timely and specific, while ensuring that it also comes from a diversity of sources, not just in level but in working relationship and context.

Fried Point #6: No forgetting what we do here. We make project management, team communication, and email software. We are not a social impact company. Our impact is contained to what we do and how we do it. We write business books, blog a ton, speak regularly, we open source software, we give back an inordinate amount to our industry given our size. And we’re damn proud of it. Our work, plus that kind of giving, should occupy our full attention. We don’t have to solve deep social problems, chime in publicly whenever the world requests our opinion on the major issues of the day, or get behind one movement or another with time or treasure. These are all important topics, but they’re not our topics at work — they’re not what we collectively do here. Employees are free to take up whatever cause they want, support whatever movements they’d like, and speak out on whatever horrible injustices are being perpetrated on this group or that (and, unfortunately, there are far too many to choose from). But that’s their business, not ours. We’re in the business of making software, and a few tangential things that touch that edge. We’re responsible for ourselves. That’s more than enough for us.
Commentary: Every company is, inherently, a social impact company: the products and services we create continually change behavior in the world around us. We might not all be double-bottom-line, or certified B-Corps, but what and how we create matters. Fried knows this; he notes that Basecamp “gives back” (although without acknowledging that many of these activities also contribute to the Basecamp bottom line) presumably because he believes that those activities create change. The false distinction that he is drawing is that there are somehow borders to that change, artificial lines we draw on a map. That simply isn’t true. The edges of our impact are defined by the impact itself; the decisions we make ripple not only where we want them to but far, far beyond. I can understand the desire to operate in a convenient world where Fried gets to decide what he doesn’t care about; rich white men have been doing that for ages. But the practical reality is that our choices have consequences and we must seek to confront them head on.
Alternative Tip #6: Understand that the impact of your company is defined by what it demonstrably impacts: not its aspiration to create impact, nor its desire to avoid it. Create processes to understand that zone of impact, to measure it, and to make conscious choices about how you change it.

Side Note: When I talked about this with Tim Morey, he reflected that a lot of these edicts sound like a return to shareholder value added (SVA), the antiquated notion that companies should be judged on profit alone, usually in the quarter-over-quarter sense. And I’m struck by how much of this technocratic libertarianism, like SVA, is just an excuse for short-term profiteering that enriches the richest among us. I’m reminded of the Warren Buffett quote about investment: “Our favorite holding period is forever.” If you were trying to create permanent value, to build toward a utilitarian ideal that maximized outcomes across the consideration set, why would you seriously consider any of the policies above? And why would you support those who do?

Recently, I accepted a new job offer.  And I was excited, as so many folks are when they find meaningful work.  It felt like such a great gig: my entire reporting line would be women!  It would be global in scope!  It would be spreading behavioral science among Fortune 500 CEOs!  I turned down three competing offers, told my family, and started working on plans for the first few months.

Then the employment agreement arrived.

It had the standard bevy of non-competes and non-solicits, which are problematic in their own right but still a standard that hasn’t been broadly challenged (although the FTC is working on it).  And yes, they wanted to own my IP, which has its own thorny definitional challenges when your job is uncovering and applying underlying truths about human behavior (“Who owns science?” can turn into a very long conversation).  But those issues were navigable, and in twenty years of working, despite all my apprehensions, I’d never actually had any problems arise.

Yet the packet also referenced agreeing to policies, like a code of conduct, that weren’t attached.  So I asked the executive recruiter if they could send them over, because one of the things drilled into me by my father was to never sign a contract I didn’t understand (FBFP, or “fucked by fine print”, was a common condition in rural Oregon).

“I’m sorry, our policy is not to share our policies outside of the company and since you don’t work here yet, we can’t send those to you.”

Ruh roh.

That stance isn’t actually all that uncommon, as the standard of many enterprise companies is to be closed by default, because they believe it minimizes risk.  In this case, risk that they’ll get bad press or lawsuits around something in the policies.  Or risk that candidates will read the policies and refuse to join, rather than reading them once they’re already dependent on the paycheck and cognitive dissonance changes their mind (literally).

And I get it, I really do.  Even though I believe employment policies should be open by default because public scrutiny is actually the best protection against catastrophic risk, I recognize it is something upon which reasonable people could disagree.

But closed by default isn’t just about legal risk; it is also about creating inclusive workplaces.  In the case of my offer, it turns out that joining would require me to stop tweeting (except to retweet officially sanctioned company propaganda…I mean, uh, very valuable thought leadership pieces) and blogging (because I might leak the secret recipe to behavioral science on an unapproved channel; it’s “behavior as an outcome, science as a process”, just FYI).

That was enough for me to turn down the offer, because I’m privileged enough to have other options, and money to fall back on even if I didn’t.  My new employer also asked me to agree to their code of conduct as part of my employment, but when I asked to read it, they simply sent it over.  And just like my previous boss, Vivek Garipalli at Clover Health (“I don’t care if other companies literally follow you around the office every day; they can’t execute like we can.”), my new boss believes that public science is a good thing: he likened it to football, where playing in public helps the sport evolve.

But not everyone gets a happy ending.  Sometimes, “my poverty, but not my will, consents.”  And as a white guy, it is a) likely that the code of conduct I never got to read is built around my cultural norms and b) probable that I wouldn’t even get in that much trouble for violating it, just because of how accountability works in corporate America.  But that is dramatically different for many, many people.  Often those with the fewest options are also those the environment is least inclusive of.

And yes, it is possible that, tweeting and blogging aside, the policies I never read were very sensible.  But closed by default means we can’t know, and in the absence of evidence to the contrary, we should assume the norm.  And unfortunately, as evidenced by the preponderance of white males at the top of the social pyramid, racism and sexism and other forms of bias are the norm.

If you think that a bad code of conduct sounds far-fetched, remember that Google is just now acknowledging that people have the legal right to talk about their salaries.  And we had to pass laws so that black people could wear their hair naturally at work (laws that still aren’t universally adopted) because 80% of black women felt pressured to change their hair style in order to fit in.  Does the code of conduct mention salary data or hair?  No idea, because you can’t read the policy until after you join, and I didn’t.  But again, until we specifically know that something is an exception, we should assume the norm.

In the end, closed by default policies minimize some risks but create others.  You lose talent that won’t sign in the blind or refuses to work in an environment that doesn’t value their ability to have a public opinion.  You decrease diversity by preventing the formation of an open, inclusive environment.  And given the preponderance of evidence that open environments are associated with profitability (Satya Nadella’s tenure at Microsoft is an excellent case study), closed by default creates very real profit risk.

We need to be default open.  To share our code of conduct, when people ask.  To publish research and processes, because execution is really the only moat.  And if the sharing sparks controversy or others iterate on it, we can use that feedback to build something better.  Because we can’t have it both ways, preaching innovation through failure but hiding out of fear of our own failures.  Truth will out; out yourself.

Side Note:  There are a host of industries that profit on ambiguity by trading on information asymmetry, either directly or indirectly.  But the endowment effect makes asymmetry hard to perpetuate: once we have access to information, it instantly becomes more valuable because we psychologically “own” it.  That is why workplaces will always drift toward more liberal policies; once you’ve had access to more relaxed rules around social media, spending, dress code, etc., the price of giving that up will simply get higher and higher and higher.  Thus shifting to a liberal workplace early is actually a competitive advantage, even if Jamie Dimon doesn’t see it yet.

Over the weekend, Fast’s CEO Domm Holland posted a short Twitter thread about growing from 2 to 120+ people in 18 months while maintaining “an exceptionally high talent bar” and offered some tips based on Fast’s hiring process.  But as I was reading the thread, I was struck by how many of the practices seemed likely to perpetuate a monoculture.  So I drafted this post to offer a counterpoint to some of his recommendations by surfacing potential alternatives.

Note that I am not responding to all of Holland’s tips: “treat your people well” is grounded in strong evidence for supporting a diverse and inclusive workplace that leads to high performance.  I’m also not responding specifically to Fast’s culture, although the thread responses do point out potential issues across a variety of domains, from rescinding offers to using Nigerian devs at very low wages to build V1 of the product and then terminating them without cause.

Before going through the tips, it is important to remember that hiring practices should be evidence-based, and so I needed to look at Fast’s hiring data before responding to Holland’s thread; maybe their practices are a secret recipe for diversity and I’m simply wrong about their monocultural nature.  Since Fast doesn’t have a public diversity report that I could find, I gathered 110 people on LinkedIn who identify themselves as working at Fast.  I then coded each for perceived ethnicity/gender (using names and Twitter/other photos if they didn’t have one on LinkedIn), had another blind rater do the same, and compared; there was only one person we coded differently, so I removed them from the sample, leaving 109.
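As a minimal sketch of that rater-comparison step (in Python, with purely hypothetical profile IDs and labels; this illustrates the drop-on-disagreement logic, not our actual data or tooling):

```python
# Compare two raters' independent codings and drop any profile where
# they disagree. All IDs and labels below are hypothetical placeholders.
ratings_a = {
    "profile_001": ("white", "male"),
    "profile_002": ("asian", "female"),
    "profile_003": ("black", "male"),
}
ratings_b = {
    "profile_001": ("white", "male"),
    "profile_002": ("asian", "female"),
    "profile_003": ("black", "female"),  # the one disagreement
}

# Keep only profiles where both raters agree on both dimensions.
agreed = {pid: code for pid, code in ratings_a.items()
          if ratings_b.get(pid) == code}
dropped = sorted(set(ratings_a) - set(agreed))

print(f"kept {len(agreed)} of {len(ratings_a)}; dropped: {dropped}")
# Applied to the real sample, this is how 110 coded profiles became 109.
```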

Obviously, this methodology leaves out other important forms of diversity, and coding gender/ethnicity using LinkedIn photos and names is flawed; ideally, all companies would release their self-reported diversity data to avoid these limitations.

To understand Fast’s diversity data, I compared it to Google’s hiring data from their 2020 diversity report.  Since Holland specifically calls out growth in the last 18 months and focusing on “the top 1% of major tech companies”, Google feels like an appropriate benchmark.

          Fast     Google
Male      68.8%    67.5%
Female    31.2%    32.5%
White     62.4%    43.1%
Asian     28.4%    48.5%
Latinx     4.6%     6.6%
Black      4.6%     5.5%

Given that Fast achieved essentially identical gender diversity and significantly less ethnic diversity than Google, which has itself recently paid fines for documented hiring biases, a reply to Holland’s thread seems justified by the data.
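To put a number behind “significantly”: a back-of-the-envelope one-sample proportion test, sketched below using only the Python standard library, treating Google’s 43.1% White share as a fixed benchmark and using the 68-of-109 White count implied by the coding above (an illustration, not a substitute for proper self-reported data):

```python
import math

# One-sample z-test: is Fast's observed White share (68 of 109, ~62.4%)
# consistent with Google's 43.1% benchmark, treated as a fixed value?
count, n = 68, 109
p0 = 0.431

p_hat = count / n
se = math.sqrt(p0 * (1 - p0) / n)           # standard error under H0
z = (p_hat - p0) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

print(f"z = {z:.2f}, p = {p_value:.6f}")    # z ≈ 4.07, p < 0.0001
```

Running the same test on the gender split (75 of 109 male vs. Google’s 67.5%) gives z ≈ 0.29, which is why I describe the gender numbers as essentially identical and the ethnicity numbers as significantly different.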

Holland Tip #1: “We strictly do all recruiting internally” and “recruiting internally keeps tighter quality control”
Commentary: It isn’t entirely clear what Holland means here, since recruiting can include any number of responsibilities, from sourcing and screening to interviewing and negotiation.  There are entirely valid reasons to keep any and all of those processes internal, but unless they are specifically designed to increase diversity, internal processes tend to favor the status quo.  And since the status quo in tech is overwhelmingly white and male and privileged, that means perpetuating those characteristics.
Alternative Tip #1: Because diversity begets diversity, use outside resources (including diverse external sourcing that you pay for) to challenge the status quo when needed.

Holland Tip #2: “we receive A LOT of inbound” and “yet still put 90% of effort into outbound sourcing to find the exact background & skill set we are looking for”
Commentary: A wide funnel of inbound is a great sign of traction but not always useful for increasing diversity, since those likely to apply are those likely to know, and those likely to know are those likely to already be in your network.  So outbound is an absolutely critical part of increasing diversity.  The red flag here is how that outbound is happening: not to broaden the inbound pipeline but to narrow it.  “Exact background & skill set” are often code words for biased filters like specific universities or companies that already have issues with bias; just as predators aggregate pollutants from the animals they eat, relying on biased sources means you aggregate those biases.
Alternative Tip #2:  Keep a wide funnel in your inbound by using clear job descriptions with bias-reducing features, and use outbound in a targeted way to widen, not narrow, wherever you have measured diversity issues.

Holland Tip #3: “we almost exclusively hired experienced people”
Commentary: “Experienced” is an interesting euphemism, but Holland fortunately defines his usage as “hiring people who are currently thriving in a place that would be our next level up.”  This seems at odds with his first tip, since hiring specifically from competitors is very much like using outside recruiters (you are relying on the filtering of others), but as with Tip #2, the net effect is that you aggregate their biases.
Alternative Tip #3:  Monitor source diversity to avoid overindexing on single sources, be they specific schools, companies, or industries, and target increasing source diversity independently of candidate diversity.

Holland Tip #4: “a-players attract a-players” and “the more we have focused on the best people, the higher the quality of applicant we get”
Commentary: The tendency of like to attract like (often called homophily) is well documented, and when used purposefully, it can actually be a diversity superpower: diverse companies tend to get more diverse because of it.  But when type matching (like “a-players”), homophily generally reduces diversity by creating monolithic patterns for what is considered “the best”.
Alternative Tip #4:  Recognize and plan for broad representations of talent by having clear plans for recruiting unique skills and talents, then valuing and utilizing them.  Think of Venn diagrams that touch but don’t overlap more than 50%.

Holland Tip #5: “our team are the hiring panel, most of our team interview, screen and make hiring decisions”
Commentary: Unless a hiring manager is specifically attuned to increasing the diversity of their team, hiring panels (rather than single hiring managers) increase diversity because they increase the number of potential advocates for underrepresented talent.  But this only occurs if the panel itself is diverse (again, homophily applies), trained and attuned to increasing diversity, and not required to reach unanimous decisions.
Alternative Tip #5: Use diverse panels with specific training and allow for non-unanimous decisions; augment with external panel members when there is not enough internal diversity or availability.

Holland Tip #6: “internal referrals are exceptionally value [sic], they know the best people they have worked with”
Commentary: Internal referrals are specifically homophilic; as Holland says, we know who we have previously worked with.  This leads to the compounding of problems/virtues: diverse teams get more diverse, monocultures get more monocultural.
Alternative Tip #6:  Make the value of diverse referrals explicit by publishing diversity data and by encouraging diverse referrals both individually and systematically (whether through rewards or through declining to consider overrepresented referrals).

Hiring high performing teams is hard, and so for founders who believe in doing hard work, Holland’s thread is a seductive paean to that effort.  But as white male founders, it is easy for us to think that easy things are hard simply because we believe they should be, even when the easiest thing to do is hire other white males: they are probably who you know, who you’ve worked with, and who are easiest to attract.

So when we talk about doing the work, we need to recognize what the real work is.  Literal decades of research tell us how important diversity and inclusion are to high performing teams; failing to explicitly address those challenges in a thread about hiring practices is an acknowledgment that you either don’t understand the current state of diversity in the workplace or aren’t actually committed to team as a strength.  Either way, we can and should do better; as Holland says at the end of his thread, “people deserve it”.

Side Note: I recently read some of my older blog posts and was aghast at the privilege encoded in them (along with the really terrible advice that resulted).  While Holland’s thread created a cacophony of justifiable anger directed at both him and Fast, I wonder whether that facilitates individual change or mires it in defensiveness; could I look back at my entries and acknowledge their flaws had I been attacked for them at the time?  I also wonder at the shelf life of even the alternative tips I list here; hopefully new science proves some of them suboptimal and we get ever better at creating the more equitable world we want.