TLDR: Almost all economic gains come from change, and so profit maximalism demands a world where everything changes, all of the time. But change has a cost for individuals and communities; be cautious, in yourself and your designs.

Recently, I was introduced to the Benedictine ‘Vow of Stability’: a promise that, upon entering a monastic community, you will remain a part of it for life. Philosophically, it is rooted in a belief that instead of seeking out an environment of perfection, you have an ethical responsibility to improve where you are. And practically, it was historically useful to have monks who weren’t monastery-hopping in search of greener grass.

My introduction to the VOS was secular; it came from someone who agreed early on in their marriage to pursue stability as a family wherever they could. They recognized that voluntary changes often created additional stress, and that even when a change could be a step up in circumstance, there was a hidden cost in the turbulence.

As they told me about the choices they made, it struck me just how much of modern capitalism requires paying a change fee in order to advance.

The easiest example is their home, which has quadrupled in value over the 20 years they’ve been in it. They have often contemplated selling in order to harvest that value but that would require destabilizing their family. So instead they bought an empty lot nearby, with the goal of eventually building a home they can comfortably retire in while remaining part of the community.

Slightly more nuanced is education. Their school district has consistently been in the bottom 25% of their already less-than-stellar state. But rather than move, they have sought out enrichment activities for their kids and volunteered to make things better.

I’m not suggesting that their VOS-inspired choices are right. I’ve moved for job opportunities, to give my son an education that fits him, and to be in places that support my individual and family happiness. And I’m content with the choices I’ve made.


But I was in NYC last week and it was just so easy to get business done, because my network there is expansive. When we moved to San Diego, I essentially hit the reset button, both personally and professionally, as did the rest of the family. As individuals, it is worth considering whether all change is worth it.

Perhaps more importantly, when we design interventions, we must be on guard against demanding too many change fees from those we intend to benefit. Turbulence upsets not just the individual balance but also the group; when my son changes schools, it is not only he who loses friends – each of his friends also loses him. What can feel small rapidly becomes larger.

VOS behaviors are all around. When a stock pays a dividend rather than trying to constantly escalate in price. When a factory stays put rather than offshoring. Many changes are choices, and we could all stand to choose a little more carefully.

TLDR: Learning is not a smooth curve; we frequently grow in spurts and jumps. But those rarely align with external validation like graduation or licensing. Credentialism isn’t just inequitable, it is a business and cultural liability. And there is a market opportunity in refusing to accept the bias.

Recently, there has been a trend in mental health toward using associates: psychologists who are not yet fully licensed and cannot practice on their own without supervision. The shift is driven largely by cost; associates are 20-40% cheaper than their fully licensed peers, mostly because they cannot open their own practices and must give up a cut to a mediating entity.

The requirements for an associate to become fully licensed vary by state but generally include an exam, some number of supervised hours practiced across various populations, and an annoying amount of paperwork that gets returned to you if you misspell something. Bureaucracy gonna bureauc.

Obviously, experience matters. Study after study shows that people really do get better at things over time; some version of “practice makes perfect” arose in pretty much every language for a reason.

But that improvement is lumpy. When we look at learning, people tend to proceed in epochs, with long periods of level performance between jumps in ability. And those jumps are hard to pin down; in a cohort analysis, when any given person will make the leap is largely unpredictable. People who are learning and working together don’t necessarily hit milestones at the same time.

And so it isn’t necessarily true that you’re a better therapist today than you were yesterday and it certainly isn’t true that you’re a better therapist the day after the state declares you fully licensed than you were the day before. No one magically produces better work simply because they walked across a stage and got handed a degree.

And yet the day after graduation, your chances of getting a job go up dramatically. Credentialism, or the tendency to rely on formal qualifications over demonstrated ability, is rampant in hiring simply because across a large number of applicants, it is very difficult to find a better proxy.

The problem, of course, is that not everyone has equal access to credentials. Licensing fees run into the thousands of dollars in some professions, to say nothing of the time spent on a bewildering number of forms. Credentialism tends to amplify sexism, racism, and classism, because the systems that bestow credentials are themselves sexist, racist, and classist.

It is easy to look at associates as inferior and lambast the companies that increasingly rely on them, because hiring isn’t the only place credentialism occurs; consumers can just as easily interpret a credential as a valid signal of quality without actually determining whether there is a real difference.

But ultimately, that drives up market pricing. Consumers who put false faith in credentials end up spending more, without any additional return. And because consumers are buying based on the credential, people are forced to get credentials to increase their wages; the tail is wagging the dog.

As employers, we have structural tools at our disposal: temp-to-hire as a method of determining actual ability, removing degree requirements, etc. And as consumers, we can use reviews as an alternative to credentials. But all of these are choices: ultimately, it is on us to give people the chance to demonstrate their ability to perform beyond their credentials.


And there is plenty to be gained. Besides combating inequity, in a market economy that overvalues credentials there is a price opportunity in not making the same mistake: you can get more, and better, for cheaper, if you’re willing to fight the bias that others are falling victim to. Fighting credentialism is a moral imperative, but if you need an economic justification, it is certainly there.

TLDR: Even if our processes buffer us from academic misconduct, we still need to be conscious of both the practices and people that we platform. Above all else, we must be applied, behavioral, and scientific.

Yesterday on BlueSky, Neil Lewis Jr. pointed out the latest Atlantic article by Daniel Engber on academic misconduct in behavioral science, and one of the themes was compensation and the outsized benefits that come with novel findings: tenure, grants, social status, etc.

My first reaction to the article was dismissal: in my version of applied behavioral science (SIDE), where every intervention gets validated by a pilot and published studies are used only as generative prompts during the Design phase, academic misconduct doesn’t have the same scale of negative impact. If someone made something up, the pilot will show that it doesn’t work and as long as we don’t also falsify the pilot results, all it did was waste time.

But then I started thinking about our role in the attention economy. Most of us still read and debate journal articles. I talk to clients about work that brings academia closer to application, without actually being in those labs and watching that data collection. We’re not just consumers; we also use our expertise to direct attention.

A few months ago, I blocked someone on LinkedIn. We first interacted back when I published the SIDE model; they insisted that Evaluation was unnecessary if an intervention was soundly based in theory. I objected and said the whole point of science was being willing to collect evidence that might disprove a generalized theory.

We sparred a few more times, most recently when they insisted that one cognitive model was more “scientific” than another simply because it was published in an academic journal. After a few rounds of comments, I blocked them and moved on.

But sometimes they pop up in my feed because we’re both included on a “who to read in applied behavioral science” list. And I have a visceral reaction every time it happens, because I don’t want what I do to ever be associated with their approach.

Applied behavioral scientists can’t just opt out of the discourse on academic misconduct, even if our methods shelter us from its ill effects. Because as experts in our field, we still have a role in who we platform. When we say “you should follow X” or “read Y”, we’re socially endorsing it.

So what to do, in a world where you can’t validate everything? For one, use papers and case studies as examples, not rules. And be sure you’re communicating to others that what matters about those examples isn’t the phenomenon but the methodology. Our job as applied behavioral scientists is to do and teach a scientific approach to changing behavior, not to create generalizable descriptions of the world.

Be careful who and what you platform. And if someone says in public that pilots are unnecessary and whatever methodology gets published in a journal is the thing we should do…don’t put them on the same list with me.

TLDR: Narrow misses feel worse than wide ones because it is easier to imagine all the things that might have gone differently. But regret is a red herring; a near miss is an almost win and needs an “ante vitam” meeting.


Imagine that you’re late for your train in two parallel universes. In one universe, you arrive on the platform just as the train is pulling away. In the other, you missed it by 30 minutes. Which universe would you rather be in?

Most people intuitively understand that missing it by a few seconds will feel worse, and they’re right: we feel more regret about narrow misses than wide ones because of counterfactual thinking. It is easy to imagine a dozen “if only” ways you could have saved a few seconds: getting out of the car quicker, choosing your shirt faster, not stopping to look for your keys. It is harder to imagine saving 30 minutes.

Anticipating that regret, most of us would say we’d rather have missed it by a mile.

But regret is a red herring; it tends to make us focus on our past instead of our future. And that change in framing makes all the difference. Because in reality, missing something by a few seconds means that next time, you’re very likely to make it. If you are going to be taking this train every day, it is far better to have missed it by an inch than a mile.

Back to the train platforms. Now you’re not alone: you’re traveling with your best friend. 


When you miss by inches, the finger-pointing begins, because when we feel regret, we often externalize it and begin the blame game; it feels better to grumble about how slow your best friend is than to confront the reality that it might have been your fault.

This happens all the time in workplaces: near misses devolve into analysis paralysis as Product, Design, Marketing, and Tech focus on who to castigate. Blame is easy because a different decision by any one of those departments could have turned the loss into a win.

Think about the 2024 presidential election. As Democrats argue about “the reason” for losing, the reality is that with a popular vote margin of only about 1%, most of the cited reasons are valid and a change in any of them could have swung the balance. Racism, sexism, communication, the lack of a primary, whatever…they’re all on the table in the way they wouldn’t be with a 30% deficit.

It is important to teach your team to identify the size of a gap and to change your strategy accordingly. With wide misses, you need a few heavily resourced interventions capable of closing a large gap. But with almost wins, it is more important to spread your resources across a number of smaller bets, since any of them is enough to tip the balance. Making this explicit can help cross-functional teams quickly move away from blame and toward solutions.

The frame change can be as simple as a name change. A post mortem is “after death” and makes sense for wide misses. But for an almost win, consider an ante vitam meeting, because an intervention that almost worked is just “before life”.

For the last several years, I have been making myself available for free, first-come-first-served meetings in the style of academic office hours. They’re 30 minutes, 1:1, virtual, and guided by the participant on topics ranging from career advice to applied behavioral science.  And they’re specifically designed to address the inequities inherent in gatekeeping culture.

I’m a big believer in the power of framing to shift how we think about our behaviors.  When we talk about volunteering our time to help people via warm intros, it sounds positive.  And it is; we could be spending that time on ourselves.  But at the same time, there is another frame: that meeting via warm intro is a form of active discrimination against those who don’t have the same social access.  Yes, we’re helping people get ahead, but we’re also often helping the people who are already ahead to get even further ahead.  Warm intro meetings are more likely to be white, male, educated, etc. because those are the people already most likely to have access to the social elite, and so rather than addressing inequity, we’re magnifying it.

To measure our ability to create equitable access through open office hours, in 2021 we released our first Diversity Report, a concentrated effort to make sure that this system is in fact serving a broad range of underrepresented people.  In our 2022 edition, we started doing trend analysis, which we’ve continued this year.

I use the plural repeatedly throughout this report.  That’s because making office hours happen is a team effort; even if I’m the one actually showing up, there is a tremendous amount of work from a number of people to make sure that we follow up on action items, share our learnings, and prepare this report every year.  In particular, Melanie Perera and Alaanah Sallay from Oceans and Lorraine Minister are instrumental in connecting job seekers with opportunities, sending out materials, and preparing the Mentor Minutes for social media. I am grateful for all they do and hope you take a moment to celebrate them.

Before looking at the results, a few quick notes on methodology.  To gather the data, we set up a Google Forms survey and then used Zapier to automatically email participants with a link after each meeting, along with an end-of-year followup reminding them of the survey.  In addition to asking for qualitative feedback to help us improve, we asked basic demographic questions about age, gender identity, sexual orientation, ethnicity, etc.  No questions were required, all were multiple choice with options presented in random order, with “Other” and “Prefer not to say” options included.

For 2023, we went from ~550 meetings last year to more like ~750 meetings this year. We received 140 survey responses, giving us a response rate of ~19%.

That is lower than last year, so we’ve made some changes for 2024 to try to make sure we’re getting more accurate data.  For example, we’re now using Zoom’s automatic followup feature to give people the survey immediately after the meeting, which should also improve data accuracy.  Even with a lower response rate, however, we still have a sizable sample, so we can make some reasonable assumptions that this data is representative of the larger population of participants. You could always make an argument that some segments are more likely to respond; caveat emptor.
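If you want to gut-check the representativeness claim yourself, the back-of-the-envelope arithmetic is below. The counts are the ones reported above; the margin-of-error line uses the standard normal approximation for a proportion (worst case p = 0.5), so treat it as a rough sanity check rather than a formal analysis.

```python
import math

# Counts from this year's report.
meetings = 750      # approximate number of office hours held in 2023
responses = 140     # survey responses received

print(f"Response rate: {responses / meetings:.1%}")  # ~18.7%, the ~19% cited above

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a reported proportion (normal approximation, worst case p=0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"Margin of error at n={responses}: ±{margin_of_error(responses):.1%}")
# Roughly ±8 points, which is why small swings in the percentages below
# shouldn't be over-interpreted.
```

With roughly 140 responses, any individual percentage below carries a margin of error of up to about ±8 points, which is worth keeping in mind when reading the year-over-year changes.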

I’m changing the format somewhat this year, as most data has remained relatively flat from last year (I’ve included the change from the 2022 numbers in parentheses), and so it is more appropriate to save the commentary for the end. As always, I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

Age

The mean age of respondents was 36 (+2) and the median was 35 (+2).  The best comparison is probably the median age of the US working population, which is 42, so overall we’re skewed a little younger.  However, the standard deviation was around 9 (+0), with participants ranging from 20 to 60, so there was a good bit of variability.

Gender

Among respondents, 57% (+1) identified as women, 37% (-1) identified as men, and 6% (+1) identified as non-binary/genderqueer. This is a fairly large overrepresentation of women and potentially a large overrepresentation of non-binary/genderqueer people, although that number is harder to evaluate because of the correlation with age.

Sexual Orientation

78% (+0) of respondents identified as heterosexual, with 22% (+0) identifying as some form of LGBQ.  This is significantly different than the base rate of 93% and 7%, respectively.

Race and Ethnicity

53% (+0) of respondents identified as White (base rate 77%), 9% (+2) as Black or African American (base rate 13%), 23% (+4) as Asian (base rate 6%), 1% (+0) Native American (base rate 2%), and 14% (-6) as More Than One Ethnicity (base rate 2%).  In addition, 17% (+5) identified as Hispanic or Latino/a/x (base rate 18%), with Mexican, Mexican American, or Chicano/a/x as the largest group.

Other

27% (+7) of respondents are first-generation Americans (base rate 14%), while 27% (+4) are first-generation college graduates (base rate 35%).  37% (+10) view themselves as underrepresented in their field, while 10% (+1) are living in poverty and 9% (+4) identify as disabled. 27% (-14) did not add any additional tagging. 73% (-1) are currently living in the United States.

Commentary and Commitments

Demographically, there were surprisingly few changes this year to individual categories; most were within the margin of error for our sample size.  The largest change actually came in the number of folks who identified with no underrepresented categories of any kind, 9.3% (-3.3).  None of gender identity, ethnicity, or sexual orientation meaningfully changed, which suggests that this year, more Straight White Males saw themselves as reflecting other underrepresented identities.

On the one hand, that could indeed be progress: we could be reaching a different audience than in previous years.  Or perhaps Straight White Men are simply coming to recognize a broader array of potential ways in which someone can face challenges.  The cynical view, however, is potentially quite worrisome: that the language of representation is being co-opted by those struggling to find any possible way to distance themselves from the trappings of privilege.

Regardless of how you choose to interpret it, it does present a compelling case for why intersectionality needs to become the default way of looking at representation.  Saying that 90.7% of office hours were devoted to underrepresented folks obscures the very real difference between facing one axis of underrepresentation and facing several.
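To make that intersectional view concrete, here is a minimal sketch of how a tally like the 9.3% figure above can be computed from a survey export. The file name, column names, and category labels are all illustrative stand-ins, not the actual Google Forms schema, so adapt them to whatever your own form produces.

```python
import pandas as pd

# Hypothetical export of the survey responses; the column names and category
# labels here are illustrative, not the real Google Forms schema.
df = pd.read_csv("office_hours_2023_responses.csv")

def count_underrepresented(row):
    """Tally how many distinct underrepresented identities one respondent reports."""
    count = 0
    count += row["gender"] not in ("Man", "Prefer not to say")
    count += row["sexual_orientation"] not in ("Heterosexual", "Prefer not to say")
    count += row["ethnicity"] not in ("White", "Prefer not to say")
    other = row.get("other_tags")  # e.g. first-gen, disabled, living in poverty
    count += isinstance(other, str) and other.strip() != ""
    return count

df["n_underrepresented"] = df.apply(count_underrepresented, axis=1)

# Share of respondents by number of underrepresented identities reported.
print(df["n_underrepresented"].value_counts(normalize=True).sort_index())
```

The output is the distribution of respondents by how many underrepresented identities they report, the same breakdown the 2021 report visualized as a pie chart.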

We made two important commitments in our 2022 report, both of which we were able to honor.  The first was to repurpose more of the content generated in office hours for a wider audience.  It isn’t particularly efficient for me to say the same thing over and over again, when we could be using office hours for custom questions and content.  So we did more editing this year to distribute clips of the advice I repeated most often.

We also promised to increase our available tooling.  This year, for example, we refined our job tracking spreadsheets so we can more easily refer people for open positions and released self-paced courses on applied behavioral science, thanks in large part to Lorraine Minister’s efforts as our Head of Education.

For 2024, we’re going to continue to focus on scalability:

  • AI-identified content.  Using new tooling from a stealth partner, we’re now able to automatically identify the phrases and examples I use most often and clip them for sharing (a rough sketch of the underlying idea follows this list).  The shift from manual to automatic identification should significantly increase our ability to release in a timely fashion.
  • Structured education.  Our approach to office hours was born out of my experiences in academia and I believe that 1:1 conversations benefit most when they’re an augment to structured learning content.  So we’ll be introducing new guides and classes this year to help cover some of the basics and make our 1:1 time more efficient.
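As referenced in the first bullet, the clip-identification idea boils down to finding the phrases I repeat most often across session transcripts. The partner tooling is more sophisticated than this, but a bare-bones sketch of the concept, with made-up file paths and phrase lengths, might look like the following:

```python
import re
from collections import Counter
from pathlib import Path

def ngrams(words, n):
    """Yield every run of n consecutive words."""
    return zip(*(words[i:] for i in range(n)))

counts = Counter()
# Hypothetical folder of per-session transcript text files.
for path in Path("transcripts").glob("*.txt"):
    words = re.findall(r"[a-z']+", path.read_text().lower())
    for n in (4, 5, 6):  # phrase lengths long enough to be worth clipping
        counts.update(" ".join(gram) for gram in ngrams(words, n))

# Phrases that recur across many sessions are candidates for standalone clips.
for phrase, freq in counts.most_common(25):
    print(f"{freq:4d}  {phrase}")
```

Anything that shows up dozens of times across sessions is probably worth turning into a standalone clip or written guide.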

Side Note: It is eerie how many of these percentages were the same as last year, despite having a different sample size.  I went back and checked the data repeatedly, just because it felt so unusual that, for example, the percentage of White participants remained exactly identical.  It is a good reminder that things often don’t change as much as we think they do; even if they feel different day-to-day, the prevailing pressures that created the circumstance remain the same and so repeats are likely.  It also reminds me that I need to do more to put my finger on the scale to make sure that next year, things do change.

Also, CapGemini still hasn’t released diversity numbers. So that makes it two years in a row that I’ve done what they are unwilling to.

(This is a post primarily about the results of an experiment, but I did build a tool as part of it that you can use to gather behavioral feedback from former coworkers; you can go directly to WorkWithMeAgain.com to use it for free without reading about the journey or my data.)

In 2020, I started directly soliciting feedback from former colleagues in a one-question survey: would they seek (+2) or avoid (-2) working with me again? It was a deliberate attempt to use a behavioral measure to understand how different demographics experienced our time together in the workplace. The data was gathered via an anonymous form and always more than six months after working with me, to avoid a recency bias.

We spend more than a third of our life at work, often with other people. And the quality of those interactions is the single greatest predictor of our satisfaction with our work. Not money, not commute time or hours at the office, but who we spend our time with. That means that if we’re interested in making a better world, how we show up for other people is one of the single most important decisions we make.

The last time I collected this data in 2020, I discovered that while most people would seek out working with me again, white women had particularly polarized reactions to my behavior at work. While not statistically significant, both non-white and non-men coworkers had a lower desire to work with me again and their standard deviations were higher – while some truly enjoyed the experience, others found it actively aversive.

After reading through the qualitative feedback they left, I started making changes. And now, three years later, I can see the results of those changes: both non-white and non-men coworkers showed a 10% increase in their desire to work together.
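If you want to run the same comparison on your own data, here is a minimal sketch of the group-level analysis, assuming a hypothetical CSV export with year, score, gender, and ethnicity columns. (The Google Sheet mentioned at the end of this post contains the actual calculations; this is just an illustrative equivalent.)

```python
import pandas as pd

# Hypothetical CSV export of the anonymous survey; column names are illustrative.
# "score" is the single question, from -2 (avoid working together) to +2 (seek it out).
df = pd.read_csv("work_with_me_again.csv")  # assumed columns: year, score, gender, ethnicity

df["is_man"] = df["gender"].eq("Man")
df["is_white"] = df["ethnicity"].eq("White")

for dim in ("is_man", "is_white"):
    # Mean, spread, and count for each group in each collection year (2020 vs. 2023).
    summary = df.groupby(["year", dim])["score"].agg(["mean", "std", "count"])
    print(summary)
    # Year-over-year change in each group's mean score.
    print(summary["mean"].unstack("year").diff(axis=1), "\n")
```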

Interestingly, there was also a slight improvement for coworkers who were white or men; showing up better for underrepresented groups helped me show up better for everyone.

So the obvious question is: what changed?

One of the limitations of this simple survey is that I can’t say for sure; while people could leave me qualitative feedback, those comments are snapshots of momentary interactions rather than reflections on a longer relationship.

That said, looking at the comments in 2020 and in 2023, a few potential differences emerge. These are filtered through my personal experience, however, so should be taken with a grain of salt.

One trend that seems clear between the two sets of comments is how my role (and thus I) was perceived. Prior to 2020, the majority of the comments focus on my leadership and the notion of working for me, whereas the 2023 comments tend to center on my expertise and the idea of working with me. This is probably an inevitable shift that occurs somewhat because of tenure but is also a reflection point; am I my best self when I’m coaching more than managing? Should I seek out more coaching roles?

There is also a shift between direct judgment and framework establishment. Prior to 2020, many of the comments center on my ability to make executive decisions about what someone should or shouldn’t do – when those decisions were perceived as wrong, people were less likely to want to work with me again. The 2023 comments reflect a shift toward presenting frameworks by which decisions could be made, so that people felt more autonomy that aligned with their accountability.

There is another source of data that confirms this shift. In addition to the Work With Me Again survey, I have an anonymous form, linked in my email signature, that people can use to give me feedback. Many of the comments post-2020 focus on my questioning and how working as a thought partner made people feel “inspired, not embarrassed” when they didn’t have all the answers.

Part of this may also be colored by how I left my most recent role. When I resigned from frog/CapGemini because of their refusal to release diversity data, it is possible that it increased underrepresented people’s willingness to work with me again. How someone leaves a role can also affect who is willing to follow them forward, and it highlights how important clear messaging is when switching jobs. Despite all the noise about radical candor and transparency in the workplace, the simple fact is that most people just leave without ever really being clear why. And that’s something we need to change, both at an individual and company level.

I’m happy to see that I was able to close the gap over the last three years, but it doesn’t mean I should stop, and hopefully you won’t either. If you’re committed to doing the same kind of work, the tools I used are publicly available at WorkWithMeAgain.com.  The site lets you copy the Google Sheet that contains all the calculations and includes instructions on how to launch the survey to your former coworkers.  I am absolutely convinced that putting in the effort to gather meaningful behavioral feedback can be a key component in how we change our individual behaviors and thus the world.

Side Note: One of the things that is clear from the over 200 people that have taken the survey is that there are some people out there who feel deeply wronged by me. That’s potentially inevitable and part of being human, but it doesn’t have to be the end of the story: if an apology would help bring closure, I’m here.

For the last several years, I have been making myself available for free, first-come-first-served meetings that I call office hours. They’re 30-minute, 1:1 virtual meetings, guided by the participant, on topics ranging from career advice to applied behavioral science.  And they’re motivated by the belief that when we require introductions or other forms of social proof to gate access to our time, we replicate the existing systemic biases inherent in those social systems.

In 2021, we released our first Diversity Report, a concentrated effort to make sure that this system is in fact serving a broad range of underrepresented people.  In our 2022 edition, we provide updates on last year’s commitments, refresh the data, and make our commitments for next year.

I use the plural repeatedly throughout this report.  That’s because making office hours happen is a team effort; even if I’m the one actually showing up, there is a tremendous amount of work from a number of people to make sure that we follow up on action items, share our learnings, and prepare this report every year.  In particular, Melanie Perera from Oceans and Zsanelle Lalani are instrumental in connecting job seekers with opportunities, sending out materials, and preparing the Mentor Minutes for social media. I am grateful for all they do and hope you take a moment to celebrate them.

Before looking at the results, a few quick notes on methodology.  To gather the data, we set up a Google Forms survey and then used Zapier to automatically email participants with a link after each meeting, along with an end-of-year followup reminding them of the survey.  In addition to asking for qualitative feedback to help us improve, we asked basic demographic questions about age, gender identity, sexual orientation, ethnicity, etc.  No questions were required, all were multiple choice with options presented in random order, with “Other” and “Prefer not to say” options included.


For 2022, given immense changes in my personal life (including moving across the country and my co-parent’s cancer diagnosis), my ability to do office hours consistently was somewhat curtailed – we went from ~800 meetings last year to more like ~550 meetings this year. We received 150 survey responses, giving us a response rate of ~27%.

That is roughly the same as last year and generally quite high.  Typical survey response rates are less than 5%, so we can make some reasonable assumptions that this data is representative of the larger population of participants. But you could always make an argument that some segments are more likely to respond, so take it all with a grain of salt.

On to the 2022 data!  Each section has a two paragraph format: first data, then interpretation.  There will be a separate section at the end for commentary and 2023 commitments. I’ve included the change from the 2021 numbers in parentheses so you can also see trends. I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

Age

The mean age of respondents was 34 (+0) and the median was 33 (+2).  The best comparison is probably the median age of the US working population, which is 42, so overall we’re skewed a little younger.  However, the standard deviation was around 9 (+0), with participants ranging from 17 to 69, so there was a good bit of variability.

I see this as an improvement from last year.  Not only did the range of ages go up, but so did the average and median age.  Why is that a success?  Because younger people have greater access to formal mentorship programs than do older people, a general upward trend in age means we’re serving the underserved.

Gender

Among respondents, 56% (+1) identified as women, 39% (-3) identified as men, and 5% (+2) identified as non-binary/genderqueer. This is a fairly large overrepresentation of women, who generally are less likely to participate in the workforce.  For non-binary people, the population estimate jumped from 1% to 5% this year, which is in line with our number; I expect this is a more accurate number for the population rather than a genuine increase in gender identification.

This is a mild improvement from last year. It isn’t that I don’t think men are deserving of help (especially since they may be underrepresented for other reasons) but since my office hours are an attempt to democratize access and systemic sexism is an issue, this is a positive trend.

Sexual Orientation

78% (+3) of respondents identified as heterosexual, with 22% (+7) identifying as some form of LGBQ.  This is significantly different than the base rate of 93% and 7%, respectively.

I am very publicly liberal and work in and around fields that are generally more liberal (tech/design/etc.), so these numbers aren’t entirely atypical in my larger community. That said, there are still significant biases present in our often heteronormative culture, so I’m generally happy to see overrepresentation here.

Race and Ethnicity

53% (+13) of respondents identified as White (base rate 77%), 7% (-8) as Black or African American (base rate 13%), 19% (-11) as Asian (base rate 6%), 1% (+1) Native American (base rate 2%) and 20% (+5) as More Than One Ethnicity (base rate 2%).  In addition, 12% (-1) identified as Hispanic or Latino/a/x (base rate 18%), with Mexican, Mexican American, or Chicano/a/x as the largest group.

This is obviously disappointing; a 13-point jump in White is a lot, even if it is still significantly below the baseline. This increase, however, is correlated with an increase in meetings with people based outside the United States; 58% of international participants identified as White, compared to just 47% of US-based participants.

Other

20% (-3) of respondents are first-generation Americans (base rate 14%), while 23% (+5) are first-generation college graduates (base rate 35%).  27% (-13) view themselves as underrepresented in their field, while 9% are living in poverty and 5% identify as disabled. 41% (+3) did not add any additional tagging. 74% are currently living in the United States, while 26% live in one of eighteen other countries, with Canada, Germany, and the UK being the most represented.

Commentary and Commitments

In 2022, it was difficult for me to react to these findings because the only data I had to compare them with was the base rate. In 2023, I have last year’s data as a benchmark and generally come away with mixed feelings.

On the positive side, age, gender, and sexual orientation all showed year-over-year increases in representation.  While the gains were modest, they trended in the right direction and over a much larger sample size, giving us increasing confidence that we are serving those who need it most.

The biggest disappointment is obviously the increase in White-only participants.  Even with the growth in international audience, the purpose of my office hours is to reduce systemic bias and I simply can’t do that if I’m not meeting with those who face biases related to race and ethnicity.  While the rate of straight, cis White men who didn’t identify with any underrepresented categories remained the same this year at 13%, this is a place where I expect to see year-over-year improvements and so no change simply isn’t good enough.  It is on me to take action in 2023 to change this trajectory.

We made two important commitments in our 2021 report, both of which we were able to honor.  As promised, we began releasing clips from our office hours across multiple social media platforms and will continue to do that into 2023.  We also exceeded our $5K committed budget and spent closer to $12K supporting the needs of individuals who had challenges this year; while we have not yet set a budget for 2023, we will continue to push forward on that front.

There were a number of other operational changes to office hours this year, across the technology, processes, and team available.  For example, we introduced a tracker that allowed us to more easily view the applications of those who were looking for work and make referrals where appropriate; we assisted in 61 job searches in 2022.

I continue to believe that public, open office hours on a first-come, first-served basis can be a lever for reducing some forms of systemic bias.  Office hours are not just an opportunity for mentorship but a chance to deploy resources against real needs, whether that is using social capital to make a referral or financial capital to provide pilot funding and even essentials like food and interview clothing.

For 2023, we’re going to concentrate on the scalability of our impact by addressing two key shortcomings:

  • Reusable content.  Reviewing our Vowel recordings from this year, it is clear that I’m spending a tremendous amount of time answering very similar questions. We’re working on everything from written guides and short videos to a chatbot trained on the office hour transcripts to make sure that we can serve more people.  I don’t intend to spend less time doing office hours but hopefully these solutions allow us to concentrate that time on increasingly personal questions and specific situations.
  • Increased tooling.  While the ability to distribute Vowel recordings and shared notes has been a tremendous help to individual participants, process improvements like the job tracker have allowed us to more quickly provide asynchronous help in the moments when people need it most.  Rather than confining our impact to the office hours, we’re going to intensify our tooling, ranging from lightweight spreadsheets to custom web-based calculators, in order to create new scaffolding for people to help themselves.

I fundamentally believe that transparency helps drive accountability and accountability allows for autonomy.  My hope is to be able to offer an updated diversity report yearly for as long as I am able to continue doing office hours at this pace and with this team.  As I mentioned earlier, I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

Side Note: When I released the diversity report for 2021, I wrote “Sometimes, doing the right thing feels absolutely ridiculous”. In August of 2022, I left my global executive role at CapGemini because they refused to release basic diversity data. Putting myself out of work during a recession feels like one of those ridiculous things and I am acutely aware of the privilege that allowed me to do it.  But accountability matters.  Working toward change from within a company is important but when business leaders refuse outright to even consider change, it is time to go.  So I’m going to keep releasing these reports yearly and I hope CapGemini and others consider doing the same.

Humans are habituation machines.  Once something becomes true for us, our brain starts incorporating it into our reality through selective attention and a variety of other cognitive biases, such that it is hard to remember a time when it wasn’t true.

Take the internet. If you’re old enough, you might be able to dredge up some specific memories about a time before ubiquitous connectivity.  But even those memories are fairly selective and it is hard to really emotionally connect with them; the internet simply is in our present reality.

Diversity reports are another example.  20 years ago, it wasn’t ubiquitously true that every major company released a comprehensive report on the demographics of its workforce.  And yet now it would be surprising to find a major company that doesn’t.  Accountability allows autonomy and transparent data is the first step toward that accountability.

The link between accountability and autonomy isn’t just for big companies; it is a core building block of any service relationship.  For several years now, I have offered my time as a public service in the form of open office hours, which I wrote a guide to when I started.

But in order for me to offer that public service in an accountable way, I also need to be transparent.  This post is my attempt to do that, in what I hope to make a yearly practice, by releasing diversity statistics for my 2021 office hours.

First, a few quick notes on methodology.  To gather the data, we set up a Google Forms survey and then used Zapier to automatically email participants with a link after each meeting.  In addition to asking for qualitative feedback to help us improve, we asked basic demographic questions about age, gender identity, sexual orientation, ethnicity, etc.  No questions were required, all were multiple choice, with “Other” and “Prefer not to say” options included.

For 2021, I committed to two hours per day of office hours in 30-minute slots, or roughly 1,000 potential meetings across the year (four slots per weekday).  While obviously I couldn’t always manage that, we did have a utilization rate of ~80%, so ~800 meetings in total.  Since we started collecting diversity data in November, we have only two months of participants to work with, or ~130 people.  We received 40 survey responses, giving us a response rate of ~30%.

Generally speaking, that’s a lot.  Typical survey response rates are less than 5%, so we can make some reasonable assumptions that this data is representative of the larger population of participants.  That said, you could always make an argument that some segments are more likely to respond, so take it all with a grain of salt.

On to the 2021 data!  Each section has a two paragraph format: data, then interpretation.  There will be a separate section at the end for commentary and 2022 commitments.  I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

[Figure: pie chart of 2021 Open Office Hours participants by number of underrepresented identities (gender, ethnicity, sexual orientation, and other, e.g. first-gen college): 12.5% have none, 17.5% have one, 35% have two, 25% have three, and 10% have four.]

Age

The mean age of respondents was 33.6 and the median was 31.  The best comparison is probably the median age of the US working population, which is 42, so overall we’re skewed a little younger.  However, the standard deviation was around 9, with participants ranging from 20 to 59, so there was a good bit of variability.

It is hard to interpret this in terms of representativeness.  I was 39 in this period of 2021, so there are a variety of reasons why people older than me might not have felt I could be supportive to them.  And younger people are probably more comfortable with the idea of digital open office hours generally; both might be factors.

Gender

Among respondents, 55% identified as women, 42% identified as men, and 3% identified as non-binary.  For women and men, these numbers are essentially the same as the workforce participation rates.  For non-binary people, this is likely a bit higher than the base rate of less than 1%, although it is a very small sample and the population-level data is unreliable.

Candidly, I was initially disappointed by these results.  Given that my office hours are an attempt to democratize access and systemic sexism is an issue, I had hoped to reach a group that was more heavily skewed.  This shows the danger of univariate thinking, however; as we continue to look at the other forms of diversity, a different picture emerges, so I’d like to withhold judgment for a bit.

Sexual Orientation

75% of respondents identified as heterosexual, with 20% identifying as bisexual and 5% preferring not to answer.  This is significantly different than the base rate of 94% and 6%, respectively.

I honestly don’t have a ready explanation for this.  Because participants skew younger and the proportion of the population that identifies as non-heterosexual also skews younger, it may simply be due to a mediating variable.  It could also be a network effect driven by homophily; my political stances tend to be relatively public, so it could be self-selection.  I simply don’t know.

Race and Ethnicity

40% of respondents identified as White (base rate 77%), 15% as Black or African American (13%), 30% as Asian (6%), and 15% as More Than One Ethnicity (2%).  In addition, 13% identified as Hispanic or Latino/a/x (18%), with Mexican, Mexican American, or Chicano/a/x as the largest group.

There is a lot to unpack here.  It is unclear why there is a massive overrepresentation of Asian people and people who viewed themselves as having a mixed ethnicity; all of the factors from sexual orientation could potentially be at play here.  There is certainly room for growth in other categories, although as with gender, it is hard to look at these results in isolation.

Other

23% of respondents are first-generation Americans (base rate 14%), while 18% are first-generation college graduates (base rate 35%).  40% view themselves as underrepresented in their field, while 38% did not add any additional tagging.

I was surprised by the base rate of first-generation college graduates, although I probably shouldn’t be: because almost all of the people I interact with in a professional context have degrees, it is easy to forget that higher education is far from ubiquitous.  I was also surprised by the overrepresentation of first-generation Americans; I can theorize as to why they might be more likely to be interested in office hours but have no proof.

Commentary and Commitments

As with any personal feedback, it is hard to know how to react to this data.  I have long believed that public, open office hours on a first-come, first-served basis could be a potential lever for reducing some forms of systemic bias.  If they remain only at the level of mentorship, office hours are unlikely to create real change: we have evidence that women are over-mentored and under-sponsored and there is reason to believe that is true of other underrepresented groups as well.  But to the extent that we are able to use them as a catalyst for sponsorship, where resources are expended to create new opportunities, they have power.

If the purpose of open office hours is to specifically focus on the underrepresented, then we’ve achieved some success: only 13% of participants were straight, cis white men who didn’t identify with any underrepresented categories.  But there are still clear places where there is much room for growth (like Black or African Americans, where we only achieved parity with the population).  The question becomes how to create that change.

For 2022, I’m going to concentrate on two key pressures: reducing suspicion (an inhibiting pressure) and increasing followup (a promoting pressure).

In a perfect world, everyone would know that office hours exist, decide for themselves if they are beneficial, and then take a slot that works in their schedule.  But we live in an imperfect world.  I’m frequently asked whether there is a fee and many people have expressed disbelief that someone would offer free support.  And these doubts were not evenly distributed; anecdotally, it was more often underrepresented participants who expressed the most suspicion.

To me, this is entirely logical.  We know underrepresented people are receiving the least help and are the most likely to be exploited.  So when faced with an opportunity for free support (from a cis white dude, no less), being cautious is a reasonable reaction.

Here is what I’m going to do about it:

  • Release recordings.  We use Vowel as a platform for office hours, so that participants can view the video, transcript, and notes after the call has been completed (plus, it has the handy live “percentage talked” counter that helps me to remember to shut up).  In 2022, we’re going to start releasing edited clips of office hours to help clarify what people can expect and to offer proof that it is a free service.  We’ll select clips likely to be useful to others, edit them to just my video and voice, and not use anything that mentions participant details.  In our pilots so far, underrepresented groups that were shown a clip of office hours were significantly more likely to subsequently sign up for a slot than those that didn’t see a clip.
  • Clarify cost (and the lack thereof).  Previously, I relied on the academic understanding of “office hours” as a term that indicated freely available support.  But we’ve now clarified the language on both the Get Support page and LinkedIn to be clear that these slots are available completely free.

We cannot simply reduce inhibiting pressures, however – we must also increase promoting pressures.  Our follow-up surveys are generally positive but I recognize that I don’t always follow through on commitments that I make in office hours, mostly out of inattention.  So here is what I’m going to do about it:

  • Add team review.  One of my team members will review each office hours recording and document any action items I’ve agreed to, following up with support and reminders as needed.  The hope is that we deliver on every commitment that I make; this will have the added benefit of making it more likely that we transcend mentorship to full sponsorship.
  • Create a followup budget.  Some followup items require money to accomplish.  In 2021, we did this on a one-off basis but that opens the door to inequitable distribution and also makes it hard for me to limit my commitment to a level I can sustain.  So this year, I’m setting aside an initial budget of $5K that the team can tap into directly, without approval from me, to take action on items that require financial support.

Finally, we’re adding a few more tweaks simply to improve our processes and make things generally more inclusive.

  • Taking a more holistic view.  For example, adding a “disabled” option to the self-identification question, as well as a question about country of residence to capture international participation.
  • Varying the times of office hours.  For most of the year, my office hours were during working hours for people in both PST and EST.  This might create barriers for some, so I’ve created a more flexible schedule designed to allow for a wider range of participation.

I fundamentally believe that transparency helps drive accountability and accountability allows for autonomy.  My hope is to be able to offer an updated diversity report yearly for as long as I am able to continue doing office hours at this pace and with this team.  As I mentioned earlier, I’m open to questions and feedback on the analysis, as well as suggestions on what commitments you’d like me to make; just shoot me an email.

Side Note: Sometimes, doing the right thing feels absolutely ridiculous.  Pulling this report together took a few weeks and there were moments where I almost abandoned it; posting it could easily be seen as communal narcissism (which I willingly admit to being at times), so it was tempting to simply analyze the data and make the changes entirely privately. Talking about social justice action often feels like a Catch-22: do it and look performative, don’t do it and be complicit in the racist, sexist, classist status quo. So I often think of the extremity test: is the universe where nobody does a behavior better or worse than the one where everyone does?  In the case of diversity statistics, I’d far rather a world where everyone releases them than nobody does, so I posted mine in an effort to tip the scales in that direction.  Social pressure works – talking about what we do makes it incrementally more likely, on the whole, that other people also do it.  And if that feels (and is) ridiculous and results in a cascade of clown emojis…well, at least I was entertaining.