Real friends tell the truth so…you’ve got AI in your teeth.

As a social psychologist by training, I have always loved the TV show “What Would You Do?”. Loosely based on social experiments, it uses a hidden-camera format to show the varied reactions people have to ordinary situations based on environmental cues.
One of my favorite episodes from the very first season had a simple premise: people ignoring or informing strangers of social issues like food in their teeth, toilet paper on their shoes, or an unbuttoned shirt. And I love it because of how rarely people inform; the vast majority of folks clearly notice the problem but do nothing about it.
This is top of mind because recently, I was reviewing video applications for an Integration Specialist role at Oceans. In one stage of the process, we ask people to record a short video with a simple prompt: share one of your most deeply held but controversial opinions about the workplace.
I like this prompt because it is both thoughtful and personal, mimicking the psychologically safe communication that Integration Specialists have to facilitate for our Divers and Clients. And we’ve had good success using it to find talent for the current team.
Given the epically bad job market (or how epically awesome it is to work at Oceans), we got several thousand applicants for the role. So even though this was a later funnel stage, we still had about a hundred videos to watch.
It was painfully obvious who was using AI rather than thinking through their own beliefs. Across those hundred or so videos, I came to know the various answers that the flavors of LLMs tend to give (four-day work week, quality over speed, performance is about alignment), down to the bullet points and supporting evidence. Some candidates were better or worse at presenting them, but the beliefs themselves? Computer-generated.
Since this is a later pipeline stage, we generally give feedback even to the candidates who don’t move on. But how do you tell someone they’ve got AI in their teeth?
From a macro perspective, the feedback matters because candidates will likely continue to miss out on roles if they keep going as they are. But for an employer brand, staying quiet is the safe bet: at least some of the time, people shoot the messenger, and we lose nothing if candidates continue to fail elsewhere.
Fortunately, morality isn’t game theory, and one of Oceans’ values is “Integrity is our Superpower,” so we are going to deliver the feedback. But I suspect that not everyone will make the same choice. As we think about the divides that AI usage will create, it is useful to remember that many of them aren’t about AI at all: knowing which behaviors to adopt or avoid is largely a function of the feedback you get from society, and that means all the sexist, racist, and classist tropes apply. If we want a better and more equitable world, we have to be willing to tell people when using AI is hurting rather than helping.


