It has been a very long year here at Oceans. We doubled in size and put in place the structures that allow us to do it again in 2026, all while managing through a Chikungunya epidemic and a devastating cyclone. So, for the holidays and with the help of my diver Abdul, I decided to send a personalized video message to my team – all 500+ of them.

The video itself is a bit unusual (I included the one for our CEO Ian below). Because of the strong cultural values around family in Sri Lanka, it is targeted not at the employee but at their loved ones, thanking them for the support they provide that makes it possible for our Divers to go deep with their Clients.

Each video used the name and pronouns of the employee but was otherwise similar in theme, which, of course, meant that it would have been easy to just use generative AI. Because my personal recording setup is so consistent, I have thousands of hours of video of me in the same white shirt and blazer, with the same lighting, on the same black background. I already look a little bit like Max Headroom.

So I’m confident that technically, the AI-generated videos would be indistinguishable from what we actually made; I fully believe that AI can do a convincing impression of me, with enough variation to make videos that feel individualized.


But that entirely misses the point.

I am generally a fan of industrialization. I recognize that by moving from piece work to standardized forms of production, we have revolutionized what is available to the average person. And in a time when many people feel cripplingly lonely, it is tempting to want to scale empathy through that same process. After all, if it is indistinguishable from the real thing, it should have the same effect.

But knowledge changes the frame. It is why magicians don’t reveal their tricks: knowing how they work can ruin the feeling of wonder we get from seeing them performed. However technically identical they may be, AI-generated videos simply do not have the same meaning as sitting down and taking the time to think about all 500+ people in my org as I record their message.

This is going to become even more important in the years to come. Just as carving a gift by hand is different from buying it from a store, the things that we create for others will increase in interpersonal value. Yes, there will be an AI avatar of me at some point that handles all sorts of things. But that only highlights how important getting the real me is. We are defined by what we choose to automate and what we choose to honor; the future is predicated on our ability to choose wisely.

As a social psychologist by training, I always loved the TV show “What Would You Do?”. Loosely based on social experiments, it uses a hidden-camera format to show the varied reactions people have to ordinary situations based on environmental cues.

One of my favorite episodes from the very first season had a simple premise: would people inform strangers about small embarrassments like food in their teeth, toilet paper on their shoes, or an unbuttoned shirt, or would they ignore them? And I love it because of how rarely people speak up; the vast majority of folks clearly notice the problem but do nothing about it.

This is top of mind because recently, I was reviewing video applications for an Integration Specialist role at Oceans. In one stage of the process, we ask people to record a short video with a simple prompt: share one of your most deeply held but controversial opinions about the workplace.

I like this prompt because it is both thoughtful and personal, mimicking the psychologically safe communication that Integration Specialists have to facilitate for our Divers and Clients. And we’ve had good success using it to find talent for the current team.

Given the epically bad job market (or how epically awesome it is to work at Oceans), we got several thousand applicants for the role. So even though this was a later funnel stage, we still had about a hundred videos to watch.

It was painfully obvious who was using AI rather than thinking through their own beliefs. Across those hundred or so videos, I now know the various answers that the different flavors of LLM tend to give (four-day work week, quality over speed, performance is about alignment), down to the bullet points and supporting evidence. Some candidates were better or worse at presenting them, but the beliefs themselves? Computer-generated.

Since it is a later pipeline stage, we generally give feedback even to the candidates who don’t move on. But how do you tell someone they’ve got AI in their teeth?

From a macro perspective, it is important because it is likely that candidates will continue to miss out on roles if they keep going as they are. But for an employer brand, staying quiet is the safe bet; at least some of the time, people shoot the messenger, and we lose nothing if candidates continue to fail elsewhere.

Fortunately, morality isn’t game theory, and one of Oceans’ values is “Integrity is our Superpower,” so we are going to deliver the feedback. But I suspect that not everyone will make the same choice. As we think about the divides that AI usage will create, it is useful to remember that many of them aren’t about AI at all: knowing which behaviors to do or to avoid is largely a function of the feedback you get from society, and that means all the sexist/racist/classist tropes apply. If we want a better and more equitable world, we have to be willing to tell people when using AI is hurting rather than helping them.

But it plays one on TV.

In the mid-80s, Vicks launched cough syrup commercials with a simple hook: they got actors who played doctors on soap operas to endorse the benefits of the product. “I’m not a doctor, but I play one on TV” became an instant catchphrase.

Recently, I got an email from a candidate objecting to Oceans’ employment contract. And because I take these things seriously, I sat down to see if there was anything we could do to improve it.

From the email’s first bullet point, it became clear what had happened: the candidate had loaded the contract into ChatGPT and then sent me the pasted output. To confirm, I tried it myself; ChatGPT gave me more or less the same arguments and even offered to produce a redlined version.

In some respects, I love this. Most folks don’t know how to access a good lawyer, and some legal advice can be better than none at all.

But the Vicks message was that listening to actors with no medical training is foolish; the whole point was that just because someone sounds like an expert doesn’t mean they are. The commercial was encouraging adults not to play doctor themselves but to use the medication that real doctors actually recommended for them.

When you give ChatGPT a legal document, by default it will answer in legalese, the distinctive language of lawyers. And because it sounds like a lawyer (social psychologists would say the advice has face validity), it is easy to accept it as credible.

But ChatGPT is essentially an actor; it delivers its lines in a way that signals expertise without genuinely understanding them. Out of the eight points the candidate sent over, seven were clear misreadings of the contract that no actual lawyer would make; only one was a true disagreement in principle, and a relatively minor one at that.

This may cause the candidate to reject a significant opportunity based on illusory concerns. I’m going to reach out to clarify, but I am absolutely not a lawyer, don’t play one on TV, and won’t speak in legalese. Can being empathetic and direct beat out legalese? We’ll see.

It would be relatively easy for OpenAI to detect legal documents and refuse to engage with them or change tone to give advice while sounding more like your neighbor and less like an actual lawyer.

But in the current system, they have no incentive to do that: they benefit from the inferred expertise of their presentation. It makes their product look powerful and worth subscribing to.

And this is why we need to enforce the laws we have. If you induce someone to believe you are offering qualified legal advice without being an actual lawyer, you can be both civilly and criminally liable; AI-generated advice is no different, except that it is the company, not the AI, that we hold responsible. I’m not a particularly litigious person, but without economic and criminal consequences, this won’t change.

So let’s see the lawsuits. Maybe ChatGPT will choose to self-represent in court.

A lot of hustle porn (a term I first heard from Alexis Ohanian Sr.) has the same bombastic quality as sexual porn; a recent article I saw on LinkedIn screamed that you absolutely should expect your employees to spend 20% of their time outside of work on professional growth activities, because otherwise they must be terrible performers (!?!).

That isn’t real. That is the movie version of an important, messy part of our lives. And just like porn can damage our understanding of reality, particularly for those who are new to sex, hustle porn can damage our understanding of what work should actually look and feel like, particularly for those who are new to the workplace.

As part of my work at Oceans, I travel to Sri Lanka for two weeks every two months. These are planned trips, with disciplined routines before and after that focus on non-work activities. And I sleep pretty well on planes, so the 27-hour flight isn’t normally too bad.

But this time I sat next to screaming babies on both connections (to be clear, I would also scream if no one explained why my head felt like it was going to explode), so on a leadership team call last night, I apparently took an unintended nap (the picture above, lovingly captured by my team from the recording).

We joke about it now (and of course there is a Slack emoji) but at the time, my team freaked out; they were worried that something was seriously wrong. And that is the right reaction: work kills people, directly and indirectly, every single day.

Fortunately, I’m fine and, after a good night’s sleep, ready to get back to work worth doing alongside the fabulous folks at Oceans. But it is important both that we are honest about moments like these and that we don’t turn them into a culture of celebration.

This is one moment, in one meeting.

It isn’t a well-lit, in-focus shot of someone sleeping under their desk to show how passionate they are so a VC will deign to grace them with their next round of funding.

It isn’t the culture of Oceans, nor something we believe every employee should do, every day. It isn’t even something we think our C-suite should be doing.

Yes, sometimes work causes you to hustle a little more than usual because of its elastic nature; you can’t perfectly plan for every demand curve. But that’s a bug, not a feature.

How do we get rid of hustle porn? By not creating or engaging in it, of course, but that’s easier said than done; like porn, it exists because we consume it. So we need a better plan.

Maybe we need MakeWorkNotHustlePorn: acknowledging that sex is fun to watch, then encouraging the creation and distribution of sex that focuses on unsimulated reality, is an appealing blueprint for how we tackle this problem. LinkedIn can do a better job of flattening the demand curve by reducing the availability of sensationalist posts. Posters can focus on authenticity and steward an understanding of work that is grounded in reality. And most importantly, we can choose to engage with content that is authentic and not always attractive (hello, shiny bald head).

Because it really does matter. How we work in public influences how future generations will work in private, and if we don’t design that experience deliberately, we run the very real risk of damaging their relationship with one of life’s great joys. I believe in work worth doing; I don’t believe in dying for it.