Does Using an LLM During the Hiring Process Make You a Fraud as a Candidate?
by Christine Egan, January 2024



Employers, ditch the AI detection tools and ask one important question instead.

I saw a LinkedIn post from the director of a consulting firm describing how he assigned an essay on model drift in machine learning systems to screen potential candidates.

Then, using criteria he established from his own intuition (“you can smell it”), he ran the responses through four different “AI detectors” to “confirm” that the applicants had used ChatGPT to write their essays.

The criteria for “suspected” bot-generated essays were:

  • weird sentence structure
  • wonky analogies
  • repetition
  • switching from one English dialect to another (in separate writing samples from the same application)

One criterion was notably missing: accuracy.

The rationale is that candidates who use AI tools are trying to subvert the selection process. Needless to say, the comments are wild (and very LinkedIn-core).

I can appreciate that argument, even though I find his methodology less than rigorous. It seems he wanted to screen out candidates who would copy and paste a response straight from ChatGPT without scrutinizing it.

However, I think this post raises an interesting question that we as a society need to explore: is using an LLM to help you write a form of cheating during the hiring process?

I would say it is not. Here is the case for why using an LLM to help you write is just fine, and why it should not disqualify you as a candidate.

As a bonus for the director, I’ll include a better method for filtering candidates based on how they use LLMs and AI tools.
