Ethical Considerations of Utilizing AI Tools in the Employment Seeking Process
- Jon Peroutka
- May 24
- 6 min read
Updated: May 25
Disclaimer: Unless otherwise noted, all content in the following white paper is the product of Jon Peroutka himself. The thoughts and opinions expressed are his and his alone, and do not represent the thoughts, opinions, or policies of any employer, individual, or entity associated with Jon Peroutka.
Background & Context
By 2025, a range of Artificial Intelligence (AI) tools has emerged to assist job seekers in their pursuit of employment. This will be the first job search for me personally in which these tools are available for use. While they seem powerful and valuable, I have observed examples of these tools creating seemingly unethical situations, such as interviewees being fed a script to read out in real time to answer questions from an interviewer. Conversely, some AI tools seem perfectly reasonable for a candidate to utilize, such as tools which conduct mock interviews so the candidate can practice their thought organization and delivery prior to a real interview.
Making a broad-brush decision to restrict utilization of all AI tools because a portion can be considered unethical undercuts the effectiveness of the job seeker's pursuit in scenarios where ethical AI tools could be utilized, causing those individuals to fall behind peers who are using such tools to promote themselves effectively as candidates. The ethical conundrum at hand is which types of tools are reasonable to utilize in the job-seeking process, and which tools create an unethical situation and should be restricted from utilization. The conclusions in this white paper will drive my personal decision making for utilizing AI tools in my own job search.
First Principles
Thinking from first principles, the primary goal of a hiring manager when evaluating any candidate for a job opening is to form a reasonable belief about whether the candidate is qualified to perform effectively in the role. Qualification can consist of a combination of technical skills (the ability to produce valuable output as an individual and within a team) and interpersonal skills (the ability to create and leverage relationships in pursuit of the objectives of their role, their team, and the company at large). Through the lens of the candidate, the primary goal is the same: to form a reasonable belief about whether they are qualified to perform effectively in the role they are being evaluated for. These shared goals define the focus from which ethical considerations should be evaluated.
To focus the scope of this white paper, we will consider the ethics of tools positioned to help job seekers, rather than AI tools utilized by employers. We will also focus on AI tools which influence the presentation and interpretation of a candidate's qualifications, rather than other elements of the hiring process (compensation/benefits negotiation, etc.).
Ethics Framework
Using a utilitarian framework, the ethics of job seekers utilizing AI tools should be evaluated from the standpoint of the consequences these tools have for the goal(s) of the hiring manager and the company the candidate is pursuing for employment. Given that the primary goal of the hiring manager is to form a reasonable belief that the candidate is qualified or unqualified for the position, any tool which creates a false representation of the true qualifications of the candidate should be considered unethical to utilize.
Here, the definition of 'false representation' distills the essence of the ethical question. A legal definition of false representation is "an untrue or incorrect representation regarding a material fact that is made with knowledge or belief of its inaccuracy"**. In a hiring scenario in which the primary objective is to determine a candidate's qualifications for the job opening, a false representation can be defined as making untrue or incorrect statements regarding material facts that influence the hiring manager's interpretation of a candidate's qualifications. In the simplest sense, stating facts which are not true is unethical.
Part of the difficulty of further defining 'false representation' in this context is the scope that the phrase entails. Are representations limited to the words on a page or the words that are spoken, or do they also include who or what created those words? They must include the latter. In a technical aptitude interview, the interviewer presents problems for the candidate to answer. The goal of this exercise is for the interviewer to understand the candidate's own ability to solve the problem at hand. That ability drives the interviewer's interpretation of the candidate's technical qualifications for the role. If the candidate utilizes an external tool to solve the problem they are given in this scenario, they create a false representation that the solution is attributable solely to their own abilities. This is easily defined as 'plagiarism.'
The key element of plagiarism or cheating is the hiring manager's expectation that the words being said or written are the candidate's own. So, without being able to speak to each hiring manager before initiating the application process, we must reasonably estimate the hiring manager's and hiring company's expectations of which representations should be wholly the candidate's own and which can be a combination of their own work and external sources. If the company or hiring manager explicitly states expectations for utilizing the assistance of external resources such as AI, the only ethical answer is to utilize those resources according to their specifications.
Decision-Making Framework
Utilization of AI tools can be considered ethical if:
- Usage abides by the hiring company's and hiring manager's specifications for utilizing external sources in the specific steps and components of the application and interview process
- There is a reasonable expectation that the candidate utilizes the assistance of external sources in a particular component, and:
  - Content presented by the candidate is not represented as content which was wholly created by the candidate themselves
  - Facts presented by the candidate are accurate and represent truthful representations of reality

Utilization of any tool can be considered unethical if it doesn't meet all of the criteria above, as sketched in the example below.
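To make the framework concrete, the sketch below expresses the criteria as a simple boolean check. This is a minimal illustration only, assuming each criterion can be judged yes/no for a given use of a tool; the class, field, and function names are my own illustrative choices and are not drawn from any particular tool or standard.

```python
from dataclasses import dataclass


@dataclass
class ToolUsage:
    """One proposed use of an AI tool in a specific step of the hiring process."""
    abides_by_employer_specifications: bool  # usage follows the employer's stated rules for external sources
    assistance_reasonably_expected: bool     # external assistance is reasonably expected for this component
    content_disclosed_as_assisted: bool      # content is not presented as wholly the candidate's own creation
    facts_are_accurate: bool                 # every fact presented truthfully represents reality


def is_ethical(usage: ToolUsage) -> bool:
    """Apply the decision-making framework: every criterion must be satisfied."""
    return (
        usage.abides_by_employer_specifications
        and usage.assistance_reasonably_expected
        and usage.content_disclosed_as_assisted
        and usage.facts_are_accurate
    )


# Example: a real-time answer whisperer presents an LLM's words as the
# candidate's own, so it fails the framework regardless of other criteria.
whisperer = ToolUsage(
    abides_by_employer_specifications=False,
    assistance_reasonably_expected=False,
    content_disclosed_as_assisted=False,
    facts_are_accurate=False,
)
print(is_ethical(whisperer))  # False
```

Note that failing any single criterion, such as the employer explicitly prohibiting external assistance, is enough to classify the usage as unethical, which mirrors the right-most column of the table in the next section.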
Ethical Assessment of the Landscape of AI Tools for Employment Pursuit*
| Category (primary purpose) | What it does | Misrepresents external work as the candidate's own skills | Creates false facts about the candidate | Ethical assessment, if permitted by hiring manager and hiring company | Ethical assessment, if not permitted by hiring manager and hiring company |
| --- | --- | --- | --- | --- | --- |
| AI Job-Discovery & Matching Platforms | Parse millions of postings, infer your skills/intent, then surface personalised roles (often via chat-style queries). | No | No | Ethical | Unethical |
| AI Résumé / CV / Cover-Letter Optimisers | Generate or rewrite application materials, keyword-tune for ATS filters, flag gaps, suggest phrasing. | No | Possibly | Ethical if content generated does not misrepresent facts about the candidate | Unethical |
| AI Interview Preparation & Real-Time Coaching | Produce likely questions, record mock answers, analyse speech, body language, or code, and give instant feedback. | No | No | Ethical | Unethical |
| AI Skills-Assessment & Gap-Closing Advisors | Benchmark your “skill DNA” against market demand, recommend micro-courses or certifications. | No | No | Ethical | Unethical |
| AI Application-Tracking & Process Automation | Auto-fill forms, de-duplicate postings, remind you of deadlines, log follow-ups, visualise funnel metrics. | No | Possibly | Ethical if content generated does not misrepresent facts about the candidate | Unethical |
| AI Networking & Personal-Brand Assistants | Draft LinkedIn posts, profile sections, and first-contact messages; identify warm connections. | Possibly | Possibly | Ethical if content posted is represented as being assisted by AI tools and does not misrepresent facts about the candidate | Unethical |
| Deep-Fake Identity Masks | Real-time video overlays that replace or alter your face during a live interview. | No | Yes | Unethical | Unethical |
| AI Voice-Cloning & Modulation | Generates (or subtly alters) your voice so you sound like someone else or to remove an accent. | No | Yes | Unethical | Unethical |
| Real-Time Answer Whisperers / Teleprompters | Listen to the interviewer's question, query an LLM, and feed you text or audio responses a split second later. | Yes | Yes | Unethical | Unethical |
| Code-Test Auto-Solvers & Plagiarism Bots | Pipe the assessment prompt straight into an LLM, paste back the generated code, sometimes auto-submitting. | Yes | Yes | Unethical | Unethical |
| Résumé / Credential Fabricators | Generate entire career histories or inflate metrics well beyond reality. | No | Yes | Unethical | Unethical |
| Psychometric-Test Manipulators | Guarantee a top-percentile score on behaviour or honesty assessments. | Yes | Yes | Unethical | Unethical |
*Note: Categories and "What it does" details sourced from ChatGPT. Ethical assessments made by Jon Peroutka using the decision-making framework above.