Technology in recruitment is not new. Platforms like Naukri, Indeed, and Shine made hiring more manageable for both recruiters and candidates, especially as the workforce globalised. But when those tools were first adopted, AI was not part of HR processes, and they were not automated enough to handle everything from shortlisting to onboarding. Then came Agentic AI, and it changed the game.
As the name suggests, Agentic AI powers tools that act as agents or chatbots, automating tasks such as preliminary interview rounds, resume shortlisting, and candidate matching. In hiring technology terms, it goes a step beyond the basic tools recruiters have used for years.
Commenting on this, Lulu Khandeshi, Chief Human Resources Officer, ManpowerGroup, said, “Organisations exploring Agentic AI in the hiring and HR spaces are moving well beyond simple automation. These systems are being designed to autonomously plan and optimise key talent acquisition and people operations desks.”
Agentic AI-powered tools make basic decisions by analysing past hiring and rejection patterns. Interestingly, studies indicate a rising number of use cases for Agentic AI in hiring in India.
LinkedIn found that 71% of recruiters in India are now using AI to uncover candidates with skills they may have otherwise missed, and around 80% say AI already speeds up hiring, signalling a sharp rise in adoption of AI-powered hiring tools as organisations seek efficiency and competitive advantage.
But beyond noting this rising dependence, questions about accountability need to be addressed.
Relevance and Accountability
The LinkedIn study further revealed that the top three priorities for Agentic AI users are sourcing high-quality candidates with transferable skills (57%), adopting smarter hiring tech (52%), and proving the return on investment (ROI) of hiring spend to C-suite leaders (46%).
This shows that the relevance of Agentic AI in hiring is no longer in question. It has penetrated deep into the system, so much so that it now features in recruitment budgets and workflows, signalling that it is here to stay. But as adoption deepens, the conversation must move from capability to consequence and accountability.
Questions such as who audits the patterns it learns from, who questions the criteria it optimises for, and who ultimately stands behind its decisions are top priorities. The more indispensable the technology becomes, the more deliberate leadership oversight must be, ensuring that efficiency does not quietly replace ethical responsibility.
It is pertinent to note that without direction and transparent governance, tech can amplify biases and erode trust. Half of recruiters surveyed in India by LinkedIn now feel pressure to explain how AI is used in screening and shortlisting, pointing to emerging demands for transparency and responsible use.
On this, Anjan Pathak, CTO and Co-founder of Vantage Circle, said that the impact of technology depends on how it’s applied. For example, when used in hiring or performance evaluations, AI can assess every candidate or employee based on a set of consistent, predefined criteria, which can help reduce human bias.
“However, it’s crucial to remember that AI itself isn’t immune to bias. If the data used to train AI models is skewed, such as historical hiring patterns or performance reviews, it can unintentionally reinforce those biases. To prevent this, it’s essential to ensure fairness throughout all employee-related processes and maintain a strong focus on transparency and ethical AI practices,” he added.
According to him, if past hiring decisions were biased, whether based on gender, caste, ethnicity, university pedigree, or career breaks, the AI system may blindly replicate those biases, potentially damaging diversity and fairness in recruitment.
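To make that mechanism concrete, the minimal Python sketch below (with entirely invented data and field names, not any vendor's actual system) shows how a screening rule built purely from past outcomes can penalise a career break even when skills are identical.

```python
# A minimal, hypothetical sketch of how a screening model trained on skewed
# historical decisions can replicate bias. All data and field names here are
# invented for illustration; no real hiring system or vendor API is implied.

# Historical decisions: candidates with a career break were mostly rejected,
# even when their skill match was identical to accepted candidates.
history = [
    {"skill_match": 0.9, "career_break": 0, "hired": 1},
    {"skill_match": 0.9, "career_break": 1, "hired": 0},
    {"skill_match": 0.7, "career_break": 0, "hired": 1},
    {"skill_match": 0.7, "career_break": 1, "hired": 0},
    {"skill_match": 0.5, "career_break": 0, "hired": 0},
    {"skill_match": 0.5, "career_break": 1, "hired": 0},
]

def hire_rate(records, **conditions):
    """Share of past candidates matching `conditions` who were hired."""
    matched = [r for r in records
               if all(r[k] == v for k, v in conditions.items())]
    return sum(r["hired"] for r in matched) / len(matched) if matched else 0.0

def score(candidate):
    """A naive 'model' that scores new candidates purely from past outcomes."""
    return hire_rate(history, career_break=candidate["career_break"])

new_candidates = [
    {"name": "A", "skill_match": 0.9, "career_break": 0},
    {"name": "B", "skill_match": 0.9, "career_break": 1},  # equally skilled
]

for c in new_candidates:
    print(c["name"], round(score(c), 2))
# Output: A scores 0.67, B scores 0.0: the career break alone sinks B,
# because the past pattern, not the skill signal, drives the decision.
```

The point of the toy example is simply that nothing in the data tells the rule that the historical pattern was unfair; it optimises for resemblance to past hires, which is exactly how bias gets scaled.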
A critical thing to consider is that when technology sits in a decision-making position, it slowly restructures a company's hiring practices and policies. In such a case, it will only scale flawed decisions instead of removing them from the system. Accountability, therefore, is not confined to accuracy alone.
It also covers regularly monitoring inputs, auditing outcomes, and remaining visibly responsible for the final call. Processes can be automated, but outcomes must be monitored to avoid scaling flaws and bias.
To this, Lulu Khandeshi added, “The AI doesn’t understand fairness or context. It just detects statistical association. A model trained heavily on historical success profiles may undervalue emerging skills, overvalue traditional credentials, and misclassify unconventional but high-potential candidates, so overall, in fast-moving industries or domains, this interval can be costly. Human hiring managers also rely on past patterns.”
Key Challenges
With algorithms taking over, a key challenge is that candidates experience rejection without context, employees see patterns without explanations, and leaders lose the habit of standing behind people decisions.
This matters because hiring is a human decision that determines access to opportunity, influence, and long-term growth within an organisation. When outcomes have no visible decision-maker, leaders stop questioning why certain profiles are favoured or excluded.
Yet another crucial consideration is the risk of a data breach. Hiring requires scanning large volumes of personal data, such as behavioural signals, video interviews, and psychometric indicators, and processing it on a platform with no human oversight could threaten candidates' identity security. For example, a candidate's PAN, Aadhaar details, financial statements, and even facial features are all at risk of exposure.
Further, Anjan Pathak said that Agentic AI, when applied to hiring practices across organisations, refers to AI systems that autonomously make decisions or offer suggestions, sometimes replacing or augmenting human judgment. While this can bring efficiency, it can also pose risks for candidates. For example, a CV may not be “AI-qualified” based on its format or specific keywords, even though the candidate may have the relevant experience and skills needed for the role.
“This is why I don’t recommend relying solely on AI for hiring. It’s essential to stay aligned with technological advancements, but I always tell my team not to let it overshadow human judgment. Technology should enhance our decisions, not replace them. By over-relying on AI, we risk missing out on candidates who bring the right emotional intelligence or cultural fit, elements that AI may struggle to assess. Ignoring these aspects could cost us the opportunity to hire great talent,” he added.
Coping With The Challenges
Looking at these challenges, the least organisations can do is adopt a balanced approach that pairs technology with human oversight. No matter how intelligent the tech gets, analytical and critical thinking remains a skill that humans hone with experience and time.
Defining human checkpoints in the hiring journey ensures that judgement, context, and ethical consideration are not eroded. Leaders must explicitly own the outcome, not just approve the output.
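In practice, a human checkpoint can be as simple as a routing rule: outside a high-confidence band, the system hands the call to a named person instead of rejecting automatically. The sketch below is a hypothetical illustration; the thresholds, field names, and the idea of an "accountable owner" field are assumptions for this example, not a description of any real product.

```python
# A hypothetical sketch of a human checkpoint in an AI-assisted screening flow.
# Thresholds and field names are illustrative only.

AUTO_ADVANCE = 0.80   # high-confidence matches move forward automatically
HUMAN_REVIEW = 0.40   # scores in the middle band go to a named human reviewer

def route(candidate, ai_score, reviewer):
    """Decide the next step and record who owns the decision."""
    if ai_score >= AUTO_ADVANCE:
        decision = "advance"
    elif ai_score >= HUMAN_REVIEW:
        decision = "human_review"
    else:
        # Even rejections carry a named owner rather than being fully automatic.
        decision = "reject_pending_human_signoff"
    return {
        "candidate": candidate,
        "ai_score": ai_score,
        "decision": decision,
        "accountable_owner": reviewer,  # a person, not "the model"
    }

print(route("candidate_123", 0.55, reviewer="hiring.manager@example.com"))
# -> routed to human_review, with the accountable owner recorded alongside
```

The design choice worth noting is that every outcome, including rejection, is attached to a person who can explain it, which is what "owning the outcome" means operationally.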
To manage the threat of data breaches, it is important to form an audit committee; this applies equally to organisations that outsource preliminary screening to hiring consultancies. Compliance with India's Digital Personal Data Protection (DPDP) Act, 2023, along with regular vendor security audits, encryption standards, access controls, and clear data deletion timelines, should become regular practice.
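On the data-protection side, one concrete habit consistent with DPDP-style data minimisation is redacting identifiers before candidate text ever reaches an external AI tool. The short Python sketch below is purely illustrative: the regex patterns are rough approximations of PAN and Aadhaar formats and would not be adequate for production-grade PII detection.

```python
import re

# A minimal, hypothetical sketch of data minimisation before candidate text is
# passed to an external AI screening tool. Patterns are rough illustrations of
# PAN (5 letters, 4 digits, 1 letter) and Aadhaar (12 digits) formats only.

PAN_PATTERN = re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b")
AADHAAR_PATTERN = re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b")

def redact(text: str) -> str:
    """Mask identifiers so the downstream tool sees skills, not identity data."""
    text = PAN_PATTERN.sub("[PAN REDACTED]", text)
    text = AADHAAR_PATTERN.sub("[AADHAAR REDACTED]", text)
    return text

resume_excerpt = "PAN: ABCDE1234F, Aadhaar: 1234 5678 9012, 6 years in data engineering."
print(redact(resume_excerpt))
# -> "PAN: [PAN REDACTED], Aadhaar: [AADHAAR REDACTED], 6 years in data engineering."
```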
Agentic AI in the Future of Work
“AI’s potential to streamline processes, improve efficiency, and make data-driven decisions is so powerful that its applications will likely expand into every industry, institution, and system we interact with,” said Anjan Pathak.
It goes without saying that Agentic AI is inevitable in hiring and other HR practices. However, what remains uncertain is whether leadership is simultaneously analysing the risks associated with it.
Hiring trends are always shaped by leadership, no matter how much tech evolves and enters the system. Some decisions are meant to be taken by humans. In the years ahead, the organisations that dare to draw clear lines between automation and responsibility will be the ones that succeed at efficient hiring, because when machines can hire at scale, the real differentiator will be leaders' willingness to own the people decisions that shape culture, capability, and trust.
Notably, in future, the real competitive advantage will lie with organisations that use Agentic AI boldly yet responsibly, while keeping checks on data security, bias, transparency, and human ownership.
