AI Agents Aren’t Ready To Function As Autonomous Employees: Emmanuel David

Though organisations speak about values and purpose, ethics and empathy are rarely listed as key criteria for AI roles, said Emmanuel David.
Sudeshna
Monday March 02, 2026
7 min Read


The rapid rise of AI is fundamentally reshaping how work is defined, how skills are developed, and how organisations create value. As automation and AI systems increasingly take over routine and analytical tasks, the nature of human contribution is shifting from execution to judgment, creativity, and ethical decision-making. 

This transformation is forcing businesses, educational institutions, and employees to rethink traditional models of learning, hiring, and career growth. The focus is gradually moving from degree-based qualifications to continuous learning, adaptability, and demonstrable skills.

At the same time, the AI-driven workplace raises critical questions about workforce displacement, organisational responsibility, and the social impact of technological progress. In this conversation with Emmanuel David, TPB explores how industry and academia are responding to this shift.

Emmanuel David is the Managing Director of GridSynergies Pvt Ltd. He is a seasoned corporate leader with experience and insights in diverse sectors, including Executive Education, Hospitality, Financial Services, and more. 

Formerly the Director of the Tata Management Training Centre, he led leadership development for the Tata Group, where participants included Tata leaders and civil servants of the Government of India.

He has served as an Independent Director and NRC Chairman of Aster DM Healthcare and Brookfields Private Equity. He mentors at Northeastern University’s MS programme in Strategic Technology Leadership and is involved in numerous educational and community service initiatives.

He is a credentialed coach with the ICF and holds certifications in key personality assessment tools.

Here’s the full conversation:

  • Some companies, such as LTIMindtree and Amazon, have started pushing AI agents into the workforce. How do you look at this change? 

When we celebrate “nonlinear growth,” we should ask a simple question: what is actually growing, and what is being lost?

Take the recent example of LTIMindtree, which added $64 million in revenue while cutting 1,900 jobs. On the balance sheet, this looks like progress. But from a social and ethical perspective, it can be misleading. The financial gain may not always come from creating more value or expanding markets. Sometimes it comes from reducing the workforce.

An educated but unemployed workforce is not just a statistic; it becomes a social risk. When organisations overlook the well-being of people, the consequences eventually show up as inequality, frustration, and visible hardship in society.

The core idea is simple: progress cannot be measured only by profits or productivity. Organisations must recognise that economic “doing” is deeply connected to social “being,” and ignoring that balance can create long-term societal problems.

  • Do you think that AI agents are in a position to operate autonomously as employees? Why? 

AI agents are not yet ready to function as fully autonomous employees. At present, AI excels at improving efficiency, supporting operations, controlling costs, and helping achieve clearly defined goals.

True autonomy, however, requires moving to strategic development. That level depends on the human side of work: understanding change, making judgment calls, and aligning systems with long-term strategic intent.

AI lacks a conscience, what the theologian Matthew Henry described as “God’s Deputy.” Because of this, it cannot take moral or social responsibility for an organisation’s long-term sustainability beyond a one-to-two-year horizon. Real autonomy also requires “Flow,” meaning a balance between the complexity of a role and the individual’s capabilities, something AI has not yet achieved.

  • How do you see roles evolving in an AI-first workplace?

Today, companies are building AI systems that affect billions of people, yet they are not training their workforce to manage the human impact of these technologies. A review of job descriptions from major companies in 2018 and again in early 2026 shows a worrying pattern. 

Even though organisations speak about values and purpose, ethics and empathy are rarely listed as real job requirements in AI roles. Instead, technical STEM skills (science, technology, engineering, and mathematics) still make up nearly 80% of hiring criteria, while human skills remain only aspirational.

If an AI-first workplace were truly successful, trust should be increasing. Instead, we see a fractured social contract. To address this, the idea is to move beyond a purely technical STEM model toward a broader framework called ESTEEM — ethics, science, technology, engineering, empathy, and management.

AI can act as a practical ally for routine work, but it still lacks the opinion, judgment, and ethics needed for complex, open-ended decisions. By making ethics and empathy core professional skills, organisations can ensure technology strengthens human potential rather than replacing it, helping rebuild trust and social responsibility in the workplace.

  • What are the key considerations for adopting AI agents as employees?

In my view, adopting AI agents as employees makes sense in the following contexts:

  1. Scale: the organisation’s requirements surpass human capability.
  2. Safety: operations that pose risks to human life or the environment, e.g., manual scavenging or painting.
  3. Complexity: inherently demanding work, such as air traffic control or surveillance.
  4. Quality: tasks requiring high accuracy and precision, e.g., data computing, financial services, aeronautics, and research.

This creates an optimal partnership, in which humans make the decisions that depend on nuanced insight and exceptions.

  • How do we keep work meaningful when machines do most tasks? 

At first glance, this may seem like a dilemma, but with deeper thought, its meaning becomes clear. Human beings naturally seek purpose, which may be something meaningful to pursue, care about, or believe in. This search for meaning drives both personal excellence and the progress of civilisation.

Meaning creates lasting memories, while routine often leads to monotony.

As AI takes over repetitive and routine tasks, humans gain the freedom to focus on work that truly matters, which is the impact we have on one another. AI may handle the unseen mechanics of daily work, but the human role remains rooted in connection, empathy, and shared experience. As Maya Angelou beautifully expressed, “People will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

  • Are Indian business schools prepared for AI-led corporate realities?

As AI reshapes education, student reactions are transforming the learning environment itself. In this new dynamic, the line between teacher and student is fading. Success no longer depends on how much knowledge someone stores, but on how quickly they learn and contribute. The old academic hierarchy is weakening, with experts and learners searching for answers together.

Students are increasingly bypassing traditional authority, building their own AI expertise and valuing demonstrated ability over credentials. Degrees are giving way to “presence” — proof of skill through living portfolios and real problem-solving. Anxiety about AI is turning into a race for relevance, where influence belongs to those who can guide and manage AI systems responsibly. At the same time, students are seeking uniquely human strengths such as ethical judgment and craftsmanship that AI cannot replicate.

Universities are beginning to adapt. At the University of Queensland, students are moving beyond passive lectures and collaborating to shape ethical AI, focusing less on information overload and more on thoughtful decision-making.

  • Who owns the reskilling responsibility: employer or employee? 

When it comes to skilling, organisations usually have three choices: Borrow, Buy, or Build.

The Borrow model is the most common today. When companies face a new market challenge, product expansion, or risks such as customer complaints or compliance issues, they bring in consultants or external experts. 

The Buy option is used during strategic shifts or expansion into new markets or geographies. Hiring external talent can provide a quick advantage, but it may unintentionally signal to existing employees that their skills are not sufficient.

The most sustainable approach is to build. Since employees create most organisational value, reskilling them becomes a shared responsibility. Building talent requires foresight about future skills and sends a strong message that employees matter.

In conclusion, reskilling is a joint responsibility. Leadership and organisations must provide direction and opportunities, while employees must actively develop new skills to remain employable, ideally by learning ahead of change, rather than reacting after downsizing occurs.
