Digital labour ethics: Who’s accountable for the AI workforce?

  • Digital labour is becoming more common in the workplace, but few widely accepted rules yet govern it.
  • AI is taking on routine tasks such as drafting proposals and handling inquiries.
  • Managing this technology’s implementation and governance is a key leadership challenge.

Digital labour is becoming more common in the workplace. In Japan, the Henn-na Hotel is staffed almost entirely by robots – from check-in to concierge – while in the UK, talent company Adecco Group has used agentic AI for tasks such as pre-screening candidates.

Meanwhile, in Dubai, AI-powered police robots patrol public areas and interact directly with citizens, while in the US, FarmWise has built autonomous machines that weed vast fields with no human labour. In Singapore, robot baristas are automating the more repetitive tasks of the coffee shop experience, freeing up staff for other customer-facing work.

While these may seem like extreme examples, it’s not hard to find artificial intelligence (AI) taking on more common roles and tasks previously done exclusively by humans in other workplaces – such as drafting proposals, reconciling invoices and handling support inquiries. GitHub Copilot, for example, can execute coding tasks to free up developers for higher-level problem-solving, while Harvey AI supports lawyers on issues such as contract analysis, due diligence, legal research and drafting.

For executives, the challenge is not proving that the technology works, but defining the rules under which it operates and the values it upholds.

AI agents are labour, not software

Agentic AI goes beyond traditional automation by gathering and analysing data, then making decisions and taking action autonomously. Deploying advanced AI agents to perform workflows once handled by humans, such as call centre representatives, is as much a leadership responsibility as a technical one.

Breakthroughs dominate headlines, but many companies are struggling behind the scenes. A recent MIT Sloan report found that 95% of generative AI pilots fail to produce meaningful returns. This is a leadership issue.

The hype cycle portrays AI as a revolutionary breakthrough, yet most organizations use it as a low-cost labour replacement. The real tension is between AI’s transformative potential and the tendency to treat it as an efficiency tool. Without a clear strategy for realizing that potential, companies chase short-term savings through pilots that rarely scale or last.

When you treat agentic AI as ordinary software, you risk chasing hype over purpose. But when you approach it as a new class of labour that is governed, trained and held accountable, it can help the enterprise learn and build lasting value.

Digital labour governance requires shaping and testing

Governing digital labour responsibly, whether in the physical world through robotics or virtually on digital platforms, requires new frameworks that account for agency, accountability and alignment across both human and machine actors.

Accordingly, CEOs need three capabilities: to see, shape and test. Every agentic action must be auditable, providing visibility into what data it used, how it reasoned, which policy guided it and what outcome it produced.

AI’s scope, access and behaviour must be defined and adjusted as conditions change. Accuracy, bias, speed and business impact must be continuously tested before and after changes.

Security principles such as zero trust that govern humans must also apply to digital labour. Role-based restrictions, least-privilege by default and strong identity requirements should extend to every AI system – with no part of the workforce, human or digital, allowed unfettered access to systems.

Service layers must be segmented, privileges strictly enforced and every action logged. If AI influences a decision that affects a customer, employee or citizen, the organization must be able to explain how and why. Autonomy without boundaries and structure is risk disguised as progress.
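As an illustrative sketch only – not any real product’s API – the principles above (deny-by-default role-based policy, least privilege and a full audit trail recording data used, reasoning, policy and outcome) might look like this. All names here (`POLICIES`, `execute`, the role and resource labels) are hypothetical:

```python
# Hypothetical sketch of least-privilege, auditable agent actions.
from datetime import datetime, timezone

# Role-based, least-privilege policy: each agent role lists only the
# resources it may touch; anything not listed is denied by default.
POLICIES = {
    "support-agent": {"crm.read", "tickets.write"},
    "invoice-agent": {"ledger.read", "ledger.write"},
}

AUDIT_LOG: list[dict] = []


def execute(agent_role: str, action: str, data_used: list[str], reason: str) -> bool:
    """Run an agent action only if policy allows it; log every attempt."""
    allowed = action in POLICIES.get(agent_role, set())
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "agent": agent_role,
        "action": action,
        "data_used": data_used,   # what data it used
        "reason": reason,         # how it reasoned
        "policy": agent_role,     # which policy guided it
        "outcome": "executed" if allowed else "denied",
    })
    return allowed


# A support agent may write tickets but has no route to the ledger,
# and the denied attempt is still logged for later explanation.
execute("support-agent", "tickets.write", ["ticket-142"], "customer asked for refund status")
execute("support-agent", "ledger.write", [], "attempted out-of-scope action")
```

The point of the sketch is that the audit record is written on every attempt, allowed or denied, so the organization can always explain how and why an AI-influenced decision was reached.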

Investing in relationships with digital workers

Like a human colleague, a digital worker requires an investment of time, money and relationship building. That investment begins with defining the problem the system is hired to solve, the decisions it can make independently and what must be escalated.

Onboarding should include credentials, process maps, policies and the business context that enables the system to reason within organizational language and standards.

Just as with humans, training is not a one-time event; it demands consistent input, feedback and coaching. Performance must be tracked for accuracy, bias, timeliness and impact. Supervisors must be able to tune constraints as they would in a performance review.

Finally, when a digital role is no longer needed, it should be retired with the same discipline applied to humans: access revoked, artefacts preserved and closure ensured.
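The lifecycle above – scoped hiring, performance review and disciplined retirement – can be sketched in code. This is a minimal illustration under assumed names (`DigitalWorker`, its fields and methods are hypothetical, not a real framework):

```python
# Hypothetical sketch of a digital-worker lifecycle: onboard with a
# defined remit and scoped credentials, review performance, then retire
# with access revoked and artefacts preserved.
from dataclasses import dataclass, field


@dataclass
class DigitalWorker:
    name: str
    problem: str                                          # what it is "hired" to solve
    can_decide: set[str] = field(default_factory=set)     # autonomous decisions
    must_escalate: set[str] = field(default_factory=set)  # always handed to humans
    credentials: set[str] = field(default_factory=set)    # least-privilege access
    metrics: dict = field(default_factory=dict)           # accuracy, bias, timeliness
    artefacts: list[str] = field(default_factory=list)
    active: bool = True

    def review(self, accuracy: float, bias: float, timeliness: float) -> None:
        """Track performance as you would in a human performance review."""
        self.metrics.update(accuracy=accuracy, bias=bias, timeliness=timeliness)

    def retire(self) -> list[str]:
        """Revoke all access, preserve artefacts and ensure closure."""
        self.credentials.clear()
        self.active = False
        return self.artefacts


worker = DigitalWorker(
    name="invoice-reconciler",
    problem="reconcile supplier invoices",
    can_decide={"match-invoice"},
    must_escalate={"write-off"},
    credentials={"ledger.read"},
    artefacts=["run-logs", "decision-records"],
)
worker.review(accuracy=0.97, bias=0.01, timeliness=0.95)
preserved = worker.retire()  # credentials cleared, artefacts kept
```

The design choice worth noting is that retirement returns the preserved artefacts rather than deleting them, mirroring the discipline of offboarding a human employee.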

The CEO’s essential role in enterprise transformation

Ethics in the era of digital labour cannot be focused on compliance alone. The true measure is whether these systems enhance human dignity and opportunity and make life better.

For CEOs, the role of leadership is to maximize enterprise transformation while ensuring the path is observable, governable and accountable. The winners will see agentic AI not as a low-cost replacement for labour, but as a catalyst for reinvention. When paired with strategy, it redeploys human capacity toward creativity, judgement and impact that machines alone cannot achieve.

Imagine digital workers across your enterprise that understand what you are trying to do and participate in your business reality. Instead of merely executing tasks, they model scenarios, anticipate outcomes and compress time to empower better decisions at every level. This creates exponential value: higher-quality decisions compounding across an organization.

CEOs are facing some of the biggest workforce challenges in history. The choices they make about AI have the potential to shape the very structure of work and the opportunities available to people. Indeed, they already are.

Treating AI as nothing more than a cost-cutting tool limits the real progress businesses can make toward positive, transformative change that creates opportunity for both people and the enterprise. The real test for leaders is whether they can think bigger: to use AI not only to redesign roles and anticipate outcomes, but to help every employee contribute at a higher level and build organizations capable of reaching their full potential.

Technology by itself cannot deliver lasting value, and only when paired with governance and cultural adaptation do short-term gains lead to sustainable advantage.

Digital labour needs to serve people as well as performance

AI adoption has surged. In 2024, 78% of organizations reported using AI, up from 55% just a year earlier. Analysts estimate that half of white-collar roles could be reshaped within the next few years, starting with entry-level work. Yet only a fraction of firms are redesigning operating models or putting enforceable guardrails in place.

The companies that thrive will be those that leverage AI to act decisively, reimagine their business and treat AI as part of the workforce, governed with the same discipline and accountability as their people.

CEOs who set clear standards, enforce accountability and design digital labour to amplify human potential will ensure that progress within their organization serves people as much as performance.