
Indian IT employers have long monitored their workforce, but the rise of generative AI has made surveillance more complex and controversial.
Monitoring now extends beyond keystrokes and emails to how employees use AI tools, including uploads, downloads, and application-specific activity, with outputs often undergoing human validation.
Companies frequently describe these practices in terms of “productivity” and “compliance,” leaving employees uncertain about the extent of oversight.
In the Indian IT sector, tier‑1 firms such as Infosys, TCS, Wipro, and HCLTech have implemented mature AI and GenAI governance frameworks covering hundreds of thousands of employees.
Infosys, which monitors remote work hours and GenAI tool usage and enforces data sensitivity policies for its workforce of over 500,000, holds ISO 27001, ISO 27701, ISO 22301, ISO 42001, and SOC 2 Type II certifications.
ISO 27001 certifies that an organisation has a robust information security management system; ISO 27701 extends this to privacy management and personal data protection.
ISO 22301 ensures business continuity during disruptions; ISO 42001 governs responsible and transparent AI system management; and SOC 2 Type II attests that a company’s data security, availability, and privacy controls are operating effectively over time.
Together, these certifications demonstrate that a company follows globally recognised standards for security, privacy, and ethical AI governance, assuring clients, regulators, and employees that data is handled responsibly, risks are minimised, and compliance is continuously maintained.
TCS has trained more than 570,000 employees in generative AI, though public details of its monitoring systems remain limited.
Wipro and HCLTech maintain ISO-aligned compliance, but specifics on internal surveillance are not publicly disclosed.
Tier‑2 companies, including LTIMindtree, Persistent Systems, and LTTS, are increasingly deploying AI tools and productivity monitoring, but detailed practices are largely undisclosed.
However, Madhu K, chief information security officer at Sonata Software, told AIM that the adoption of generative AI has been accompanied by enhanced monitoring aligned with ISO 27001, ISO 27701, and SOC 2 Type II frameworks.
“Advanced Data Loss Prevention tools monitor all AI-related activity, and outputs undergo human validation to ensure accuracy, fairness, and compliance,” he said.
At Sonata Software, he said, transparency and informed consent are central: employees are notified of monitoring, safeguards comply with India’s Digital Personal Data Protection Act, the European Union’s General Data Protection Regulation, and other privacy standards, and monitoring is limited to company systems while personal activity is respected.
Employee Anxiety
Globally, anxiety over data surveillance surged after Anthropic’s new policy allowed training on user chat data by default, a reminder of how even well-intentioned AI use can raise privacy concerns.
For Indian IT firms, the episode reinforced the need for stricter internal controls over employee-AI interactions.
However, workers from leading IT companies and startups said they feel uneasy under expanded scrutiny, often unclear about what data is collected or how it is used.
A mid-career developer at a major IT firm, speaking anonymously, noted that time on applications like VSCode or Chrome is closely tracked, while a Bengaluru startup employee said she avoids personal use of her office laptop out of privacy concerns.
Digital oversight now extends beyond traditional keystroke and network monitoring.
Tools like Sapience, ProHance, Hubstaff, WorkComposer, WorkForce Next, MaxelTracker, and We360.ai are used across the sector, with tier‑1 firms mostly relying on proprietary internal systems integrated with cybersecurity and AI governance frameworks, while tier‑2 and smaller companies often deploy commercial solutions.
AI enhances these systems by flagging suspicious activity, blocking sensitive uploads, and monitoring unapproved AI usage.
The Indian employee monitoring solutions market is estimated at $33.42 million in 2025, growing at a 12.5% CAGR.
Balancing Security and Privacy
Legal experts caution that AI monitoring can blur the line between protecting company data and intrusive surveillance.
Rohit Lalwani and Mridusha Guha from law firm AMLEGALS said, “Organisations are using a multifaceted strategy centred on technology, purpose, and policy, but AI surveillance is often continuous, automatic, and opaque, creating the impression of constant evaluation.”
Risks include erosion of psychological safety, burnout, and mistrust.
While the Digital Personal Data Protection Act, 2023, provides a baseline, issues like algorithmic transparency, bias auditing, and human oversight are not explicitly regulated, highlighting a gap in current laws.
Industry leaders acknowledge the need to balance security with privacy.
Neeti Sharma, CEO of TeamLease Digital, said firms are anonymising logs, restricting access, and limiting data retention.
M Chockalingam, director of technology at Nasscom AI, noted that generative AI has expanded monitoring, and the industry body is working with member firms to raise employee awareness.
Wipro, Infosys, TCS, Cognizant, LTTS, Coforge, Persistent, and Firstsource declined to comment on their surveillance policies.
The post Indian IT Firms Tighten Surveillance as GenAI Takes Hold appeared first on Analytics India Magazine.