The Aakhya Weekly #174 | Balancing Innovation and Rights in India’s Digital Future
In Focus: Toward Responsible Digital Governance
India is at a critical juncture in its digital evolution. As the nation surges ahead with ambitious technological advancements, the challenge lies not in the speed of progress but in ensuring that this rapid innovation does not come at the expense of the rights and freedoms of citizens. In the race to secure a leadership position in the global digital economy, India must grapple with a dilemma: How can it embrace innovation while ensuring that people’s rights are protected? The recent developments surrounding the Sanchar Saathi app mandate and the India AI Governance Guidelines 2025 highlight this very dilemma. While both aim to boost security and position India as a leader in technology, they’ve raised important questions about privacy, consent, and the role of government in our digital lives.
The balancing act between fostering innovation and safeguarding fundamental rights is, therefore, a delicate one. The government’s decision to withdraw the mandatory pre-installation of the Sanchar Saathi app, and the scrutiny that preceded it, are indicative of the diversity of opinions on the subject. These examples remind us that progress need not come at the cost of trust. At times, however, the urgency of technological progress risks letting expedient policy decisions overshadow the careful, rights-based approach that should underpin them. This is especially evident in the debates about AI governance and state surveillance tools, where the temptation to maintain a high pace of innovation could take precedence over the need for robust safeguards.
Sanchar Saathi: From Mandate to Voluntary Use
The government’s original order requiring all new smartphones to come with Sanchar Saathi pre-installed sparked immediate concerns over privacy, consent, and state intrusion. Critics argued that system‑level access for a state‑owned app risked normalising deeper state access to device data, not just for lost/stolen‑phone tracking, but with potential for future expansion. This concern was grounded in broader academic literature, which warns that mobile apps with extensive data access, if deployed without strict safeguards, can compromise core privacy rights. However, following widespread pushback, including from users, industry stakeholders, and civil society advocates, the mandate was reversed, and the app’s installation was made optional. On the bright side, this demonstrates a willingness to respond to public concerns. It also suggests that policy can be adapted to accommodate concerns and build a public discourse on critical decisions around the role of emerging technologies and AI in governance.
Comparable International Frameworks:
In the European Union, the General Data Protection Regulation (GDPR) imposes strong baseline obligations on data‑collecting apps: consent, data minimisation, transparency, and purpose limitation.
For AI and automated systems, regulators such as Singapore’s Infocomm Media Development Authority (IMDA) adopt advisory frameworks, like the Model AI Governance Framework, that emphasise transparency, human‑centric design, and accountability.
These models illustrate how policy objectives such as security and innovation can be preserved while ensuring user autonomy, transparency, and legal safeguards. Moreover, such frameworks create precedents that stakeholders can adapt to craft forward-looking governance frameworks suited to Indian socio-economic, political, and cultural sensitivities.
AI Governance in India: Promise with Gaps
The India AI Governance Guidelines 2025 constitute a step toward institutionalising AI oversight. The guidelines envisage risk assessment, red‑teaming, and transparency mechanisms, and contemplate setting up an AI‑safety institution. Yet, as several legal scholars note, a heavy reliance on voluntary compliance without legally enforceable standards may leave fundamental rights vulnerable. The risk is especially acute in high‑stakes domains like law enforcement, welfare, finance or public services, where automated decisions impact livelihoods, justice, and social protections.
International comparison reinforces this concern. For instance:
In the EU, the Artificial Intelligence Act, in force since 2024 with obligations phasing in, imposes binding requirements, especially for “high‑risk AI systems”, covering transparency, human oversight, accountability, and the right to contest automated decisions.
In the Asia‑Pacific region, while approaches vary, many countries (e.g. South Korea, China) emphasise regulatory oversight, security reviews and data protection laws for AI deployment.
The challenge for India lies in anticipating the intricate second- and third-order effects of AI deployment, which in turn requires building institutional capacity. India’s success will also depend on its agility in pre-empting patterns of misuse and tendencies to override ethical AI practices. This remains crucial to ensuring timely justice, swifter response procedures, and a high-trust social fabric around the role of AI in our society.
Building a Balanced, Rights‑Centric Digital Policy
To reconcile technological ambition with civil liberties and public trust, India could pursue measures to:
Institutionalise Transparency & Oversight for State Apps:
Require a public “data‑processing impact assessment” before deployment of any state‑mandated or state‑affiliated app (akin to Data Protection Impact Assessments under the GDPR)
Publish clear documentation about what data the app accesses, why, and how users can control or withdraw consent; these capabilities must not be restricted or hidden.
Adopt a Risk‑Based Legal Framework for AI Systems:
Move beyond voluntary guidelines: establish binding obligations for high‑risk AI applications (e.g. welfare allocation, policing, identity verification), including transparency, human oversight, and the right to appeal decisions.
Ensure compliance with broader data‑protection law (e.g. Digital Personal Data Protection Act, 2023, DPDP Act) when AI systems collect or process personal data.
Introduce Periodic Audits, Public Grievance/Redress Mechanisms, and Accountability Standards:
Institute independent audits (internal or third‑party) for government‑run apps or AI deployments, particularly where data rights and security intersect.
Facilitate channels for citizens to challenge automated decisions, demand explanations, or request data deletion: rights comparable to the data‑subject access and erasure provisions under the GDPR.
Pilot Before Mandate, Use Voluntary Trials with Clear Evaluation Metrics:
For new digital tools (apps, AI systems), begin with pilot phases that assess real-world risks, user uptake, and privacy impact before any mandatory rollout.
Use feedback loops: involve civil society, privacy experts, and citizen focus groups to shape policy, build trust, and avoid future pushback.
Toward a Digital India That Achieves the Balance
The reversal of the Sanchar Saathi pre-installation mandate is an encouraging sign, as it demonstrates policy responsiveness and openness to course correction. Safeguards of this kind are not bureaucratic impediments to progress; rather, they are enablers of sustained innovation. Systems that are reliable, explainable, and contestable attract greater public trust, are more likely to be adopted, and stand a better chance of scaling effectively. Implementation, however, will require sustained effort in continuous capacity building, given the speed of technological evolution: regulators and public agencies need the tools and technical literacy to commission and assess audits, conduct impact assessment exercises, and interpret algorithmic attributes and metrics.
This implies the need for investments in an empowered regulator or a network of regulators with defined mandates for overseeing AI and digital services, while ensuring nimbleness in amending rules whenever necessary. Such an outlook will arm India sufficiently to deliver on its digital ambitions, while ensuring its leadership in contributing to international norms. Moreover, technologies developed under stronger safeguards are more exportable, interoperable and resilient to reputational and legal risks. With a rights-first approach, India could enjoy a strategic advantage, aligning with its efforts to provide inclusive and accessible AI and digital technologies to its people.
Top Stories of the Week
India Turns Messaging Into a SIM-Exclusive Club
The Department of Telecommunications (DoT) has issued a directive requiring messaging apps such as WhatsApp, Telegram, and Signal to implement mandatory “SIM binding” by February 2026. This means these applications will cease to function if the SIM card used for registration is removed, replaced, or deactivated, moving beyond the current one-time verification process. The new rules also mandate that web versions of these apps, such as WhatsApp Web, automatically log out users every six hours, requiring re-authentication via QR code.
The government states that these measures are aimed at curbing cyber fraud and impersonation scams, particularly those originating from outside India, which exploit the current loophole where apps remain functional without an active SIM. However, concerns have been raised regarding potential inconveniences for legitimate users, including travellers, those with multiple devices, and professionals who rely on web-based versions of these applications. The Internet and Mobile Association of India has called the amended rules a “clear overreach”.
Health Security Cess Bill Targets Public Health & National Security Funding
The Ministry of Finance introduced the Health Security & National Security Cess Bill, 2025, in the Lok Sabha on December 1, which proposes a new cess on the production of goods such as pan masala and other products notified by the government. The cess will be levied monthly on manufacturers based on machine capacity, ranging from ₹1.01 crore to ₹25.47 crore per machine depending on production speed and pouch weight, or a fixed ₹11 lakh per month for fully manual operations. The revenue generated will be earmarked for strengthening public health systems and national security, with scope for the government to double the cess rates if required in the public interest.
To ensure compliance, the Bill empowers senior officers to conduct audits, recover unpaid dues with interest, and initiate penalties for violations such as non-payment, undeclared machines, or tampering with seized goods. A three-tier appeals mechanism has also been established for aggrieved parties.
A Few Good Reads
Montek Singh Ahluwalia and Utkarsh Patel propose seven key steps to shape India’s energy transition and the next decade of climate action.
R. Srinivasan highlights that India’s EV and other subsidy programmes require well-defined goals and structured oversight to prevent ad hoc decisions and policy gaps.
Vipin Sondhi and Anurag Srivastava argue that India can lead next-generation manufacturing by using the Fibonacci Innovation Acceleration (FIA) model, which connects research, industry, and policy to speed ideas from concept to impact.
Pranab Dhal Samanta writes that Russia should shed the Soviet lens on India and embrace its growth story.
Kenneth Rogoff warns that rising anti-immigrant backlashes in developed countries risk stifling economic growth by limiting the skilled labour needed for critical sectors.


