From healthcare to education, Artificial Intelligence hype has swept every sector, with some calling it the greatest breakthrough since splitting the atom (Price, 2023) while others intensify their research on the existential threats AI may pose (Future of Life Institute, 2023). Whether or not AI is worth the hype, constant and steady progress is being made globally in its development and deployment (Asia News Network, 2025; Bhuiyan, 2025). This will eventually impact all sectors of governance and hence demands critical scrutiny. It is clear that the deployment of AI has also shown great potential for harm (Shroff, 2025), especially for women, gender minorities, and groups in other vulnerable situations such as platform workers, immigrants, and refugees (AI Incident Database, 2017; Oppenheim, 2018; Hersey, 2022). Amidst this, countries are stepping up efforts to craft governance strategies and policies that balance safeguarding the interests of their citizens with allowing businesses to thrive. This pushes researchers and policy analysts to think beyond the binaries.
India currently ranks fourth globally in the Stanford Human-Centered AI (HAI) Global AI Vibrancy Tool, with a score of 25.54 across a balanced range of indicators including responsibility, R&D, policy and governance, infrastructure, and public engagement. This ranking underscores several core strengths: India leads globally in AI conference citations, holds the third position in AI journal publications, and ranks second worldwide in AI-related GitHub projects, highlighting the vibrancy of its developer ecosystem (Jeevanandam, 2024). Public discourse around AI is equally robust: India stands second globally in both AI-related social media voice share and total AI posts, reflecting widespread societal engagement and awareness (ibid.). Despite these notable achievements, however, India still trails leading nations, especially the U.S. and China, in critical domains such as policy and governance, responsible AI, and technological infrastructure, which suppresses its overall vibrancy score.
The Finance Minister of India, in his budget speech for 2018–2019, mandated NITI Aayog to establish the National Program on AI. In pursuance of this mandate, NITI Aayog drafted a report on the "National Strategy for Artificial Intelligence". This report put forth a distinctive vision branded #AIforAll, focusing on how India can leverage transformative technologies to ensure social and inclusive growth in line with the development philosophy of the government (NITI Aayog, 2018).
This paper examines the conceptual and practical dimensions of building AI for All, with a focus on inclusivity, accessibility, and socio-technical equity. It uses a decolonial framework not simply to critique the social costs of technological advancement, but to foreground how colonial systems operated through extractive logics that depleted the economic, technological, epistemic, and social capital of colonized societies. The concern here is not a binary between technological progress and equity, but rather a deeper interrogation of how the legacies of colonial knowledge hierarchies and resource extraction shape contemporary AI systems and infrastructures, and how India can pursue technological advancement through inclusive design. In this light, building equitable AI in India must involve a commitment to data sovereignty, epistemic inclusion, and historical redress, rather than merely localizing global models.
Drawing on critical work on responsible and inclusive AI by scholars from the Global South, I advocate for a framework that is indigenous to India and ground it in feminist decoloniality. I propose two guiding principles that should steer inclusive AI governance frameworks in India. First, AI systems differ from existing artificial entities of governance (e.g., the banking system) in their capacity to increasingly make decisions without humans in the loop. Moreover, AI systems play an increasingly important role in the operation of all other artificial entities; in financial markets, for example, only a small sliver of all transactions is executed by humans. As a result, AI systems leave a footprint on all aspects of the governance ecosystem. This pushes all stakeholders to rethink how capable our existing systems of governance are of welcoming AI into the system (Bullock et al., 2023). Hence, the values we embed in AI systems become extremely important. Second, AI must not be treated as a governance black box (Kosinski, 2024), and a consistent, open, and rigorous research environment around AI must be enabled. These two principles are inseparable: neglecting either risks stymying the other.
