
Artificial Intelligence is undoubtedly one of the most significant technological advancements of the 21st century. The world stands at a crossroads: if we proceed with caution, uphold ethical standards, and regulate the technology responsibly, AI can usher in a new era of prosperity and well-being. However, if left unchecked, its potential for harm could outweigh its advantages.
The risks of AI and tech platforms are not limited to privacy breaches or misinformation. A glaring example of the industry's ethical failures came to light in unredacted filings from a lawsuit brought by several U.S. school districts against Meta (formerly Facebook) and other social media companies, which alleged that Meta had shut down internal research into the mental health effects of its platforms after discovering that its products were harming users, particularly teenagers.
The research reportedly found causal evidence that platforms like Facebook and Instagram were contributing to anxiety, depression, and other mental health issues among young people, especially in vulnerable populations. Rather than addressing these findings or making changes to improve user well-being, Meta allegedly halted the study midway and buried its results, likely because of the financial and reputational risks of acknowledging such damaging evidence. The decision to suppress the research raises serious ethical concerns about the company's prioritization of profits over the mental health of its users.

This is not just an instance of poor corporate responsibility; it reflects a larger problem regarding the impact of AI-driven technologies on mental health. Social media platforms leverage sophisticated AI algorithms to keep users engaged, often promoting content that can be psychologically damaging. Meta’s actions, whether deliberate or negligent, exemplify the dangers of technology companies prioritizing profit over the welfare of their users. This type of behavior underscores the need for stronger ethical frameworks and regulatory oversight in the tech industry to ensure that AI and social media technologies do not harm society, especially its most vulnerable members.
In his speech at the G20 Summit in Johannesburg, Prime Minister Narendra Modi made an insightful observation regarding the transformative potential of Artificial Intelligence (AI). While acknowledging the remarkable benefits AI brings to the global economy, health, education, and various other sectors, he also highlighted its potential for misuse. This dual-edged nature of AI – the opportunities it presents versus the risks it entails – is a matter not only of technological concern but also of ethical, social, and political importance.
Addressing the third session of the G20 Summit on “A Fair and Just Future for All – Critical Minerals; Decent Work; Artificial Intelligence,” he also called for a fundamental change in the way critical technologies are promoted. He noted that such technologies must be ‘human-centric’ rather than ‘finance-centric’, ‘global’ rather than ‘national’, and based on ‘open source’ rather than ‘exclusive models’. He added that this vision has been integrated into India’s technology ecosystem and has yielded significant benefits in areas where India is a world leader, be it space applications, AI, or digital payments.
Speaking on Artificial Intelligence, the Prime Minister outlined India’s approach, based on equitable access, population-level skilling, and responsible deployment. He noted that under the India-AI Mission, accessible high-performance computing capacity is being built to ensure that the benefits of AI reach everyone in the country. Underlining that AI must translate into global good, he called for a global compact based on the principles of transparency, human oversight, safety-by-design, and prevention of misuse. He emphasized that while AI should expand human capabilities, the ultimate decisions must be made by humans themselves. He stated that India will host the AI Impact Summit in February 2026 with the theme ‘Sarvajanam Hitaya, Sarvajanam Sukhaya’ [Welfare for all, Happiness for all], and invited all G20 countries to join the effort.
The Prime Minister emphasized that in the age of AI, there is a need to shift rapidly from ‘Jobs of Today’ to ‘Capabilities of Tomorrow’. Recalling the progress made on talent mobility at the New Delhi G20 Summit, he proposed that the group develop a Global Framework for Talent Mobility in the coming years.
Significantly, AI, like all revolutionary technologies, offers immense promise but comes with its own set of challenges that cannot be overlooked. As world leaders and innovators continue to race toward AI-driven progress, a balanced approach is crucial. In this article, we will explore Prime Minister Modi’s remarks in greater detail, reflecting on both the positive aspects of AI and the potential risks it poses, while proposing ways in which nations can harness its power responsibly.
Artificial Intelligence has rapidly evolved over the past few decades, from basic machine learning algorithms to sophisticated systems capable of performing complex tasks with unprecedented speed and accuracy. The integration of AI into sectors such as healthcare, education, finance, transportation, and manufacturing has already begun reshaping economies and improving lives.
One of the most promising applications of AI lies in the healthcare sector. AI systems are already being used to diagnose diseases with remarkable accuracy, in some narrow tasks matching or outperforming human doctors. For example, AI algorithms can analyze medical images such as X-rays and MRIs, detecting early signs of cancers, fractures, and neurological conditions that might otherwise go unnoticed. Machine learning models can also predict patient outcomes, enabling more personalized and preventive care.

In countries with limited access to healthcare professionals, AI-powered telemedicine platforms can bridge the gap by providing expert consultations remotely. Additionally, AI’s role in drug discovery and vaccine development – seen during the rapid creation of COVID-19 vaccines – demonstrates how it can expedite life-saving advancements.
AI’s role in education is also transformative. Personalized learning platforms powered by AI can cater to the unique learning styles and paces of individual students. These tools can analyze a student’s strengths and weaknesses, providing tailored content and assessments that ensure no student is left behind. For instance, platforms like Coursera and Duolingo already use AI to recommend courses and lessons based on a learner’s progress.
Moreover, AI can assist teachers by automating administrative tasks, grading assignments, and providing real-time feedback to students. This allows educators to focus more on teaching and less on routine tasks, improving the overall quality of education.
On a macroeconomic level, AI is already driving innovation across industries, from automated manufacturing processes to predictive analytics in supply chain management. In the financial sector, AI is revolutionizing trading algorithms, risk assessment, and fraud detection. In agriculture, AI-powered tools help farmers optimize irrigation, detect pests, and predict crop yields.
The integration of AI into industry leads to increased efficiency, reduced operational costs, and the creation of new business models. In developing economies, AI can be a powerful tool for leapfrogging traditional stages of economic growth, enabling nations to adopt advanced technologies that otherwise might take decades to implement.
While AI holds great promise, it is not without its dangers. Prime Minister Modi’s warning about the misuse of AI is a timely reminder that unchecked technological progress can have unintended consequences. The very characteristics that make AI powerful – its ability to learn, adapt, and automate – can also be turned against us.
One of the primary concerns around AI is its potential to infringe on individual privacy. With the advent of AI-driven surveillance systems, governments and corporations can monitor individuals’ movements, behaviors, and even emotions in real time. Social media platforms already use AI to track users’ interests and preferences, feeding them targeted ads and content. While this can improve user experience, it also raises concerns about data privacy and the lack of informed consent.
Moreover, AI-powered surveillance systems, if misused, can lead to violations of civil liberties. For example, the use of facial recognition technology by governments for surveillance purposes can be weaponized against citizens, leading to unwarranted arrests and the suppression of dissent. The social credit systems being implemented in some countries – where AI analyzes citizens’ behavior to determine their societal trustworthiness – could potentially erode personal freedoms.
Another major issue with AI is its impact on employment. While AI will certainly create new job opportunities, it will also lead to the displacement of millions of workers, especially in sectors that involve repetitive, manual tasks. Automation in industries like manufacturing, retail, and logistics could result in widespread job losses for low-skilled workers. This could exacerbate economic inequality, as the benefits of AI are disproportionately distributed to those who control the technology – namely, large corporations and tech companies.
As AI systems take over routine tasks, the demand for human labor may shift to more specialized roles. However, these roles will likely require advanced technical skills, and not all workers will have the resources or time to retrain for these new positions. This could further marginalize disadvantaged populations, leaving them without access to the opportunities AI creates.
AI’s ability to create deepfake videos and manipulate content is a serious concern in the age of social media. Deepfakes – hyper-realistic fake videos created using AI – can be used to spread misinformation, cause political instability, and damage reputations. The ability to manipulate video, audio, and even text makes it difficult to distinguish between truth and fiction.
In the political realm, AI-driven bots are already being used to manipulate public opinion by flooding social media with misleading or biased content. This manipulation can affect elections, public policy, and public trust in institutions. The spread of fake news has already had significant consequences in several democracies, and AI is only making it easier for bad actors to exploit the system.
Autonomous Weapons and Warfare
The potential use of AI in warfare raises serious ethical concerns. The development of autonomous weapons – drones and robots that can make independent decisions on when and how to attack – could revolutionize military strategies but also lead to catastrophic consequences. The question of accountability becomes crucial in such scenarios: if an AI system autonomously makes the decision to harm civilians or escalate conflict, who should be held responsible?
Autonomous weapons also lower the threshold for war, making conflicts potentially more frequent and devastating. The misuse of AI in military operations could result in unforeseen and uncontrollable escalation, making it imperative that international norms and regulations be established to govern AI in warfare.
Regulating AI for Responsible Innovation
Prime Minister Modi’s remarks on AI at the G20 summit underscore the urgent need for global cooperation in addressing the risks associated with the technology. While AI holds the potential to improve lives, it also requires careful governance. Countries must come together to create regulatory frameworks that ensure AI is used responsibly and ethically.
One of the most pressing challenges in AI regulation is the lack of a global consensus on ethical guidelines. Each country has its own regulatory framework, but these regulations often differ in scope and enforceability. For instance, the European Union’s General Data Protection Regulation (GDPR) has set high standards for data privacy, but many countries, especially in the Global South, lack similar regulations.
A unified global framework for AI ethics, focusing on transparency, accountability, and fairness, is essential. The OECD’s AI Principles and UNESCO’s Recommendations on AI Ethics are steps in the right direction, but they must be expanded and enforced globally.
To prevent AI from exacerbating inequality, governments and companies must prioritize fairness and inclusivity in AI design and deployment. AI systems should be developed in ways that promote equal access to opportunities and ensure that marginalized groups are not excluded. For example, AI models in hiring or credit scoring should be free from biases that disproportionately affect women, minorities, or other disadvantaged groups.
To mitigate the impact of AI-driven job displacement, countries must invest in re-skilling programs that help workers transition into the AI-driven economy. Public and private sectors should collaborate to provide education and training in areas such as data science, machine learning, and AI ethics. Governments could also consider creating social safety nets or universal basic income schemes to help workers who are displaced by automation.
The development and use of autonomous weapons systems must be strictly regulated. International treaties and agreements should be established to prohibit the use of AI in weaponry, or at the very least, regulate it in a manner that ensures human oversight and accountability. Global organizations like the United Nations must play an active role in shaping the discourse around AI and warfare.
Artificial Intelligence remains one of the most consequential technological advancements of the 21st century, and Prime Minister Modi’s remarks at the G20 Summit capture both its promise and its perils. The world stands at a crossroads: proceed with caution, uphold ethical standards, and regulate the technology responsibly, and AI can usher in a new era of prosperity and well-being; leave it unchecked, and its potential for harm could outweigh its advantages.
To harness the full potential of AI while mitigating its risks, collaboration between governments, industries, and civil societies is essential. By adopting thoughtful policies, creating global ethical frameworks, and investing in education and training, we can ensure that AI serves humanity in ways that are just, inclusive, and beneficial for all.












