The recent Paris AI Action Summit has highlighted a stark divide among nations regarding the future of artificial intelligence (AI). While countries like France, India and China advocate a balanced approach that combines accessibility with ethical regulation, others such as the US and the UK remain wary of imposing controls, arguing that over-regulation could hinder innovation. This disagreement points to a larger question: how can AI be developed in a way that serves humanity while minimising risks?
AI has the potential to revolutionise industries and economies, contributing an estimated $13 trillion to the global GDP by 2030.
However, its risks are equally significant. Research indicates that 78 per cent of the AI systems studied exhibit biases, while deepfakes and misinformation are proliferating at an alarming rate. On the employment front, as many as 375 million workers worldwide may need to switch occupational categories due to AI-driven automation.
These statistics underscore the urgent need to embed ethics, inclusivity and cultural values into AI development. Beyond economic and technological considerations, we must ask ourselves: what is the use of AI if it contributes to the breakdown of families, loss of peace and the erosion of our value systems? Where is the inclusivity, the vision of Sarbat da bhala (welfare for all), espoused by Guru Nanak Dev? How can we reconcile AI’s advancements with the ethos of the Bharatiya gyan parampara (Indian knowledge tradition)? These pressing questions demand immediate attention.
Educational institutions hold a pivotal responsibility in addressing these concerns and shaping an ethical AI ecosystem. Universities and colleges, as centres of knowledge and innovation, must take proactive steps to integrate ethical considerations into AI development.
AI education cannot remain limited to technical knowledge. Institutions must embed moral, philosophical and cultural dimensions into their curricula to prepare students to think critically about the societal impact of AI.
For instance, Guru Nanak Dev University (GNDU) is working on introducing courses that merge AI with philosophy, sociology and the teachings of Guru Nanak Dev. This ensures that students understand the importance of inclusivity and ethical frameworks.
Universities must also prioritise research on mitigating AI biases, enhancing transparency and preventing misuse. Collaborative, interdisciplinary research can bridge the gap between technology and ethics, addressing diverse cultural and social contexts.
Further, educational institutions should demystify AI for the public, explaining its benefits, risks and ethical implications. By engaging with policymakers, industries and communities, universities can play a critical role in fostering trust in AI systems.
While AI has global implications, its solutions must cater to local needs. For example, GNDU's initiative to reserve 5 per cent of seats for students from rural and border areas is a step towards inclusivity in technology education. Such initiatives ensure that marginalised communities are not left behind in the AI revolution.
Guru Nanak’s philosophy of ‘Sarbat da bhala’ provides a guiding principle for the kind of AI we should aim to develop — one that uplifts humanity, bridges divides and fosters harmony. As stewards of our cultural heritage, universities must champion these ideals to create a technology-driven yet ethically grounded society.
To ensure that AI serves humanity rather than harms it, governments too have a critical role to play. The current laissez-faire approach adopted by countries like the US, driven by an impulse to dominate the AI race, is unsustainable and potentially dangerous. At the same time, excessive regulation could stifle innovation, as feared by many industry leaders. Striking the right balance requires careful policymaking and international collaboration.
Governments must work towards global and national AI governance frameworks that prioritise transparency, safety and inclusivity. These frameworks should include mechanisms to detect and mitigate biases, ensure data privacy and hold developers accountable for misuse. Policies must also reflect the cultural ethos and value systems of the nations implementing them. India, for instance, can draw from its rich heritage of Bharatiya gyan parampara, emphasising wisdom, inclusivity and harmony. AI development must be aligned with the vision of creating a compassionate and just society for future generations.
Governments should allocate funds to universities and research institutions for interdisciplinary studies on ethical AI. Partnerships between academia, industry and the government can accelerate innovation while ensuring ethical compliance.
In addition, governments must invest in large-scale awareness campaigns to educate citizens about AI’s benefits and risks. Public participation can strengthen trust in AI systems and empower individuals to make informed decisions about their use.
As AI disrupts labour markets, governments should also prioritise skilling and re-skilling initiatives to prepare workers for emerging job roles. Special attention must be given to rural and economically disadvantaged communities to bridge the digital divide.
Ultimately, the vision for AI must align with the society we want to create. An unregulated AI rush, driven solely by profit motives, risks eroding the very values that define humanity. As Guru Nanak Dev’s teachings remind us, the ultimate aim must be the welfare of all. We must strive to create AI systems that are inclusive, compassionate and equitable, ensuring they contribute to peace and harmony rather than division and chaos.
As we tread this delicate path, the collaboration between governments, educational institutions and civil society will be critical. Together, we can harness the immense potential of AI while safeguarding our values, ensuring it becomes a tool for global good rather than a force of disruption.
Source: The Tribune