As we move deeper into the 21st century, few things have changed our world as much – or as quickly – as Artificial Intelligence. What used to be a topic for scientists and science fiction writers has now become part of our daily lives. AI is affecting everything – from how we work and communicate to how wars are fought and elections are won. Algorithms now decide what news we see, what ads we get, how much credit we qualify for, and sometimes even how free we are. In the middle of this rapid change, a big question arises: who is making the rules for this powerful and fast-moving technology?
The development of AI has been incredibly fast and, at times, confusing. From smart chatbots to AI-controlled weapons, its reach is spreading into all areas of life. Governments are using it to watch over people, companies are using it to work faster and cheaper, armies are using it to prepare for future wars, and doctors are using it to find cures for diseases more quickly. AI is no longer just about making life easier – it is about power. As AI becomes a tool for global influence, countries are racing not just to build the best technology, but to decide how it should be used and who gets to control its rules and values. The AI race is not only about smart machines – it’s about who shapes the moral and legal standards behind them.
This race is especially clear in the growing competition between the United States and China. Both countries lead the world in AI because they have huge amounts of data, powerful computers, and a lot of money to invest. They are putting AI into business, national security, and the military, seeing it as the key to staying powerful in the future. Europe, while not as aggressive in building new AI tools, is trying to lead by creating strong rules and ethical guidelines – like the EU’s Artificial Intelligence Act.
On the other hand, many countries in Africa, Latin America, and Asia are being left out of these discussions, even though they are affected by AI systems built elsewhere. This lack of inclusion raises serious questions about fairness, global equality, and digital freedom. What makes things even more worrying is that there is no agreed-upon global system to manage or control how AI is built or used. Unlike nuclear technology, which is guided by international treaties and strict rules, AI is developing in a kind of legal Wild West. Different groups – such as the OECD, the G7, the World Economic Forum, and big tech companies – have made suggestions and principles for using AI responsibly, but these are not enforceable laws.
They are mostly voluntary, scattered, and carry no legal force. There is no worldwide agreement, no global agency, and no shared plan to make sure AI helps humanity instead of hurting it. This gap in governance is already causing serious problems. In countries where democracy is weak, AI is being used to strengthen authoritarian rule. Governments are using facial recognition to track and silence protestors, social scoring systems to control people’s behaviour, and AI policing tools that often deepen unfair treatment. Online, deepfakes and other AI-created content are making it harder to tell what is true and what is not, which can disrupt elections and stir up hatred. On the battlefield, AI-controlled weapons raise big moral and legal issues that we are not ready to face.
Should machines be allowed to make life-and-death choices? Who is to blame if an AI system makes a terrible mistake? These are not far-off problems – they are urgent human questions that require action from the global community. The lack of rules also makes the divide between rich and poor nations worse. Wealthy countries and powerful tech firms are leading in AI, while poorer nations are forced to buy and use tools they did not create, and often do not fully understand. This could lead to a new kind of digital colonialism, where data becomes the raw material and algorithms become the tools of control.
Countries without their own AI systems may become places where foreign companies collect data or test new technologies, without proper protections or regard for local cultures and laws. Because of these growing dangers, it is extremely important for the world to work together and build a clear, fair, and strong system for managing AI. This system should be based on common values like human rights, fairness, openness, and justice. It should not just reflect the interests of powerful countries – it must reflect the shared responsibility of all people. Ideally, we need a global treaty, maybe under the United Nations or a new international group that sets the minimum safety and ethical rules for AI use in both everyday life and in the military.
Just as the world came together to make rules about nuclear weapons or climate change, we must act now to stop AI from causing serious harm. It is also essential that countries in the Global South – which are often left out of these discussions – have a real say in shaping these rules. Including their ideas is not only fair, but necessary. If they are left out, the rules will be incomplete, possibly unfair, and even harmful. Civil society groups, universities, and the press must also play a big role. They can act as watchdogs, educators, and protectors of people’s rights by demanding openness in AI systems, protecting privacy, and helping people understand how AI affects their lives. Notably, the big tech companies that helped create today’s AI problems must also help solve them.
Companies like Google, Microsoft, and OpenAI have the skills and the platforms to help set useful global standards. Their role must be carefully watched to prevent them from taking too much control, but their cooperation is key to building AI rules that are both practical and ethical. We cannot rely only on companies to police themselves, but we also cannot move forward without their help. The urgency is clear. As AI systems become smarter and more autonomous, the risks they pose grow quickly. If we wait too long, we may find ourselves facing disasters we could have prevented, reacting to crises instead of shaping the future in a safe way. The time to act is now.
So the real question is not whether we need rules for AI – but whether we can come together, with courage and unity, to create them. We are at a critical moment in human history. AI offers us amazing possibilities, but without good global rules, it also brings serious threats. In deciding how we control it, we are also deciding what kind of future we want to live in. The rules for AI must be created – not by the richest, or the fastest, or the most powerful – but by the whole of humanity, for the benefit of everyone. (The writer is an accountant and freelance writer.)