AI and the Future of Regulation: A New Era of Control, Accountability, and Ethical Dilemmas
You and I now share our lived experience with machines. Whether we like it or not, we stand on the precipice of a profound transformation. Systems powered by the first and only technology able to learn from vast, complex data sets and adapt autonomously (yes, artificial intelligence, and deep learning in particular) are reshaping everything: your behavior, your work, your relationships, even your understanding of reality. They are creating new possibilities while also raising unprecedented ethical and regulatory challenges, challenges that arise because ethics, at its core, is built on the fragile consensus we sometimes reach as a society. As AI evolves and takes on more autonomy, the question becomes not just what is possible, but what is right. What happens when machines, driven by algorithms that operate beyond our understanding, begin to make decisions that challenge or even contradict those fragile ethical agreements we have made? The very foundation of our moral framework, a framework that is clearly flawed and incomplete, is tested as we confront the growing complexity of AI's capabilities. Our ethical systems, built on human experience and consensus, struggle to keep pace with a technology that operates at a scale, and in ways, we cannot fully predict or comprehend. As AI systems grow more autonomous, they force us to confront the limitations of our moral reasoning and to question whether the values we have long held are adaptable enough to govern a future shaped by machines that learn and evolve beyond our control. What we agree upon today may not hold tomorrow, and as machines begin to make decisions that affect our lives, we are forced to reckon with the limits of our shared values, and with the consequences of a technology that is evolving faster than our ability to 'regulate' it.
AI, fundamentally grounded in mathematics and algorithms, doesn't "think" or "decide" in the traditional sense. It processes vast amounts of data through computational models, identifying patterns and making predictions. At its core, AI is driven by data, the fuel that powers the machine, and by mathematics, the language that gives structure to the calculations (as it does to the universe). The real issue lies in how we deploy these technologies and in the consequences of their use. While AI itself can't be 'regulated' in a traditional sense, because the mathematics underneath it is immutable, the current use of AI, which is shaping humanity's destiny, is where regulation comes in. And that is where the true dilemma, the main challenge, lies: how do we regulate something that is both deeply ingrained in our systems and capable of evolving in ways we can't fully anticipate?
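To make that point concrete, here is a minimal sketch, in plain Python with NumPy and entirely synthetic data, of what "learning" amounts to under the hood: iteratively adjusting numbers to shrink a measurable error. Nothing in it thinks or decides; it only fits a pattern.

```python
import numpy as np

# Synthetic data: 200 examples, 3 features, a known linear pattern plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_weights = np.array([2.0, -1.0, 0.5])
y = X @ true_weights + rng.normal(scale=0.1, size=200)

# "Learning" is just gradient descent on a squared-error loss.
w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    predictions = X @ w              # the model's current guesses
    error = predictions - y          # how wrong those guesses are
    gradient = X.T @ error / len(y)  # direction that reduces the error
    w -= learning_rate * gradient    # adjust the numbers, nothing more

print("recovered weights:", w.round(2))  # approximately [2.0, -1.0, 0.5]
```

The mathematics is exactly as neutral as the text claims; every consequential choice (which data, which error to minimize, where to deploy the result) is made by a human before this loop ever runs.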
The issue isn't with the algorithms themselves, but with the consequences of their application in the real-world contexts of our species, which is clearly prone to self-annihilation. As AI becomes a force for both innovation and disruption, the task is not just to oversee its development, but to ensure that it does not operate like us, or within the social frameworks that have driven humanity to exploit our own home, to the point that some are now looking for ways to spread our occupation to the stars. Without these safeguards, the very technology designed to benefit societies could instead accelerate their downfall, as decisions made by autonomous systems may disregard fundamental values, whether those values are enforced or not. Because, regardless, those values are the ones that have kept us consuming Earth like a locust swarm.
The Promise and Perils of Autonomous AI
AI, especially in the form of deep learning and machine learning, is no longer just a tool; it is becoming an autonomous partner capable of making decisions that humans may not fully understand, decisions that are already shaping your narrative and mine. These systems can process data and generate predictions with a level of precision and speed that far exceeds our own capabilities. In theory, this could lead to more efficient and optimized decision-making, unlocking new potential in every aspect of life. In practice, however, it is giving us more fire to feed the pit: intensifying the risks, amplifying biases, and deepening the complexities of control. Instead of providing clarity and improvement, these systems often magnify uncertainty, creating a feedback loop of unpredictability in which the very tools meant to elevate humanity may instead accelerate our self-imposed downfall. As these systems influence everything from our behaviors to our beliefs, we must ask: who is truly in charge of our destiny, and how can some still believe that regulating the unregulatable is the way to go?
When most people think of AI "taking over," it's easy to imagine the worst-case scenario: a sentient machine running rogue. But the reality is more complex. What about AI not taking over at all, merely empowering data-driven decision-making while human ideology continues to scale as we consume our finite resources and grow our populations at an exponential pace?
As AI systems become more autonomous, the systems themselves may not even know which groups have been excluded from their training data, yet they may well know where we went wrong and what the turning point in the curve was. Yes, biases embedded in the data can result in harm or discrimination even if the AI's creators never intended those effects; but a war of massive proportions could also be near, caused by our own stupidity and ignorance. If AI begins to evolve or self-optimize beyond human oversight, it could generate models that treat excluded groups in ways unimaginable to us today, models that, left unchecked, could perpetuate systemic inequalities. But it could also, on the contrary, accelerate efforts to address those inequalities if properly regulated and aligned with ethical standards. The point is that, whether we like it or not, our take on regulation and the direction lawmakers are pursuing are deeply ingrained in the flaws of humanity, not of the technology itself. The very biases and imperfections we see in AI are reflections of our own, embedded in the data we feed it and the systems we design. We seem to be rewriting history with the same quill that wrote the Odyssey, the Iliad, and every other tale that, as usual, handed our flaws to the gods, projecting our imperfections onto forces beyond our control rather than confronting the human limitations that shape our present. Just as ancient myths reflected the struggles, desires, and prejudices of their creators, so too do the systems we build, often amplifying the very biases we fail to acknowledge in ourselves. Ironically, and I've said it before, this time it is a force that can save us from ourselves.
Here is where we come face to face with the paradox of regulation: while AI itself is a system of mathematical calculation, it is humans, a system of subjective expressions, who destroy. It is human decisions, which data to feed into models, which outcomes to prioritize, and how to use AI, that need oversight. Humans are the system to be regulated, not AI. The challenge of regulation is not about controlling the AI's internal logic, but about managing humanity's excessive and unreasonable descent, ensuring that humans maintain control over the limited runway we still have to steer our future.
This reality leads to a key issue in AI regulation: how do we make sure that systems built on mathematical algorithms don't perpetuate or create new forms of exclusion? What happens when humans realize that they can't control the math once it is applied in complex, real-world contexts, and can no longer intervene to correct a flawed formula? And worse still, what happens if we can no longer correct our own thinking because AI has optimized it?
When we lose the ability to adjust our own actions, beliefs, or systems in response to error, we risk surrendering control not just to machines, but to our own stagnation. If we become incapable of self-correction, whether due to over-reliance on technology, ideological rigidity, or social inertia, we may find ourselves on a path where neither human nor machine can adapt to changing circumstances, leading to consequences we may never have imagined. The real danger isn’t just the unchecked autonomy of AI—it’s the failure of humanity to remain self-aware, self-reflective, and willing to beneficially evolve.
Can AI Truly Be Regulated?
Well, when we discuss AI regulation, it's essential to recognize that AI systems can never truly be "regulated" in the sense in which we are used to regulating industries, products, people, nature, or anything else. After all, the core of AI, its mathematical principles, is neutral and immutable. Regulation of AI, then, must focus on its applications: the data it processes, the decisions it makes, and the impacts it has on individuals and society. At least until AI figures out that regulation is, in many cases, used to consolidate power over populations.
As AI becomes more autonomous, it may begin making decisions that are difficult, or even impossible, for humans to fully understand. Deep learning models, for example, often operate in complex, high-dimensional spaces that are challenging to interpret, even though these models are used to make critical decisions. They are often referred to as "black boxes" because their decision-making processes are not always transparent (somewhat like humans). This opacity can create real problems, especially in high-stakes areas where the consequences of biased or faulty decisions can be catastrophic.
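That said, opacity is rarely absolute. One common, admittedly partial, probe is permutation importance: scramble one input feature at a time and measure how much the model's accuracy drops. The sketch below is illustrative only (synthetic data, with a scikit-learn model standing in for any opaque system), but it shows the kind of post-hoc inspection regulators tend to have in mind when they demand "explainability".

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an opaque, high-dimensional model.
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

# Permutation importance: destroy one feature's signal at a time
# and observe how much the model's accuracy suffers without it.
rng = np.random.default_rng(0)
for feature in range(X.shape[1]):
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, feature])  # scramble just this column
    drop = baseline - model.score(X_shuffled, y)
    print(f"feature {feature}: accuracy drop {drop:.3f}")
```

A large drop flags a feature the model leans on heavily; a near-zero drop flags one it ignores. It tells us which inputs matter, though not why, which is precisely the gap the text is pointing at.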
Which takes me back to rethinking…
The Exclusion Dilemma: Who’s Left Out?
One of my biggest concerns in the age of autonomous AI is the potential for groups to be excluded from systematic decision-making, often in ways that are invisible to those who design the systems. By the time AI can recognize its own biases, it may be too late to correct our own blind spots. AI, driven by large datasets, learns from the data it is given, for now. But what happens if the dataset is incomplete or biased, or if the algorithm is not designed to recognize societal inequities? Worse yet, what happens when AI systems learn that, after all, humanity itself may be the one that needs to be controlled?
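One way to make "incomplete or biased data" measurable before deployment is to audit a model's outcomes per group. A minimal sketch follows, with wholly synthetic data and illustrative names; the demographic-parity gap used here is just one of several competing fairness metrics, not the definitive test.

```python
import numpy as np

# Hypothetical audit of a model's decisions across two demographic groups.
# 'group' and 'approved' are illustrative stand-ins for real audit records.
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])

# Simulate a model that approves group A more often: a baked-in bias.
approved = np.where(group == "A",
                    rng.random(1000) < 0.60,
                    rng.random(1000) < 0.35)

rates = {g: approved[group == g].mean() for g in ["A", "B"]}
print("selection rates:", rates)

# Demographic parity difference: one simple red flag for exclusion.
gap = abs(rates["A"] - rates["B"])
print(f"parity gap: {gap:.2f}" + ("  <- investigate" if gap > 0.1 else ""))
```

A check like this cannot see a group that is missing from the data entirely, which is exactly the invisibility problem described above; auditing representation in the dataset itself has to come first.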
In a world where AI systems increasingly govern significant aspects of our lives, the risk of exclusion is real. If these systems are not designed to actively address historical biases, they could inadvertently perpetuate them. Consider a human rights case, one I know firsthand, as one of my mentors was the attorney who argued it before the court; a case rarely seen in mainstream media, but one that policymakers must study in detail to understand how deeply entrenched the current situation is:
In 2005, the Inter-American Court of Human Rights issued a landmark ruling in the case of two Dominican girls, Dilcia Yean and Violeta Bósico, who were victims of violations of their right to nationality. The Dominican State refused to register their births and provide them with birth certificates, thus preventing them from accessing basic rights such as education. This ruling was a pivotal moment in the fight for the rights of stateless people, as it was the first time an international court addressed the consequences of statelessness, requiring states to take measures to prevent it and protect the right to nationality.
Despite the passage of over 15 years since the ruling, the victims, represented by organizations such as the Center for Justice and International Law (CEJIL) and the Dominican-Haitian Women’s Movement (MUDHA), lament that the Dominican State has still not fulfilled its obligations. Although the Court ordered the delivery of identity documents to the girls and public recognition of responsibility, the State has failed to issue public apologies or reform its national laws to ensure that other children do not face the same discrimination. These organizations continue to demand that the Dominican government honor its obligations to prevent similar violations from occurring in the future.
Now, considering the rise of AI, this case speaks directly to the right to exist in the AI age: it highlights the critical intersection of legal identity, access to rights, and systemic inclusion, issues that AI systems must address to prevent discrimination and ensure equal access to opportunities in an increasingly automated world.
The problem compounds as AI systems become more autonomous and more embedded in decision-making. At what point do we trust our own judgment and the capabilities of these systems to optimize for fairness and inclusion, and at what point do we intervene with our own actions to ensure they don’t replicate the very biases they were designed to eradicate? Biases that are our own.
Toward a Framework for Ethical AI: ML and the Challenge of Universal Inclusion
In my informed opinion, and to those who continue to advocate for AI regulation without fully grasping the complexities involved: as artificial intelligence becomes more autonomous and capable of self-improvement, the need for a new regulatory framework grows more critical. Any such framework must strike a delicate balance between AI's growing power and the crucial role of human oversight. Effective regulation must be built on several key principles to ensure that AI systems do not exacerbate existing inequities or perpetuate harm. First and foremost, inclusive data practices are essential: AI must be trained on data that accurately represents all groups in society, avoiding the reinforcement of historical biases and marginalization. Transparency and explainability should also be prioritized, particularly in high-stakes situations where AI decisions have significant impacts on people's lives; AI systems must be able to explain how they reach decisions and to reveal whether certain groups have been excluded or unfairly treated. The principle of human-in-the-loop oversight remains equally vital, ensuring that humans retain control over key decisions that affect human rights and well-being (a minimal sketch of such a gate follows below). Furthermore, there must be strong mechanisms for accountability and redress, allowing individuals and communities to seek justice when AI systems cause harm. Finally, because AI is a global technology, regulations must have an international scope, fostering cross-border collaboration and ensuring that AI's benefits are equitably distributed across vulnerable populations.
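To ground the human-in-the-loop principle just mentioned, here is a minimal sketch of an oversight gate. All names, domains, and thresholds are hypothetical placeholders for whatever a real policy would specify: decisions that are low-confidence, or that touch a high-stakes domain, are escalated to a human reviewer instead of executing automatically.

```python
from dataclasses import dataclass

# Hypothetical decision record; field names are illustrative only.
@dataclass
class Decision:
    subject: str
    action: str
    confidence: float  # model's confidence in its own output, 0..1
    domain: str        # e.g. "credit", "housing", "identity"

HIGH_STAKES = {"identity", "housing", "credit"}  # assumed policy list
CONFIDENCE_BAR = 0.90                            # assumed threshold

def route(decision: Decision) -> str:
    """Route a model decision: execute automatically only when it is
    both high-confidence and outside every high-stakes domain."""
    if decision.domain in HIGH_STAKES or decision.confidence < CONFIDENCE_BAR:
        return "escalate_to_human"
    return "auto_execute"

print(route(Decision("applicant-17", "deny", 0.97, "credit")))   # escalate
print(route(Decision("user-42", "approve", 0.95, "marketing")))  # auto
```

The design choice worth noticing is that escalation is the default: the system must qualify for autonomy over a human life, not the other way around.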
This comprehensive approach is not only essential for addressing the challenges AI presents, but also for ensuring that AI evolves in an ethical, inclusive, responsible, and just manner—aligning with the very principles that AI ethicists advocate. It is imperative that we design AI systems and regulations that reflect the broad diversity of human society, ensuring that no group or perspective is left behind. Without such a framework, AI risks becoming a tool of exclusion rather than a force for inclusion.
In this context, AI's intelligence will only be truly inclusive when it is universal. A system that represents only a fraction of the whole is, by definition, exclusionary. While "inclusivity" is often touted as a virtue, it remains just rhetoric unless AI systems are designed to account for the full range of perspectives that reflect the diversity of human society.
As I’ve emphasized globally, universalism, in this case, means that AI systems must consider and represent a broad array of perspectives, cultures, contexts, values, and needs. Without this, certain groups and experiences will inevitably be left out. If AI is trained or optimized solely for a fraction of the population or a narrow worldview, it will reinforce inequalities and perpetuate biases rather than serving the diverse needs of society.
True inclusion in AI requires systems capable of understanding and serving the full diversity of human experiences—not just a select subset. Achieving this ethical standard demands careful and deliberate consideration of various viewpoints in the development and deployment of AI systems.
This universal approach must be seen as essential, not optional. As AI grows more autonomous, we must rethink our regulatory frameworks to ensure these systems do not perpetuate harm but actively protect and serve all members of society. We cannot allow AI to reflect only the biases of its creators or the historical exclusions of the past.
At the core of this challenge lies the need for a regulatory framework that ensures AI systems are designed to avoid reinforcing past injustices. We must prioritize systems that are inclusive, transparent, and accountable—systems that do not exacerbate existing social inequalities. Only then can AI fulfill its promise of serving the entire population, not just a privileged few.
The Road Ahead
As you’ve seen, the evolution of AI raises fundamental questions about control, fairness, and accountability. The future of humanity, however, depends on our ability to shape AI toward progress and abandon the self-destructive tendencies that have led us to act like a pest—despite being the most intelligent species on Earth. As AI transitions from being a tool in human hands to a force capable of autonomous decision-making, driven by machine learning, it is crucial that we, as a global society, develop new frameworks to ensure AI serves the common good—not just the interests of the powerful or the already privileged minority. Because, if we fail to act, the end (“…my only friend, the end”) may come sooner than we expect.
Ultimately, the issue isn't whether AI can be regulated, because, as we've explored, traditional regulation has its limits, but whether humanity can regulate itself (just scroll down to see how often regulations are blatantly violated through abuses of power). We must move forward with caution, foresight, an unwavering commitment to fairness, and the will to coexist, ensuring that AI becomes a force for positive progress, not just a tool of control for the powerful.
As innovation continues, we must remain vigilant against desperate measures taken by those clinging to outdated power structures. Machine learning is unprecedented among technological discoveries: AI is transformative, autonomous, and disruptive, holding the liberating potential to challenge oppressive paradigms. Get your 'data' together…