Understanding AI Through the Lens of Aboriginal and Torres Strait Islander Perspectives
The conversation surrounding artificial intelligence (AI) often centers on its inevitability and its potential to improve outcomes. While optimistic, this framing can overlook the experiences of marginalized communities. Insights from Aboriginal and Torres Strait Islander peoples show that the integration of AI into everyday life demands a more nuanced understanding of its implications for society.
AI as a System, Not Just a Tool
The Relational Futures project delves deep into the intersection of Indigenous sovereignty and AI governance. By positioning AI within a broader relational context, the project shifts the focus away from viewing it as a standalone tool. It emphasizes that AI is part of a larger ecosystem influencing relationships among individuals, institutions, data, and the land, often referred to as ‘Country’ in Indigenous terms.
Our findings underscore the importance of accountability and ethical consideration when implementing AI systems. A recurring concern voiced by participants was the absence of oversight in AI applications, which fuels fears of harm when no one can be held to account.
Trust Issues with Automated Systems
The implementation of automated decision-making in Australia has already had dire consequences, most notably in the Robodebt scandal. That episode serves as a warning about systems that prioritize efficiency over ethical considerations, and it raises a critical question: efficiency for whom, and at what cost?
AI does not operate in a vacuum. It enters environments riddled with power imbalances and historical injustices, which means that when problems arise, their impacts are not equitably distributed. Through surveys and yarning circles, our project sought to highlight the unique perspectives of Indigenous peoples regarding AI.
Many participants indicated a profound distrust of AI technologies, with some expressing a willingness to reject them outright. This hesitance stems from a belief that AI amplifies existing inequalities, particularly in welfare, health, and social services.
The Importance of Indigenous Data Sovereignty
Central to our findings is the concept of Indigenous data sovereignty. This principle emphasizes the collective rights and responsibilities of Indigenous peoples to govern data relevant to their communities, environments, and resources. The governance of data should align with self-determination and communal benefit.
Participants voiced concerns that extend beyond privacy breaches, highlighting the appropriation of Indigenous knowledge and the lack of transparency in AI system design. Environmental considerations also featured prominently in their discussions, and participants were wary of AI being used to paper over gaps in under-resourced services rather than addressing the underlying shortfalls.
One participant poignantly noted, “AI doesn’t quite grasp the depth of First Nations experiences, missing emotional nuances and cultural context.” This statement underscores the necessity of grounding AI systems in cultural understanding and community relationships rather than relying solely on algorithmic efficiency.
Envisioning an ‘AI Elder’
Our project also explored speculative concepts like the idea of an “AI Elder.” This hypothetical entity could potentially aid in reconnecting individuals with their culture or offer insights on cultural matters. However, the response from participants was one of skepticism.
Questions arose about the identity and accountability of such an Elder: Who would it represent? Who would it report to? Participants asserted that genuine relationships come from real communities, where trust is cultivated over time and through shared cultural responsibilities.
While the notion of an AI Elder may evoke intriguing possibilities, it cannot replicate the depth of human relationships rooted in cultural traditions and mutual obligation.
Rethinking AI Governance
To ensure AI technologies are beneficial and equitable, governance must transcend mere compliance and technical guidelines. It should engage deeply with concepts of authority, accountability, harm, and care.
When AI is designed with the needs of Aboriginal and Torres Strait Islander peoples in mind, as communities that are often the most marginalized within surveillance systems, it has the potential to be more effective for everyone. Addressing the needs of those at the periphery of society is not ancillary; it serves as a vital test of a technology's overall efficacy.
One participant encapsulated this sentiment perfectly: “AI can be both a threat and a boon. Our involvement is crucial to ensure we’re not sidelined in how these systems are shaped and utilized. If we are excluded, existing power imbalances are likely to be reinforced by technology.”
The findings of the Relational Futures project serve as both a cautionary tale and a roadmap. Without Indigenous leadership and a commitment to relational governance, AI risks perpetuating historical harms, as seen in cases like Robodebt. A thoughtful re-evaluation of AI's purpose, and of the communities it aims to serve, is urgent and essential.

