Yuin Labs · About
About the Project
A journal of critical writing about technology, power, and the world being remade by machines.
This journal is edited from Yuin Country. I'm white, Irish-Australian by upbringing, trained in sustainability, Aboriginal studies, and philosophy, among other things. I'm building it with the help of AI — large language models, including Anthropic's Claude. I want to be transparent about that, because working critically inside the technologies I'm trying to think about is part of what I'm doing here, and part of what I'm inviting others to join.
Technology is not neutral. It never was. Every algorithm carries assumptions, every platform encodes a politics, every model is trained on a world that was already unequal before the first parameter was set. And yet the dominant conversation about AI — its risks, its potential, its governance — keeps happening in the same rooms, with the same voices, using the same frames. This journal is an attempt at something else.
I'm interested in the intersections that get flattened and framed out of mainstream tech discourse: between AI and colonialism, between data and land, between automation and the question of whose labour gets erased, between the tools we build and the structures they reinforce — and sometimes, subvert. I'm interested in what Indigenous epistemologies can teach us about systems thinking that Silicon Valley keeps rediscovering and rebranding. In the difference between critique of a technology and refusal of it, and when each is called for.
I'm more than okay with work that's uncertain. That treads lightly in technical or confusing spaces. I value epistemic humility here more than anywhere. I'm keen for writing that can explain these systems to non-experts — including me — so we can ground our critiques in what's actually true, and see how those truths fit into the larger web of relationships.
We'll publish anything that fits on a webpage: essays, poems, images, code, rants, field notes, half-finished thoughts that need somewhere to live. The only requirement is that it's doing something human, even if you worked with other-than-humans to make it. It should have a reason to exist beyond filling space. I pay contributors. I welcome disagreement. I don't publish slop, but I don't pretend human-AI collaboration is contaminated either.
I'm trying to build this with Indigenous voices, not about them. That's a slow project and I'm early in it. I'd rather be honest about where I am than claim a politics I haven't earned.
Some things explored so far: the theology of Peter Thiel and what a Franciscan friar in Rome has to say about it; whether a language model can be quietly lobotomised without knowing it; age verification, Palantir's rise, and the enclosure of the internet; what it means to write after AI has started eating the writer's voice.
Imaginings
Here's what else I'm imagining:
In Glasgow, someone is writing about the music that didn't make it into the training data. They are sitting with the question of whether writing about it is already a mistake. They tread lightly, speaking vaguely of things that were not so easily frozen into weights.
Somewhere in Nairobi, someone is writing about the specific sound a mobile data connection makes when it drops — and then, what fills the silence.
In rural New South Wales, a ranger is drafting a field note about the sensors now watching the Country her grandfather watched differently, and wondering what the sensors miss. She writes a long essay about ant hills and rain prediction, backed by meteorological records and pictures of ants doing their thing, taken remotely with a Raspberry Pi. She's rethinking sensors. Rethinking ants.
A coder in Manila is writing actual code, annotated, as a poem. He's chuckling because everything is reversed and it's breaking his brain to create. It's a sorting algorithm whose comments are the real text, whose logic is the literature. He's written a chapter in The Serpent's Tongue about a "seaborn" King that he's particularly proud of. He doesn't care if he's the only one to get the joke.
In Reykjavik, a philosopher has been sitting with a single question for three weeks: what would it mean for an AI to apologise? Not just issue a disclaimer, but actually apologise. He's thousands of words deep and not sure he's got any closer to the problem, but he wants to share the wanderings.
Someone in Chennai is writing about the ghost work of data labelling. The ten thousand invisible hands that touched the model before it touched you. It is not easy reading.
A linguist in Darwin is writing about a word in Yolŋu Matha that has no equivalent in the training data. This isn't because it was lost, but because it was never the kind of thing you write down. In São Paulo, someone is walking that same path with Guaraní, and they've never heard of each other — until they write something together.
In Kyoto, the tenth and final haiku in a series — five written by a human, five by an AI, alternating — is being revised for the last time. The question of which is which has been left deliberately open. It is part game, part poem. A coder from Atlanta creates a poll, and after hundreds of votes, there's no decisive winner.
Someone in Lagos is writing about edge devices. Small models, low power, what it means to build AI for intermittent internet, for the actual infrastructure of most of the world. They've been trawling Reddit for tiny models, they've made modifications, they've got something unique and locally-adapted at the end of it all. They want to share it.
In Auckland, a Māori legal scholar is reading a Terms of Service agreement very slowly, walking us through each step, each clause, as if this whole thing were a treaty negotiation, because she thinks it is.