The localization industry adapts the language of products and services for the world. Adapting itself to the world of business has been much slower and less transparent.
As a result, there is vast fragmentation of services and technologies and an undertow of questions about best practices. No one service or tool does it all, notwithstanding abundant marketing copy to the contrary.
I built this application to pull open the curtains. This is the first practitioner-reported dataset on how localization actually works. These workflows run through buyer organizations, LSPs, and vendor teams. Everyone running them belongs here.
Three real-time feedback mechanisms, each built on contributions from practitioners across the industry. Use the results to inform buying decisions and program strategy.
Report the tools you use and how well they work together. Vendors know what they sell. Buyers know what they pay. Nobody publishes what works. This is for measuring real stack performance: which combinations people run, how they rate them, and where the friction is.
Share your stack →

Vote on contested localization terminology and propose alternatives. Disagreements about terminology are often disagreements about accountability. "Light post-editing." "TM leverage." "Transcreation." This glossary names the contested terms and invites you to share your professional (and candid) views.
On your terms →

Report what you pay for TMS, automation, MT, and AI platforms. Pricing opacity in this industry benefits vendors, not buyers. Contracts are negotiated privately, public benchmarks do not exist, and the gap between what platforms cost and what buyers think they should cost is rarely discussed. This changes that.
Industry rates →

Everyone who contributes makes the picture clearer.
About the author →