Ontologies, formal representations of knowledge as sets of concepts and relationships, play a pivotal role in artificial intelligence and information systems. They enable machines to interpret human knowledge by structuring concepts in a machine-processable way. However, ontologies are not culturally neutral: they embed assumptions from the language and worldview of their creators [1][2]. An ontology reflects a particular conceptualisation of the world, and this conceptualisation is often influenced by the culture and linguistic context in which it was developed. Professionals in knowledge engineering must therefore grapple with how cultural assumptions and linguistic structures shape concept selection, relation patterns and even biases in ontological models. In this post, we explore the interplay between ontologies, culture and language, and the implications for AI and knowledge engineering. Using real-world examples from Friend of a Friend (FOAF), schema.org as well as domain...