
Ontologies, Culture and Language: Influence and Implications in AI Systems

Ontologies, formal representations of knowledge as sets of concepts and relationships, play a pivotal role in artificial intelligence and information systems. They enable machines to interpret human knowledge by structuring concepts in a machine-processable way. However, ontologies are not culturally neutral: they embed assumptions from the language and worldview of their creators[1][2]. An ontology reflects a particular conceptualisation of the world, and that conceptualisation is often influenced by the culture and linguistic context in which it was developed. Professionals in knowledge engineering must therefore grapple with how cultural assumptions and linguistic structures shape concept selection, relation patterns and even biases in ontological models. In this post, we explore the interplay between ontologies, culture and language, and its implications for AI and knowledge engineering, using real-world examples from Friend of a Friend (FOAF) and schema.org as well as domain ontologies in healthcare and defence. We also highlight how cultural biases can creep into ontologies and discuss strategies to mitigate these biases in professional practice.

Language, Culture and Conceptualisation

More than a century of research in anthropology and linguistics has documented that language influences how people categorise reality[3]. This idea, often associated with the Sapir-Whorf hypothesis (linguistic relativity), suggests that the vocabulary and structures of a language can shape its speakers’ worldview. A classic example comes from kinship terms: different cultures carve up family relationships in distinct ways. For instance, English uses a single term “uncle” for a father’s or mother’s brother, whereas other languages differentiate these roles, indicating separate concepts for maternal versus paternal uncle[4]. Such differences are not merely linguistic quirks; they reflect deeper cultural notions of family and social roles. An ontology that naïvely merges all “uncle” relations into one might overlook cultural distinctions (e.g. the varying expectations of a maternal uncle in one culture versus a paternal uncle in another). This simple example illustrates how cultural and linguistic structures shape conceptual categories, which in turn must be considered when modelling knowledge.
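
To see what preserving such a distinction might look like in practice, here is a minimal sketch using Python and rdflib. The ex: namespace, property names and individuals are purely illustrative; the point is that modelling maternal and paternal uncle as distinct relations, linked by rdfs:subPropertyOf to a generic one, keeps the cultural distinction without losing the ability to query for uncles in general.

```python
# A sketch of preserving a culturally salient kinship distinction while still
# supporting a generic query. Requires rdflib; the ex: terms are illustrative.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/kinship#")
g = Graph()

# Model maternal and paternal uncle as distinct relations under a generic one
g.add((EX.maternalUncleOf, RDFS.subPropertyOf, EX.uncleOf))
g.add((EX.paternalUncleOf, RDFS.subPropertyOf, EX.uncleOf))

# Instance data keeps the finer-grained, culture-specific relation
g.add((EX.ravi, EX.maternalUncleOf, EX.asha))

# A generic query still finds all uncles by following rdfs:subPropertyOf
q = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/kinship#>
SELECT ?uncle ?child WHERE {
    ?rel rdfs:subPropertyOf* ex:uncleOf .
    ?uncle ?rel ?child .
}
"""
for uncle, child in g.query(q):
    print(uncle, child)
```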

Figure 1: An example of linguistic influence on conceptual categories: a colour wheel labelled with Irish Gaelic colour terms. Irish Gaelic divides the blue-green spectrum differently from English, using terms like gorm (for deep blue/green) and glas (for lighter blue/green) that are distinguished by brightness rather than hue[5]. Such linguistic differences highlight how each culture may emphasise certain conceptual distinctions. An ontology that includes colour concepts needs to account for these variations, for example by representing that gorm and glas do not map exactly to English “blue” and “green.” This kind of variation demonstrates how human conceptual structures encoded in language can diverge, posing challenges for creating a shared, culture-agnostic ontology.
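
One way to record such an inexact correspondence is with SKOS mapping properties, using skos:closeMatch rather than skos:exactMatch. The sketch below uses Python with rdflib; the concept scheme IRIs are illustrative and the mappings deliberately approximate.

```python
# A sketch of an inexact cross-lingual colour mapping using SKOS (rdflib).
# The concept scheme IRIs are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

GA = Namespace("http://example.org/colour/ga#")  # Irish Gaelic concepts (illustrative)
EN = Namespace("http://example.org/colour/en#")  # English concepts (illustrative)
g = Graph()
g.bind("skos", SKOS)

for concept, label, lang in [(GA.gorm, "gorm", "ga"), (GA.glas, "glas", "ga"),
                             (EN.blue, "blue", "en"), (EN.green, "green", "en")]:
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(label, lang=lang)))

# closeMatch rather than exactMatch: gorm and glas cut across blue/green,
# being distinguished by brightness rather than hue
g.add((GA.gorm, SKOS.closeMatch, EN.blue))
g.add((GA.gorm, SKOS.closeMatch, EN.green))
g.add((GA.glas, SKOS.closeMatch, EN.blue))
g.add((GA.glas, SKOS.closeMatch, EN.green))

print(g.serialize(format="turtle"))
```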

The above examples underscore that what we consider a “concept” is often culturally bound. Ontologists (the knowledge engineers designing ontologies, not to be confused with otologists) must decide what level of granularity and differentiation to model. Those decisions can be influenced by the designers’ own background. Are certain distinctions collapsed because the designers’ language has a single word? Are certain relationships assumed universal (like the notion of a single spouse or the concept of a “friend”) when in fact other cultures conceptualise them differently? These questions point to the subtle ways bias can enter ontology design through cultural and linguistic assumptions.

Ontologies as Cultural Artifacts

Ontologies are typically engineered by humans in a particular context and as such, they carry implicit choices about how to model reality. Recent studies confirm that ontologies, even though formal and logical, embody subjective choices and can introduce bias[2]. Keet (2021) identifies multiple sources of such bias, including linguistic bias (favouring terms from a particular language), socio-cultural bias (embedding assumptions about social structures) and others like philosophical or political bias[6]. In other words, an ontology is a product of design decisions: which concepts to include or exclude, how to define relationships and how granular to make the categories. Each decision may reflect the ontology developers’ cultural background or purpose.

For example, an ontology designer may choose a single relationship like “spouse” to link Person entities, implicitly assuming monogamy as a norm. This socio-cultural bias would fail to model polygamous relationships prevalent in some cultures[7]. Similarly, if an ontology in a geopolitical domain includes a class “TerroristOrganisation,” it may encode a particular political viewpoint: one group’s “terrorist” could be another group’s “freedom fighter.” As noted in one survey, classifying a person or group as terrorist versus protestor is not a neutral technical choice; it reflects political or ideological stances that can become baked into the ontology[7]. These examples illustrate that ontologies can inadvertently propagate cultural perspectives under the guise of technical definitions.
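
The spouse example can be made concrete. In OWL, declaring a spouse property functional is exactly the kind of quiet design decision that hard-codes monogamy. The sketch below uses Python with rdflib; the ex: namespace is illustrative and this is one possible encoding, not a recommendation.

```python
# A sketch of a modelling decision that quietly encodes monogamy (rdflib).
# The ex: namespace is illustrative; one possible encoding, not a recommendation.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/family#")
g = Graph()
g.bind("owl", OWL)
g.bind("ex", EX)

g.add((EX.spouse, RDF.type, OWL.ObjectProperty))
g.add((EX.spouse, RDF.type, OWL.SymmetricProperty))
# Functional = "at most one spouse": a socio-cultural assumption, not a logical necessity.
# An OWL reasoner will treat two distinct spouses of the same person as an inconsistency
# (or merge them via owl:sameAs), so the assumption propagates into inference.
g.add((EX.spouse, RDF.type, OWL.FunctionalProperty))

# A culturally broader model would omit the functional axiom, or confine such
# cardinality constraints to a context-specific extension module.
print(g.serialize(format="turtle"))
```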

The linguistic medium of ontology development (often English for most widely used ontologies) can also introduce bias. Terminology chosen for classes and properties might align with English or Western concepts, potentially marginalising concepts that do not translate neatly. A linguistic bias occurs when the ontology’s vocabulary and structure favour a particular language’s way of thinking about the domain[6]. Even the choice of labels can influence how users of the ontology interpret the concept. For instance, an ontology for public health might have a class called “Disease” with subclasses, but if it was developed by Western experts, it might omit or under-emphasise disorders recognised in other traditional medical systems. The structure and hierarchy might follow Western biomedical taxonomies, which could misrepresent or overlook knowledge from other cultures.

Figure 2: A simplified flowchart from cultural context to ontology to AI. Human knowledge begins in a cultural and linguistic context, which influences how domain experts conceptualise their world. Those concepts are formalised by knowledge engineers into an ontological schema (e.g. classes and relations in OWL/RDF). Finally, the ontology is deployed in an AI or information system to enable reasoning or data integration. At each stage, there is potential for bias introduction[2]. For example, the socio-cultural norms of experts (first stage) determine concept boundaries, and any oversight or bias at that stage becomes formalised in the ontology (second stage). The AI system (third stage) will then strictly adhere to the ontology’s model of reality, meaning any cultural blind spots or biases in the ontology can directly affect system behaviour. Recognising this pipeline helps AI practitioners appreciate why ontology design must be approached with cultural awareness.

Implications for AI and Knowledge Engineering

Why do these cultural and linguistic biases in ontologies matter for AI? Because ontologies often inform how AI systems reason about data, biases in the ontology can lead to biased or incomplete outcomes. In knowledge graphs and semantic AI, an ontology defines the allowed types of entities and relations and thus shapes what can be represented or inferred[2]. If an ontology encodes a skewed view of the world, an AI using it may systematically misinterpret or ignore certain information.

One immediate implication is bias in data annotation and retrieval. If an ontology lacks a concept or distinction, data about that concept may be misclassified or lost. A striking real-world example arose during the COVID-19 pandemic: one ontology for COVID-19 research (COVoc) included the term “male” but had no class or value for “female”[8]. This reflects a bias (whether intentional or not) aligned with a known issue in evidence-based medicine where male subjects/data are often taken as the default[8]. The consequence is that when researchers use COVoc to annotate scientific literature, queries about COVID-19 in women are hindered: the ontology literally lacks the category needed to tag or retrieve those studies. In this case, the ontological bias directly translates to a gender bias in information retrieval. An AI system performing literature triage or data integration on COVID-19 using that ontology would have a blind spot regarding female-related research. Only by expanding the ontology to include the missing concept (female, in this case) can the bias be corrected.
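
A gap like this can be caught, and patched, with a very simple audit over the vocabulary’s labels. The sketch below uses Python with rdflib; the IRIs, labels and the “expected pair” heuristic are hypothetical stand-ins, not the real COVoc.

```python
# A sketch of a crude completeness audit over a vocabulary's labels, then patching a gap.
# The IRIs and labels are hypothetical stand-ins, not the real COVoc.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/vocab#")
g = Graph()

# Simulate a vocabulary that only defines "male"
g.add((EX.Male, RDF.type, OWL.Class))
g.add((EX.Male, RDFS.label, Literal("male", lang="en")))

# Audit: we expect paired sex/gender concepts to co-occur
expected = {"male", "female"}
present = {str(label).lower() for label in g.objects(None, RDFS.label)}
missing = expected - present
print("Missing labels:", missing)          # -> {'female'}

# Patch the gap so annotated literature about women becomes retrievable
for term in missing:
    cls = EX[term.capitalize()]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(term, lang="en")))
```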

Ontology biases can also lead to reasoning errors or undesirable inferences. An example cited by Keet involves an ontology where “COVID-19 experimental drug in a clinical trial” was modelled as a subclass of “COVID-19 drug”. This optimistic bias (perhaps reflecting hope that an experimental substance is already considered a drug) led a reasoning system to infer that hydroxychloroquine, an experimental treatment at the time, was a confirmed COVID-19 drug[9]. The logically correct but factually misleading inference was caused by the ontology’s biased class relationship, not by the data[10]. This scenario shows how ontological bias can propagate into AI outputs, potentially with high-stakes consequences (e.g., misguiding clinicians or policymakers). Ensuring that ontologies are modelled with neutral, evidence-based relationships (and periodically audited for such biases) is critical when they interface with automated reasoning.
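
The pattern is easy to reproduce in miniature. The sketch below uses Python with rdflib and illustrative IRIs; a SPARQL property path stands in for a full OWL reasoner, but the effect is the same: valid logic over a biased model yields a misleading answer.

```python
# A sketch of how a biased subclass axiom produces a misleading "inference".
# Illustrative IRIs; a SPARQL property path stands in for a full reasoner here.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/covid#")
g = Graph()
g.bind("ex", EX)

# The contested modelling choice: experimental drugs asserted as a kind of COVID-19 drug
g.add((EX.ExperimentalCovidDrug, RDFS.subClassOf, EX.CovidDrug))
g.add((EX.hydroxychloroquine, RDF.type, EX.ExperimentalCovidDrug))

q = """
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/covid#>
SELECT ?drug WHERE { ?drug rdf:type/rdfs:subClassOf* ex:CovidDrug . }
"""
for (drug,) in g.query(q):
    print(drug)   # hydroxychloroquine appears as a "COVID-19 drug": valid logic, biased model
```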

Bias in ontologies also affects data coverage and fairness in knowledge graphs. Geller and Kollapally (2021) examined a standard medical ontology and found that ontological gaps led to under-reporting of race-specific incidents[11]. In other words, because certain race- or ethnicity-related concepts were missing or insufficiently detailed in the ontology, data about adverse events affecting those groups were not being captured and reported at the same rate. Once again, the structure of the ontology determined what the system “sees.” If a knowledge graph schema does not differentiate, say, between different population groups where relevant, it may mask health disparities, thus contributing to bias in the resulting analysis. The researchers were able to suggest countermeasures by analysing real incidents and augmenting the ontology with missing terminologies[11]. This underscores a key point: cultural competence in ontology design (e.g., acknowledging race, gender or other contextual factors appropriately) is necessary to build fair AI systems.

Beyond specific domains, even large general knowledge graphs like Wikidata and DBpedia inherit cultural biases from both their crowd-sourced data and their ontology schema. Studies have noted that Wikipedia (a source for DBpedia/Wikidata) is Eurocentric and predominantly edited by contributors from certain demographics, which skews the knowledge available[12][13]. The ontology layer can compound this through the way it classifies and filters information. For example, if an ontology only allows certain types of entities in a knowledge graph, those decisions can mirror a cultural perspective of what “matters.” In sum, professionals need to be aware that bias can enter at the ontology level just as much as in data or algorithms and that mitigating AI bias requires examining ontological assumptions too[14].

Case Studies: Ontology Projects and Cultural Context

To make these ideas concrete, let us look at a few ontology projects and how culture or language played a role in their design and use:

  • FOAF (Friend of a Friend): FOAF is an early Semantic Web ontology for describing people and their social networks. It includes classes like foaf:Person, foaf:Organization, foaf:Document and properties such as foaf:name, foaf:knows (to indicate one person knows another), and foaf:member (membership in groups)[15][16]. The simplicity of FOAF was a strength, but it also reflected the cultural context of its creators (early 2000s, English-speaking web enthusiasts). For example, FOAF’s use of a single property foaf:knows to represent the friend-of-a-friend connection assumes a rather generic notion of “knowing” someone. It does not distinguish family relationships, degrees of friendship, or social context, nuances that might be important in other cultures. An analysis of FOAF’s content using discursive semiotics concluded that bias is an inherent feature of such knowledge organisation systems, produced by the social and cultural context of their developers[1]. In other words, FOAF’s particular choice of concepts (Person, Agent, Document, etc.) and relationships mirrored the worldview of its (Western, tech-community) authors and certain aspects (like representing friendship) were stretched or simplified in ways that may not universally apply[17].

Figure 3: An example fragment of the FOAF ontology (graph view). This diagram shows foaf:Person nodes and their relations: Alice knows Bob (foaf:knows), and both are members of an organisation (foaf:member links to a foaf:Organization). Alice also “made” a document (foaf:made links to a foaf:Document). Such a model captures basic social and creative relationships. However, FOAF’s design choices reveal assumptions, e.g. treating knows as a universal relation and not modelling any further detail about the nature of the relationship. Cultural notions like hierarchical respect, kinship ties or context-specific acquaintance (think of honorific relationships in some cultures) are absent. As Gomes and Barros observe in their FOAF bias analysis, the ontology’s elements embody the social-cultural context of its creators and some concepts are stretched to fit (or omit nuance) due to those biases[18][17]. This example shows how even a lightweight ontology can carry cultural fingerprints.
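
For readers who want to experiment, here is a minimal sketch of the Figure 3 fragment using Python and rdflib. The http://example.org/ identifiers are hypothetical; the FOAF terms themselves (foaf:Person, foaf:knows, foaf:made) come straight from the vocabulary, and group membership is omitted for brevity.

```python
# A minimal sketch of the Figure 3 FOAF fragment using rdflib (pip install rdflib).
# The http://example.org/ identifiers are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

# Two people who know each other in the generic FOAF sense
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.bob, RDF.type, FOAF.Person))
g.add((EX.bob, FOAF.name, Literal("Bob")))
g.add((EX.alice, FOAF.knows, EX.bob))  # no nuance: kinship, hierarchy or context is not expressible

# A document Alice made
g.add((EX.report, RDF.type, FOAF.Document))
g.add((EX.alice, FOAF.made, EX.report))

print(g.serialize(format="turtle"))
```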

  • schema.org: This is a broad ontology (or more accurately a vocabulary) developed by major search engine companies for structuring data on the web (e.g., web pages annotating events, products, organisations, etc.). Schema.org has classes for many everyday things, Person, Organisation, Event, Product, Restaurant, and so forth, and is intended to be universal. Yet, the global reach of schema.org can mask the fact that its initial ontology reflected the priorities of its sponsors and the e-commerce-oriented, English-speaking web ecosystem[19]. For example, early versions of schema.org had detailed schemas for local businesses, reviews and recipes (relevant to search engine users in Western contexts) but far less coverage of concepts important in other cultural contexts (such as Indigenous knowledge systems or non-Western art forms). Over time, schema.org’s community has added extensions for healthcare, bibliographic info and more, which has improved its cultural breadth. Nonetheless, designing schema.org required choices such as which types of Place of Worship to include; it explicitly has types for Church, Mosque, Hindu Temple, Synagogue, etc., acknowledging multiple religions. This inclusion is welcome, but it also highlights how concept selection can reflect cultural awareness or blind spots. If some culture’s important institution type is missing, that data is harder to mark up on the web. Thus, schema.org illustrates the balancing act of creating a shared ontology: it must be general enough to cover diverse cultures’ concepts, and this often means continuous revision as awareness grows. The key takeaway for practitioners is that when using or extending a large ontology like schema.org, one should remain cognisant of whose worldview is embedded in its taxonomy and whether additional terms are needed to localise it (a small localisation sketch appears after this list).

  • Healthcare Ontologies (e.g., SNOMED CT, ICD): Medical knowledge representation has its own challenges with cultural and linguistic variation. Clinical ontologies like SNOMED CT and classification systems like the WHO’s ICD (International Classification of Diseases) strive to be global standards, yet medicine is not free from cultural context. One example is the recognition of certain conditions or concepts in one culture and not in another. Culture-bound syndromes (like certain folk illnesses) historically were not well represented in international ontologies, reflecting a bias towards Western biomedical concepts. Recent efforts have tried to bridge this gap; for instance, ICD-11 includes a chapter on traditional medicine originating from Chinese medicine, integrating concepts like Qi imbalance that were absent in earlier editions. This inclusion was controversial in some circles, but it serves as an attempt to widen the ontology to a broader cultural frame. Another issue is how social determinants of health (like race, ethnicity, socio-economic context) are represented. If an ontology does not encode those factors, health disparities can be overlooked. As mentioned, Geller & Kollapally’s work in 2021 found that a standard medical ontology underreported incidents for certain racial groups due to missing terms[11]. This has led to initiatives to enrich ontologies with more granular demographic and cultural context data. For ontology engineers in healthcare, the lesson is clear: engage diverse experts and examine whether the ontology’s structure inadvertently centres one population’s experience. Multilingual support is also crucial: terms should be translatable, and some languages might require new concepts. The large Unified Medical Language System (UMLS) addresses this by linking synonyms across languages but even structuring those mappings requires cultural sensitivity to ensure, for example, that a concept like “depression” in English properly links to the concept as understood in various languages and healthcare systems.

  • Defence and Intelligence Ontologies: In the defence domain, ontologies are used for information sharing (e.g., intelligence analysis, situational awareness). These ontologies can be particularly prone to political and cultural bias. A counter-terrorism ontology, for instance, might classify groups or actions as “terrorist” based on the defining nation’s perspective[20]. Keet notes that terrorism ontologies provide ample material for political views to creep in[20]. In practice, this could mean an ontology that labels certain organisations as TerroristGroup, a designation that might not be globally agreed upon. Similarly, military ontologies might have concepts like “EnemyCombatant” which embed the creating nation’s viewpoint about legitimacy. If shared in coalition or international contexts, such ontologies may conflict with allies’ terminologies or neutral terms (what one military calls insurgents, another might call local militia). Even seemingly straightforward terms like rank or unit can have different structures in different countries’ military culture (e.g., the concept of a battalion or the role of a colonel can vary). This domain illustrates how ontologies should be handled carefully when crossing cultural or national boundaries. One approach is to develop a core, abstract ontology (e.g., for command & control) and allow localisation or extensions for specific doctrines. The NATO Architecture Framework, for example, has tried to standardise some terms, but implementation often reveals biases. The overall point for defence ontologies is that one group’s ontology can encode assumptions that others do not share, so interoperability requires recognising and reconciling those differences (often via alignment mappings or adopting more neutral descriptors).
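
As an illustration of such an alignment mapping, the sketch below leaves two nations’ terminologies untouched and records the correspondence in a separate alignment graph. It uses Python with rdflib; all IRIs, and the choice of skos:closeMatch versus skos:exactMatch, are illustrative.

```python
# A sketch of reconciling two nations' terms with an explicit alignment graph
# rather than editing either source ontology. All IRIs are illustrative.
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

NATION_A = Namespace("http://example.org/nationA#")
NATION_B = Namespace("http://example.org/nationB#")
alignment = Graph()
alignment.bind("skos", SKOS)

# Related but not interchangeable concepts: record a close, not exact, match
alignment.add((NATION_A.Insurgent, SKOS.closeMatch, NATION_B.LocalMilitia))
# Genuinely equivalent organisational concepts can be mapped exactly
alignment.add((NATION_A.Battalion, SKOS.exactMatch, NATION_B.Battalion))

print(alignment.serialize(format="turtle"))
```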
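
Returning to the schema.org case above, one localisation pattern is to pair the closest shared type with a locally defined extension class, so the data remains discoverable through the common vocabulary. The sketch below uses Python with rdflib; the ex: extension namespace is hypothetical, and whether a particular subtype already exists in the current schema.org release should of course be checked before minting a local term.

```python
# A sketch of localising schema.org markup: when a culturally specific type is
# missing, pair the closest schema.org type with a local extension class.
# The ex: extension namespace and the chosen subtype are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("http://example.org/local-extension#")
g = Graph()
g.bind("schema", SCHEMA)
g.bind("ex", EX)

# Local extension: a subtype assumed missing from the shared vocabulary
g.add((EX.Gurdwara, RDFS.subClassOf, SCHEMA.PlaceOfWorship))
g.add((EX.Gurdwara, RDFS.label, Literal("Gurdwara", lang="en")))

# Mark up an instance with both the shared and the local type
g.add((EX.goldenTemple, RDF.type, SCHEMA.PlaceOfWorship))
g.add((EX.goldenTemple, RDF.type, EX.Gurdwara))
g.add((EX.goldenTemple, SCHEMA.name, Literal("Harmandir Sahib")))

print(g.serialize(format="turtle"))
```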

Addressing Cultural and Linguistic Bias in Ontology Design

Understanding that ontologies can carry cultural biases is the first step; the next step is figuring out how to mitigate these biases in practice. Here are some strategies and considerations for building more culturally aware ontologies in AI and information systems:

  • Stakeholder Diversity: Involve domain experts and stakeholders from different cultural and linguistic backgrounds during ontology development. Broad involvement helps surface assumptions that a homogenous team might overlook[21][6]. For example, if designing a global e-commerce ontology, include experts from various regions to ensure that retail concepts (like payment methods, business types, product categories) are not solely defined by one country’s market. Diverse input can highlight missing concepts or the need for more inclusive relationships.

  • Explicit Documentation of Scope and Bias: Ontology engineers should document the design decisions, including any known bias or limitation of scope. Declaring the philosophical or cultural stance of an ontology (e.g., “this ontology follows Western scientific taxonomy for diseases” or “this model assumes monogamous family units”) does not eliminate bias but it provides transparency[21]. Transparent documentation allows others to critique, extend or adapt the ontology for different contexts. In academic and scientific ontologies, there is a push towards “literate ontology engineering”, where the rationale is recorded alongside the axioms[22][23]; this can reveal where subjective choices were made (a brief annotation sketch appears after this list).

  • Use Foundational or Upper Ontologies Cautiously: Upper ontologies (like BFO, DOLCE) provide general categories (e.g., Object, Event, Agent) intended to be domain neutral. They carry philosophical commitments (e.g., whether to treat “process” as a fundamental kind) that might not obviously be cultural, but they do impose a certain worldview of categorisation[24]. Be mindful that even foundational choices (such as time and space representation) might align more with some cultural scientific traditions than others. If minimising bias is a goal, choose upper ontologies or standards that are widely vetted and avoid over-committing to a single metaphysics if not necessary. Sometimes a hybrid approach works: use a core ontology for shared concepts but allow localised subontologies for culture-specific ones (ensuring they map appropriately).

  • Linguistic Robustness: Design the ontology with multilingual labels and definitions. A truly international ontology should not have concept definitions that only make sense in one language. Where possible, include alternative labels (skos:altLabel in RDF, for example) in different languages and check if the concept’s meaning holds across those translations. If not, you may need multiple concepts or a more complex structure. Also be aware of terms that have no equivalent in other languages; this could signal a culture-bound concept that might need explanation or might not belong in a “core” ontology layer without qualification. Techniques from terminology science, like cross-lingual concept mapping, are useful here to ensure that ontology classes correspond to well-defined concepts in each target language community (a multilingual labelling sketch appears after this list).

  • Bias Auditing and Revision: Just as data and algorithms undergo bias audits, ontologies can too. The eight-category bias framework by Keet[6] or similar taxonomies can serve as a checklist to examine an ontology’s content. For instance, review the ontology for any linguistic bias (are all terms English-centric? do they favour a certain dialect or technical jargon?), socio-cultural bias (do relationships or constraints reflect one social system? e.g., only allowing “spouse” implies a certain family model), political bias (are any loaded terms included?), economic bias (does the ontology categorisation serve an agenda, e.g., classifying a borderline condition as a “disease” because of insurance or funding implications[20]?). By identifying these, ontology curators can discuss and possibly refactor the ontology. In some cases, the solution might be to parametrise or contextualise the ontology. For example, instead of a single global class “TerroristOrganisation,” one could have a neutral base class “MilitantOrganisation” and then allow context-specific tags or subclasses depending on jurisdiction, making the contentious classification explicit and adjustable (a refactoring sketch appears after this list).

  • Empirical Validation: Incorporate empirical studies from cognitive science and ethnography to validate the ontology’s conceptual model. If you are building an ontology of emotions, for example, consult cross-cultural psychology research to see how different cultures categorise emotions. If evidence shows a concept is not universal, ensure the ontology does not rigidly hard-code a single view. Empirical studies on linguistic categorisation (such as colour term research or kinship systems) can inform ontologists about which distinctions are likely important. As a practical measure, testing the ontology with datasets or use cases from different regions can reveal mismatches. If an ontology consistently fails to categorise or describe non-Western data, that is a sign of cultural bias in the model.

  • Maintainability and Extensions: Finally, treat an ontology as a living artifact. As knowledge or social values evolve, ontologies should be updated. For example, the concept of gender in many older ontologies was modelled as a binary property; modern understanding (and requirements for inclusivity) mean that ontologies need to evolve (e.g., allowing non-binary gender categories or using a more nuanced model of gender as a concept)[25]. Designing for extensibility, perhaps through modules or layers (a module-merging sketch appears after this list), can make it easier to incorporate new cultural perspectives. A domain ontology might have a core that is as neutral as possible and extension modules for particular cultural contexts. By modularising, one can keep the core consistent while allowing variation and an AI system can switch or combine modules as appropriate for its context.
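
For the documentation strategy above, a minimal sketch of ontology-level annotations follows, using Python with rdflib; the ontology IRI and the wording of the annotations are illustrative.

```python
# A sketch of recording scope and known bias as ontology-level annotations
# so downstream users can see them. IRIs and wording are illustrative.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, OWL, RDF, RDFS

onto = URIRef("http://example.org/family-ontology")
g = Graph()
g.add((onto, RDF.type, OWL.Ontology))
g.add((onto, DCTERMS.description, Literal(
    "Models family structures; assumes monogamous spousal relationships.", lang="en")))
g.add((onto, RDFS.comment, Literal(
    "Known limitation: kinship terms follow English usage; review before reuse "
    "in other linguistic contexts.", lang="en")))

print(g.serialize(format="turtle"))
```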
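
For the linguistic robustness strategy, the sketch below attaches language-tagged skos:prefLabel and skos:altLabel values and flags concepts that lack a label in a required language. It uses Python with rdflib; the concept, the translations and the required-language set are illustrative only.

```python
# A sketch of multilingual labelling plus a simple label-coverage check (rdflib).
# The concept, translations and required languages are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/health#")
g = Graph()
g.bind("skos", SKOS)

g.add((EX.Depression, RDF.type, SKOS.Concept))
g.add((EX.Depression, SKOS.prefLabel, Literal("depression", lang="en")))
g.add((EX.Depression, SKOS.altLabel, Literal("dépression", lang="fr")))
g.add((EX.Depression, SKOS.altLabel, Literal("Depression", lang="de")))

# Flag concepts lacking a label in a required language
required = {"en", "fr", "de", "ga"}
for concept in g.subjects(RDF.type, SKOS.Concept):
    langs = {lit.language for lit in g.objects(concept, SKOS.prefLabel)} | \
            {lit.language for lit in g.objects(concept, SKOS.altLabel)}
    missing = required - langs
    if missing:
        print(f"{concept} is missing labels for: {sorted(missing)}")
```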
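
For the bias auditing strategy, here is one possible shape for the “MilitantOrganisation” refactoring described above: a neutral base class in the shared core, with the contentious designation expressed as an explicit, jurisdiction-specific statement. Python with rdflib; all IRIs are illustrative.

```python
# A sketch of replacing a loaded class with a neutral base class plus explicit,
# context-specific designations. All IRIs are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/conflict#")
g = Graph()
g.bind("ex", EX)

# Neutral base class in the shared core
g.add((EX.MilitantOrganisation, RDF.type, RDFS.Class))

# The contentious classification becomes an explicit, jurisdiction-specific statement
# rather than a single global "TerroristOrganisation" class
g.add((EX.designatedTerroristBy, RDF.type, RDF.Property))
g.add((EX.someGroup, RDF.type, EX.MilitantOrganisation))
g.add((EX.someGroup, EX.designatedTerroristBy, Literal("Jurisdiction X")))

print(g.serialize(format="turtle"))
```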
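
Finally, for the modularity strategy, the sketch below merges a culture-neutral core module with a locale-specific extension at load time, reusing the kinship example from earlier. The Turtle snippets are inlined to keep it self-contained; in practice the modules would live in separate files, or be pulled in via owl:imports.

```python
# A sketch of a culture-neutral core module merged with a locale-specific extension.
# The inline Turtle stands in for separate module files (or owl:imports).
from rdflib import Graph

CORE = """
@prefix ex:   <http://example.org/family#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:relativeOf a rdf:Property .
ex:uncleOf rdfs:subPropertyOf ex:relativeOf .
"""

EXTENSION = """
@prefix ex:   <http://example.org/family#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:maternalUncleOf rdfs:subPropertyOf ex:uncleOf .
ex:paternalUncleOf rdfs:subPropertyOf ex:uncleOf .
"""

core = Graph().parse(data=CORE, format="turtle")
extension = Graph().parse(data=EXTENSION, format="turtle")

# Combine the modules into the graph an AI system would actually load
combined = Graph()
for triple in core:
    combined.add(triple)
for triple in extension:
    combined.add(triple)

print(f"core: {len(core)}, extension: {len(extension)}, combined: {len(combined)} triples")
```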

Conclusion

Ontologies form a bridge between human conceptual structures and machine reasoning. As such, they inherit all the beauty and pitfalls of human knowledge, including the fact that knowledge is shaped by culture and language. For AI and information systems professionals, recognising this is crucial. An ontology is not just a technical schema; it is a model of the world and which world it models depends on who built it. Cultural assumptions can influence which concepts are deemed fundamental, how relationships are defined and what distinctions are considered relevant. If unchecked, these assumptions become biases in our AI systems, subtly skewing outcomes or marginalising certain viewpoints.
The interplay of ontologies, culture and language is evident in examples ranging from FOAF’s social network model to medical terminologies and beyond. The challenge moving forward is to develop ontologies that are both useful and mindful of diversity. This means engaging a variety of voices in ontology engineering, drawing on interdisciplinary research about how different communities structure knowledge and being prepared to iterate on our models. When done well, ontology design can benefit from cultural insights: for instance, improving health ontologies by incorporating traditional knowledge or enhancing social ontologies by learning from non-Western social categorisation. In a sense, each ontology project is an opportunity for cross-cultural dialogue: making implicit assumptions explicit and finding a shared representation that is richer for accommodating multiple perspectives.

By approaching ontology development with cultural intelligence and a rigorous methodology for bias minimisation[21][6], we can create knowledge engineering artefacts that not only power AI applications but do so in a way that respects the complexity of human experience. For professionals in information systems, this translates to more robust, fair and globally applicable systems, ones that truly understand the world in its many dimensions, because the ontologies underpinning them have captured knowledge from a broad and nuanced canvas. In summary, acknowledging and addressing the influence of culture and language on ontologies is not a mere academic exercise; it is a practical imperative for building AI and information systems that we can trust and that everyone can benefit from, regardless of language or cultural background.

References

Gomes, D.L. & Barros, T.H.B. (2020). “The Bias in Ontologies: An Analysis of the FOAF Ontology.” In Knowledge Organisation at the Interface: Proceedings of the Sixteenth International ISKO Conference, pp. 236–244.[1][17]
Keet, C.M. (2021). “Bias in ontologies?” (blog post). Keet Blog, 26 Aug 2021. Explores sources of bias in ontologies and examples including COVID-19 ontologies[20][8].
Keet, C.M. (2021). “An exploration into cognitive bias in ontologies.” In Proceedings of CAOS’21 (CEUR Workshop). Identifies eight types of bias (philosophical, purpose, scientific, granularity, linguistic, socio-cultural, political, economic) in ontology content[6].
Janowicz, K. et al. (2018). “Debiasing knowledge graphs: Why female presidents are not like female popes.” In ISWC 2018 Posters & Demos. (Mentioned in Keet 2021 blog as addressing bias in knowledge graphs).
Paparidis, N. & Kotis, K. (2021). “Whether one attributes a person’s skin color or not in an ontology can determine emergence of bias.” (Referenced in ACL 2022 survey)[26].
Geller, J. & Kollapally, N.M. (2021). “Detecting, Reporting and Alleviating Racial Biases in Standardised Medical Terminologies and Ontologies.” In Proc. of IEEE International Conference on Big Data, 2021. (Finding that gaps in medical ontologies caused under-reporting of race-specific incidents)[11].
Sherlyn (2017). “Colours in Irish” (graphic, CC-BY-SA 4.0). Wikimedia Commons. Illustrates Irish Gaelic color terms in a colour wheel[5].
Wikimedia Foundation. “Basic Color Terms – Linguistic relativity and color naming debate.” Wikipedia (accessed 2025). Describes Berlin & Kay’s findings and critiques, including Gaelic and Himba colour categorisations[5].
Brellas, D. & Martines, V. (2020). “Kinship Terms.” In Shared Voices: Introduction to Cultural Anthropology. Example of cross-cultural kinship term differences (Croatia vs US)[4].
Keet, C.M. (2009). “Dirty wars, databases, and indices.” Peace & Conflict Review 4(1). (Discusses political bias in data/ontologies, cited in Keet’s blog).
Maria A. Matienso (2022). Blog posts citing Gomes & Barros (2020) and Keet (2021)[1].
Kraft, A. & Usbeck, R. (2022). “The Lifecycle of ‘Facts’: A Survey of Social Bias in Knowledge Graphs.” AACL-IJCNLP 2022. Provides an overview of how bias enters KGs, including the ontology design stage[2][7].
Britannica (n.d.). “Kinship – Terminology.” (by J. Carsten). On how language shapes kinship categories[3].
Schema.org (n.d.). Documentation. (Demonstrates breadth of schema.org vocabulary and cultural extensions).
World Health Organisation (2019). ICD-11. (Includes Traditional Medicine Chapter).
Brickley, D. & Miller, L. (2014). FOAF Vocabulary Specification 0.99. (FOAF ontology documentation)[15].


[1] [17] [18] 2022 | María A. Matienso https://matienso.org/2022/
[2] [7] [11] [12] [13] [14] [25] [26] The Lifecycle of "Facts": A Survey of Social Bias in Knowledge Graphs https://aclanthology.org/2022.aacl-main.49.pdf
[3] Kinship - Terminology, Descent, Relationships | Britannica https://www.britannica.com/topic/kinship/Kinship-terminology
[4] 8.3 Kinship Terms – Shared Voices: An Introduction to Cultural Anthropology [Revised Edition] https://rotel.pressbooks.pub/culturalanthropology/chapter/kinship-terms/
[5] Basic Color Terms – Wikipedia https://en.wikipedia.org/wiki/Basic_Color_Terms
[6] [21] [22] [23] [24] Minimally-Biased Scientific Ontologies https://www.emergentmind.com/topics/minimally-biased-scientific-ontologies
[8] [9] [10] [20] Bias in ontologies? – Keet blog https://keet.wordpress.com/2021/08/26/bias-in-ontologies/
[15] [16] FOAF – Wikipedia https://en.wikipedia.org/wiki/FOAF
[19] One schema to rule them all: How Schema.org models the world of ... https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.24744
