TechWhoIsWho
Vol. 01 · Editorial Standards · v1.1
Editorial Standards · v1.1 · April 2026

How TechWhoIsWho is built.

The complete editorial standards behind the canonical index of the people building AI — the definitions, sourcing discipline, taxonomy, lifecycle, dispute resolution, and governance that keep the record trustworthy.

Twelve Categories · Twenty-One Subcategories · Four Lifecycle States · Zero Paid Inclusion
Version History
v1.1 · APR 2026 · Added operational definitions, dispute resolution, conflict-of-interest policy, source preservation cadence, notification exception handling, and edge-case guidance.
v1.0 · APR 2026 · Initial publication.
§01

Purpose and Promise.

TechWhoIsWho is a canonical reference platform for the people building artificial intelligence. It exists because the AI talent economy is fragmented across platforms that were never built for it — LinkedIn surfaces professional history but not contribution, GitHub surfaces code but not context, arXiv surfaces papers but not people. No single source answers the question most worth asking: who are the people actually building the future of AI, and where is their work?

This page describes how the platform answers that question. It is a masthead document. It defines the editorial standards, the taxonomy, the lifecycle, the dispute process, and the governance that keep the record trustworthy. It is also a promise — to the builders we profile, to the readers who rely on the index, and to the broader community on whose trust the platform's legitimacy depends.

These standards exist to protect three outcomes, in order: the dataset is trustworthy; the process is fair to subjects; editorial judgment remains independent from commercial influence. Every clause below is written in service of one of those three.

We open with Africa. The continent's AI ecosystem is real, growing, and systematically underindexed by platforms built elsewhere. The researchers, founders, and engineers building from Lagos, Nairobi, Cape Town, Tunis, Accra, and across the diaspora deserve a canonical record — and nobody is better positioned to build it than a platform rooted in the continent itself. From that foundation, we extend outward to a global reference layer of researchers and operators shaping the field.

These standards are how we make that work honest.

§02

Definitions.

Several phrases throughout this document do load-bearing work — they determine who is included, what counts as a source, and when a change is material enough to version the standards. They are defined here, formally, so the work they do is legible to editors, subjects, and readers alike. These definitions are normative.

Builder
A person with documented, substantive contribution to AI systems, AI research, AI infrastructure, or AI deployment. The platform's subjects are builders, defined this way, and not commentators, analysts, or observers whose output is primarily about AI rather than the production of AI.
Documented contribution
Public evidence of work output: a peer-reviewed paper, shipped code, a launched product, a documented role at a recognized organization, a benchmark, a dataset release, or equivalent material the subject is verifiably responsible for.
Recognized venue
A peer-reviewed conference or journal of record in the field (NeurIPS, ICML, ICLR, ACL, EMNLP, and comparable), a research repository used by the field with credible institutional affiliations (arXiv, where accompanied by institutional authorship), or an institutionally validated publication channel.
Meaningful adoption
For open-source work, evidence that the project is used by others — measured in context of the project's age and domain. Indicative thresholds, any one of which typically qualifies: three hundred or more GitHub stars; five thousand or more monthly package downloads; verified production use by two or more external organizations; or citation by three or more independent technical sources. Editors may qualify projects below these thresholds where domain context warrants it, and may decline projects above them where the work is derivative or unmaintained.
Credible organization
An institution with a clear AI mandate and public governance: a university research lab, a research institute with published work, a recognized conference or professional body, an established grantmaker (NIH, ERC, NSF, Wellcome, IDRC, and comparable), or an established company program with public outputs. Ad-hoc initiatives and unvetted labs do not qualify.
Primary source
A source controlled by the subject or the subject's organization — a personal website, a verified company page, a published CV, an official university profile, an authored blog post. The most authoritative source type for biographical claims.
Secondary source
Independent reporting or interviews from reputable outlets — established publications, credentialed journalists, recognized industry newsletters, documented talks and conferences. Used to corroborate primary sources and to carry claims primary sources will not.
Scholarly source
Peer-reviewed papers, conference proceedings, academic profiles with institutional verification (Google Scholar affiliations, ORCID), and citation records. The authoritative source type for research claims.
Material change
A change to these standards that can alter who is eligible for inclusion, how a category is assigned, how a profile is ranked, how visibility states work, or what rights a subject holds. Material changes trigger a version increment and re-review of affected profiles.
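The indicative thresholds in the "meaningful adoption" definition can be expressed as a simple qualification check. This is an illustrative sketch, not platform code — the `AdoptionSignals` structure and function names are assumptions, and the editorial overrides described above (qualifying below threshold, declining above it) remain human judgment:

```python
# Hypothetical sketch of the "meaningful adoption" thresholds.
# Any one indicative signal typically qualifies; editors may still
# override in either direction based on domain context.
from dataclasses import dataclass

@dataclass
class AdoptionSignals:
    github_stars: int = 0
    monthly_downloads: int = 0
    external_production_users: int = 0
    independent_citations: int = 0

def meets_adoption_threshold(s: AdoptionSignals) -> bool:
    return (
        s.github_stars >= 300
        or s.monthly_downloads >= 5_000
        or s.external_production_users >= 2
        or s.independent_citations >= 3
    )
```

The disjunction mirrors the definition's "any one of which typically qualifies" phrasing: a young project with two verified production users qualifies even with few stars.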
§03

Who is included.

A profile on TechWhoIsWho belongs to someone who has shipped real work in artificial intelligence. We do not include commentators, influencers, or educators without demonstrated building. We do not include subjects whose public presence is primarily about AI without any underlying contribution. The platform is a record of builders, specifically — and the definition of builder in §02 is the standard being applied.

A profile is included when the subject meets at least one of the platform's objective inclusion criteria, each verifiable through public sources.

For the African Edition specifically, inclusion is additionally open to active community organizers, educators, and ecosystem builders whose public contribution to African AI is documented: Masakhane core contributors, Deep Learning Indaba organizers, Zindi top-ranked competitors, Data Science Nigeria bootcamp leads, and similar roles with specific, public contribution evidence.

Inclusion is earned through real contribution — never purchased.

We do not include: commentators and newsletter writers whose output is about AI rather than the production of AI; public figures whose association with AI is through ownership or investment rather than technical contribution; subjects whose only documented AI activity is a single talk, a media interview, or social posts; and founders of self-proclaimed "AI" companies for which no verifiable AI system can be documented.

Declined-eligible profiles

Occasionally a subject meets the inclusion criteria but the editor concludes inclusion would weaken the dataset — due to insufficient verification, identity ambiguity, conflict with platform scope, or another specific quality risk. When this occurs, the decision is not quietly discretionary. The editor records, in the internal decline log: which quality risk applies, which evidence was found insufficient, and a re-review date no more than 90 days out. A second editor reviews every decline before it stands. This protects against silent bias in the inclusion process and ensures deferred profiles are genuinely reconsidered, not forgotten.

§04

How we source.

Every profile on TechWhoIsWho is built from public sources. We do not use private data, proprietary databases, or information shared in confidence. If a fact about a subject cannot be linked to a publicly accessible source, it does not appear in the profile.

Every profile has at minimum two linked sources. Most profiles have three to five. Sources are classified as primary, secondary, or scholarly — the three types defined in §02 — and at least one source per profile must be primary or scholarly. Every non-trivial claim is cross-checked against at least one additional source before publication.

Our primary source waterfall for the African Edition draws from: Masakhane's GitHub organization and contributor histories; Deep Learning Indaba alumni rosters; Zindi competitor leaderboards; AI4D Africa program pages; the team pages of Lelapa AI, InstaDeep, Awarri, Qhala, and other African AI companies; Google Scholar author pages of African researchers; arXiv author searches cross-referenced with African institutional affiliations; and YC, Techstars, and Antler portfolio pages filtered for AI and African founder affiliations.

For the global reference layer, we draw from Wikipedia and official company pages for established figures, Crunchbase for founder and company records, and GitHub and paper authorship records for technical contributors. Every source is cross-checked against at least one other before the profile is written.

Facts we will not include

Personal contact information is never included in any profile field. This means home addresses, personal phone numbers, and personal email addresses other than a subject-provided correspondence address used only for correction requests. Age and date of birth are not collected unless publicly and prominently shared by the subject. Family members, partners, and personal relationships are excluded. Sensitive personal attributes are excluded unless explicitly and prominently self-disclosed by the subject and materially relevant to the profile.

Claims that we cannot source are simply omitted. If a subject's undergraduate institution is not publicly documented, the profile does not speculate. If a company's founding date is ambiguous, we state only what the sources agree on.

Source preservation

Sources decay. Links rot; companies rebrand; personal pages disappear. The platform treats source durability as an operational responsibility, not a hope. For every source, the editorial record preserves the source URL, the source type, the access date at which the source was used, and a last-verified date updated on every review. Where legally permissible and technically feasible, an archival snapshot reference is retained alongside the original URL.

Verification cadence

Public profiles are re-verified at least every 180 days. High-velocity subjects — active founders, research leads, and others whose affiliations and outputs change rapidly — are re-verified every 90 days. Any broken or materially changed link is resolved or replaced within 14 days of discovery. The last-reviewed date shown on every public profile reflects the most recent completed verification.
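The per-source record and re-verification cadence described above can be sketched as follows. This is a minimal illustration under stated assumptions — the field names and the `SourceRecord` structure are hypothetical, not the platform's actual schema:

```python
# Illustrative sketch of the per-source editorial record and the
# re-verification cadence: 180-day default, 90 days for high-velocity
# subjects. Field names are assumptions, not platform code.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class SourceRecord:
    url: str
    source_type: str            # "primary" | "secondary" | "scholarly"
    accessed: date              # access date at which the source was used
    last_verified: date         # updated on every review
    archive_ref: Optional[str] = None  # archival snapshot, where retained

def reverification_due(last_verified: date, high_velocity: bool) -> date:
    return last_verified + timedelta(days=90 if high_velocity else 180)
```

A broken link discovered during review would then trigger the separate 14-day resolve-or-replace clock noted above.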

§05

How we write.

Profile copy is written in reference-book register. It is neutral, factual, and specific. The platform does not editorialize about importance. Every descriptive claim is either a direct citation of public fact or immediately followed by a source link.

What this means in practice:

Every descriptive claim is either a direct citation of public fact — or it is not written.

Profile length is disciplined. The short bio is one sentence, capped at 40 words, used in search results and as the SEO meta description. The long bio is two to three sentences, capped at 120 words, used on the individual profile page. The discipline is deliberate: it forces editorial prioritization and protects the reader from padding.

Structured works are where the substantive detail lives. Every profile includes at least one work — a paper, a project, a company, an open-source release, or a documented role — linked directly to its primary source. Most profiles have three to five. The structured works field is what elevates a profile from a listing to a reference.

§06

The taxonomy.

TechWhoIsWho classifies profiles through a controlled vocabulary of twelve top-level categories and twenty-one subcategories. Every profile is assigned at least one and at most three categories, with exactly one marked as primary. The taxonomy is published below in full — not as marketing, but as the actual operating reference used in editorial work.

A note on the architecture: two top-level categories — applied_ai and scientific_ai — require a subcategory. Three top-level categories — natural_language_processing, computer_vision, and ai_infrastructure — allow an optional subcategory where the subject's work is sharply specialized. The remaining seven categories are flat. The selective hierarchy keeps the taxonomy manageable while capturing meaningful distinctions where they exist.

01
Foundation Models
foundation_models
Pre-training, architecture, and scaling of general-purpose AI models — LLMs, multimodal foundation models, and vision-language models built from scratch or at frontier scale.
Subcategories
None at v1.
02
Natural Language Processing
natural_language_processing
Language tasks, language data, and language subsystems — the layer that uses or produces language representations but is not the pre-training of foundation models.
Optional Subcategories
machine translation; low-resource languages; speech and audio; information extraction.
03
Computer Vision
computer_vision
Image and video understanding, generation, and analysis — detection, segmentation, generative image models, 3D reconstruction, remote sensing.
Optional Subcategories
medical imaging; generative images; 3D and geometry; remote sensing.
04
Reinforcement Learning
reinforcement_learning
RL theory, deep RL, multi-agent systems, and RL methodology — the study of learning from interaction and feedback.
Subcategories
None at v1.
05
AI Safety & Interpretability
ai_safety_interpretability
Research into making AI systems behave as intended and into understanding how they work internally — alignment, evaluation methodology, mechanistic interpretability.
Subcategories
None at v1.
06
AI Infrastructure
ai_infrastructure
The systems, tools, and software that make AI run at scale — training infrastructure, inference serving, compilers, accelerator software, model compression.
Optional Subcategories
training systems; inference serving; compilers and hardware.
07
Robotics & Embodied AI
robotics_and_embodied_ai
Physical AI systems — manipulation, locomotion, embodied reasoning, and the perception-action loop for robots and embodied agents.
Subcategories
None at v1.
08
Scientific AI
scientific_ai
AI applied to fundamental questions in the natural sciences — biology, chemistry, physics, climate, materials. The work is about scientific discovery, not product deployment.
Required Subcategory
biology and proteins; chemistry and materials; physics and climate; drug discovery.
09
Applied AI
applied_ai
Integration of AI into a specific industry vertical or real-world domain. The focus is deployment, product, and impact — not the underlying methodology.
Required Subcategory
fintech; health; agriculture and climate; education; customer ops; logistics and mobility.
10
Open-Source AI Tooling
open_source_ai_tooling
Developer-facing frameworks, libraries, and platforms whose primary users are other AI practitioners — the tools that build other people's AI work.
Subcategories
None at v1.
11
Multimodal & Generative AI
multimodal_and_generative
Work that spans or integrates modalities — text, image, audio, video — or focuses on generative models that cross traditional single-modality categories.
Subcategories
None at v1.
12
Data & Evaluation
data_and_evaluation
The methodology of building AI datasets, benchmarks, and evaluation suites as a primary contribution — the scaffolding that makes AI research measurable.
Subcategories
None at v1.

How categories are applied

Each profile has exactly one primary category — the single best answer to "what is this person most known for?" — and up to two additional secondary categories. The primary is used for search ranking and category-page composition; secondaries capture genuine multi-domain work.

Editors apply the taxonomy under inclusion rules and, equally important, exclusion rules. A builder who fine-tunes an existing foundation model for a specific use case is tagged applied_ai with the relevant vertical subcategory, not foundation_models — because foundation-model work specifically means pre-training or frontier architecture research. A researcher who releases a dataset as the primary contribution of their work is tagged data_and_evaluation, not the research area the dataset supports. These exclusion rules are what prevent categories from drifting into catchalls over time.

If a third category feels forced, editors do not apply it. A profile with one or two strong tags is always better than one with three weakly-justified tags.
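The assignment rules above — one to three categories, exactly one primary, required subcategories for two categories, optional for three, none for the rest — form a small validation contract. A minimal sketch, using the published slugs but an assumed `CategoryTag` structure:

```python
# Illustrative validation of the category-assignment rules. The slugs
# match the published taxonomy; the data structure is an assumption.
from dataclasses import dataclass
from typing import List, Optional

REQUIRES_SUB = {"applied_ai", "scientific_ai"}
OPTIONAL_SUB = {"natural_language_processing", "computer_vision",
                "ai_infrastructure"}

@dataclass
class CategoryTag:
    slug: str
    primary: bool
    subcategory: Optional[str] = None

def validate_tags(tags: List[CategoryTag]) -> bool:
    if not 1 <= len(tags) <= 3:
        return False                      # at least one, at most three
    if sum(t.primary for t in tags) != 1:
        return False                      # exactly one primary
    for t in tags:
        if t.slug in REQUIRES_SUB and t.subcategory is None:
            return False                  # required subcategory missing
        if t.slug not in REQUIRES_SUB | OPTIONAL_SUB and t.subcategory:
            return False                  # flat category carries none
    return True
```

The "don't force a third tag" guidance is deliberately not encoded — it is editorial judgment, and the check above only enforces the structural invariants.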

Taxonomy governance

The taxonomy evolves under explicit rules. A new top-level category is added only when three or more profiles have been held in editorial review because they genuinely do not fit any existing category, and when the proposed category has clear inclusion and exclusion rules that do not overlap with existing ones. Subcategories are added more readily — when three profiles would benefit, a new subcategory is introduced and the minor version is incremented. Categories are retired only if, after twelve months, they have fewer than five profiles and an editorial review concludes they are not a leading indicator of emerging work.

As the editorial team grows beyond solo operation, the platform maintains quality controls on tagging consistency — sampling profiles for inter-editor agreement, monitoring recategorization rates, and tracking the frequency with which a forced third tag is applied. These controls begin as internal operational practice, but the platform will publish a periodic transparency summary of the measures that are mature enough to report responsibly. Where a measure is not yet statistically meaningful, the summary will say so explicitly rather than omit the category altogether.

§07

Profile lifecycle.

Every profile moves through a defined state machine. This is how editorial review, subject notification, and public launch are orchestrated. Only profiles in the public state appear on the site; profiles in every other state are not publicly visible.

01
Draft
Editor is writing the profile. Sources are being assembled, the bio is being drafted, works are being structured. Not visible anywhere publicly. A profile can sit in draft as long as it takes to reach the standard.
02
Notified
The profile is complete. The subject has been sent the full draft for a seventy-two-hour review window. Edits and opt-out requests are processed during this window. Still not publicly visible.
03
Public
The profile is live on the site and indexed by search engines. It appears in category pages and in platform search. The date the profile went public is shown on the profile itself.
04
Retired
The profile is no longer visible publicly. The route returns a clear "removed" signal rather than a generic missing-page. The database row is retained for audit, never as public content.

Transitions are one-directional except two: a profile in public may return to notified if a major rewrite requires subject re-review, and a profile in retired may return to draft if a subject later re-consents or circumstances change. A profile never moves directly from draft to public without passing through notified — the subject always sees the profile before the world does.
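The transition rules above can be written out as an explicit allow-list, which makes the "never draft to public" guarantee checkable. A sketch (illustrative, not platform code; the helper name is assumed):

```python
# The four-state lifecycle as an explicit transition table. Every edge
# follows the text above; anything not listed — notably draft -> public —
# is forbidden.
ALLOWED = {
    ("draft", "notified"),    # profile complete, sent for subject review
    ("notified", "public"),   # window elapsed or subject approved
    ("notified", "retired"),  # subject opted out before publication
    ("public", "retired"),    # removal request or editorial retirement
    ("public", "notified"),   # major rewrite requires subject re-review
    ("retired", "draft"),     # subject re-consents; work restarts
}

def can_transition(current: str, target: str) -> bool:
    return (current, target) in ALLOWED
```

Modeling the lifecycle as a closed set of edges, rather than ad-hoc flags, is what makes the audit log in later sections meaningful: every state change corresponds to exactly one named edge.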

§08

Subject notification.

Because TechWhoIsWho builds from public information, the legal standard does not strictly require prior consent for inclusion. We hold ourselves to a higher standard than the legal minimum. Before any profile moves from notified to public, the subject receives the full draft and a seventy-two-hour review window.

What the notification contains:

  • The full draft profile, exactly as it will appear publicly.
  • The list of sources used to construct the profile.
  • A request-edit link to suggest corrections.
  • An opt-out link to decline inclusion entirely.
  • A point of contact for questions not covered by the above.

What happens during the review window:

  • If the subject requests corrections, we apply them and restart the 72-hour clock on the corrected draft.
  • If the subject opts out, the profile moves to retired and does not appear publicly.
  • If the subject does not respond within 72 hours, the profile moves to public. Subjects retain full correction and removal rights after publication — silence at the pre-publication stage does not waive them.
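The window arithmetic implied by these rules — a 72-hour default, a restart on every corrected draft, and a documented extension capped at seven calendar days — can be sketched as follows. This is a hypothetical illustration; the function name, the `window_days` parameter, and the assumption that an extension replaces (rather than adds to) the default are all mine, not the platform's:

```python
# Hypothetical sketch of the review-window deadline. Assumption: an
# extension replaces the 72-hour default with a longer window, capped
# at seven calendar days.
from datetime import datetime, timedelta

def review_deadline(sent_at: datetime, window_days: int = 3) -> datetime:
    """Deadline for the draft sent at `sent_at`. Corrections restart
    the clock: call again with the corrected draft's send time."""
    clamped = min(max(window_days, 3), 7)   # 72h floor, 7-day ceiling
    return sent_at + timedelta(days=clamped)
```

Restart-on-correction means the subject always reviews the version that will actually publish, never a superseded draft under a running clock.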

Exception handling

A 72-hour window is the default, not a ceiling. We recognize that the platform's audience is global and that the African Edition specifically includes subjects in contexts where a 72-hour email window may be unfair — cross-time-zone distances, low-connectivity environments, religious observances, public holidays, and the ordinary variance of busy professional lives. The default is held firm in the common case; the exceptions below exist so that the default does not become a trap.

If the primary notification bounces or is otherwise undeliverable, we attempt at least one alternative contact channel where one is publicly available — a secondary email from a verified source, a direct message via a subject-controlled handle, or a company contact documented on the subject's affiliation page. If no alternative channel is reasonably available and the notification cannot be delivered, the profile is not published. The editor records the delivery attempts in the audit log and re-reviews the notification plan before proceeding.

Where low-connectivity, cross-time-zone, or holiday context reasonably warrants it, the review window may be extended up to seven calendar days. This extension is at the editor's discretion based on documented context — not a privilege the subject must request. Extensions are logged alongside the reason.

This process is our standing commitment to the subjects of the index. It is how we earn the right to describe them, not just the legal clearance to do so.

§09

Corrections and removals.

Every public profile carries a visible request edit and request removal link in its footer. We acknowledge receipt of every request within two business days, and resolve factual corrections and verified removals within seven calendar days.

What edits we make

Factual corrections are always applied. If a role, affiliation, date, work attribution, or linked source is wrong, we fix it on confirmation. Spelling of names, preferred pronouns, and preferred short forms are corrected to the subject's stated preference. If the subject has updated their professional status — a new role, a departure, a retirement — we reflect it.

Stylistic requests are honored when they do not compromise factual accuracy. If a subject prefers a different phrasing of a factual claim, we accommodate. If the requested change would introduce inaccuracy or remove sourced material, we discuss it.

What edits we decline

We do not remove sourced, accurate, and relevant factual claims on request. If a subject would prefer their profile not mention a company they founded, a paper they co-authored, or a role they held — the request is treated as a removal request rather than an edit request. The profile as a whole may be retired, but its individual factual claims do not become selectively invisible.

We do not make changes that would misrepresent the subject's actual contribution — either by understatement or overstatement.

Removal

A subject may request full removal at any time. On receipt of a removal request from the subject (verified via the correspondence address on file), the profile moves to retired within seven days. The page returns a clear removal signal, not a generic missing page. The database row is retained for audit purposes — the reason for retirement is captured, the decision is not reversed casually — but no content from the profile remains publicly accessible.

Removed profiles are not re-added without the subject's explicit re-consent.

§10

Disputes and appeals.

Not every correction request is a simple factual fix. Sometimes a subject and the editor disagree about whether a claim is accurate, whether a source is credible, whether a category is appropriate, or whether inclusion itself is warranted. For those cases, the platform operates a three-tier dispute resolution process with defined SLAs at each tier. The process is designed to resolve disputes quickly where resolution is reachable, and to escalate cleanly where it is not.

— Tier 1 —
Editorial Owner
Written decision within 7 days
The editor responsible for the profile issues a written decision, documenting the facts, sources reviewed, and the reasoning behind the outcome. Most disputes resolve here. If the subject accepts the decision, the matter closes; if not, the dispute escalates to Tier 2.
— Tier 2 —
Senior Editorial Review
Independent re-review within 14 days
A second editor, unconnected to the original profile, re-reviews the dispute independently. The senior editor may uphold the Tier 1 decision, overturn it, or send the matter back for additional investigation. Outcome is documented with standards clauses applied.
— Tier 3 —
Standards Review Panel
Final determination within 21 days
For material disputes — matters affecting inclusion rights, category architecture, or the application of these standards themselves — a convened panel issues the final determination. Until a standing panel is formally constituted, Tier 3 review is satisfied by an independent external reviewer or advisory reviewer designated for standards oversight and not involved in the original decision. Panel composition, or the interim reviewer model in force, is published with the annual standards update.

Every dispute outcome is logged: the issue type, the evidence reviewed, the decision rationale, the standards clauses applied, the effective date. This log is internal but auditable — both by the subject whose dispute it concerns and by future editors handling comparable cases. Consistency across disputes is how the standards remain credible as the platform scales.

§11

Conflicts of interest.

Editorial independence is not a posture — it is a practice. An editor has a conflict of interest when their judgment about a specific profile could reasonably be influenced by a relationship they hold outside the platform. Conflicts are disclosed, logged, and in material cases, followed by recusal.

Editors disclose any relationship they hold with a subject being profiled.

Disclosure is the minimum standard. Where the conflict is material — meaning a reasonable observer would question the editor's objectivity — the editor recuses from the profile, and the profile is reassigned. The recusal and its reason are logged in the internal record. Non-material relationships (a casual acquaintance, an attended conference, a shared mutual contact) are disclosed but do not require recusal.

These rules apply equally to the founder during the platform's solo-editorial phase. When an editor is writing a profile of someone they know, the working standard is: disclose, and when in doubt, reassign.

§12

Monetization and independence.

TechWhoIsWho does not sell inclusion. No builder pays to appear on the platform. No builder pays for higher ranking. No builder pays for a promotional tier, featured placement, or any other form of paid visibility. This is the line that separates a canonical index from a vanity directory, and it is non-negotiable.

Future revenue comes from the demand side — never from the builders whose work makes the platform worth anything.

The platform will introduce paid products. Those products will be directed at the demand side of the market: recruiters, investors, and enterprises seeking access to the dataset, with advanced search, filtering, hiring tools, and API access. Subjects may choose to upgrade their own profiles for control and depth — verification badges, hiring availability signals, expanded media — but the base profile is always free and earned by contribution. "Upgrade" never means "pay to be listed." It means pay for editorial control of your own entry, where the entry itself was earned on its merits.

Sponsored content, where it eventually appears, is clearly labeled and separated from editorial. Sponsors do not influence which builders are profiled, how they are described, or where they appear in search results. Commercial partnerships and integrations do not grant influence over editorial decisions. The integrity of the index is the product; commercial arrangements serve the index, never the reverse.

§13

Governance and versioning.

These standards are versioned and public. Versioning follows a three-level semantic scheme — major, minor, patch — so that changes to this document carry their implications visibly.

Version semantics

Every version published here carries a date and a change summary at the top of the page. Affected profiles — those whose classification, source standards, or subject rights are changed by a revision — are re-reviewed within 30 days of publication. The change log grows over time as the standards evolve.

The inclusion standard

Inclusion criteria are reviewed semi-annually. Edge cases encountered during editorial work are documented (see §14 below) and, when the cases reveal a recurring pattern the criteria do not handle, the criteria are updated with a version increment and a published change notice.

Editor accountability

As the platform scales beyond solo editorial, editorial decisions are logged per-profile — the editor who wrote it, the editor who reviewed it, the decisions made at each lifecycle transition, the corrections applied after publication. Any editor may be audited; any decision may be reviewed. The platform's credibility depends on consistency across profiles, and the governance mechanism protects that consistency as the editorial team grows.

Independence

TechWhoIsWho is a project of SITI Africa Holdings Limited. Editorial judgment is held by the editorial team, not by commercial interests within the parent entity or external partners. Sponsorship and commercial partnerships, when they exist, do not influence editorial decisions. Where a commercial interest within SITI Africa Holdings would benefit from a particular editorial outcome, the conflict is treated under §11 and the editor recuses.

Transparency reporting

The platform publishes a periodic transparency summary covering the operational measures that are mature enough to report responsibly: source-count distribution, correction volume and median resolution time, removal volume and median resolution time, dispute volume, and taxonomy recategorization trends. Where a measure is not yet stable or statistically meaningful, the summary states that directly. The purpose of the summary is not optics; it is to make quality control visible.

§14

Edge cases.

The inclusion standard and taxonomy are firm. The boundary cases where they meet real careers are where editorial judgment actually happens. The cases below are the ones that recur in sourcing — profiles that sit on the line between inclusion and exclusion, or between one category and another. Each is published with the platform's current decision and the rationale behind it. The list is illustrative, not exhaustive.

AI newsletter author with no shipped systems, papers, or products
Exclude
Newsletter authorship is commentary on AI, not the production of AI. The writer may be influential, but the platform records builders. Excluded by the builder definition in §02.
Investor with AI-focused portfolio but no technical output
Exclude
Capital allocation is a consequential role but not a building role. If the investor formerly shipped substantive research or systems, the prior contribution may qualify; their current investor role does not.
Developer advocate who also ships an AI benchmark library used by external teams
Include
The benchmark library is a documented contribution with meaningful adoption. Tag under open_source_ai_tooling or data_and_evaluation depending on the primary contribution. The advocacy role is context, not the inclusion trigger.
Senior product manager who shapes technical direction at a frontier AI lab but does not author papers or ship code
Conditional
If the PM's technical influence is documented through talks, internal research posts made public, or named attribution on products, include. If influence is real but unwritten and untraceable, hold pending sourceable evidence.
Policy researcher whose output is primarily commentary but who co-authored a technical audit methodology or benchmark
Include
The technical co-authorship is a documented contribution. Tag under ai_safety_interpretability if the audit is substantive. The commentary output is context.
Research engineer at a frontier lab who ships production infrastructure but has no external publications
Include
Production infrastructure at a recognized lab is a senior technical role per the inclusion criteria. Sourcing may rely on internal engineering blogs, named commit history on public repos, or talks at recognized venues.
Founder of a self-described AI company with no verifiable AI system
Exclude
"AI-powered" as marketing does not qualify. The platform's sourcing standard requires verifiable evidence of an AI system — a model, a published technique, a documented training pipeline, or comparable. Hold in draft pending evidence; decline if evidence does not emerge.
Prominent AI ethicist whose writing is philosophical rather than technical
Exclude
Ethics commentary without an accompanying technical or empirical contribution falls outside the builder definition. Ethicists who do publish technical audits, empirical bias studies, or evaluation work are included under the relevant technical category.
Researcher with a single well-cited paper over five years ago and no recent output
Conditional
The 36-month window in §03 is the default standard. Older contributions may qualify for Tier 1 (reference layer) if the paper is foundational. For Tier 3 (discovery layer), recency is required.
African-origin researcher at a frontier Western lab whose research is not primarily on African AI
Include
The African Edition includes African-origin builders regardless of their current geography or research topic. Their presence in the index is part of mapping the African AI diaspora; the research category is tagged normally.

Edge cases encountered in live editorial work are logged. When a case type recurs three or more times without being covered here, the list is updated and published in a minor-version revision of these standards.

§15

Our standing promise.

— One —
Inclusion is earned.
No builder pays for a profile. No builder pays for priority. The dataset's value depends on this, without exception.
— Two —
Every claim is sourced.
No profile contains a descriptive claim without a public, linkable source behind it. If we cannot source it, we do not write it.
— Three —
Subjects see the profile first.
Every profile is shown to its subject before it is shown to the world. Seventy-two hours — extended where context warrants.
— Four —
Disputes have a path.
Three-tier resolution with defined SLAs. Disagreement does not become silence; it becomes review.
— Five —
Corrections are honored.
Two-day acknowledgement, seven-day resolution. Factual corrections applied without friction. Removals honored on verified request.
— Six —
Independence is protected.
Conflicts are disclosed; material ones trigger recusal. No commercial interest influences editorial judgment.
— Seven —
The dataset is free to read.
The index is public. Future revenue comes from the demand side, not from the builders we profile.
— Eight —
These standards are public.
This page is the actual operating document, not marketing. Changes are versioned, dated, and published.
Editorial Standards v1.1 · April 2026. Questions, corrections, or partnership inquiries: hello@techwhoiswho.com