How Board Game Librarian Answers Your Questions

Last updated: March 31, 2026


A plain-language guide to what happens between "you ask a question" and "you get an answer" — no technical background required.


Table of Contents

  1. The Short Version
  2. Step by Step: The Journey of a Question
  3. How the System Understands Meaning, Not Just Words
  4. Multilingual: Ask in Any Language
  5. Two Types of Answer: Fast and Deep
  6. Confidence: How Sure Is the Answer?
  7. When the Rulebook Is New
  8. Community Sources and Deep Analysis
  9. What the System Can and Cannot Do
  10. Frequently Asked Questions
  11. Continuous Quality Improvement

The Short Version

Board Game Librarian is a research assistant that's read every rulebook you've given it. When a player asks a question, the assistant finds the most relevant pages, reads them, and writes a clear answer in the player's own language — with citations so the player can verify the source.

The system doesn't guess. It reads first, then answers.

By the numbers: most answers arrive in 5–12 seconds, the system supports 10 languages, and every answer includes the rulebook page numbers it drew from.


Step by Step: The Journey of a Question

Here's what happens from the moment a player types a question to the moment they read the answer.

1. The question arrives

Questions can come from three places: the Telegram bot, the web chat, or a widget embedded on a partner's website. No matter where the question comes from, it follows the same path.

2. Language detection

The system reads the structure of the sentence — not the content words, but the grammatical words (articles, pronouns, prepositions) — to determine what language the player is using. It doesn't need to be told. A question written in French is detected as French; a question in Japanese is detected as Japanese.

Example: "Come funziona il combattimento?" is detected as Italian from the function words come, il, and the verb structure — even though a game name like "Root" would be the same in any language.

This detected language is remembered and used at the end of the pipeline to write the answer back in that same language.
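
For the technically curious, this step can be sketched in a few lines of Python. The open-source langdetect library stands in here for the production detector, which may work differently:

```python
# A minimal sketch of question-language detection. The langdetect
# library is a stand-in; the real detector may differ.
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make detection deterministic across runs

def detect_question_language(question: str) -> str:
    """Return an ISO 639-1 code such as 'it', 'de', or 'ja'."""
    try:
        return detect(question)
    except Exception:
        return "en"  # fall back to English if detection fails

print(detect_question_language("Come funziona il combattimento?"))  # 'it'
```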

3. Game identification

The system extracts the name of the game from the question. If the player has already selected a game (for example, in a single-game widget or by using the /game command), the system uses that context. If the game isn't yet in the library, it can be imported automatically.

4. Searching the rulebook by meaning

This is where the system does something different from a keyword search. See the next section for a full explanation.

5. Finding the right sections

The system identifies the top 10 most relevant passages from the rulebook — based on meaning, not keyword matching. These passages are ranked by how well they match what the player was actually asking about.
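
As a rough sketch of this selection step (the real index and scoring are more sophisticated, and the names here are illustrative): the question and every rulebook passage are represented as vectors, and relevance is measured by cosine similarity between them.

```python
import numpy as np

# Illustrative sketch: select the top 10 passages by meaning.
def top_passages(question_vec: np.ndarray,
                 passage_vecs: np.ndarray,
                 k: int = 10) -> list[int]:
    # Cosine similarity between the question and every passage.
    sims = passage_vecs @ question_vec / (
        np.linalg.norm(passage_vecs, axis=1) * np.linalg.norm(question_vec)
    )
    # Indices of the k best-matching passages, most relevant first.
    return np.argsort(sims)[::-1][:k].tolist()
```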

6. Writing the answer

An AI model reads the selected rulebook sections and writes a clear, structured answer. The AI doesn't draw on general knowledge or invent rules — it works exclusively from the rulebook passages selected in the previous step. The answer is written in the player's detected language, regardless of the language the rulebook is in.
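
A hedged sketch of how such a grounded prompt could be assembled; the wording and field names are invented for illustration, not the actual templates:

```python
# Illustrative only: build a prompt that confines the model to the
# retrieved passages. Not the real prompt templates.
def build_prompt(question: str, passages: list[dict], language: str) -> str:
    excerpts = "\n\n".join(
        f"[Page {p['page']}] {p['text']}" for p in passages
    )
    return (
        f"Answer in {language}, using ONLY the rulebook excerpts below. "
        f"If they do not answer the question, say so.\n\n"
        f"Excerpts:\n{excerpts}\n\nQuestion: {question}"
    )
```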

7. Delivery and citation

The answer is delivered with:

  • Page citations: the exact rulebook pages that informed the answer
  • A confidence indicator: a signal of how well the rulebook supported the answer
  • An option to request a deeper analysis if the question is complex
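
Taken together, a delivered answer can be pictured as a small data structure. The field names below are assumptions for the sketch, not the real schema:

```python
from dataclasses import dataclass

@dataclass
class DeliveredAnswer:
    text: str                      # the answer, in the player's language
    pages: list[int]               # rulebook pages cited
    confidence: str                # "high", "medium", or "low"
    deep_analysis_available: bool  # whether a Tier 2 answer can be requested
```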

How the System Understands Meaning, Not Just Words

Keyword search finds the exact words you typed. Meaning-based search finds the idea you meant.

The difference

Imagine two players ask the same question, differently:

  • "How do I set up the game for three players?"
  • "What does the board look like at the start with 3 people?"

These sentences share almost no words. A traditional keyword search would treat them as entirely different queries. The system treats them as the same question — because they mean the same thing.

The map analogy

Think of every concept in every rulebook placed on an invisible map, where similar ideas are physically close to each other. "Setup," "starting position," "initial placement," and "beginning of game" all live in the same neighbourhood on this map. When a player asks a question, the system finds its location on the map and retrieves everything in the surrounding area.

This is why the system can find the right rulebook section even when the player uses different words, a loose description, or even a slight mistranslation.
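
The two setup questions above make a good toy demonstration. This sketch uses the open-source sentence-transformers library as an assumed stand-in for the production embedding model:

```python
from sentence_transformers import SentenceTransformer, util

# Two questions that share almost no words but mean the same thing.
model = SentenceTransformer("all-MiniLM-L6-v2")
q1 = "How do I set up the game for three players?"
q2 = "What does the board look like at the start with 3 people?"

emb = model.encode([q1, q2])
# A high score means the two questions live in the same "neighbourhood".
print(float(util.cos_sim(emb[0], emb[1])))
```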

What this means in practice

  • Asking about "winning the game" also retrieves content about "victory conditions" and "end-game scoring"
  • Asking about "fighting" retrieves content about "combat," "attack," and "battle resolution"
  • Asking a question in Italian retrieves content from an English rulebook — because the meaning is the same, even if the words are not

One honest limitation

Meaning-based search is very good at concepts. It's less reliable for highly specific references — like a rule that applies only to a particular named card, an obscure variant, or a term that appears in only one line of the rulebook. If the relevant content simply isn't in the rulebook, the system can't find it.


Multilingual: Ask in Any Language

Board Game Librarian supports 10 languages: English, Italian, German, French, Spanish, Portuguese, Russian, Japanese, Polish, and Chinese.

How it works

The system operates on a simple principle: the language of the question determines the language of the answer.

A player who asks in German gets an answer in German, even if the rulebook is in English. The system detects the question language, searches the rulebook in its original language, then composes the answer in whatever language the player used — acting as an automatic interpreter.

Language precedence

When a player uses the system, their language preference is respected in this order:

  1. The language of the question — always the primary signal
  2. The player's saved preference (if they've logged in before)
  3. The widget's configured language (a default set by the partner)

The player never needs to select a language or change any settings.

Example: A German-speaking player uses an Italian partner's widget. The widget is configured for Italian. The player asks a question in German. The answer arrives in German — because the player's question language takes priority.
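
In code, this precedence is a simple fallback chain. A minimal sketch, assuming each signal may be absent:

```python
# A minimal sketch of the language-precedence rule.
def resolve_language(question_lang: str | None,
                     saved_pref: str | None,
                     widget_default: str) -> str:
    # 1. question language  2. saved preference  3. widget default
    return question_lang or saved_pref or widget_default

# The example above: German question, Italian widget -> German answer.
assert resolve_language("de", None, "it") == "de"
```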

A note on accuracy

Languages with more training data in AI models (English, Spanish, French, Italian, German) produce the most fluent and precise answers. Languages with less coverage (Polish, Japanese, Chinese) are fully supported but may occasionally show minor grammatical imprecision compared to the better-covered languages.


Two Types of Answer: Fast and Deep

Every question goes through a two-stage system designed to balance speed with depth.

Stage 1 — The fast answer (Tier 1)

Time: 5–12 seconds
Sources: Official rulebook only
Triggered: Automatically, for every question

This is the standard response. The system searches the rulebook, selects the most relevant sections, and produces a clear answer. For the majority of questions — straightforward rules clarifications, setup instructions, turn order, victory conditions — this answer is complete and definitive.

Stage 2 — The deep answer (Tier 2)

Time: 15–35 seconds
Sources: Official rulebook + community discussions
Triggered: When the player explicitly requests deeper analysis, or when the system determines the question requires it

Some questions are genuinely complex: they involve interactions between multiple rules, edge cases the rulebook doesn't address clearly, or real-world situations where players disagree about the correct interpretation. For these, the system goes further. It searches community forums — where experienced players and designers have discussed exactly these kinds of edge cases — and combines that discussion with the official rulebook to produce a more complete answer.

                  Fast Answer             Deep Answer
Time              5–12 seconds            15–35 seconds
Sources           Official rulebook       Rulebook + community forums
Best for          Clear rule questions    Edge cases, ambiguous rules
Citations         Rulebook pages          Pages + forum threads
When triggered    Automatically           On player request

Think of it this way: the fast answer is a triage response — immediate, accurate, drawn from the official source. The deep answer is a consultation: it takes longer but cross-references more evidence, including how experienced players have interpreted the same rule in practice.
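
As an illustrative sketch of the routing logic (helper behaviour is stubbed, and the real complexity check is more involved):

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    sources: list[str] = field(default_factory=list)
    deep: bool = False

def looks_complex(question: str) -> bool:
    # Placeholder heuristic; the real check is more involved.
    return "interaction" in question.lower()

def route(question: str, deep_requested: bool = False) -> Answer:
    # Tier 1 always runs: search the official rulebook (stubbed here).
    sources = ["Rulebook p. 12"]
    deep = deep_requested or looks_complex(question)
    if deep:
        # Tier 2 adds curated community threads to the evidence.
        sources.append("Forum thread: combat edge cases")
    return Answer(text=f"Answer to: {question}", sources=sources, deep=deep)
```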


Confidence: How Sure Is the Answer?

Every answer includes a confidence indicator. This isn't a measure of the AI's certainty — it's a measure of how well the rulebook supported the answer.

What confidence means

Level     What it means
High      Multiple rulebook sections clearly and consistently address the question. The answer is well-supported.
Medium    The rulebook addresses the topic, but the relevant passages require interpretation. The answer is reasonable; verify it if the question is critical.
Low       The rulebook doesn't address this question directly. The answer draws on the closest available content. Consider checking the publisher's FAQ or official errata.

How it's calculated

The system measures how closely the retrieved rulebook sections match the question. If several strong matches are found and they all point to the same conclusion, confidence is high. If the matches are weak, few, or contradictory, confidence is lower.

The weather forecast analogy: a high-confidence answer is like a 90% chance of rain — the evidence strongly points one way. A low-confidence answer is like a 40% chance — the evidence exists but isn't conclusive. In both cases, the forecast is honest about its uncertainty.
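
A toy version of that calculation might look like the following; the thresholds are invented for illustration, not the production values:

```python
# Illustrative confidence heuristic over retrieval scores.
def confidence(similarity_scores: list[float]) -> str:
    strong = [s for s in similarity_scores if s >= 0.75]
    if len(strong) >= 3:
        return "high"    # several strong, consistent matches
    if strong or max(similarity_scores, default=0.0) >= 0.60:
        return "medium"  # some support, but interpretation needed
    return "low"         # no strong direct support found
```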

What to do with a low-confidence answer

A low-confidence answer isn't necessarily wrong. It means the system couldn't find strong direct support in the rulebook. In these cases:

  • Check the page citations provided — the relevant content may be there but phrased differently
  • Consider requesting a deep analysis, which draws on community sources
  • Consult the publisher's FAQ, errata, or official forum if the question is critical

When the Rulebook Is New

When a new rulebook is added to the library, the system doesn't process the entire book immediately. Instead, it waits.

Why?

Processing thousands of rulebook pages — extracting meaning from every section, building the semantic map described earlier — takes time and computing resources. Doing this for every rulebook in advance, for games that may never be asked about, would be wasteful. So the system processes a rulebook the first time a player asks about that game.

What happens on the first question

  1. The system sees the rulebook text but has no processed sections yet.
  2. It reads the raw rulebook text directly — this takes slightly longer than usual.
  3. An answer is delivered (it may be slightly less precise than normal).
  4. In the background, the system processes the full rulebook. This takes 5–10 minutes.
  5. All subsequent questions use the fully processed rulebook.

The first answer may be slightly less precise, but it's never an error. Players receive a response immediately. The system improves silently in the background.
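
In outline, the lazy-processing behaviour looks like this sketch, with illustrative names and a stubbed background job:

```python
import threading

_indexed: set[str] = set()  # games whose rulebooks are fully processed

def build_semantic_index(game_id: str) -> None:
    # Stand-in for the 5-10 minute background processing job.
    _indexed.add(game_id)

def answer(game_id: str, question: str) -> str:
    if game_id not in _indexed:
        # First question about this game: index in the background...
        threading.Thread(target=build_semantic_index,
                         args=(game_id,), daemon=True).start()
        # ...and answer immediately from the raw rulebook text.
        return f"(answer from raw text) {question}"
    return f"(answer from processed rulebook) {question}"
```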


Community Sources and Deep Analysis

For some rules questions — especially edge cases and situations the rulebook doesn't address explicitly — the most valuable knowledge exists not in the official document, but in the accumulated experience of the player community.

What community sources are

The system can access community forum discussions: threads where experienced players, enthusiasts, and sometimes the game designers themselves have debated the exact question being asked. These discussions often contain interpretations of ambiguous rules, clarifications of designer intent, real play examples, and errata that haven't yet made it into the printed rulebook.

How they are used

Community sources are used exclusively in deep analysis (Tier 2); they're never mixed into the standard fast answer. When a deep analysis is requested, the system searches the official rulebook as usual, then searches curated community forum threads related to the game and question. It combines both sets of sources and produces an answer that acknowledges both the official rulebook position and any relevant community discussion. Every community source is cited, so the player can read the original thread.

Authority weighting

Official rulebook content carries more weight than community discussion. If the rulebook clearly answers a question, that answer takes precedence. Community sources are presented as supplementary context, not as overrides.
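
One simple way to express that precedence is a weighted ranking. The weights below are invented for illustration:

```python
# Illustrative authority weighting: rulebook passages outrank forum
# threads of equal relevance. The weights are assumptions.
WEIGHTS = {"rulebook": 1.0, "forum": 0.6}

def rank_sources(sources: list[dict]) -> list[dict]:
    return sorted(
        sources,
        key=lambda s: WEIGHTS[s["kind"]] * s["relevance"],
        reverse=True,
    )
```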


What the System Can and Cannot Do

What it can do

  • Answer questions about any game whose rulebook has been imported
  • Detect and respond in 10 languages automatically
  • Cite the exact rulebook pages that informed each answer
  • Handle follow-up questions in a conversation
  • Distinguish between straightforward rules questions and complex edge cases
  • Cross-reference community discussions for ambiguous situations
  • Provide a confidence indicator for every answer

What it cannot do

  • Answer questions about rules not covered in the imported rulebook
  • Access external websites, publishers' websites, or live errata
  • Guarantee 100% accuracy on genuinely ambiguous or contested rules
  • Replace official publisher rulings for tournament or competitive play
  • Understand house rules or local variants unless they're in an imported document
  • Answer questions about games not in the library

On accuracy

The system is designed to be accurate and honest. When it doesn't know, it says so — or gives a low-confidence answer rather than a confident wrong one. Page citations are provided so players can verify every answer against the source.

For high-stakes situations (tournament play, resolving a real dispute), always verify the answer against the physical rulebook or the publisher's official FAQ.


Frequently Asked Questions

The system gave me a wrong answer. What happened?

Several things can cause an incorrect answer:

  1. The question touched an edge case the rulebook doesn't address clearly. Check the confidence level — a low-confidence answer means the system found limited support.
  2. The relevant rule is in a supplement or expansion rulebook that hasn't been imported. Contact the portal administrator to add the missing document.
  3. The rulebook contains an error that was later corrected by errata. The system answers based on the imported document, not post-publication corrections.

In any case, check the page citations provided. They show exactly which rulebook sections were used.


Can I ask follow-up questions?

Yes. Within a conversation, the system retains context. If you ask "What happens if the attacking player runs out of cards?" after a question about combat, the system understands you're still asking about combat and adjusts accordingly.


Why does the first question about a new game sometimes feel slower or less precise?

When a rulebook has just been added to the library, the system delivers an immediate answer from the raw rulebook text while processing the full document in the background. The quality of that first response can be slightly lower than normal. Subsequent questions use the fully processed version.


The answer is in the wrong language.

The system uses the language of the question to determine the language of the answer. If the answer came in an unexpected language, check that the question itself was written in the intended language — the system doesn't require any language settings to be changed manually.


Can the system answer questions about unofficial rules or variants?

Only if a document containing those variants has been imported into the library. The system can't invent rules it hasn't been given.


How current are the rulebooks?

The system answers based on the rulebook versions that have been imported. If a publisher releases an updated rulebook or errata, the portal administrator needs to import the new version. The system won't automatically update.


Continuous Quality Improvement

Board Game Librarian includes an autonomous quality-optimisation loop that runs in the background, reviewing interactions after you and other players have used the system.

How it works:

  1. Nightly triage — the system scans recent interactions and flags the ones worth evaluating (real questions with real answers, in a supported language).
  2. LLM-as-judge — a separate AI model reviews each flagged interaction and scores it across four dimensions: accuracy, completeness, format, and relevance. It also identifies which prompt template was used.
  3. Targeted proposals — when a pattern of weak responses is found in a specific question category, the system proposes a targeted change to the relevant YAML prompt template, together with the reasoning.
  4. Test battery — the proposed change is tested against 40 representative questions. The score before and after is compared. If the improvement exceeds the threshold and no questions regressed, the change is auto-applied. Borderline improvements go to a human for approval.
  5. Safe deployment — when a proposal is applied, the prompt file is updated, the Redis prompt cache is flushed, and the orchestrator restarts. A git commit records the exact diff. If something goes wrong, the change can be rolled back.

This cycle means that answer quality improves over time without requiring manual prompt engineering after every edge case.
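
The gate in step 4 can be sketched as follows; the threshold and outcome labels are assumptions, not the production configuration:

```python
# Illustrative gate for a proposed prompt change, compared across the
# 40-question test battery. The threshold is an assumed value.
def gate(before: list[float], after: list[float],
         threshold: float = 0.05) -> str:
    if any(a < b for a, b in zip(after, before)):
        return "reject"        # any regressed question blocks the change
    gain = sum(after) / len(after) - sum(before) / len(before)
    if gain > threshold:
        return "auto-apply"    # clear improvement: applied automatically
    if gain > 0:
        return "human-review"  # borderline: queued for approval
    return "reject"
```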

The admin approval interface is at /admin/quality.