🔬 Our Methodology

How we source, verify, timestamp, archive, and correct every Teer result we publish.

Accuracy is the foundation of this website. A single incorrect number erodes trust that takes months to rebuild. This page explains our editorial approach to sourcing, verifying, publishing, archiving, and — where necessary — correcting Teer archery results, and what we do when information is uncertain. It is written deliberately for readers who want to understand the actual mechanics rather than marketing claims, and it is the closest thing we have to a technical spec sheet for the site.

We publish this page because we think operators in the Teer information space have generally been vague about where their data comes from and how it is verified. We would like to be the counter-example. If something on this page is unclear or if you spot a logical gap in our process, please tell us via the contact page — the methodology itself is open to editorial scrutiny, and we update it when our process improves.

Scope of Coverage

This methodology applies to four archery events we cover daily: Shillong Teer, Khanapara Teer, Juwai Teer, and Night Teer. For each event, the published result consists of the last two digits of the total arrows that struck a designated cylindrical target within a fixed time window during each declared round. Most events publish two rounds per day — the First Round (FR) and the Second Round (SR) — though timings and round structures vary by association and are subject to occasional schedule changes. We maintain event-specific notes for timing and format variations; those notes feed into the verification rules described below.

Source Hierarchy

Not all sources are equal. We organise the public information environment into three tiers and apply different weights depending on the tier a reference falls into.

Tier 1 — Official association references. Publicly accessible declarations published by the archery associations themselves sit at the top of the hierarchy. When a Tier 1 reference is available and is reporting a number consistent with at least one Tier 2 reference, the result is treated as strongly verified.

Tier 2 — Established secondary public references. Long-standing public references that aggregate daily declarations from the associations, maintained by independent third parties with a multi-year track record. These are the workhorses of the verification pipeline; they are fast and usually reliable but are not authoritative on their own.

Tier 3 — Informational references without editorial track record. Pages that may report a declared number but have no published methodology, no corrections record, or an uneven history. We read these but do not count them toward consensus.

No source, at any tier, is treated as publish-ready on its own. Even a Tier 1 reference must match at least one independent reference before the number transitions from pending to verified on our site.

Our Commitment to Accuracy

1. Multi-Source Verification

Every number we publish is cross-referenced against multiple independent sources before it appears on the site. The consensus rule is simple: at least two independent references must report the same value before the number transitions from pending to verified. We do not treat any single source as authoritative on its own — regardless of how reputable it may be. If sources disagree, we do not pick arbitrarily.

2. Ground-Truth Priority

When information is available from an observer with direct access to the officially declared outcome — a Tier 1 reference from the association itself — that information is given priority over secondary aggregators. Ground-truth priority resolves most borderline cases: if Tier 1 and Tier 2 agree, we publish; if Tier 1 exists and Tier 2 references conflict among themselves, Tier 1 wins the tie.

3. Independent Re-Verification

Each day's results are re-checked later the same day against a separate set of references that were not used in the initial consensus check. This "second pair of eyes" pass runs in the evening, after all rounds for the day have been declared and the upstream sources have had time to stabilise. The evening pass is designed to catch any error that slipped through the initial publication — an upstream source correcting its own earlier mistake, a typo we inadvertently mirrored, a transient network issue that fed us stale data.

4. Accountable Corrections

When a published number is updated, the change is recorded in a permanent, timestamped corrections log — not silently overwritten. The log captures the original value, the corrected value, the time of the change, and the editor responsible for reviewing the correction. We treat the archive as accountable history, not a mutable record.

Why we go to these lengths: readers make real decisions based on the numbers we publish. Even though our role is purely informational, an error on our end can mislead thousands of people. Our verification approach is deliberately redundant so that no single failure — a malfunctioning upstream website, a typo at the source, an opportunistic bad actor — can corrupt what appears here.

The Publishing Pipeline, End to End

At a high level, a number moves through five distinct states before it stabilises on the site: unpublished → pending → single-source observed → verified → re-verified (archived). Each transition has a defined rule.

From unpublished to pending. As the round's declared time approaches, the result page status flips from "unpublished" to "pending." This signals to readers that we are actively looking and that a number may arrive shortly.

From pending to single-source observed. When our automated fetchers pick up a first candidate value from any tracked public reference, the value is logged internally but not displayed on the public page. A single source is never enough to publish.

From single-source observed to verified. Once a second independent reference reports the same value — or a Tier 1 reference agrees with at least one Tier 2 reference — the number transitions to "verified" and appears publicly on the result page, with a visible updated-at timestamp.

From verified to re-verified. After the evening re-verification pass completes and a further independent reference corroborates the published value, the number is marked "archived" and rolls into the historical record. Numbers that fail re-verification are flagged for editorial review rather than auto-overwritten.
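The five states and the transitions between them form a simple state machine. A minimal sketch, in Python, of what the rules above describe (state and function names are illustrative assumptions, not our production code):

```python
from enum import Enum

class ResultState(Enum):
    UNPUBLISHED = "unpublished"
    PENDING = "pending"
    OBSERVED = "single-source observed"
    VERIFIED = "verified"
    ARCHIVED = "re-verified (archived)"

# Each state has exactly one legal forward transition; anything else
# (including skipping a state) is rejected and routed to editorial review.
ALLOWED = {
    ResultState.UNPUBLISHED: {ResultState.PENDING},
    ResultState.PENDING: {ResultState.OBSERVED},
    ResultState.OBSERVED: {ResultState.VERIFIED},
    ResultState.VERIFIED: {ResultState.ARCHIVED},
    ResultState.ARCHIVED: set(),  # terminal under normal operation
}

def transition(current: ResultState, target: ResultState) -> ResultState:
    """Advance a result to the next state, rejecting illegal jumps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

The point of the strict transition table is the one described above: a number can never jump straight from a single observation to the public page.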

Consensus, Voting, and Tie-Breaking

The consensus rule is strict: two independent references agreeing is the minimum threshold, and agreement from a Tier 1 reference substantially raises our confidence even when fewer Tier 2 references have reported. The implementation detail that matters most is independence — two aggregators that share an upstream feed are not two independent references, and we track upstream lineage to avoid treating echoes as corroboration.

When three or more references report and a clear majority agrees, we follow the majority and mark the outliers for editorial review. When two references report different values and no further reference is available, we hold the result at "pending" and wait. In practice, this pattern resolves itself within minutes — upstream sources catch up, or one reference corrects a transient error — but on the rare occasion it persists, we keep the page in "pending" rather than publishing a guess.
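The consensus and tie-break rules above can be sketched in a few lines. This is an illustrative simplification under stated assumptions (one value per upstream lineage, and a report modelled as a source id, an upstream feed id, and a value); the real pipeline tracks lineage in more detail:

```python
from collections import Counter

def consensus(reports, min_independent=2):
    """reports: list of (source_id, upstream_feed, value) tuples.
    Two aggregators sharing an upstream feed count as ONE reference.
    Returns the verified value, or None to hold the result at pending."""
    # Collapse echoes: keep one report per upstream lineage.
    by_lineage = {}
    for source_id, upstream, value in reports:
        by_lineage.setdefault(upstream, (source_id, value))
    votes = Counter(value for _, value in by_lineage.values())
    if not votes:
        return None  # nothing observed yet
    value, count = votes.most_common(1)[0]
    # Require the minimum threshold AND a clear majority of lineages;
    # outliers are flagged for editorial review elsewhere.
    if count >= min_independent and count > sum(votes.values()) - count:
        return value
    return None  # disagreement or too few references: stay pending
```

Note how two sources on the same feed fail the threshold even when they agree, which is exactly the "echoes are not corroboration" rule.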

What Happens When Sources Disagree

When we can't reach confident agreement on a number, we do not pick between conflicting values. We either keep the previously confirmed result (if one exists for the same round) or we show "pending" for the current round. Our systems continue to cross-check throughout the day, and the daily re-verification pass gives us a second chance to resolve any disagreement cleanly. If the evening pass still cannot settle the disagreement — a rare event — the result is escalated for editor review, and we will not publish anything until either an official clarification is issued or the editor has credible direct evidence.

Handling Holidays, Closed Days, and Scheduled Breaks

Archery associations do not play every day. Sundays, major public holidays, state or regional observances, weather cancellations, and event-specific scheduled breaks all produce closed days on which no result is declared. We handle closed days explicitly rather than treating them as errors. The day's result page shows a clear "No Result Today — Closed" status with the declared reason where known, and the archive records the day as closed rather than missing. Scheduled closures for each event are maintained in an internal calendar, and any ad-hoc closures we become aware of through official channels are reflected on the page as soon as we can verify them.

Corrections Handling

Corrections can originate from three sources: our own internal re-verification pass flagging a previously published number, a reader reporting an error with supporting evidence, or an upstream official reference updating its own record. Every correction follows the same workflow: verify the corrected value against the same consensus rule used for fresh publishing, log the old and new values with a timestamp, update the live page, and — where the error affected a high-traffic result page — display a visible correction note on the page for a reasonable period so readers who saw the original version know the change has been made. The internal corrections log is retained indefinitely and is available for editorial review.

Verified-Status Indicators and Integrity Fingerprints

On result pages where we offer a machine-verifiable indicator, the published number is accompanied by a compact "verified" badge and, where applicable, a SHA-256 hash of the canonical result record. The fingerprint lets readers and third-party tools confirm that the number displayed on the page matches the underlying data record we logged at verification time — a small integrity check against page tampering, cache corruption, or display bugs. The same fingerprint is reflected in our structured data so that search engines and crawlers see a consistent picture.
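The fingerprint idea reduces to hashing a canonical serialisation of the result record. A sketch of how such a SHA-256 fingerprint could be computed (the record fields shown are assumptions for illustration):

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """SHA-256 over a canonical JSON serialisation of the result record:
    sorted keys, no insignificant whitespace, UTF-8 bytes. Any change to
    any field yields a different hex digest, so a page (or a third-party
    tool) can check that the displayed number matches the logged record."""
    canonical = json.dumps(record, sort_keys=True,
                           separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Canonicalisation (sorted keys, fixed separators) matters: without it, two serialisations of the same record could hash differently and the check would be useless.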

Archive and Backup Policy

Every verified result is written to our historical archive, which is retained indefinitely. Daily result records are backed up to independent storage so that the historical record survives an infrastructure failure of any single component. Historical pages (Previous Results and per-event archives) are rebuilt from the same authoritative archive, which means a correction that lands on the live result page automatically flows through to the historical view once the correction log is reconciled.

We consider the archive a public trust. Readers, researchers, and data journalists rely on it to look up what was declared on a specific date, and we are deliberately cautious about changes. Archive corrections follow the same timestamped, logged workflow as live corrections.

Caching and Freshness

Result pages are served through a layered cache. The outermost layer is a short-TTL edge cache that ensures fast page loads globally; it is invalidated whenever a number transitions to "verified" or whenever a correction is published, so readers are not served stale numbers after an update. A longer cache layer covers historical archive pages, which do not change under normal operation. Structured data on each page carries its own dateModified field so that any downstream consumer — a search engine, an aggregator, a crawler — can tell exactly when the page was last meaningfully updated. The on-page "updated at" timestamp is the authoritative surface for readers; if it is showing a stale value, please send us a note.

Timezone and Daily Cut-Off

All timestamps on the site are Asia/Kolkata (IST, UTC+05:30). Each calendar day in IST is treated as a distinct results day; rounds declared after the daily cut-off roll into the next calendar day's archive. The archive entry carries both the round declaration time (as published by the association) and our own verification time, so researchers can distinguish between the actual sporting event and our record-keeping event. We do not normalise to any other timezone — readers outside India should do the arithmetic themselves, as the sport is played on an IST schedule.

What We Don't Do

We do not publish predictions, forecasts, or paid tips of any kind; our Common Numbers and Dream Numbers pages are descriptive historical-frequency references, not advice. We are not affiliated with any archery association, and we carry no paid tip content whatsoever.

Technology Stack (Non-Identifying)

The publishing pipeline runs on scheduled fetchers that poll tracked public references at appropriate intervals around each event's declared round time. Candidate numbers enter a small verification queue, which applies the consensus rule described above and emits a verified record when the rule is satisfied. The live site is a static-first architecture with edge caching, which keeps response times low even on slow mobile networks; dynamic result values are injected client-side from a lightweight verified-results feed. The same feed powers our structured data, push notifications, and historical archive. We deliberately keep the stack minimal: fewer moving parts means fewer places for quiet bugs to live.

Identifying details about specific vendors, hosting providers, or internal employees are not part of this document. Readers who need to verify our editorial independence can rely on the content-side signals — public methodology, public corrections, non-affiliation with any association, and the complete absence of paid tip content — rather than infrastructure trivia.

Accuracy Targets and Measurement

We track accuracy as an internal operational metric. Our target is a verified-result error rate at or below one in ten thousand published numbers, measured as corrections issued against numbers originally published as "verified" divided by total verified publications over a rolling ninety-day window. When the metric drifts, we investigate root causes — upstream data quality, fetcher reliability, editor fatigue, holiday edge cases — and update the pipeline accordingly. This is not a public scoreboard; it is an internal management tool. But we publish the existence of the target because accuracy that isn't measured is accuracy that is aspirational rather than real.
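The metric itself is simple arithmetic, shown here as a sketch (the target of one in ten thousand is stated above; the helper names are ours for illustration):

```python
TARGET_ERROR_RATE = 1e-4  # at or below 1 in 10,000 verified publications

def verified_error_rate(corrections: int, verified_publications: int) -> float:
    """Corrections issued against numbers originally published as
    'verified', divided by total verified publications in the window."""
    if verified_publications == 0:
        return 0.0
    return corrections / verified_publications

def within_target(corrections: int, verified_publications: int) -> bool:
    return verified_error_rate(corrections, verified_publications) <= TARGET_ERROR_RATE
```

For example, 2 corrections against 25,000 verified publications in a ninety-day window gives a rate of 0.00008, inside the target; 5 corrections against 10,000 publications (0.0005) would trigger a root-cause investigation.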

Machine-Readable Transparency

Our result pages include structured data (schema.org/SportsEvent, schema.org/Dataset, schema.org/WebPage) with machine-readable result values, declaration times, and updated-at timestamps. Search engines and third-party tools can verify that the numbers shown on the page match the underlying structured data. Where applicable, a canonical integrity fingerprint (SHA-256 of the published record) is included so that automated consumers can confirm data integrity without trusting the rendered HTML alone.
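As a rough illustration of what such markup looks like, here is a sketch that builds a schema.org `Dataset` JSON-LD block for a result page. The `@context`, `@type`, and property names are standard schema.org vocabulary; the specific field layout and values are assumptions, not our actual markup:

```python
import json

def result_jsonld(event, date, fr, sr, date_modified, sha256_fp):
    """Build illustrative JSON-LD for a daily result record."""
    doc = {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": f"{event} results for {date}",
        "dateModified": date_modified,           # machine-readable freshness
        "identifier": sha256_fp,                 # integrity fingerprint
        "variableMeasured": [
            {"@type": "PropertyValue", "name": "First Round", "value": fr},
            {"@type": "PropertyValue", "name": "Second Round", "value": sr},
        ],
    }
    return json.dumps(doc, indent=2, ensure_ascii=False)
```

A crawler that parses this block can compare `variableMeasured` against the rendered page and `dateModified` against the on-page timestamp without scraping HTML.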

Reporting a Correction

If you believe a published result is incorrect and you have credible evidence — a photograph of the official declaration, a link to the official association's own page, an archived copy of a Tier 1 reference, or equivalent — please reach out via our contact page. Every correction request is reviewed by an editor before any published number is changed. We aim to acknowledge corrections within one business day and resolve them (either publishing the correction or explaining why we are not) within three. Unsubstantiated claims (no evidence, or conflicting with multiple independent references) are logged but not actioned; we still review them in case a pattern emerges.

Frequently Asked Questions

How many sources must agree before a result is published?

At least two independent references must report the same value before a number transitions from "pending" to "verified" on the page. Where a Tier 1 official reference is available and agrees with at least one independent Tier 2 reference, that pairing also satisfies the consensus rule. If only one source is reporting, we hold the status at "pending" rather than publishing.

How fast is a typical declaration reflected on the site?

In the common case where upstream references are responsive and consensus is reached quickly, a verified number appears on the page within a few minutes of the official declaration. When consensus is delayed — conflicting references, a slow upstream — the page stays in "pending" until the rule is satisfied. We prefer a slightly slower publish over a fast but unverified one.

What happens during holidays or closed days?

The result page for the closed day shows a "No Result Today — Closed" status with the declared reason where known, and the archive records the day as closed rather than missing. Scheduled breaks are tracked in an internal calendar; ad-hoc closures are reflected as soon as we can verify them through official channels.

Why do you publish a SHA-256 fingerprint?

The fingerprint is an integrity check. It lets a careful reader or a third-party tool confirm that the number rendered on the page matches the underlying data record we logged at verification time — a small but useful defence against caching bugs, page tampering, or display errors. It is not a cryptographic proof of the declared outcome itself; it is a proof of internal consistency.

What timezone and daily cut-off do you use?

All timestamps are Asia/Kolkata (IST, UTC+05:30). Each calendar day in IST is treated as a distinct results day. Numbers declared after the daily cut-off roll into the next day's archive entry.

How long do you retain historical results?

Indefinitely. Every verified result is written to the archive, which is retained in primary storage and independently backed up. Historical pages are rebuilt from the same authoritative archive so that corrections flow through consistently.

Do you publish predictions or common-number tips?

No. Our Common Numbers and Dream Numbers pages are descriptive historical-frequency references and cultural context. They are not forecasts, tips, or advice. We do not sell paid tips or any form of advisory content.

For the editorial standards governing what we publish and how we handle corrections, conflicts of interest, and reader feedback, see our Editorial Policy. For the broader context of what this site is and is not, see our About page. For the legal posture governing our coverage, see our Disclaimer and Terms.

Last updated: 21 April 2026. This methodology page describes the production system in use at the time of writing. As our verification pipeline evolves, we will update this page and note the revision date.