Statistical reference numbers derived from historical Teer result patterns. Covers Shillong, Khanapara, Juwai and Night Teer. For informational reference only.
We analyse the last 90 days of historical results to find which numbers appear most frequently in each position (FR and SR) for each game.
The "house" digit (tens place) and "ending" digit (units place) are analysed separately to identify patterns across weeks and months.
The most statistically frequent numbers are surfaced as a reference. This is a descriptive statistical summary — it describes what happened historically, not what will happen.
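The house/ending split described above can be sketched in a few lines of Python. This is an illustrative sketch, not the site's actual pipeline; the `digit_frequencies` helper and the sample results are hypothetical.

```python
from collections import Counter

def digit_frequencies(results):
    """Tabulate 'house' (tens) and 'ending' (units) digit counts
    for a list of two-digit Teer results (0-99)."""
    house = Counter(r // 10 for r in results)
    ending = Counter(r % 10 for r in results)
    return house, ending

# Hypothetical sample of past FR results (illustrative only).
sample = [47, 3, 85, 47, 92, 30, 7, 85, 61, 47]
house, ending = digit_frequencies(sample)
print(house.most_common(3))   # most frequent tens digits
print(ending.most_common(3))  # most frequent units digits
```

The same tabulation runs once per game and per round (FR and SR), since each has its own independent history.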
A target number, in the context of Teer archery results, is a descriptive statistic. That word "descriptive" is doing heavy lifting and is worth unpacking carefully. In statistics, we distinguish between descriptive statistics (which summarise what has already happened) and inferential or predictive statistics (which estimate what will happen next or generalise from a sample to a wider population). The target numbers on this page are firmly in the descriptive category. They tell you, in compressed form, which two-digit outcomes have appeared most often in the archive of past Shillong, Khanapara, Juwai and Night Teer results over a recent window.
Why publish them, then, if they do not predict anything? Because the archive itself is a matter of public record — the results are declared publicly every day by each archery association — and making it easier to see the frequency distribution is a genuine service to anyone curious about the dataset. Journalists, students of statistics, Northeast India researchers, data hobbyists and curious readers all benefit from being able to look up, at a glance, what the recent history looks like. The responsibility we carry on this page is to present those summaries accurately and, just as importantly, to communicate clearly that a descriptive summary of past archery outcomes is not a forecast.
Teer results are two-digit numbers from 00 to 99, which gives us a sample space of exactly 100 distinct outcomes. If each outcome were perfectly equally likely (a "fair" or "uniform" distribution), the probability of any specific number appearing in a given round would be 1 in 100, or 1%. That 1% baseline is the north star against which every frequency observation must be measured.
Now consider a small worked example. Suppose we look at the last 90 Shillong Teer FR (first round) results and tabulate how often each of the 100 possible numbers appeared. If the process is fair and each event is independent, the expected count for each number is 90 × (1/100) = 0.9. In practice you will see some numbers appear twice, many appear once, and many appear zero times over 90 rounds. That is not a signal of bias — it is expected random variation in a small sample. Statisticians call this distribution "multinomial with uniform cell probabilities", and the variance around the expected count is easy to compute.
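A minimal simulation, assuming a fair uniform process, makes the expected scatter concrete: even with every number equally likely, many numbers fail to appear at all in 90 rounds while a few appear two or three times.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the illustration is reproducible

# Simulate 90 fair rounds: each draw is uniform over 00-99.
draws = [random.randrange(100) for _ in range(90)]
counts = Counter(draws)

# Expected count per number is 90 * (1/100) = 0.9, yet individual
# counts scatter widely around that figure.
appeared_zero = 100 - len(counts)   # numbers that never came up
max_count = max(counts.values())    # the "hottest" number's count
print(f"numbers never drawn: {appeared_zero}, highest count: {max_count}")
```

Any single run of this simulation looks "patterned" to the eye, which is exactly the point: apparent hot numbers emerge from a process that is fair by construction.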
Here is the critical insight: a number that appeared 3 times in the last 90 rounds, while nominally "three times more frequent than expected", is still entirely consistent with pure randomness. Under a fair process with an expected count of 0.9 in 90 trials, roughly 95% of per-number counts fall between 0 and about 3 just by chance. Calling such a number "hot" confuses random fluctuation with a real pattern — a mistake formally known as the hot-hand fallacy or the gambler's fallacy, depending on which direction you misread it.
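The claim that a count of 3 is consistent with chance can be checked directly with a binomial tail probability. The `binom_tail` helper below is written for this illustration; only `math.comb` comes from the standard library.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance that one specific number appears 3+ times in 90 fair rounds.
p_single = binom_tail(90, 0.01, 3)

# Chance that at least one of the 100 numbers does so, treating the
# counts as approximately independent: a near-certainty that some
# "hot-looking" number appears by luck alone.
p_any = 1 - (1 - p_single) ** 100
print(f"{p_single:.3f} per number, {p_any:.3f} for at least one of 100")
```

Per number the event is uncommon (about 6%), but across all 100 numbers it is practically guaranteed that some number reaches 3 appearances in a fair 90-round window.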
Two formal results from probability theory deserve direct explanation here because they are at the heart of why target numbers are not predictions.
The Law of Large Numbers states that as the number of independent identically-distributed trials grows, the observed frequency of each outcome converges toward its true probability. If you were to record Shillong Teer results for thousands of days, and if the underlying process is fair, you would indeed see each of the 100 numbers appear close to 1% of the time. But the law tells you exactly nothing about which number will come up tomorrow. It is a statement about long-run averages, not about specific next events.
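The Law of Large Numbers can be watched in action with a short simulation of a fair process: the running frequency of one arbitrary number drifts toward the true 1% as the simulated history grows.

```python
import random

random.seed(7)  # reproducible illustration

target = 42  # an arbitrary number to track
hits = 0
for day in range(1, 100_001):
    if random.randrange(100) == target:
        hits += 1
    if day in (100, 1_000, 10_000, 100_000):
        print(f"after {day:>6} rounds: frequency = {hits / day:.4f}")

# The running frequency converges toward 0.01 -- yet nothing in this
# history says what round 100_001 will produce.
```

Note what converges: the long-run average, not any ability to name the next outcome.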
Independence of events means that the outcome of one round has no causal influence on the outcome of any subsequent round. The archers on day 50 do not remember what the result was on day 49. Each daily archery event is a fresh physical experiment whose outcome is determined by that day's conditions — not by whatever number came up last week. This is the formal reason the gambler's fallacy ("number 73 hasn't appeared in 20 rounds, so it's due") is, in fact, a fallacy. A number is never "due" in an independent-event process.
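The "due number" intuition can also be tested empirically on simulated fair draws. If independence holds, the frequency of a number immediately after a 20-round drought should still be about 1%, not elevated.

```python
import random

random.seed(3)  # reproducible illustration

draws = [random.randrange(100) for _ in range(1_000_000)]
target = 73

# Among rounds preceded by a 20-round drought of `target`,
# how often does `target` appear next? If "due" were real,
# this frequency would exceed the 1% baseline.
gap, after_drought, hits = 0, 0, 0
for d in draws:
    if gap >= 20:
        after_drought += 1
        if d == target:
            hits += 1
    gap = 0 if d == target else gap + 1

print(f"frequency after a 20-round drought: {hits / after_drought:.4f}")
```

The drought-conditioned frequency sits at the same 1% as the unconditional one, which is the empirical face of independence.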
Put these two principles together and the conclusion is clean: past frequency is a faithful summary of the past, but it carries zero causal signal about any future round. Target numbers are rear-view mirrors, not telescopes.
Let us walk through a concrete illustration so the statistical argument is not just abstract. Suppose that over the last 180 days of Shillong Teer FR results, the most frequent number is 47, with 5 appearances.
At first glance, 47 looks "hot" with 5 appearances. But the expected count under a fair uniform distribution is 180 × (1/100) = 1.8 appearances. The standard deviation of the count for each number is roughly √(180 × 0.01 × 0.99) ≈ 1.34. So a count of 5 is about (5 − 1.8) / 1.34 ≈ 2.4 standard deviations above the mean. That sounds impressive, but when you are looking at 100 numbers simultaneously, statisticians expect at least some of them to fluctuate this far above the mean purely by chance — a phenomenon called multiple comparisons. After correcting for the fact that you looked at 100 numbers, the apparent "hotness" of 47 is not statistically significant.
The correct interpretation is sober: over this particular 180-day window, the observed frequencies are entirely consistent with a fair, independent, random process. Neither "47 is hot and therefore more likely" nor "numbers that appeared zero times are due and therefore more likely" follows from the data. Both readings are classic misreadings of randomness. The published target numbers are, at best, convenient labels for "numbers that happened to appear often recently" — useful for navigating the archive, useless as forecasts.
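The multiple-comparisons point made above is easy to demonstrate by simulating many fair 180-day windows and asking how often some number reaches 5 appearances purely by chance.

```python
import random
from collections import Counter

random.seed(11)  # reproducible illustration

windows = 2_000  # simulated 180-day windows of fair uniform draws
with_hot = 0     # windows in which some number appears 5+ times
for _ in range(windows):
    counts = Counter(random.randrange(100) for _ in range(180))
    if max(counts.values()) >= 5:
        with_hot += 1

# In the vast majority of perfectly fair windows, at least one
# number reaches 5 appearances by luck alone.
print(f"fair windows containing a '5-hit' number: {with_hot / windows:.1%}")
```

A "number like 47" shows up in almost every fair window, which is why its apparent hotness carries no significance once you account for having scanned all 100 numbers.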
If you are statistically literate and want to get genuine educational value out of browsing target numbers, here is what the frequency distribution can legitimately tell you: which numbers happened to appear most often in the recent window, how far the observed counts deviate from the uniform 1% baseline, and whether those deviations are consistent with ordinary random variation.
Here is what the frequency distribution cannot tell you, no matter how carefully you stare at it: which number will appear in the next round, which numbers are "due" after an absence, or any way to improve on the 1-in-100 odds of a specific outcome.
We reinforce this framing here because Google's AdSense review guidelines and basic honesty both demand it, and because we believe readers are better served by a site that respects their intelligence than by one that sells them a mirage.
The practice of tracking Teer result frequencies has a surprisingly long history in Meghalaya. Before the digital era, enthusiasts kept hand-written ledgers noting which numbers came up in each round, and local newspapers in Shillong and Guwahati occasionally published summary tables. These ledgers were passed around in tea shops and archery association offices and formed an informal oral statistics culture long before spreadsheets existed.
With the arrival of personal computers and later the internet, those paper ledgers migrated to Excel files, then to websites, then to automated data pipelines. What you are reading today is a continuation of that tradition — a digital publication of publicly declared results in aggregated, searchable form. The mathematics has not changed. A ledger from 1985 and a real-time dashboard from 2026 are both doing the same thing: recording what happened. Neither predicts what will happen.
It is worth acknowledging that for many decades, informal "common-number" and "target-number" claims circulated in print and online without the statistical framing we insist on here. The history of Teer culture includes plenty of confident-sounding predictive claims that never withstood any rigorous test. Publishing target numbers alongside the statistical literacy context to interpret them correctly is our small contribution to making this public data more useful and less misleading.
Historical, cultural and statistical references only — nothing on this page constitutes a prediction, a recommendation, or a guarantee of any Teer outcome.
A Teer target number is a statistical reference figure derived from historical result patterns of Teer archery events. It summarises which numbers have appeared most frequently in past results. These are informational references only.
Target numbers have no predictive accuracy. They are purely descriptive statistics about past events. Teer results are determined by a live archery event each day and are not influenced by any past pattern or frequency analysis.
Target numbers are recalculated each day as new historical results are added. As the rolling window of past results changes, the frequency distribution shifts, which may change which numbers appear most frequently in the historical dataset.
Yes. Each game — Shillong, Khanapara, Juwai and Night Teer — has its own independent result history. The statistical patterns are calculated separately for each game.
Each daily archery event is statistically independent of every previous event. The law of large numbers tells us that frequencies converge toward their true probability over many trials, but it does not tell us which specific number will appear in any single upcoming event. Historical frequency describes the past; it has no causal link to the next result.
With 100 possible two-digit outcomes (00 through 99), the base probability of any specific number appearing in a given round is 1 in 100, or 1%. All frequency analysis is measured against this uniform baseline. Small-sample deviations from 1% are expected random variation, not signals of predictive bias.
No. Any process whose outcomes are independent and near-uniformly distributed is, by its mathematical nature, unpredictable on a per-round basis. No frequency analysis, no dream association, no "secret formula" changes this. Claims to the contrary are, at best, misreadings of randomness.