The weekly ranking number looks clean. For anyone tracking YieldMax coverage data quality, it updates on a consistent schedule, sorts neatly from high to low, and makes comparison across tickers feel straightforward.
Over time, though, a pattern kept showing up that didn’t sit right. Occasionally, a figure would look unusually high — or unusually low — and the underlying data didn’t back it up. Not because the calculation was wrong, but because only a fraction of the week’s activity had actually been observed. One or two intraday files. No holdings structure. A number that was arithmetically correct but built on a narrow slice of what actually happened.
That’s the problem coverage is meant to surface. Not whether the strategy is working, but whether what’s being tracked is a complete picture or a partial one.
What Coverage Actually Means in This Data
Coverage, in the context of DiviTracker, is not a performance metric. It doesn’t measure how aggressively a fund is running its options strategy, how large the premium flows are, or anything about the distribution. It measures something more specific: how much of the fund’s weekly option activity was actually observed in the data.
Two inputs make up the coverage read. The first is the count of intraday option files captured during the weekly window — each file represents one trading day’s worth of observed executions. A week with five intraday files means activity was observed across the full Mon–Fri window. A week with one or two means most of the week wasn’t captured. The second input is the holdings row count from the most recent snapshot, which reflects how many open option positions were visible at the time. A holdings file with no option rows means the current position structure simply isn’t there to read.
Neither of these is a judgment on the fund. They are a measure of the data’s completeness — and that changes what the main number can tell you.
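The two inputs behind a coverage read can be sketched as a small data structure. This is an illustration only: the field names, the five-file definition of a full week, and the two-file cutoff for thin capture are taken from the description above, not from any actual DiviTracker schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CoverageRead:
    """One ticker's weekly coverage inputs (field names are illustrative)."""
    intraday_files: int  # intraday option files captured in the Mon-Fri window, 0..5
    holdings_rows: int   # open option positions visible in the latest holdings snapshot

    @property
    def full(self) -> bool:
        # All five trading days observed, and the holdings structure was readable.
        return self.intraday_files == 5 and self.holdings_rows > 0

    @property
    def thin(self) -> bool:
        # The weekly recaps flag one- and two-file weeks as the thinnest capture.
        return self.intraday_files <= 2

# A five-file week with visible holdings is a full read;
# a two-file week is a partial one regardless of what the holdings show.
print(CoverageRead(5, 12).full, CoverageRead(2, 12).thin)
```

Keeping the two inputs separate matters: a week can have five intraday files and still fail the holdings check, and vice versa, which is why neither one alone stands in for coverage.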
The Same Week: How YieldMax Coverage Data Quality Varies
To make this concrete: the cards below show three widely followed YieldMax ETFs — MSTY, NVDY, and TSLY — during the same calendar week. The intraday capture count varied across all three, even though they were observed over the same Mon–Fri window.
Data from the week of 2025-11-17. Shown as a historical example of coverage variation — not a forecast or current signal.
All three tickers ran their strategies through that same week. Holdings snapshots were available for all three. The difference was purely in how much intraday execution data was captured — and that directly affects how much confidence sits behind any net per share figure derived from those files.
TSLY’s two-file capture meant that three trading days’ worth of option activity simply wasn’t in the observed data. The number that came out of those two days may be accurate as a partial read, but it isn’t the same as a number backed by a fuller window. The strategy didn’t change. The observation did. That distinction is what coverage is tracking.
What Low Coverage Does — and Doesn’t — Tell You
Low coverage is not evidence that a fund is underperforming or doing something differently. A week with one or two intraday files might reflect a quieter execution period, a gap in data availability, or simply how the capture window aligned with when trades were actually executed. Funds like TSLY or NVDY don’t adjust their approach based on whether an outside observer captured one day or five.
What low coverage does tell you is that the observed figure is a partial read. That distinction matters most when a number looks unusually high or unusually low. A strong net per share reading under two-file capture could reflect a genuinely active week that happened to fall on those two observed days — or it could be a week that looked active from a limited vantage point. Without fuller capture, it’s not possible to tell which.
The practical implication isn’t to dismiss thin-coverage readings. It’s to hold them at a different level of confidence. A value that appears once under partial capture is a first observation. A value that repeats across multiple weeks under consistent coverage starts to look like a pattern. Those two things carry different interpretive weight, and the weekly recap format is built to flag the difference explicitly rather than let it disappear into the ranking.
How to Read Coverage Alongside Net per Share
The most useful habit is to read both figures at the same time — not to check coverage first and then decide whether the number is worth looking at, but to treat them as a pair from the start. Net per share tells you what the observed premium flow looked like on a per-share basis during the week. Coverage tells you how complete that observation was. Together, they give a more grounded read than either figure alone.
A few combinations appear regularly in the data. When net per share is elevated and coverage is full — five intraday files, a meaningful holdings row count — the signal carries the most interpretive weight. When net per share is elevated but coverage is thin, the reading is technically valid but structurally uncertain; it may reflect real premium activity compressed into a narrow window, or it may be an artifact of partial capture. When net per share is moderate under full coverage, the signal is smaller but more reliable. When net per share lands at zero under full coverage, that’s a separate data point worth noting — it suggests the week genuinely produced no net premium flow, rather than simply not being captured.
None of these combinations tell you what to do with the information. They tell you how much confidence to place in it when deciding whether a signal is worth tracking into the following weeks.
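The four combinations above can be folded into a single lookup. Since the article never defines a numeric cutoff for "elevated," that judgment is left to the caller as a boolean flag; the labels below paraphrase the text and are hypothetical, not site terminology.

```python
def weigh(net_per_share: float, elevated: bool, full_coverage: bool) -> str:
    """Map a weekly net-per-share reading plus its coverage state to an
    interpretive label. The 'elevated' flag is a caller-supplied judgment,
    not a defined threshold; labels are illustrative."""
    if full_coverage and net_per_share == 0:
        # Fully observed and still zero: a genuine no-flow week, not a capture gap.
        return "zero flow, fully observed"
    if elevated and full_coverage:
        return "most interpretive weight"
    if elevated and not full_coverage:
        # Arithmetically valid, structurally uncertain.
        return "valid but partial: hold at lower confidence"
    if full_coverage:
        return "smaller signal, more reliable"
    return "partial read"
```

Note the ordering: the zero-under-full-coverage case is checked first, because it is the one combination where the number itself, rather than its size, is the signal.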
Coverage in DiviTracker’s Weekly Recaps
In the weekly recap format, coverage shows up in two places. The main movers table includes an "intra files" column and a "holdings rows" column for every ranked ticker. The coverage notes section at the bottom of each post singles out the tickers with the thinnest capture — typically those with one or two intraday files — and explains whether that shifts how the number should be read.
The watchlist section applies the same logic. A ticker that ranks strongly under thin coverage goes on the watchlist not because the number is wrong, but because one partial observation isn’t enough to establish a pattern. The follow-up question is whether the same level of activity appears in subsequent weeks under comparable or better coverage. When it does, the signal starts to carry weight. When it doesn’t, the original read was most likely a partial snapshot of a period that looked active from a limited angle.
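The follow-up test the watchlist describes, whether the same activity repeats under comparable or better coverage, can be written down directly. Everything here (the function name, the tuple shape for follow-up weeks) is a hypothetical sketch of that rule, not the site's implementation.

```python
def confirmed(first_week_files: int, later_weeks: list[tuple[int, bool]]) -> bool:
    """A thin-coverage watchlist entry starts to look like a pattern only
    when the activity shows up again under coverage at least as good as
    the original observation. Inputs: the intraday file count behind the
    first read, then (intraday_files, still_strong) per follow-up week."""
    return any(files >= first_week_files and strong
               for files, strong in later_weeks)

# First seen on a two-file week; a later strong five-file week confirms it,
# while a strong one-file week would not, because its coverage is worse.
```

Requiring `files >= first_week_files` encodes "comparable or better coverage": a repeat observed through an even narrower window would not settle anything.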
Coverage is surfaced explicitly for exactly this reason. A number can be arithmetically correct and observationally incomplete at the same time — and once those two things get conflated, the data stops being useful. Keeping the boundary visible is part of what makes the weekly record readable over time. For a look at how coverage has varied across recent weeks, the weekly recap archive shows the full context for each period.
What This Site Documents — and What It Leaves Open
DiviTracker doesn’t fill in what coverage leaves out. There’s no estimate of what a fuller observation window might have produced, no adjustment applied to thin-capture figures, and no confidence modifier added to the ranking. The data as observed is what gets documented — and the coverage count is what shows how much of the underlying activity that data actually represents.
A reader who sees a strong net per share figure without coverage context is looking at an output without the conditions behind it. A reader who sees that same figure next to a two-file coverage note has the full picture and can weigh it accordingly. That’s the difference coverage tracking is trying to preserve — not to lower confidence in the data, but to make the limits of each observation as visible as the observations themselves.
Income sources are tracked weekly. The observation window behind each figure is part of what makes that tracking honest, and coverage is how that window stays visible.
This site does not provide investment advice, distribution forecasts, or recommendations of any kind. All data is shown for informational and observational purposes only. Historical coverage figures are illustrative examples drawn from observed data and do not represent current fund activity or future signal reliability.