Full table scan — why MySQL reads every row
MySQL 8.4 • MariaDB 11.4

Understand what a full table scan really means: O(n) row reads, predicate filtering after the read, and why selective predicates usually need an index.

Lesson family: Scan Operator

Parameters (full table scan cost model)

Rows in the table: 1000000
Row size: 256 bytes
Expected matches: 2.0% of rows

Example query this animation executes

SELECT id, name, country FROM users WHERE country = 'US';

No index on users(country) in this scenario, so MySQL must examine each row.

What you'll see in the animation

  • The scan head moves row-by-row through the users table (left) because no index can pre-filter.
  • Each row is read first, then predicate-tested. Green rows match; the rest are still read, then discarded.
  • The result box (right) only gets matching rows, but storage work happened for every row.
  • Readout and chart compare this O(n) path to indexed range access O(log n + k).
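The scan-then-filter behavior in the bullets above can be modeled in a few lines. This is a toy sketch, not MySQL internals, and it is scaled down to 100,000 rows; the point is that the read counter hits every row even though only 2% match.

```python
# Toy model of a full table scan: every row is read, and the WHERE
# predicate is applied only AFTER the read. Not MySQL internals.
N = 100_000  # scaled-down table size for the sketch
table = ((i, f"user{i}", "US" if i % 50 == 0 else "DE") for i in range(N))

rows_read = 0
result = []
for row in table:            # O(n): the scan touches every row...
    rows_read += 1
    if row[2] == "US":       # ...and only then tests the predicate
        result.append(row)

# rows_read equals N even though only 2% of rows matched
```

With an index on the predicate column, the loop would instead start from the matching entries, so `rows_read` would be close to `len(result)` rather than `N`.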

Cost readout (full table scan)

Rows read: Rows the engine must touch when no index can filter first. For a full scan, this is every row in the table.

Rows returned: Rows that pass the WHERE predicate and reach the result set.

Bytes read: Approximate table bytes read: rows read × average row size.

Estimated pages touched: Approximate 16 KiB InnoDB pages touched while scanning. More pages = more I/O.

Indexed path rows touched: Rough comparison path if a useful index existed: B+tree levels + matching rows (O(log n + k)).

Work amplification: How many times more rows a full scan touches versus an indexed path for the same selectivity.
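Plugging the scenario's parameters into the readout formulas above gives concrete numbers. This is a hypothetical cost model that mirrors the field definitions, not MySQL's actual optimizer math; the B+tree fanout of 100 is an assumption.

```python
import math

N = 1_000_000          # rows in the table (from the parameters above)
ROW_BYTES = 256        # average row size
SELECTIVITY = 0.02     # expected matches: 2.0% of rows
PAGE_BYTES = 16 * 1024 # InnoDB default 16 KiB page

rows_read = N                          # full scan: every row is touched
rows_returned = int(N * SELECTIVITY)   # rows passing the WHERE predicate
bytes_read = rows_read * ROW_BYTES     # 256 MB of table data
pages_touched = math.ceil(bytes_read / PAGE_BYTES)

# Indexed comparison path: B+tree descent plus the k matching rows.
FANOUT = 100                           # assumed keys per B+tree node
levels, capacity = 1, FANOUT
while capacity < N:                    # how many levels to cover N keys
    levels += 1
    capacity *= FANOUT

indexed_rows = levels + rows_returned  # O(log n + k)
amplification = rows_read / indexed_rows
```

Under these assumptions the scan reads 1,000,000 rows (~15,625 pages) to return 20,000, while the indexed path touches roughly 20,003 rows: about a 50× work amplification.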

Rows touched vs table size (log–log, selectivity fixed)

Learn more — when full scans are acceptable vs dangerous

Full scan is not always bad. If the table is tiny, or if the query returns a large fraction of rows, scanning can be cheaper than random index lookups.

It becomes painful when the predicate matches only a small fraction of rows (a highly selective predicate). Example: reading 10 million rows to return just 0.5% of them means near-table-sized I/O for a tiny result set.
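The arithmetic behind that example, assuming the same 256-byte average row size used in the parameters above (the row size is an assumption, not part of the example):

```python
# Cost of a full scan that returns 0.5% of a 10M-row table.
n, match_fraction, row_bytes = 10_000_000, 0.005, 256

rows_returned = int(n * match_fraction)    # 50,000 rows in the result
scan_bytes = n * row_bytes                 # ~2.56 GB read from the table
useful_bytes = rows_returned * row_bytes   # ~12.8 MB actually needed
waste_ratio = scan_bytes / useful_bytes    # 200x more I/O than the result
```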

Usual fixes: create an index on the predicate columns, avoid wrapping indexed columns in functions (which prevents index use), and keep table statistics fresh (e.g. via ANALYZE TABLE) so the optimizer can estimate selectivity correctly.

In EXPLAIN output, look for full-scan signals: type=ALL in the traditional format, "access_type": "ALL" in JSON format, or Table scan on ... in EXPLAIN ANALYZE / tree output.