Memory Layout: Arrays store elements contiguously in memory. Your CPU cache loves this - it can load multiple elements in one cache line.
CPU Cache: For small arrays (roughly under 100 small elements), the entire array fits in L1 cache. A linear scan over a few dozen bytes can beat a hash table lookup outright.
V8 Optimizations: JavaScript engines heavily optimize includes() for small arrays, with fast paths such as loop unrolling (and, on some targets, SIMD instructions).
Set Overhead: Hash computation, collision handling, and memory indirection add constant overhead that only pays off at scale.
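A minimal micro-benchmark sketch of the claim above, runnable in Node. The sizes, iteration count, and `benchLookup` helper are illustrative assumptions; absolute timings will vary by engine, hardware, and JIT warm-up, so treat the numbers as directional only.

```javascript
// Compare Array.prototype.includes vs Set.prototype.has at various sizes.
// Worst case for the linear scan: the needle is the last element.
function benchLookup(size, iterations = 1_000_000) {
  const arr = Array.from({ length: size }, (_, i) => i);
  const set = new Set(arr);
  const needle = size - 1;

  let t = performance.now();
  for (let i = 0; i < iterations; i++) arr.includes(needle);
  const arrayMs = performance.now() - t;

  t = performance.now();
  for (let i = 0; i < iterations; i++) set.has(needle);
  const setMs = performance.now() - t;

  return { size, arrayMs, setMs };
}

for (const size of [4, 16, 64, 256, 1024]) {
  const { arrayMs, setMs } = benchLookup(size);
  console.log(`n=${size}: includes ${arrayMs.toFixed(1)}ms, has ${setMs.toFixed(1)}ms`);
}
```

On typical hardware the array wins at the small end and the Set pulls ahead as the size grows, though the exact crossover point is engine-dependent.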
// Here's what's happening under the hood (cycle counts are rough estimates):
// ARRAY (small size):
// Memory: [a][b][c][d] - contiguous, cache-friendly
// Search: the whole array fits in one cache line; tight compare loop (possibly SIMD)
// Time: on the order of ~1-5 CPU cycles
// SET (any size):
// Memory: hash_table -> bucket -> entry chain
// Search: hash(key) -> bucket lookup -> comparison
// Time: on the order of ~10-20 CPU cycles, plus memory indirection
// The crossover happens when the Array's O(n) scan cost
// overtakes the Set's constant overhead.
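One way to put the crossover idea to work is a size-adaptive membership helper: linear scan below a threshold, Set above it. This is a sketch; the `makeLookup` name and the threshold of 32 are illustrative assumptions, not measured constants, so tune the cutoff for your own workload.

```javascript
// Illustrative threshold - the real crossover point is engine- and data-dependent.
const SMALL_THRESHOLD = 32;

function makeLookup(items) {
  if (items.length <= SMALL_THRESHOLD) {
    const arr = [...items];
    return (x) => arr.includes(x); // linear scan: cache-friendly for small n
  }
  const set = new Set(items);
  return (x) => set.has(x); // hashed lookup: wins once n is large
}

const isVowel = makeLookup(['a', 'e', 'i', 'o', 'u']);
console.log(isVowel('e')); // true
console.log(isVowel('z')); // false
```

The caller gets the same membership semantics either way; only the lookup strategy changes with the collection size.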