Micron investment in a nutshell:
Micron, the only major U.S. memory maker, is committing roughly $200 billion to onshore DRAM and HBM production. That breaks down to about $150 billion for manufacturing and $50 billion for R&D across Idaho, New York, and Virginia. The plan is backed by CHIPS Act incentives, including $6.165 billion already awarded, an incremental $275 million, and tax credits. See investors.micron.com for details.
Boise:
Boise is the tip of the spear: Micron plans about $50 billion for two new fabs (ID1 and ID2). ID1 is targeting first wafers in mid-2027, and both fabs aim to be online by the end of 2028. Each fab includes roughly 600,000 sq ft of clean room space, among the largest in the U.S., and the site work is massive: more than 7 million pounds of blasting powder, about 70,000 tons of steel per fab, and roughly 300,000 cubic yards of concrete. Read more at Tom's Hardware.
New York:
A separate $100 billion megafab near Syracuse is planned, likely the largest private investment in New York State history. This project has a longer timeline, stretching into the 2030s. See the New York governor's announcement at governor.ny.gov.
Why now:
DRAM and HBM prices have surged as AI demand soaks up capacity. DRAM contract prices jumped about 172% year-over-year in Q3'25, and DDR5 spot prices climbed more than 300% since September, according to industry trackers. Micron says it can only fulfill roughly half to two-thirds of key customer demand. Gross margin was 56% last quarter, with guidance near 68% this quarter. In short: memory is scarce again and vendors have pricing power. Sources: Tom's Hardware and TrendForce.
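To make those percentages concrete, here's a quick back-of-envelope sketch in Python. The $100 baselines are made-up placeholders; only the percentage moves come from the trackers cited above:

    # Illustrative arithmetic only: the percentage moves are from the
    # trackers cited above; the $100 baselines are hypothetical.
    dram_contract_jump = 1.72   # +172% year-over-year, Q3'25
    ddr5_spot_jump = 3.00       # +300% since September

    baseline = 100.0            # hypothetical $ price at the earlier reference point
    print(baseline * (1 + dram_contract_jump))   # ~$272 on contract today
    print(baseline * (1 + ddr5_spot_jump))       # ~$400 on the spot market

In other words, the same part can cost roughly 2.7x to 4x what it did before the squeeze.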
HBM vs DRAM - 10-second explainer:
DRAM (Dynamic Random-Access Memory): general-purpose system memory used by CPUs and many workloads. It's fast for normal compute tasks but not specialized for the highest-bandwidth AI needs.
HBM (High-Bandwidth Memory): a 3D-stacked memory type placed next to AI accelerators (GPUs/TPUs) to feed huge amounts of data quickly during model training and heavy inference (see the back-of-envelope sketch after this list).
Wafers: the thin silicon discs chips are manufactured on.
Clean rooms: ultra-clean manufacturing spaces where wafers are processed.
For a short official primer, see NIST.
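To see why the DRAM/HBM split matters for AI, here's a rough back-of-envelope sketch in Python. Every number below is an illustrative assumption, not a vendor spec. The key fact: generating one token from a large language model means streaming roughly the full set of model weights through memory, so memory bandwidth caps token throughput:

    # Rough illustration of why accelerators need HBM. All numbers are
    # ballpark assumptions, not vendor specs.
    params = 70e9                  # a 70B-parameter model
    bytes_per_param = 2            # fp16 weights
    bytes_per_token = params * bytes_per_param   # ~140 GB read per generated token

    hbm_bw = 3e12     # ~3 TB/s: ballpark for a modern HBM-equipped GPU
    dram_bw = 100e9   # ~100 GB/s: ballpark for server-class DDR5

    print(hbm_bw / bytes_per_token)    # ~21 tokens/sec upper bound
    print(dram_bw / bytes_per_token)   # ~0.7 tokens/sec upper bound

That roughly 30x bandwidth gap is the whole reason HBM sits right next to the accelerator.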
Demand engine:
Cloud providers and AI companies are the primary buyers, and their data-center buildouts are hoovering up HBM and DRAM capacity.
Competitive heat:
South Korean players are expanding fast. SK hynix is adding a roughly $4 billion HBM packaging and R&D site in Indiana and building a new Yongin fab (first phase around ₩9.4 trillion, targeting 2027). See Purdue University news.
HBM4 noise:
Industry reports say NVIDIA's HBM4 orders for its next-gen "Vera Rubin" accelerators are leaning toward SK hynix and Samsung. Micron denies being sidelined, saying its HBM4 shipments have already begun and that its 2026 HBM volume is sold out. More at TrendForce.
Why founders should care:
Rising cloud/GPU bills: DRAM and HBM contract prices are spiking. Budget for higher training and inference costs. See Tom's Hardware.
Lock supply now: Micron says its 2026 HBM output is committed on both volume and price; multi-year deals are the play. If your product depends on high-bandwidth memory, secure contracts early. (See an earnings call summary at The Motley Fool.)
Opportunity: Anything that reduces memory footprint or improves memory locality will sell well in a tight market.
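As a simplified illustration of that last point, here's a minimal Python/NumPy sketch. Real quantization schemes (int8, int4, and so on) are more involved; this only shows the footprint arithmetic:

    # Minimal sketch: storing the same weights in a smaller dtype halves
    # the bytes you have to buy, move, and cache. Not real quantization.
    import numpy as np

    weights_fp32 = np.random.rand(1_000_000).astype(np.float32)
    weights_fp16 = weights_fp32.astype(np.float16)

    print(weights_fp32.nbytes)   # 4,000,000 bytes
    print(weights_fp16.nbytes)   # 2,000,000 bytes, half the footprint

When memory is the scarce input, that kind of saving translates directly into lower bills or more headroom.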
Bottom line:
Memory has moved from commodity to strategic choke point. Expect tight supply through 2026 and beyond. If you build AI systems, design as if every byte is expensive and plan procurement sooner rather than later. For further context, see The Verge.