Article by Ayman Alheraki on January 11 2026 10:36 AM
The mantra "smaller is better" has driven the semiconductor industry for decades. Shrinking transistor sizes allows more of them to fit on a single chip, improving performance, reducing power consumption, and lowering costs over time. This trend has propelled the computing revolution, but recent developments suggest we may be nearing the physical limits of silicon-based technology. Is the shift from nanometers to angstroms a natural evolution, or a sign that we’re approaching the end of miniaturization?
In the early days of microprocessor design, manufacturing technology was measured in micrometers (µm). For example, the Intel 4004, released in 1971, featured transistors with dimensions around 10 µm. Over the years, manufacturing nodes continued to shrink:
| Year | Process Node | Notable Processor |
|---|---|---|
| 1985 | 1.5 µm | Intel 80386 |
| 1993 | 0.8 µm | Intel Pentium |
| 2000 | 180 nm | Pentium III |
| 2006 | 65 nm | Core 2 Duo |
| 2019 | 10 nm | Ice Lake |
| 2023+ | 3 nm | Apple M3 (TSMC N3) |
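The cumulative shrink implied by the table can be sketched with a little arithmetic. Treat this as an order-of-magnitude illustration only, since (as discussed below) modern node names are no longer literal feature sizes:

```python
# Rough arithmetic on the process nodes from the table above.
# Caveat: modern node names are marketing labels, so these numbers
# are order-of-magnitude illustrations, not physical measurements.
nodes_nm = {
    1971: 10_000,  # Intel 4004, ~10 um features
    1985: 1_500,   # Intel 80386
    2000: 180,     # Pentium III
    2023: 3,       # Apple M3 class
}

linear_shrink = nodes_nm[1971] / nodes_nm[2023]   # one-dimensional shrink
density_gain = linear_shrink ** 2                 # area scales as the square

print(f"Linear shrink 1971 -> 2023: ~{linear_shrink:,.0f}x")
print(f"Ideal transistor-density gain: ~{density_gain:,.0f}x")
```

Even under this naive reading, a 3,000-fold linear shrink translates into a roughly ten-million-fold gain in transistor density.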
Originally, node names such as "90nm" or "14nm" referred to actual physical features of the transistor. Today, however, these labels have become more of a marketing term than a literal measurement. For instance, the features in a "5nm" node might actually be larger than 5nm. This discrepancy has driven the industry to seek more accurate or meaningful labels for next-generation technologies.
In 2021, Intel introduced a new naming scheme to reflect more advanced manufacturing techniques and avoid misleading associations with previous node naming. This new scheme includes the use of angstroms (Å):
- Intel 20A = 20 angstroms = 2 nanometers
- Intel 18A = 18 angstroms = 1.8 nanometers
What is striking here is that angstrom-level precision means chip features are approaching the size of individual atoms.
- 1 angstrom = 0.1 nanometer
- A single silicon atom measures approximately 2 angstroms across
- A feature size of 18A means working with structures that are only about 9 silicon atoms wide
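To make the unit conversion concrete, here is a minimal sketch. The ~2 Å silicon atom size is the same approximation used above:

```python
def angstroms_to_nm(a: float) -> float:
    """Convert angstroms to nanometers (1 angstrom = 0.1 nm)."""
    return a * 0.1

SILICON_ATOM_A = 2.0  # approximate diameter of a silicon atom, in angstroms

for node_a in (20, 18):
    nm = angstroms_to_nm(node_a)
    atoms = node_a / SILICON_ATOM_A
    print(f"Intel {node_a}A = {nm:g} nm = about {atoms:.0f} silicon atoms wide")
```

Running this reproduces the numbers above: 20A is 2 nm, about 10 atoms wide; 18A is 1.8 nm, about 9 atoms wide.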
This raises a critical question: can we continue to scale transistors any further?
Miniaturizing transistors is not just an engineering challenge—it’s a matter of fundamental physics:
- Quantum tunneling: at extremely small sizes, electrons begin to "leak" through insulating layers, causing power losses and instability.
- Heat and signal interference: smaller transistors packed more densely generate more heat per unit area, and managing crosstalk becomes increasingly difficult.
- Manufacturing complexity: producing layers just a few atoms thick requires Extreme Ultraviolet Lithography (EUV), which remains extraordinarily complex and expensive.
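The tunneling problem can be illustrated with the textbook WKB estimate for a rectangular barrier. This is a back-of-the-envelope sketch, not a device model; the 3 eV barrier height is an illustrative round number, of the same order as the SiO2 conduction-band offset to silicon:

```python
import math

HBAR = 1.054_571_8e-34  # reduced Planck constant, J*s
M_E = 9.109_383_7e-31   # electron rest mass, kg
EV = 1.602_176_6e-19    # 1 eV in joules

def tunneling_probability(barrier_ev: float, thickness_nm: float) -> float:
    """Crude WKB estimate of electron tunneling through a rectangular barrier.

    T ~ exp(-2 * kappa * d), with kappa = sqrt(2 m V) / hbar.
    Illustrative only: real gate stacks are not rectangular barriers.
    """
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

# Thinner insulators leak exponentially more.
for t in (3.0, 2.0, 1.0, 0.5):
    p = tunneling_probability(3.0, t)
    print(f"{t:.1f} nm barrier -> tunneling probability ~ {p:.2e}")
```

The key takeaway is the exponential dependence on thickness: halving the insulator from 1 nm to 0.5 nm increases the leakage probability by several orders of magnitude, which is why gate leakage becomes unmanageable as dimensions shrink.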
To overcome these challenges, companies are not relying solely on shrinking dimensions, but also on new architectural and material innovations:
- FinFETs have been the industry standard since the 22nm node.
- GAAFETs (Gate-All-Around FETs), which Intel plans to use for its 20A and 18A nodes, offer better control over current flow and reduced leakage.
- Alternatives to silicon, such as graphene and other 2D materials, can function at scales where silicon fails.
Rather than shrinking components further, the future may lie in entirely new computing paradigms that don’t rely on transistor miniaturization, like quantum or photonic computing.
While angstrom-level fabrication may seem like a niche concern for engineers, it directly affects consumers:
- Better performance: more transistors packed into a chip lead to faster processing.
- Improved battery life: lower power consumption benefits mobile and portable devices.
- AI acceleration: advanced nodes allow the integration of specialized AI processing cores.
So, can scaling continue forever? Theoretically, no. Practically, it may continue for a while, but not indefinitely.
We are approaching atomic limits, beyond which transistors can no longer function reliably. As a result, innovation is shifting toward:
- 3D chip stacking and chiplet designs
- Domain-specific accelerators
- Improved hardware-software co-design
The transition to angstroms is not just a rebranding exercise—it reflects how close we are to the atomic scale in modern chip manufacturing. It also signals a paradigm shift: the future of performance improvements may come less from shrinking and more from rethinking computing entirely—from architecture to materials to design philosophy.
The nanometer race may be nearing its final lap, but innovation in computing is far from over.