The Holy Grail of Computing: A Self-Authoring FPGA

Rex St John
2 min read · Feb 18, 2024


There has always been a tension in the world of semiconductors between general-purpose computing (CPUs, GPUs, graph processors) and special-purpose computing (TPUs, ASICs, MCUs, DSPs, PICs). The semiconductor industry has many strategies for attempting to create the "best of both worlds."

The first strategy is to combine many accelerators with general-purpose processors, sensors, and memory on the same die, yielding an SoC. The second strategy involves combining various chiplets, which enables composing IP blocks into a single package. A third strategy involves using a hybrid contraption known as an FPGA.

FPGAs have always been a weird bird in the semiconductor space. Intel and AMD both made large acquisitions (Altera and Xilinx, respectively) out of a desire to integrate this technology directly into their datacenter networking portfolios.

At its core, an FPGA is composed of a "sea of gates." These hardware gates can be programmed dynamically, but only through convoluted, closed-source, esoteric tools that demand specialist skill sets. Once programmed and configured, the "sea of gates" can offer ASIC-like performance for specific algorithms, which is especially desirable when paired with a CPU or GPU.
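To make the "sea of gates" idea concrete, here is a minimal software model of the lookup table (LUT), the basic programmable cell in real FPGA fabrics: a k-input LUT stores 2^k configuration bits and outputs the bit selected by its inputs. This is a toy sketch in Python, not any vendor's API, but it shows why reprogramming the fabric amounts to rewriting configuration memory.

```python
class LUT:
    """Software model of a k-input FPGA lookup table."""

    def __init__(self, truth_table):
        # truth_table[i] is the output when the inputs, read as a
        # binary number, equal i. "Reprogramming" the FPGA is just
        # loading a new table into the same physical cell.
        self.truth_table = truth_table

    def eval(self, *inputs):
        # Pack the input bits into an index, MSB first.
        index = 0
        for bit in inputs:
            index = (index << 1) | bit
        return self.truth_table[index]


# The same cell becomes an AND gate or an XOR gate purely by
# swapping its configuration bits -- no new silicon required.
and_gate = LUT([0, 0, 0, 1])
xor_gate = LUT([0, 1, 1, 0])
print(and_gate.eval(1, 1))  # 1
print(xor_gate.eval(1, 1))  # 0
```

A bitstream, in essence, is millions of such tables plus the routing between them, which is why whoever controls the bitstream format controls what the chip can become.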

While the rest of the industry has plowed ahead and subsequently hit the limits of Moore's Law, part of me wonders if there might be a renewed role for FPGAs in the world we are entering, which I am calling "Liquid Computing": a model where AI dynamically generates the operating system, kernel, drivers, and even the processor itself, and the system reconfigures itself on demand via a "Universal Kernel."

In the long term, I can imagine the end-to-end supply chain being fully automated, simulated, and curated by AI. In the near term, I wonder if a fully open-source FPGA, paired with a CPU running an LLM OS that can dynamically configure the FPGA in real time on command to accelerate functions in hardware, might be a sort of "Holy Grail."

Such a system would have all the benefits of a general purpose computer while fusing these with ASICs which can be reconfigured at runtime.
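The control loop such a system implies can be sketched in a few lines. Everything here is hypothetical: the names (`pick_accelerator`, `FpgaStub`, the `.bit` filenames) are invented for illustration, and no real toolchain works this way today. The point is only to show the shape of the idea: profile a workload, have the LLM map the hot spot to a hardware design, and load it into the fabric at runtime.

```python
def pick_accelerator(hot_function):
    """Stand-in for the LLM OS: map a hot software function to a
    bitstream. A real 'Liquid Computing' system would synthesize or
    retrieve the hardware design on demand."""
    known = {"matmul": "matmul_unit.bit", "aes": "aes_core.bit"}
    return known.get(hot_function)


class FpgaStub:
    """Toy model of a runtime-reconfigurable FPGA."""

    def __init__(self):
        self.loaded = None

    def reconfigure(self, bitstream):
        # Real partial reconfiguration streams configuration bits
        # into a region of the fabric; here we just record it.
        self.loaded = bitstream


def liquid_step(fpga, hot_function):
    """One iteration of the loop: detect a hot spot, fuse it into
    hardware if an accelerator exists, otherwise leave the FPGA as-is."""
    bitstream = pick_accelerator(hot_function)
    if bitstream is not None:
        fpga.reconfigure(bitstream)
    return fpga.loaded


fpga = FpgaStub()
print(liquid_step(fpga, "matmul"))  # matmul_unit.bit
```

Every hard part of the real problem (profiling, synthesis time, timing closure, verification) is hidden inside these stubs, which is exactly where the open tooling discussed below would have to do the work.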

The biggest barriers to such a system are the convoluted software, the bulky tooling, and the closed-source nature of the bitstream format used to program the device.

In my ideal world, a company like Tenstorrent might come along and say: "Let's throw away the entire rule book. The instruction set is open. The FPGA is open. The OS is open. The kernel is open. The drivers are open. It runs an LLM OS that automatically figures out what to do, and the hardware rewrites itself at runtime."

I can dream, can't I?

But what do I know? I am just a guy who worked at Intel, Arm, and NVIDIA and interviewed hundreds of the industry's leading CTOs over a decade. That experience couldn't possibly prepare me to write this essay with any sort of credibility.

Or would it?
