Hardware description languages (HDLs) such as VHDL are notoriously difficult to learn. Despite their complexity, these powerful tools are necessary to run custom algorithms on field-programmable gate arrays (FPGAs), such as those in all-in-one Moku test devices from Liquid Instruments. Other languages like Python are supported by a wealth of easily accessible information that makes them simple to research, learn, and master. Now, thanks to large language models (LLMs) like ChatGPT, you can take a script in an accessible, well-known programming language like Python and quickly convert it to FPGA code at a behavioral level, not just a syntactic one, opening up new possibilities to accelerate test goals.
With this new ability, you don’t need to fully understand how the code will run on the FPGA. Rather, you just need to know some fundamental computer programming to create the necessary code. While the use of artificial intelligence (AI) to accelerate FPGA code development is convenient, it’s important to be aware of common errors that tools like ChatGPT may make. In this example, we demonstrate how to convert a square-root function and outline the most important considerations when using ChatGPT to write FPGA code from Python. But first, let’s review the challenges of writing FPGA code:
- The programming paradigm is very different from the procedural execution of more common languages like C and Python.
- It’s not a very common skill, which means that resources like Stack Overflow posts and example designs aren’t very common, either.
- Debugging is difficult. It takes a relatively long time to build a design to run on the FPGA, which limits iterations. Once a design is running on hardware, it can be hard to see what's going on inside the device, yet it also takes a lot of time to set up high-quality simulations that might only reveal a very simple rookie mistake.
Breaking down FPGA coding
Making FPGA programming more like computer programming has long been a goal of manufacturers. High-Level Synthesis (HLS) tools let you write code in C or similar languages and run it on an FPGA, but they have never approached the execution efficiency of handcrafted HDL. Tools like MathWorks HDL Coder™ and MyHDL let you write specially structured code in MATLAB or Python, respectively, and again convert it to run on an FPGA, but that special structure means you still need to understand the FPGA programming paradigm anyway, albeit with a nicer syntax and testing environment.
With the advent of Large Language Models (LLMs) like ChatGPT, you now have access to systems that can take programs written in one paradigm (like procedural code in Python), understand what you’re trying to accomplish, and rewrite the code in another paradigm (like a Hardware Description Language for FPGA). This is fundamentally a different way to approach the High-Level Synthesis problem, with some huge advantages, and some interesting drawbacks:
- You get to write your algorithms in your favorite programming language.
- You can debug your algorithms in that same language with all the features you’ve grown to expect, from simple print statements to full interactive debuggers.
- Even if the conversion isn’t perfect the first time, you can treat the ChatGPT output like a piece of example code that happens to be perfectly crafted for you. Learn from it, understand it, and you and ChatGPT can help each other better understand the problem and its solution.
Of course, there's a catch: some FPGA concepts can't be expressed in procedural code at all. So how can you expect ChatGPT to generate the right code? That's where the conversational interface of ChatGPT comes into play. Just tell it what to do! You can add as much extra information to your prompts as you like, and after a few iterations, ChatGPT becomes the most useful pair programmer you've ever worked with.
Example: Taking a square root on an FPGA
Let's take the example of computing a square root on an FPGA. This is a moderately complex operation, and one that typically uses floating-point arithmetic; neither complex math nor floating point is easy to deal with on an FPGA. We can start by prompting ChatGPT to help us think of some approaches to the problem, as seen in Figure 1:
Figure 1: Common algorithms to calculate a square root with integer arithmetic
We can then use it to help us write out and test one of the algorithms in Python, as seen in Figure 2. This is an opportunity to try dedicated AI coding tools like GitHub Copilot or AWS CodeWhisperer.
Figure 2: Determining 16 bits of precision
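To keep 16 fractional bits, one common trick is to shift the fixed-point input left by 16 bits before taking the integer square root, so the result lands back in the same Q16.16 format. A sketch of that precision check, using Python's built-in `math.isqrt` as a stand-in for the iterative algorithm (the names and number formats here are our assumptions, not taken from the figure):

```python
import math

FRAC_BITS = 16  # assumed Q16.16 fixed-point format

def fixed_sqrt(x_fixed: int) -> int:
    """Pre-shift by FRAC_BITS so the integer square root of a
    Q16.16 value comes back out in Q16.16."""
    return math.isqrt(x_fixed << FRAC_BITS)

# Verify precision against floating point over a few inputs
for v in [0.25, 2.0, 3.75, 100.0]:
    got = fixed_sqrt(round(v * 2**FRAC_BITS)) / 2**FRAC_BITS
    assert abs(got - math.sqrt(v)) < 2 * 2**-FRAC_BITS  # within ~1 LSB
print("precision check passed")
```

Running exactly this kind of comparison in Python, before any VHDL exists, is what catches precision bugs cheaply.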
Debug your code in Python, making sure that it keeps the correct number of bits of precision and produces the expected output values. Then it's time to do the conversion, as seen in Figure 3:
Figure 3: Converting a Python function to a VHDL entity
Checking your work
Quite often, the FPGA code runs correctly the first time. However, there are some common mistakes that ChatGPT may make when doing the conversion:
- Not keeping the correct number of bits of precision, leading to outputs that get rounded down to zero, saturate at some maximum value, or simply don't have enough detail.
- Not tracking bit widths properly when creating signals. HDLs require you to specify the bit width of every signal precisely, something Python never asks you to think about. You can help mitigate this by explicitly stating in the prompt how many bits of precision you want in your input and output signals, but you can still run into synthesis errors when widths don't match.
- Simple conversion errors, like converting a left-shift to a right-shift, adding intermediate variables that throw off the algorithm, and more.
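The first pitfall is easy to reproduce in Python before the code ever reaches VHDL. In this hypothetical fixed-point example (values chosen purely for illustration), rescaling before the multiply silently truncates the input to zero:

```python
sample = 3       # a small input sample
gain = 16384     # 0.5 expressed in Q1.15 fixed point (0.5 * 2**15)

wrong = (sample >> 15) * gain   # rescale first: sample truncates to 0
right = (sample * gain) >> 15   # multiply first, then rescale

print(wrong, right)  # 0 1
```

The same ordering mistake in VHDL produces an output stuck at zero, which is exactly the symptom described in the first bullet above.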
This all goes to show: you achieve the best results when you and ChatGPT work together on a solution, rather than expecting ChatGPT to get everything right on the first try.
Still, it's getting better all the time. As an example, take a Python for-loop. It could be converted into a VHDL for-loop, but the two are semantically very different: in Python, iterations execute one after another, whereas a VHDL for-loop unrolls into hardware, with every iteration happening at the same time in its own set of logic gates.
The current free version of ChatGPT did not understand this difference and directly converted the Python for-loop into a VHDL one.
The GPT-4 model correctly inferred a state machine, running each iteration of the loop one after the other, and each call of the loop one after the other. For example, when the iterative square root algorithm took 16 iterations to complete, it could not start processing another input until the previous one had finished, so the throughput was one value every 16 clock cycles.
The GPT-4 model is even able to emit a fully pipelined implementation: the algorithm still has a 16-clock-cycle latency, but 16 computations are in flight at once, producing one new result every clock cycle.
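The throughput difference between the two versions is easy to quantify with a back-of-the-envelope model, sketched here using the 16-iteration figure from above:

```python
LATENCY = 16  # clock cycles per square root, as in the example above
N = 1000      # number of input samples to process

# State-machine version: each input waits for the previous to finish
sequential_cycles = N * LATENCY

# Pipelined version: one result per cycle once the pipeline fills
pipelined_cycles = LATENCY + (N - 1)

print(sequential_cycles, pipelined_cycles)  # 16000 1015
```

For long input streams the pipelined design approaches a 16x throughput gain, at the cost of 16 copies of the per-iteration logic.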
With these rapid improvements in ChatGPT's capabilities, writing FPGA code from Python or other common programming languages is getting faster and more effective, opening the door for more users to implement custom test instruments, prototypes, and other algorithms on user-programmable FPGAs. As custom-programmable FPGAs become more accessible, even engineers who are not HDL experts can benefit from the time-critical, parallel processing that FPGAs make possible. Using tools like Moku Cloud Compile from Liquid Instruments, you can test these designs alongside software-defined test and measurement instruments like an Oscilloscope, Spectrum Analyzer, and PID Controller to create customized, fully integrated systems in a single device.
For more examples of how to use ChatGPT with Moku Cloud Compile, watch our webinar. You can also check out our detailed blogs with multiple examples here (custom transient fault detection) and here (absolute value).
Have questions about a custom instrument you want to develop with Moku Cloud Compile? Contact us at [email protected] to connect with an engineer.