Here’s what I’ve found from my testing of ChatGPT as a tool to help with code reviews for C# applications.
Don’t use ChatGPT for everything
First, use static code analysis tools for things they can handle, like variable naming standards, unreachable code, potential null reference exceptions, cyclomatic complexity calculation, etc.
One thing static code analysis tools are great at, and ChatGPT isn’t, is providing consistent results.
If you run a static code analysis tool on a codebase multiple times, you will always get the same result. If you ask ChatGPT to review a codebase multiple times, you will get different results.
For things that are fully defined, use static code analysis tools.
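Those deterministic checks can live in the build itself. Here is a minimal sketch of enabling a few built-in .NET compiler/analyzer rules via .editorconfig (the rule IDs are real, but the severities shown are illustrative — pick the rules and levels that match your standards):

```ini
# .editorconfig — deterministic checks handled by the compiler and analyzers
[*.cs]
# IDE1006: naming-rule violations (variable naming standards)
dotnet_diagnostic.IDE1006.severity = warning
# CS0162: unreachable code detected
dotnet_diagnostic.CS0162.severity = warning
# CS8602: dereference of a possibly null reference
# (requires <Nullable>enable</Nullable> in the .csproj)
dotnet_diagnostic.CS8602.severity = warning
```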
Outline of steps to use ChatGPT for code reviews
I start all prompts with these sentences to set up the persona that ChatGPT should use when responding to my prompts.
- Respond as a senior developer with 25 years of experience writing custom business software.
- All code must work in .NET 6, C# 10.
- Use the latest features of the language if they improve performance or readability.
- Your code should be extremely high-quality.
- Highlight concerns (security, performance, readability, etc.).
- Mention constraints (e.g., backwards compatibility).
Modify these to fit your situation. If you want to use specific versions of languages or libraries, include that information here.
The “Your code should be extremely high-quality” comment seems a little silly to add, but I feel it has resulted in better code.
I’ve been working on different templated prompts to use for different tasks. After testing different prompts, these are what I ended up with to request a code review from ChatGPT and to ask ChatGPT to create unit tests for some code.
Prompt for code reviews
- Prefer to make code functional, when possible.
- Prefer to make functions static, when not dependent on instance variables.
- <LIST YOUR ADDITIONAL PREFERENCES>
- Provide a code review of the following code.
- Think through to the final optimal version for the submitted code, with all improvements applied, before commenting.
- After displaying your version of the code, provide a succinct list of all changes made:
- <YOUR CODE GOES HERE>
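As an illustration of the first two preferences, a review run with this prompt tends to rewrite stateless instance methods as static, functional ones. A hypothetical sketch (the `TotalWithTax` method and the 8% rate are invented for the example):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class PriceMath
{
    // Before: a typical instance-style version would loop over the list,
    // accumulating into a mutable total.
    // After: static (no instance state) and functional (one LINQ expression).
    public static decimal TotalWithTax(IEnumerable<decimal> prices) =>
        prices.Sum() * 1.08m; // 8% tax rate is illustrative
}
```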
Prompt to create unit tests from source code
- Write succinct unit tests for this code using xUnit and Shouldly.
- Tests should cover all possible execution paths.
- Tests should use a variety of input values, including null, positive values, negative, zero, and boundary values.
- Ensure each test case is unique and covers different input scenarios without duplication.
- Consolidate all similar tests by using parameterized unit tests where possible.
- Consider using separate test methods for cases that cannot use compile-time constants.
- Assertions comparing non-floating-point values must match exactly.
- Assertions comparing floating point values should consider a difference of 0.000001 or less to be equal.
- <YOUR CODE GOES HERE>
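To make concrete what this prompt asks for, here is the general shape of the output it tends to produce — a parameterized xUnit test with Shouldly assertions, using Shouldly's tolerance overload for the floating-point rule. The `Calculator.Divide` method under test is hypothetical:

```csharp
using Shouldly;
using Xunit;

public static class Calculator
{
    // Hypothetical method under test
    public static double Divide(double numerator, double denominator) =>
        numerator / denominator;
}

public class CalculatorTests
{
    [Theory]
    [InlineData(10.0, 2.0, 5.0)]            // positive values
    [InlineData(-9.0, 3.0, -3.0)]           // negative values
    [InlineData(0.0, 5.0, 0.0)]             // zero numerator
    [InlineData(1.0, 3.0, 0.333333333333)]  // non-terminating result
    public void Divide_ReturnsExpectedQuotient(double a, double b, double expected)
    {
        // Shouldly's tolerance overload implements the 0.000001 rule from the prompt.
        Calculator.Divide(a, b).ShouldBe(expected, 0.000001);
    }
}
```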
I follow up ChatGPT’s response with “Can you suggest any further improvements?”
Even when the initial response is good, this follow-up prompt often produces further improvements to the code, along with meta-level suggestions.
For example, you ask ChatGPT to write a better version of a function, and it does. When you then ask for further improvements, it may suggest adding documentation, extra validation, converting the function to an extension method, and so on.
Handling bad ChatGPT code review suggestions
When ChatGPT returns code that doesn’t work the way I wanted, I’ll follow up in the same chat with a prompt like, “This code produced an error due to floating point equality. Can you rewrite it to prevent that problem?”
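That floating-point case is the classic pitfall where exact equality fails on binary doubles; the rewrite I'm asking for replaces `==` with a tolerance check (the 0.000001 tolerance matches the unit-test prompt above):

```csharp
using System;

double sum = 0.1 + 0.2;                             // actually 0.30000000000000004
Console.WriteLine(sum == 0.3);                      // False — exact equality fails
Console.WriteLine(Math.Abs(sum - 0.3) < 0.000001);  // True — tolerance comparison
```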
I also follow up bad results with, “How could I have changed the original prompt to prevent this problem?” and use the response to adjust my prompt templates. That feedback loop is how I arrived at the prompts above, which seem to work well.
I’ll re-test all of this when I get access to xAI.
Please leave a comment if you’ve found other ways to get the most out of ChatGPT for code reviews.