When a developer creates a pull request, the reviewer checks it for business logic. However, a few things routinely get overlooked (a quick example follows this list):
- Data structure optimization
- Code security checks
- Function complexity and reusability
- Dead and duplicate code cleanup
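To make this concrete, here is a small, purely illustrative Python snippet (the function names and the blocklist scenario are made up) showing the kinds of issues a business-logic review tends to miss: a suboptimal data structure, a near-duplicate function, and dead code.

```python
# Illustrative only: the kind of code a business-logic review rarely flags.

def is_blocked(user_id: str, blocked_ids: list[str]) -> bool:
    # O(n) membership test on a list; with a large blocklist this is the
    # "wrong data structure" problem -- a set would make lookups O(1).
    return user_id in blocked_ids

def is_banned(user_id: str, banned_ids: list[str]) -> bool:
    # Near-duplicate of is_blocked: the logic is copied instead of reused.
    return user_id in banned_ids

def legacy_is_blocked(user_id: str) -> bool:
    # Dead code: nothing calls it anymore, but it still ships and still gets maintained.
    return False
```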
Let's break the review pipeline down and look at each of its three stages.
Stage 1 - Development Editor:
We all add linters; they catch formatting issues and some anti-patterns. As developers, we see numerous findings, and some paid linters even stack-rank them by severity. Still, this stage falls short in two ways. First, developers don't have time to fix these issues manually; they want them auto-fixed, or else they will silence them. Second, static linters lack context about your codebase, so they can't reliably surface the high-severity, high-impact issues for you.
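As a hedged illustration of the "silence it" failure mode, here is what that often looks like in a Python codebase with a flake8/ruff-style linter (the module is hypothetical, and rule codes vary by setup):

```python
# Hypothetical module: warnings get suppressed inline instead of fixed.
import os, sys  # noqa: E401  (multiple imports on one line -- flagged, then silenced)


def load_config(path: str = "config.json"):
    try:
        with open(path) as f:
            return f.read()
    except Exception:  # noqa: BLE001  (blind except -- flagged, silenced under deadline)
        return None
```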
Stage 2 - Git Hook (Maybe):
Some organizations have Git hooks in place to prevent certain bad practices from being committed or pushed. The problem here lies in the depth of the rules: most focus on high-level, framework-specific checks. You can think of these as extensions of your linters, with the added enforcement that code can't be pushed unless certain rules are followed.
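For readers who haven't set one up, here is a minimal sketch of what such a hook can look like. It is a hypothetical pre-commit hook written in Python (Git also supports pre-push hooks); the blocked patterns are examples only, and the script would be saved as .git/hooks/pre-commit and marked executable.

```python
#!/usr/bin/env python3
# Minimal pre-commit hook sketch: a shallow, rule-based gate on staged files.
import re
import subprocess
import sys

BLOCKED = [
    (re.compile(r"\bprint\("), "debug print statement"),
    (re.compile(r"(?i)api[_-]?key\s*="), "possible hard-coded credential"),
]

def staged_python_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    failures = []
    for path in staged_python_files():
        try:
            with open(path, encoding="utf-8") as f:
                text = f.read()
        except OSError:
            continue
        for pattern, reason in BLOCKED:
            if pattern.search(text):
                failures.append(f"{path}: {reason}")
    if failures:
        print("Commit blocked by pre-commit hook:")
        print("\n".join(f"  - {f}" for f in failures))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```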
Stage 3 - Pull Requests (PRs):
Now you have pushed your changes and created a PR, waiting for the reviewer to merge it. The challenge is that the reviewer also has 10 other PRs to review, so they primarily check the correctness of the business logic. Having reviewed over 1000 PRs, what I've learned is that no reviewer can ever have 100% context of their codebase. So we trust that the developer has run all the checks, fixed anti-patterns, removed dead and duplicate code, used the right data structures, and so on. On top of that, most developers are not fully aware of security and compliance issues, so those aspects simply don't get reviewed. And both the reviewer and the developer are under pressure to ship faster; the immediate goal is getting the change deployed to a lower environment, and if something breaks, we'll fix it there.
The reality is that nothing visibly breaks, because the existing checks are mostly unit or end-to-end tests of the business logic, which the reviewer has already looked at. Consequently, "bad" code gets checked in and merged.
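Here is an illustrative Python example of why a green test suite gives false confidence: the unit test below covers the business logic and passes, even though the implementation interpolates user input straight into SQL, a security issue no business-logic test will catch. The table and function are made up for the example.

```python
# Illustrative: tests exercise the business logic and pass, so CI stays green,
# while the implementation remains vulnerable to SQL injection.
import sqlite3

def find_user(conn: sqlite3.Connection, name: str):
    # Functionally "correct" for normal inputs, but unsafe string interpolation.
    return conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'").fetchone()

def test_find_user():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    assert find_user(conn, "alice") == (1, "alice")
```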
This is how technical debt accumulates. Why haven't we been able to solve it until now? Because we've had no way to enforce these checks without slowing down developer velocity.
What do we need then?
We need tools that have deep context of the codebase, prioritize high-severity, high-impact issues, and auto-fix them without breaking existing logic. The tool should also integrate seamlessly into the developer journey, from the IDE all the way to PR checks.
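Purely as a thought experiment (not a description of how any particular tool, including CodeAnt AI, actually works), prioritization with codebase context might look something like this: combine a rule's intrinsic severity with repository-specific signals such as how often the file changes and how many callers depend on the affected code. All fields and weights below are invented for illustration.

```python
# Conceptual sketch: rank findings by severity weighted by codebase-specific impact.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: int            # 1 (style) .. 5 (security/correctness)
    file_change_rate: float  # how often the file changes; hot paths matter more
    callers: int             # how many places depend on the affected function
    auto_fixable: bool

def priority(f: Finding) -> float:
    # Impact is approximated from repository context; weights are arbitrary here.
    impact = 1.0 + 0.5 * f.file_change_rate + 0.1 * min(f.callers, 20)
    return f.severity * impact

findings = [
    Finding("unused-variable", 1, 0.1, 0, True),
    Finding("sql-injection", 5, 0.9, 12, False),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f.rule, round(priority(f), 2))
```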
We are building CodeAnt AI (YC W24) along these lines, and it is live; feel free to check it out here.