The idea stems from an article series with the same title by Claudio Jolowicz and is an opinionated guideline about best practices and clean code in Python in the 21st century. It is a guide to modern Python tooling with a focus on simplicity and minimalism. It walks you through the creation of a complete and up-to-date Python project structure, with unit tests, static analysis, type-checking, documentation, and continuous integration and delivery.
Which kitchen do you think is better in terms of:
- health (including mental health) and safety,
- fast delivery,
- high quality of outcomes,
- job satisfaction,
- personal growth?
In a game of would you rather, kitchen one is the unanimous winner.
The most important thing I have done as a programmer in recent years is to aggressively pursue static code analysis. Even more valuable than the hundreds of serious bugs I have prevented with it, is the change in mindset about the way I view software reliability and code quality.
-- John Carmack
Clean consistent code minimizes context switches.
If our code looks the same everywhere we need less mental overhead to switch in between different code styles.
One of the 'LEAN' principles is to eliminate waste in order to reduce friction; a consistent code style helps us do exactly that.
The Broken Window Theory suggests that when bad behavior is not corrected immediately, it shows people that there is no downside to breaking the rules, practices or standards. If there is no negative outcome, cutting corners becomes acceptable and in time quality always decreases.
Most engineers have heard of the Boy Scout Rule: 'Always leave the code better than you found it'. It's much easier to leave the place in a better state than you found it, if you found it in good condition in the first place.
Improvements over time are the result of incremental progress rather than huge leaps forward.
The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.
-- Frederick P. Brooks Jr., No Silver Bullet
While you could enforce cleaner code manually with documents like style guides, it is much easier to outsource these tasks to automated tools.
To hypermodernize your code, it's essential to maintain high coding standards and ensure that your codebase adheres to those standards consistently. There are several strategies for achieving this:
- A CI system that automatically checks your code whenever you push changes to a code repository (like GitHub). By running tools that check your code against coding standards during every push, you get immediate feedback. If your code doesn't meet the standards, the CI will alert you.
- Linters are tools that scan your code for style and quality issues. Running them in "daemon mode" with a file watcher means that they constantly keep an eye on your code. As you write or modify code in your development environment, the linters provide real-time feedback about any deviations from your coding standards.
- Many Integrated Development Environments (IDEs) come with built-in support for various coding standards and linters. This means that, as you write code, the IDE can highlight issues and suggest improvements in real-time.
- Pre-commit Hooks: Pre-commit is a tool that can be set up to enforce coding rules and standards before you commit your changes to your code repository. This ensures that you can't even check in (commit) code that doesn't meet your standards. This allows a code reviewer to focus on the architecture of a change while not wasting time with trivial style nitpicks.
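As a sketch of how a pre-commit setup might look, here is a minimal `.pre-commit-config.yaml`; the `rev` values are placeholders that you should pin to current releases:

```yaml
# .pre-commit-config.yaml -- a minimal sketch; pin `rev` to current releases
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
      - id: ruff
```

After running `pre-commit install` once, these hooks run automatically on every `git commit` and block commits that fail the checks.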
These strategies help you keep your code in good shape. They ensure that your code is always checked for quality and style, whether you're writing it, pushing it to the code repository, or committing changes. This way, you catch and fix issues early, making your codebase more modern and maintainable.
Imports rarely leave much room for discussion, which makes them a good place to start:
- isort sorts the imports for you.
- absolufy-imports converts relative imports to absolute ones.
- removestar replaces `import *` in Python files with explicit imports.
- unimport removes unused imports.
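To illustrate what these tools converge on, here is a hypothetical module showing isort's default output: imports grouped into sections (standard library first, then third-party, then first-party) and alphabetized within each group.

```python
# Before isort, imports might be scattered in arbitrary order:
#   import sys
#   import mypackage.utils   # (hypothetical first-party module)
#   import os.path
#   from collections import defaultdict

# After isort: grouped and alphabetized. Here only stdlib imports remain
# so the example is self-contained.
import os.path
import sys
from collections import defaultdict

counts: defaultdict[str, int] = defaultdict(int)
counts[os.path.basename(sys.executable)] += 1
```

The grouping is deterministic, so two developers running isort on the same file always produce identical import blocks, which eliminates merge conflicts and style debates.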
To get all your code into a consistent format the next step is to run a formatter.
I recommend Black, the well-known uncompromising code formatter, which is the most popular choice. If you do not agree with Black's style, alternatives are autoflake, Prettier, and YAPF.
Ultimately we want to check our code with Flake8 and its plugins to enforce a more consistent code style and to encourage best practices.
When you first introduce flake8 or a new plugin, you will commonly have a lot of violations, which you can silence with `# noqa` comments. Over time these comments become obsolete because you fixed the underlying issues; yesqa will automatically remove these unnecessary `# noqa` comments.
A more modern alternative to flake8 is Ruff: Ruff can be used to replace Flake8 (plus a variety of plugins) and autoflake, all while executing tens or hundreds of times faster than any individual tool. Ruff supports over 700 lint rules and goes beyond the responsibilities of a traditional linter, instead functioning as an advanced code transformation tool capable of upgrading type annotations, rewriting class definitions, sorting imports, and more.
In the landscape of hypermodern Python, security is paramount. An array of automated security scanning tools exists to fortify code against vulnerabilities. While some tools primarily focus on securing the code, others offer insights into common errors and potential risks.
Bugbear is not specifically a security tool but serves as an effective guard against common coding errors and pitfalls. It pinpoints frequent mistakes, such as setting a mutable list as the default value for a parameter, and cautions against such practices, enhancing code robustness.
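The mutable-default pitfall that bugbear flags (as rule B006) is worth seeing concretely. In this illustrative snippet, the default list is created once at function definition time and then shared across calls:

```python
# The pitfall bugbear's B006 warns about: the default list is evaluated
# once, when the function is defined, and shared between all calls.
def append_bad(item, bucket=[]):  # noqa: B006
    bucket.append(item)
    return bucket

# The conventional fix: default to None and create a fresh list per call.
def append_good(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

first = append_bad(1)
second = append_bad(2)        # silently reuses the same list!
assert second == [1, 2]

assert append_good(1) == [1]
assert append_good(2) == [2]  # a fresh list on each call
```

Catching this class of bug mechanically is exactly why such linters pay for themselves.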
Bandit is a dedicated security scanner designed to target critical security concerns such as SQL injection and cross-site scripting exploits. It meticulously scrutinizes the codebase to identify and alert developers about possible security breaches or vulnerabilities, thus fortifying the code against potential exploitation.
Safety and Dependabot complement these security tools by focusing on external dependencies. Safety takes charge of examining your dependencies, ensuring they are up-to-date and free from any known vulnerabilities. Dependabot works similarly, scanning dependencies, verifying if they're current and assessing them for potential security flaws. This function is crucial as weaknesses in external dependencies can compromise the security of the entire codebase.
Together, these tools form a comprehensive security net that not only secures the code directly but also safeguards against potential risks from external dependencies, ensuring the development of secure, reliable, and robust Python code within the hypermodern framework.
“When a measure becomes a target, it ceases to be a good measure.”
-- Goodhart's Law
The adage "You improve what you measure" underscores the significance of tracking metrics for improvement. This principle is intertwined with Goodhart's Law, stating that when a measure becomes the sole focus or goal, it loses its value as an effective metric.
In the realm of codebase modernization, certain metrics can guide the process:
Test Coverage: Tools such as Coverage and Diff Cover assess how much of your code is under test coverage. While high coverage is valuable, the focus isn't just on achieving a specific percentage. It's crucial to ensure the tests are meaningful and effectively cover the essential aspects of the codebase.
Code Complexity: Metrics like McCabe, Radon, Xenon, and Lizard help evaluate the complexity of code. Lizard, in particular, is an efficient tool, capable of identifying highly complex code sections. It's considered helpful because if Lizard deems code complex, it likely needs simplification or better structuring. Cognitive Complexity, also available as a Flake8 plugin, further aids in assessing how humans perceive and interpret code, encompassing factors like decision points and recursive patterns.
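As a hypothetical illustration of what these metrics measure, consider a nested conditional that drives up both McCabe and cognitive-complexity scores, and a refactoring that moves the decision table into data:

```python
# Nested branching: every if/else adds a decision point to the
# cyclomatic-complexity count.
def shipping_cost_complex(country, weight):
    if country == "DE":
        if weight < 1:
            return 3.0
        else:
            return 5.0
    else:
        if country == "AT":
            if weight < 1:
                return 4.0
            else:
                return 6.0
        else:
            return 10.0

# Flattening the decision table into data removes most decision points,
# lowering the complexity scores while preserving behavior.
RATES = {("DE", True): 3.0, ("DE", False): 5.0,
         ("AT", True): 4.0, ("AT", False): 6.0}

def shipping_cost_simple(country, weight):
    return RATES.get((country, weight < 1), 10.0)

# Both versions agree on every input combination.
for c in ("DE", "AT", "US"):
    for w in (0.5, 2.0):
        assert shipping_cost_complex(c, w) == shipping_cost_simple(c, w)
```

Tools like Lizard would score the first version noticeably higher than the second, which is the signal that a restructuring like this might be worthwhile.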
Lack of Cohesion in Methods (LCOM4) measures the relationship between methods within a class. It quantifies how interdependent or independent these methods are from each other. This helps assess a class's cohesion, determining whether methods are tightly or loosely coupled within the class, influencing code maintainability and aiding in targeted refactoring efforts for improved code quality.
These measures are essential for monitoring and understanding aspects of the codebase that might need improvement. However, it's important not to merely aim for high numbers or low complexity scores. Rather, they should act as guiding posts in the pursuit of maintainable, readable, and efficient code. These metrics help identify areas that might benefit from refactoring, thereby contributing to a more organized, maintainable, and scalable codebase. The focus remains on understanding these metrics to make informed decisions for better code quality rather than solely targeting certain percentages.
The pursuit of a hyper-modern codebase involves ensuring type correctness, an area where Mypy serves as an invaluable tool. Yet, implementing proper type annotations, especially in legacy code, can pose a significant challenge. Here's a more detailed expansion of the tools and methods to address this challenge:
Mypy and Manual Annotation:
Mypy stands as an essential static type-checking tool. Its primary function is to verify the correctness of types in your codebase. However, manually annotating types in legacy code can be laborious and time-consuming.
To alleviate the burden of manual annotation, MonkeyType offers a clever solution. It dynamically observes the types entering and leaving functions during code execution. Based on this observation, it generates a preliminary draft of type annotations. This significantly reduces the effort needed to add type hints to legacy code.
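To make this concrete, here is a small hand-annotated function (a hypothetical example) of the kind Mypy verifies statically; a call like `mean("abc")` would be rejected at check time rather than at runtime:

```python
# Fully annotated: Mypy can verify every call site against this signature.
def mean(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of floats."""
    if not values:
        raise ValueError("mean() of empty list")
    return sum(values) / len(values)

assert mean([1.0, 2.0, 3.0]) == 2.0
```

MonkeyType would produce a draft of exactly this kind of signature by observing real calls during a test run.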
Pyre, Pyright and Pytype:
Pyre, Pyright and Pytype are alternative static type checkers; Pytype in particular can also infer annotations for unannotated code. The infer-types CLI tool is another beneficial asset: it automatically inserts initial annotations, acting as a useful starting point for adding type hints to the codebase.
Type-Checking at Runtime with Typeguard:
Typeguard enables runtime type checking in a development environment. It is extremely helpful in ensuring that correct types are being passed around during testing, even if you do not want to activate strict runtime typechecking in your production environment.
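To illustrate the idea behind Typeguard (this is a hand-rolled toy sketch using only the stdlib, not Typeguard's actual implementation), a decorator can read a function's annotations at call time and verify the arguments against them:

```python
import functools
import typing

def checked(func):
    """Toy stand-in for a runtime type checker: verify simple
    (non-generic) annotations against actual argument types."""
    hints = typing.get_type_hints(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Map positional args onto parameter names, merge keyword args.
        bound = dict(zip(func.__code__.co_varnames, args), **kwargs)
        for name, value in bound.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}")
        return func(*args, **kwargs)

    return wrapper

@checked
def greet(name: str) -> str:
    return f"Hello, {name}!"

assert greet("world") == "Hello, world!"
try:
    greet(42)  # wrong type: caught at call time, not silently ignored
except TypeError:
    pass
else:
    raise AssertionError("expected a TypeError")
```

Typeguard does this far more thoroughly (including generics and return values); the sketch only shows why running such checks in tests catches type bugs that annotations alone cannot.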
These tools and methods collectively aid in the process of introducing type annotations, especially in the context of legacy code. They offer a spectrum of options for reducing the manual overhead and ensuring type correctness, enabling developers to gradually upgrade and modernize the codebase with better type safety without significantly disrupting existing operations.
When adopting a hyper-modern approach, it's crucial to revamp and improve the testing ecosystem. Consider the following key aspects:
Unit Tests Readability and Transition:
Enhancing unit tests contributes significantly to code readability and precision. Tools like Unittest2Pytest serve as an effective means to convert old-style unit tests into Pytest format, aligning them with modern testing standards.
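The before/after of such a conversion is easiest to see side by side. Using a hypothetical `slugify` function as the code under test:

```python
# The function under test (a hypothetical example).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# unittest style: a boilerplate class with camelCase assert methods.
import unittest

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hypermodern Python"), "hypermodern-python")

# pytest style after conversion: a bare function with a plain assert,
# which is the shape unittest2pytest produces.
def test_slugify_basic():
    assert slugify("Hypermodern Python") == "hypermodern-python"

test_slugify_basic()
```

The pytest form is shorter, and a failing plain `assert` gives pytest's rich introspection output instead of a generic `AssertionError`.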
Testing Beyond Pytest:
While the Hyper Modern Python series doesn’t take a stringent stance on testing beyond Pytest, it’s vital to explore advanced testing methodologies.
Hypothesis for Property-Based Testing:
Hypothesis is a Python library facilitating property-based testing. It offers a distinct advantage by generating a wide array of input data based on specified properties or invariants within the code. The perks of Hypothesis include:
- Comprehensive Testing: Hypothesis uncovers edge cases and unexpected behaviors that conventional tests might miss.
- Reduced Test Maintenance: Tests based on properties are less prone to breaking when the code undergoes refactoring or alterations.
- Enhanced Confidence: By testing diverse inputs and edge cases, Hypothesis enhances confidence in code correctness.
- Ease of Integration: The library seamlessly integrates into existing test suites with a straightforward API.
- Simplified Debugging: In case of test failures, Hypothesis simplifies the reproduction of the failed test case, aiding in debugging.
Advantages of Hypothesis:
- Diverse Input Types Support: Hypothesis accommodates various input types, including integers, strings, lists, and dictionaries, making testing more thorough and reliable.
- Ease of Writing and Debugging: Writing tests with Hypothesis reduces the burden of generating specific inputs for tests. The `hypothesis.extra.ghostwriter` module automatically generates test functions, providing a smooth entry into property-based testing.
The ultimate goal is to bolster testing by moving beyond traditional practices and incorporating property-based testing methods. This not only enriches the testing suite with a broader scope but also reduces maintenance efforts, fortifying code against unexpected flaws and changes.
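The underlying idea can be sketched with the stdlib alone (Hypothesis automates and vastly improves on this, e.g. by shrinking failing inputs): instead of a handful of hand-picked examples, assert a *property* that must hold over many generated inputs. The round-trip property of a toy run-length encoder is a classic example; the encoder here is hypothetical illustration code, not from any library:

```python
import random

def run_length_encode(s: str) -> list[tuple[str, int]]:
    encoded: list[tuple[str, int]] = []
    for ch in s:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

# The property: decode(encode(x)) == x for *every* input, checked here
# against 200 randomly generated strings.
rng = random.Random(0)
for _ in range(200):
    s = "".join(rng.choice("ab") for _ in range(rng.randrange(20)))
    assert run_length_decode(run_length_encode(s)) == s
```

With Hypothesis, the hand-written generation loop collapses into a decorated test (`@given(st.text())`), and Hypothesis additionally minimizes any counterexample it finds.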
SchemaThesis is a powerful tool, especially when working with web APIs, and here's how it can enhance your testing capabilities:
API Testing and Schema Verification:
SchemaThesis operates in close conjunction with Hypothesis to provide a comprehensive framework for web API testing. It streamlines the generation of tests and data by aligning them with OpenAPI or GraphQL specifications. This approach ensures the thorough validation of APIs against predetermined schemas.
One of the standout features of SchemaThesis is its flexibility as a service. It can be effortlessly utilized without necessitating deep coding or technical knowledge. This accessibility enables users without extensive programming backgrounds to take full advantage of its testing capabilities.
Automated Test Data Generation:
By interfacing with the OpenAPI or GraphQL specifications, SchemaThesis automates the creation of test scenarios and data points that comply with the defined API schema. This function significantly enhances testing robustness and ensures that the API remains compliant with its expected behavior.
Efficient Schema-Driven Testing:
Leveraging a schema-driven approach to API testing ensures that the generated tests are aligned with the expected structure, input, and output of the API. This methodology boosts efficiency and coverage in the testing phase, providing greater confidence in the API's behavior under various conditions.
User-Friendly Testing Solution:
Its simplified approach allows users to easily point the tool at an OpenAPI or GraphQL specification, enabling the generation of comprehensive test data without delving into intricate coding or complex testing procedures.
SchemaThesis offers a user-friendly and efficient way to test web APIs by utilizing predefined specifications and automatically generating test scenarios and data. This approach ensures adherence to the specified schema, allowing for robust and comprehensive testing without requiring an extensive coding background. It is also available as a service designed to handle the heavy lifting of API debugging so you can concentrate on delivering value with your API.
"Who watches the watchmen?" is a question that resonates in many contexts, and in the realm of testing code, the reliability of your tests is a critical concern. Test coverage, often seen as the gold standard, isn't a guarantee of thoroughness. That's where mutation testing tools like Mutmut come into play.
Mutmut introduces a clever approach to scrutinizing your tests. It evaluates the effectiveness of your test suite by slightly altering the code after the tests have been written. If a test fails after a minor change, that's a good sign; it means the test is robust enough to catch those changes. But if the test passes even after the code change, it indicates that the test isn't effectively detecting that alteration – this is what Mutmut terms a "surviving mutant."
While it's a powerful tool for enhancing test quality, mutation testing like Mutmut comes with a caveat: it can significantly extend the duration of your testing process. The exhaustive nature of this tool means that comprehensive testing might take a long time. Consequently, it's crucial to be selective about what you test with Mutmut to keep the testing duration manageable. Focusing on the core business logic or key functionalities is an effective strategy to use Mutmut without significantly extending the test execution time.
By selectively targeting specific areas of code or the most crucial functions, you can effectively leverage Mutmut to ensure the strength and accuracy of your tests, thus enhancing their reliability and impact without unduly extending your testing time.
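What a "surviving mutant" means is easiest to see in a toy example (hand-written here to illustrate the principle; Mutmut performs the mutation automatically on your real code):

```python
# The original function and a hand-made "mutant" with one operator flipped.
def price_with_tax(net: float) -> float:
    return net * 1.19

def mutant_price_with_tax(net: float) -> float:
    return net / 1.19  # the mutation: '*' became '/'

def weak_test(fn) -> bool:
    # Only checks the zero case -- the mutant *survives* this test.
    return fn(0.0) == 0.0

def strong_test(fn) -> bool:
    # Checks a non-trivial value -- the mutant is *killed*.
    return abs(fn(100.0) - 119.0) < 1e-9

# The weak test cannot tell original and mutant apart:
assert weak_test(price_with_tax) and weak_test(mutant_price_with_tax)
# The strong test detects the mutation:
assert strong_test(price_with_tax) and not strong_test(mutant_price_with_tax)
```

A test suite full of `weak_test`-style checks can show 100% coverage while catching nothing, which is precisely the blind spot mutation testing exposes.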
Staying current with the latest Python versions and framework updates is crucial for maintaining code health, security and functionality. To streamline these updates, tools like PyUpgrade and Ruff are invaluable. PyUpgrade is designed to effortlessly manage Python syntax updates, ensuring that the code remains aligned with the latest Python standards.
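The kind of rewrite pyupgrade performs is shown here by hand in an illustrative snippet, moving legacy string formatting to modern f-strings:

```python
name = "world"

# Legacy formatting styles found in older codebases:
old_style = "Hello, %s!" % name
also_old = "Hello, {}!".format(name)

# The modern equivalent that tools like pyupgrade rewrite them to:
modern = f"Hello, {name}!"

# All three produce the same result; only the syntax was modernized.
assert old_style == also_old == modern == "Hello, world!"
```

Because such rewrites are behavior-preserving, they are safe to apply mechanically across a whole codebase.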
For those working with Django, specific tools like Django-Upgrade and Django-Codemod offer essential support. These dedicated tools aid in the seamless transition of Django code from earlier versions to the most recent one. They automate the process of converting legacy Django code to adapt to the latest version's syntax and conventions.
Should a more tailored modification tool be necessary, developers can utilize LibCST (Concrete Syntax Tree) to craft their own code transformation tool. LibCST offers a flexible approach, enabling users to build custom tools aligned with their unique requirements, allowing for modifications in code structure or style.
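As a rough illustration of what a codemod does, here is a toy transformer built on the stdlib `ast` module (note that LibCST exists precisely because `ast`, unlike LibCST, discards comments and formatting on round-trip; the class and names below are hypothetical):

```python
import ast

class RenameFunction(ast.NodeTransformer):
    """Toy codemod: rename every call of `old_name` to `new_name`."""

    def visit_Call(self, node: ast.Call) -> ast.AST:
        self.generic_visit(node)  # transform nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "old_name":
            node.func.id = "new_name"
        return node

source = "result = old_name(1, 2)\n"
tree = ast.parse(source)
new_source = ast.unparse(RenameFunction().visit(tree))
assert new_source == "result = new_name(1, 2)"
```

A LibCST transformer follows the same visitor pattern but emits source that is byte-for-byte identical except for the intended change, which is what makes it suitable for large automated refactorings.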
These tools collectively facilitate the efficient and timely upgrading of Python codebases, allowing for smoother transitions to new language features, the latest syntax standards, and ensuring compatibility with the most recent frameworks and libraries. This keeps the codebase relevant and optimally aligned with the current Python ecosystem.
Refactoring is a vital aspect of maintaining code health and quality. Sourcery is a fantastic tool that helps with small-scale refactorings by automatically suggesting and implementing code improvements. It offers an automated approach to identify and carry out minor code alterations that can enhance readability, reduce duplication, or improve overall code structure.
On the other hand, SonarCloud is a comprehensive code analysis service designed to identify and rectify issues related to code quality, security, and maintainability. It continuously scans and analyzes code repositories, ensuring adherence to coding standards, finding potential bugs, and offering comprehensive insights into code health. This platform flags problematic areas in the code, allowing developers to refactor and improve code quality effectively.
CodeScene is another tool to manage technical debt. It helps you to identify the most critical areas and plan goals to reduce technical debt in each hotspot.
Sourcery, SonarCloud, and CodeScene serve as powerful assistants in enhancing code quality and readability. Sourcery focuses on specific, smaller-scale refactoring tasks, while SonarCloud and CodeScene provide a broader perspective by analyzing codebases for overall health, security, and maintainability, guiding developers in making comprehensive improvements across their projects.
When refactoring code, it is important to remember that "perfect is the opposite of done". Refactoring is an iterative process, and there may be times where code is not perfect, but is still useful and can be improved upon over time.
Refactoring is about refining code to be maintainable, extensible, and modular by recognizing patterns and reducing redundancy. It involves adhering to principles like the SOLID principles:
- Single Responsibility Principle: Each module/class should have a single responsibility.
- Open-Closed Principle: Classes/modules should be open for extension but closed for modification.
- Liskov Substitution Principle: Subtypes should be replaceable with their base types without altering the program's correctness.
- Interface Segregation Principle: Many client-specific interfaces are better than a single general-purpose interface.
- Dependency Inversion Principle: Depend on abstractions, not on concrete implementations.
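The Dependency Inversion Principle in particular is easy to show in a few lines. In this hypothetical sketch, the high-level `publish_report` function depends on an abstract `Storage` interface rather than on any concrete backend:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The abstraction that high-level code depends on."""

    @abstractmethod
    def save(self, text: str) -> None: ...

class InMemoryStorage(Storage):
    """One concrete backend; a database or S3 backend could be swapped in."""

    def __init__(self) -> None:
        self.items: list[str] = []

    def save(self, text: str) -> None:
        self.items.append(text)

def publish_report(storage: Storage, body: str) -> None:
    # Open for extension, closed for modification: any new Storage
    # subclass works here without touching this function.
    storage.save(body)

store = InMemoryStorage()
publish_report(store, "quarterly numbers")
assert store.items == ["quarterly numbers"]
```

The same structure also satisfies Liskov substitution (any `Storage` subclass can stand in) and keeps `publish_report` to a single responsibility.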
The CUPID principles (from the lightning talk "Why Every Single SOLID Principle is Wrong") take a descriptive rather than a prescriptive approach.
The five CUPID properties are:
- Composable: plays well with others
- Unix philosophy: does one thing well
- Predictable: does what you expect
- Idiomatic: feels natural
- Domain-based: the solution domain models the problem domain in language and structure
These principles guide better code creation, emphasizing maintainability and extensibility over perfection. They are guidelines and good advice rather than hard rules, not natural laws like Isaac Newton's Philosophiæ Naturalis Principia Mathematica.
In refactoring, remember that perfection can impede progress and "perfect is the opposite of done". Refactoring is an iterative process that leads to better code incrementally. Code doesn't need to be perfect to be useful; focus on creating code that is maintainable and extensible, improving it over time rather than striving for perfection all at once.
“Talk is cheap. Show me the code.”
― Linus Torvalds
While most of my 'hypermodernizing' was done on proprietary code, there is a good example in pygeoif, which was brought up to the standard 10 years after the first version was released. The diff is not very helpful, almost every line was touched in the end, but you can compare the version 0.6 to the current implementation. FastKML is still actively in the process of modernizing and refactoring.
The list of tools mentioned in this article is far from exhaustive; you will find more on the awesome 🕶️ lists:
- Awesome Python Code Formatters
- Awesome Flake8 Extensions
- Awesome Python Typing
- Awesome Python Testing
- Awesome PyTest
- Bandit is a tool designed to find common security issues in Python code.
- GuardDog is a CLI tool that allows you to identify malicious PyPI packages.
- Safety checks Python dependencies for known security vulnerabilities and suggests the proper remediation for vulnerabilities detected.