Pytest with Eric

Learn to write production-level Python unit tests with Pytest

Picture this: you join a growing project with thousands of tests. Or your project has grown over time to hundreds of tests.

Different developers work on the codebase, each with different skills and styles, so best practices are applied inconsistently.

Scattered test files, inconsistent naming conventions, and overloaded fixtures and conftest.py files create more confusion than clarity.

Debugging even a single test failure takes hours. It’s messy and overwhelming, leaving you wondering where to begin and how to refactor.

Read more »

Have you ever found yourself trying to test code that integrates with an external 3rd party API, unsure how to move forward?

You’re mocking away but stuck with constant errors, wondering if you’re just testing the mocks instead of the integration.

While unit testing your code is simple, things get trickier when dealing with third-party dependencies.

External APIs are integral to applications, allowing them to exchange data and offer enhanced functionality.

But this comes with the responsibility of ensuring that your code can handle responses reliably — even when the data comes from a service outside your control.

How do you test these interactions to catch issues before they impact users?

Should you connect to the real API or create a controlled environment? Is mocking the right approach, or would a sandbox or fake be better?
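As a rough sketch of the mocking route (the function, URL, and payload below are made up for illustration), you can patch the HTTP call so the test never touches the real service:

```python
# test_weather.py - hypothetical example of mocking an external API call
from unittest.mock import patch

import requests


def get_temperature(city: str) -> float:
    """Calls a (made-up) third-party weather API."""
    response = requests.get(f"https://api.example.com/weather/{city}")
    return response.json()["temperature"]


@patch("requests.get")
def test_get_temperature(mock_get):
    # Control the response so the test never hits the real API
    mock_get.return_value.json.return_value = {"temperature": 21.5}

    assert get_temperature("london") == 21.5
    mock_get.assert_called_once_with("https://api.example.com/weather/london")
```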

Read more »

Debugging tests is frustrating.

It’s worse when you find out that your environment variables or config settings were wrong, causing tests to fail.

Even worse, you accidentally ran tests against your production database!

Managing configurations across multiple environments — development, testing, staging, and production — can quickly become a headache.

That’s why robust configuration management is essential, especially in frameworks like Flask or Django.

But how do you separate test settings from production?
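As a minimal, Flask-flavoured sketch (the config classes and URIs below are made up; Django has an equivalent pattern using settings modules), you can keep test settings in their own config object and wire it in through an app factory:

```python
# app.py - hypothetical Flask app factory with per-environment config
from flask import Flask


class ProdConfig:
    TESTING = False
    DATABASE_URI = "postgresql://prod-db/app"   # illustrative value


class TestConfig:
    TESTING = True
    DATABASE_URI = "sqlite:///:memory:"         # isolated, throwaway database


def create_app(config_object=ProdConfig) -> Flask:
    app = Flask(__name__)
    app.config.from_object(config_object)
    return app


# conftest.py - every test gets an app wired to the test settings
import pytest


@pytest.fixture
def app():
    return create_app(TestConfig)
```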

Read more »

Do you ever feel like your test coverage reports are cluttered with irrelevant code, making it hard to see what really matters?

Maybe you’ve got a folder full of Django migrations or some legacy code that’s no longer in use.

Sure, your coverage numbers look fine, but do they really reflect true test quality?

This noise can lead to inflated coverage percentages, giving you a false sense of security about your codebase.

Worse yet, it can slow you down when trying to spot untested areas that actually matter.

But don’t worry — there’s an easy solution!
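One simple option (a sketch, not necessarily the article's full approach) is coverage.py's `# pragma: no cover` directive for individual lines or blocks, with whole folders excluded via coverage configuration:

```python
# coverage.py honours "# pragma: no cover" - putting it on the def line
# excludes the entire function from the report
def debug_dump(state):  # pragma: no cover
    """Developer-only helper, deliberately excluded from coverage."""
    print(state)


# Whole folders (e.g. Django migrations) are usually excluded via coverage
# configuration instead, e.g. an "omit" entry such as "*/migrations/*" in
# your .coveragerc or the [tool.coverage.run] section of pyproject.toml.
```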

Read more »

Are you using Poetry to manage your Python projects but unsure how to generate coverage reports?

Or perhaps you’re wondering about the difference between test coverage and branch coverage, and how each affects code and test quality.

What about configuring coverage reports to run seamlessly in your CI/CD pipeline?
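As a quick, made-up illustration of the difference: the single test below exercises every line, so line coverage reads 100%, yet the "no discount" path through the `if` is never taken, which only branch coverage will flag. The pytest-cov flags in the comment are real; the module name is illustrative.

```python
# calculator.py (illustrative)
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price *= 0.9
    return price


# test_calculator.py
def test_member_gets_discount():
    assert apply_discount(100.0, True) == 90.0


# Run with branch coverage enabled via pytest-cov, e.g.:
#   poetry run pytest --cov=calculator --cov-branch --cov-report=term-missing
```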

Read more »

As a Python developer, you may have written non-deterministic code or code that has external dependencies.

One of the simplest ways to test this type of code is mocking.

But what happens when you use too much mocking? How does it affect code refactoring?

If you change the underlying storage layer or ORM (say, from SQL to NoSQL), do the mocks break? What about changes to implementation logic?

And if you’re mocking, how do you capture changes in the external system (e.g. the API provider changes its schema)?

Read more »

Imagine kicking off a lengthy test suite, only to find every test failed because a database wasn’t accessible or an API endpoint was down.

Frustrating, right?

Maybe you’ve experienced this scenario before, where a single failure cascades into multiple test failures, wasting time, resources, and increasing costs in your CI/CD pipelines.

But what if you could check critical resources and dependencies before running your tests, to avoid such situations?

What if you wanted your test suite to do something before collection, like ping a webhook or send a Slack message?
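As a rough sketch (the host name is made up), the `pytest_sessionstart` hook in `conftest.py` runs before collection and can abort the run early:

```python
# conftest.py - fail fast if a critical dependency is down
import socket

import pytest


def pytest_sessionstart(session):
    """Runs before test collection starts."""
    try:
        socket.create_connection(("db.example.internal", 5432), timeout=2).close()
    except OSError:
        pytest.exit("Database is unreachable - aborting the test run", returncode=1)

    # This is also a convenient place to ping a webhook or post to Slack,
    # e.g. with requests.post(<your webhook URL>, json={...}).
```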

Read more »

As an application developer, delivering a functional, well-tested app is a top priority.

Most apps use external dependencies, like a data storage layer — database or file-based storage.

How do you make sure your app behaves the way it’s supposed to — even for edge cases? Well, you write tests.

But the next question is how do you test these database interactions?

Do you use a test database or an in-memory one? Should you use a flat file instead to keep things light?

Or should you mock the database queries?
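One lightweight option, sketched below with the standard library only (the table and data are made up), is an in-memory SQLite database wrapped in a fixture:

```python
# test_users.py - minimal sketch using an in-memory SQLite database
import sqlite3

import pytest


@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    conn.close()


def test_insert_and_fetch_user(db):
    db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    row = db.execute("SELECT name FROM users WHERE id = 1").fetchone()
    assert row == ("alice",)
```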

Read more »

Have you ever wanted your tests to behave differently based on command-line inputs?

Perhaps connect to a different database — depending on whether tests are run locally or in a CI/CD pipeline?

Or maybe you want to access information about the test execution context in your tests, e.g. the test module name, file path, or scope?
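As a small sketch (the `--env` option and URLs are invented for illustration), `pytest_addoption` adds a command-line flag, and the built-in `request` fixture exposes both the option and the execution context:

```python
# conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption("--env", default="local", help="Environment to run tests against")


@pytest.fixture
def db_url(request):
    # Pick a database depending on how pytest was invoked,
    # e.g. `pytest --env=ci` in the pipeline vs plain `pytest` locally.
    env = request.config.getoption("--env")
    return {"local": "sqlite:///local.db", "ci": "postgresql://ci-db/app"}[env]


# test_context.py - the built-in `request` fixture also exposes execution context
def test_shows_context(request):
    print(request.node.name)   # test name
    print(request.fspath)      # path to the test file
    print(request.scope)       # scope of the current request
```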

Read more »

In the last article on Pytest API Testing with FastAPI, SQLAlchemy, and Postgres, we built a simple Mortgage Calculator.

We explored how to design and develop CRUD operations to create a property, associate a mortgage with it, and calculate the monthly payment depending on the mortgage type — repayment or interest only.

You learned how to define the ORM models, Pydantic Models, CRUD operations, calculations, routing, etc.

Lastly, you learned how to use the Swagger UI and import the OpenAPI JSON schema into Postman to run manual API checks.

While this is great for development, running these manual workflows every time is cumbersome.

Read more »

Building APIs is easy. Testing them is harder.

API testing involves multiple components and strays into the realm of integration testing.

APIs form an interface between real-world consumers of data and a data store.

So how do you test this interaction of components?

How do you test that your API correctly handles incoming payloads and returns the right values?

What about making sure your API returns correct HTTP response codes?

Should you use a real database or mock the queries? Does that create a strong coupling with the source code?

What about an in-memory database like SQLite or TinyDB?
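As a minimal sketch using FastAPI’s TestClient (the app and route below are made up, and assume FastAPI and its test client dependencies are installed), you can exercise payloads and response codes without a running server:

```python
# test_api.py
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


@app.get("/properties/{property_id}")
def get_property(property_id: int):
    return {"id": property_id, "price": 250_000}


client = TestClient(app)


def test_get_property_returns_200_and_payload():
    response = client.get("/properties/1")
    assert response.status_code == 200
    assert response.json() == {"id": 1, "price": 250_000}
```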

Read more »

As an SDET or QA Engineer, how do you share test results with your wider team?

Pytest-HTML and terminal reporting are fine for highly technical people.

But what about less technical colleagues like project managers or delivery leads?

How can you communicate with them effectively and bridge the gap between test automation, developers, and project managers?

That’s exactly why the Allure Report was developed.
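A minimal sketch, assuming the allure-pytest plugin and the Allure CLI are installed (the test itself is just a placeholder):

```python
# test_login.py
import allure


@allure.title("User can log in with valid credentials")
def test_login():
    assert 1 + 1 == 2  # placeholder for the real check


# Generate results and open the interactive report:
#   pytest --alluredir=allure-results
#   allure serve allure-results
```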

Read more »

You’ve probably come across or used tools like Cypress or Selenium in test automation.

These tools, though powerful, are dated, slow, and flaky.

Enter the new kid on the block, Playwright by Microsoft.

Have you heard about Playwright but have yet to use it? Does it integrate well with Pytest?

How do you configure it, and why should you even consider using it over Selenium or Cypress?
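A minimal sketch, assuming the pytest-playwright plugin and browsers are installed (the URL and expected title are illustrative):

```python
# test_homepage.py
from playwright.sync_api import Page, expect


def test_homepage_has_title(page: Page):
    # `page` is a fixture provided by the pytest-playwright plugin
    page.goto("https://example.com")
    expect(page).to_have_title("Example Domain")


# Run (after installing browsers with `playwright install`), e.g.:
#   pytest --browser chromium --headed
```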

Read more »

Testing Python code is crucial for improving code quality and detecting bugs or unintended behavior.

But what about the test results?

How do you share the test results with colleagues, managers, and QA teams, or send them to an external system?

One way is to build an API and POST the results, take screenshots, or use some other complex system. None of these are very user-friendly options.

But there’s an easier way. A user-friendly one.

Read more »

Imagine your project contains hundreds, if not thousands, of tests. When you run pytest, the test runner executes them sequentially, taking ages to complete.

What if there was a better way? A way to use resources effectively, run tests in parallel, and reduce both the overall test execution time and the duration of your CI/CD pipeline.

This would make testing more efficient, provide faster feedback on code changes, and improve the overall development process, all while reducing your CI/CD running costs.

Read more »

Have you ever found it challenging trying to run Pytest or Python scripts via Poetry?

Perhaps Poetry fails to detect your virtual environment or uses the default Pytest version instead of the one you want.

How do you ensure Poetry runs your code within the virtual environment?

What about handling dependency management and lock files in Poetry? How do you group dependencies and manage them effectively?

Read more »

As a software developer, it’s not uncommon for your code to depend on external systems, databases, or APIs.

So how do you write tests for code that relies heavily on systems that are totally out of your control?

Perhaps you’ve heard of mocking, patching, or even stubbing, but have little to no idea what they mean or how to use them.

Should you write mocks, or patches? What’s the difference between them? Which one should you use?

This can be quite confusing, especially if you’re new to testing.
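As a quick, self-contained sketch of the difference: a Mock is a stand-in object you build and inject yourself, while patch temporarily swaps out a real attribute wherever it is looked up:

```python
import os
from unittest.mock import Mock, patch


def notify(sender, message):
    sender.send(message)  # depends on some sender object passed in


def test_with_mock():
    # Mock: build the stand-in yourself and inject it
    fake_sender = Mock()
    notify(fake_sender, "hello")
    fake_sender.send.assert_called_once_with("hello")


def test_with_patch():
    # patch: temporarily replace a real attribute for the duration of the block
    with patch("os.getcwd", return_value="/fake/dir") as fake_cwd:
        assert os.getcwd() == "/fake/dir"
        fake_cwd.assert_called_once_with()
```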

Read more »

You know that annoying feeling when you’re trying to run your code and unrelated warnings pop up?

Like a DeprecationWarning from an external library that you have no control over.

While useful, warnings can clutter your console output and obscure important results.

How do you silence warnings in Pytest? Should you disable ALL warnings? What about important ones?
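A small sketch of the per-test option, using pytest’s filterwarnings marker (the helper function is made up):

```python
# test_legacy.py - silence a specific warning for one test rather than all of them
import warnings

import pytest


def old_helper():
    warnings.warn("use new_helper() instead", DeprecationWarning)
    return 42


@pytest.mark.filterwarnings("ignore::DeprecationWarning")
def test_old_helper_still_works():
    assert old_helper() == 42


# Suite-wide rules belong in your pytest configuration instead, via the
# `filterwarnings` ini option (e.g. in pytest.ini or pyproject.toml).
```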

Read more »

In modern computing, it’s a common belief that computers execute tasks in parallel, effortlessly juggling multiple operations at once.

In reality, a single CPU core can only execute one task at a time; much of what looks like parallelism is rapid switching between tasks.

But what about the everyday applications that form the backbone of our digital lives?

Asynchronous programming, or async, allows a program to use its time efficiently by executing other tasks while one is waiting. This is also called concurrency.

When it comes to testing async code, things can get a bit tricky.

How can you ensure your async code behaves as expected? How do you test the interaction of different async tasks and functions without getting tangled in the complexities of event loops and await statements?
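A minimal sketch, assuming the pytest-asyncio plugin is installed (the coroutine is a stand-in for a real async call):

```python
# test_async.py
import asyncio

import pytest


async def fetch_data():
    await asyncio.sleep(0.01)  # stand-in for a real async call
    return {"status": "ok"}


@pytest.mark.asyncio
async def test_fetch_data():
    result = await fetch_data()
    assert result["status"] == "ok"
```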

Read more »

Have you ever had to wade through hundreds or thousands of tests to run a few specific ones?

Running pytest in the terminal executes ALL the tests, which not only takes a long time but is also inefficient.

You could run the tests in a specific directory, but that too carries overhead and is hard to do repeatedly across different directories.

So how do you filter tests in Pytest? How can you tell Pytest to run only the tests you’re interested in? Something like a regular expression?
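As a small sketch (the marker name and tests are made up), markers plus the -k and -m options let you select exactly what runs:

```python
# test_checkout.py - markers let you group tests; -k/-m let you select them
import pytest


@pytest.mark.slow
def test_full_checkout_flow():
    ...


def test_checkout_price_calculation():
    ...


# Select by name expression or by marker from the command line, e.g.:
#   pytest -k "price"       # run tests whose name matches the expression
#   pytest -m "not slow"    # skip tests marked as slow
# (custom markers like "slow" should be registered in your pytest config)
```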

Read more »