To write a test for sum(), you would check the output of sum() against a known output.
For example, here’s how you check that the sum() of the numbers (1, 2, 3) equals 6:
>>> assert sum([1, 2, 3]) == 6, "Should be 6"
This will not output anything on the REPL because the values are correct.
If the result from sum() is incorrect, this will fail with an AssertionError and the message "Should be 6". Try an assertion statement again with the wrong values to see an AssertionError:
>>> assert sum([1, 1, 1]) == 6, "Should be 6"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError: Should be 6
In the REPL, you are seeing the raised AssertionError because the result of sum() does not match 6.
Instead of testing on the REPL, you’ll want to put this into a new Python file called test_sum.py and execute it again:
def test_sum():
    assert sum([1, 2, 3]) == 6, "Should be 6"

if __name__ == "__main__":
    test_sum()
    print("Everything passed")
Now you have written a test case, an assertion, and an entry point (the command line). You can now execute this at the command line:
$ python test_sum.py
Everything passed
You can see the successful result, Everything passed.
In Python, sum() accepts any iterable as its first argument. You tested with a list. Now test with a tuple as well. Create a new file called test_sum_2.py with the following code:
def test_sum():
    assert sum([1, 2, 3]) == 6, "Should be 6"

def test_sum_tuple():
    assert sum((1, 2, 2)) == 6, "Should be 6"

if __name__ == "__main__":
    test_sum()
    test_sum_tuple()
    print("Everything passed")
When you execute test_sum_2.py, the script will give an error because the sum() of (1, 2, 2) is 5, not 6. The result of the script gives you the error message, the line of code, and the traceback:
$ python test_sum_2.py
Traceback (most recent call last):
File "test_sum_2.py", line 9, in <module>
test_sum_tuple()
File "test_sum_2.py", line 5, in test_sum_tuple
assert sum((1, 2, 2)) == 6, "Should be 6"
AssertionError: Should be 6
Here you can see how a mistake in your code gives an error on the console with some information on where the error was and what the expected result was.
Writing tests in this way is okay for a simple check, but what if more than one fails? This is where test runners come in. The test runner is a special application designed for running tests, checking the output, and giving you tools for debugging and diagnosing tests and applications.
Choosing a Test Runner
There are many test runners available for Python. The one built into the Python standard library is called unittest. In this tutorial, you will be using unittest test cases and the unittest test runner. The principles of unittest are easily portable to other frameworks. The three most popular test runners are:
●unittest
●nose or nose2
●pytest
Choosing the best test runner for your requirements and level of experience is important.
unittest
unittest has been built into the Python standard library since version 2.1. You’ll probably see it in commercial Python applications and open-source projects.
unittest contains both a testing framework and a test runner. unittest has some important requirements for writing and executing tests.
unittest requires that:
●You put your tests into classes as methods
●You use a series of special assertion methods in the unittest.TestCase class instead of the built-in assert statement
To convert the earlier example to a unittest test case, you would have to:
1. Import unittest from the standard library
2. Create a class called TestSum that inherits from the TestCase class
3. Convert the test functions into methods by adding self as the first argument
4. Change the assertions to use the self.assertEqual() method on the TestCase class
5. Change the command-line entry point to call unittest.main()
Follow those steps by creating a new file test_sum_unittest.py with the following code:
import unittest


class TestSum(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6, "Should be 6")

    def test_sum_tuple(self):
        self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")


if __name__ == '__main__':
    unittest.main()
If you execute this at the command line, you’ll see one success (indicated with .) and one failure (indicated with F):
$ python test_sum_unittest.py
.F
======================================================================
FAIL: test_sum_tuple (__main__.TestSum)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_sum_unittest.py", line 9, in test_sum_tuple
self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")
AssertionError: Should be 6
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)
You have just executed two tests using the unittest test runner.
Note: Be careful if you’re writing test cases that need to execute in both Python 2 and 3. In Python 2.7 and below, the newer unittest features are provided by a separate backport package called unittest2. If you simply import from unittest, you will get different versions with different features between Python 2 and 3.
For more information on unittest, you can explore the unittest Documentation.
nose
You may find that, over time, as you write hundreds or even thousands of tests for your application, it becomes increasingly hard to understand and use the output from unittest.
nose is compatible with any tests written using the unittest framework and can be used as a drop-in replacement for the unittest test runner. The development of nose as an open-source application fell behind, and a fork called nose2 was created. If you’re starting from scratch, it is recommended that you use nose2 instead of nose.
To get started with nose2, install nose2 from PyPI and execute it on the command line. nose2 will try to discover all test scripts named test*.py and test cases inheriting from unittest.TestCase in your current directory:
$ pip install nose2
$ python -m nose2
.F
======================================================================
FAIL: test_sum_tuple (__main__.TestSum)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_sum_unittest.py", line 9, in test_sum_tuple
self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")
AssertionError: Should be 6
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)
You have just executed the test you created in test_sum_unittest.py from the nose2 test runner. nose2 offers many command-line flags for filtering the tests that you execute. For more information, you can explore the Nose 2 documentation.
pytest
pytest supports execution of unittest test cases. The real advantage of pytest comes by writing pytest test cases. pytest test cases are a series of functions in a Python file starting with the name test_.
pytest has some other great features:
●Support for the built-in assert statement instead of using special self.assert*() methods
●Support for filtering for test cases
●Ability to rerun from the last failing test
●An ecosystem of hundreds of plugins to extend the functionality
Writing the TestSum test case example for pytest would look like this:
def test_sum():
    assert sum([1, 2, 3]) == 6, "Should be 6"

def test_sum_tuple():
    assert sum((1, 2, 2)) == 6, "Should be 6"
You have dropped the TestCase, any use of classes, and the command-line entry point.
More information can be found at the Pytest Documentation Website.
Writing Your First Test
Let’s bring together what you’ve learned so far and, instead of testing the built-in sum() function, test a simple implementation of the same requirement.
Create a new project folder and, inside that, create a new folder called my_sum. Inside my_sum, create an empty file called __init__.py. Creating the __init__.py file means that the my_sum folder can be imported as a module from the parent directory.
Your project folder should look like this:
project/
│
└── my_sum/
└── __init__.py
Open up my_sum/__init__.py and create a new function called sum(), which takes an iterable (a list, tuple, or set) and adds the values together:
def sum(arg):
    total = 0
    for val in arg:
        total += val
    return total
This code example creates a variable called total, iterates over all the values in arg, and adds them to total. It then returns the result once the iterable has been exhausted.
Where to Write the Test
To get started writing tests, you can simply create a file called test.py, which will contain your first test case. Because the file will need to be able to import your application to be able to test it, you want to place test.py above the package folder, so your directory tree will look something like this:
project/
│
├── my_sum/
│ └── __init__.py
|
└── test.py
You’ll find that, as you add more and more tests, your single file will become cluttered and hard to maintain, so you can create a folder called tests/ and split the tests into multiple files. It is convention to ensure each file starts with test_ so all test runners will assume that Python file contains tests to be executed. Some very large projects split tests into more subdirectories based on their purpose or usage.
Note: What if your application is a single script?
You can import any attributes of the script, such as classes, functions, and variables by using the built-in __import__() function. Instead of from my_sum import sum, you can write the following:
target = __import__("my_sum")
sum = target.sum
The benefit of using __import__() is that you don’t have to turn your project folder into a package, and you can specify the file name. This is also useful if your filename collides with any standard library packages. For example, math.py would collide with the math module.
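As a quick sketch of how __import__() behaves, here it is used with the standard-library json module instead of my_sum, so the snippet stands alone:

```python
# __import__() takes the module name (without the .py extension) and
# returns the module object; attributes are then pulled off by hand.
target = __import__("json")
loads = target.loads

parsed = loads('{"answer": 42}')
print(parsed["answer"])  # 42
```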
How to Structure a Simple Test
In this example, you’re testing sum(). There are many behaviors in sum() you could check, such as:
●Can it sum a list of whole numbers (integers)?
●Can it sum a tuple or set?
●Can it sum a list of floats?
●What happens when you provide it with a bad value, such as a single integer or a string?
●What happens when one of the values is negative?
The simplest test would be a list of integers. Create a file called test.py with the following Python code:
import unittest

from my_sum import sum


class TestSum(unittest.TestCase):
    def test_list_int(self):
        """
        Test that it can sum a list of integers
        """
        data = [1, 2, 3]
        result = sum(data)
        self.assertEqual(result, 6)


if __name__ == '__main__':
    unittest.main()
This code example:
1. Imports sum() from the my_sum package you created
2. Defines a new test case class called TestSum, which inherits from unittest.TestCase
3. Defines a test method, .test_list_int(), to test a list of integers. The method .test_list_int() will:
●Declare a variable data with a list of numbers (1, 2, 3)
●Assign the result of my_sum.sum(data) to a result variable
●Assert that the value of result equals 6 by using the .assertEqual() method on the unittest.TestCase class
4. Defines a command-line entry point, which runs the unittest test runner .main()
If you’re unsure what self is or how .assertEqual() is defined, you can brush up on your object-oriented programming with Python 3 Object-Oriented Programming.
How to Write Assertions
The last step of writing a test is to validate the output against a known response, as you did in the sum() example.
unittest comes with lots of methods to assert on the values, types, and existence of variables. Here are some of the most commonly used methods:
| Method | Equivalent to |
|---|---|
| .assertEqual(a, b) | a == b |
| .assertTrue(x) | bool(x) is True |
| .assertFalse(x) | bool(x) is False |
| .assertIs(a, b) | a is b |
| .assertIsNone(x) | x is None |
| .assertIn(a, b) | a in b |
| .assertIsInstance(a, b) | isinstance(a, b) |
.assertIs(), .assertIsNone(), .assertIn(), and .assertIsInstance() all have opposite methods, named .assertIsNot(), and so forth.
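As a minimal sketch, the methods in the table can be exercised in a single self-contained test case (the values here are illustrative):

```python
import unittest

class TestAssertionMethods(unittest.TestCase):
    def test_examples(self):
        # Each assertion below passes; compare with the
        # "Equivalent to" column in the table above.
        self.assertEqual(2 + 2, 4)            # a == b
        self.assertTrue("non-empty")          # bool(x) is True
        self.assertFalse([])                  # bool(x) is False
        self.assertIsNone(None)               # x is None
        self.assertIn(3, [1, 2, 3])           # a in b
        self.assertIsInstance("abc", str)     # isinstance(a, b)

# Run the single test method directly and report the outcome.
result = unittest.TestResult()
TestAssertionMethods("test_examples").run(result)
print(result.wasSuccessful())  # True
```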
Executing Your First Test
Executing Test Runners
The Python application that executes your test code, checks the assertions, and gives you test results in your console is called the test runner. At the bottom of test.py, you added this small snippet of code:
if __name__ == '__main__':
unittest.main()
This is a command line entry point. It means that if you execute the script alone by running python test.py at the command line, it will call unittest.main(). This executes the test runner by discovering all classes in this file that inherit from unittest.TestCase.
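Roughly speaking, unittest.main() is equivalent to loading the TestCase classes into a suite and handing it to a runner yourself. A sketch of that equivalence:

```python
import unittest

class TestSum(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6)

# What unittest.main() does, approximately: collect the TestCase
# subclasses into a suite and hand the suite to a test runner.
loader = unittest.defaultTestLoader
suite = loader.loadTestsFromTestCase(TestSum)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```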
This is one of many ways to execute the unittest test runner. When you have a single test file named test.py, calling python test.py is a great way to get started.
Another way is using the unittest command line. Try this:
$ python -m unittest test
This will execute the same test module (called test) via the command line.
You can provide additional options to change the output. One of those is -v for verbose. Try that next:
$ python -m unittest -v test
test_list_int (test.TestSum) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.000s
This executed the one test inside test.py and printed the results to the console. Verbose mode listed the names of the tests it executed first, along with the result of each test.
Instead of providing the name of a module containing tests, you can request an auto-discovery using the following:
$ python -m unittest discover
This will search the current directory for any files named test*.py and attempt to test them.
Once you have multiple test files, as long as you follow the test*.py naming pattern, you can provide the name of the directory instead by using the -s flag and the name of the directory:
$ python -m unittest discover -s tests
unittest will run all tests in a single test plan and give you the results.
Lastly, if your source code is not in the directory root and contained in a subdirectory, for example in a folder called src/, you can tell unittest where to execute the tests so that it can import the modules correctly with the -t flag:
$ python -m unittest discover -s tests -t src
unittest will change to the src/ directory, scan for all test*.py files inside the tests directory, and execute them.
Understanding Test Output
In this example, sum() should be able to accept other lists of numeric types, like fractions.
At the top of the test.py file, add an import statement to import the Fraction type from the fractions module in the standard library:
from fractions import Fraction
Now add a test with an assertion expecting the incorrect value, in this case expecting the sum of 1/4, 1/4, and 2/5 to be 1:
import unittest

from fractions import Fraction
from my_sum import sum


class TestSum(unittest.TestCase):
    def test_list_int(self):
        """
        Test that it can sum a list of integers
        """
        data = [1, 2, 3]
        result = sum(data)
        self.assertEqual(result, 6)

    def test_list_fraction(self):
        """
        Test that it can sum a list of fractions
        """
        data = [Fraction(1, 4), Fraction(1, 4), Fraction(2, 5)]
        result = sum(data)
        self.assertEqual(result, 1)


if __name__ == '__main__':
    unittest.main()
If you execute the tests again with python -m unittest test, you should see the following output:
$ python -m unittest test
F.
======================================================================
FAIL: test_list_fraction (test.TestSum)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 21, in test_list_fraction
self.assertEqual(result, 1)
AssertionError: Fraction(9, 10) != 1
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)
In the output, you’ll see the following information:
The first line shows the execution results of all the tests, one failed (F) and one passed (.).
The FAIL entry shows some details about the failed test:
●The test method name (test_list_fraction)
●The test module (test) and the test case (TestSum)
●A traceback to the failing line
●The details of the assertion with the expected result (1) and the actual result (Fraction(9, 10))
Remember, you can add extra information to the test output by adding the -v flag to the python -m unittest command.
Running Your Tests From PyCharm
If you’re using the PyCharm IDE, you can run unittest or pytest by following these steps:
1. In the Project tool window, select the tests directory.
2. On the context menu, choose the run command for unittest. For example, choose Run ‘Unittests in my Tests…’.
This will execute unittest in a test window and give you the results within PyCharm:
More information is available on the PyCharm Website.
Running Your Tests From Visual Studio Code
If you’re using the Microsoft Visual Studio Code IDE, support for unittest, nose, and pytest execution is built into the Python plugin.
If you have the Python plugin installed, you can set up the configuration of your tests by opening the Command Palette with Ctrl+Shift+P and typing “Python test”. You will see a range of options:
Once you select a configuration, you’ll be prompted for the test framework (unittest) and the home directory (.).
Once this is set up, you will see the status of your tests at the bottom of the window, and you can quickly access the test logs and run the tests again by clicking on these icons:
Testing for Web Frameworks Like Django and Flask
Both Django and Flask make testing easy by providing a test framework based on unittest. You can continue writing tests in the way you’ve been learning but execute them slightly differently.
How to Use the Django Test Runner
The Django startapp template will have created a tests.py file inside your application directory. If you don’t have that already, you can create it with the following contents:
from django.test import TestCase


class MyTestCase(TestCase):
    # Your test methods
The major difference with the examples so far is that you need to inherit from the django.test.TestCase instead of unittest.TestCase. These classes have the same API, but the Django TestCase class sets up all the required state to test.
To execute your test suite, instead of using unittest at the command line, you use manage.py test:
$ python manage.py test
If you want multiple test files, replace tests.py with a folder called tests, insert an empty file inside called __init__.py, and create your test_*.py files. Django will discover and execute these.
More information is available at the Django Documentation Website.
How to Use unittest and Flask
Flask requires that the app be imported and then set in test mode. You can instantiate a test client and use it to make requests to any routes in your application. All of the test client instantiation is done in the setUp method of your test case. In the following example, my_app is the name of the application. Don’t worry if you don’t know what setUp does. You’ll learn about that in the More Advanced Testing Scenarios section.
The code within your test file should look like this:
import my_app
import unittest


class MyTestCase(unittest.TestCase):
    def setUp(self):
        my_app.app.testing = True
        self.app = my_app.app.test_client()

    def test_home(self):
        result = self.app.get('/')
        # Make your assertions
You can then execute the test cases using the python -m unittest discover command.
More information is available at the Flask Documentation Website.
Handling Expected Failures
Earlier, when you made a list of scenarios to test sum(), a question came up:
What happens when you provide it with a bad value, such as a single integer or a string?
In this case, you would expect sum() to throw an error. When it does throw an error, that would cause the test to fail.
There’s a special way to handle expected errors. You can use .assertRaises() as a context-manager, then inside the with block execute the test steps:
import unittest

from fractions import Fraction
from my_sum import sum


class TestSum(unittest.TestCase):
    def test_list_int(self):
        """
        Test that it can sum a list of integers
        """
        data = [1, 2, 3]
        result = sum(data)
        self.assertEqual(result, 6)

    def test_list_fraction(self):
        """
        Test that it can sum a list of fractions
        """
        data = [Fraction(1, 4), Fraction(1, 4), Fraction(2, 5)]
        result = sum(data)
        self.assertEqual(result, 1)

    def test_bad_type(self):
        data = "banana"
        with self.assertRaises(TypeError):
            result = sum(data)


if __name__ == '__main__':
    unittest.main()
This test case will now only pass if sum(data) raises a TypeError. You can replace TypeError with any exception type you choose.
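.assertRaises() also has a non-context-manager form: you pass the callable and its arguments, and unittest calls it for you. A small self-contained sketch (using the built-in sum() so it runs on its own):

```python
import unittest

class TestSumErrors(unittest.TestCase):
    def test_bad_type(self):
        # assertRaises calls sum("banana") itself and only passes
        # if the expected TypeError is raised.
        self.assertRaises(TypeError, sum, "banana")

result = unittest.TestResult()
TestSumErrors("test_bad_type").run(result)
print(result.wasSuccessful())  # True
```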
Isolating Behaviors in Your Application
There are some simple techniques you can use to test parts of your application that have many side effects:
●Refactoring code to follow the Single Responsibility Principle
●Mocking out any method or function calls to remove side effects
●Using integration testing instead of unit testing for this piece of the application
If you’re not familiar with mocking, see Python CLI Testing for some great examples.
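As an illustration of the second technique, here is a hypothetical function with a network side effect, patched out with the standard library’s unittest.mock. The names ExchangeService and price_in_euros are invented for this sketch:

```python
import unittest
from unittest import mock

class ExchangeService:
    def get_rate(self):
        # Imagine this calls a remote API: slow, flaky, and
        # unsuitable to hit from a unit test.
        raise RuntimeError("network call")

def price_in_euros(dollars, service):
    return dollars * service.get_rate()

class TestPricing(unittest.TestCase):
    def test_price_in_euros(self):
        # Replace the side-effecting method with a fixed return
        # value for the duration of the test.
        with mock.patch.object(ExchangeService, "get_rate", return_value=0.9):
            self.assertEqual(price_in_euros(100, ExchangeService()), 90.0)

result = unittest.TestResult()
TestPricing("test_price_in_euros").run(result)
print(result.wasSuccessful())  # True
```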
Writing Integration Tests
A simple way to separate unit and integration tests is to put them in different folders:
project/
│
├── my_app/
│ └── __init__.py
│
└── tests/
|
├── unit/
| ├── __init__.py
| └── test_sum.py
|
└── integration/
├── __init__.py
└── test_integration.py
There are many ways to execute only a select group of tests. The start-directory flag, -s, can be added to unittest discover with the path containing the tests:
$ python -m unittest discover -s tests/integration
unittest will give you the results of all the tests within the tests/integration directory.
Testing Data-Driven Applications
If your integration tests depend on backend test data, a best practice is to store it in a folder within your integration testing folder called fixtures to indicate that it contains test data. Then, within your tests, you can load the data and run the test.
Here’s an example of that structure if the data consisted of JSON files:
project/
│
├── my_app/
│ └── __init__.py
│
└── tests/
|
└── unit/
| ├── __init__.py
| └── test_sum.py
|
└── integration/
|
├── fixtures/
| ├── test_basic.json
| └── test_complex.json
|
├── __init__.py
└── test_integration.py
Within your test case, you can use the .setUp() method to load the test data from a fixture file in a known path and execute many tests against that test data. Remember you can have multiple test cases in a single Python file, and the unittest discovery will execute both. You can have one test case for each set of test data:
import unittest


class TestBasic(unittest.TestCase):
    def setUp(self):
        # Load test data
        self.app = App(database='fixtures/test_basic.json')

    def test_customer_count(self):
        self.assertEqual(len(self.app.customers), 100)

    def test_existence_of_customer(self):
        customer = self.app.get_customer(id=10)
        self.assertEqual(customer.name, "Org XYZ")
        self.assertEqual(customer.address, "10 Red Road, Reading")


class TestComplexData(unittest.TestCase):
    def setUp(self):
        # Load test data
        self.app = App(database='fixtures/test_complex.json')

    def test_customer_count(self):
        self.assertEqual(len(self.app.customers), 10000)

    def test_existence_of_customer(self):
        customer = self.app.get_customer(id=9999)
        self.assertEqual(customer.name, u"バナナ")
        self.assertEqual(customer.address, "10 Red Road, Akihabara, Tokyo")


if __name__ == '__main__':
    unittest.main()
If your application depends on data from a remote location, like a remote API, you’ll want to ensure your tests are repeatable. Having your tests fail because the API is offline or there is a connectivity issue could slow down development. In these types of situations, it is best practice to store remote fixtures locally so they can be recalled and sent to the application.
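A minimal sketch of that practice using only the standard library: a hypothetical ApiClient.fetch_customers() that would normally hit a remote API is replaced with a locally stored copy of a real response. All names here are invented for the example:

```python
import json
import unittest
from unittest import mock

class ApiClient:
    def fetch_customers(self):
        # In real code this would perform an HTTP request.
        raise ConnectionError("remote API unavailable")

# A response payload captured once from the real API and stored
# locally as a fixture, so tests never depend on connectivity.
LOCAL_FIXTURE = json.loads('[{"id": 1, "name": "Org XYZ"}]')

class TestCustomersOffline(unittest.TestCase):
    def test_customer_name(self):
        # Replay the stored fixture instead of hitting the network.
        with mock.patch.object(ApiClient, "fetch_customers",
                               return_value=LOCAL_FIXTURE):
            customers = ApiClient().fetch_customers()
            self.assertEqual(customers[0]["name"], "Org XYZ")

result = unittest.TestResult()
TestCustomersOffline("test_customer_name").run(result)
print(result.wasSuccessful())  # True
```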
The requests library has a complementary package called responses that gives you ways to create response fixtures and save them in your test folders. Find out more on their GitHub Page.
Testing in Multiple Environments
Installing Tox
Tox is an application for automating testing in multiple environments. It is available on PyPI as a package to install via pip:
$ pip install tox
Configuring Tox for Your Dependencies
Now that you have Tox installed, it needs to be configured. Tox is configured via a tox.ini file in your project directory. Instead of writing it by hand, you can run the quickstart application:
$ tox-quickstart
The Tox configuration tool will ask you a few questions and create a file similar to the following in tox.ini:
[tox]
envlist = py27, py36

[testenv]
deps =

commands =
    python -m unittest discover
Before you can run Tox, it requires that you have a setup.py file in your application folder containing the steps to install your package. If you don’t have one, you can follow this guide on how to create a setup.py before you continue.
Alternatively, if your project is not for distribution on PyPI, you can skip this requirement by adding the following line in the tox.ini file under the [tox] heading:
[tox]
envlist = py27, py36
skipsdist=True
If you don’t create a setup.py, and your application has some dependencies from PyPI, you’ll need to specify those on a number of lines under the [testenv] section. For example, Django would require the following:
[testenv]
deps = django
Once you have completed that stage, you’re ready to run the tests.
Executing Tox
You can now execute Tox, and it will create two virtual environments: one for Python 2.7 and one for Python 3.6. The Tox directory is called .tox/. Within the .tox/ directory, Tox will execute python -m unittest discover against each virtual environment.
You can run this process by calling Tox at the command line:
$ tox
Tox will output the results of your tests against each environment. The first time it runs, Tox takes a little bit of time to create the virtual environments, but once it has, the second execution will be a lot faster.
Tox has some other useful flags. Run only a single environment, such as Python 3.6:
$ tox -e py36
Recreate the virtual environments, in case your dependencies have changed or site-packages is corrupt:
$ tox -r
Run Tox with less verbose output:
$ tox -q
Run Tox with more verbose output:
$ tox -v
More information on Tox can be found at the Tox Documentation Website.
Automating the Execution of Your Tests
A popular continuous integration service for Python projects is Travis CI. Once you have enabled it for your repository, create a file called .travis.yml with the following contents:
language: python
python:
  - "2.7"
  - "3.7"
install:
  - pip install -r requirements.txt
script:
  - python -m unittest discover
This configuration instructs Travis CI to:
1. Test against Python 2.7 and 3.7 (You can replace those versions with any you choose.)
2. Install all the packages you list in requirements.txt (You should remove this section if you don’t have any dependencies.)
3. Run python -m unittest discover to run the tests
Once you have committed and pushed this file, Travis CI will run these commands every time you push to your remote Git repository. You can check out the results on their website.
What’s Next
Introducing Linters Into Your Application
Tox and Travis CI both have configuration for a test command. The test command you have been using throughout this tutorial is python -m unittest discover.
You can provide one or many commands in all of these tools, and this option is there to enable you to add more tools that improve the quality of your application.
One such type of application is called a linter. A linter will look at your code and comment on it. It could give you tips about mistakes you’ve made, correct trailing spaces, and even predict bugs you may have introduced.
For more information on linters, read the Python Code Quality tutorial.
flake8
A popular linter that comments on the style of your code in relation to the PEP 8 specification is flake8.
You can install flake8 using pip:
$ pip install flake8
You can then run flake8 over a single file, a folder, or a pattern:
$ flake8 test.py
test.py:6:1: E302 expected 2 blank lines, found 1
test.py:23:1: E305 expected 2 blank lines after class or function definition, found 1
test.py:24:20: W292 no newline at end of file
You will see a list of errors and warnings for your code that flake8 has found.
flake8 is configurable on the command line or inside a configuration file in your project. If you wanted to ignore certain rules, like E305 shown above, you can set them in the configuration. flake8 will inspect a .flake8 file in the project folder or a setup.cfg file. If you decided to use Tox, you can put the flake8 configuration section inside tox.ini.
This example ignores the .git and __pycache__ directories as well as the E305 rule. Also, it sets the max line length to 90 instead of 80 characters. You will likely find that the default constraint of 79 characters for line-width is very limiting for tests, as they contain long method names, string literals with test values, and other pieces of data that can be longer. It is common to set the line length for tests to up to 120 characters:
[flake8]
ignore = E305
exclude = .git,__pycache__
max-line-length = 90
Alternatively, you can provide these options on the command line:
$ flake8 --ignore E305 --exclude .git,__pycache__ --max-line-length=90
A full list of configuration options is available on the Documentation Website.
You can now add flake8 to your CI configuration. For Travis CI, this would look as follows:
matrix:
  include:
    - python: "2.7"
      script: "flake8"
Travis will read the configuration in .flake8 and fail the build if any linting errors occur. Be sure to add the flake8 dependency to your requirements.txt file.
flake8 is a passive linter: it recommends changes, but you have to go and change the code. A more aggressive approach is a code formatter. Code formatters will change your code automatically to meet a collection of style and layout practices.
black is a very unforgiving formatter. It doesn’t have any configuration options, and it has a very specific style. This makes it great as a drop-in tool to put in your test pipeline.
Note: black requires Python 3.6+.
You can install black via pip:
$ pip install black
Then to run black at the command line, provide the file or directory you want to format:
$ black test.py
Keeping Your Test Code Clean
Your test code benefits from the same quality checks as your application code, so remember to run flake8 over your test code:
$ flake8 --max-line-length=120 tests/
Testing for Performance Degradation Between Changes
Python provides the timeit module, which can time functions a number of times and give you the distribution. This example will execute test() 100 times and print() the output:
def test():
    # ... your code
    pass

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test()", setup="from __main__ import test", number=100))
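To see a distribution rather than a single number, timeit.repeat() returns one timing per repetition. A runnable sketch (passing globals=globals() lets timeit find test() without the __main__ import incantation):

```python
import timeit

def test():
    return sum(range(1000))

# Run the 100-call timing five times; each element of `timings` is the
# total time for 100 calls, so the spread shows run-to-run noise.
# The minimum is usually the most stable number to compare.
timings = timeit.repeat("test()", number=100, repeat=5, globals=globals())
print(min(timings))
```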
Another option, if you decided to use pytest as a test runner, is the pytest-benchmark plugin. This provides a pytest fixture called benchmark. You can pass benchmark() any callable, and it will log the timing of the callable to the results of pytest.
You can install pytest-benchmark from PyPI using pip:
$ pip install pytest-benchmark
Then, you can add a test that uses the fixture and passes the callable to be executed:
def test_my_function(benchmark):
    result = benchmark(test)
Execution of pytest will now give you benchmark results:
More information is available at the Documentation Website.
Testing for Security Flaws in Your Application
Another check you will want to run on your application is for common security mistakes or vulnerabilities. You can install bandit from PyPI using pip:
$ pip install bandit
You can then pass the name of your application module with the -r flag, and it will give you a summary:
$ bandit -r my_sum
[main] INFO profile include tests: None
[main] INFO profile exclude tests: None
[main] INFO cli include tests: None
[main] INFO cli exclude tests: None
[main] INFO running on Python 3.5.2
Run started:2018-10-08 00:35:02.669550
Test results:
No issues identified.
Code scanned:
    Total lines of code: 5
    Total lines skipped (#nosec): 0

Run metrics:
    Total issues (by severity):
        Undefined: 0.0
        Low: 0.0
        Medium: 0.0
        High: 0.0
    Total issues (by confidence):
        Undefined: 0.0
        Low: 0.0
        Medium: 0.0
        High: 0.0
Files skipped (0):
As with flake8, the rules that bandit flags are configurable, and if there are any you wish to ignore, you can add the following section to your setup.cfg file with the options:
[bandit]
exclude: /test
tests: B101,B102,B301
More details are available at the GitHub Website.
Conclusion
Python has made testing accessible by building in the commands and libraries you need to validate your applications. Getting started with testing in Python needn’t be complicated: you can use unittest and write small, maintainable methods to validate your code.
As you learn more about testing and your application grows, you can consider switching to one of the other test frameworks, like pytest, and start to leverage more advanced features.
Thank you for reading. I hope you have a bug-free future with Python!
About Anthony Shaw
Anthony is an avid Pythonista and writes for Real Python. Anthony is a Fellow of the Python Software Foundation and a member of the Apache Software Foundation.
© 2012–2022 Real Python ⋅ Newsletter ⋅ Podcast ⋅ YouTube ⋅ Twitter ⋅ Facebook ⋅ Instagram ⋅ Python Tutorials ⋅ Search ⋅ Privacy Policy ⋅ Energy Policy ⋅ Advertise ⋅ Contact
❤️ Happy Pythoning!