There are four main approaches to testing your plugin, ranging in scope from unit tests to full integration tests. You may mix and match these approaches.
All approaches use [Pytest](🔗)-style tests, rather than [`unittest`](🔗)-style tests.

You must also install the distribution `pantsbuild.pants.testutil`. We recommend using the `pants_requirement()` macro to do this, similar to how you may have [installed Pants as a library](🔗) when writing your plugin. This will read your `pants_version` from `pants.toml` to ensure that you're using the correct distribution.

Pants's dependency inference will infer a dependency on this library whenever you import `pants.testutil`. You may also add the dependency explicitly in the `dependencies` field.
## Approach 1: normal unit tests
Often, you can factor out normal Python functions from your plugin that do not use the Rules API. These helpers can be tested like you would test any other Python code.
For example, some Pants rules take the type `InterpreterConstraints` as input. `InterpreterConstraints` has a factory method `merge_constraint_sets()` that we can test through a normal unit test.
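For instance, a minimal sketch (the import path assumes a recent Pants version, and the exact return type and string formatting of `merge_constraint_sets()` may differ):

```python
from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints


def test_merge_interpreter_constraints() -> None:
    # Two single-element constraint sets are ANDed into one combined constraint.
    merged = InterpreterConstraints.merge_constraint_sets(
        [["CPython==2.7.*"], ["CPython==3.6.*"]]
    )
    assert [str(constraint) for constraint in merged] == ["CPython==2.7.*,==3.6.*"]
```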
This approach can be especially useful for testing the Target API, such as testing custom validation you added to a `Field`.
How to create a `Target` in-memory

For Approaches #1 and #2, you will often want to pass a `Target` instance to your test, such as a `PythonTestTarget` instance.

To create a `Target` instance, choose which subclass you want, then pass a dictionary of the values you want to use, followed by an `Address` object. The dictionary corresponds to what you'd put in the BUILD file; any values that you leave off will use their default values.

The `Address` constructor's first argument is the path to the BUILD file; you can optionally define `target_name: str` if it is not the default `name`.

For example, given this target definition for `project/app:tgt`:
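(Both snippets below are illustrative sketches; the `python_test` target type, its fields, and the import paths are assumptions based on the standard Python backend.)

```python
# project/app/BUILD
python_test(
    name="tgt",
    source="app_test.py",
    timeout=120,
)
```

We would write:

```python
from pants.backend.python.target_types import PythonTestTarget
from pants.engine.addresses import Address

tgt = PythonTestTarget(
    {"source": "app_test.py", "timeout": 120},
    Address("project/app", target_name="tgt"),
)
```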
Note that we did not put `"name": "tgt"` in the dictionary. `name` is a special field that does not use the Target API. Instead, pass the `name` to the `target_name` argument in the `Address` constructor.

For Approach #3, you should instead use `rule_runner.write_files()` to write a BUILD file, followed by `rule_runner.get_target()`.

For Approach #4, you should use `setup_tmpdir()` to set up BUILD files.
## Approach 2: `run_rule_with_mocks()` (unit tests for rules)
`run_rule_with_mocks()` will run your rule's logic, but with each argument to your `@rule` provided explicitly by you and with mocks for any `await Get`s. This means that the test is fully mocked; for example, `run_rule_with_mocks()` will not actually run a `Process`, nor will it perform file system operations. This is useful when you want to test the inlined logic in your rule, but usually, you will want to use Approach #3.
To use `run_rule_with_mocks`, pass the `@rule` as its first arg, then `rule_args=[arg1, arg2, ...]` in the same order as the arguments to the `@rule`.
If your `@rule` has any `await Get`s or `await Effect`s, set the argument `mock_gets=[]` with `MockGet`/`MockEffect` objects corresponding to each of them. A `MockGet` takes three arguments: `output_type: Type`, `input_type: Type`, and `mock: Callable[[InputType], OutputType]`, which is a function that takes an instance of the `input_type` and returns an instance of the `output_type`.
For example, given this contrived rule to find all targets whose `sources` include a certain filename (find a "needle in the haystack"):
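(The rule below is an illustrative reconstruction. `HydratedSources`, `HydrateSourcesRequest`, and `SourcesField` are real Rules API types, though `SourcesField` is named `Sources` in older Pants versions; `FindNeedle` and `TargetsWithNeedle` are ad hoc example types.)

```python
from __future__ import annotations

from dataclasses import dataclass
from pathlib import PurePath

from pants.engine.collection import Collection
from pants.engine.rules import Get, MultiGet, rule
from pants.engine.target import HydratedSources, HydrateSourcesRequest, SourcesField, Target


@dataclass(frozen=True)
class FindNeedle:
    """A request to find all targets whose sources include `needle_filename`."""

    targets: tuple[Target, ...]
    needle_filename: str


class TargetsWithNeedle(Collection[Target]):
    pass


@rule
async def find_needle_in_haystack(find_needle: FindNeedle) -> TargetsWithNeedle:
    # Hydrate the sources of every candidate target in parallel.
    all_hydrated_sources = await MultiGet(
        Get(HydratedSources, HydrateSourcesRequest(tgt.get(SourcesField)))
        for tgt in find_needle.targets
    )
    return TargetsWithNeedle(
        tgt
        for tgt, hydrated in zip(find_needle.targets, all_hydrated_sources)
        if any(
            PurePath(fp).name == find_needle.needle_filename
            for fp in hydrated.snapshot.files
        )
    )
```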
We can write this test:
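(A sketch of the corresponding test. Constructing `Snapshot` and `HydratedSources` by hand varies across Pants versions, so the mock below is indicative rather than definitive; it assumes the `FindNeedle` and `find_needle_in_haystack` definitions above are importable.)

```python
from pants.engine.addresses import Address
from pants.engine.fs import EMPTY_DIGEST, Snapshot
from pants.engine.target import HydratedSources, HydrateSourcesRequest, MultipleSourcesField, Target
from pants.testutil.rule_runner import MockGet, run_rule_with_mocks


class MockTarget(Target):
    alias = "mock_target"
    core_fields = (MultipleSourcesField,)


def test_find_needle_in_haystack() -> None:
    tgt1 = MockTarget({"sources": ["needle.txt"]}, Address("", target_name="t1"))
    tgt2 = MockTarget({"sources": ["hay.txt"]}, Address("", target_name="t2"))

    def mock_hydrate_sources(request: HydrateSourcesRequest) -> HydratedSources:
        # Our rule only looks at `HydratedSources.snapshot.files`, so everything
        # else is mocked out. `Snapshot._unsafe_create` is test-only plumbing and
        # its availability depends on your Pants version.
        return HydratedSources(
            Snapshot._unsafe_create(EMPTY_DIGEST, list(request.field.value or ()), []),
            filespec={"includes": []},
            sources_type=None,
        )

    result = run_rule_with_mocks(
        find_needle_in_haystack,
        rule_args=[FindNeedle((tgt1, tgt2), "needle.txt")],
        mock_gets=[
            MockGet(
                output_type=HydratedSources,
                input_type=HydrateSourcesRequest,
                mock=mock_hydrate_sources,
            )
        ],
    )
    assert list(result) == [tgt1]
```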
### How to mock some common types
See the above tooltip about how to create a `Target` instance.
If your rule takes a `Subsystem` or `GoalSubsystem` as an argument, you can use the utilities `create_subsystem` and `create_goal_subsystem` like below. Note that you must explicitly provide all options read by your `@rule`; the default values will not be used.
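(A sketch assuming the `PyTest` and `FmtSubsystem` subsystems from the standard backends; the specific option names shown are illustrative and vary by Pants version.)

```python
from pants.backend.python.subsystems.pytest import PyTest
from pants.core.goals.fmt import FmtSubsystem
from pants.testutil.option_util import create_goal_subsystem, create_subsystem

# Every option your @rule reads must be given explicitly; defaults are not applied.
mock_subsystem = create_subsystem(PyTest, timeouts=True, timeout_default=60)
mock_goal_subsystem = create_goal_subsystem(FmtSubsystem, per_file_caching=False)
```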
If your rule takes `Console` as an argument, you can use the `mock_console` context manager like this:
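(A sketch, assuming the `mock_console` helper in `pants.testutil.rule_runner`; the captured output shown is illustrative.)

```python
from pants.testutil.rule_runner import RuleRunner, mock_console


def test_with_console() -> None:
    rule_runner = RuleRunner()
    with mock_console(rule_runner.options_bootstrapper) as (console, stdio_reader):
        # Pass `console` to run_rule_with_mocks() via rule_args, then inspect
        # what the rule printed.
        ...
        assert stdio_reader.get_stdout() == "expected output\n"
        assert not stdio_reader.get_stderr()
```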
If your rule takes `Workspace` as an argument, first create a `pants.testutil.rule_runner.RuleRunner()` instance in your individual test. Then, create a `Workspace` object with `Workspace(rule_runner.scheduler)`.
## Approach 3: `RuleRunner` (integration tests for rules)
`RuleRunner` allows you to run rules in an isolated environment, i.e. where you set up the rule graph and registered target types exactly how you want. `RuleRunner` will set up your rule graph and create a temporary build root. This is useful for integration tests that are more isolated and faster than Approach #4.

After setting up your isolated environment, you can run `rule_runner.request(Output, [input1, input2])`, e.g. `rule_runner.request(SourceFiles, [SourceFilesRequest([sources_field])])` or `rule_runner.request(TargetsWithNeedle, [FindNeedle(targets, "needle.txt")])`. This will cause Pants to "call" the relevant `@rule` to get the output type.
### Setting up the `RuleRunner`
First, you must set up a `RuleRunner` instance and activate the rules and target types you'll use in your tests. Set the argument `target_types` with a list of the `Target` types used in your tests, and set `rules` with a list of all the rules used transitively.
This means that you must register the rules you directly wrote, and also any rules that they depend on. Pants will automatically register some core rules for you, but leaves off most of them for better isolation of tests. If you're missing some rules, the rule graph will fail to be built.
Confusing rule graph error?

It can be confusing figuring out what's wrong when setting up a `RuleRunner`. We know the error messages are not ideal and are working on improving them. Please feel free to reach out on [Slack](🔗) for help with figuring out how to get things working.
What's with the `QueryRule`?

Normally, we don't use `QueryRule` because we're using the _asynchronous_ version of the Rules API, and Pants is able to parse your Python code to see how your rules are used. However, with tests, we are using the _synchronous_ version of the Rules API, so we need to give a hint to the engine about what requests we're going to make. Don't worry about filling in the `QueryRule` part yet. You'll add it later when writing `rule_runner.request()`.
Each test should create its own distinct `RuleRunner` instance. This is important for isolation between each test.
It's often convenient to define a [Pytest fixture](🔗) in each test file. This allows you to share a common `RuleRunner` setup, but get a new instance for each test.
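For example, a sketch of such a fixture (the target types and rules registered here are placeholders; register whatever your own rules need, plus the `QueryRule`s described later in this section):

```python
import pytest

from pants.backend.python.target_types import PythonSourceTarget, PythonTestTarget
from pants.core.util_rules import source_files
from pants.testutil.rule_runner import RuleRunner


@pytest.fixture
def rule_runner() -> RuleRunner:
    return RuleRunner(
        target_types=[PythonSourceTarget, PythonTestTarget],
        rules=[
            *source_files.rules(),
            # ...plus your own rules and any rules they transitively depend on.
        ],
    )
```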
If you want multiple distinct `RuleRunner` setups in your file, you can define multiple Pytest fixtures.
### Setting up the content and BUILD files
For most tests, you'll want to create files and BUILD files in your temporary build root. Use `rule_runner.write_files(files: dict[str, str])`.

This function will write the files to the correct location and also notify the engine that the files were created.
You can then use `rule_runner.get_target()` to have Pants read the BUILD file and give you back the corresponding `Target`.
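For example, a sketch (assuming your fixture registers the Python backend's target types so the `python_sources` alias is available):

```python
from pants.engine.addresses import Address
from pants.testutil.rule_runner import RuleRunner


def test_reads_build_file(rule_runner: RuleRunner) -> None:
    rule_runner.write_files(
        {
            "project/app.py": "print('hello world')\n",
            "project/BUILD": "python_sources(name='lib')\n",
        }
    )
    tgt = rule_runner.get_target(Address("project", target_name="lib"))
    assert tgt.address.target_name == "lib"
```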
To read any files that were created, use `rule_runner.build_root` as the first part of the path to ensure that the correct directory is read.
### Setting options
Often, you will want to set Pants options, such as activating a certain backend or setting a `--config` option.
To set options, call `rule_runner.set_options()` with a list of the arguments, e.g. `rule_runner.set_options(["--pytest-version=pytest>=6.0"])`.
You can also set the keyword argument `env: dict[str, str]`. If the option starts with `PANTS_`, it will change which options Pants uses. You can include any arbitrary environment variable here; some rules use the parent Pants process to read arbitrary env vars, e.g. the `--test-extra-env-vars` option, so this allows you to mock the environment in your test. Alternatively, use the keyword argument `env_inherit: set[str]` to set the specified environment variables using the test runner's environment, which is useful to set values like `PATH` which may vary across machines.
Warning: calling `rule_runner.set_options()` will override any options that were previously set, so you will need to register everything you want in a single call.
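For example, a single consolidated call might look like this (the flags and environment variables shown are illustrative):

```python
rule_runner.set_options(
    [
        "--backend-packages=pants.backend.python",
        "--pytest-version=pytest>=6.0",
    ],
    env={"MY_CUSTOM_VAR": "value"},  # e.g. a var read via --test-extra-env-vars
    env_inherit={"PATH", "PYENV_ROOT", "HOME"},
)
```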
### Running your rules
Now that you have your `RuleRunner` set up, along with any options and the content/BUILD files for your test, you can test that your rules work correctly.
Unlike Approach #2, you will not explicitly say which `@rule` you want to run. Instead, look at the return type of your `@rule`. Use `rule_runner.request(MyOutput, [input1, ...])`, where `MyOutput` is the return type.
`rule_runner.request()` is equivalent to how you would normally use `await Get(MyOutput, Input1, input1_instance)` in a rule (see [Concepts](🔗)). For example, if you would normally say `await Get(Digest, MergeDigests([digest1, digest2]))`, you'd instead say `rule_runner.request(Digest, [MergeDigests([digest1, digest2])])`.
You will also need to add a `QueryRule` to your `RuleRunner` setup, which gives a hint to the engine for what requests you are going to make. The `QueryRule` takes the same form as your `rule_runner.request()`, except that the inputs are types, rather than instances of those types.
For example, given this rule signature (from the above Approach #2 example):
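(Repeating the signature of the example rule sketched in Approach #2:)

```python
@rule
async def find_needle_in_haystack(find_needle: FindNeedle) -> TargetsWithNeedle:
    ...
```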
We could write this test:
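(An illustrative sketch; `MockTarget`, `FindNeedle`, `TargetsWithNeedle`, and `find_needle_in_haystack` are the example definitions from Approach #2, and the exact rules you must register depend on your Pants version.)

```python
import pytest

from pants.engine.addresses import Address
from pants.testutil.rule_runner import QueryRule, RuleRunner


@pytest.fixture
def rule_runner() -> RuleRunner:
    return RuleRunner(
        target_types=[MockTarget],
        rules=[
            find_needle_in_haystack,
            QueryRule(TargetsWithNeedle, [FindNeedle]),
        ],
    )


def test_find_needle_in_haystack(rule_runner: RuleRunner) -> None:
    rule_runner.write_files(
        {
            "project/needle.txt": "",
            "project/hay.txt": "",
            "project/BUILD": (
                "mock_target(name='t1', sources=['needle.txt'])\n"
                "mock_target(name='t2', sources=['hay.txt'])\n"
            ),
        }
    )
    tgt1 = rule_runner.get_target(Address("project", target_name="t1"))
    tgt2 = rule_runner.get_target(Address("project", target_name="t2"))
    result = rule_runner.request(TargetsWithNeedle, [FindNeedle((tgt1, tgt2), "needle.txt")])
    assert list(result) == [tgt1]
```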
Given this rule signature for running the linter Bandit:
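(A condensed sketch of the Bandit rule; the exact decorator arguments, import paths, and result types vary across Pants versions.)

```python
from pants.backend.python.lint.bandit.rules import BanditRequest
from pants.backend.python.lint.bandit.subsystem import Bandit
from pants.backend.python.subsystems.setup import PythonSetup
from pants.core.goals.lint import LintResults
from pants.engine.rules import rule


@rule(desc="Lint with Bandit")
async def bandit_lint(
    request: BanditRequest, bandit: Bandit, python_setup: PythonSetup
) -> LintResults:
    ...
```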
We can write a test like this:
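(Another sketch: the import paths, the helper rules registered, the `python_sources` target alias, and the `LintResults` API are assumptions based on the standard Python backend and may differ in your Pants version.)

```python
import pytest

from pants.backend.python.lint.bandit.rules import BanditFieldSet, BanditRequest
from pants.backend.python.lint.bandit.rules import rules as bandit_rules
from pants.backend.python.target_types import PythonSourcesGeneratorTarget
from pants.core.goals.lint import LintResults
from pants.core.util_rules import config_files, source_files
from pants.engine.addresses import Address
from pants.testutil.rule_runner import QueryRule, RuleRunner


@pytest.fixture
def rule_runner() -> RuleRunner:
    return RuleRunner(
        target_types=[PythonSourcesGeneratorTarget],
        rules=[
            *bandit_rules(),
            *source_files.rules(),
            *config_files.rules(),
            QueryRule(LintResults, [BanditRequest]),
        ],
    )


def test_bandit_lint(rule_runner: RuleRunner) -> None:
    rule_runner.write_files(
        {
            # Bandit flags `assert` usage (check B101), so this file should fail.
            "project/app.py": "assert 'bad'\n",
            "project/BUILD": "python_sources(name='lib')\n",
        }
    )
    tgt = rule_runner.get_target(Address("project", target_name="lib"))
    result = rule_runner.request(LintResults, [BanditRequest([BanditFieldSet.create(tgt)])])
    assert result.results[0].exit_code != 0
```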
Note that our `@rule` takes 3 parameters, but we only explicitly included `BanditRequest` in the inputs. This is possible because the engine knows how to compute all [Subsystems](🔗) based on the initial input to the graph. See [Concepts](🔗).
We are happy [to help](🔗) figure out what rules to register, and what inputs to pass to `rule_runner.request()`. It can also help to [visualize the rule graph](🔗) when running your code in production. If you're missing an input that you need, the engine will error explaining that there is no way to compute your `OutputType`.
### Testing `@goal_rule`s
You can run `@goal_rule`s by using `rule_runner.run_goal_rule()`. The first argument is your `Goal` subclass, such as `Filedeps` or `Lint`. Usually, you will set `args: Iterable[str]` by giving the specs for the targets/files you want to run on, and sometimes passing options for your goal like `--transitive`. If you need to also set global options that do not apply to your specific goal, set `global_args: Iterable[str]`.
`run_goal_rule()` will return a `GoalRuleResult` object, which has the fields `exit_code: int`, `stdout: str`, and `stderr: str`.
For example, to test the `filedeps` goal:
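(A sketch based on the built-in `filedeps` goal; the `python_sources` target alias and exact module paths are assumptions that may vary by Pants version.)

```python
import pytest

from pants.backend.project_info import filedeps
from pants.backend.project_info.filedeps import Filedeps
from pants.backend.python.target_types import PythonSourcesGeneratorTarget
from pants.testutil.rule_runner import RuleRunner


@pytest.fixture
def rule_runner() -> RuleRunner:
    return RuleRunner(
        rules=filedeps.rules(),
        target_types=[PythonSourcesGeneratorTarget],
    )


def test_one_target_one_source(rule_runner: RuleRunner) -> None:
    rule_runner.write_files(
        {
            "project/example.py": "",
            "project/BUILD": "python_sources()\n",
        }
    )
    result = rule_runner.run_goal_rule(Filedeps, args=["project/example.py"])
    assert result.stdout.splitlines() == ["project/BUILD", "project/example.py"]
```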
Unlike when testing normal `@rule`s, you do not need to define a `QueryRule` when using `rule_runner.run_goal_rule()`. This is already set up for you. However, you do need to make sure that your `@goal_rule` and all the rules it depends on are registered with the `RuleRunner` instance.
## Approach 4: `run_pants()` (integration tests for Pants)
`pants_integration_test.py` provides functions that allow you to run a full Pants process as it would run on the command line. It's useful for acceptance testing and for testing things that are too difficult to test with Approach #3.
You will typically use three functions:

- `setup_tmpdir()`, which is a [context manager](🔗) that sets up temporary files in the build root to simulate a real project.
  - It takes a single parameter `files: Mapping[str, str]`, which is a dictionary of file paths to file content.
  - All file paths will be prefixed by the temporary directory.
  - File content can include `{tmpdir}`, which will get substituted with the actual temporary directory.
  - It yields the temporary directory, relative to the test's current work directory.
- `run_pants()`, which runs Pants using the `list[str]` of arguments you pass, such as `["help"]`.
  - It returns a `PantsResult` object, which has the fields `exit_code: int`, `stdout: str`, and `stderr: str`.
  - It accepts several other optional arguments, including `config`, `extra_env`, and any keyword argument accepted by `subprocess.Popen()`.
- `PantsResult.assert_success()` or `PantsResult.assert_failure()`, which checks the exit code and prints a nice error message if unexpected.
For example:
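(A sketch; the core `files` target and the `--build-ignore` option exist in Pants, but treat the specific scenario as illustrative.)

```python
from pants.testutil.pants_integration_test import run_pants, setup_tmpdir


def test_build_ignore() -> None:
    sources = {
        "dir/BUILD": "files(sources=[])\n",
    }
    with setup_tmpdir(sources) as tmpdir:
        # Listing an ignored directory should fail; listing it normally should succeed.
        ignore_result = run_pants(
            [f"--build-ignore={tmpdir}/dir", "list", f"{tmpdir}/dir::"]
        )
        no_ignore_result = run_pants(["list", f"{tmpdir}/dir::"])
    ignore_result.assert_failure()
    no_ignore_result.assert_success()
```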
`run_pants()` is hermetic by default, meaning that it will not read your `pants.toml`. As a result, you often need to include the option `--backend-packages` in the arguments to `run_pants()`. You can alternatively set the argument `hermetic=False`, although we discourage this.
To read any files that were created, use `get_buildroot()` as the first part of the path to ensure that the correct directory is read.