Involved Source Files: allocs.go, benchmark.go, cover.go, example.go, fuzz.go, match.go, run_example.go, testing_other.go
Package testing provides support for automated testing of Go packages.
It is intended to be used in concert with the "go test" command, which automates
execution of any function of the form
func TestXxx(*testing.T)
where Xxx does not start with a lowercase letter. The function name
serves to identify the test routine.
Within these functions, use the Error, Fail or related methods to signal failure.
To write a new test suite, create a file whose name ends _test.go that
contains the TestXxx functions as described here. Put the file in the same
package as the one being tested. The file will be excluded from regular
package builds but will be included when the "go test" command is run.
For more detail, run "go help test" and "go help testflag".
A simple test function looks like this:
func TestAbs(t *testing.T) {
    got := Abs(-1)
    if got != 1 {
        t.Errorf("Abs(-1) = %d; want 1", got)
    }
}
Benchmarks
Functions of the form
func BenchmarkXxx(*testing.B)
are considered benchmarks, and are executed by the "go test" command when
its -bench flag is provided. Benchmarks are run sequentially.
For a description of the testing flags, see
https://golang.org/cmd/go/#hdr-Testing_flags.
A sample benchmark function looks like this:
func BenchmarkRandInt(b *testing.B) {
    for i := 0; i < b.N; i++ {
        rand.Int()
    }
}
The benchmark function must run the target code b.N times.
During benchmark execution, b.N is adjusted until the benchmark function lasts
long enough to be timed reliably. The output
BenchmarkRandInt-8 68453040 17.8 ns/op
means that the loop ran 68453040 times at a speed of 17.8 ns per loop.
If a benchmark needs some expensive setup before running, the timer
may be reset:
func BenchmarkBigLen(b *testing.B) {
    big := NewBig()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        big.Len()
    }
}
If a benchmark needs to test performance in a parallel setting, it may use
the RunParallel helper function; such benchmarks are intended to be used with
the go test -cpu flag:
func BenchmarkTemplateParallel(b *testing.B) {
    templ := template.Must(template.New("test").Parse("Hello, {{.}}!"))
    b.RunParallel(func(pb *testing.PB) {
        var buf bytes.Buffer
        for pb.Next() {
            buf.Reset()
            templ.Execute(&buf, "World")
        }
    })
}
A detailed specification of the benchmark results format is given
in https://golang.org/design/14313-benchmark-format.
There are standard tools for working with benchmark results at
https://golang.org/x/perf/cmd.
In particular, https://golang.org/x/perf/cmd/benchstat performs
statistically robust A/B comparisons.
Examples
The package also runs and verifies example code. Example functions may
include a concluding line comment that begins with "Output:" and is compared with
the standard output of the function when the tests are run. (The comparison
ignores leading and trailing space.) These are examples of an example:
func ExampleHello() {
    fmt.Println("hello")
    // Output: hello
}
func ExampleSalutations() {
    fmt.Println("hello, and")
    fmt.Println("goodbye")
    // Output:
    // hello, and
    // goodbye
}
The comment prefix "Unordered output:" is like "Output:", but matches any
line order:
func ExamplePerm() {
    for _, value := range Perm(5) {
        fmt.Println(value)
    }
    // Unordered output: 4
    // 2
    // 1
    // 3
    // 0
}
Example functions without output comments are compiled but not executed.
The naming conventions for declaring examples for the package, a function F, a type T, and
method M on type T are:
func Example() { ... }
func ExampleF() { ... }
func ExampleT() { ... }
func ExampleT_M() { ... }
Multiple example functions for a package/type/function/method may be provided by
appending a distinct suffix to the name. The suffix must start with a
lower-case letter.
func Example_suffix() { ... }
func ExampleF_suffix() { ... }
func ExampleT_suffix() { ... }
func ExampleT_M_suffix() { ... }
The entire test file is presented as the example when it contains a single
example function, at least one other function, type, variable, or constant
declaration, and no test or benchmark functions.
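For instance, here is a minimal sketch of a whole-file example (the package name stringutil and the greeting variable are illustrative only). Because the file contains a single example function, one additional declaration, and no test or benchmark functions, the entire file is presented as the example:

package stringutil_test

import "fmt"

// greeting is the extra top-level declaration that, together with the absence
// of test and benchmark functions, makes this a whole-file example.
var greeting = "hello, example"

func Example() {
    fmt.Println(greeting)
    // Output: hello, example
}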
Fuzzing
'go test' and the testing package support fuzzing, a testing technique where
a function is called with randomly generated inputs to find bugs not
anticipated by unit tests.
Functions of the form
func FuzzXxx(*testing.F)
are considered fuzz tests.
For example:
func FuzzHex(f *testing.F) {
    for _, seed := range [][]byte{{}, {0}, {9}, {0xa}, {0xf}, {1, 2, 3, 4}} {
        f.Add(seed)
    }
    f.Fuzz(func(t *testing.T, in []byte) {
        enc := hex.EncodeToString(in)
        out, err := hex.DecodeString(enc)
        if err != nil {
            t.Fatalf("%v: decode: %v", in, err)
        }
        if !bytes.Equal(in, out) {
            t.Fatalf("%v: not equal after round trip: %v", in, out)
        }
    })
}
A fuzz test maintains a seed corpus, or a set of inputs which are run by
default, and can seed input generation. Seed inputs may be registered by
calling (*F).Add or by storing files in the directory testdata/fuzz/<Name>
(where <Name> is the name of the fuzz test) within the package containing
the fuzz test. Seed inputs are optional, but the fuzzing engine may find
bugs more efficiently when provided with a set of small seed inputs with good
code coverage. These seed inputs can also serve as regression tests for bugs
identified through fuzzing.
The function passed to (*F).Fuzz within the fuzz test is considered the fuzz
target. A fuzz target must accept a *T parameter, followed by one or more
parameters for random inputs. The types of arguments passed to (*F).Add must
be identical to the types of these parameters. The fuzz target may signal
that it's found a problem the same way tests do: by calling T.Fail (or any
method that calls it like T.Error or T.Fatal) or by panicking.
When fuzzing is enabled (by setting the -fuzz flag to a regular expression
that matches a specific fuzz test), the fuzz target is called with arguments
generated by repeatedly making random changes to the seed inputs. On
supported platforms, 'go test' compiles the test executable with fuzzing
coverage instrumentation. The fuzzing engine uses that instrumentation to
find and cache inputs that expand coverage, increasing the likelihood of
finding bugs. If the fuzz target fails for a given input, the fuzzing engine
writes the inputs that caused the failure to a file in the directory
testdata/fuzz/<Name> within the package directory. This file later serves as
a seed input. If the file can't be written at that location (for example,
because the directory is read-only), the fuzzing engine writes the file to
the fuzz cache directory within the build cache instead.
When fuzzing is disabled, the fuzz target is called with the seed inputs
registered with F.Add and seed inputs from testdata/fuzz/<Name>. In this
mode, the fuzz test acts much like a regular test, with subtests started
with F.Fuzz instead of T.Run.
See https://go.dev/doc/fuzz for documentation about fuzzing.
Skipping
Tests or benchmarks may be skipped at run time with a call to
the Skip method of *T or *B:
func TestTimeConsuming(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping test in short mode.")
    }
    ...
}
The Skip method of *T can be used in a fuzz target when the input is invalid
but should not be considered a failing input. For example:
func FuzzJSONMarshalling(f *testing.F) {
    f.Fuzz(func(t *testing.T, b []byte) {
        var v interface{}
        if err := json.Unmarshal(b, &v); err != nil {
            t.Skip()
        }
        if _, err := json.Marshal(v); err != nil {
            t.Errorf("Marshal: %v", err)
        }
    })
}
Subtests and Sub-benchmarks
The Run methods of T and B allow defining subtests and sub-benchmarks,
without having to define separate functions for each. This enables uses
like table-driven benchmarks and creating hierarchical tests.
It also provides a way to share common setup and tear-down code:
func TestFoo(t *testing.T) {
    // <setup code>
    t.Run("A=1", func(t *testing.T) { ... })
    t.Run("A=2", func(t *testing.T) { ... })
    t.Run("B=1", func(t *testing.T) { ... })
    // <tear-down code>
}
Each subtest and sub-benchmark has a unique name: the combination of the name
of the top-level test and the sequence of names passed to Run, separated by
slashes, with an optional trailing sequence number for disambiguation.
The argument to the -run, -bench, and -fuzz command-line flags is an unanchored regular
expression that matches the test's name. For tests with multiple slash-separated
elements, such as subtests, the argument is itself slash-separated, with
expressions matching each name element in turn. Because it is unanchored, an
empty expression matches any string.
For example, using "matching" to mean "whose name contains":
go test -run '' # Run all tests.
go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
go test -fuzz FuzzFoo # Fuzz the target matching "FuzzFoo"
The -run argument can also be used to run a specific value in the seed
corpus, for debugging. For example:
go test -run=FuzzFoo/9ddb952d9814
The -fuzz and -run flags can both be set, in order to fuzz a target but
skip the execution of all other tests.
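For example, one possible invocation (the pattern names are illustrative):
go test -run '^$' -fuzz FuzzFoo   # Fuzz the target matching "FuzzFoo" and skip all other tests.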
Subtests can also be used to control parallelism. A parent test will only
complete once all of its subtests complete. In this example, all tests are
run in parallel with each other, and only with each other, regardless of
other top-level tests that may be defined:
func TestGroupedParallel(t *testing.T) {
    for _, tc := range tests {
        tc := tc // capture range variable
        t.Run(tc.Name, func(t *testing.T) {
            t.Parallel()
            ...
        })
    }
}
The race detector kills the program if it exceeds 8128 concurrent goroutines,
so use care when running parallel tests with the -race flag set.
Run does not return until parallel subtests have completed, providing a way
to clean up after a group of parallel tests:
func TestTeardownParallel(t *testing.T) {
    // This Run will not return until the parallel tests finish.
    t.Run("group", func(t *testing.T) {
        t.Run("Test1", parallelTest1)
        t.Run("Test2", parallelTest2)
        t.Run("Test3", parallelTest3)
    })
    // <tear-down code>
}
Main
It is sometimes necessary for a test or benchmark program to do extra setup or teardown
before or after it executes. It is also sometimes necessary to control
which code runs on the main thread. To support these and other cases,
if a test file contains a function:
func TestMain(m *testing.M)
then the generated test will call TestMain(m) instead of running the tests or benchmarks
directly. TestMain runs in the main goroutine and can do whatever setup
and teardown is necessary around a call to m.Run. m.Run will return an exit
code that may be passed to os.Exit. If TestMain returns, the test wrapper
will pass the result of m.Run to os.Exit itself.
When TestMain is called, flag.Parse has not been run. If TestMain depends on
command-line flags, including those of the testing package, it should call
flag.Parse explicitly. Command line flags are always parsed by the time test
or benchmark functions run.
A simple implementation of TestMain is:
func TestMain(m *testing.M) {
    // call flag.Parse() here if TestMain uses flags
    os.Exit(m.Run())
}
TestMain is a low-level primitive and should not be necessary for casual
testing needs, where ordinary test functions suffice.
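As an illustration, here is a sketch of TestMain performing shared setup and teardown (startTestServer is a hypothetical helper that starts a fixture and returns its teardown function):

func TestMain(m *testing.M) {
    // Call flag.Parse() here if TestMain uses command-line flags.
    stop := startTestServer() // hypothetical: start a shared fixture, return a teardown func
    code := m.Run()           // run the tests and benchmarks
    stop()                    // run teardown explicitly; deferred calls would be skipped by os.Exit
    os.Exit(code)
}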
Package-Level Type Names (total 32, of which 13 are exported)
B is a type passed to Benchmark functions to manage benchmark
timing and to specify the number of iterations to run.
A benchmark ends when its Benchmark function returns or calls any of the methods
FailNow, Fatal, Fatalf, SkipNow, Skip, or Skipf. Those methods must be called
only from the goroutine running the Benchmark function.
The other reporting methods, such as the variations of Log and Error,
may be called simultaneously from multiple goroutines.
Like in tests, benchmark logs are accumulated during execution
and dumped to standard output when done. Unlike in tests, benchmark logs
are always printed, so as not to hide output whose existence may be
affecting benchmark results.
N int
benchFunc func(b *B)
benchTime durationOrCountFlag
bytes int64
common common
// To signal parallel subtests they may start. Nil when T.Parallel is not present (B) or not usable (when fuzzing).
// Whether the current test is a benchmark.
// A copy of chattyPrinter, if the chatty flag is set.
// Name of the cleanup function.
// The stack trace at the point where Cleanup was called.
// optional functions to be called at the end of the test
// If level > 0, the stack trace at the point where the parent called t.Run.
// Test is finished and all subtests have completed.
common.duration time.Duration
// Test or benchmark has failed.
// Test function has completed.
// Written atomically.
// helperPCs converted to function names
// functions to be skipped when writing file/line info
// Whether the fuzz target, if this is one, is running.
// Nesting depth of test or benchmark.
// guards this group of fields
// Name of test or benchmark.
// Output generated by test or benchmark.
common.parent *common
// Number of races detected during test.
// Test or benchmark (or one of its subtests) was executed.
// Function name of tRunner running the test.
// To signal a test is done.
// Test or benchmark has been skipped.
// Time test or benchmark started
// Queue of subtests to be run in parallel.
common.tempDir string
common.tempDirErr error
common.tempDirMu sync.Mutex
common.tempDirSeq int32
// For flushToParent.
context *benchContext
Extra metrics collected by ReportMetric.
// import path of the package containing the benchmark
// one of the subbenchmarks does not have bytes set.
The net total of this test after being run.
netBytes uint64
// RunParallel creates parallelism*GOMAXPROCS goroutines
// total duration of the previous run
// number of iterations in the previous run
result BenchmarkResult
showAllocResult bool
The initial states of memStats.Mallocs and memStats.TotalAlloc.
startBytes uint64
timerOn bool
Cleanup registers a function to be called when the test (or subtest) and all its
subtests complete. Cleanup functions will be called in last added,
first called order.
Error is equivalent to Log followed by Fail.
Errorf is equivalent to Logf followed by Fail.
Fail marks the function as having failed but continues execution.
FailNow marks the function as having failed and stops its execution
by calling runtime.Goexit (which then runs all deferred calls in the
current goroutine).
Execution will continue at the next test or benchmark.
FailNow must be called from the goroutine running the
test or benchmark function, not from other goroutines
created during the test. Calling FailNow does not stop
those other goroutines.
Failed reports whether the function has failed.
Fatal is equivalent to Log followed by FailNow.
Fatalf is equivalent to Logf followed by FailNow.
Helper marks the calling function as a test helper function.
When printing file and line information, that function will be skipped.
Helper may be called simultaneously from multiple goroutines.
Log formats its arguments using default formatting, analogous to Println,
and records the text in the error log. For tests, the text will be printed only if
the test fails or the -test.v flag is set. For benchmarks, the text is always
printed to avoid having performance depend on the value of the -test.v flag.
Logf formats its arguments according to the format, analogous to Printf, and
records the text in the error log. A final newline is added if not provided. For
tests, the text will be printed only if the test fails or the -test.v flag is
set. For benchmarks, the text is always printed to avoid having performance
depend on the value of the -test.v flag.
Name returns the name of the running (sub-) test or benchmark.
The name will include the name of the test along with the names of
any nested sub-tests. If two sibling sub-tests have the same name,
Name will append a suffix to guarantee the returned name is unique.
ReportAllocs enables malloc statistics for this benchmark.
It is equivalent to setting -test.benchmem, but it only affects the
benchmark function that calls ReportAllocs.
ReportMetric adds "n unit" to the reported benchmark results.
If the metric is per-iteration, the caller should divide by b.N,
and by convention units should end in "/op".
ReportMetric overrides any previously reported value for the same unit.
ReportMetric panics if unit is the empty string or if unit contains
any whitespace.
If unit is a unit normally reported by the benchmark framework itself
(such as "allocs/op"), ReportMetric will override that metric.
Setting "ns/op" to 0 will suppress that built-in metric.
ResetTimer zeroes the elapsed benchmark time and memory allocation counters
and deletes user-reported metrics.
It does not affect whether the timer is running.
Run benchmarks f as a subbenchmark with the given name. It reports
whether there were any failures.
A subbenchmark is like any other benchmark. A benchmark that calls Run at
least once will not be measured itself and will be called once with N=1.
RunParallel runs a benchmark in parallel.
It creates multiple goroutines and distributes b.N iterations among them.
The number of goroutines defaults to GOMAXPROCS. To increase parallelism for
non-CPU-bound benchmarks, call SetParallelism before RunParallel.
RunParallel is usually used with the go test -cpu flag.
The body function will be run in each goroutine. It should set up any
goroutine-local state and then iterate until pb.Next returns false.
It should not use the StartTimer, StopTimer, or ResetTimer functions,
because they have global effect. It should also not call Run.
SetBytes records the number of bytes processed in a single operation.
If this is called, the benchmark will report ns/op and MB/s.
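For example, a throughput sketch (the 1 MiB buffer size is arbitrary):

func BenchmarkCopy(b *testing.B) {
    src := make([]byte, 1<<20)
    dst := make([]byte, len(src))
    b.SetBytes(int64(len(src))) // bytes processed per iteration, so MB/s is reported
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        copy(dst, src)
    }
}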
SetParallelism sets the number of goroutines used by RunParallel to p*GOMAXPROCS.
There is usually no need to call SetParallelism for CPU-bound benchmarks.
If p is less than 1, this call will have no effect.
Setenv calls os.Setenv(key, value) and uses Cleanup to
restore the environment variable to its original value
after the test.
This cannot be used in parallel tests.
Skip is equivalent to Log followed by SkipNow.
SkipNow marks the test as having been skipped and stops its execution
by calling runtime.Goexit.
If a test fails (see Error, Errorf, Fail) and is then skipped,
it is still considered to have failed.
Execution will continue at the next test or benchmark. See also FailNow.
SkipNow must be called from the goroutine running the test, not from
other goroutines created during the test. Calling SkipNow does not stop
those other goroutines.
Skipf is equivalent to Logf followed by SkipNow.
Skipped reports whether the test was skipped.
StartTimer starts timing a test. This function is called automatically
before a benchmark starts, but it can also be used to resume timing after
a call to StopTimer.
StopTimer stops timing a test. This can be used to pause the timer
while performing complex initialization that you don't
want to measure.
TempDir returns a temporary directory for the test to use.
The directory is automatically removed by Cleanup when the test and
all its subtests complete.
Each subsequent call to t.TempDir returns a unique directory;
if the directory creation fails, TempDir terminates the test by calling Fatal.
add simulates running benchmarks in sequence in a single iteration. It is
used to give some meaningful results in case func Benchmark is used in
combination with Run.
(*B) checkFuzzFn(name string)
decorate prefixes the string with the file and line of the call site
and inserts the final newline if needed and indentation spaces for formatting.
This function must be called with c.mu held.
(*B) doBench() BenchmarkResult
flushToParent writes c.output to the parent after first writing the header
with the given format and arguments.
frameSkip searches, starting after skip frames, for the first caller frame
in a function not marked as a helper and returns that frame.
The search stops if it finds a tRunner function that
was the entry point into the test and the test is not a subtest.
This function must be called with c.mu held.
launch launches the benchmark function. It gradually increases the number
of benchmark iterations until the benchmark runs for the requested benchtime.
launch is run by the doBench function as a separate goroutine.
run1 must have been called on b.
log generates the output. It's always at the same stack depth.
logDepth generates the output at an arbitrary stack depth.
(*B) private()
run executes the benchmark in a separate goroutine, including all of its
subbenchmarks. b must not have subbenchmarks.
run1 runs the first iteration of benchFunc. It reports whether more
iterations of this benchmark should be run.
runCleanup is called at the end of the test.
If catchPanic is true, this will catch panics, and return the recovered
value if any.
runN runs a single benchmark for the specified number of iterations.
(*B) setRan()
trimOutput shortens the output from a benchmark, which can be very long.
*B : TB
*B : github.com/golang/mock/gomock.TestHelper
*B : github.com/golang/mock/gomock.TestReporter
*B : go.uber.org/goleak.TestingT
*B : gotest.tools/v3/assert.TestingT
*B : gotest.tools/v3/internal/assert.LogT
*B : github.com/golang/mock/gomock.cleanuper
*B : go.uber.org/goleak.testHelper
*B : gotest.tools/v3/assert.helperT
*B : gotest.tools/v3/internal/assert.helperT
BenchmarkResult contains the results of a benchmark run.
Bytes int64 // Bytes processed in one iteration.
Extra map[string]float64 // Extra records additional metrics reported by ReportMetric.
MemAllocs uint64 // The total number of memory allocations.
MemBytes uint64 // The total number of bytes allocated.
N int // The number of iterations.
T time.Duration // The total time taken.
AllocedBytesPerOp returns the "B/op" metric,
which is calculated as r.MemBytes / r.N.
AllocsPerOp returns the "allocs/op" metric,
which is calculated as r.MemAllocs / r.N.
MemString returns r.AllocedBytesPerOp and r.AllocsPerOp in the same format as 'go test'.
NsPerOp returns the "ns/op" metric.
String returns a summary of the benchmark results.
It follows the benchmark result line format from
https://golang.org/design/14313-benchmark-format, not including the
benchmark name.
Extra metrics override built-in metrics of the same name.
String does not include allocs/op or B/op, since those are reported
by MemString.
mbPerSec returns the "MB/s" metric.
BenchmarkResult : expvar.Var
BenchmarkResult : fmt.Stringer
BenchmarkResult : context.stringer
BenchmarkResult : github.com/aws/smithy-go/middleware.stringer
BenchmarkResult : runtime.stringer
func Benchmark(f func(b *B)) BenchmarkResult
func (*B).doBench() BenchmarkResult
func (*B).add(other BenchmarkResult)
CoverBlock records the coverage data for a single basic block.
The fields are 1-indexed, as in an editor: The opening line of
the file is number 1, for example. Columns are measured
in bytes.
NOTE: This struct is internal to the testing infrastructure and may change.
It is not covered (yet) by the Go 1 compatibility guidelines.
Col0 uint16 // Column number for block start.
Col1 uint16 // Column number for block end.
Line0 uint32 // Line number for block start.
Line1 uint32 // Line number for block end.
Stmts uint16 // Number of statements included in this block.
F is a type passed to fuzz tests.
Fuzz tests run generated inputs against a provided fuzz target, which can
find and report potential bugs in the code being tested.
A fuzz test runs the seed corpus by default, which includes entries provided
by (*F).Add and entries in the testdata/fuzz/<FuzzTestName> directory. After
any necessary setup and calls to (*F).Add, the fuzz test must then call
(*F).Fuzz to provide the fuzz target. See the testing package documentation
for an example, and see the F.Fuzz and F.Add method documentation for
details.
*F methods can only be called before (*F).Fuzz. Once the test is
executing the fuzz target, only (*T) methods can be used. The only *F methods
that are allowed in the (*F).Fuzz function are (*F).Failed and (*F).Name.
common common
// To signal parallel subtests they may start. Nil when T.Parallel is not present (B) or not usable (when fuzzing).
// Whether the current test is a benchmark.
// A copy of chattyPrinter, if the chatty flag is set.
// Name of the cleanup function.
// The stack trace at the point where Cleanup was called.
// optional functions to be called at the end of the test
// If level > 0, the stack trace at the point where the parent called t.Run.
// Test is finished and all subtests have completed.
common.duration time.Duration
// Test or benchmark has failed.
// Test function has completed.
// Written atomically.
// helperPCs converted to function names
// functions to be skipped when writing file/line info
// Nesting depth of test or benchmark.
// guards this group of fields
// Name of test or benchmark.
// Output generated by test or benchmark.
common.parent *common
// Number of races detected during test.
// Test or benchmark (or one of its subtests) was executed.
// Function name of tRunner running the test.
// To signal a test is done.
// Test or benchmark has been skipped.
// Time test or benchmark started
// Queue of subtests to be run in parallel.
common.tempDir string
common.tempDirErr error
common.tempDirMu sync.Mutex
common.tempDirSeq int32
// For flushToParent.
corpus is a set of seed corpus entries, added with F.Add and loaded
from testdata.
fuzzCalled bool
fuzzContext *fuzzContext
inFuzzFn is true when the fuzz function is running. Most F methods cannot
be called when inFuzzFn is true.
result fuzzResult
testContext *testContext
Add will add the arguments to the seed corpus for the fuzz test. This will be
a no-op if called after or within the fuzz target, and args must match the
arguments for the fuzz target.
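A sketch showing how the arguments passed to Add mirror the fuzz target's parameters (FuzzParse and its body are illustrative only):

func FuzzParse(f *testing.F) {
    f.Add([]byte("hello"), 5) // seed: one []byte and one int, matching the target below
    f.Fuzz(func(t *testing.T, data []byte, n int) {
        // Exercise the code under test with data and n (hypothetical code omitted).
        _ = data
        _ = n
    })
}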
Cleanup registers a function to be called when the test (or subtest) and all its
subtests complete. Cleanup functions will be called in last added,
first called order.
Error is equivalent to Log followed by Fail.
Errorf is equivalent to Logf followed by Fail.
Fail marks the function as having failed but continues execution.
FailNow marks the function as having failed and stops its execution
by calling runtime.Goexit (which then runs all deferred calls in the
current goroutine).
Execution will continue at the next test or benchmark.
FailNow must be called from the goroutine running the
test or benchmark function, not from other goroutines
created during the test. Calling FailNow does not stop
those other goroutines.
Failed reports whether the function has failed.
Fatal is equivalent to Log followed by FailNow.
Fatalf is equivalent to Logf followed by FailNow.
Fuzz runs the fuzz function, ff, for fuzz testing. If ff fails for a set of
arguments, those arguments will be added to the seed corpus.
ff must be a function with no return value whose first argument is *T and
whose remaining arguments are the types to be fuzzed.
For example:
f.Fuzz(func(t *testing.T, b []byte, i int) { ... })
The following types are allowed: []byte, string, bool, byte, rune, float32,
float64, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64.
More types may be supported in the future.
ff must not call any *F methods, e.g. (*F).Log, (*F).Error, (*F).Skip. Use
the corresponding *T method instead. The only *F methods that are allowed in
the (*F).Fuzz function are (*F).Failed and (*F).Name.
This function should be fast and deterministic, and its behavior should not
depend on shared state. No mutable input arguments, or pointers to them,
should be retained between executions of the fuzz function, as the memory
backing them may be mutated during a subsequent invocation. ff must not
modify the underlying data of the arguments provided by the fuzzing engine.
When fuzzing, F.Fuzz does not return until a problem is found, time runs out
(set with -fuzztime), or the test process is interrupted by a signal. F.Fuzz
should be called exactly once, unless F.Skip or F.Fail is called beforehand.
Helper marks the calling function as a test helper function.
When printing file and line information, that function will be skipped.
Helper may be called simultaneously from multiple goroutines.
Log formats its arguments using default formatting, analogous to Println,
and records the text in the error log. For tests, the text will be printed only if
the test fails or the -test.v flag is set. For benchmarks, the text is always
printed to avoid having performance depend on the value of the -test.v flag.
Logf formats its arguments according to the format, analogous to Printf, and
records the text in the error log. A final newline is added if not provided. For
tests, the text will be printed only if the test fails or the -test.v flag is
set. For benchmarks, the text is always printed to avoid having performance
depend on the value of the -test.v flag.
Name returns the name of the running (sub-) test or benchmark.
The name will include the name of the test along with the names of
any nested sub-tests. If two sibling sub-tests have the same name,
Name will append a suffix to guarantee the returned name is unique.
Setenv calls os.Setenv(key, value) and uses Cleanup to
restore the environment variable to its original value
after the test.
This cannot be used in parallel tests.
Skip is equivalent to Log followed by SkipNow.
SkipNow marks the test as having been skipped and stops its execution
by calling runtime.Goexit.
If a test fails (see Error, Errorf, Fail) and is then skipped,
it is still considered to have failed.
Execution will continue at the next test or benchmark. See also FailNow.
SkipNow must be called from the goroutine running the test, not from
other goroutines created during the test. Calling SkipNow does not stop
those other goroutines.
Skipf is equivalent to Logf followed by SkipNow.
Skipped reports whether the test was skipped.
TempDir returns a temporary directory for the test to use.
The directory is automatically removed by Cleanup when the test and
all its subtests complete.
Each subsequent call to t.TempDir returns a unique directory;
if the directory creation fails, TempDir terminates the test by calling Fatal.
(*F) checkFuzzFn(name string)
decorate prefixes the string with the file and line of the call site
and inserts the final newline if needed and indentation spaces for formatting.
This function must be called with c.mu held.
flushToParent writes c.output to the parent after first writing the header
with the given format and arguments.
frameSkip searches, starting after skip frames, for the first caller frame
in a function not marked as a helper and returns that frame.
The search stops if it finds a tRunner function that
was the entry point into the test and the test is not a subtest.
This function must be called with c.mu held.
log generates the output. It's always at the same stack depth.
logDepth generates the output at an arbitrary stack depth.
(*F) private()
(*F) report()
runCleanup is called at the end of the test.
If catchPanic is true, this will catch panics, and return the recovered
value if any.
(*F) setRan()
*F : TB
*F : github.com/golang/mock/gomock.TestHelper
*F : github.com/golang/mock/gomock.TestReporter
*F : go.uber.org/goleak.TestingT
*F : gotest.tools/v3/assert.TestingT
*F : gotest.tools/v3/internal/assert.LogT
*F : github.com/golang/mock/gomock.cleanuper
*F : go.uber.org/goleak.testHelper
*F : gotest.tools/v3/assert.helperT
*F : gotest.tools/v3/internal/assert.helperT
func fRunner(f *F, fn func(*F))
F func()
Name string
Output string
Unordered bool
processRunResult computes a summary and status of the result of running an example test.
stdout is the captured output from stdout of the test.
recovered is the result of invoking recover after running the test, in case it panicked.
If stdout doesn't match the expected output or if recovered is non-nil, it'll print the cause of failure to stdout.
If the test is chatty/verbose, it'll print a success message to stdout.
If recovered is non-nil, it'll panic with that value.
If the test panicked with nil, or invoked runtime.Goexit, it'll be
made to fail and panic with errNilPanicOrGoexit.
func Main(matchString func(pat, str string) (bool, error), tests []InternalTest, benchmarks []InternalBenchmark, examples []InternalExample)
func MainStart(deps testDeps, tests []InternalTest, benchmarks []InternalBenchmark, fuzzTargets []InternalFuzzTarget, examples []InternalExample) *M
func RunExamples(matchString func(pat, str string) (bool, error), examples []InternalExample) (ok bool)
func listTests(matchString func(pat, str string) (bool, error), tests []InternalTest, benchmarks []InternalBenchmark, fuzzTargets []InternalFuzzTarget, examples []InternalExample)
func runExample(eg InternalExample) (ok bool)
func runExamples(matchString func(pat, str string) (bool, error), examples []InternalExample) (ran, ok bool)
A PB is used by RunParallel for running parallel benchmarks.
// total number of iterations to execute (b.N)
// local cache of acquired iterations
// shared between all worker goroutines iteration counter
// acquire that many iterations from globalN at once
Next reports whether there are more iterations to execute.
T is a type passed to Test functions to manage test state and support formatted test logs.
A test ends when its Test function returns or calls any of the methods
FailNow, Fatal, Fatalf, SkipNow, Skip, or Skipf. Those methods, as well as
the Parallel method, must be called only from the goroutine running the
Test function.
The other reporting methods, such as the variations of Log and Error,
may be called simultaneously from multiple goroutines.
common common
// To signal parallel subtests they may start. Nil when T.Parallel is not present (B) or not usable (when fuzzing).
// Whether the current test is a benchmark.
// A copy of chattyPrinter, if the chatty flag is set.
// Name of the cleanup function.
// The stack trace at the point where Cleanup was called.
// optional functions to be called at the end of the test
// If level > 0, the stack trace at the point where the parent called t.Run.
// Test is finished and all subtests have completed.
common.duration time.Duration
// Test or benchmark has failed.
// Test function has completed.
// Written atomically.
// helperPCs converted to function names
// functions to be skipped when writing file/line info
// Whether the fuzz target, if this is one, is running.
// Nesting depth of test or benchmark.
// guards this group of fields
// Name of test or benchmark.
// Output generated by test or benchmark.
common.parent *common
// Number of races detected during test.
// Test or benchmark (or one of its subtests) was executed.
// Function name of tRunner running the test.
// To signal a test is done.
// Test or benchmark has been skipped.
// Time test or benchmark started
// Queue of subtests to be run in parallel.
common.tempDir string
common.tempDirErr error
common.tempDirMu sync.Mutex
common.tempDirSeq int32
// For flushToParent.
// For running tests and subtests.
isEnvSet bool
isParallel bool
Cleanup registers a function to be called when the test (or subtest) and all its
subtests complete. Cleanup functions will be called in last added,
first called order.
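For example, a sketch in which teardown runs even if the test later calls Fatal (newTempResource and its Close method are hypothetical helpers):

func TestWithResource(t *testing.T) {
    res := newTempResource() // hypothetical setup helper
    t.Cleanup(func() {
        res.Close() // runs after the test and all its subtests complete
    })
    // ... use res ...
}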
Deadline reports the time at which the test binary will have
exceeded the timeout specified by the -timeout flag.
The ok result is false if the -timeout flag indicates “no timeout” (0).
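A sketch of deriving a context from the test deadline, leaving some slack for cleanup (assumes the context and time packages; the one-second margin is arbitrary):

func TestSlowOperation(t *testing.T) {
    ctx := context.Background()
    if deadline, ok := t.Deadline(); ok {
        var cancel context.CancelFunc
        ctx, cancel = context.WithDeadline(ctx, deadline.Add(-time.Second))
        defer cancel()
    }
    // ... run the operation with ctx ...
    _ = ctx
}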
Error is equivalent to Log followed by Fail.
Errorf is equivalent to Logf followed by Fail.
Fail marks the function as having failed but continues execution.
FailNow marks the function as having failed and stops its execution
by calling runtime.Goexit (which then runs all deferred calls in the
current goroutine).
Execution will continue at the next test or benchmark.
FailNow must be called from the goroutine running the
test or benchmark function, not from other goroutines
created during the test. Calling FailNow does not stop
those other goroutines.
Failed reports whether the function has failed.
Fatal is equivalent to Log followed by FailNow.
Fatalf is equivalent to Logf followed by FailNow.
Helper marks the calling function as a test helper function.
When printing file and line information, that function will be skipped.
Helper may be called simultaneously from multiple goroutines.
Log formats its arguments using default formatting, analogous to Println,
and records the text in the error log. For tests, the text will be printed only if
the test fails or the -test.v flag is set. For benchmarks, the text is always
printed to avoid having performance depend on the value of the -test.v flag.
Logf formats its arguments according to the format, analogous to Printf, and
records the text in the error log. A final newline is added if not provided. For
tests, the text will be printed only if the test fails or the -test.v flag is
set. For benchmarks, the text is always printed to avoid having performance
depend on the value of the -test.v flag.
Name returns the name of the running (sub-) test or benchmark.
The name will include the name of the test along with the names of
any nested sub-tests. If two sibling sub-tests have the same name,
Name will append a suffix to guarantee the returned name is unique.
Parallel signals that this test is to be run in parallel with (and only with)
other parallel tests. When a test is run multiple times due to use of
-test.count or -test.cpu, multiple instances of a single test never run in
parallel with each other.
Run runs f as a subtest of t called name. It runs f in a separate goroutine
and blocks until f returns or calls t.Parallel to become a parallel test.
Run reports whether f succeeded (or at least did not fail before calling t.Parallel).
Run may be called simultaneously from multiple goroutines, but all such calls
must return before the outer test function for t returns.
Setenv calls os.Setenv(key, value) and uses Cleanup to
restore the environment variable to its original value
after the test.
This cannot be used in parallel tests.
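For example, a sketch (the APP_MODE variable is illustrative; assumes the os package):

func TestReadsEnv(t *testing.T) {
    t.Setenv("APP_MODE", "test") // restored to its previous value when the test ends
    if got := os.Getenv("APP_MODE"); got != "test" {
        t.Fatalf("APP_MODE = %q; want %q", got, "test")
    }
}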
Skip is equivalent to Log followed by SkipNow.
SkipNow marks the test as having been skipped and stops its execution
by calling runtime.Goexit.
If a test fails (see Error, Errorf, Fail) and is then skipped,
it is still considered to have failed.
Execution will continue at the next test or benchmark. See also FailNow.
SkipNow must be called from the goroutine running the test, not from
other goroutines created during the test. Calling SkipNow does not stop
those other goroutines.
Skipf is equivalent to Logf followed by SkipNow.
Skipped reports whether the test was skipped.
TempDir returns a temporary directory for the test to use.
The directory is automatically removed by Cleanup when the test and
all its subtests complete.
Each subsequent call to t.TempDir returns a unique directory;
if the directory creation fails, TempDir terminates the test by calling Fatal.
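For example, a sketch that writes a scratch file which is removed automatically (assumes the os and path/filepath packages):

func TestWriteFile(t *testing.T) {
    dir := t.TempDir() // removed via Cleanup when the test completes
    path := filepath.Join(dir, "out.txt")
    if err := os.WriteFile(path, []byte("data"), 0o600); err != nil {
        t.Fatalf("WriteFile: %v", err)
    }
}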
(*T) checkFuzzFn(name string)
decorate prefixes the string with the file and line of the call site
and inserts the final newline if needed and indentation spaces for formatting.
This function must be called with c.mu held.
flushToParent writes c.output to the parent after first writing the header
with the given format and arguments.
frameSkip searches, starting after skip frames, for the first caller frame
in a function not marked as a helper and returns that frame.
The search stops if it finds a tRunner function that
was the entry point into the test and the test is not a subtest.
This function must be called with c.mu held.
log generates the output. It's always at the same stack depth.
logDepth generates the output at an arbitrary stack depth.
(*T) private()
(*T) report()
runCleanup is called at the end of the test.
If catchPanic is true, this will catch panics, and return the recovered
value if any.
(*T) setRan()
*T : TB
*T : github.com/golang/mock/gomock.TestHelper
*T : github.com/golang/mock/gomock.TestReporter
*T : go.uber.org/goleak.TestingT
*T : gotest.tools/v3/assert.TestingT
*T : gotest.tools/v3/internal/assert.LogT
*T : github.com/golang/mock/gomock.cleanuper
*T : go.uber.org/goleak.testHelper
*T : gotest.tools/v3/assert.helperT
*T : gotest.tools/v3/internal/assert.helperT
func tRunner(t *T, fn func(t *T))
alternationMatch matches a test name if one of the alternations matches.
( alternationMatch) matches(name []string, matchString func(pat, str string) (bool, error)) (ok, partial bool)
( alternationMatch) verify(name string, matchString func(pat, str string) (bool, error)) error
alternationMatch : filterMatch
// Maximum extension length.
match *matcher
// The largest recorded benchmark name.
processBench runs bench b for the configured CPU counts and prints the results.
// last printed test name in chatty mode
// guards lastName
w io.Writer
Printf prints a message, generated by the named test, that does not
necessarily mention that test's name itself.
Updatef prints a message about the status of the named test to w.
The formatted message must include the test name itself.
func newChattyPrinter(w io.Writer) *chattyPrinter
common holds the elements common between T and B and
captures common methods such as Errorf.
// To signal parallel subtests they may start. Nil when T.Parallel is not present (B) or not usable (when fuzzing).
// Whether the current test is a benchmark.
// A copy of chattyPrinter, if the chatty flag is set.
// Name of the cleanup function.
// The stack trace at the point where Cleanup was called.
// optional functions to be called at the end of the test
// If level > 0, the stack trace at the point where the parent called t.Run.
// Test is finished and all subtests have completed.
duration time.Duration
// Test or benchmark has failed.
// Test function has completed.
// Written atomically.
// helperPCs converted to function names
// functions to be skipped when writing file/line info
// Whether the fuzz target, if this is one, is running.
// Nesting depth of test or benchmark.
// guards this group of fields
// Name of test or benchmark.
// Output generated by test or benchmark.
parent *common
// Number of races detected during test.
// Test or benchmark (or one of its subtests) was executed.
// Function name of tRunner running the test.
// To signal a test is done.
// Test or benchmark has been skipped.
// Time test or benchmark started
// Queue of subtests to be run in parallel.
tempDir string
tempDirErr error
tempDirMu sync.Mutex
tempDirSeq int32
// For flushToParent.
Cleanup registers a function to be called when the test (or subtest) and all its
subtests complete. Cleanup functions will be called in last added,
first called order.
Error is equivalent to Log followed by Fail.
Errorf is equivalent to Logf followed by Fail.
Fail marks the function as having failed but continues execution.
FailNow marks the function as having failed and stops its execution
by calling runtime.Goexit (which then runs all deferred calls in the
current goroutine).
Execution will continue at the next test or benchmark.
FailNow must be called from the goroutine running the
test or benchmark function, not from other goroutines
created during the test. Calling FailNow does not stop
those other goroutines.
Failed reports whether the function has failed.
Fatal is equivalent to Log followed by FailNow.
Fatalf is equivalent to Logf followed by FailNow.
Helper marks the calling function as a test helper function.
When printing file and line information, that function will be skipped.
Helper may be called simultaneously from multiple goroutines.
Log formats its arguments using default formatting, analogous to Println,
and records the text in the error log. For tests, the text will be printed only if
the test fails or the -test.v flag is set. For benchmarks, the text is always
printed to avoid having performance depend on the value of the -test.v flag.
Logf formats its arguments according to the format, analogous to Printf, and
records the text in the error log. A final newline is added if not provided. For
tests, the text will be printed only if the test fails or the -test.v flag is
set. For benchmarks, the text is always printed to avoid having performance
depend on the value of the -test.v flag.
Name returns the name of the running (sub-) test or benchmark.
The name will include the name of the test along with the names of
any nested sub-tests. If two sibling sub-tests have the same name,
Name will append a suffix to guarantee the returned name is unique.
Setenv calls os.Setenv(key, value) and uses Cleanup to
restore the environment variable to its original value
after the test.
This cannot be used in parallel tests.
Skip is equivalent to Log followed by SkipNow.
SkipNow marks the test as having been skipped and stops its execution
by calling runtime.Goexit.
If a test fails (see Error, Errorf, Fail) and is then skipped,
it is still considered to have failed.
Execution will continue at the next test or benchmark. See also FailNow.
SkipNow must be called from the goroutine running the test, not from
other goroutines created during the test. Calling SkipNow does not stop
those other goroutines.
Skipf is equivalent to Logf followed by SkipNow.
Skipped reports whether the test was skipped.
TempDir returns a temporary directory for the test to use.
The directory is automatically removed by Cleanup when the test and
all its subtests complete.
Each subsequent call to t.TempDir returns a unique directory;
if the directory creation fails, TempDir terminates the test by calling Fatal.
(*common) checkFuzzFn(name string)
decorate prefixes the string with the file and line of the call site
and inserts the final newline if needed and indentation spaces for formatting.
This function must be called with c.mu held.
flushToParent writes c.output to the parent after first writing the header
with the given format and arguments.
frameSkip searches, starting after skip frames, for the first caller frame
in a function not marked as a helper and returns that frame.
The search stops if it finds a tRunner function that
was the entry point into the test and the test is not a subtest.
This function must be called with c.mu held.
log generates the output. It's always at the same stack depth.
logDepth generates the output at an arbitrary stack depth.
(*common) private()
runCleanup is called at the end of the test.
If catchPanic is true, this will catch panics, and return the recovered
value if any.
(*common) setRan()
*common : TB
*common : github.com/golang/mock/gomock.TestHelper
*common : github.com/golang/mock/gomock.TestReporter
*common : go.uber.org/goleak.TestingT
*common : gotest.tools/v3/assert.TestingT
*common : gotest.tools/v3/internal/assert.LogT
*common : github.com/golang/mock/gomock.cleanuper
*common : go.uber.org/goleak.testHelper
*common : gotest.tools/v3/assert.helperT
*common : gotest.tools/v3/internal/assert.helperT
corpusEntry is an alias to the same type as internal/fuzz.CorpusEntry.
We use a type alias because we don't want to export this type, and we can't
import internal/fuzz from testing.
Data []byte
Generation int
IsSeed bool
Parent string
Path string
Values []any
matches checks the name against the receiver's pattern strings using the
given match function.
verify checks that the receiver's pattern strings are valid filters by
calling the given match function.
alternationMatch
simpleMatch
func splitRegexp(s string) filterMatch
fuzzCrashError is satisfied by a failing input detected while fuzzing.
These errors are written to the seed corpus and can be re-run with 'go test'.
Errors within the fuzzing framework (like I/O errors between coordinator
and worker processes) don't satisfy this interface.
CrashPath returns the path of the subtest that corresponds to the saved
crash input file in the seed corpus. The test can be re-run with
'go test -run=$test/$name', where $test is the fuzz test name and $name is
the filepath.Base of the string returned here.
( fuzzCrashError) Error() builtin.string
( fuzzCrashError) Unwrap() error
fuzzCrashError : error
fuzzResult contains the results of a fuzz run.
// Error is the error from the failing input
// The number of iterations.
// The total time taken.
( fuzzResult) String() string
fuzzResult : expvar.Var
fuzzResult : fmt.Stringer
fuzzResult : context.stringer
fuzzResult : github.com/aws/smithy-go/middleware.stringer
fuzzResult : runtime.stringer
matcher sanitizes, deduplicates, and filters names of subtests and subbenchmarks.
filter filterMatch
matchFunc func(pat, str string) (bool, error)
mu sync.Mutex
subNames is used to deduplicate subtest names.
Each key is the subtest name joined to the deduplicated name of the parent test.
Each value is the count of the number of occurrences of the given subtest name
already seen.
clearSubNames clears the matcher's internal state, potentially freeing
memory. After this is called, T.Name may return the same strings as it did
for earlier subtests.
(*matcher) fullName(c *common, subname string) (name string, ok, partial bool)
unique creates a unique name for the given parent and subname by affixing it
with one or more counts, if necessary.
func newMatcher(matchString func(pat, str string) (bool, error), patterns, name string) *matcher
func newTestContext(maxParallel int, m *matcher) *testContext
simpleMatch matches a test name if all of the pattern strings match in
sequence.
( simpleMatch) matches(name []string, matchString func(pat, str string) (bool, error)) (ok, partial bool)
( simpleMatch) verify(name string, matchString func(pat, str string) (bool, error)) error
simpleMatch : filterMatch
testContext holds all fields that are common to all tests. This includes
synchronization primitives to run at most *parallel tests.
deadline time.Time
isFuzzing is true in the context used when generating random inputs
for fuzz targets. isFuzzing is false when running normal tests and
when running fuzz tests as unit tests (without -fuzz or when -fuzz
does not match).
match*matcher
maxParallel is a copy of the parallel flag.
mu sync.Mutex
numWaiting is the number of tests waiting to be run in parallel.
running is the number of tests currently running in parallel.
This does not include tests that are waiting for subtests to complete.
Channel used to signal tests that are ready to be run in parallel.
(*testContext) release()
(*testContext) waitParallel()
func newTestContext(maxParallel int, m *matcher) *testContext
Package-Level Functions (total 46, of which 13 are exported)
AllocsPerRun returns the average number of allocations during calls to f.
Although the return value has type float64, it will always be an integral value.
To compute the number of allocations, the function will first be run once as
a warm-up. The average number of allocations over the specified number of
runs will then be measured and returned.
AllocsPerRun sets GOMAXPROCS to 1 during its measurement and will restore
it before returning.
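A sketch of using AllocsPerRun inside an ordinary test to guard against allocation regressions (assumes the fmt package; the threshold of 2 is illustrative):

func TestSprintfAllocs(t *testing.T) {
    allocs := testing.AllocsPerRun(100, func() {
        _ = fmt.Sprintf("%d", 42)
    })
    if allocs > 2 {
        t.Errorf("got %v allocs per run; want at most 2", allocs)
    }
}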
Benchmark benchmarks a single function. It is useful for creating
custom benchmarks that do not use the "go test" command.
If f depends on testing flags, then Init must be used to register
those flags before calling Benchmark and before calling flag.Parse.
If f calls Run, the result will be an estimate of running all its
subbenchmarks that don't call Run in sequence in a single benchmark.
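A sketch of running a benchmark outside "go test" (the workload is illustrative; the printed values vary from run to run):

package main

import (
    "fmt"
    "testing"
)

func main() {
    testing.Init() // register testing flags; needed when not running under "go test"
    res := testing.Benchmark(func(b *testing.B) {
        for i := 0; i < b.N; i++ {
            _ = fmt.Sprintf("%d", i)
        }
    })
    fmt.Println(res)             // e.g. "  500000              2000 ns/op"
    fmt.Println(res.MemString()) // allocations per operation
}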
Coverage reports the current code coverage as a fraction in the range [0, 1].
If coverage is not enabled, Coverage returns 0.
When running a large set of sequential test cases, checking Coverage after each one
can be useful for identifying which test cases exercise new code paths.
It is not a replacement for the reports generated by 'go test -cover' and
'go tool cover'.
CoverMode reports what the test coverage mode is set to. The
values are "set", "count", or "atomic". The return value will be
empty if test coverage is not enabled.
Init registers testing flags. These flags are automatically registered by
the "go test" command before running test functions, so Init is only needed
when calling functions such as Benchmark without using "go test".
Init has no effect if it was already called.
Main is an internal function, part of the implementation of the "go test" command.
It was exported because it is cross-package and predates "internal" packages.
It is no longer used by "go test" but preserved, as much as possible, for other
systems that simulate "go test" using Main, but Main sometimes cannot be updated as
new functionality is added to the testing package.
Systems simulating "go test" should be updated to use MainStart.
MainStart is meant for use by tests generated by 'go test'.
It is not meant to be called directly and is not subject to the Go 1 compatibility document.
It may change signature from release to release.
RegisterCover records the coverage data accumulators for the tests.
NOTE: This function is internal to the testing infrastructure and may change.
It is not covered (yet) by the Go 1 compatibility guidelines.
RunBenchmarks is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
RunExamples is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
RunTests is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
Short reports whether the -test.short flag is set.
Verbose reports whether the -test.v flag is set.
benchmarkName returns the full name of the benchmark, including the procs suffix.
callerName gives the function name (qualified with a package path)
for the caller after skip frames (where 0 means the current function).
coverReport reports the coverage percentage and writes a coverage profile if requested.
fmtDuration returns a string representing d in the form "87.00s".
fRunner wraps a call to a fuzz test and ensures that cleanup functions are
called and status flags are set. fRunner should be called in its own
goroutine. To wait for its completion, receive from f.signal.
fRunner is analogous to tRunner, which wraps subtests started with T.Run.
Unit tests and fuzz tests work a little differently, so for now, these
functions aren't consolidated. In particular, because there are no F.Run and
F.Parallel methods, i.e., no fuzz sub-tests or parallel fuzz tests, a few
simplifications are made. We also require that F.Fuzz, F.Skip, or F.Fail is
called.
removeAll is like os.RemoveAll, but retries Windows "Access is denied."
errors up to an arbitrary timeout.
Those errors have been known to occur spuriously on at least the
windows-amd64-2012 builder (https://go.dev/issue/50051), and can only occur
legitimately if the test leaves behind a temp file that either is still open
or the test otherwise lacks permission to delete. In the case of legitimate
failures, a failing test may take a bit longer to fail, but once the test is
fixed the extra latency will go away.
rewrite rewrites a subname so that it contains only printable characters and no
white space.
runFuzzing runs the fuzz test matching the pattern for -fuzz. Only one such
fuzz test must match. This will run the fuzzing engine to generate and
mutate new inputs against the fuzz target.
If fuzzing is disabled (-test.fuzz is not set), runFuzzing
returns immediately.
runFuzzTests runs the fuzz tests matching the pattern for -run. This will
only run the (*F).Fuzz function for each seed corpus without using the
fuzzing engine to generate or mutate inputs.
fuzzWorkerExitCode is used as an exit code by fuzz worker processes after an
internal error. This distinguishes internal errors from uncontrolled panics
and other failures. Keep in sync with internal/fuzz.workerExitCode.
The maximum number of stack frames to go through when skipping helper functions for
the purpose of decorating log messages.