Upgrade the operator to use Operator SDK v1.33.0 (#182)

* Move controller package inside internal directory

Based on the go/v4 project structure, the following changed:
- Package `controllers` is now named `controller`
- Package `controller` now lives inside the new `internal` directory

* Move main.go to cmd directory

Based on the new go/v4 project structure, `main.go` now lives in the `cmd` directory.

* Change package import in main.go
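A minimal sketch of the resulting import change in `cmd/main.go`; the module path `github.com/1Password/onepassword-operator` is assumed here for illustration and may differ:

// go/v3 layout (old):
//   "github.com/1Password/onepassword-operator/controllers"
// go/v4 layout (new):
import (
	"github.com/1Password/onepassword-operator/internal/controller"
)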

* Update go mod dependencies

Update the dependencies based on the versions obtained by creating a new operator project using `kubebuilder init --domain onepassword.com --plugins=go/v4`.

This follows the migration steps for going from go/v3 to go/v4 (https://book.kubebuilder.io/migration/migration_guide_gov3_to_gov4).

* Update vendor

* Adjust code for breaking changes from pkg update

The sigs.k8s.io/controller-runtime package introduced breaking changes between v0.14.5 and v0.16.3. This commit adapts the code to achieve the same behavior using the new functionality available.
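A minimal sketch of the kind of manager-setup change this entails in `cmd/main.go`, assuming the usual scaffolded options; the bind address and port values are illustrative:

import (
	"k8s.io/apimachinery/pkg/runtime"

	ctrl "sigs.k8s.io/controller-runtime"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
)

func newManager(scheme *runtime.Scheme) (ctrl.Manager, error) {
	// controller-runtime v0.14: ctrl.Options{MetricsBindAddress: ":8080", Port: 9443}
	// controller-runtime v0.16: metrics and webhook settings moved into dedicated option structs.
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:        scheme,
		Metrics:       metricsserver.Options{BindAddress: ":8080"},
		WebhookServer: webhook.NewServer(webhook.Options{Port: 9443}),
	})
}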

* Adjust paths to connect yaml files

Since `main.go` now lives in the `cmd` directory, the paths to the files used for deploying Connect have to be adjusted to the new location `main.go` is executed from.
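A hypothetical sketch of the adjustment; the constant name and file path are placeholders, only the relative-path change is the point:

// Before, with main.go at the repository root:
//   deploymentPath = "deploy/connect/deployment.yaml"
// After, with main.go executed from cmd/:
const deploymentPath = "../deploy/connect/deployment.yaml" // illustrative path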

* Update files based on new structure and scaffolding

These changes follow the new project structure and scaffolding obtained with go/v4.

They were made based on the migration steps for moving to go/v4 (https://book.kubebuilder.io/migration/migration_guide_gov3_to_gov4).

* Update config files

These updates adopt the Kustomize v4 syntax.

This is part of the upgrade to go/v4 (https://book.kubebuilder.io/migration/migration_guide_gov3_to_gov4).

* Update dependencies and Go version

* Update vendor

* Update Kubernetes tools versions

* Update operator version in Makefile

The version in the Makefile now matches the version of the operator.

* Update Operator SDK version in version.go

* Adjust generated deepcopy

The legacy `+build` tag is no longer present in the latest generated scaffolding, therefore it is removed.
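A sketch of the generated file header after the change, with the API package name shown as `v1` purely for illustration:

//go:build !ignore_autogenerated

// Earlier controller-gen output also emitted the legacy constraint
// "// +build !ignore_autogenerated" directly below the //go:build line;
// the latest scaffolding drops it, so only the //go:build form remains.

package v1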

* Update copyright year

* Bring back missing changes from migration

Some customization in the Makefile was lost during the migration process, specifically the namespace customization for the `make deploy` command.

Also, changes to kustomization.yaml are included to make the deploy process smoother.

* Add RBAC perms for coordination.k8s.io

With the latest changes to Kubernetes and Kustomize, additional RBAC permissions have to be granted to the service account so that it can properly access the `leases` resource.
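For reference, the rule can be expressed as a kubebuilder RBAC marker (a sketch; scaffolded projects typically carry the equivalent rule in config/rbac/leader_election_role.yaml instead):

// Leader election in controller-runtime uses Lease objects from coordination.k8s.io.
//+kubebuilder:rbac:groups=coordination.k8s.io,resources=leases,verbs=get;list;watch;create;update;patch;delete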

* Optimize Dockerfile

The Dockerfile had a step for caching dependencies (`go mod download`). However, the dependencies are already included via the `vendor` directory, so this step can be removed to speed up the image build.
Author: Eduard Filip
Date: 2024-01-25 14:21:31 +01:00
Committed by: GitHub
Parent: 8fc852a4dd
Commit: f72e5243b0
1356 changed files with 86780 additions and 43671 deletions


@@ -1,5 +1,5 @@
.DS_Store
TODO.md
TODO
tmp/**/*
*.coverprofile
.vscode


@@ -1,3 +1,133 @@
## 2.13.2
### Fixes
- Fix file handler leak (#1309) [e2e81c8]
- Avoid allocations with `(*regexp.Regexp).MatchString` (#1302) [3b2a2a7]
## 2.13.1
### Fixes
- # 1296 fix(precompiled test guite): exec bit check omitted on Windows (#1301) [26eea01]
### Maintenance
- Bump github.com/go-logr/logr from 1.2.4 to 1.3.0 (#1291) [7161a9d]
- Bump golang.org/x/sys from 0.13.0 to 0.14.0 (#1295) [7fc7b10]
- Bump golang.org/x/tools from 0.12.0 to 0.14.0 (#1282) [74bbd65]
- Bump github.com/onsi/gomega from 1.27.10 to 1.29.0 (#1290) [9373633]
- Bump golang.org/x/net in /integration/_fixtures/version_mismatch_fixture (#1286) [6e3cf65]
## 2.13.0
### Features
Add PreviewSpect() to enable programmatic preview access to the suite report (fixes #1225)
## 2.12.1
### Fixes
- Print logr prefix if it exists (#1275) [90d4846]
### Maintenance
- Bump actions/checkout from 3 to 4 (#1271) [555f543]
- Bump golang.org/x/sys from 0.11.0 to 0.12.0 (#1270) [d867b7d]
## 2.12.0
### Features
- feat: allow MustPassRepeatedly decorator to be set at suite level (#1266) [05de518]
### Fixes
- fix-errors-in-readme (#1244) [27c2f5d]
### Maintenance
Various chores/dependency bumps.
## 2.11.0
In prior versions of Ginkgo specs the CLI filter flags (e.g. `--focus`, `--label-filter`) would _override_ any programmatic focus. This behavior has proved surprising and confusing in at least the following ways:
- users cannot combine programmatic filters and CLI filters to more efficiently select subsets of tests
- CLI filters can override programmatic focus on CI systems resulting in an exit code of 0 despite the presence of (incorrectly!) committed focused specs.
Going forward Ginkgo will AND all programmatic and CLI filters. Moreover, the presence of any programmatic focused tests will always result in a non-zero exit code.
This change is technically a change in Ginkgo's external contract and may require some users to make changes to successfully adopt. Specifically: it's possible some users were intentionally using CLI filters to override programmatic focus. If this is you please open an issue so we can explore solutions to the underlying problem you are trying to solve.
### Fixes
- Programmatic focus is no longer overwrriten by CLI filters [d6bba86]
### Maintenance
- Bump github.com/onsi/gomega from 1.27.7 to 1.27.8 (#1218) [4a70a38]
- Bump golang.org/x/sys from 0.8.0 to 0.9.0 (#1219) [97eda4d]
## 2.10.0
### Features
- feat(ginkgo/generators): add --tags flag (#1216) [a782a77]
adds a new --tags flag to ginkgo generate
### Fixes
- Fix broken link of MIGRATING_TO_V2.md (#1217) [548d78e]
### Maintenance
- Bump golang.org/x/tools from 0.9.1 to 0.9.3 (#1215) [2b76a5e]
## 2.9.7
### Fixes
- fix race when multiple defercleanups are called in goroutines [07fc3a0]
## 2.9.6
### Fixes
- fix: create parent directory before report files (#1212) [0ac65de]
### Maintenance
- Bump github.com/onsi/gomega from 1.27.6 to 1.27.7 (#1202) [3e39231]
## 2.9.5
### Fixes
- ensure the correct deterministic sort order is produced when ordered specs are generated by a helper function [7fa0b6b]
### Maintenance
- fix generators link (#1200) [9f9d8b9]
- Bump golang.org/x/tools from 0.8.0 to 0.9.1 (#1196) [150e3f2]
- fix spelling err in docs (#1199) [0013b1a]
- Bump golang.org/x/sys from 0.7.0 to 0.8.0 (#1193) [9e9e3e5]
## 2.9.4
### Fixes
- fix hang with ginkgo -p (#1192) [15d4bdc] - this addresses a _long_ standing issue related to Ginkgo hanging when a child process spawned by the test does not exit.
- fix: fail fast may cause Serial spec or cleanup Node interrupted (#1178) [8dea88b] - prior to this there was a small gap in which specs on other processes might start even if one process has tried to abort the suite.
### Maintenance
- Document run order when multiple setup nodes are at the same nesting level [903be81]
## 2.9.3
### Features
- Add RenderTimeline to GinkgoT() [c0c77b6]
### Fixes
- update Measure deprecation message. fixes #1176 [227c662]
- add newlines to GinkgoLogr (#1170) (#1171) [0de0e7c]
### Maintenance
- Bump commonmarker from 0.23.8 to 0.23.9 in /docs (#1183) [8b925ab]
- Bump nokogiri from 1.14.1 to 1.14.3 in /docs (#1184) [e3795a4]
- Bump golang.org/x/tools from 0.7.0 to 0.8.0 (#1182) [b453793]
- Bump actions/setup-go from 3 to 4 (#1164) [73ed75b]
- Bump github.com/onsi/gomega from 1.27.4 to 1.27.6 (#1173) [0a2bc64]
- Bump github.com/go-logr/logr from 1.2.3 to 1.2.4 (#1174) [f41c557]
- Bump golang.org/x/sys from 0.6.0 to 0.7.0 (#1179) [8e423e5]
## 2.9.2
### Maintenance


@@ -15,7 +15,7 @@ import (
...
)
Describe("Checking books out of the library", Label("library"), func() {
var _ = Describe("Checking books out of the library", Label("library"), func() {
var library *libraries.Library
var book *books.Book
var valjean *users.User
@@ -50,7 +50,7 @@ Describe("Checking books out of the library", Label("library"), func() {
It("tells the user", func(ctx SpecContext) {
err := valjean.Checkout(ctx, library, "Les Miserables")
Expect(error).To(MatchError("Les Miserables is currently checked out"))
Expect(err).To(MatchError("Les Miserables is currently checked out"))
}, SpecTimeout(time.Second * 5))
It("lets the user place a hold and get notified later", func(ctx SpecContext) {
@@ -74,7 +74,7 @@ Describe("Checking books out of the library", Label("library"), func() {
When("the library does not have the book in question", func() {
It("tells the reader the book is unavailable", func(ctx SpecContext) {
err := valjean.Checkout(ctx, library, "Les Miserables")
Expect(error).To(MatchError("Les Miserables is not in the library catalog"))
Expect(err).To(MatchError("Les Miserables is not in the library catalog"))
}, SpecTimeout(time.Second * 5))
})
})


@@ -248,31 +248,13 @@ func RunSpecs(t GinkgoTestingT, description string, args ...interface{}) bool {
exitIfErr(types.GinkgoErrors.RerunningSuite())
}
suiteDidRun = true
suiteLabels := Labels{}
configErrors := []error{}
for _, arg := range args {
switch arg := arg.(type) {
case types.SuiteConfig:
suiteConfig = arg
case types.ReporterConfig:
reporterConfig = arg
case Labels:
suiteLabels = append(suiteLabels, arg...)
default:
configErrors = append(configErrors, types.GinkgoErrors.UnknownTypePassedToRunSpecs(arg))
}
err := global.PushClone()
if err != nil {
exitIfErr(err)
}
exitIfErrors(configErrors)
defer global.PopClone()
configErrors = types.VetConfig(flagSet, suiteConfig, reporterConfig)
if len(configErrors) > 0 {
fmt.Fprintf(formatter.ColorableStdErr, formatter.F("{{red}}Ginkgo detected configuration issues:{{/}}\n"))
for _, err := range configErrors {
fmt.Fprintf(formatter.ColorableStdErr, err.Error())
}
os.Exit(1)
}
suiteLabels := extractSuiteConfiguration(args)
var reporter reporters.Reporter
if suiteConfig.ParallelTotal == 1 {
@@ -308,9 +290,8 @@ func RunSpecs(t GinkgoTestingT, description string, args ...interface{}) bool {
registerReportAfterSuiteNodeForAutogeneratedReports(reporterConfig)
}
err := global.Suite.BuildTree()
err = global.Suite.BuildTree()
exitIfErr(err)
suitePath, err := os.Getwd()
exitIfErr(err)
suitePath, err = filepath.Abs(suitePath)
@@ -335,6 +316,69 @@ func RunSpecs(t GinkgoTestingT, description string, args ...interface{}) bool {
return passed
}
func extractSuiteConfiguration(args []interface{}) Labels {
suiteLabels := Labels{}
configErrors := []error{}
for _, arg := range args {
switch arg := arg.(type) {
case types.SuiteConfig:
suiteConfig = arg
case types.ReporterConfig:
reporterConfig = arg
case Labels:
suiteLabels = append(suiteLabels, arg...)
default:
configErrors = append(configErrors, types.GinkgoErrors.UnknownTypePassedToRunSpecs(arg))
}
}
exitIfErrors(configErrors)
configErrors = types.VetConfig(flagSet, suiteConfig, reporterConfig)
if len(configErrors) > 0 {
fmt.Fprintf(formatter.ColorableStdErr, formatter.F("{{red}}Ginkgo detected configuration issues:{{/}}\n"))
for _, err := range configErrors {
fmt.Fprintf(formatter.ColorableStdErr, err.Error())
}
os.Exit(1)
}
return suiteLabels
}
/*
PreviewSpecs walks the testing tree and produces a report without actually invoking the specs.
See http://onsi.github.io/ginkgo/#previewing-specs for more information.
*/
func PreviewSpecs(description string, args ...any) Report {
err := global.PushClone()
if err != nil {
exitIfErr(err)
}
defer global.PopClone()
suiteLabels := extractSuiteConfiguration(args)
priorDryRun, priorParallelTotal, priorParallelProcess := suiteConfig.DryRun, suiteConfig.ParallelTotal, suiteConfig.ParallelProcess
suiteConfig.DryRun, suiteConfig.ParallelTotal, suiteConfig.ParallelProcess = true, 1, 1
defer func() {
suiteConfig.DryRun, suiteConfig.ParallelTotal, suiteConfig.ParallelProcess = priorDryRun, priorParallelTotal, priorParallelProcess
}()
reporter := reporters.NoopReporter{}
outputInterceptor = internal.NoopOutputInterceptor{}
client = nil
writer := GinkgoWriter.(*internal.Writer)
err = global.Suite.BuildTree()
exitIfErr(err)
suitePath, err := os.Getwd()
exitIfErr(err)
suitePath, err = filepath.Abs(suitePath)
exitIfErr(err)
global.Suite.Run(description, suiteLabels, suitePath, global.Failer, reporter, writer, outputInterceptor, interrupt_handler.NewInterruptHandler(client), client, internal.RegisterForProgressSignal, suiteConfig)
return global.Suite.GetPreviewReport()
}
/*
Skip instructs Ginkgo to skip the current spec


@@ -32,6 +32,9 @@ func BuildGenerateCommand() command.Command {
{Name: "template-data", KeyPath: "CustomTemplateData",
UsageArgument: "template-data-file",
Usage: "If specified, generate will use the contents of the file passed as data to be rendered in the test file template"},
{Name: "tags", KeyPath: "Tags",
UsageArgument: "build-tags",
Usage: "If specified, generate will create a test file that uses the given build tags (i.e. `--tags e2e,!unit` will add `//go:build e2e,!unit`)"},
},
&conf,
types.GinkgoFlagSections{},
@@ -59,6 +62,7 @@ You can also pass a <filename> of the form "file.go" and generate will emit "fil
}
type specData struct {
BuildTags string
Package string
Subject string
PackageImportPath string
@@ -93,6 +97,7 @@ func generateTestFileForSubject(subject string, conf GeneratorsConfig) {
}
data := specData{
BuildTags: getBuildTags(conf.Tags),
Package: determinePackageName(packageName, conf.Internal),
Subject: formattedName,
PackageImportPath: getPackageImportPath(),


@@ -1,6 +1,7 @@
package generators
var specText = `package {{.Package}}
var specText = `{{.BuildTags}}
package {{.Package}}
import (
{{.GinkgoImport}}
@@ -14,7 +15,8 @@ var _ = {{.GinkgoPackage}}Describe("{{.Subject}}", func() {
})
`
var agoutiSpecText = `package {{.Package}}
var agoutiSpecText = `{{.BuildTags}}
package {{.Package}}
import (
{{.GinkgoImport}}


@@ -1,6 +1,7 @@
package generators
import (
"fmt"
"go/build"
"os"
"path/filepath"
@@ -14,6 +15,7 @@ type GeneratorsConfig struct {
Agouti, NoDot, Internal bool
CustomTemplate string
CustomTemplateData string
Tags string
}
func getPackageAndFormattedName() (string, string, string) {
@@ -62,3 +64,13 @@ func determinePackageName(name string, internal bool) string {
return name + "_test"
}
// getBuildTags returns the resultant string to be added.
// If the input string is not empty, then returns a `//go:build {}` string,
// otherwise returns an empty string.
func getBuildTags(tags string) string {
if tags != "" {
return fmt.Sprintf("//go:build %s\n", tags)
}
return ""
}


@@ -7,6 +7,7 @@ import (
"path"
"path/filepath"
"regexp"
"runtime"
"strings"
"github.com/onsi/ginkgo/v2/types"
@@ -192,7 +193,7 @@ func precompiledTestSuite(path string) (TestSuite, error) {
return TestSuite{}, errors.New("this is not a .test binary")
}
if filepath.Ext(path) == ".test" && info.Mode()&0111 == 0 {
if filepath.Ext(path) == ".test" && runtime.GOOS != "windows" && info.Mode()&0111 == 0 {
return TestSuite{}, errors.New("this is not executable")
}
@@ -225,7 +226,7 @@ func suitesInDir(dir string, recurse bool) TestSuites {
files, _ := os.ReadDir(dir)
re := regexp.MustCompile(`^[^._].*_test\.go$`)
for _, file := range files {
if !file.IsDir() && re.Match([]byte(file.Name())) {
if !file.IsDir() && re.MatchString(file.Name()) {
suite := TestSuite{
Path: relPath(dir),
PackageName: packageNameForSuite(dir),
@@ -240,7 +241,7 @@ func suitesInDir(dir string, recurse bool) TestSuites {
if recurse {
re = regexp.MustCompile(`^[._]`)
for _, file := range files {
if file.IsDir() && !re.Match([]byte(file.Name())) {
if file.IsDir() && !re.MatchString(file.Name()) {
suites = append(suites, suitesInDir(dir+"/"+file.Name(), recurse)...)
}
}
@@ -271,7 +272,7 @@ func filesHaveGinkgoSuite(dir string, files []os.DirEntry) bool {
reGinkgo := regexp.MustCompile(`package ginkgo|\/ginkgo"|\/ginkgo\/v2"|\/ginkgo\/v2/dsl/`)
for _, file := range files {
if !file.IsDir() && reTestFile.Match([]byte(file.Name())) {
if !file.IsDir() && reTestFile.MatchString(file.Name()) {
contents, _ := os.ReadFile(dir + "/" + file.Name())
if reGinkgo.Match(contents) {
return true


@@ -244,9 +244,7 @@ func labelFromCallExpr(ce *ast.CallExpr) []string {
}
if id.Name == "Label" {
ls := extractLabels(expr)
for _, label := range ls {
labels = append(labels, label)
}
labels = append(labels, ls...)
}
}
}


@@ -78,7 +78,7 @@ func (d Dependencies) resolveAndAdd(deps []string, depth int) {
if err != nil {
continue
}
if !pkg.Goroot && (!ginkgoAndGomegaFilter.Match([]byte(pkg.Dir)) || ginkgoIntegrationTestFilter.Match([]byte(pkg.Dir))) {
if !pkg.Goroot && (!ginkgoAndGomegaFilter.MatchString(pkg.Dir) || ginkgoIntegrationTestFilter.MatchString(pkg.Dir)) {
d.addDepIfNotPresent(pkg.Dir, depth)
}
}


@@ -79,7 +79,7 @@ func (p *PackageHash) computeHashes() (codeHash string, codeModifiedTime time.Ti
continue
}
if goTestRegExp.Match([]byte(info.Name())) {
if goTestRegExp.MatchString(info.Name()) {
testHash += p.hashForFileInfo(info)
if info.ModTime().After(testModifiedTime) {
testModifiedTime = info.ModTime()
@@ -87,7 +87,7 @@ func (p *PackageHash) computeHashes() (codeHash string, codeModifiedTime time.Ti
continue
}
if p.watchRegExp.Match([]byte(info.Name())) {
if p.watchRegExp.MatchString(info.Name()) {
codeHash += p.hashForFileInfo(info)
if info.ModTime().After(codeModifiedTime) {
codeModifiedTime = info.ModTime()


@@ -80,6 +80,9 @@ type FullGinkgoTInterface interface {
Fi(indentation uint, format string, args ...any) string
Fiw(indentation uint, maxWidth uint, format string, args ...any) string
//Generates a formatted string version of the current spec's timeline
RenderTimeline() string
GinkgoRecover()
DeferCleanup(args ...any)


@@ -8,22 +8,22 @@ import (
)
/*
If a container marked as focus has a descendant that is also marked as focus, Ginkgo's policy is to
unmark the container's focus. This gives developers a more intuitive experience when debugging specs.
It is common to focus a container to just run a subset of specs, then identify the specific specs within the container to focus -
this policy allows the developer to simply focus those specific specs and not need to go back and turn the focus off of the container:
If a container marked as focus has a descendant that is also marked as focus, Ginkgo's policy is to
unmark the container's focus. This gives developers a more intuitive experience when debugging specs.
It is common to focus a container to just run a subset of specs, then identify the specific specs within the container to focus -
this policy allows the developer to simply focus those specific specs and not need to go back and turn the focus off of the container:
As a common example, consider:
As a common example, consider:
FDescribe("something to debug", function() {
It("works", function() {...})
It("works", function() {...})
FIt("doesn't work", function() {...})
It("works", function() {...})
})
FDescribe("something to debug", function() {
It("works", function() {...})
It("works", function() {...})
FIt("doesn't work", function() {...})
It("works", function() {...})
})
here the developer's intent is to focus in on the `"doesn't work"` spec and not to run the adjacent specs in the focused `"something to debug"` container.
The nested policy applied by this function enables this behavior.
here the developer's intent is to focus in on the `"doesn't work"` spec and not to run the adjacent specs in the focused `"something to debug"` container.
The nested policy applied by this function enables this behavior.
*/
func ApplyNestedFocusPolicyToTree(tree *TreeNode) {
var walkTree func(tree *TreeNode) bool
@@ -44,46 +44,43 @@ func ApplyNestedFocusPolicyToTree(tree *TreeNode) {
}
/*
Ginkgo supports focussing specs using `FIt`, `FDescribe`, etc. - this is called "programmatic focus"
It also supports focussing specs using regular expressions on the command line (`-focus=`, `-skip=`) that match against spec text
and file filters (`-focus-files=`, `-skip-files=`) that match against code locations for nodes in specs.
Ginkgo supports focussing specs using `FIt`, `FDescribe`, etc. - this is called "programmatic focus"
It also supports focussing specs using regular expressions on the command line (`-focus=`, `-skip=`) that match against spec text and file filters (`-focus-files=`, `-skip-files=`) that match against code locations for nodes in specs.
If any of the CLI flags are provided they take precedence. The file filters run first followed by the regex filters.
When both programmatic and file filters are provided their results are ANDed together. If multiple kinds of filters are provided, the file filters run first followed by the regex filters.
This function sets the `Skip` property on specs by applying Ginkgo's focus policy:
- If there are no CLI arguments and no programmatic focus, do nothing.
- If there are no CLI arguments but a spec somewhere has programmatic focus, skip any specs that have no programmatic focus.
- If there are CLI arguments parse them and skip any specs that either don't match the focus filters or do match the skip filters.
This function sets the `Skip` property on specs by applying Ginkgo's focus policy:
- If there are no CLI arguments and no programmatic focus, do nothing.
- If a spec somewhere has programmatic focus skip any specs that have no programmatic focus.
- If there are CLI arguments parse them and skip any specs that either don't match the focus filters or do match the skip filters.
*Note:* specs with pending nodes are Skipped when created by NewSpec.
*Note:* specs with pending nodes are Skipped when created by NewSpec.
*/
func ApplyFocusToSpecs(specs Specs, description string, suiteLabels Labels, suiteConfig types.SuiteConfig) (Specs, bool) {
focusString := strings.Join(suiteConfig.FocusStrings, "|")
skipString := strings.Join(suiteConfig.SkipStrings, "|")
hasFocusCLIFlags := focusString != "" || skipString != "" || len(suiteConfig.SkipFiles) > 0 || len(suiteConfig.FocusFiles) > 0 || suiteConfig.LabelFilter != ""
type SkipCheck func(spec Spec) bool
// by default, skip any specs marked pending
skipChecks := []SkipCheck{func(spec Spec) bool { return spec.Nodes.HasNodeMarkedPending() }}
hasProgrammaticFocus := false
if !hasFocusCLIFlags {
// check for programmatic focus
for _, spec := range specs {
if spec.Nodes.HasNodeMarkedFocus() && !spec.Nodes.HasNodeMarkedPending() {
skipChecks = append(skipChecks, func(spec Spec) bool { return !spec.Nodes.HasNodeMarkedFocus() })
hasProgrammaticFocus = true
break
}
for _, spec := range specs {
if spec.Nodes.HasNodeMarkedFocus() && !spec.Nodes.HasNodeMarkedPending() {
hasProgrammaticFocus = true
break
}
}
if hasProgrammaticFocus {
skipChecks = append(skipChecks, func(spec Spec) bool { return !spec.Nodes.HasNodeMarkedFocus() })
}
if suiteConfig.LabelFilter != "" {
labelFilter, _ := types.ParseLabelFilter(suiteConfig.LabelFilter)
skipChecks = append(skipChecks, func(spec Spec) bool {
return !labelFilter(UnionOfLabels(suiteLabels, spec.Nodes.UnionOfLabels()))
skipChecks = append(skipChecks, func(spec Spec) bool {
return !labelFilter(UnionOfLabels(suiteLabels, spec.Nodes.UnionOfLabels()))
})
}


@@ -6,6 +6,7 @@ import (
var Suite *internal.Suite
var Failer *internal.Failer
var backupSuite *internal.Suite
func init() {
InitializeGlobals()
@@ -15,3 +16,13 @@ func InitializeGlobals() {
Failer = internal.NewFailer()
Suite = internal.NewSuite()
}
func PushClone() error {
var err error
backupSuite, err = Suite.Clone()
return err
}
func PopClone() {
Suite = backupSuite
}


@@ -321,7 +321,10 @@ func (g *group) run(specs Specs) {
if !skip {
var maxAttempts = 1
if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
if g.suite.config.MustPassRepeatedly > 0 {
maxAttempts = g.suite.config.MustPassRepeatedly
g.suite.currentSpecReport.MaxMustPassRepeatedly = maxAttempts
} else if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
maxAttempts = max(1, spec.MustPassRepeatedly())
} else if g.suite.config.FlakeAttempts > 0 {
maxAttempts = g.suite.config.FlakeAttempts


@@ -10,7 +10,7 @@ import (
"github.com/onsi/ginkgo/v2/internal/parallel_support"
)
const ABORT_POLLING_INTERVAL = 500 * time.Millisecond
var ABORT_POLLING_INTERVAL = 500 * time.Millisecond
type InterruptCause uint
@@ -62,13 +62,14 @@ type InterruptHandlerInterface interface {
}
type InterruptHandler struct {
c chan interface{}
lock *sync.Mutex
level InterruptLevel
cause InterruptCause
client parallel_support.Client
stop chan interface{}
signals []os.Signal
c chan interface{}
lock *sync.Mutex
level InterruptLevel
cause InterruptCause
client parallel_support.Client
stop chan interface{}
signals []os.Signal
requestAbortCheck chan interface{}
}
func NewInterruptHandler(client parallel_support.Client, signals ...os.Signal) *InterruptHandler {
@@ -76,11 +77,12 @@ func NewInterruptHandler(client parallel_support.Client, signals ...os.Signal) *
signals = []os.Signal{os.Interrupt, syscall.SIGTERM}
}
handler := &InterruptHandler{
c: make(chan interface{}),
lock: &sync.Mutex{},
stop: make(chan interface{}),
client: client,
signals: signals,
c: make(chan interface{}),
lock: &sync.Mutex{},
stop: make(chan interface{}),
requestAbortCheck: make(chan interface{}),
client: client,
signals: signals,
}
handler.registerForInterrupts()
return handler
@@ -109,6 +111,12 @@ func (handler *InterruptHandler) registerForInterrupts() {
pollTicker.Stop()
return
}
case <-handler.requestAbortCheck:
if handler.client.ShouldAbort() {
close(abortChannel)
pollTicker.Stop()
return
}
case <-handler.stop:
pollTicker.Stop()
return
@@ -152,11 +160,18 @@ func (handler *InterruptHandler) registerForInterrupts() {
func (handler *InterruptHandler) Status() InterruptStatus {
handler.lock.Lock()
defer handler.lock.Unlock()
return InterruptStatus{
status := InterruptStatus{
Level: handler.level,
Channel: handler.c,
Cause: handler.cause,
}
handler.lock.Unlock()
if handler.client != nil && handler.client.ShouldAbort() && !status.Interrupted() {
close(handler.requestAbortCheck)
<-status.Channel
return handler.Status()
}
return status
}


@@ -597,12 +597,16 @@ func (n Node) IsZero() bool {
/* Nodes */
type Nodes []Node
func (n Nodes) Clone() Nodes {
nodes := make(Nodes, len(n))
copy(nodes, n)
return nodes
}
func (n Nodes) CopyAppend(nodes ...Node) Nodes {
numN := len(n)
out := make(Nodes, numN+len(nodes))
for i, node := range n {
out[i] = node
}
copy(out, n)
for j, node := range nodes {
out[numN+j] = node
}
@@ -875,6 +879,15 @@ func (n Nodes) FirstNodeMarkedOrdered() Node {
return Node{}
}
func (n Nodes) IndexOfFirstNodeMarkedOrdered() int {
for i := range n {
if n[i].MarkedOrdered {
return i
}
}
return -1
}
func (n Nodes) GetMaxFlakeAttempts() int {
maxFlakeAttempts := 0
for i := range n {


@@ -27,36 +27,43 @@ func (s *SortableSpecs) Swap(i, j int) { s.Indexes[i], s.Indexes[j] = s.Indexes[
func (s *SortableSpecs) Less(i, j int) bool {
a, b := s.Specs[s.Indexes[i]], s.Specs[s.Indexes[j]]
firstOrderedA := a.Nodes.FirstNodeMarkedOrdered()
firstOrderedB := b.Nodes.FirstNodeMarkedOrdered()
if firstOrderedA.ID == firstOrderedB.ID && !firstOrderedA.IsZero() {
// strictly preserve order in ordered containers. ID will track this as IDs are generated monotonically
return a.FirstNodeWithType(types.NodeTypeIt).ID < b.FirstNodeWithType(types.NodeTypeIt).ID
aNodes, bNodes := a.Nodes.WithType(types.NodeTypesForContainerAndIt), b.Nodes.WithType(types.NodeTypesForContainerAndIt)
firstOrderedAIdx, firstOrderedBIdx := aNodes.IndexOfFirstNodeMarkedOrdered(), bNodes.IndexOfFirstNodeMarkedOrdered()
if firstOrderedAIdx > -1 && firstOrderedBIdx > -1 && aNodes[firstOrderedAIdx].ID == bNodes[firstOrderedBIdx].ID {
// strictly preserve order within an ordered containers. ID will track this as IDs are generated monotonically
return aNodes.FirstNodeWithType(types.NodeTypeIt).ID < bNodes.FirstNodeWithType(types.NodeTypeIt).ID
}
aCLs := a.Nodes.WithType(types.NodeTypesForContainerAndIt).CodeLocations()
bCLs := b.Nodes.WithType(types.NodeTypesForContainerAndIt).CodeLocations()
for i := 0; i < len(aCLs) && i < len(bCLs); i++ {
aCL, bCL := aCLs[i], bCLs[i]
if aCL.FileName < bCL.FileName {
return true
} else if aCL.FileName > bCL.FileName {
return false
// if either spec is in an ordered container - only use the nodes up to the outermost ordered container
if firstOrderedAIdx > -1 {
aNodes = aNodes[:firstOrderedAIdx+1]
}
if firstOrderedBIdx > -1 {
bNodes = bNodes[:firstOrderedBIdx+1]
}
for i := 0; i < len(aNodes) && i < len(bNodes); i++ {
aCL, bCL := aNodes[i].CodeLocation, bNodes[i].CodeLocation
if aCL.FileName != bCL.FileName {
return aCL.FileName < bCL.FileName
}
if aCL.LineNumber < bCL.LineNumber {
return true
} else if aCL.LineNumber > bCL.LineNumber {
return false
if aCL.LineNumber != bCL.LineNumber {
return aCL.LineNumber < bCL.LineNumber
}
}
// either everything is equal or we have different lengths of CLs
if len(aCLs) < len(bCLs) {
return true
} else if len(aCLs) > len(bCLs) {
return false
if len(aNodes) != len(bNodes) {
return len(aNodes) < len(bNodes)
}
// ok, now we are sure everything was equal. so we use the spec text to break ties
return a.Text() < b.Text()
for i := 0; i < len(aNodes); i++ {
if aNodes[i].Text != bNodes[i].Text {
return aNodes[i].Text < bNodes[i].Text
}
}
// ok, all those texts were equal. we'll use the ID of the most deeply nested node as a last resort
return aNodes[len(aNodes)-1].ID < bNodes[len(bNodes)-1].ID
}
type GroupedSpecIndices []SpecIndices


@@ -26,6 +26,17 @@ func (impl *dupSyscallOutputInterceptorImpl) CreateStdoutStderrClones() (*os.Fil
stdoutCloneFD, _ := unix.Dup(1)
stderrCloneFD, _ := unix.Dup(2)
// Important, set the fds to FD_CLOEXEC to prevent them leaking into childs
// https://github.com/onsi/ginkgo/issues/1191
flags, err := unix.FcntlInt(uintptr(stdoutCloneFD), unix.F_GETFD, 0)
if err == nil {
unix.FcntlInt(uintptr(stdoutCloneFD), unix.F_SETFD, flags|unix.FD_CLOEXEC)
}
flags, err = unix.FcntlInt(uintptr(stderrCloneFD), unix.F_GETFD, 0)
if err == nil {
unix.FcntlInt(uintptr(stderrCloneFD), unix.F_SETFD, flags|unix.FD_CLOEXEC)
}
// And then wrap the clone file descriptors in files.
// One benefit of this (that we don't use yet) is that we can actually write
// to these files to emit output to the console even though we're intercepting output


@@ -77,6 +77,20 @@ func NewSuite() *Suite {
}
}
func (suite *Suite) Clone() (*Suite, error) {
if suite.phase != PhaseBuildTopLevel {
return nil, fmt.Errorf("cnanot clone suite after tree has been built")
}
return &Suite{
tree: &TreeNode{},
phase: PhaseBuildTopLevel,
ProgressReporterManager: NewProgressReporterManager(),
topLevelContainers: suite.topLevelContainers.Clone(),
suiteNodes: suite.suiteNodes.Clone(),
selectiveLock: &sync.Mutex{},
}, nil
}
func (suite *Suite) BuildTree() error {
// During PhaseBuildTopLevel, the top level containers are stored in suite.topLevelCotainers and entered
// We now enter PhaseBuildTree where these top level containers are entered and added to the spec tree
@@ -245,7 +259,9 @@ func (suite *Suite) pushCleanupNode(node Node) error {
node.NodeIDWhereCleanupWasGenerated = suite.currentNode.ID
node.NestingLevel = suite.currentNode.NestingLevel
suite.selectiveLock.Lock()
suite.cleanupNodes = append(suite.cleanupNodes, node)
suite.selectiveLock.Unlock()
return nil
}
@@ -326,6 +342,16 @@ func (suite *Suite) CurrentSpecReport() types.SpecReport {
return report
}
// Only valid in the preview context. In general suite.report only includes
// the specs run by _this_ node - it is only at the end of the suite that
// the parallel reports are aggregated. However in the preview context we run
// in series and
func (suite *Suite) GetPreviewReport() types.Report {
suite.selectiveLock.Lock()
defer suite.selectiveLock.Unlock()
return suite.report
}
func (suite *Suite) AddReportEntry(entry ReportEntry) error {
if suite.phase != PhaseRun {
return types.GinkgoErrors.AddReportEntryNotDuringRunPhase(entry.Location)
@@ -937,6 +963,12 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
gracePeriodChannel = time.After(gracePeriod)
case <-interruptStatus.Channel:
interruptStatus = suite.interruptHandler.Status()
// ignore interruption from other process if we are cleaning up or reporting
if interruptStatus.Cause == interrupt_handler.InterruptCauseAbortByOtherProcess &&
node.NodeType.Is(types.NodeTypesAllowedDuringReportInterrupt|types.NodeTypesAllowedDuringCleanupInterrupt) {
continue
}
deadlineChannel = nil // don't worry about deadlines, time's up now
failureTimelineLocation := suite.generateTimelineLocation()


@@ -7,6 +7,7 @@ import (
"github.com/onsi/ginkgo/v2/formatter"
"github.com/onsi/ginkgo/v2/internal"
"github.com/onsi/ginkgo/v2/reporters"
"github.com/onsi/ginkgo/v2/types"
)
@@ -185,6 +186,9 @@ func (t *ginkgoTestingTProxy) Fi(indentation uint, format string, args ...any) s
func (t *ginkgoTestingTProxy) Fiw(indentation uint, maxWidth uint, format string, args ...any) string {
return t.f.Fiw(indentation, maxWidth, format, args...)
}
func (t *ginkgoTestingTProxy) RenderTimeline() string {
return reporters.RenderTimeline(t.report(), false)
}
func (t *ginkgoTestingTProxy) GinkgoRecover() {
t.ginkgoRecover()
}


@@ -135,6 +135,10 @@ func (w *Writer) Println(a ...interface{}) {
func GinkgoLogrFunc(writer *Writer) logr.Logger {
return funcr.New(func(prefix, args string) {
writer.Printf("%s", args)
if prefix == "" {
writer.Printf("%s\n", args)
} else {
writer.Printf("%s %s\n", prefix, args)
}
}, funcr.Options{})
}


@@ -4,16 +4,21 @@ import (
"encoding/json"
"fmt"
"os"
"path"
"github.com/onsi/ginkgo/v2/types"
)
//GenerateJSONReport produces a JSON-formatted report at the passed in destination
// GenerateJSONReport produces a JSON-formatted report at the passed in destination
func GenerateJSONReport(report types.Report, destination string) error {
if err := os.MkdirAll(path.Dir(destination), 0770); err != nil {
return err
}
f, err := os.Create(destination)
if err != nil {
return err
}
defer f.Close()
enc := json.NewEncoder(f)
enc.SetIndent("", " ")
err = enc.Encode([]types.Report{
@@ -22,11 +27,11 @@ func GenerateJSONReport(report types.Report, destination string) error {
if err != nil {
return err
}
return f.Close()
return nil
}
//MergeJSONReports produces a single JSON-formatted report at the passed in destination by merging the JSON-formatted reports provided in sources
//It skips over reports that fail to decode but reports on them via the returned messages []string
// MergeJSONReports produces a single JSON-formatted report at the passed in destination by merging the JSON-formatted reports provided in sources
// It skips over reports that fail to decode but reports on them via the returned messages []string
func MergeAndCleanupJSONReports(sources []string, destination string) ([]string, error) {
messages := []string{}
allReports := []types.Report{}
@@ -46,15 +51,19 @@ func MergeAndCleanupJSONReports(sources []string, destination string) ([]string,
allReports = append(allReports, reports...)
}
if err := os.MkdirAll(path.Dir(destination), 0770); err != nil {
return messages, err
}
f, err := os.Create(destination)
if err != nil {
return messages, err
}
defer f.Close()
enc := json.NewEncoder(f)
enc.SetIndent("", " ")
err = enc.Encode(allReports)
if err != nil {
return messages, err
}
return messages, f.Close()
return messages, nil
}


@@ -14,6 +14,7 @@ import (
"encoding/xml"
"fmt"
"os"
"path"
"strings"
"github.com/onsi/ginkgo/v2/config"
@@ -285,6 +286,9 @@ func GenerateJUnitReportWithConfig(report types.Report, dst string, config Junit
TestSuites: []JUnitTestSuite{suite},
}
if err := os.MkdirAll(path.Dir(dst), 0770); err != nil {
return err
}
f, err := os.Create(dst)
if err != nil {
return err
@@ -322,6 +326,9 @@ func MergeAndCleanupJUnitReports(sources []string, dst string) ([]string, error)
mergedReport.TestSuites = append(mergedReport.TestSuites, report.TestSuites...)
}
if err := os.MkdirAll(path.Dir(dst), 0770); err != nil {
return messages, err
}
f, err := os.Create(dst)
if err != nil {
return messages, err
@@ -344,8 +351,12 @@ func failureDescriptionForUnstructuredReporters(spec types.SpecReport) string {
}
func systemErrForUnstructuredReporters(spec types.SpecReport) string {
return RenderTimeline(spec, true)
}
func RenderTimeline(spec types.SpecReport, noColor bool) string {
out := &strings.Builder{}
NewDefaultReporter(types.ReporterConfig{NoColor: true, VeryVerbose: true}, out).emitTimeline(0, spec, spec.Timeline())
NewDefaultReporter(types.ReporterConfig{NoColor: noColor, VeryVerbose: true}, out).emitTimeline(0, spec, spec.Timeline())
return out.String()
}


@@ -11,6 +11,7 @@ package reporters
import (
"fmt"
"os"
"path"
"strings"
"github.com/onsi/ginkgo/v2/types"
@@ -27,6 +28,9 @@ func tcEscape(s string) string {
}
func GenerateTeamcityReport(report types.Report, dst string) error {
if err := os.MkdirAll(path.Dir(dst), 0770); err != nil {
return err
}
f, err := os.Create(dst)
if err != nil {
return err


@@ -149,7 +149,7 @@ func PruneStack(fullStackTrace string, skip int) string {
re := regexp.MustCompile(`\/ginkgo\/|\/pkg\/testing\/|\/pkg\/runtime\/`)
for i := 0; i < len(stack)/2; i++ {
// We filter out based on the source code file name.
if !re.Match([]byte(stack[i*2+1])) {
if !re.MatchString(stack[i*2+1]) {
prunedStack = append(prunedStack, stack[i*2])
prunedStack = append(prunedStack, stack[i*2+1])
}


@@ -27,6 +27,7 @@ type SuiteConfig struct {
FailOnPending bool
FailFast bool
FlakeAttempts int
MustPassRepeatedly int
DryRun bool
PollProgressAfter time.Duration
PollProgressInterval time.Duration


@@ -38,7 +38,7 @@ func (d deprecations) Async() Deprecation {
func (d deprecations) Measure() Deprecation {
return Deprecation{
Message: "Measure is deprecated and will be removed in Ginkgo V2. Please migrate to gomega/gmeasure.",
Message: "Measure is deprecated and has been removed from Ginkgo V2. Any Measure tests in your spec will not run. Please migrate to gomega/gmeasure.",
DocLink: "removed-measure",
Version: "1.16.3",
}


@@ -453,8 +453,8 @@ func (g ginkgoErrors) InvalidEntryDescription(cl CodeLocation) error {
func (g ginkgoErrors) MissingParametersForTableFunction(cl CodeLocation) error {
return GinkgoError{
Heading: fmt.Sprintf("No parameters have been passed to the Table Function"),
Message: fmt.Sprintf("The Table Function expected at least 1 parameter"),
Heading: "No parameters have been passed to the Table Function",
Message: "The Table Function expected at least 1 parameter",
CodeLocation: cl,
DocLink: "table-specs",
}


@@ -97,9 +97,7 @@ func (report Report) Add(other Report) Report {
report.RunTime = report.EndTime.Sub(report.StartTime)
reports := make(SpecReports, len(report.SpecReports)+len(other.SpecReports))
for i := range report.SpecReports {
reports[i] = report.SpecReports[i]
}
copy(reports, report.SpecReports)
offset := len(report.SpecReports)
for i := range other.SpecReports {
reports[i+offset] = other.SpecReports[i]


@@ -1,3 +1,3 @@
package types
const VERSION = "2.9.2"
const VERSION = "2.13.2"