I was born in 1973 and live in Tilburg, the Netherlands, with my three gorgeous children.
I am also known as mrhaki, which is simply the initials of my name prepended by mr. The following Groovy snippet shows how the alias comes together:
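A minimal Groovy sketch of how that could look (assuming the full author name Hubert A. Klein Ikkink):

```groovy
def name = 'Hubert A. Klein Ikkink'
def alias = 'mr' + name.split(' ').collect { it[0].toLowerCase() }.join()
assert alias == 'mrhaki'
```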
(How cool is Groovy that we can express this in a simple code sample ;-) )
I studied Information Systems and Management at the Tilburg University.
After finishing my studies I started to work at a company which specialized in knowledge-based software. There I started writing my first Java software (yes, an applet!) in 1996.
Over the years my focus switched from applets, to servlets, to Java Enterprise Edition applications, to Spring-based software.
In 2008 I wanted to have fun again when writing software.
The larger projects I was working on were more about writing XML configuration files and tuning performance, and less about real development.
So I started to look around, and Groovy caught my attention as a good language to learn.
I could still use existing Java code, libraries, and my Groovy classes in Java.
The learning curve isn’t steep and to support my learning phase I wrote down interesting Groovy facts in my blog with the title Groovy Goodness.
I post small articles with a lot of code samples to understand how to use Groovy. Since November 2011 I am also a DZone Most Valuable Blogger (MVB); DZone also posts my blog items on their site.
I have spoken at the Gr8Conf Europe and US editions about Groovy, Gradle, Grails and Asciidoctor topics. Other conferences where I have spoken include Greach in Madrid, Spain, JavaLand in Germany and JFall in The Netherlands.
I work for a company called JDriven in the Netherlands. JDriven focuses on technologies that simplify and improve development of enterprise applications. Employees of JDriven have years of experience with Java and related technologies and are all eager to learn about new technologies.
Introduction
The Spock framework is a testing and specification framework for JVM languages, like Java and Groovy. When I started to learn about Spock I wrote down little code snippets with features of Spock I found interesting. To access my notes from different locations I wrote the snippets with a short explanation in a blog: Messages from mrhaki. I labeled the posts as Spocklight, because I thought this is good stuff that needs to be in the spotlight.
A while ago I bundled all my Groovy and Grails Goodness blog posts in a book published at Leanpub.
Leanpub is very easy to use and I could use Markdown to write the content, which I really liked as a developer. So it felt natural to also bundle the Spocklight blog posts at Leanpub.
The book is intended for browsing through the subjects.
You should be able to just open the book at a random page and learn more about Spock.
Maybe pick it up once in a while and learn a bit more about known and lesser known features of Spock.
I hope you will enjoy reading the book and it will help you with learning about the Spock framework, so you can apply all the goodness in your projects.
Getting Started
Introduction to Spock Testing
In this blog entry we will start with a simple Spock specification for a Groovy class we create. We can learn why to use Spock on the Spock website. In this article we show with code how our tests will work with Spock. To get started with this sample we need Gradle installed on our machine and nothing else. The current release of Gradle at the time of writing this entry is 0.9-preview-3.
First we need to create a new directory for our little project and then we create a new build.gradle file:
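A sketch of such a build file could look like the following; the exact syntax and the Groovy and Spock versions depend on the Gradle release we use:

```groovy
// Sketch only: older Gradle releases use usePlugin 'groovy' instead of apply plugin.
apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    // Example versions; use the Groovy and Spock versions that match your setup.
    groovy 'org.codehaus.groovy:groovy-all:1.7.5'
    testCompile 'org.spockframework:spock-core:0.4-groovy-1.7'
}
```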
Next we create our tests, but in Spock they are called specifications. We only need to extend spock.lang.Specification and we get all the Spock magic in our hands. We start simple by defining a specification where we want to count the number of users in a UserService class. We are going to create the UserService class later; first we start with our specification:
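The original listing is not reproduced here; a minimal sketch (package and user names are just examples) could look like this, and the line numbers mentioned below refer to this listing:

```groovy
package com.mrhaki.blog

import spock.lang.Specification

class UserServiceSpecification extends Specification {
    def "count() must return the number of users"() {
        setup:
        def userService = new UserService(users: ['mrhaki', 'hubert'])

        expect:
        userService.count() == 2
    }
}
```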
Notice at line 6 how we can use very descriptive method names by using String literals. Next we create an instance of the UserService class and pass a list of two users at line 8. And then we check if the return value is the expected value 2 with a simple assertion statement. Spock provides a very readable way to write code. Mostly we first set up code for testing, run the code and finally test the results. This logic is supported nicely by Spock by using the labels setup and expect. Later on we see more of these labels.
Before we run the test we create our UserService class:
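A simple implementation sketch that also handles a missing user list:

```groovy
package com.mrhaki.blog

class UserService {
    List<String> users

    int count() {
        users?.size() ?: 0
    }
}
```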
We can run our code and test with the following command:
The source files are compiled and our specification is run. We get the BUILD SUCCESSFUL message indicating our test runs fine. If the test would fail we can open build/reports/tests/index.html or build/test-results/TEST-com.mrhaki.blog.UserServiceSpecification.xml to see the failure.
We specified that the count() method must return the number of users, but we only check it for 2 elements. What if we want to test whether 0 and 1 user also return the correct count value? We can create new methods in our specification class, but Spock makes it so easy to do this elegantly:
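A sketch of the rewritten feature method with a data table:

```groovy
package com.mrhaki.blog

import spock.lang.Specification

class UserServiceSpecification extends Specification {
    def "count() must return the number of users"() {
        setup:
        def userService = new UserService(users: userList)

        expect:
        userService.count() == expectedCount

        where:
        userList             | expectedCount
        null                 | 0
        []                   | 0
        ['mrhaki']           | 1
        ['mrhaki', 'hubert'] | 2
    }
}
```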
So what happens here? We use a new label, where, which contains a data table. Each row of the data table represents a new test run with the data from that row. In the setup block we use an unbound variable userList and in the expect block the unbound variable expectedCount. The variables get their values from the data table rows in the where block. So in the first run the UserService instance gets null assigned to the users property and we expect the value 0 to be returned by the count() method. In the second run we pass an empty list and also expect 0 from the count() method. We have four rows, so our test is run four times when we invoke $ gradle test.
We can make the fact that four tests are run explicit by using the @Unroll annotation. We can use a String as argument describing the specific variable values used in a run. If we use a # followed by the unbound variable name, it will be replaced when we run the code:
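For example, a sketch of the annotated feature method (the rest of the specification stays the same):

```groovy
// Requires import spock.lang.Unroll at the top of the specification.
@Unroll("count() for #userList must return #expectedCount")
def "count the number of users"() {
    setup:
    def userService = new UserService(users: userList)

    expect:
    userService.count() == expectedCount

    where:
    userList             | expectedCount
    null                 | 0
    []                   | 0
    ['mrhaki']           | 1
    ['mrhaki', 'hubert'] | 2
}
```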
The generated XML with the test result contains the four runs with their specific names:
This concludes the introduction to Spock testing. In future posts we learn more about Spock and the great features it provides to make writing tests easy and fun.
One of the many great features of Spock is the way assertion failures are shown. The power assert from Groovy is based on Spock's assertion feature, but Spock takes it to a next level. Let's create a simple specification for a course service, which is able to create new courses:
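The original listing is not reproduced here; the following sketch (class and property names are assumptions) shows the idea, and the line numbers mentioned in the text refer to this listing:

```groovy
package com.mrhaki.blog

import spock.lang.Specification

class CourseServiceSpecification extends Specification {

    def "create new course with teacher and title"() {
        given:
        final CourseService courseService = new CourseService()
        final String teacherName = 'mrhaki'

        when:
        final Course course = courseService.createCourse(teacherName, 'Spock')

        then:
        'Mrhaki' == course.teacher.name
        course.title == 'Spock'
        !course.students
    }
}
```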
At lines 16, 17, 18 we define the assertions for our specification. First of all we notice we don't add the keyword assert for each assertion. Because we are in the then block we can omit the assert keyword. Notice at line 18 we can test for null values by using the Groovy truth. We also notice we only have to write a simple assertion. Spock doesn't need a bunch of assertEquals() methods like JUnit to test the result.
Now it is time to run our specification as JUnit test and see the result:
Wow, that is a very useful message for what is going wrong! We can see our condition 'Mrhaki' == course.teacher.name is not satisfied, but we even get to see which part of the String value is not correct. In this case the first character should be lowercase instead of uppercase, but the message clearly shows the rest of the String value is correct. As a matter of fact we even know 83% of the String value is similar.
Another nice feature of Spock is that only the line which is important is shown in the abbreviated stacktrace. So we don't have to scroll through a big stacktrace with framework classes to find out where in our class the exception occurs. We immediately see that at line 16 in our specification the condition is not satisfied.
In our sample we have three assertions to be checked in the then: block. If we get a lot of assertions in the then: block we can refactor our specification and put the assertions in a new method. This method must have a void return type and we must add the assert keyword again. After these changes the assertions work just like when we put them in the then: block:
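A sketch of the refactored specification method and the new assertion method:

```groovy
def "create new course with teacher and title"() {
    given:
    final CourseService courseService = new CourseService()

    when:
    final Course course = courseService.createCourse('mrhaki', 'Spock')

    then:
    assertCourse(course)
}

private void assertCourse(final Course course) {
    assert 'Mrhaki' == course.teacher.name
    assert course.title == 'Spock'
    assert !course.students
}
```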
We can run our specification and get the following output:
Spock provides very useful assertion messages when the condition is not satisfied. We see immediately what wasn't correct, because of the message and the fact the stacktrace only shows the line where the code is wrong.
In a Spock specification we write our assertions in the then: or expect: blocks. If we need to write multiple assertions for an object we can group those with the with method. We specify the object we want to write assertions for as argument, followed by a closure with the real assertions. We don't need to use the assert keyword inside the closure, just as we don't have to use the assert keyword in an expect: or then: block.
In the following example specification we have a very simple implementation for finding a User object. We want to check that the properties username and name have the correct value.
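A sketch of such a specification, assuming a very simple UserService implementation:

```groovy
import spock.lang.Specification

class User {
    String username
    String name
}

class UserService {
    User findUser(final String username) {
        // Simplistic implementation, just for the example.
        new User(username: username, name: 'Hubert A. Klein Ikkink')
    }
}

class UserServiceSpecification extends Specification {

    def "check properties of found user with with()"() {
        given:
        final UserService userService = new UserService()

        when:
        final User user = userService.findUser('mrhaki')

        then:
        with(user) {
            username == 'mrhaki'
            name == 'Hubert A. Klein Ikkink'
        }
    }
}
```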
We all know writing tests or specifications with Spock is fun. We can run our specifications and when one of the assertions in a feature method is invalid, the feature method is marked as failed. If we have more than one assertion in our feature method, only the first assertion that fails is returned as an error. Any other assertions after a failing assertion are not checked. To let Spock execute all assertions and return all failing assertions at once we must use the verifyAll method. We pass a closure with all our assertions to the method. All assertions will be checked when we use verifyAll, and if one or more of the assertions is invalid the feature method will fail.
In the following example specification we have three assertions in the feature method that checks all properties are valid. We don't use the verifyAll method in our first example.
Let's see what happens when we don't use the verifyAll method and run the specification:
In the next example we use verifyAll to group the assertions for our course object:
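A sketch of the feature method using verifyAll (the Course class is an assumption):

```groovy
import spock.lang.Specification

class Course {
    String teacher
    String title
    Integer maxAttendees
}

class CourseSpecification extends Specification {

    def "check all properties are valid"() {
        given:
        final Course course = new Course(teacher: 'mrhaki', title: 'Spock', maxAttendees: 20)

        expect:
        verifyAll {
            course.teacher == 'mrhaki'
            course.title == 'Spock'
            course.maxAttendees == 20
        }
    }
}
```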
We re-run the specification and now we get different output with all three assertions mentioned:
With Spock we can easily write feature methods in our specification to test if an exception is thrown by the methods invoked in a when block. Spock supports exception conditions with the thrown() and notThrown() methods. We can even get a reference to the expected exception and check, for example, the message.
The following piece of code contains the specification to check for exceptions that can be thrown by the cook() method of the RecipeService class. We also check that exceptions are not thrown. The syntax is clear and concise, which is what we expect from Spock.
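A sketch of such a specification, assuming a simple RecipeService implementation:

```groovy
import spock.lang.Specification

class CookException extends Exception {
    CookException(final String message) {
        super(message)
    }
}

class RecipeService {
    void cook(final String recipe) {
        if (!recipe) {
            throw new CookException('No recipe given.')
        }
        // Cook the recipe...
    }
}

class RecipeServiceSpecification extends Specification {

    private RecipeService recipeService = new RecipeService()

    def "cook() without a recipe throws an exception"() {
        when:
        recipeService.cook(null)

        then:
        final CookException exception = thrown()
        exception.message == 'No recipe given.'
    }

    def "cook() with a recipe doesn't throw a CookException"() {
        when:
        recipeService.cook('Pasta')

        then:
        notThrown(CookException)
    }
}
```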
In a previous post we learned that we can check that a specific exception is not thrown in our specification with the notThrown method. If we are not interested in a specific exception, but just want to check that no exception at all is thrown, we must use the noExceptionThrown method. This method returns true if the called code doesn't throw an exception.
In the following example we invoke a method (cook) that can throw an exception. We want to test the case when no exception is thrown:
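A sketch of the feature method, reusing the RecipeService from the previous example:

```groovy
def "cook() with a valid recipe doesn't throw any exception"() {
    when:
    recipeService.cook('Pasta')

    then:
    noExceptionThrown()
}
```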
Since Spock 2.1 we have two new operators we can use in assertions to check collections: =~ and ==~. We can use these operators with implementations of the Iterable interface when we want to check that a given collection has the same elements as an expected collection and we don't care about the order of the elements. Without the new operators we would have to cast our collections to a Set first and then use the == operator.
The difference between the operators =~ and ==~ is that =~ is lenient and ==~ is strict. With the lenient match operator we expect that each element in our expected collection appears at least once in the collection we want to assert. The strict match operator expects that each element in our expected collection appears exactly once in the collection we want to assert.
In the following example we see different uses of the new operators and some other idiomatic Groovy ways to check elements in a collection in any order:
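A sketch of such a specification:

```groovy
import spock.lang.Specification

class CollectionConditionsSpecification extends Specification {

    def "check collections ignoring the order of the elements"() {
        given:
        final List<String> names = ['Groovy', 'Spock', 'Gradle', 'Spock']

        expect: 'lenient match: each expected element appears at least once'
        names =~ ['Spock', 'Gradle', 'Groovy']

        and: 'strict match: same elements in any order'
        ['Groovy', 'Spock', 'Gradle'] ==~ ['Spock', 'Gradle', 'Groovy']

        and: 'idiomatic Groovy alternatives'
        names as Set == ['Spock', 'Gradle', 'Groovy'] as Set
        names.containsAll(['Spock', 'Gradle'])
    }
}
```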
Spock has support for Hamcrest matchers and adds some extra syntactic sugar. In an expect: block in a Spock specification method we can use the following syntax: value Matcher. Let's create a sample Spock specification and use this syntax with the Hamcrest matcher hasKey:
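A sketch of such a specification (spock-core and a Hamcrest dependency must be on the classpath):

```groovy
import spock.lang.Specification

import static org.hamcrest.Matchers.hasKey

class SampleSpecification extends Specification {

    def "check keys in map with the hasKey matcher"() {
        given:
        final Map map = [name: 'mrhaki', country: 'The Netherlands']

        expect:
        map hasKey('name')
        map hasKey('city')   // This matcher is not satisfied and shows a useful message.
    }
}
```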
We can run the code ($ groovy SampleSpecification.groovy) and see in the output a very useful assertion message for the second matcher in the expect: block. We directly see what went wrong and what was expected.
With Spock we can rewrite the specification and use the static method that() in spock.util.matcher.HamcrestSupport as a shortcut for the Hamcrest assertThat() method. The following sample shows how we can use that(). With this method we can use the assertion outside an expect: or then: block.
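A sketch using that():

```groovy
import spock.lang.Specification

import static org.hamcrest.Matchers.hasKey
import static spock.util.matcher.HamcrestSupport.that

class SampleSpecification extends Specification {

    def "check keys in map with that()"() {
        setup:
        final Map map = [name: 'mrhaki']

        expect:
        that map, hasKey('name')
    }
}
```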
Finally we can use the expect() method in spock.util.matcher.HamcrestSupport to add the assertion in a then: block. This improves readability of our specification.
In a previous blog post we learned how we can use Hamcrest matchers. We can also create a custom matcher and use it in our Spock specification. Let's create a custom matcher that will check if elements in a list are part of a range.
In the following specification we create the method inRange() which will return an instance of a Matcher object. This object must implement a matches() method and extra methods to format the description when the matcher fails. We use Groovy's support to create a Map and turn it into an instance of BaseMatcher.
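A sketch of such a specification (the range check itself is just an example):

```groovy
import org.hamcrest.BaseMatcher
import org.hamcrest.Matcher
import spock.lang.Specification

import static spock.util.matcher.HamcrestSupport.that

class SampleSpecification extends Specification {

    def "all elements of the list must be in the given range"() {
        given:
        final List<Integer> list = [2, 3, 5, 8]

        expect:
        that list, inRange(0..10)
    }

    private Matcher inRange(final IntRange range) {
        [
            matches         : { final item -> item.every { it in range } },
            describeTo      : { final description ->
                description.appendText("list elements in range ${range}")
            },
            describeMismatch: { final item, final description ->
                description.appendText("found elements outside range ${range}")
            }
        ] as BaseMatcher
    }
}
```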
We can run the specification ($ groovy SampleSpecification.groovy) and everything should work and all tests must pass.
We change the code to see the description we have added. So we change that list, inRange(0..10) to that list, inRange(0..3). We run the specification again ($ groovy SampleSpecification.groovy) and look at the output:
Notice the output shows the text we have defined in the describeTo() and describeMismatch() methods.
We can write data driven tests with Spock. We can specify, for example, a data table or data pipes in a where: block. If we use a data pipe we can specify a data provider that will return the values that are used on each iteration. If our data provider returns multiple results for each row we can assign them immediately to multiple variables. We must use the syntax [var1, var2, var3] << providerImpl to assign values to the data variables var1, var2 and var3. We know from Groovy the multiple assignment syntax with parentheses ((var1, var2, var3)), but with Spock we use square brackets.
In the following sample specification we have a simple feature method. The where: block shows how we can assign the values from the provider to multiple data variables. Notice we can skip values from the provider by using a _ to ignore the value.
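A sketch of such a specification:

```groovy
import spock.lang.Specification

class MultipleDataVariablesSpecification extends Specification {

    def "full name is built from first and last name"() {
        expect:
        "${firstName} ${lastName}" == fullName

        where:
        // The second value of each row is skipped with _.
        [firstName, _, lastName, fullName] << [
            ['Hubert', 'ignored', 'Klein Ikkink', 'Hubert Klein Ikkink'],
            ['Albert', 'ignored', 'van Veen', 'Albert van Veen']
        ]
    }
}
```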
Code written with Spock 0.7-groovy-2.0 and Groovy 2.3.3.
We can use data pipes to write data driven tests in Spock. A data pipe (<<) is fed by a data provider. We can use Collection objects as data provider, but also String objects and any class that implements the Iterable interface. We can write our own data provider class if we implement the Iterable interface.
In the following sample code we want to test the female property of the User class. We have the class MultilineProvider that implements the Iterable interface. The provider class accepts a multiline String value and returns the tokenized result of each line in each iteration.
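A sketch of such a specification; the User class and the exact provider implementation are assumptions:

```groovy
import spock.lang.Specification

class User {
    String username
    String name
    boolean female
}

// Data provider: every line of the source String becomes one iteration,
// tokenized on the ; separator.
class MultilineProvider implements Iterable {
    String source

    @Override
    Iterator iterator() {
        source.readLines()*.tokenize(';').iterator()
    }
}

class UserSpecification extends Specification {

    def "female property is set correctly for each user"() {
        given:
        final User user = new User(username: username, name: name,
                                   female: Boolean.parseBoolean(femaleValue))

        expect:
        user.female == expectedFemale

        where:
        [username, name, femaleValue] << new MultilineProvider(
            source: 'mrhaki;Hubert A. Klein Ikkink;false\nemma;Emma;true')

        expectedFemale = femaleValue == 'true'
    }
}
```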
Code written with Spock 0.7-groovy-2 and Groovy 2.3.3.
Spock's unroll feature is very powerful. The provider data variables can be used in the method description of our specification features with placeholders. For each iteration the placeholders are replaced with correct values. This way we get a nice output where we immediately can see the values that were used to run the code. Placeholders are denoted by a hash sign (#) followed by the variable name. We can even invoke no-argument methods on the variable values or access properties. For example if we have a String value we could get the upper case value with #variableName.toUpperCase(). If we want to use more complex expressions we must introduce a new data variable in the where block. The value of the variable will be determined for each test invocation and we can use the result as a value in the method description.
If we look at the output of the tests we see that the method names do not really represent the code we test. For example we cannot see if the value was lower case or not.
We rewrite the specification and add a new data variable unrollDescription in the where block. We then refer to this variable in our method name description.
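A sketch of the rewritten specification:

```groovy
import spock.lang.Specification
import spock.lang.Unroll

class UnrollSpecification extends Specification {

    @Unroll
    def "check if '#value' (#unrollDescription) is lower case"() {
        expect:
        (value == value.toLowerCase()) == lowerCase

        where:
        value    | lowerCase
        'mrhaki' | true
        'MrHaki' | false

        unrollDescription = lowerCase ? 'all lower case' : 'mixed case'
    }
}
```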
When we look at the output we now have more descriptive method names:
Writing a parameterized specification in Spock is easy. We need to add the where: block and use data providers to specify different values. For each set of values from the data providers our specification is run, so we can for example very effectively test multiple input arguments for a method and the expected outcome. A data provider can be anything that implements the Iterable interface. Spock also adds support for a data table. In the data table we define columns for each variable and in the rows values for each variable. Since Spock 1.1 we can reuse the value of variables inside the data provider or data table. The value of a variable can only be reused in variables that are defined after the variable we want to reuse.
In the following example we have two feature methods, one uses a data provider and one a data table. The variable sentence is defined after the variable search, so we can use the search variable value in the definition of the sentence variable.
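A sketch of the two feature methods:

```groovy
import spock.lang.Specification
import spock.lang.Unroll

class ReuseDataVariablesSpecification extends Specification {

    @Unroll
    def "data provider: find '#search' in '#sentence'"() {
        expect:
        sentence.contains(search)

        where:
        search << ['Groovy', 'Spock']
        sentence = "Testing with ${search} is fun"
    }

    @Unroll
    def "data table: find '#search' in '#sentence'"() {
        expect:
        sentence.contains(search)

        where:
        search   | sentence
        'Groovy' | "Programming with ${search} rocks"
        'Spock'  | "Writing specifications with ${search} rocks"
    }
}
```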
When we run the specification the feature methods will pass, and we see in the @Unroll descriptions that the sentence variable uses the value of search:
Creating and working with mocks and stubs in Spock is easy. If we want to interact with our mocks and stubs we have several options to return values. One of the options is the triple right shift operator >>>. With this operator we can define different return values for multiple invocations of the stubbed or mocked method. For example we can use the >>> operator followed by a list of return values ['abc', 'def', 'ghi']. The first invocation returns abc, the second invocation returns def and the third (and following) invocation(s) return ghi.
In the following specification we have a class under test StringUtil. The class has a dependency on an implementation of the Calculator interface. We mock this interface in our specification. We expect the calculateSize method is invoked 5 times, but we only provide 3 values for the invocations. This means the first time 1 is used, the second time 3 is used and the remaining invocations get the value 4:
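A sketch of such a specification (the StringUtil implementation is an assumption):

```groovy
import spock.lang.Specification

interface Calculator {
    int calculateSize(String value)
}

// Class under test: sums the calculated sizes of all given values.
class StringUtil {
    Calculator calculator

    int totalSize(final List<String> values) {
        values.collect { calculator.calculateSize(it) }.sum()
    }
}

class StringUtilSpecification extends Specification {

    def "totalSize() uses the calculator for every value"() {
        given:
        final Calculator calculator = Mock()
        final StringUtil stringUtil = new StringUtil(calculator: calculator)

        when:
        final int total = stringUtil.totalSize(['a', 'bc', 'def', 'ghij', 'klmno'])

        then:
        5 * calculator.calculateSize(_) >>> [1, 3, 4]
        total == 1 + 3 + 4 + 4 + 4
    }
}
```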
Change Return Value of Mocked or Stubbed Service Based On Argument Value
My colleague Albert van Veen wrote a blog post about Using ArgumentMatchers with Mockito. The idea is to let a mocked or stubbed service return a different value based on the argument passed into the service. This inspired me to write the same sample with Spock.
Spock already has built-in mock and stub support, so first of all we don’t need an extra library to support mocking and stubbing. We can easily create a mock or stub with the Mock() and Stub() methods. We will see usage of both in the following examples.
In the first example we simply return true or false for ChocolateService.doesCustomerLikesChocolate() in the separate test methods.
In the following example we mimic the ArgumentMatcher and this time we use a stub instead of mock.
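A sketch of both variants, assuming a simple ChocolateService interface and Customer class:

```groovy
import spock.lang.Specification

class Customer {
    String name
}

interface ChocolateService {
    boolean doesCustomerLikesChocolate(Customer customer)
}

class ChocolateServiceSpecification extends Specification {

    def "mock returns true for a customer who likes chocolate"() {
        given:
        final ChocolateService chocolateService = Mock()
        final Customer customer = new Customer(name: 'Hubert')

        when:
        final boolean result = chocolateService.doesCustomerLikesChocolate(customer)

        then:
        1 * chocolateService.doesCustomerLikesChocolate(customer) >> true
        result
    }

    def "stub returns a value based on the customer argument"() {
        given:
        final ChocolateService chocolateService = Stub()
        chocolateService.doesCustomerLikesChocolate(_) >> { final Customer customer ->
            customer.name == 'Hubert'
        }

        expect:
        chocolateService.doesCustomerLikesChocolate(new Customer(name: 'Hubert'))
        !chocolateService.doesCustomerLikesChocolate(new Customer(name: 'Emma'))
    }
}
```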
Although I couldn't make it to Gr8Conf EU this year, I am glad a lot of the presentations are available as slide decks and videos. The slide deck for the talk Interesting nooks and crannies of Spock you (may) have never seen before by Marcin Zajączkowski is very interesting. This is really a must read if you use Spock (and why shouldn't you) in your projects. One of the interesting things is the ability to change the response for methods in a class that is stubbed using Spock's Stub method, but have no explicit stubbed method definition.
So normally when we create a stub we would add code that implements the methods from the stubbed class. In our specification the methods we have written are invoked instead of the original methods from the stubbed class. By default if we don't override a method definition, but it is used in the specification, Spock will try to create a response using a default response strategy. The default response strategy for a stub is implemented by the class EmptyOrDummyResponse. For example if a method has a return type Message then Spock will create a new instance of Message and return it to be used in the specification. Spock also has a ZeroOrNullResponse response strategy. With this strategy null is returned for our method that returns the Message type.
Both response strategies implement the IDefaultResponse interface. We can write our own response strategy by implementing this interface. When we use the Stub method we can pass an instance of our response strategy with the defaultResponse named argument of the method. For example: MessageProvider stub = Stub(defaultResponse: new CustomResponse()). We implement the respond method of IDefaultResponse to write a custom response strategy. The method gets an IMockInvocation instance. We can use this instance to check for example the method name, return type, arguments and more. Based on this we can write code to return the response we want.
In the following example we have a Spock specification where we create a stub using the default response strategy, the ZeroOrNullResponse strategy and a custom written response strategy:
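A sketch of such a specification (the Message and MessageProvider types are assumptions):

```groovy
import org.spockframework.mock.EmptyOrDummyResponse
import org.spockframework.mock.IDefaultResponse
import org.spockframework.mock.IMockInvocation
import org.spockframework.mock.ZeroOrNullResponse
import spock.lang.Specification

class Message {
    String text
}

interface MessageProvider {
    Message getMessage()
}

// Custom response strategy: return a default Message for methods that return Message.
class CustomResponse implements IDefaultResponse {
    @Override
    Object respond(final IMockInvocation invocation) {
        if (invocation.method.returnType == Message) {
            return new Message(text: 'Default message')
        }
        EmptyOrDummyResponse.INSTANCE.respond(invocation)
    }
}

class DefaultResponseSpecification extends Specification {

    def "stub with the default response strategy returns a dummy Message"() {
        given:
        final MessageProvider provider = Stub()

        expect:
        provider.message instanceof Message
    }

    def "stub with ZeroOrNullResponse returns null"() {
        given:
        final MessageProvider provider = Stub(defaultResponse: ZeroOrNullResponse.INSTANCE)

        expect:
        provider.message == null
    }

    def "stub with a custom response strategy returns our default Message"() {
        given:
        final MessageProvider provider = Stub(defaultResponse: new CustomResponse())

        expect:
        provider.message.text == 'Default message'
    }
}
```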
When we mock or stub methods we can use the method arguments passed to the method in the response for the mocked or stubbed method. We must write a closure after the right shift operator (>>) and the closure arguments will resemble the arguments of the mocked or stubbed method. Alternatively we can use a single non-typed argument in the closure and this will contain the method argument list.
Let's create a specification where we use this feature. In the following sample we use a mock for the AgeService used in the class under test. The method allowedMaxTime() is invoked by the class under test and basically should return the maximum hour of the day a show can be broadcasted. In our specification we use the name of the show to return different values during the test.
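A sketch of such a specification (the AgeService and Broadcaster types are assumptions):

```groovy
import spock.lang.Specification

interface AgeService {
    int allowedMaxTime(String show)
}

// Class under test: a show can be broadcasted when the hour is not past the allowed max time.
class Broadcaster {
    AgeService ageService

    boolean canBroadcast(final String show, final int hourOfDay) {
        hourOfDay <= ageService.allowedMaxTime(show)
    }
}

class BroadcasterSpecification extends Specification {

    def "use the method argument of the mock in the response"() {
        given:
        final AgeService ageService = Mock()
        final Broadcaster broadcaster = new Broadcaster(ageService: ageService)

        when:
        final boolean lateNightShow = broadcaster.canBroadcast('Horror Night', 23)
        final boolean morningShow = broadcaster.canBroadcast('Cartoon Hour', 10)

        then:
        // The closure argument resembles the show argument of allowedMaxTime().
        2 * ageService.allowedMaxTime(_) >> { final String show ->
            show == 'Cartoon Hour' ? 24 : 22
        }
        !lateNightShow
        morningShow
    }
}
```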
My colleague Arthur Arts has written a blog post Tasty Test Tip: Using ArgumentCaptor for generic collections with Mockito. This inspired me to do the same in Spock. With the ArgumentCaptor in Mockito the parameters of a method call to a mock are captured and can be verified with assertions. In Spock we can also get a hold on the arguments that are passed to method call of a mock and we can write assertions to check the parameters for certain conditions.
When we create a mock in Spock and invoke a method on the mock the arguments are matched using the equals() implementation of the argument type. If they are not equal Spock will tell us by showing a message that there are too few invocations of the method call. Let's show this with an example. First we create some classes we want to test:
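A sketch of such classes (the names are assumptions):

```groovy
class User {
    String username
    String email
}

interface MessageService {
    void sendWelcomeMessage(User user)
}

class ClassUnderTest {
    MessageService messageService

    void registerUser(final String username, final String email) {
        messageService.sendWelcomeMessage(new User(username: username, email: email))
    }
}
```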
Now we can write a Spock specification to test ClassUnderTest. We will now use the default matching of arguments of a mock provided by Spock.
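A sketch of the specification with default argument matching:

```groovy
import spock.lang.Specification

class ClassUnderTestSpecification extends Specification {

    def "registerUser() sends a welcome message"() {
        given:
        final MessageService messageService = Mock()
        final ClassUnderTest classUnderTest = new ClassUnderTest(messageService: messageService)

        when:
        classUnderTest.registerUser('mrhaki', 'email@host')

        then:
        // User doesn't override equals(), so this interaction will not match.
        1 * messageService.sendWelcomeMessage(new User(username: 'mrhaki', email: 'email@host'))
    }
}
```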
When we execute the specification we get a failure with the message that there are too few invocations:
To capture the arguments we have to use a different syntax for the method invocation on the mock. This time we define that the method can be invoked with any number of arguments ((*_)) and then use a closure to capture the arguments. The arguments are passed to the closure as a list. We can then get the argument we want and write an assert statement.
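A sketch of the feature method with argument capturing; the expected username is deliberately different so we can see the assertion message:

```groovy
def "registerUser() sends a welcome message"() {
    given:
    final MessageService messageService = Mock()
    final ClassUnderTest classUnderTest = new ClassUnderTest(messageService: messageService)

    when:
    classUnderTest.registerUser('mrhaki', 'email@host')

    then:
    1 * messageService.sendWelcomeMessage(*_) >> { final arguments ->
        final User user = arguments[0] as User
        assert user.username == 'Hubert'   // Deliberately wrong to show the assert message.
        assert user.email == 'email@host'
    }
}
```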
We run the specification again and it will fail again (of course), but this time we get an assertion message:
Use Stub or Mock For Spring Component Using @SpringBean
When we write tests or specifications using Spock for our Spring Boot application, we might want to replace some Spring components with a stub or mock version. With the stub or mock version we can write expected outcomes and behaviour in our specifications. Since Spock 1.2 and the Spock Spring extension we can use the @SpringBean annotation to replace a Spring component with a stub or mock version. (This is quite similar to the @MockBean annotation for Mockito mocks that is supported by Spring Boot.) We only have to declare a variable in our specification of the type of the Spring component we want to replace. We directly use the Stub() or Mock() methods to create the stub or mock version when we define the variable. From then on we can describe expected output values or behaviour just like with any Spock stub or mock implementation.
To use the @SpringBean annotation we must add a dependency on spock-spring module to our build system. For example if we use Gradle we use the following configuration:
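For example (the versions are only an indication):

```groovy
dependencies {
    testImplementation 'org.spockframework:spock-core:1.3-groovy-2.5'
    testImplementation 'org.spockframework:spock-spring:1.3-groovy-2.5'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
```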
Let's write a very simple Spring Boot application and use the @SpringBean annotation to create a stubbed component. First we write a simple interface with a method that accepts an argument of type String and returns a new String value:
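A sketch of such an interface (the names are assumptions):

```groovy
interface MessageComponent {
    String welcomeMessage(String name)
}
```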
Next we use this interface in a Spring REST controller where we use constructor dependency injection to inject the correct implementation of the MessageComponent interface into the controller:
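A sketch of the controller:

```groovy
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController

@RestController
class MessageController {

    private final MessageComponent messageComponent

    MessageController(final MessageComponent messageComponent) {
        this.messageComponent = messageComponent
    }

    @GetMapping('/message')
    String message(@RequestParam final String name) {
        messageComponent.welcomeMessage(name)
    }
}
```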
To test the controller we write a new Spock specification. We use Spring's MockMvc support to test our controller, but the most important part in the specification is the declaration of the variable messageComponent with the annotation @SpringBean. Inside the method where we invoke /message?name=mrhaki we use the stub to declare our expected output:
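A sketch of the specification:

```groovy
import org.spockframework.spring.SpringBean
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest
import org.springframework.test.web.servlet.MockMvc
import spock.lang.Specification

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status

@WebMvcTest(MessageController)
class MessageControllerSpec extends Specification {

    @Autowired
    private MockMvc mockMvc

    // Replace the MessageComponent Spring bean with a Spock stub.
    @SpringBean
    MessageComponent messageComponent = Stub()

    def "GET /message returns the message from the stubbed component"() {
        given:
        messageComponent.welcomeMessage('mrhaki') >> 'Hello mrhaki!'

        expect:
        mockMvc.perform(get('/message').param('name', 'mrhaki'))
               .andExpect(status().isOk())
               .andExpect(content().string('Hello mrhaki!'))
    }
}
```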
Written with Spock 1.3-groovy-2.5 and Spring Boot 2.1.8.RELEASE.
Testing asynchronous code needs some special treatment. With synchronous code we get results from invoking a method directly, and in our tests or specifications we can easily assert the value. But when we don't know when the results will be available after calling a method, we need to wait for the results. So in our specification we actually block until the results from the asynchronous code are available. One of the options Spock provides us to block our testing code and wait for the code to be finished is using the classes BlockingVariable and BlockingVariables. When we create a variable of type BlockingVariable we can set and get one value result. The get method will block until the value is available and we can write assertions on the value as we now know it is available. The set method is used to assign a value to the BlockingVariable, for example we can do this in a callback when the asynchronous method supports a callback parameter.
The BlockingVariable can only hold one value; with the other class, BlockingVariables, we can store multiple values. The class acts like a Map where we create a key with a value for storing the results from asynchronous calls. Each call to get the value for a given key will block until the result is available and ready to assert.
The following example code is a Java class with two methods, findTemperature and findTemperatures, that make asynchronous calls. The implementation of the methods use a so-called callback parameter that is used to set the results from invoking a service to get the temperature for a city:
To test our Java class we write the following specification where we use both BlockingVariable and BlockingVariables to wait for the asynchronous methods to be finished, so we can assert on the resulting values:
In a previous blog post we learned how to use BlockingVariable and BlockingVariables to test asynchronous code. Spock also provides PollingConditions as a way to test asynchronous code. The PollingConditions class has the methods eventually and within that accept a closure where we can write our assertions on the results of the asynchronous code execution. Spock will try to evaluate the conditions in the closure until they are true. By default the eventually method will retry for 1 second with a delay of 0.1 second between each retry. We can change this by setting the properties timeout, delay, initialDelay and factor of the PollingConditions class. For example, to define a maximum retry period of 5 seconds and set the initial delay to 0.5 seconds we create the following instance: new PollingConditions(timeout: 5, initialDelay: 0.5).
Instead of changing the PollingConditions properties for extending the timeout we can also use the method within and specify the timeout in seconds as the first argument. If the conditions can be evaluated correctly before the timeout has expired then the feature method of our specification will also finish earlier. The timeout is only the maximum time we want our feature method to run.
In the following example Java class we have the methods findTemperature and findTemperatures that will try to get the temperature for a given city on a new thread. The method getTemperature will return the result. The result can be null as long as the call to the WeatherService is not yet finished.
To test the class we write the following specification using PollingConditions:
We can use the @Ignore and @IgnoreRest annotations in our Spock specifications to not run the annotated specifications or features. With the @IgnoreIf annotation we can specify a condition that needs to evaluate to true to not run the feature or specification. The argument of the annotation is a closure. Inside the closure we can access three extra variables: properties (Java system properties), env (environment variables) and javaVersion.
In the following Spock specification we have a couple of features. Each feature has the @IgnoreIf annotation with different checks. We can use the extra variables, but we can also invoke our own methods in the closure argument for the annotation:
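A sketch of such a specification, using the variables mentioned above and a helper method:

```groovy
import spock.lang.IgnoreIf
import spock.lang.Specification

class IgnoreIfSpecification extends Specification {

    @IgnoreIf({ Boolean.valueOf(properties['spock.ignore.longRunning']) })
    def "long running feature, ignored via a Java system property"() {
        expect:
        true
    }

    @IgnoreIf({ env['SPOCK_IGNORE_LONG_RUNNING'] == 'true' })
    def "long running feature, ignored via an environment variable"() {
        expect:
        true
    }

    @IgnoreIf({ javaVersion < 1.8 })
    def "feature that needs at least Java 1.8"() {
        expect:
        true
    }

    @IgnoreIf({ IgnoreIfSpecification.windows() })
    def "feature that is ignored on Windows"() {
        expect:
        true
    }

    private static boolean windows() {
        System.properties['os.name'].toLowerCase().contains('windows')
    }
}
```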
When we run our specification with Java 1.8, do not set the Java system property spock.ignore.longRunning (or set the value to false) and do not set the environment variable SPOCK_IGNORE_LONG_RUNNING (or give it the value false), we can see that some features are ignored:
Now we run on Java 1.7 on the Windows operating system, set the Java system property spock.ignore.longRunning to the value true and give the environment variable SPOCK_IGNORE_LONG_RUNNING the value true. The resulting report shows the specifications that are ignored and those that are executed:
To ignore feature methods in our Spock specification we can use the annotation @Ignore. Any feature method or specification with this annotation is not invoked when we run the specification. With the annotation @IgnoreRest we indicate that feature methods that do not have this annotation must be ignored. So any method with the annotation is invoked, but the ones without it aren't. This annotation can only be applied to methods and not to a specification class.
In the next example we have a specification with two feature methods that will be executed and one that is ignored:
We can run this specification directly from the command line:
In a previous blog post we have seen the IgnoreIf extension. There is also a counterpart: the Requires extension. If we apply this extension to a feature method or specification class, then the method or whole class is executed only when the condition for the @Requires annotation is true. If the condition is false the method or specification is not executed. As the value for the @Requires annotation we must specify a closure. In the closure Spock adds some properties we can use for our conditions:
jvm can be used to check a Java version or compatibility.
sys returns the Java system properties.
env used to access environment variables.
os can be used to check for operating system names.
javaVersion has the Java version as a BigDecimal, e.g. 1.8.
In the following example we use the @Requires annotation with different conditions:
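A sketch of such a specification:

```groovy
import spock.lang.Requires
import spock.lang.Specification

class RequiresSpecification extends Specification {

    @Requires({ jvm.java8Compatible })
    def "run only on Java 8 or newer"() {
        expect:
        true
    }

    @Requires({ sys['user.language'] == 'en' })
    def "run only when the user language is English"() {
        expect:
        true
    }

    @Requires({ env['CI'] == 'true' })
    def "run only on a CI server"() {
        expect:
        true
    }

    @Requires({ os.linux })
    def "run only on Linux"() {
        expect:
        true
    }

    @Requires({ javaVersion >= 1.8 })
    def "run only when the Java version is at least 1.8"() {
        expect:
        true
    }
}
```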
If we have the same condition to be applied for all feature methods in a specification we can use the @Requires annotation at the class level:
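For example:

```groovy
import spock.lang.Requires
import spock.lang.Specification

@Requires({ os.windows })
class WindowsOnlySpecification extends Specification {

    def "feature that relies on Windows specific behaviour"() {
        expect:
        true
    }
}
```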
Including or Excluding Specifications Based On Annotations
One of the lesser known and documented features of Spock is the external Spock configuration file. In this file we can for example specify which specifications to include in or exclude from a test run. We can specify a class name (for example a base specification class, like DatabaseSpec) or an annotation. In this post we see how to use annotations to have some specifications run and others not.
The external Spock configuration file is actually a Groovy script file. We must specify a runner method with a closure argument where we configure basically the test runner. To include specification classes or methods with a certain annotation applied to them we configure the include property of the test runner. To exclude a class or method we use the exclude property. Because the configuration file is a Groovy script we can use everything Groovy has to offer, like conditional statements, println statements and more.
Spock looks for a file named SpockConfig.groovy in the classpath of the test execution and in the USER_HOME/.spock directory. We can also use the Java system property spock.configuration with a file name for the configuration file.
In the following example we first define a simple annotation Remote. This annotation can be applied to a class or method:
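A sketch of the annotation:

```groovy
import java.lang.annotation.ElementType
import java.lang.annotation.Retention
import java.lang.annotation.RetentionPolicy
import java.lang.annotation.Target

@Retention(RetentionPolicy.RUNTIME)
@Target([ElementType.TYPE, ElementType.METHOD])
@interface Remote {
}
```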
We write a simple Spock specification where we apply the Remote annotation to one of the methods:
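A sketch of the specification:

```groovy
import spock.lang.Specification

class WordRepositorySpec extends Specification {

    @Remote
    def "find words in the remote repository"() {
        expect:
        true   // A real specification would access a remote service here.
    }

    def "find words in the local repository"() {
        expect:
        true
    }
}
```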
Next we create a Spock configuration file:
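A sketch of the configuration file:

```groovy
runner {
    // Only run specifications and feature methods annotated with @Remote.
    include Remote
}
```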
When we run the WordRepositorySpec and our configuration file is on the classpath only the specifications with the @Remote annotation are executed. Let's apply this in a simple Gradle build file. In this case we save the configuration file as src/test/resources/RemoteSpockConfig.groovy, we create a new test task remoteTest and set the Java system property spock.configuration:
Now when we execute the Gradle test task all specifications are executed:
And when we run remoteTest only the specifications with the @Remote annotation are executed:
Spock is able to change the execution order of test methods in a specification. We can tell Spock to re-run failing methods before successful methods. And if we have multiple failing or successful tests, the fastest methods are run first, followed by the slower methods. This way, when we re-run the specification, we immediately see the failing methods and can stop the execution and fix the errors. We must set the property optimizeRunOrder in the runner configuration of the Spock configuration file. A Spock configuration file with the name SpockConfig.groovy can be placed in the classpath of our test execution or in our USER_HOME/.spock directory. We can also use the Java system property spock.configuration and assign the filename of our Spock configuration file.
In the following example we have a specification with different methods that can be successful or fail and have different durations when executed:
Let's run our test where there is no optimised run order. We see the methods are executed as defined in the specification:
Next we create a Spock configuration file with the following contents:
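A sketch of the configuration file:

```groovy
runner {
    optimizeRunOrder true
}
```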
If we re-run our specification and have this file in the classpath we already see the order of the methods has changed. The failing tests are at the top and the successful tests are at the bottom. The slowest test method is last:
Another re-run has optimised the order by running the slowest failing test after the other failing tests.
Spock keeps track of the failing and successful methods and their execution time in a file with the specification name in the USER_HOME/.spock/RunHistory directory. To reset the information we must delete the file from this directory.
If we write a specification for a specific class we can indicate that class with the @Subject annotation. This annotation is only for informational purposes, but can help in making sure we understand which class we are writing the specifications for. The annotation can either be used at class level or field level. If we use the annotation at class level we must specify the class or classes under test as argument for the annotation. If we apply the annotation to a field, the type of the field is used as the class under test. The field can be part of the class definition, but we can also apply the @Subject annotation to fields inside a feature method.
In the following example Spock specification we write a specification for the class Greet. The definition of the Greet class is also in the code listing. We use the @Subject annotation on the field greet to indicate this instance of the Greet class is the class we are testing here. The code also works without the @Subject annotation, but it adds more clarity to the specification.
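A sketch of the specification (the Greet implementation is an assumption):

```groovy
import spock.lang.Specification
import spock.lang.Subject

class Greet {
    List<String> salutations

    String greeting(final String name) {
        "${salutations.join(', ')} ${name}!"
    }
}

class GreetSpecification extends Specification {

    // The type of this field, Greet, is the class under test.
    @Subject
    private Greet greet = new Greet(salutations: ['Hi', 'Hello'])

    def "greeting should contain the name"() {
        expect:
        greet.greeting('mrhaki') == 'Hi, Hello mrhaki!'
    }
}
```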
Code written with Spock 0.7-groovy-2.0 and Groovy 2.3.7.
Sometimes we are working on a new feature in our code and we want to write a specification for it without really implementing the feature yet. To indicate that we know the specification will fail while we are implementing the feature, we can add the @PendingFeature annotation to our specification method. With this annotation Spock will still execute the test, but will set the status to ignored if the test fails. But if the test passes the status is set to failed. So when we have finished the feature we need to remove the annotation, and this way Spock will kindly remind us to do so.
In the following example specification we use the @PendingFeature annotation:
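A sketch of such a specification, assuming an upper method we still have to implement:

```groovy
import spock.lang.PendingFeature
import spock.lang.Specification

class UpperSpecification extends Specification {

    @PendingFeature
    def "value is transformed to upper case"() {
        expect:
        upper('spock') == 'SPOCK'
    }

    // The feature is not implemented yet: for now the value is returned unchanged.
    private String upper(final String value) {
        value
    }
}
```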
When we run our test in for example Gradle we get the following result:
Now let's implement the upper method:
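For example:

```groovy
private String upper(final String value) {
    value.toUpperCase()
}
```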
We run the test again and now we get a failing result although our implementation of the upper method is correct:
So this tells us the @PendingFeature is no longer needed. We can remove it and the specification will pass correctly.
Spock has some great features to write specifications or tests that are short and compact. One of them is the old() method. The old() method can only be used in a then: block. With this method we get the value a statement had before the when: block is executed.
Let's see this with a simple example. In the following specification we create a StringBuilder with an initial value. In the then: block we use the same initial value for the assertion:
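A sketch of such a specification:

```groovy
import spock.lang.Specification

class OldMethodSpecification extends Specification {

    def "appended value is added to the StringBuilder"() {
        given:
        final StringBuilder builder = new StringBuilder('Spock ')

        when:
        builder << appendValue

        then:
        builder.toString() == 'Spock ' + appendValue

        where:
        appendValue << ['rocks', 'is gr8', 'is fun']
    }
}
```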
If we want to change the initial value when we create the StringBuilder we must also change the assertion. We can refactor the feature method and show our intention of the specification better. We add the variable oldToString right after we have created the StringBuilder. We use this in the assertion.
But with Spock we can do one better. Instead of creating an extra variable we can use the old() method. In the assertion we replace the variable reference oldToString with old(builder.toString()). This actually means we want the value of builder.toString() BEFORE the when: block is executed. The assertion is now very clear and readable, and the intentions of the specification are obvious:
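The feature method now could look like this (the rest of the specification stays the same):

```groovy
def "appended value is added to the StringBuilder"() {
    given:
    final StringBuilder builder = new StringBuilder('Spock ')

    when:
    builder << appendValue

    then:
    builder.toString() == old(builder.toString()) + appendValue

    where:
    appendValue << ['rocks', 'is gr8', 'is fun']
}
```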
Let's change the specification a bit so we get some failures. Instead of adding the appendValue data variable unchanged to the StringBuilder we want to add a capitalized value.
If we run the specification we get assertion failures. In the following output we see such a failure and notice the value for the old() is shown correctly:
Note: If we use the old() method we might get an InternalSpockError exception when assertions fail. The error looks something like: org.spockframework.util.InternalSpockError: Missing value for expression "...". Re-ordering the assertion can help solve this. For example putting the old() method statement last. In Spock 1.0-SNAPSHOT this error doesn't occur.
For more information we can read Rob Fletcher's blog post about the old() method.
Spock has a lot of nice extensions we can use in our specifications. The AutoCleanup extension makes sure the close() method of an object is called each time a feature method is finished. We could also invoke the close() method from the cleanup method in our specification, but with the @AutoCleanup annotation it is easier and it immediately shows our intention. If the object we apply the annotation to doesn't have a close() method to invoke, we can specify the method name as the value for the annotation. Finally we can set the attribute quiet to true if we don't want to see any exceptions that are raised when the close() method (or the custom method that is specified) is invoked.
In the following example code we have a specification that is testing the WatchService implementation. The implementation also implements the Closeable interface, which means we can use the close() method to cleanup the object properly. We also have a custom class WorkDir with a delete() method that needs to be invoked.
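A sketch of such a specification (the WorkDir helper class is an assumption):

```groovy
import java.nio.file.FileSystems
import java.nio.file.WatchService

import spock.lang.AutoCleanup
import spock.lang.Specification

// Helper class with a delete() method instead of close().
class WorkDir {
    private final File dir

    WorkDir(final String name) {
        dir = new File(System.getProperty('java.io.tmpdir'), name)
        dir.mkdirs()
    }

    void delete() {
        dir.deleteDir()
    }
}

class AutoCleanupSpecification extends Specification {

    // close() is invoked automatically when a feature method is finished.
    @AutoCleanup
    private WatchService watchService = FileSystems.default.newWatchService()

    // WorkDir has no close() method, so we tell Spock to invoke delete().
    @AutoCleanup('delete')
    private WorkDir workDir = new WorkDir('spock-autocleanup')

    def "watch service and work directory are available"() {
        expect:
        watchService
        workDir
    }
}
```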
Creating Temporary Files And Directories With FileSystemFixture
If we write specifications where we need to use files and directories, we can use the @TempDir annotation on a File or Path instance variable. By using this annotation we make sure the file is created in the directory defined by the Java system property java.io.tmpdir. We could overwrite the temporary root directory using the Spock configuration file if we want, but the default should be okay for most situations. The @TempDir annotation can actually be used on any class that has a constructor with a File or Path argument. Since Spock 2.2 we can use the FileSystemFixture class provided by Spock. With this class we have a nice DSL to create directory structures and files in a simple manner. We can use the Groovy extensions to File and Path to also immediately create contents for the files. If we want to use the extensions to Path we must make sure we include org.apache.groovy:groovy-nio as a dependency on our test runtime classpath. The FileSystemFixture class also has the method copyFromClasspath that we can use to copy files and their contents from the classpath directly into our newly created directory structure.
In the following example specification we use FileSystemFixture to define a new directory structure in a temporary directory, but also in our project directory:
In order to use the Groovy extensions for java.nio.Path we must add the groovy-nio module to the test classpath. For example we can do this if we use Gradle by using the JVM TestSuite plugin extension:
Written with Spock 2.3-groovy-4.0 and Gradle 8.0.2.
Testing classes that work with date calculations based on the current date and time (now) can be difficult. First of all we must make sure our class under test accepts a java.time.Clock instance. This allows us to provide a specific Clock instance in our tests where we can define, for example, a fixed value, so our tests don't break when the actual date and time changes. But this can still not be enough for classes that behave differently based on the value returned for now. The Clock instances in Java are immutable, so it is not possible to change the date or time of a Clock instance.
In Spock 2.0 we can use the new MutableClock class in our specifications to have a Clock that can be used to go forward or backward in time on the same Clock instance. We can create a MutableClock and pass it to the class under test. We can test the class with the initial date and time of the Clock object, then change the date and time for the clock and test the class again without having to create a new instance of the class under test. This is handy in situations like a queue implementation, where a message delivery date could be used to see if messages need to be delivered or not. By changing the date and time of the clock that is passed to the queue implementation we can write specifications that can check the functionality of the queue instance.
The MutableClock class has some useful methods to change the time. We can for example use the instant property to assign a new Instant. Or we can add or subtract a duration from the initial date and time using the + and - operators. We can specify the temporal amount that must be applied when we use the ++ or -- operators. Finally, we can use the adjust method with a single argument closure to use a TemporalAdjuster to change the date or time. This last method is useful if we want to specify a date adjustment that is using months and years.
In the following example Java code we have the class WesternUnion that accepts letters with a delivery date and message. The letter is stored, and when the deliver method is invoked we remove the letter from our class if the delivery date is after the current date and time.
In order to test this class we want to pass a Clock instance where we can mutate the clock, so we can test whether the deliver method works when we invoke it multiple times. In the next specification we test this with different usages of MutableClock. Each specification method uses the MutableClock in a different way to test the WesternUnion class:
When we write a feature method in our Spock specification to test our class we might run into long running methods that are invoked. We can specify a maximum time we want to wait for a method. If the time spent by the method is more than the maximum time our feature method must fail. Spock has the @Timeout annotation to define this. We can apply the annotation to our specification or to feature methods in the specification. We specify the timeout value as argument for the @Timeout annotation. Seconds are the default time unit that is used. If we want to specify a different time unit we can use the annotation argument unit and use constants from java.util.concurrent.TimeUnit to set a value.
In the following example specification we set a general timeout of 1 second for the whole specification. For two methods we override this default timeout with their own value and unit:
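A sketch of such a specification:

```groovy
import java.util.concurrent.TimeUnit

import spock.lang.Specification
import spock.lang.Timeout

// Default timeout of 1 second for every feature method in this specification.
@Timeout(1)
class TimeoutSpecification extends Specification {

    def "feature method must finish within the default timeout of 1 second"() {
        expect:
        true
    }

    @Timeout(5)
    def "slower feature method gets 5 seconds"() {
        expect:
        true
    }

    @Timeout(value = 200, unit = TimeUnit.MILLISECONDS)
    def "fast feature method must finish within 200 milliseconds"() {
        expect:
        true
    }
}
```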
Spock has the extension ConfineMetaClassChanges that can be used to confine meta class changes to a feature method or specification. We must apply the annotation @ConfineMetaClassChanges to a feature method or to a whole specification to use this extension. Spock replaces the original meta class with a new one before a feature method is executed. After execution of the feature method the original meta class is used again. We could do this by hand using the setup and setupSpec methods and their counterparts cleanup and cleanupSpec, but using this extension is so much easier. We must specify the class or classes whose meta class changes need to be confined as the value for the annotation.
In the following example we add a new method asPirate to the String class. We apply @ConfineMetaClassChanges to a method, which means the new method is only available inside that feature method.
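A sketch of such a specification:

```groovy
import spock.lang.Specification
import spock.util.mop.ConfineMetaClassChanges

class PirateSpecification extends Specification {

    @ConfineMetaClassChanges(String)
    def "add asPirate() to the String meta class"() {
        setup:
        String.metaClass.asPirate = { -> "Yarrr, ${delegate}!" }

        expect:
        'mrhaki'.asPirate() == 'Yarrr, mrhaki!'
    }

    def "asPirate() is no longer available in this feature method"() {
        when:
        'mrhaki'.asPirate()

        then:
        thrown(MissingMethodException)
    }
}
```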
In the following example code we apply @ConfineMetaClassChanges to the whole class. Now we see that the new method asPirate is still available in a feature method other than the one that defined it.
This post is very much inspired by this blog post of my colleague Albert van Veen.
If we need to add a Java system property or change the value of a Java system property inside our specification, then the change is kept as long as the JVM is running. We can make sure that changes to Java system properties are restored after a feature method has run. Spock offers the RestoreSystemProperties extension that saves the current Java system properties before a method is run and restores the values after the method is finished. We use the extension with the @RestoreSystemProperties annotation. The annotation can be applied at specification level or per feature method.
In the following example we see that changes to the Java system properties in the first method are undone again in the second method:
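A sketch of such a specification:

```groovy
import spock.lang.Specification
import spock.util.environment.RestoreSystemProperties

class RestoreSystemPropertiesSpecification extends Specification {

    @RestoreSystemProperties
    def "change a Java system property inside the feature method"() {
        given:
        System.setProperty('spock.sample', 'changed')

        expect:
        System.getProperty('spock.sample') == 'changed'
    }

    def "the change to the Java system property is undone again"() {
        expect:
        System.getProperty('spock.sample') == null
    }
}
```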