Parallel Blocks
In this section we explain how to verify parallel algorithms by creating parallel blocks in PVL. First, we give an example of a simple method using a parallel block. Then, we discuss a more complex example with a barrier inside the parallel block. A barrier is used to synchronize the threads inside a parallel block.
This example shows a simple method that adds two arrays in parallel and stores the result in another array:
/* 1 */ context_everywhere a != null && b != null && c != null;
/* 2 */ context_everywhere a.length == size && b.length == size && c.length == size;
/* 3 */ context (\forall* int i; i >= 0 && i < size; Perm(a[i], 1\2));
/* 4 */ context (\forall* int i; i >= 0 && i < size; Perm(b[i], 1\2));
/* 5 */ context (\forall* int i; i >= 0 && i < size; Perm(c[i], 1));
/* 6 */ ensures (\forall int i; i >= 0 && i < size; c[i] == a[i] + b[i]);
/* 7 */ void Add(int[] a, int[] b, int[] c, int size){
/* 8 */
/* 9 */ par threads (int tid = 0 .. size)
/* 10 */ context Perm(a[tid], 1\2) ** Perm(b[tid], 1\2) ** Perm(c[tid], 1);
/* 11 */ ensures c[tid] == a[tid] + b[tid];
/* 12 */ {
/* 13 */ c[tid] = a[tid] + b[tid];
/* 14 */ }
/* 15 */ }
In this method there is a parallel block (lines 9-14) named "threads". The keyword "par" is used to define a parallel block, and the name that follows it names the block. Moreover, we define the number of threads in the parallel block as well as a name for the thread identifier. In this example, we have "size" threads in the range from 0 to "size-1", and "tid" is used as the thread identifier to refer to each thread.
In addition to the specification of the method (lines 1-6), we add a thread-level specification to the parallel block (lines 10-11). The precondition of the method indicates that we have read permission over all locations in arrays "a" and "b" and write permission over all locations in array "c" (lines 3-5). In the parallel block, we specify that each thread ("tid") has read permission on its own location in arrays "a" and "b" and write permission on its own location in array "c" (line 10). After termination of the parallel block, as postcondition, (1) we keep the same permissions (line 10) and (2) each location in array "c" contains the sum of the two corresponding locations in arrays "a" and "b" (line 11). From the postcondition of the parallel block, we can derive the postcondition of the method, using universal quantifiers over all locations in the arrays (lines 3-6).
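As a side note, the "context" keyword used in these contracts is shorthand for an identical pre- and postcondition. For instance, the permission clause of the parallel block could equivalently be written with an explicit "requires"/"ensures" pair:

```
// `context P;` abbreviates `requires P; ensures P;`, so this clause:
context Perm(a[tid], 1\2) ** Perm(b[tid], 1\2) ** Perm(c[tid], 1);
// is equivalent to:
requires Perm(a[tid], 1\2) ** Perm(b[tid], 1\2) ** Perm(c[tid], 1);
ensures  Perm(a[tid], 1\2) ** Perm(b[tid], 1\2) ** Perm(c[tid], 1);
```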
Next, we show an example of a parallel block that uses a barrier to synchronize threads:
/* 1 */ context_everywhere array != null && array.length == size;
/* 2 */ requires (\forall* int i; i >= 0 && i < size; Perm(array[i], write));
/* 3 */ ensures (\forall* int i; i >= 0 && i < size; Perm(array[i], write));
/* 4 */ ensures (\forall int i; i >= 0 && i < size; (i != size-1 ==> array[i] == \old(array[i+1])) &&
/* 5 */ (i == size-1 ==> array[i] == \old(array[0])) );
/* 6 */ void leftRotation(int[] array, int size){
/* 7 */
/* 8 */ par threads (int tid = 0 .. size)
/* 9 */ requires tid != size-1 ==> Perm(array[tid+1], write);
/* 10 */ requires tid == size-1 ==> Perm(array[0], write);
/* 11 */ ensures Perm(array[tid], write);
/* 12 */ ensures tid != size-1 ==> array[tid] == \old(array[tid+1]);
/* 13 */ ensures tid == size-1 ==> array[tid] == \old(array[0]);
/* 14 */ {
/* 15 */ int temp;
/* 16 */ if (tid != size-1) {
/* 17 */ temp = array[tid+1];
/* 18 */ } else {
/* 19 */ temp = array[0];
/* 20 */ }
/* 21 */
/* 22 */ barrier(threads)
/* 23 */ requires tid != size-1 ==> Perm(array[tid+1], write);
/* 24 */ requires tid == size-1 ==> Perm(array[0], write);
/* 25 */ ensures Perm(array[tid], write);
/* 26 */ { /* Automatic proof */ }
/* 27 */
/* 28 */ array[tid] = temp;
/* 29 */ }
/* 30 */ }
This example illustrates a method named "leftRotation" that rotates the elements of an array to the left. For instance, for "size" 4, the array [1, 2, 3, 4] becomes [2, 3, 4, 1]. In this example, we again have "size" threads in the range from 0 to "size-1", and "tid" is used as the thread identifier. Inside the parallel block, each thread ("tid") stores the value of its right neighbor in a temporary location (i.e., "temp"), except thread "size-1", which stores the first element of the array (lines 15-20). Then each thread synchronizes at the barrier (line 22). The keyword "barrier", with the name of the parallel block as an argument (e.g., "threads" in the example), is used to define a barrier in PVL. After the barrier, each thread writes the value it read into its own location at index "tid" in the array (line 28).
To verify this method in VerCors, we annotate the barrier, in addition to the method and the parallel block. As precondition of the method, we have write permission over all locations in the array (line 2). At the beginning of the parallel block, each thread reads from its right neighbor, except thread "size-1", which reads from location 0 (lines 16-20). Therefore, as precondition of the parallel block, each thread obtains write permission on the location it reads from (lines 9-10). Since after the barrier each thread ("tid") writes into its own location at index "tid", we redistribute the permissions in the barrier such that each thread has write permission on its own location (lines 23-25). When a thread reaches the barrier, it has to fulfill the barrier preconditions, and then it may assume the barrier postconditions. Moreover, the barrier postconditions must follow from the barrier preconditions.
As postcondition of the parallel block, (1) each thread has write permission on its own location, which follows from the postcondition of the barrier (line 11), and (2) the elements are truly shifted to the left (lines 12-13). From the postcondition of the parallel block, we can establish the same postcondition for the method (lines 3-5).
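To see why the barrier is essential, consider a sketch of the same parallel block with the barrier removed. Each thread then still holds permission only on its neighbor's location, not on its own, so the final write cannot be verified:

```
// Sketch, for illustration only: leftRotation's block WITHOUT the barrier.
par threads (int tid = 0 .. size)
  requires tid != size-1 ==> Perm(array[tid+1], write);
  requires tid == size-1 ==> Perm(array[0], write);
{
  int temp;
  if (tid != size-1) { temp = array[tid+1]; }
  else { temp = array[0]; }
  // No barrier here: thread tid never obtains permission on array[tid],
  // so VerCors rejects this write as an insufficient-permission error.
  array[tid] = temp;
}
```

The barrier is thus doing two jobs at once: it synchronizes the threads, and its contract redistributes the permissions so that each thread may write to its own location.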
This chapter introduces syntax for barriers. There is a caveat to using barriers, which we explain first so that it is not missed. Barrier syntax in VerCors has two forms. Form 1:
barrier(barrierName) {
requires P;
ensures Q;
}
Form 2:
barrier(barrierName)
requires P;
ensures Q;
{ }
Notice how in form 1, the contract is inside the curly braces. In form 2, the contract resides between the barrier declaration and the curly braces.
There is a big difference between these two forms. In form 1, the contract of the barrier is not actually checked by VerCors. This means that if you were to add "ensures false;" to your barrier using the syntax of form 1, VerCors would not indicate that this is a problematic contract. For form 2, however, the contract is actually checked. Therefore, when using barriers, form 2 should be preferred. Form 1 should only be used if there is a proof external to VerCors.
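To make the caveat concrete, here is a minimal sketch (assuming a parallel block named "threads" is in scope) of the same inconsistent contract in both forms:

```
// Form 1: the contract is NOT checked, so this inconsistent
// postcondition silently passes verification:
barrier(threads) {
  ensures false;
}

// Form 2: the contract IS checked, and VerCors reports that
// the postcondition `false` cannot be established:
barrier(threads)
  ensures false;
{ }
```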
In the previous examples, we defined a parallel block of threads to work on one-dimensional arrays. It is also possible to define two-dimensional thread layouts to work on two-dimensional arrays. As an example we define a two-dimensional thread layout to transpose a matrix in parallel:
context_everywhere inp != null && out != null;
context_everywhere inp.length == size && out.length == size;
context_everywhere (\forall int i; 0 <= i && i < size; inp[i].length == size && out[i].length == size);
context (\forall* int i; 0 <= i && i < size;
(\forall* int j; 0 <= j && j < size; Perm(inp[i][j], read)));
context (\forall* int i; 0 <= i && i < size;
(\forall* int j; 0 <= j && j < size; Perm(out[i][j], write)));
ensures (\forall int i; 0 <= i && i < size;
(\forall int j; 0 <= j && j < size; out[i][j] == inp[j][i]));
void transpose(int[][] inp, int[][] out, int size){
par threadX (int tidX = 0 .. size)
context (\forall* int i; i >= 0 && i < size; Perm(inp[i][tidX], read));
context (\forall* int i; i >= 0 && i < size; Perm(out[tidX][i], write));
ensures (\forall int i; i >= 0 && i < size; out[tidX][i] == inp[i][tidX]);
{
par threadY (int tidY = 0 .. size)
context Perm(inp[tidY][tidX], read);
context Perm(out[tidX][tidY], write);
ensures out[tidX][tidY] == inp[tidY][tidX];
{
out[tidX][tidY] = inp[tidY][tidX];
}
}
}
As we can see, defining nested parallel blocks allows us to define thread layouts in multiple dimensions. We can simplify the above example syntactically into one parallel block:
context_everywhere inp != null && out != null;
context_everywhere inp.length == size && out.length == size;
context_everywhere (\forall int i; 0 <= i && i < size; inp[i].length == size && out[i].length == size);
context (\forall* int i; 0 <= i && i < size;
(\forall* int j; 0 <= j && j < size; Perm(inp[i][j], read)));
context (\forall* int i; 0 <= i && i < size;
(\forall* int j; 0 <= j && j < size; Perm(out[i][j], write)));
ensures (\forall int i; 0 <= i && i < size;
(\forall int j; 0 <= j && j < size; out[i][j] == inp[j][i]));
void transpose(int[][] inp, int[][] out, int size){
par threadXY (int tidX = 0 .. size, int tidY = 0 .. size)
context Perm(inp[tidY][tidX], read);
context Perm(out[tidX][tidY], write);
ensures out[tidX][tidY] == inp[tidY][tidX];
{
out[tidX][tidY] = inp[tidY][tidX];
}
}
There might be scenarios in which we have multiple parallel blocks and all threads in each parallel block work on disjoint memory locations. In this case we can define multiple parallel blocks that run simultaneously. That means, the threads in each parallel block run independently of the threads in the other parallel blocks. Below is an example of such a scenario:
/* 1 */ context_everywhere a != null && b != null && c != null;
/* 2 */ context_everywhere a.length == size && b.length == size && c.length == size;
/* 3 */ context (\forall* int i; i >= 0 && i < size; Perm(a[i], write));
/* 4 */ context (\forall* int i; i >= 0 && i < size; Perm(b[i], write));
/* 5 */ context (\forall* int i; i >= 0 && i < size; Perm(c[i], write));
/* 6 */ ensures (\forall int i; i >= 0 && i < size; a[i] == \old(a[i]) + 1 && b[i] == \old(b[i]) + 1 &&
/* 7 */ c[i] == \old(c[i]) + 1);
/* 8 */ void simPar(int[] a, int[] b, int[] c, int size){
/* 9 */
/* 10 */ par thread1 (int tid1 = 0 .. size)
/* 11 */ context Perm(a[tid1], write);
/* 12 */ ensures a[tid1] == \old(a[tid1]) + 1;
/* 13 */ {
/* 14 */ a[tid1] = a[tid1] + 1;
/* 15 */ } and
/* 16 */ thread2 (int tid2 = 0 .. size)
/* 17 */ context Perm(b[tid2], write);
/* 18 */ ensures b[tid2] == \old(b[tid2]) + 1;
/* 19 */ {
/* 20 */ b[tid2] = b[tid2] + 1;
/* 21 */ } and
/* 22 */ thread3 (int tid3 = 0 .. size)
/* 23 */ context Perm(c[tid3], write);
/* 24 */ ensures c[tid3] == \old(c[tid3]) + 1;
/* 25 */ {
/* 26 */ c[tid3] = c[tid3] + 1;
/* 27 */ }
/* 28 */ }
As we can see, we use "and" between the parallel blocks (lines 15 and 21) to define simultaneous parallel blocks. This construct can also be used to run blocks of plain instructions simultaneously. In that case, we do not define threads in the parallel blocks, but the par blocks still run simultaneously. Below is an example of this situation, where "a", "b" and "c" are objects that contain a field "val":
class HasVal {
int val;
}
class Main {
context Perm(a.val, write) ** Perm(b.val, write) ** Perm(c.val, write);
ensures a.val == \old(a.val) + 1 && b.val == \old(b.val) + 1 && c.val == \old(c.val) + 1;
void IncPar(HasVal a, HasVal b, HasVal c){
par
context Perm(a.val, write);
ensures a.val == \old(a.val) + 1;
{
a.val = a.val + 1;
} and
context Perm(b.val, write);
ensures b.val == \old(b.val) + 1;
{
b.val = b.val + 1;
} and
context Perm(c.val, write);
ensures c.val == \old(c.val) + 1;
{
c.val = c.val + 1;
}
}
}
In the above example, we increase the "val" of each object by one in parallel. As we can see, in this case the parallel blocks have no names and no thread identifiers.
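Note that the blocks composed with "and" must operate on disjoint locations: the permissions required by the simultaneous blocks are combined, and the total for any one location may not exceed a full write permission. As a hypothetical sketch (not part of the example above) of what goes wrong otherwise:

```
// Hypothetical: both blocks demand write permission on the SAME field.
// Together they require more than a full permission on a.val,
// so this contract can never be satisfied and verification fails.
par
  context Perm(a.val, write);
  ensures a.val == \old(a.val) + 1;
{
  a.val = a.val + 1;
} and
  context Perm(a.val, write);  // conflicts with the block above
  ensures a.val == \old(a.val) + 2;
{
  a.val = a.val + 2;
}
```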