# Relations
## 4.1. Ordered Pairs and Cartesian Products
Section 4.1 of *How To Prove It* defines the *Cartesian product* $A \times B$ of two sets $A$ and $B$ to be the set of all ordered pairs $(a, b)$, where $a \in A$ and $b \in B$. However, in Lean, Cartesian product is an operation on *types*, not sets. If `A` and `B` are types, then `A × B` is the type of ordered pairs `(a, b)`, where `a` has type `A` and `b` has type `B`. (To enter the symbol `×` in Lean, type `\times` or `\x`.) In other words, if you have `a : A` and `b : B`, then `(a, b)` is an object of type `A × B`. There is also notation for the first and second coordinates of an ordered pair. If `p` has type `A × B`, then `p.fst` is the first coordinate of `p`, and `p.snd` is the second coordinate. You can also use the notation `p.1` for the first coordinate of `p` and `p.2` for the second coordinate. This means that `p = (p.fst, p.snd) = (p.1, p.2)`.
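For example, here is a small illustration of these notations (this example does not appear in *HTPI*; the pair `(3, 7)` is just a made-up value):
```lean
#check ((3, 7) : Int × Int)      --(3, 7) : Int × Int
#eval ((3, 7) : Int × Int).fst   --3
#eval ((3, 7) : Int × Int).2     --7

--In current versions of Lean, p = (p.1, p.2) holds definitionally,
--so it can be proven with rfl.
example {A B : Type} (p : A × B) : p = (p.1, p.2) := by rfl
```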
## 4.2. Relations
Section 4.2 of *HTPI* defines a *relation from $A$ to $B$* to be a subset of $A \times B$. In other words, if $R$ is a relation from $A$ to $B$, then $R$ is a set whose elements are ordered pairs $(a, b)$, where $a \in A$ and $b \in B$. We will see in the next section that in Lean, it is convenient to use a somewhat different definition of relations. Nevertheless, we will take some time in this section to study sets of ordered pairs. If `A` and `B` are types, and `R` has type `Set (A × B)`, then `R` is a set whose elements are ordered pairs `(a, b)`, where `a` has type `A` and `b` has type `B`.
Section 4.2 of *HTPI* discusses several concepts concerning relations. Here is how these concepts are defined in *HTPI* (*HTPI* p. 183):
::: {.ndfn arguments="Definition 4.2.3"}
Suppose $R$ is a relation from $A$ to $B$. Then the *domain* of $R$ is the set
::: {.quote}
$\text{Dom}(R) = \{a \in A \mid \exists b \in B((a, b) \in R)\}$.
:::
The *range* of $R$ is the set
::: {.quote}
$\text{Ran}(R) = \{b \in B \mid \exists a \in A((a, b) \in R)\}$.
:::
The *inverse* of $R$ is the relation $R^{-1}$ from $B$ to $A$ defined as follows:
::: {.quote}
$R^{-1} = \{(b, a) \in B \times A \mid (a, b) \in R\}$.
:::
Finally, suppose $R$ is a relation from $A$ to $B$ and $S$ is a relation from $B$ to $C$. Then the *composition* of $S$ and $R$ is the relation $S \circ R$ from $A$ to $C$ defined as follows:
::: {.quote}
$S \circ R = \{(a, c) \in A \times C \mid \exists b \in B((a, b) \in R \text{ and } (b, c) \in S)\}$.
:::
:::
There are several examples in *HTPI* that illustrate these definitions. We will focus here on seeing how to work with these concepts in Lean.
We can write corresponding definitions in Lean as follows:
```lean
def Dom {A B : Type} (R : Set (A × B)) : Set A :=
{a : A | ∃ (b : B), (a, b) ∈ R}
def Ran {A B : Type} (R : Set (A × B)) : Set B :=
{b : B | ∃ (a : A), (a, b) ∈ R}
def inv {A B : Type} (R : Set (A × B)) : Set (B × A) :=
{(b, a) : B × A | (a, b) ∈ R}
def comp {A B C : Type}
(S : Set (B × C)) (R : Set (A × B)) : Set (A × C) :=
{(a, c) : A × C | ∃ (x : B), (a, x) ∈ R ∧ (x, c) ∈ S}
```
Definitions in Lean are introduced with the keyword `def`. In the definition of `Dom`, we have declared that `A` and `B` are implicit arguments and `R` is an explicit argument. That means that, in a Lean file containing these definitions, if we have `R : Set (A × B)`, then we can just write `Dom R` for the domain of `R`, and Lean will figure out for itself what `A` and `B` are. After the list of arguments there is a colon and then the type of `Dom R`, which is `Set A`. This is followed by `:=` and then the definition of `Dom R`. The definition says that `Dom R` is the set of all objects `a` of type `A` such that there is some `b` of type `B` with `(a, b) ∈ R`. This is a direct translation, into Lean's type-theory language, of the first part of Definition 4.2.3. The other three definitions are similar; they define `Ran R` to be the range of `R`, `inv R` to be the inverse of `R`, and `comp S R` to be the composition of `S` and `R`.
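As a quick check (these examples are not from *HTPI*; they just assume the definitions above are in scope), Lean infers the implicit arguments from the type of `R`, and `Dom R` unfolds to exactly the set described above:
```lean
--Lean infers A = Int and B = Int from the type of R
example (R : Set (Int × Int)) : Set Int := Dom R

--Dom R is definitionally the set of first coordinates of pairs in R
example (R : Set (Int × Int)) :
    Dom R = {a : Int | ∃ (b : Int), (a, b) ∈ R} := by rfl
```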
Here is the main theorem about these concepts, as stated in *HTPI* (*HTPI* p. 187):
::: {.nthm arguments="Theorem 4.2.5"}
Suppose $R$ is a relation from $A$ to $B$, $S$ is a relation from $B$ to $C$, and $T$ is a relation from $C$ to $D$. Then:
1. $(R^{-1})^{-1} = R$.
2. $\mathrm{Dom}(R^{-1}) = \mathrm{Ran}(R)$.
3. $\mathrm{Ran}(R^{-1}) = \mathrm{Dom}(R)$.
4. $T \circ (S \circ R) = (T \circ S) \circ R$.
5. $(S \circ R)^{-1} = R^{-1} \circ S^{-1}$.
:::
All five parts of this theorem follow directly from the definitions of the relevant concepts. In fact, in the first three parts, Lean recognizes the two sides of the equation as being definitionally equal, and therefore the tactic `rfl` proves those parts:
```lean
theorem Theorem_4_2_5_1 {A B : Type}
(R : Set (A × B)) : inv (inv R) = R := by rfl
theorem Theorem_4_2_5_2 {A B : Type}
(R : Set (A × B)) : Dom (inv R) = Ran R := by rfl
theorem Theorem_4_2_5_3 {A B : Type}
(R : Set (A × B)) : Ran (inv R) = Dom R := by rfl
```
The fourth part will take a little more work to prove. We start the proof like this:
```lean
theorem Theorem_4_2_5_4 {A B C D : Type}
(R : Set (A × B)) (S : Set (B × C)) (T : Set (C × D)) :
comp T (comp S R) = comp (comp T S) R := by
apply Set.ext
fix (a, d) : A × D
**done::
```
After the `apply Set.ext` tactic, the goal is
::: {.quote}
`∀ (x : A × D), x ∈ comp T (comp S R) ↔ x ∈ comp (comp T S) R`
:::
The next step should be to introduce an arbitrary object of type `A × D`. We could just call this object `x`, but Lean lets us use a shortcut here. An object of type `A × D` must have the form of an ordered pair, where the first coordinate has type `A` and the second has type `D`. So Lean lets us write it as an ordered pair right away. That's what we've done in the second step, `fix (a, d) : A × D`. This tactic introduces two new variables into the proof, `a : A` and `d : D`. (The proof in *HTPI* uses a similar shortcut. And we used a similar shortcut in the definitions of `inv R` and `comp S R`, where the elements of these sets were written as ordered pairs.)
Here is the complete proof.
```lean
theorem Theorem_4_2_5_4 {A B C D : Type}
(R : Set (A × B)) (S : Set (B × C)) (T : Set (C × D)) :
comp T (comp S R) = comp (comp T S) R := by
apply Set.ext
fix (a, d) : A × D
apply Iff.intro
· -- (→)
assume h1 : (a, d) ∈ comp T (comp S R)
--Goal : (a, d) ∈ comp (comp T S) R
define --Goal : ∃ (x : B), (a, x) ∈ R ∧ (x, d) ∈ comp T S
define at h1 --h1 : ∃ (x : C), (a, x) ∈ comp S R ∧ (x, d) ∈ T
obtain (c : C) (h2 : (a, c) ∈ comp S R ∧ (c, d) ∈ T) from h1
have h3 : (a, c) ∈ comp S R := h2.left
define at h3 --h3 : ∃ (x : B), (a, x) ∈ R ∧ (x, c) ∈ S
obtain (b : B) (h4 : (a, b) ∈ R ∧ (b, c) ∈ S) from h3
apply Exists.intro b --Goal : (a, b) ∈ R ∧ (b, d) ∈ comp T S
apply And.intro h4.left --Goal : (b, d) ∈ comp T S
define --Goal : ∃ (x : C), (b, x) ∈ S ∧ (x, d) ∈ T
show ∃ (x : C), (b, x) ∈ S ∧ (x, d) ∈ T from
Exists.intro c (And.intro h4.right h2.right)
done
· -- (←)
assume h1 : (a, d) ∈ comp (comp T S) R
define; define at h1
obtain (b : B) (h2 : (a, b) ∈ R ∧ (b, d) ∈ comp T S) from h1
have h3 : (b, d) ∈ comp T S := h2.right
define at h3
obtain (c : C) (h4 : (b, c) ∈ S ∧ (c, d) ∈ T) from h3
apply Exists.intro c
apply And.intro _ h4.right
define
show ∃ (x : B), (a, x) ∈ R ∧ (x, c) ∈ S from
Exists.intro b (And.intro h2.left h4.left)
done
done
```
Of course, if you have trouble reading this proof, you can enter it into Lean and see how the tactic state changes over the course of the proof.
Here is a natural way to start the proof of part 5:
```lean
theorem Theorem_4_2_5_5 {A B C : Type}
(R : Set (A × B)) (S : Set (B × C)) :
inv (comp S R) = comp (inv R) (inv S) := by
apply Set.ext
fix (c, a) : C × A
apply Iff.intro
· -- (→)
assume h1 : (c, a) ∈ inv (comp S R)
--Goal : (c, a) ∈ comp (inv R) (inv S)
define at h1 --h1 : ∃ (x : B), (a, x) ∈ R ∧ (x, c) ∈ S
define --Goal : ∃ (x : B), (c, x) ∈ inv S ∧ (x, a) ∈ inv R
obtain (b : B) (h2 : (a, b) ∈ R ∧ (b, c) ∈ S) from h1
apply Exists.intro b --Goal : (c, b) ∈ inv S ∧ (b, a) ∈ inv R
**done::
· -- (←)
**done::
done
```
After the tactics `apply Set.ext` and `fix (c, a) : C × A`, the goal is `(c, a) ∈ inv (comp S R) ↔ (c, a) ∈ comp (inv R) (inv S)`. For the proof of the left-to-right direction, we assume `h1 : (c, a) ∈ inv (comp S R)`, and we must prove `(c, a) ∈ comp (inv R) (inv S)`. The definition of `h1` is an existential statement, so we apply existential instantiation to obtain `b : B` and `h2 : (a, b) ∈ R ∧ (b, c) ∈ S`. The definition of the goal is also an existential statement, and after the tactic `apply Exists.intro b`, the goal is `(c, b) ∈ inv S ∧ (b, a) ∈ inv R`. It looks like this goal will follow easily from `h2`, using the definitions of the inverses of `S` and `R`.
One way to write out these definitions would be to use the tactics `define : (c, b) ∈ inv S` and `define : (b, a) ∈ inv R`. But we're going to use this example to illustrate another way to proceed. To use this alternative method, we'll need to prove a preliminary theorem before proving part 5 of Theorem 4.2.5:
```lean
theorem inv_def {A B : Type} (R : Set (A × B)) (a : A) (b : B) :
(b, a) ∈ inv R ↔ (a, b) ∈ R := by rfl
```
Now, any time we have a relation `R : Set (A × B)` and objects `a : A` and `b : B`, the expression `inv_def R a b` will be a proof of the statement `(b, a) ∈ inv R ↔ (a, b) ∈ R`. (Note that `A` and `B` are implicit arguments and don't need to be specified.) And that means that the tactic `rewrite [inv_def R a b]` will change `(b, a) ∈ inv R` to `(a, b) ∈ R`. In fact, as we've seen before, you can just write `rewrite [inv_def]`, and Lean will figure out how to apply the theorem `inv_def` to rewrite some part of the goal.
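For example, here is a small made-up proof (not from *HTPI*) in which `inv_def` is used as a rewriting rule:
```lean
example {A B : Type} (R : Set (A × B)) (a : A) (b : B)
    (h : (a, b) ∈ R) : (b, a) ∈ inv R := by
  rewrite [inv_def]          --Goal : (a, b) ∈ R
  show (a, b) ∈ R from h
  done
```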
Returning to our proof of part 5 of Theorem 4.2.5, recall that after the step `apply Exists.intro b`, the goal is `(c, b) ∈ inv S ∧ (b, a) ∈ inv R`. Rather than using the `define` tactic to write out the definitions of the inverses, we'll use the tactic `rewrite [inv_def, inv_def]`. Why do we list `inv_def` twice in the `rewrite` tactic? When we ask Lean to use the theorem `inv_def` as a rewriting rule, it figures out that `inv_def S b c` is a proof of the statement `(c, b) ∈ inv S ↔ (b, c) ∈ S`, which can be used to rewrite the left half of the goal. To rewrite the right half, we need a different application of the `inv_def` theorem, `inv_def R a b`. So we have to ask Lean to apply the theorem a second time. After the `rewrite` tactic, the goal is `(b, c) ∈ S ∧ (a, b) ∈ R`, which will follow easily from `h2`.
The rest of the proof is straightforward. Here is the complete proof.
```lean
theorem Theorem_4_2_5_5 {A B C : Type}
(R : Set (A × B)) (S : Set (B × C)) :
inv (comp S R) = comp (inv R) (inv S) := by
apply Set.ext
fix (c, a) : C × A
apply Iff.intro
· -- (→)
assume h1 : (c, a) ∈ inv (comp S R)
--Goal : (c, a) ∈ comp (inv R) (inv S)
define at h1 --h1 : ∃ (x : B), (a, x) ∈ R ∧ (x, c) ∈ S
define --Goal : ∃ (x : B), (c, x) ∈ inv S ∧ (x, a) ∈ inv R
obtain (b : B) (h2 : (a, b) ∈ R ∧ (b, c) ∈ S) from h1
apply Exists.intro b --Goal : (c, b) ∈ inv S ∧ (b, a) ∈ inv R
rewrite [inv_def, inv_def] --Goal : (b, c) ∈ S ∧ (a, b) ∈ R
show (b, c) ∈ S ∧ (a, b) ∈ R from And.intro h2.right h2.left
done
· -- (←)
assume h1 : (c, a) ∈ comp (inv R) (inv S)
define at h1
define
obtain (b : B) (h2 : (c, b) ∈ inv S ∧ (b, a) ∈ inv R) from h1
apply Exists.intro b
rewrite [inv_def, inv_def] at h2
show (a, b) ∈ R ∧ (b, c) ∈ S from And.intro h2.right h2.left
done
done
```
By the way, an alternative way to complete both directions of this proof would have been to apply the commutativity of "and". See if you can guess the name of that theorem (you can use `#check` to confirm your guess) and apply it as a third rewriting rule in the `rewrite` steps.
### Exercises
::: {.numex arguments="1"}
```lean
theorem Exercise_4_2_9a {A B C : Type} (R : Set (A × B))
(S : Set (B × C)) : Dom (comp S R) ⊆ Dom R := sorry
```
:::
::: {.numex arguments="2"}
```lean
theorem Exercise_4_2_9b {A B C : Type} (R : Set (A × B))
(S : Set (B × C)) : Ran R ⊆ Dom S → Dom (comp S R) = Dom R := sorry
```
:::
::: {.numex arguments="3"}
```lean
--Fill in the blank to get a correct theorem and then prove the theorem
theorem Exercise_4_2_9c {A B C : Type} (R : Set (A × B))
(S : Set (B × C)) : ___ → Ran (comp S R) = Ran S := sorry
```
:::
::: {.numex arguments="4"}
```lean
theorem Exercise_4_2_12a {A B C : Type}
(R : Set (A × B)) (S T : Set (B × C)) :
(comp S R) \ (comp T R) ⊆ comp (S \ T) R := sorry
```
:::
5\. Here is an incorrect theorem with an incorrect proof.
::: {.nthm arguments="Incorrect Theorem"}
Suppose $R$ is a relation from $A$ to $B$ and $S$ and $T$ are relations from $B$ to $C$. Then $(S \setmin T) \circ R \subseteq (S \circ R) \setmin (T \circ R)$.
:::
::: {.npf arguments="Incorrect Proof (HTPI p. 190)"}
Suppose $(a, c) \in (S \setmin T) \circ R$. Then we can choose some $b \in B$ such that $(a, b) \in R$ and $(b, c) \in S \setmin T$, so $(b, c) \in S$ and $(b, c) \notin T$. Since $(a, b) \in R$ and $(b, c) \in S$, $(a, c) \in S \circ R$. Similarly, since $(a, b) \in R$ and $(b, c) \notin T$, $(a, c) \notin T \circ R$. Therefore $(a, c) \in (S \circ R) \setmin (T \circ R)$. Since $(a, c)$ was arbitrary, this shows that $(S \setmin T) \circ R \subseteq (S \circ R) \setmin (T \circ R)$. [ □]{.excl}\qedhere
:::
Find the mistake in the proof by attempting to write the proof in Lean:
```lean
--You won't be able to complete this proof
theorem Exercise_4_2_12b {A B C : Type}
(R : Set (A × B)) (S T : Set (B × C)) :
comp (S \ T) R ⊆ (comp S R) \ (comp T R) := sorry
```
6\. Is the following theorem correct? Try to prove it in Lean. If you can't prove it, see if you can find a counterexample.
```lean
--You might not be able to complete this proof
theorem Exercise_4_2_14c {A B C : Type}
(R : Set (A × B)) (S T : Set (B × C)) :
comp (S ∩ T) R = (comp S R) ∩ (comp T R) := sorry
```
7\. Is the following theorem correct? Try to prove it in Lean. If you can't prove it, see if you can find a counterexample.
```lean
--You might not be able to complete this proof
theorem Exercise_4_2_14d {A B C : Type}
(R : Set (A × B)) (S T : Set (B × C)) :
comp (S ∪ T) R = (comp S R) ∪ (comp T R) := sorry
```
## 4.3. More About Relations
Section 4.3 of *HTPI* introduces new notation for working with relations. If $R \subseteq A \times B$, $a \in A$, and $b \in B$, then *HTPI* introduces the notation $aRb$ as an alternative way of saying $(a, b) \in R$.
The notation we will use in Lean is slightly different. Corresponding to the notation $aRb$ in *HTPI*, in Lean we will use the notation `R a b`. And we cannot use this notation when `R` has type `Set (A × B)`. Rather, we will need to introduce a new type for the variable `R` in the notation `R a b`. The name we will use for this new type is `Rel A B`. Thus, if `R` has type `Rel A B`, `a` has type `A`, and `b` has type `B`, then `R a b` is a proposition. This should remind you of the way predicates work in Lean. If we have `P : Pred A`, then we think of `P` as representing a property that an object of type `A` might have, and if we also have `a : A`, then `P a` is the proposition asserting that `a` has the property represented by `P`. Similarly, if we have `R : Rel A B`, then we can think of `R` as representing a relationship that might hold between an object of type `A` and an object of type `B`, and if we also have `a : A` and `b : B`, then `R a b` is the proposition asserting that the relationship represented by `R` holds between `a` and `b`.
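For instance (this is a made-up example, not from *HTPI*), we could define a "divides" relationship on the integers in this style. Since `divides` takes two integers and produces a proposition, it can serve as a relation from `Int` to `Int`; that is, it can be used wherever an object of type `Rel Int Int` is expected:
```lean
--divides a b is the proposition that b is an integer multiple of a
def divides (a b : Int) : Prop := ∃ (c : Int), b = a * c

#check divides 2 6   --divides 2 6 : Prop
```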
Notice that in *HTPI*, the same variable $R$ is used in both the notation $aRb$ and $(a, b) \in R$. But in Lean, the notation `R a b` is used when `R` has type `Rel A B`, and the notation `(a, b) ∈ R` is used when `R` has type `Set (A × B)`. The types `Rel A B` and `Set (A × B)` are different, so we cannot use the same variable `R` in the two notations. However, there is a correspondence between the two types. Suppose `R` has type `Rel A B`. If we let `R'` denote the set of all ordered pairs `(a, b) : A × B` such that the proposition `R a b` is true, then `R'` has type `Set (A × B)`. And there is then a simple relationship between `R` and `R'`: for any objects `a : A` and `b : B`, the propositions `R a b` and `(a, b) ∈ R'` are equivalent. For our work in Lean, we will say that `R` is a *relation* from `A` to `B`, and `R'` is the *extension* of `R`.
We can define the extension of a relation, and state the correspondence between a relation and its extension, in Lean as follows:
```lean
def extension {A B : Type} (R : Rel A B) : Set (A × B) :=
{(a, b) : A × B | R a b}
theorem ext_def {A B : Type} (R : Rel A B) (a : A) (b : B) :
(a, b) ∈ extension R ↔ R a b := by rfl
```
The rest of Chapter 4 of *HTPI* focuses on relations from a set to itself; in Lean, the corresponding idea is a relation from a type to itself. If `A` is any type and `R` has type `Rel A A`, then we will say that `R` is a *binary relation on `A`*. The notation `BinRel A` denotes the type of binary relations on `A`. In other words, `BinRel A` is just an abbreviation for `Rel A A`. If `R` is a binary relation on `A`, then we say that `R` is *reflexive* if for every `x` of type `A`, `R x x` holds. It is *symmetric* if for all `x` and `y` of type `A`, if `R x y` then `R y x`. And it is *transitive* if for all `x`, `y`, and `z` of type `A`, if `R x y` and `R y z` then `R x z`. Of course, we can tell Lean about these definitions, which correspond to Definition 4.3.2 in *HTPI*:
```lean
def reflexive {A : Type} (R : BinRel A) : Prop :=
∀ (x : A), R x x
def symmetric {A : Type} (R : BinRel A) : Prop :=
∀ (x y : A), R x y → R y x
def transitive {A : Type} (R : BinRel A) : Prop :=
∀ (x y z : A), R x y → R y z → R x z
```
Once again, we refer you to *HTPI* to see examples of these concepts, and we focus here on proving theorems about these concepts in Lean. The main theorem about these concepts in Section 4.3 of *HTPI* is Theorem 4.3.4. Here is what it says (*HTPI* p. 196):
::: {.nthm arguments="Theorem 4.3.4"}
Suppose $R$ is a relation on a set $A$.
1. $R$ is reflexive iff $\{(x, y) \in A \times A \mid x = y\} \subseteq R$.
2. $R$ is symmetric iff $R = R^{-1}$.
3. $R$ is transitive iff $R \circ R \subseteq R$.
:::
We can prove corresponding statements in Lean, but we'll have to be careful to distinguish between the types `BinRel A` and `Set (A × A)`. In *HTPI*, each of the three statements in the theorem uses the same letter $R$ on both sides of the "iff", but we can't write the statements that way in Lean. In each statement, the part before "iff" uses a concept that was defined for objects of type `BinRel A`, whereas the part after "iff" uses concepts that only make sense for objects of type `Set (A × A)`. So we'll have to rephrase the statements by using the correspondence between a relation of type `BinRel A` and its extension, which has type `Set (A × A)`. Here's the Lean theorem corresponding to statement 2 of Theorem 4.3.4:
```lean
theorem Theorem_4_3_4_2 {A : Type} (R : BinRel A) :
symmetric R ↔ extension R = inv (extension R) := by
apply Iff.intro
· -- (→)
assume h1 : symmetric R
define at h1 --h1 : ∀ (x y : A), R x y → R y x
apply Set.ext
fix (a, b) : A × A
show (a, b) ∈ extension R ↔ (a, b) ∈ inv (extension R) from
calc (a, b) ∈ extension R
_ ↔ R a b := by rfl
_ ↔ R b a := Iff.intro (h1 a b) (h1 b a)
_ ↔ (a, b) ∈ inv (extension R) := by rfl
done
· -- (←)
assume h1 : extension R = inv (extension R)
define --Goal : ∀ (x y : A), R x y → R y x
fix a : A; fix b : A
assume h2 : R a b --Goal : R b a
rewrite [←ext_def R, h1, inv_def, ext_def] at h2
show R b a from h2
done
done
```
Note that near the end of the proof, we assume `h2 : R a b`, and our goal is `R b a`. We convert `R a b` to `R b a` by a sequence of rewrites. Applying the right-to-left direction of the theorem `ext_def R a b` converts `R a b` to `(a, b) ∈ extension R`. Then rewriting with `h1` converts this to `(a, b) ∈ inv (extension R)`, using `inv_def (extension R) b a` converts this to `(b, a) ∈ extension R`, and finally `ext_def R b a` produces `R b a`. Usually we can leave out the arguments when we use a theorem as a rewriting rule, and Lean will figure them out for itself. But in this case, if you try using `←ext_def` as the first rewriting rule, you will see that Lean is unable to figure out that it should use the right-to-left direction of `ext_def R a b`. Supplying the first argument turns out to be enough of a hint for Lean to figure out the rest. That's why our first rewriting rule is `←ext_def R`.
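The theorem `ext_def` can also be used as a rewriting rule on its own. Here is a tiny made-up example (not from *HTPI*) using its left-to-right direction:
```lean
example {A B : Type} (R : Rel A B) (a : A) (b : B)
    (h : R a b) : (a, b) ∈ extension R := by
  rewrite [ext_def]        --Goal : R a b
  show R a b from h
  done
```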
We'll leave the proofs of the other two statements in Theorem 4.3.4 as exercises for you.
For any types `A` and `B`, if we want to define a particular relation `R` from `A` to `B`, we can do it by specifying, for any `a : A` and `b : B`, what proposition is represented by `R a b`. For example, for any type `A`, we can define a relation `elementhood A` from `A` to `Set A` as follows:
```lean
def elementhood (A : Type) (a : A) (X : Set A) : Prop := a ∈ X
```
This definition says that if `A` is a type, `a` has type `A`, and `X` has type `Set A`, then `elementhood A a X` is the proposition `a ∈ X`. Thus, if `elementhood A` is followed by objects of type `A` and `Set A`, the result is a proposition, so `elementhood A` is functioning as a relation from `A` to `Set A`. For example, `elementhood Int` is a relation from integers to sets of integers, and `elementhood Int 6 {n : Int | ∃ (k : Int), n = 2 * k}` is the (true) statement that `6` is an element of the set of even integers. (You are asked to prove it in the exercises.)
We can also use this method to define an operation that reverses the process of forming the extension of a relation. If `R` has type `Set (A × B)`, then we define `RelFromExt R` to be the relation whose extension is `R`. A few simple theorems, which follow directly from the definition, clarify the meaning of `RelFromExt R`.
```lean
def RelFromExt {A B : Type}
(R : Set (A × B)) (a : A) (b : B) : Prop := (a, b) ∈ R
theorem RelFromExt_def {A B : Type}
(R : Set (A × B)) (a : A) (b : B) :
RelFromExt R a b ↔ (a, b) ∈ R := by rfl
example {A B : Type} (R : Rel A B) :
RelFromExt (extension R) = R := by rfl
example {A B : Type} (R : Set (A × B)) :
extension (RelFromExt R) = R := by rfl
```
### Exercises
::: {.numex arguments="1"}
```lean
example :
elementhood Int 6 {n : Int | ∃ (k : Int), n = 2 * k} := sorry
```
:::
::: {.numex arguments="2"}
```lean
theorem Theorem_4_3_4_1 {A : Type} (R : BinRel A) :
reflexive R ↔ {(x, y) : A × A | x = y} ⊆ extension R := sorry
```
:::
::: {.numex arguments="3"}
```lean
theorem Theorem_4_3_4_3 {A : Type} (R : BinRel A) :
transitive R ↔
comp (extension R) (extension R) ⊆ extension R := sorry
```
:::
::: {.numex arguments="4"}
```lean
theorem Exercise_4_3_12a {A : Type} (R : BinRel A) (h1 : reflexive R) :
reflexive (RelFromExt (inv (extension R))) := sorry
```
:::
::: {.numex arguments="5"}
```lean
theorem Exercise_4_3_12c {A : Type} (R : BinRel A) (h1 : transitive R) :
transitive (RelFromExt (inv (extension R))) := sorry
```
:::
::: {.numex arguments="6"}
```lean
theorem Exercise_4_3_18 {A : Type}
(R S : BinRel A) (h1 : transitive R) (h2 : transitive S)
(h3 : comp (extension S) (extension R) ⊆
comp (extension R) (extension S)) :
transitive (RelFromExt (comp (extension R) (extension S))) := sorry
```
:::
::: {.numex arguments="7"}
```lean
theorem Exercise_4_3_20 {A : Type} (R : BinRel A) (S : BinRel (Set A))
(h : ∀ (X Y : Set A), S X Y ↔ X ≠ ∅ ∧ Y ≠ ∅ ∧
∀ (x y : A), x ∈ X → y ∈ Y → R x y) :
transitive R → transitive S := sorry
```
:::
::: {.mdsk}
:::
In the next three exercises, determine whether or not the theorem is correct.
::: {.numex arguments="8"}
```lean
--You might not be able to complete this proof
theorem Exercise_4_3_13b {A : Type}
(R1 R2 : BinRel A) (h1 : symmetric R1) (h2 : symmetric R2) :
symmetric (RelFromExt ((extension R1) ∪ (extension R2))) := sorry
```
:::
::: {.numex arguments="9"}
```lean
--You might not be able to complete this proof
theorem Exercise_4_3_13c {A : Type}
(R1 R2 : BinRel A) (h1 : transitive R1) (h2 : transitive R2) :
transitive (RelFromExt ((extension R1) ∪ (extension R2))) := sorry
```
:::
::: {.numex arguments="10"}
```lean
--You might not be able to complete this proof
theorem Exercise_4_3_19 {A : Type} (R : BinRel A) (S : BinRel (Set A))
(h : ∀ (X Y : Set A), S X Y ↔ ∃ (x y : A), x ∈ X ∧ y ∈ Y ∧ R x y) :
transitive R → transitive S := sorry
```
:::
## 4.4. Ordering Relations
Section 4.4 of *HTPI* begins by defining several new concepts about binary relations. Here are the definitions, written in Lean:
```lean
def antisymmetric {A : Type} (R : BinRel A) : Prop :=
∀ (x y : A), R x y → R y x → x = y
def partial_order {A : Type} (R : BinRel A) : Prop :=
reflexive R ∧ transitive R ∧ antisymmetric R
def total_order {A : Type} (R : BinRel A) : Prop :=
partial_order R ∧ ∀ (x y : A), R x y ∨ R y x
```
These definitions say that if `R` is a binary relation on `A`, then `R` is *antisymmetric* if `R x y` and `R y x` cannot both be true unless `x = y`. `R` is a *partial order on `A`*---or just a *partial order*, if `A` is clear from context---if it is reflexive, transitive, and antisymmetric. And `R` is a *total order on `A`* if it is a partial order and also, for any `x` and `y` of type `A`, either `R x y` or `R y x`. Note that, since Lean groups the connective `∧` to the right, `partial_order R` means `reflexive R ∧ (transitive R ∧ antisymmetric R)`, and therefore if `h` is a proof of `partial_order R`, then `h.left` is a proof of `reflexive R`, `h.right.left` is a proof of `transitive R`, and `h.right.right` is a proof of `antisymmetric R`.
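For example, here is a short made-up proof (not from *HTPI*) that extracts one of the three parts:
```lean
example {A : Type} (R : BinRel A) (h : partial_order R) :
    antisymmetric R := by
  define at h    --h : reflexive R ∧ transitive R ∧ antisymmetric R
  show antisymmetric R from h.right.right
  done
```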
Example 4.4.3 in *HTPI* gives several examples of partial orders and total orders. We'll give one of those examples here. For any type `A`, we define `sub A` to be the subset relation on sets of objects of type `A`:
```lean
def sub (A : Type) (X Y : Set A) : Prop := X ⊆ Y
```
According to this definition, `sub A` is a binary relation on `Set A`, and for any two sets `X` and `Y` of type `Set A`, `sub A X Y` is the proposition `X ⊆ Y`. We will leave it as an exercise for you to prove that `sub A` is a partial order on the type `Set A`.
Notice that `X ⊆ Y` could be thought of as expressing a sense in which `Y` is "at least as large as" `X`. Often, if `R` is a partial order on `A` and `a` and `b` have type `A`, then `R a b` can be thought of as meaning that `b` is in some sense "at least as large as" `a`. Many of the concepts we study for partial and total orders are motivated by this interpretation of `R`.
For example, if `R` is a partial order on `A`, `B` has type `Set A`, and `b` has type `A`, then we say that `b` is an *`R`-smallest element* of `B` if it is an element of `B`, and every element of `B` is at least as large as `b`, according to this interpretation of the ordering `R`. We say that `b` is an *`R`-minimal element* of `B` if it is an element of `B`, and there is no other element of `B` that is smaller than `b`, according to the ordering `R`. We can state these precisely as definitions in Lean:
```lean
def smallestElt {A : Type} (R : BinRel A) (b : A) (B : Set A) : Prop :=
b ∈ B ∧ ∀ x ∈ B, R b x
def minimalElt {A : Type} (R : BinRel A) (b : A) (B : Set A) : Prop :=
b ∈ B ∧ ¬∃ x ∈ B, R x b ∧ x ≠ b
```
Notice that, as in *HTPI*, in Lean we can write `∀ x ∈ B, P x` as an abbreviation for `∀ (x : A), x ∈ B → P x`, and `∃ x ∈ B, P x` as an abbreviation for `∃ (x : A), x ∈ B ∧ P x`. According to these definitions, `smallestElt R b B` is the proposition that `b` is an `R`-smallest element of `B`, and `minimalElt R b B` means that `b` is an `R`-minimal element of `B`.
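As a quick made-up example (not from *HTPI*), here is how the second half of `smallestElt R b B` gets applied to a particular element of `B`:
```lean
example {A : Type} (R : BinRel A) (b x : A) (B : Set A)
    (h1 : smallestElt R b B) (h2 : x ∈ B) : R b x := by
  define at h1      --h1 : b ∈ B ∧ ∀ (x : A), x ∈ B → R b x
  show R b x from h1.right x h2
  done
```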
Theorem 4.4.6 in *HTPI* asserts three statements about these concepts. We'll prove the second and third, and leave the first as an exercise for you. The first statement in Theorem 4.4.6 says that if `B` has an `R`-smallest element, then that `R`-smallest element is unique. Thus, we can talk about *the* `R`-smallest element of `B` rather than *an* `R`-smallest element. The second says that if `b` is the `R`-smallest element of `B`, then it is also an `R`-minimal element, and it is the only `R`-minimal element. Here is how you might start the proof. (Although Lean sometimes uses bounded quantifiers as abbreviations in the Infoview, we have written out the unabbreviated statements in the comments, to make the logic of some steps easier to follow.)
```lean
theorem Theorem_4_4_6_2 {A : Type} (R : BinRel A) (B : Set A) (b : A)
(h1 : partial_order R) (h2 : smallestElt R b B) :
minimalElt R b B ∧ ∀ (c : A), minimalElt R c B → b = c := by
define at h1 --h1 : reflexive R ∧ transitive R ∧ antisymmetric R
define at h2 --h2 : b ∈ B ∧ ∀ (x : A), x ∈ B → R b x
apply And.intro
· -- Proof that b is minimal
define --Goal : b ∈ B ∧ ¬∃ (x : A), x ∈ B ∧ R x b ∧ x ≠ b
apply And.intro h2.left
quant_neg --Goal : ∀ (x : A), ¬(x ∈ B ∧ R x b ∧ x ≠ b)
**demorgan : ¬(x ∈ B ∧ R x b ∧ x ≠ b)::
done
· -- Proof that b is only minimal element
**done::
done
```
When the goal is `∀ (x : A), ¬(x ∈ B ∧ R x b ∧ x ≠ b)`, it is tempting to apply the `demorgan` tactic to `¬(x ∈ B ∧ R x b ∧ x ≠ b)`, but unfortunately this generates an error in Lean: `unknown identifier 'x'`. The problem is that `x` is not defined in the tactic state, so without the quantifier `∀ (x : A)` in front of it, `¬(x ∈ B ∧ R x b ∧ x ≠ b)` doesn't mean anything to Lean. The solution to the problem is to deal with the universal quantifier first by introducing an arbitrary `x` of type `A`. Once `x` has been introduced, we can apply the `demorgan` tactic.
```lean
theorem Theorem_4_4_6_2 {A : Type} (R : BinRel A) (B : Set A) (b : A)
(h1 : partial_order R) (h2 : smallestElt R b B) :
minimalElt R b B ∧ ∀ (c : A), minimalElt R c B → b = c := by
define at h1 --h1 : reflexive R ∧ transitive R ∧ antisymmetric R
define at h2 --h2 : b ∈ B ∧ ∀ (x : A), x ∈ B → R b x
apply And.intro
· -- Proof that b is minimal
define --Goal : b ∈ B ∧ ¬∃ (x : A), x ∈ B ∧ R x b ∧ x ≠ b
apply And.intro h2.left
quant_neg --Goal : ∀ (x : A), ¬(x ∈ B ∧ R x b ∧ x ≠ b)
fix x : A
demorgan --Goal : ¬x ∈ B ∨ ¬(R x b ∧ x ≠ b)
or_right with h3 --h3 : x ∈ B; Goal : ¬(R x b ∧ x ≠ b)
demorgan --Goal : ¬R x b ∨ x = b
or_right with h4 --h4 : R x b; Goal : x = b
have h5 : R b x := h2.right x h3
have h6 : antisymmetric R := h1.right.right
define at h6 --h6 : ∀ (x y : A), R x y → R y x → x = y
show x = b from h6 x b h4 h5
done
· -- Proof that b is only minimal element
fix c : A
assume h3 : minimalElt R c B
define at h3 --h3 : c ∈ B ∧ ¬∃ (x : A), x ∈ B ∧ R x c ∧ x ≠ c
contradict h3.right with h4
--h4 : ¬b = c; Goal : ∃ (x : A), x ∈ B ∧ R x c ∧ x ≠ c
have h5 : R b c := h2.right c h3.left
show ∃ (x : A), x ∈ B ∧ R x c ∧ x ≠ c from
Exists.intro b (And.intro h2.left (And.intro h5 h4))
done
done
```
Finally, the third statement in Theorem 4.4.6 says that if `R` is a total order, then any `R`-minimal element of a set `B` must be the `R`-smallest element of `B`. The beginning of the proof is straightforward:
```lean
theorem Theorem_4_4_6_3 {A : Type} (R : BinRel A) (B : Set A) (b : A)
(h1 : total_order R) (h2 : minimalElt R b B) : smallestElt R b B := by
define at h1 --h1 : partial_order R ∧ ∀ (x y : A), R x y ∨ R y x
define at h2 --h2 : b ∈ B ∧ ¬∃ (x : A), x ∈ B ∧ R x b ∧ x ≠ b
define --Goal : b ∈ B ∧ ∀ (x : A), x ∈ B → R b x
apply And.intro h2.left --Goal : ∀ (x : A), x ∈ B → R b x
fix x : A
assume h3 : x ∈ B --Goal : R b x
**done::
```
Surprisingly, at this point it is difficult to find a way to reach the goal `R b x`. See *HTPI* for an explanation of why it turns out to be helpful to split the proof into two cases, depending on whether or not `x = b`. Of course, we use the `by_cases` tactic for this.
```lean
theorem Theorem_4_4_6_3 {A : Type} (R : BinRel A) (B : Set A) (b : A)
(h1 : total_order R) (h2 : minimalElt R b B) : smallestElt R b B := by
define at h1 --h1 : partial_order R ∧ ∀ (x y : A), R x y ∨ R y x
define at h2 --h2 : b ∈ B ∧ ¬∃ (x : A), x ∈ B ∧ R x b ∧ x ≠ b
define --Goal : b ∈ B ∧ ∀ (x : A), x ∈ B → R b x
apply And.intro h2.left --Goal : ∀ (x : A), x ∈ B → R b x
fix x : A
assume h3 : x ∈ B --Goal : R b x
by_cases h4 : x = b
· -- Case 1. h4 : x = b
rewrite [h4] --Goal : R b b
have h5 : partial_order R := h1.left
define at h5
have h6 : reflexive R := h5.left
define at h6
show R b b from h6 b
done
· -- Case 2. h4 : x ≠ b
have h5 : ∀ (x y : A), R x y ∨ R y x := h1.right
have h6 : R x b ∨ R b x := h5 x b
have h7 : ¬R x b := by
contradict h2.right with h8
show ∃ (x : A), x ∈ B ∧ R x b ∧ x ≠ b from
Exists.intro x (And.intro h3 (And.intro h8 h4))
done
disj_syll h6 h7
show R b x from h6
done
done
```
Imitating the definitions above, you should be able to formulate definitions of `R`-largest and `R`-maximal elements. Section 4.4 of *HTPI* defines four more terms: upper bound, lower bound, least upper bound, and greatest lower bound. We will discuss upper bounds and least upper bounds, and leave lower bounds and greatest lower bounds for you to figure out on your own.
If `R` is a partial order on `A`, `B` has type `Set A`, and `a` has type `A`, then `a` is called an *upper bound* for `B` if it is at least as large as every element of `B`. If it is the smallest element of the set of upper bounds, then it is called the *least upper bound* of `B`. The phrase "least upper bound" is often abbreviated "lub". Here are these definitions, written in Lean:
```lean
def upperBd {A : Type} (R : BinRel A) (a : A) (B : Set A) : Prop :=
∀ x ∈ B, R x a
def lub {A : Type} (R : BinRel A) (a : A) (B : Set A) : Prop :=
smallestElt R a {c : A | upperBd R c B}
```
As usual, we will let you consult *HTPI* for examples of these concepts. But we will mention one example: If `A` is a type and `F` has type `Set (Set A)`---that is, `F` is a set whose elements are sets of objects of type `A`---then the least upper bound of `F`, with respect to the partial order `sub A`, is `⋃₀ F`. We leave the proof of this fact as an exercise.
### Exercises
::: {.numex arguments="1"}
```lean
theorem Example_4_4_3_1 {A : Type} : partial_order (sub A) := sorry
```
:::
::: {.numex arguments="2"}
```lean
theorem Theorem_4_4_6_1 {A : Type} (R : BinRel A) (B : Set A) (b : A)
(h1 : partial_order R) (h2 : smallestElt R b B) :
∀ (c : A), smallestElt R c B → b = c := sorry
```
:::
::: {.numex arguments="3"}
```lean
--If F is a set of sets, then ⋃₀ F is the lub of F in the subset ordering
theorem Theorem_4_4_11 {A : Type} (F : Set (Set A)) :
lub (sub A) (⋃₀ F) F := sorry
```
:::
::: {.numex arguments="4"}
```lean
theorem Exercise_4_4_8 {A B : Type} (R : BinRel A) (S : BinRel B)
(T : BinRel (A × B)) (h1 : partial_order R) (h2 : partial_order S)
(h3 : ∀ (a a' : A) (b b' : B),
T (a, b) (a', b') ↔ R a a' ∧ S b b') :
partial_order T := sorry
```
:::
::: {.numex arguments="5"}
```lean
theorem Exercise_4_4_9_part {A B : Type} (R : BinRel A) (S : BinRel B)
(L : BinRel (A × B)) (h1 : total_order R) (h2 : total_order S)
(h3 : ∀ (a a' : A) (b b' : B),
L (a, b) (a', b') ↔ R a a' ∧ (a = a' → S b b')) :
∀ (a a' : A) (b b' : B),
L (a, b) (a', b') ∨ L (a', b') (a, b) := sorry
```
:::
::: {.numex arguments="6"}
```lean
theorem Exercise_4_4_15a {A : Type}
(R1 R2 : BinRel A) (B : Set A) (b : A)
(h1 : partial_order R1) (h2 : partial_order R2)
(h3 : extension R1 ⊆ extension R2) :
smallestElt R1 b B → smallestElt R2 b B := sorry
```
:::
::: {.numex arguments="7"}
```lean
theorem Exercise_4_4_15b {A : Type}
(R1 R2 : BinRel A) (B : Set A) (b : A)
(h1 : partial_order R1) (h2 : partial_order R2)
(h3 : extension R1 ⊆ extension R2) :
minimalElt R2 b B → minimalElt R1 b B := sorry
```
:::
::: {.numex arguments="8"}
```lean
theorem Exercise_4_4_18a {A : Type}
(R : BinRel A) (B1 B2 : Set A) (h1 : partial_order R)
(h2 : ∀ x ∈ B1, ∃ y ∈ B2, R x y) (h3 : ∀ x ∈ B2, ∃ y ∈ B1, R x y) :
∀ (x : A), upperBd R x B1 ↔ upperBd R x B2 := sorry
```
:::
::: {.numex arguments="9"}
```lean
theorem Exercise_4_4_22 {A : Type}
(R : BinRel A) (B1 B2 : Set A) (x1 x2 : A)
(h1 : partial_order R) (h2 : lub R x1 B1) (h3 : lub R x2 B2) :
B1 ⊆ B2 → R x1 x2 := sorry
```
:::
::: {.numex arguments="10"}
```lean
theorem Exercise_4_4_24 {A : Type} (R : Set (A × A)) :
smallestElt (sub (A × A)) (R ∪ (inv R))
{T : Set (A × A) | R ⊆ T ∧ symmetric (RelFromExt T)} := sorry
```
:::
## 4.5. Equivalence Relations
Chapter 4 of *HTPI* concludes with the study of one more important combination of properties that a relation might have. A binary relation $R$ on a set $A$ is called an *equivalence relation* if it is reflexive, symmetric, and transitive. If $x \in A$, then the *equivalence class* of $x$ with respect to $R$ is the set of all $y \in A$ such that $yRx$. In *HTPI*, this equivalence class is denoted $[x]_R$, so we have
$$
[x]_R = \{y \in A \mid yRx\}.
$$
The set whose elements are all of these equivalence classes is called $A$ *mod* $R$. It is written $A/R$, so
$$
A/R = \{[x]_R \mid x \in A\}.
$$
Note that $A/R$ is a set whose elements are sets: for each $x \in A$, $[x]_R$ is a subset of $A$, and $[x]_R \in A/R$.
To define these concepts in Lean, we write:
```lean
def equiv_rel {A : Type} (R : BinRel A) : Prop :=
reflexive R ∧ symmetric R ∧ transitive R
def equivClass {A : Type} (R : BinRel A) (x : A) : Set A :=
{y : A | R y x}
def mod (A : Type) (R : BinRel A) : Set (Set A) :=
{equivClass R x | x : A}
```
Thus, `equiv_rel R` is the proposition that `R` is an equivalence relation, `equivClass R x` is the equivalence class of `x` with respect to `R`, and `mod A R` is `A` mod `R`. Note that `equivClass R x` has type `Set A`, while `mod A R` has type `Set (Set A)`. The definition of `mod A R` is shorthand for `{X : Set A | ∃ (x : A), equivClass R x = X}`.
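As a small check (this example is not from *HTPI*), every equivalence class is an element of `A` mod `R`, and the proof simply unwinds this shorthand:
```lean
example {A : Type} (R : BinRel A) (x : A) :
    equivClass R x ∈ mod A R := by
  define      --Goal : ∃ (x_1 : A), equivClass R x_1 = equivClass R x
  apply Exists.intro x
  rfl
  done
```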
*HTPI* gives several examples of equivalence relations, and these examples illustrate that equivalence classes always have certain properties. The most important of these are that each equivalence class is a nonempty set, the equivalence classes do not overlap, and their union is all of `A`. We say that the equivalence classes form a *partition* of `A`. To state and prove these properties in Lean we will need some definitions. We start with these:
```lean
def empty {A : Type} (X : Set A) : Prop := ¬∃ (x : A), x ∈ X
def pairwise_disjoint {A : Type} (F : Set (Set A)) : Prop :=
∀ X ∈ F, ∀ Y ∈ F, X ≠ Y → empty (X ∩ Y)
```
To say that a set `X` is empty, we could write `X = ∅`, but it is more convenient to have a statement that says more explicitly what it means for a set to be empty. Thus, we have defined `empty X` to be the proposition saying that `X` has no elements. If `F` has type `Set (Set A)`, then `pairwise_disjoint F` is the proposition that no two distinct elements of `F` have any element in common---in other words, the elements of `F` do not overlap. We can now give the precise definition of a partition:
```lean
def partition {A : Type} (F : Set (Set A)) : Prop :=
(∀ (x : A), x ∈ ⋃₀ F) ∧ pairwise_disjoint F ∧ ∀ X ∈ F, ¬empty X
```
The main theorem about equivalence relations in *HTPI* is Theorem 4.5.4, which says that `mod A R` is a partition of `A`. The proof of this theorem is hard enough that *HTPI* proves two facts about equivalence classes first. A fact that is proven just for the purpose of using it to prove something else is often called a *lemma*. We can use this term in Lean as well. Here is the first part of Lemma 4.5.5 from *HTPI*:
```lean
lemma Lemma_4_5_5_1 {A : Type} (R : BinRel A) (h : equiv_rel R) :
∀ (x : A), x ∈ equivClass R x := by
fix x : A
define --Goal : R x x
define at h --h : reflexive R ∧ symmetric R ∧ transitive R
have Rref : reflexive R := h.left
show R x x from Rref x
done
```
The command `#check @Lemma_4_5_5_1` produces the result
::: {.ind}
```
@Lemma_4_5_5_1 : ∀ {A : Type} (R : BinRel A),
equiv_rel R → ∀ (x : A), x ∈ equivClass R x
```
:::
Thus, if we have `R : BinRel A`, `h : equiv_rel R`, and `x : A`, then `Lemma_4_5_5_1 R h x` is a proof of `x ∈ equivClass R x`. We will use this at the end of the proof of our next lemma:
```lean
lemma Lemma_4_5_5_2 {A : Type} (R : BinRel A) (h : equiv_rel R) :
∀ (x y : A), y ∈ equivClass R x ↔
equivClass R y = equivClass R x := by
have Rsymm : symmetric R := h.right.left
have Rtrans : transitive R := h.right.right
fix x : A; fix y : A
apply Iff.intro
· -- (→)
assume h2 :
y ∈ equivClass R x --Goal : equivClass R y = equivClass R x
define at h2 --h2 : R y x
apply Set.ext
fix z : A
apply Iff.intro
· -- Proof that z ∈ equivClass R y → z ∈ equivClass R x
assume h3 : z ∈ equivClass R y
define --Goal : R z x
define at h3 --h3 : R z y
show R z x from Rtrans z y x h3 h2
done
· -- Proof that z ∈ equivClass R x → z ∈ equivClass R y
assume h3 : z ∈ equivClass R x
define --Goal : R z y
define at h3 --h3 : R z x
have h4 : R x y := Rsymm y x h2
show R z y from Rtrans z x y h3 h4
done
done
· -- (←)
assume h2 :
equivClass R y = equivClass R x --Goal : y ∈ equivClass R x
rewrite [←h2] --Goal : y ∈ equivClass R y
show y ∈ equivClass R y from Lemma_4_5_5_1 R h y
done
done
```
The definition of "partition" has three parts, so to prove Theorem 4.5.4 we will have to prove three statements. It will make the proof easier to read if we prove the three statements separately.
```lean
lemma Theorem_4_5_4_part_1 {A : Type} (R : BinRel A) (h : equiv_rel R) :
∀ (x : A), x ∈ ⋃₀ (mod A R) := by
fix x : A
define --Goal : ∃ (t : Set A), t ∈ mod A R ∧ x ∈ t
apply Exists.intro (equivClass R x)
apply And.intro _ (Lemma_4_5_5_1 R h x)
--Goal : equivClass R x ∈ mod A R
define --Goal : ∃ (x_1 : A), equivClass R x_1 = equivClass R x
apply Exists.intro x
rfl
done
lemma Theorem_4_5_4_part_2 {A : Type} (R : BinRel A) (h : equiv_rel R) :
pairwise_disjoint (mod A R) := by
define
fix X : Set A
assume h2 : X ∈ mod A R
fix Y : Set A
assume h3 : Y ∈ mod A R --Goal : X ≠ Y → empty (X ∩ Y)
define at h2; define at h3
obtain (x : A) (h4 : equivClass R x = X) from h2
obtain (y : A) (h5 : equivClass R y = Y) from h3
contrapos
assume h6 : ∃ (x : A), x ∈ X ∩ Y --Goal : X = Y
obtain (z : A) (h7 : z ∈ X ∩ Y) from h6
define at h7
rewrite [←h4, ←h5] at h7 --h7 : z ∈ equivClass R x ∧ z ∈ equivClass R y
have h8 : equivClass R z = equivClass R x :=
(Lemma_4_5_5_2 R h x z).ltr h7.left
have h9 : equivClass R z = equivClass R y :=
(Lemma_4_5_5_2 R h y z).ltr h7.right
show X = Y from
calc X
_ = equivClass R x := h4.symm
_ = equivClass R z := h8.symm
_ = equivClass R y := h9
_ = Y := h5
done
lemma Theorem_4_5_4_part_3 {A : Type} (R : BinRel A) (h : equiv_rel R) :
∀ X ∈ mod A R, ¬empty X := by
fix X : Set A
assume h2 : X ∈ mod A R --Goal : ¬empty X
define; double_neg --Goal : ∃ (x : A), x ∈ X
define at h2 --h2 : ∃ (x : A), equivClass R x = X
obtain (x : A) (h3 : equivClass R x = X) from h2
rewrite [←h3]
show ∃ (x_1 : A), x_1 ∈ equivClass R x from
Exists.intro x (Lemma_4_5_5_1 R h x)
done
```
It's easy now to put everything together to prove Theorem 4.5.4.
```lean
theorem Theorem_4_5_4 {A : Type} (R : BinRel A) (h : equiv_rel R) :
partition (mod A R) := And.intro (Theorem_4_5_4_part_1 R h)
(And.intro (Theorem_4_5_4_part_2 R h) (Theorem_4_5_4_part_3 R h))
```
Theorem 4.5.4 shows that an equivalence relation on `A` determines a partition of `A`, namely `mod A R`. Our next project will be to prove Theorem 4.5.6 in *HTPI*, which says that every partition of `A` arises in this way; that is, every partition is `mod A R` for some equivalence relation `R`. To prove this, we must show how to use a partition `F` to define an equivalence relation `R` for which `mod A R = F`. The proof in *HTPI* defines the required equivalence relation `R` as a set of ordered pairs, but in Lean we will need to define it instead as a binary relation on `A`. Translating *HTPI*'s set-theoretic definition into Lean's notation for binary relations leads to the following definition:
```lean
def EqRelFromPart {A : Type} (F : Set (Set A)) (x y : A) : Prop :=
∃ X ∈ F, x ∈ X ∧ y ∈ X
```
In other words, `EqRelFromPart F` is the binary relation on `A` that is true of any two objects `x` and `y` of type `A` if and only if `x` and `y` belong to the same set in `F`. Our plan now is to show that if `F` is a partition of `A`, then `EqRelFromPart F` is an equivalence relation on `A`, and `mod A (EqRelFromPart F) = F`.
Once again, *HTPI* breaks the proof up by proving some lemmas first, and we will find it convenient to break the proof into even smaller pieces. We will leave the proofs of most of these lemmas as exercises for you.
```lean
lemma overlap_implies_equal {A : Type}
(F : Set (Set A)) (h : partition F) :
∀ X ∈ F, ∀ Y ∈ F, ∀ (x : A), x ∈ X → x ∈ Y → X = Y := sorry
lemma Lemma_4_5_7_ref {A : Type} (F : Set (Set A)) (h : partition F):
reflexive (EqRelFromPart F) := sorry
lemma Lemma_4_5_7_symm {A : Type} (F : Set (Set A)) (h : partition F):
symmetric (EqRelFromPart F) := sorry
lemma Lemma_4_5_7_trans {A : Type} (F : Set (Set A)) (h : partition F):
transitive (EqRelFromPart F) := sorry
```
We can now put these pieces together to prove Lemma 4.5.7 in *HTPI*:
```lean
lemma Lemma_4_5_7 {A : Type} (F : Set (Set A)) (h : partition F) :
equiv_rel (EqRelFromPart F) := And.intro (Lemma_4_5_7_ref F h)
(And.intro (Lemma_4_5_7_symm F h) (Lemma_4_5_7_trans F h))
```
We need one more lemma before we can prove Theorem 4.5.6:
```lean
lemma Lemma_4_5_8 {A : Type} (F : Set (Set A)) (h : partition F) :
∀ X ∈ F, ∀ x ∈ X, equivClass (EqRelFromPart F) x = X := sorry
```
We are finally now ready to address Theorem 4.5.6. Here is the statement of the theorem:
```lean
theorem Theorem_4_5_6 {A : Type} (F : Set (Set A)) (h: partition F) :
∃ (R : BinRel A), equiv_rel R ∧ mod A R = F
```
Of course, the relation `R` that we will use to prove the theorem is `EqRelFromPart F`, so we could start the proof with the tactic `apply Exists.intro (EqRelFromPart F)`. But this means that the rest of the proof will involve many statements about the relation `EqRelFromPart F`. When a complicated object appears multiple times in a proof, it can make the proof a little easier to read if we give that object a name. We can do that by using a new tactic. The tactic `set R : BinRel A := EqRelFromPart F` introduces the new variable `R` into the tactic state. The variable `R` has type `BinRel A`, and it is definitionally equal to `EqRelFromPart F`. That means that, when necessary, Lean will fill in this definition of `R`. For example, one of our first steps will be to apply `Lemma_4_5_7` to `F` and `h`. The conclusion of that lemma is `equiv_rel (EqRelFromPart F)`, but Lean will recognize this as meaning the same thing as `equiv_rel R`. Here is the proof of the theorem:
```lean
theorem Theorem_4_5_6 {A : Type} (F : Set (Set A)) (h: partition F) :
∃ (R : BinRel A), equiv_rel R ∧ mod A R = F := by
set R : BinRel A := EqRelFromPart F
apply Exists.intro R --Goal : equiv_rel R ∧ mod A R = F
apply And.intro (Lemma_4_5_7 F h) --Goal : mod A R = F
apply Set.ext
fix X : Set A --Goal : X ∈ mod A R ↔ X ∈ F
apply Iff.intro
· -- (→)
assume h2 : X ∈ mod A R --Goal : X ∈ F
define at h2 --h2 : ∃ (x : A), equivClass R x = X
obtain (x : A) (h3 : equivClass R x = X) from h2
have h4 : x ∈ ⋃₀ F := h.left x
define at h4
obtain (Y : Set A) (h5 : Y ∈ F ∧ x ∈ Y) from h4
have h6 : equivClass R x = Y :=
Lemma_4_5_8 F h Y h5.left x h5.right
rewrite [←h3, h6]
show Y ∈ F from h5.left
done
· -- (←)
assume h2 : X ∈ F --Goal : X ∈ mod A R
have h3 : ¬empty X := h.right.right X h2
define at h3; double_neg at h3 --h3 : ∃ (x : A), x ∈ X
obtain (x : A) (h4 : x ∈ X) from h3
define --Goal : ∃ (x : A), equivClass R x = X
show ∃ (x : A), equivClass R x = X from
Exists.intro x (Lemma_4_5_8 F h X h2 x h4)
done
done
```
### Exercises
::: {.numex arguments="1"}
```lean
lemma overlap_implies_equal {A : Type}
(F : Set (Set A)) (h : partition F) :
∀ X ∈ F, ∀ Y ∈ F, ∀ (x : A), x ∈ X → x ∈ Y → X = Y := sorry
```
:::
::: {.numex arguments="2"}
```lean
lemma Lemma_4_5_7_ref {A : Type} (F : Set (Set A)) (h : partition F) :
reflexive (EqRelFromPart F) := sorry
```
:::
::: {.numex arguments="3"}
```lean
lemma Lemma_4_5_7_symm {A : Type} (F : Set (Set A)) (h : partition F) :
symmetric (EqRelFromPart F) := sorry
```
:::
::: {.numex arguments="4"}
```lean
lemma Lemma_4_5_7_trans {A : Type} (F : Set (Set A)) (h : partition F) :
transitive (EqRelFromPart F) := sorry
```
:::