<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>CERN COMPUTING</title>
<link rel="stylesheet" href="style.css" type="text/css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css" crossorigin="anonymous" referrerpolicy="no-referrer" />
<script src="script.js" defer></script>
</head>
<body style="overflow: hidden;">
<nav>
<div class="logo">
<h1><b><a href="index.html">CERN COMPUTING</a></b></h1>
</div>
<ul>
<li><b><a href="storage.html">Storage</a></b></li>
<li><b><a href="computing.html">Computing</a></b></li>
<li><b><a href="networking.html">Networking</a></b></li>
<li><b><a href="resources.html">Resources</a></b></li>
</ul>
</nav>
<br><br><br><br><br>
<div class="social1">
<h2>The history of computing at CERN is representative of the evolution of technologies that affect us all today.
Here are some objects that tell the story from the 1950s, when processors, laptops and mobile phones did not exist, and vacuum-tube computers filled entire rooms.
Computing became available to fundamental particle physics research at exactly the stage when further progress would have been halted by the lack of means to perform large-scale calculations.
Computing at CERN first depended on big, heavy mainframe supercomputers, shared by everyone (CDC, CRAY, IBM).
In the 70s, proprietary mini-computers (DEC PDP/VAX, HP, Norsk Data) appeared: with these, the experiments were able to send their data to the Data Centre more quickly, and "digitalise" the geometry of particle physics collisions.
In the 80s, workstations (Apollo, Sun, Silicon Graphics, UNIX boxes) allowed physicists for the first time to see results on a screen.
In the 90s, distributed computing became a must, and the mainframe computers were replaced by computer farms made of personal computers like the ones everyone has in their home nowadays.
</h2>
</div>
<div class="content-wrapper">
<div class="left-box">
<div class="table-container">
<div class="column">
<h1>YEAR</h1>
<div class="year-name">
<div><p>1958-1976</p></div>
<!-- wim klein -->
<div><p>1958-1990s</p></div>
<!-- mainframe intro -->
<div><p>1958-1965</p></div>
<!-- ferranti -->
<div><p>1961-1996</p></div>
<!-- ibm -->
<div><p>1965</p></div>
<!-- cdc -->
<div><p>1975</p></div>
<!-- amdahl -->
<div><p>1988</p></div>
<!-- cray -->
<div><p>1970s</p></div>
<!-- mini-computer -->
<div><p>1980s</p></div>
<!-- emulator -->
<div><p>1975</p></div>
<!-- microcomputer -->
<div><p>1971-2024</p></div>
<!-- processors -->
<div><p>1973</p></div>
<!-- touch -->
<div><p>1970s-80s</p></div>
<!-- printed circuit -->
<div><p>1982</p></div>
<!-- apollo -->
<div><p>1988-1993</p></div>
<!-- shift -->
<div><p>1980s-2000s</p></div>
<!-- VMs -->
<div><p>1980s-2000s</p></div>
<!-- PC farm -->
<div><p>1980s</p></div>
<!-- personal computers -->
<div><p>1980s-2000s</p></div>
<!-- GPUs -->
<div><p>1990s</p></div>
<!-- cloud computing -->
<div><p>2000s</p></div>
<!-- QC -->
</div>
</div>
<div class="column">
<h1>HISTORY</h1>
<div class="year-name">
<div class="clickable" data-image="images/computing/wk.jpg">
<a href="#"><p><u>Willem Klein</u></p></a>
<p>
Before computers, CERN hired the Dutch mathematical prodigy as a human calculator.
He was faster than the first analogue calculators of the time, though he tired quickly.
On 27 August 1976, Klein calculated the 73rd root of a 500-digit number in 2 minutes and 43 seconds, winning a place in the Guinness Book of Records.
He was unfortunately stabbed to death in his apartment by an unidentified individual.
Klein was faster than the Ferranti Mercury, but he tired easily and drank a lot - he was often to be found in the Saint-Genis pub.
As a 1st of April joke, operators brought CERN's Director-General in front of the Ferranti and told him that he could ask the computer any question in any language, and the computer would understand and spit out a result.
Klein was hiding behind the computer's wooden panel and had access to the controls. He also spoke 14 languages, so the trick was easily done.
<!-- https://cds.cern.ch/record/1728889/files/vol7-issue9-p173-e.pdf -->
</p>
</div>
<div class="clickable" data-image="images/computing/ibm360.jpg">
<a href="#"><p><u>Mainframe era - Introduction </u></p></a>
<p>
Why did CERN need a computer in the first place?
From the early days, computers were used to read out electronically the measurements from the films containing interesting physics events.
The first recorded data came from bubble chamber experiments.
Bubble chambers are tanks of unstable (superheated) transparent liquid, sensitive to the passage of charged particles through it.
As they force their way through the liquid, charged particles ionise the atoms in the liquid and make it boil, leaving bubbling traces.
Until the 90s, most of the Data Centre was filled with a few large mainframe computers, which were in charge of running all of the physics and engineering workloads of CERN.
CERN was a desirable client for early supercomputer companies such as IBM and CRAY, and therefore was always getting the latest machines at competitive prices.
Still, the mainframes were very expensive: around $5 million each.
Data and programmes had to be submitted through a "reception", located right at the entrance of this (Data Centre exhibition) room.
Physicists would carry their magnetic tapes of experimental data to the reception window by bike.
Along with the data, they would bring their code punched on punched cards.
Many operators, dressed in white scrub-like uniforms, roamed around here: the few who held power over all the computing power of CERN.
Scientists would wait impatiently (Carlo Rubbia was known for rushing to the window and trying to skip the queue) for their cards to be queued on the mainframe computer, and for their tapes to be manually mounted.
By the 70s, computers of various kinds had spread across CERN.
Mainframe computers started to communicate with each other, but also with smaller minicomputers, through the CERNET network.
By 1974, 150 computers of various sizes were reported to be installed on the CERN campus:
<li>small mini-computers (PDP8, HP2115A)</li>
<li>larger mini's and control computers (PDP11/45, HP4100, Nord-10, IBM1800, Modular One, Ferranti Argus 500)</li>
<li>medium-sized computers (PDP-10, IBM-360/44, CII 10070 and CDC 3200)</li>
<li>large computers (CDC 6600, CDC 7600)</li>
<br>
Computing became highly heterogeneous.
By the 80s, mainframes were part of an even bigger network of hundreds of shared computers and thousands of powerful, single-user workstations.
From the 60s to the 90s, CERN's overall computing capacity expanded by six orders of magnitude - an incredible growth.
</p>
</div>
<div class="clickable" data-image="images/computing/ferranti.PNG, images/computing/tube.jpg, images/computing/pentode.png, images/computing/ferranti.jpeg, images/computing/vacuum.jpg, images/computing/DATA_JOURNEY.jpg, images/computing/wires3.JPG, images/computing/wires0.JPG, images/computing/wires1.JPG, images/computing/wires2.JPG, images/computing/wiressssss.jpg">
<a href="#"><p><u>Mainframe computers - Ferranti Mercury</u></p></a>
<p>
<i>"It took 6 days for God to build the universe, and a little over two years for the Ferranti engineers to produce our Mercury, an assembly of complex circuitry hidden inside
a row of austere cabinets, making few concessions to people's curiosity."</i> (Paolo Zanella)
The first CERN computer, which filled a whole room, took 2 years to build and 3 months to install.
It was at the time one of the most powerful European-built computers.
It was one million times slower than today's large computers and it cost 1 million Swiss Francs of the time.
The Ferranti Mercury worked on the principle of vacuum tubes: evacuated glass ampoules with an anode (positive) and a cathode (negative).
When powered with current, the heated cathode emits electrons that flow towards the anode, carrying and amplifying the signals through a control grid in the middle (anode, cathode and grid constitute what was called a triode).
The Ferranti Mercury had several thousand vacuum tubes - every day, some of them broke and had to be changed by an operator.
As can be seen in the diagram, the physics data journey, much like today's, consisted of:
<li>Receiving the analogue data from the detector,</li>
<li>Analogue-to-digital conversion (ADC): digitalising the data, transforming the signal from analogue to digital.</li>
<li>In parallel, passing the data through the trigger. The trigger filters the data and decides what we keep and what we discard. This is the part that has evolved the most throughout the years. Initially, the trigger was NOT a computer at all, but a bundle of cables that performed really basic operations like AND and NOT gates (see pictures). At the end of the 70s, it was replaced by mini-computers.</li>
<li>Data acquisition (DAQ) of the signal. In general, we only look at a small portion of the data; especially in the past, the computers were not fast enough to process all of it.</li>
<li>Writing on tapes.</li>
<li>Transport of data to the Data Centre. In chronological order: FOCUS, Bicycle onLine, RIOS, OMNET, CERNET, Internet.</li>
<li>Full event processing in the Data Centre.</li>
<br>
The Mercury computer had one input channel (connected to a paper tape reader) and one output channel (connected to a paper tape punch).
When more data from acoustic spark chambers started being produced, physicists began to realise that automating the data read-out procedure would be useful.
Already in 1964, analysis programs were developed using an SDS 920, a computer <a href="http://cds.cern.ch/record/1242404/files/p211.pdf">adapted to run online</a> to the Ferranti Mercury.
<!-- The first vacuum tubes computer was designed in 1926 in Germany with the purpose of tax avoidance; in fact,
radio receivers had a tax that was levied depending on how many tube holders a radio receiver had. This solution allowed
radio receivers to have a single tube holder. -->
<!-- https://slideplayer.com/slide/12414425/ -->
</p>
</div>
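The ADC-trigger-DAQ chain described in the data-journey list above can be sketched as a toy pipeline. This is an illustrative sketch only, not CERN code: the function names (`digitise`, `passes_trigger`, `run_pipeline`) and the threshold value are invented for the example.

```python
import random

def digitise(analogue, n_bits=8, full_scale=1.0):
    """Toy ADC: map an analogue value in [0, full_scale) to an n-bit integer code."""
    levels = 2 ** n_bits
    code = int(analogue / full_scale * levels)
    return min(max(code, 0), levels - 1)   # clamp to the representable range

def passes_trigger(code):
    """Toy trigger: keep only 'interesting' events above a threshold,
    playing the role of the old hard-wired AND/NOT cable logic."""
    return code > 200

def run_pipeline(analogue_events):
    tape = []                               # stands in for the magnetic tape
    for analogue in analogue_events:
        code = digitise(analogue)           # ADC step: analogue -> digital
        if passes_trigger(code):            # trigger decides keep/discard
            tape.append(code)               # DAQ writes accepted events
    return tape

random.seed(1)
events = [random.random() for _ in range(1000)]
tape = run_pipeline(events)
print(f"kept {len(tape)} of {len(events)} events")
```

As in the real chain, most events are discarded at the trigger and only the accepted fraction ever reaches "tape".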
<div class="clickable" data-image="images/computing/ibm_video.mp4, images/computing/ibm709_panel.jpg, images/computing/ibm360.jpg, images/computing/ibm3090.png, images/storage/IBM_Magnetic_Tape_Comp_09.mp4, images/computing/DSCF8729.jpg, images/computing/DSCF8733.jpg, images/computing/DSCF8736.jpg, images/computing/tcm.png, images/computing/info2.png, images/computing/info0.JPG, images/computing/info1.jpg, images/computing/ibm790onair.jpg, images/computing/ibm3090.jpg, images/computing/ibmscan.jpg, images/computing/scanning.jpg, images/computing/scanning2.jpg, images/computing/bebc2.jpg, images/computing/bubbles.jpg">
<a href="#"><p><u>Mainframe computers - IBM</u></p></a>
<p>
The IBM 709 (1961) and later IBM 7090 (1963) ran the first FORTRAN ("FORmula TRANslation") programs written on punch cards, analysing measurements from bubble chambers.
CERN hired hundreds of young women - the scanning girls - whose job was to scan the bubble chambers' pictures and extract the coordinates of particle tracks by hand.
Many physicists and engineers ended up marrying them.
Already in 1964, spark chamber experiments (stacks of metal plates placed in a sealed box filled with a gas such as helium or neon to detect
charged particles from cosmic rays, which ionise the gas between the plates, creating a spark) were using IBMs to analyse data from
cosmic rays.
The IBM 7090 was the first transistorised mainframe; thanks to a Flying Spot Digitizer on-line to the 7090, spark chamber films were automatically
scanned and measured in record time.
The most powerful IBM mainframe CERN had was the IBM 3090 (1985-1996).
Here you can see a water-cooled Thermal Conduction Module (TCM) of the kind used on the mainframes. It is made of ceramic, which is geometrically stable under temperature changes,
but at the same time very hard to produce and therefore extremely expensive.
Unlike plastic boards, ceramic can also be built up in multiple layers.
Each ceramic plate has 63 layers, weighs around 2 kg and holds 2772 gold pins resting on the chips inside.
The rubber tubes brought water to the pins, which rested on the chips. The heated water was then circulated out and away.
This large panel was attached to the IBM mainframes (IBM 360 or IBM 370) and was used to connect communication lines to the mainframe channel.
<!-- scanning girls http://cds.cern.ch/record/41079
http://cds.cern.ch/record/40015 -->
<!-- https://www.researchgate.net/publication/3425891_Four_Decades_of_Research_on_Thermal_Contact_Gap_and_Joint_Resistance_in_Microelectronics -->
<!-- suerconductng coil BEBC http://cds.cern.ch/record/41411?ln=en
bebc http://cds.cern.ch/record/917685 -->
</p>
</div>
<div class="clickable" data-image="images/computing/cdc.jpg, images/computing/cdc_console1.jpg , images/computing/cdc_console.jpg, images/computing/cdc0.jpg, images/computing/cdc-1973.png, images/computing/cdc7600_suercomputer.jpg,images/computing/cdc6600_1.jpg ,images/computing/cdc6600_2.jpg ,images/computing/cdc6600_3.jpg ,images/computing/cdc6600_4.jpg ,images/computing/cdc2.jpg , images/computing/cdc6.jpg,images/computing/cdc7600_1.jpg ,images/computing/cdc7600_2.jpg ,images/computing/cdc7600_3.jpg ,images/computing/cdc7600_4.jpg ,images/computing/cdc7600_5.jpg, images/computing/vt_vs_transistors.png">
<a href="#"><p><u>Mainframe computers - CDC</u></p></a>
<p>
With the advent of the CDC 6600 (1965), the process of scanning bubble chamber tracks and sending them to the mainframe was automated for the first time.
<b>Transistors</b> (officially invented in 1947) also replaced vacuum tubes, a huge leap in technology.
Both are devices to control the flow of electrons (current); in vacuum tubes electrons are the only charge carriers, while in transistors the charge carriers
are both electrons and holes, which flow between 3 semiconducting layers.
Here, a cordwood module from the CDC 6600, containing 64 silicon transistors.
Resistors are stacked like cordwood between the two circuit boards in order to obtain a high density.
The second object, a black module with its corresponding plate, belonged to the later CDC 7600 (1972) mainframe, which provided the bulk of the computing power needed by CERN for almost 12 years
and was 5 times faster than its predecessor. During its lifetime, it processed a total of 66,813,788 jobs and delivered 61,321 CPU hours.
No system has lasted so long since.
The stories told by the engineers who later decided to emulate the mainframes to reduce their cost tell us that, while the
IBM was made in Germany and came with very easy-to-follow hardware instructions,
the CDC was designed by Seymour Cray and was much harder to understand. Cray was a peculiar character - more on him later - who wanted to do everything by himself.
<!-- https://cds.cern.ch/record/1767657
https://cds.cern.ch/record/969350
https://videos.cern.ch/record/43172
https://cds.cern.ch/record/41509
https://it-archives.web.cern.ch/content/5
https://it-archives.web.cern.ch/content/cdc-7600-2
https://it-archives.web.cern.ch/content/9733
https://it-archives.web.cern.ch/content/scan-81-11-73
https://it-archives.web.cern.ch/content/3-2
https://cds.cern.ch/record/41546
https://it-archives.web.cern.ch/content/14-0 -->
</p>
</div>
<div class="clickable" data-image="images/computing/DSCF8536.jpg, images/computing/DSCF8534.jpg ,images/computing/DSCF8533.jpg, images/computing/amdahl.jpg">
<a href="#"><p><u>Amdahl 470 - air cooling</u></p></a>
<p>
In 1967, computer scientist Gene Amdahl gave a conference talk showing that the performance improvement obtainable from parallel processing is limited.
Even if certain operations can be sped up by being performed in parallel, the operations that cannot, such as reading or writing data, limit how
much the system as a whole can be improved. Today the law is widely used in parallel computing.
It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used".
Amdahl, an IBM competitor, invented an air-cooling technology for his computers, which ran the popular IBM System/360 family of programs but were faster and less expensive.
This object contains an actual Amdahl 470 series computer logic chip with an air-cooling device mounted on top.
The package leads and cooling tower are gold-plated.
</p>
</div>
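Amdahl's law can be put in numbers with a short sketch. The formula is the standard one (speedup = 1 / ((1 - p) + p / n) for a parallelisable fraction p and n workers); the 95% figure is just an illustrative assumption, not from the exhibit.

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: overall speedup when a fraction of the work
    is parallelised across n workers and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even with 95% of the work parallelised, speedup saturates:
for n in (2, 16, 1024):
    print(f"{n:5d} workers -> {amdahl_speedup(0.95, n):.1f}x")
# The serial 5% caps the speedup below 1/0.05 = 20x,
# no matter how many workers are added.
```

This is exactly the effect Amdahl described: the serial part (e.g. I/O) dominates once the parallel part has been spread thin enough.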
<div class="clickable" data-image="images/computing/cray_video.mp4, images/computing/cray.jpg, images/computing/cray2.jpg, images/computing/DSCF8785.jpg, images/computing/DSCF8784.jpg, images/computing/1992-cray-sneakers.png, images/computing/cray.png">
<a href="#"><p><u>Mainframe computers - CRAY X-MP</u></p></a>
<p>
<!-- SecurID is a mechanism developed by Security Dynamics that allows two-factor authentication for a user on a network resource; it was necessary to run analyses on the CRAY supercomputers.
The first Unix system dates back to 1981, but it was mounted on CRAY supercomputers and replaced their CRAY OS, which was proprietary.
CERN helped CRAY to install batch systems on their supercomputers, advancing technologies worldwide. By 1997, everything had transitioned to Unix and the computing model of CERN had changed. -->
Seymour Cray was one of the founders of the company Control Data Corporation, producer of all the CERN's CDC mainframes, and later founded the Cray Computer Corporation.
The main problem of his computers was error correction, as he refused to implement it.
He enjoyed skiing, windsurfing, tennis, and other sports. Another favorite pastime was digging a tunnel under his home; he attributed the secret of his success to "visits by elves" while he worked in the tunnel:
<i>"While I'm digging in the tunnel, the elves will often come to me with solutions to my problem."</i> When he was asked by management to provide detailed one-year and five-year plans for his next machine, he simply wrote:
<i>"Five-year goal: Build the biggest computer in the world. One year goal: One-fifth of the above."</i>
<!-- https://cds.cern.ch/record/2423272, https://cds.cern.ch/record/2437108 -->
CERN’s Cray X-MP ran a Unix variant (UNICOS) rather than Cray's proprietary OS.
CERN helped CRAY install batch and tape-handling software on the system, advancing Unix technology worldwide.
By 1997 everything had transitioned to Unix, and CERN's computing model had changed,
building on this Cray development.
CERN implemented high security features for the Cray including SecurID for login,
a filtered Ethernet network and physical security for the computer centre.
SecurID was a mechanism developed by Security Dynamics based on one-time passwords;
a SecurID card was necessary to run analyses on the CRAY supercomputers.
Curiosity: the CRAY supercomputer was used in the film "Sneakers", featuring Robert Redford. </p>
</div>
<div class="clickable" data-image="images/computing/nd-minicomputer.jpg, images/computing/nd-hall.png, images/computing/pdp10.jpg, images/computing/DSCF8727.jpg, images/computing/nord-10.PNG, images/computing/norsk-data.jpg, images/computing/nd2.JPG, images/computing/nd.JPG, images/computing/comp-lib.png, images/computing/adam_eve2.jpg, images/computing/adam_eve.jpg, images/computing/magnetic film.jpg">
<a href="#"><p><u>Mini-computers</u></p></a>
<p>
CERN bought its first transistorised mini-computer in 1964 (SDS 920), to be used on-line with acoustic spark chamber experiments.
The mini-computers were not that mini; they were actually very large and tall.
The mini-computers were the first to allow <a href="http://cds.cern.ch/record/862913?ln=e">online computing</a> in all the experiments' control rooms,
digitalising the signals as they were taken. Although the first examples of online computing arrived already with the Ferranti Mercury (see Ferranti Mercury section), it
became an indispensable step of analyses when mini-computers were introduced at the trigger level.
Each trigger component was equivalent to a TV; at the time, processing and storing the data of 5 million TVs in series was a crazy concept.
Already in 1971, a project called Adam and Eve was started to scan and measure the photographs of the Mirabelle liquid-hydrogen bubble chamber (located near Serpukhov, Russia).
The mini-computer PDP8/L can be seen on the left, with its control electronics and Adolfo Fucci and colleagues manipulating it.
The use of mini-computers went hand in hand with the mainframes; as you can see in the pictures, there is a CDC mainframe next to the PDP.
Mini-computers really invaded CERN when the VAXes and PDPs arrived; by the 80s, all the experiments' control rooms had been 'VAXinated'.
The name "VAX" comes from an acronym for "Virtual Address eXtension", as the successor to the PDP-11.
A very well-known mini-computer manufacturer was the Norwegian company Norsk Data, which built the world's first on-ship computer to automate onboard tasks as early as 1969.
It was sturdy, with a thick metal layer protecting it, and it withstood cargo-boat conditions.
Here, a picture of Raymond Raush standing in front of 2 Nord-10s, used for the SPS controls,
along with some terminal monitors from the same manufacturer.
When somebody expressed the concern that the Norwegian company could go bust and asked people in the control rooms what they would do if this happened,
the reply was: "we would hire all of their employees!", as they were one of the most educated crews of computer operators and engineers at the time.
CERN was also building its own devices to connect to mini-computers. For example, as part of the <a href="https://cds.cern.ch/record/186063/files/CERN-75-15.pdf">"Library Mechanization Project"</a>
in 1973, keyboards like the yellow one in the picture were designed to edit library records on a PDP-11/20 mini-computer.
The keyboard was connected to the Unibus of the PDP-11, and its design was driven by the "needs for interactive input and editing of bibliographic information in a scientific library".
<!-- The lower part is based on the International typewriter format using ISO 7-bit encoding, while the upper part was used for Greek alphabet and diacritical signs. There were also a number of buttons for various control commands. -->
<!-- https://github.com/amakukha/PyPDP11?tab=readme-ov-file
Bjorn Hans Filip Lindstrom
We could in principle put this emulator on a raspberry pi and use the original terminal setup connected to it. That would be very cool -->
<!-- The first commercially available 32-bit computer was manufactured by Digital Equipment Corporation (DEC). -->
<!-- Adam and Eve: http://cds.cern.ch/record/767656 -->
</p>
</div>
<div class="clickable" data-image="images/computing/3081E-0.JPG, images/computing/3081E-1.JPG, images/computing/3081E-2.JPG, images/computing/3081E-3.JPG, images/computing/3081E-4.JPG, images/computing/emulator.jpg, images/computing/DSCF8603.jpg, images/computing/DSCF8606.jpg, images/computing/DSCF8607.jpg, images/computing/DSCF8613.jpg, images/computing/ua1-team.jpg">
<a href="#"><p><u>Emulators</u></p></a>
<p>
Mainframe computers were expensive (from $2 to $5 million at the time), so many engineers designed home-made integrated circuits that outperformed the microprocessors on the market.
These were called "emulators", as they emulated the functions of a mainframe, and they were 500 times cheaper (one would cost around $10k at the time).
Emulators were employed to pioneer the development of online event selection. They basically replaced what was initially done in the Data Centre:
fitting the experimental event to physics theory, analysing the data on the fly.
Notably, Adolfo Fucci, Mick Storr, Sergio Cittolin and the other members of the UA1 experiment (in the picture) were building emulators like this 3081/E from 1983.
There were 14 of them in the Data Centre, and 6 in the UA1 control room.
Using the predecessor of the 3081/E, the <a href=" https://cds.cern.ch/record/2626526/files/delphi-82-42_ocr.pdf">168/E</a>, the UA1 experiment was able to select on the fly only the “best” 10% of the events at the trigger level (the so-called express line);
this set-up led to the selection of the W and Z bosons, carriers of the weak interaction, for which Carlo Rubbia won the Nobel Prize in Physics.
The concept of filtering only the best events (the High Level Trigger) is still in vogue at CERN nowadays, with a very small percentage of events actually ending up in storage.
The emulators sat on large fans to prevent them from overheating.
Each board was designed to perform a specific task: the MB boards are 4 MB rapid-memory boards, the AS board performs arithmetic operations, the IB boards handle integers, and so on.
The IFVP board connects to input and output, enabling online computing, and is therefore the most important part.
The boards were sometimes wired-up processor boards, like the one on the side.
Transistors were mounted on the board with pins and connected to each other with copper wires.
The design was made on a computer, and a computer program figured out the best way to thread the wires.
Each transistor carries one bit of information.
At the end of the 70s, multiple integrated circuits were used to build emulators.
</p>
</div>
<!-- http://cds.cern.ch/record/1831923 -->
<div class="clickable" data-image="images/computing/super caviar.jpg, images/computing/track-ua1.png, images/computing/sueprcaviar5.jpg, images/computing/supercaviar1.jpg, images/computing/supercaviar2.jpg, images/computing/supercaviar3.jpg, images/computing/supercaviar4.jpg, images/computing/sc1.jpg, images/computing/sc2.jpg, images/computing/sc3.jpg, images/computing/supercaviar.jpg, images/computing/supercaviar-slides.png, images/computing/supercaviarslides2.png">
<a href="#"><p><u>Micro-computers</u></p></a>
<p>
<!-- Solid-state detectors began with Si strips (Kemmer 1980), (Heijne 1980), silicon drift chambers (Gatti&Rehak 1984), then Si pixels (various authors, RD19 1988-1999) and MAPS Monolithic Active Pixel Sensors. -->
Micro-computers were the first computers able to digitalise signals at the source.
Already in 1968, Charpak's multi-wire proportional chambers (Nobel Prize in Physics 1992) could achieve a counting rate a thousand
times better than existing detectors. It was a gas-filled box with a large number of parallel detector wires,
each connected to an individual transistor amplifier that would send digital signals of proton-antiproton collisions (decaying into W and Z bosons)
to a computer every 22 microseconds, in real time. This was a revolution compared to the offline processing of the bubble chambers' analogue pictures
of the past.
In 1973, the Split Field Magnet (SFM) - the largest spectrometer for particles from beam-beam collisions in the ISR -
already had 50 000 wires to amplify signals.
The development of <b>digital signal amplification</b> continued with drift chambers (Walenta 1971), employed in the UA1 experiment.
A drift chamber is a device in which electrons
liberated in a medium by ionising collisions can be
moved away from their initial position by appropriate electric fields; compared to the multi-wire chambers, which only recorded the particle's position,
in drift chambers the time component of the signal is also recorded.
By this time, data digitalisation had become a standard, along with microprocessor and data network technologies.
<!-- https://www.kfki.hu/~gbencedi/cikkek/science-13.pdf -->
<!-- Signal amplification then continued as well with Time Projection Chambers (Nygren 1974/75)
and the GEM and Micromega experiments. -->
The 168/E emulator would store the images in memory, and the SUPER CAVIAR micro-computer would read them.
This micro-computer was built by Sergio Cittolin and B. G. Taylor around an 8-bit Motorola 6800 microprocessor
and is a first example of the modern personal computer.
With the appearance of colour screens in the 80s, it became possible to see the collisions on the screen.
The online event display was a powerful monitoring tool.
At the time of the UA1 experiment, 1 Mb of data was digitalised per second, while during Run 4 of the LHC about 10 Gb of data per second is digitalised.
This is like digitalising a single book every second, compared to all the books you can find on the planet!
<!--
- Front-end full data digitalisation.
flash ADC and TDC for pulse shape and time digitalisation (CTD), ReadOut Processor (ROP) and programmable logic
- Apparatus programmable controls and monitor.
Extensive use of
(CAVIAR microcomputer, CAMAC, MacVEE, ADLC and VME industry standards)
- Real time data analysis and event selection.
Online Introduction of IBM emulators (first online farm) -->
</p>
</div>
<!-- https://cds.cern.ch/record/134523/files/CM-P00061326.pdf
https://videos.cern.ch/record/1063075, minute 7.41 screen of display.
1973: video tape recording: https://cds.cern.ch/record/880656?ln=de -->
<!-- https://cds.cern.ch/record/18303/?ln=en -->
<div class="clickable" data-image="images/computing/amd1.jpg ,images/computing/amd2.jpg ,images/computing/amd3.jpg ,images/computing/DSCF8723.jpg ,images/computing/DSCF8726.jpg ,images/computing/intel1.jpg ,images/computing/intel2.jpg ,images/computing/DSCF8550.jpg ,images/computing/DSCF8551.jpg ,images/computing/DSCF8552.jpg ,images/computing/DSCF8553.jpg">
<a href="#"><p><u>Microprocessors</u></p></a>
<p>
In a transistor, electrons flow from the emitter to the collector.
The first transistors were made of germanium, but it was soon replaced by silicon (1954), which withstands
higher temperatures. A semiconductor is "doped" by adding impurities,
which contribute extra charge carriers: electrons (n-type) or holes (p-type) that can cross the energy gap of the semiconductor material.
Integrated circuits, which appeared in 1959, join many transistors together with resistors and capacitors to perform specific functions.
They are the basis of all modern electronics and are contained in every electronic device you hold.
In 1971 the microprocessor was invented: a specific integrated circuit containing the functions of a computer's central processing unit, in other words a computer's brain.
The first microprocessor was the Intel 4004, a 4-bit chip, meaning it could process 4 bits of information at a time. It was followed by 8-bit, 16-bit (on display: an AMD 16-bit processor, 1982), 32-bit and 64-bit chips.
With the advent of microprocessors, personal computers (PCs) were born, and scientists started developing on their own devices.
Chips are produced on thin wafers, like the round object on display.
Microprocessors evolved from single-core, meaning they could execute one instruction stream at a time, to multi-core. The first dual-core processor appeared in 2004. On display: the first multi-core Itanium processor (2006) and
an Intel Core 2 Duo E6600 (2.4 GHz) from 2010.
</p>
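A word size of 4 bits, as on the Intel 4004, means a register can only hold the values 0 to 15; anything larger has to be split across several words and chained through the carry. A minimal Python sketch of this idea (illustrative only, not actual 4004 code; the function names are invented for this example):

```python
# Why word size matters: a 4-bit ALU holds only 0..15 per register,
# so wider numbers are processed as chains of 4-bit "nibbles".

WORD_BITS = 4
MASK = (1 << WORD_BITS) - 1  # 0b1111 = 15

def add_4bit(a, b, carry_in=0):
    """Add two 4-bit values, returning (result, carry_out)."""
    total = (a & MASK) + (b & MASK) + carry_in
    return total & MASK, total >> WORD_BITS

def add_multiword(xs, ys):
    """Add two numbers stored as lists of 4-bit words (least significant
    first), the way a 4-bit CPU chains additions through the carry flag."""
    result, carry = [], 0
    for a, b in zip(xs, ys):
        word, carry = add_4bit(a, b, carry)
        result.append(word)
    if carry:
        result.append(carry)
    return result

# 9 + 8 = 17 overflows one 4-bit word: result 1, carry 1
print(add_4bit(9, 8))                  # (1, 1)
# 0x39 + 0x2A = 0x63: nibbles [9, 3] + [10, 2] give [3, 6]
print(add_multiword([9, 3], [10, 2]))  # [3, 6]
```

A wider word size lets the chip do in one step what a narrower one must do in several, which is one reason the progression from 4-bit to 64-bit mattered so much.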
</div>
<div class="clickable" data-image="images\computing\ts2.jpg , images\computing\ts4.jpg, images\computing\ts03.JPG , images\computing\ts3.jpg ,images/computing/ben strumpe.jpg, images/computing/benstrumpe_1.jpg, images\computing\ts01.JPG, images\computing\ts02.JPG ,images\computing\touch screen diagram.jpg, images\computing\ts1.jpg ,images/computing/hm.jpg, images/computing/yellow-report-touch.jpg">
<a href="#"><p><u>Capacitive touch screen and mouse ball</u></p></a>
<p>
An important technological invention that revolutionised the way we interact with computers and smartphones appeared in the SPS control rooms at CERN as early as 1973.
Bent Stumpe, a Danish engineer who had worked on control systems for Danish television before arriving at CERN, came up with the idea of embedding very thin wires in glass, so that the screens would remain transparent.
To generate the signal, instead of soldering extremely small capacitors onto the printed circuits, which was very time-consuming, he realised that a finger could complete the circuit and generate a signal.
After a few attempts to keep the finger-screen contact from short-circuiting due to an unsuitable dielectric between the capacitive layers (our fingers are mostly water, and therefore conductive),
within a few months he came up with a <a href="https://cds.cern.ch/record/186242/files/CERN-73-06.pdf?version=1">proposal</a>.
The proposal also included a computer-controlled knob for navigating the screen easily; he sent a request to buy some bowling balls to a confused and baffled CERN administration.
He envisioned a system to track the cursor on the operators' screens in the SPS control room.
A ball would be embedded in the operator's desk, resting on a plane; its movements would be detected and decomposed into x and y components.
It is basically the same concept as a modern mouse, except that raster graphics with the smooth mouse movement you know today only appeared with the 1984 Macintosh.
Previously, computer graphics were made by displaying characters in a fixed grid which was usually 80 characters wide and 24 lines high.
The cursor was a blinking rectangle covering one of the 80x24 positions, and not a mouse pointer.
This meant the operators in the SPS control room looked at two screens. One was the black-and-white display behind the touch screen, with 16 fixed buttons presenting tree-structured program menus.
Each touch told the computer program which area the operator had selected, so it could display another set of buttons, until finally a button was touched that activated a real program.
That program then took over the second screen, the colour screen, where it displayed its information. The blinking rectangle on the colour screen responded to the knob's movements.
In 1977 the touch screen was presented at the Hanover Fair in the form of a "Drinkomat". Through a multiple-choice menu on a touch screen,
guests could compose their own alcoholic drinks; the completely automated creation of Bloody Marys charmed the impressed crowd.
Historically, Eric Johnson had already invented the touch-screen concept in 1965, but capacitive technology is what allowed touch screens to take over the market.
In the years since, more than 7 billion touch screens have been produced.
The father of the "mouse" concept, instead, is Douglas Engelbart; the famous public demonstration in which he
presented it for the first time (1968) went down in history as <a href="https://www.youtube.com/watch?v=UhpTiWyVa6k&ab_channel=DougEngelbartInstitute">"The Mother of All Demos"</a>.
<!-- https://cds.cern.ch/record/917787?ln=fr
https://cds.cern.ch/record/969622?ln=fr
https://cds.cern.ch/record/917909?ln=fr -->
<!-- https://cds.cern.ch/record/1763521 -->
<!-- https://videos.cern.ch/record/1950314 -->
</p>
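The rolling-ball idea above can be sketched in a few lines: two perpendicular pickups sense how far the ball has rolled along each axis, and the cursor position is updated by those two components independently. This is an illustrative Python sketch, not the actual SPS implementation; the function names are invented:

```python
# Sketch of the mouse-ball principle: a roll of the ball is reported
# as independent x and y components, which move the cursor.
import math

def ball_displacement(distance, angle_deg):
    """Decompose a ball roll of `distance` at `angle_deg` (0 = +x axis)
    into the x/y counts the two perpendicular pickups would register."""
    angle = math.radians(angle_deg)
    return distance * math.cos(angle), distance * math.sin(angle)

def update_cursor(pos, distance, angle_deg):
    """Move the cursor by the decomposed x/y components."""
    dx, dy = ball_displacement(distance, angle_deg)
    return pos[0] + dx, pos[1] + dy

# Rolling the ball 10 units at 90 degrees moves the cursor straight up.
x, y = update_cursor((0.0, 0.0), 10.0, 90.0)
print(round(x, 6), round(y, 6))
```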
</div>
<div class="clickable" data-image="images/computing/DSCF8619.jpg, images/computing/DSCF8629.jpg, images/computing/DSCF8631.jpg , images/computing/DSCF8632.jpg, images/computing/DSCF8634.jpg, images/computing/DSCF8638.jpg, images/computing/DSCF8574.jpg ,images/computing/DSCF8575.jpg ,images/computing/DSCF8579.jpg ,images/computing/DSCF8581.jpg ,images/computing/DSCF8583.jpg ,images/computing/DSCF8586.jpg ,images/computing/DSCF8587.jpg ,images/computing/DSCF8596.jpg ,images/computing/DSCF8598.jpg ,images/computing/DSCF8600.jpg ,images/computing/DSCF8559.jpg ,images/computing/DSCF8561.jpg, images/computing/doping.png">
<a href="#"><p><u>Printed circuit boards</u></p></a>
<p>The wired boards were first laid out by hand, later by a computer program.
Eventually the small components were printed directly onto the board, simplifying construction.
These boards were inserted into mini-computers.
Memory is an essential part of computing, storing data temporarily.
A ROM can only be read; its contents are programmed during masking (manufacture) according to the programmer's needs.
The old PDP-11, used by many experiments, had a bootstrap ROM programmed by soldering in individual diodes.
Each bit in this ROM diode matrix is represented by the presence or absence of one diode, which determines the value read out.
It represented a huge advance: operators no longer needed to toggle the bootstrap in on the front-panel switches.
A RAM (Random Access Memory), instead, stores state or logic and can be read and written in different clock cycles.
Here is a magnetic memory from a mini-computer.
Magnetic core memory was the predominant form of random-access computer memory for 20 years.
It uses tiny magnetic toroids (rings), the cores, through which copper wires were hand-threaded to write and read information.
Each core represents one bit of information. The cores can be magnetized in two different ways (clockwise or counterclockwise) and the bit stored in a core is zero or one depending on that core's magnetization direction.
The wires are arranged to allow an individual core to be set to either a "one" or a "zero", and for its magnetization to be changed, by sending appropriate electric current pulses through selected wires.
The process of reading the core causes the core to be reset to a "zero", thus erasing it.
Solid-state (transistor-based) memory appeared in the 70s and was used in parallel with core memory.
Core memory is slower, but needs very little energy and is very reliable.
RAM can be static or dynamic. In static RAM (SRAM), data is stored in transistors and requires a constant power supply; in dynamic RAM (DRAM), data is stored in capacitors and must be refreshed periodically.
Here, an SRAM from 1989.
</p>
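The destructive read described above (sensing a core erases it, so the controller must write the value straight back) can be modelled in a few lines of Python. This is an illustrative sketch of the behaviour, with invented class and function names, not real controller logic:

```python
# Magnetic-core behaviour in miniature: reading a core senses whether
# its magnetisation flips, which resets it to "zero", so the memory
# controller must immediately rewrite the value it just read.

class CoreBit:
    def __init__(self):
        self.magnetised = False  # the "zero" state

    def write(self, bit):
        self.magnetised = bool(bit)

    def read(self):
        value = self.magnetised
        self.magnetised = False  # destructive read: core is erased
        return value

def read_with_restore(core):
    """What the controller actually does: read, then write back."""
    value = core.read()
    core.write(value)  # restore cycle hides the destruction
    return value

core = CoreBit()
core.write(1)
print(core.read())              # True, but the core is now erased...
print(core.read())              # False
core.write(1)
print(read_with_restore(core))  # True
print(read_with_restore(core))  # True: the value survives repeated reads
```

The read-restore cycle is part of why core memory was slower than the transistor memory that eventually replaced it.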
</div>
<div class="clickable" data-image="images/computing/apollo2.jpg ,images/computing/apollo1.jpg, images\computing\apollo.png, images\computing\BBD9EF21-E7B4-46A6-94CF-A7922FA5FCE4.jpg, , images\computing\root1.jpg , images\computing\root2.jpg , images\computing\geant.jpg, images\computing\0E6B63AD-AEEE-4380-9269-00BE61102AB8.jpg, images\computing\61C4734B-E6C6-4A19-BEE3-3A8B945A3688.jpg, images/computing/hbook_onscreen.PNG, images/computing/paw_brun.PNG">
<a href="#"><p><u>Apollo workstations</u></p></a>
<p>
The first modern personal workstations were purchased by CERN in 1982 for their high-resolution graphics, but scientists and computer scientists soon realised they were also well suited to parallelising
computations: they had large storage capacity, powerful compilers and networking support, and ran Unix as an operating system.
They formed the first nodes of SHIFT, a distributed mainframe built on commodity hardware.
Parallelisation works very well for particle physics. Unlike meteorology, biology or finance (fields whose models are complex networks, requiring the system to be analysed as a whole), particle physics
splits naturally into small independent tasks, because each collision is INDEPENDENT.
Workstations had graphics terminals, and allowed the development, in 1985, of PAW (Physics Analysis Workstation), the first program through which physicists could see their results.
This was a first example of interactive analysis.
René Brun (in the pictures: sitting in front of an Apollo workstation and presenting PAW at CHEP 1989 in Oxford) tells us that programming for physics generally needs three kinds of software: simulation, reconstruction and analysis.
Simulation software (GEANT 1, 2, 3, 4), instead, focuses on generalising the geometry of a particle collision.
ROOT, based on C++, allows data to be read in chunks and modularly, and massively speeds up analysis.
The name ROOT comes from the fact that René had a beautiful tree outside his office window on the quiet Prévessin campus.
<!-- https://cds.cern.ch/record/970059
https://cds.cern.ch/record/969332?ln=en -->
</p>
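Because each collision is independent, event processing is what programmers call "embarrassingly parallel": workers never need to communicate, so results are identical whether events are processed one by one or spread across workers. A minimal Python sketch, with an invented stand-in for the per-event reconstruction step:

```python
# Independent events need no coordination: the same per-event function
# can run serially or in parallel and give identical results.
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    """Stand-in for per-event reconstruction: here, just the sum of
    some made-up energy deposits in the event."""
    return sum(event)

def process_serial(events):
    return [reconstruct(e) for e in events]

def process_parallel(events, workers=4):
    with ThreadPoolExecutor(workers) as pool:
        return list(pool.map(reconstruct, events))

events = [[1.0, 2.0], [3.5], [0.5, 0.5, 1.0]]
print(process_serial(events))    # [3.0, 3.5, 2.0]
print(process_parallel(events))  # [3.0, 3.5, 2.0]
```

This independence is exactly why farms of cheap machines, and later the Grid, fit particle physics so well.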
</div>
<div class="clickable" data-image="images/computing/shift1999.jpg">
<a href="#"><p><u>SHIFT (Scalable Heterogeneous Integrated Facility)</u></p></a>
<p>
The era of mainframes came to an end and the era of PC farms began.
A single big, expensive machine was replaced by large numbers of computers that are cheap, easy to replace, modular, and well suited to running particle-physics analyses in parallel (collisions are all independent of one another).
To make them more efficient, a faster network was needed; in this environment, the transition to international internet standards and the development of the World Wide Web thrived.
All the SHIFT machines (before Linux was adapted to all computers) ran proprietary but compatible UNIX
operating systems (OS). The use of UNIX was important to make sure that the SHIFT software behaved in the same way on
each different OS. The SHIFT scheduler distributed workloads across the different UNIX boxes, and human operators were replaced by operating systems.
In a way, the OS is like the brain, and the PC boxes are like the limbs that execute its commands.
The SHIFT tape and disk servers managed the flow of physics data to and from the SHIFT CPUs over the SHIFT
high-speed network. It was in effect a distributed (Cray-like) mainframe.
<!-- https://cds.cern.ch/record/40536 -->
</p>
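The scheduling idea (a central scheduler handing each job to whichever box currently has the least work) can be sketched in a few lines. This is a generic least-loaded strategy for illustration, not the actual SHIFT scheduler; the job names and function are invented:

```python
# Least-loaded scheduling in miniature: each job goes to the machine
# with the smallest accumulated load, tracked with a min-heap.
import heapq

def schedule(jobs, n_boxes):
    """Assign (name, cost) jobs to n_boxes machines, always picking the
    least-loaded machine. Returns (load, box_index, job_names) per box."""
    boxes = [(0.0, i, []) for i in range(n_boxes)]
    heapq.heapify(boxes)
    for name, cost in jobs:
        load, i, assigned = heapq.heappop(boxes)  # least-loaded box
        assigned.append(name)
        heapq.heappush(boxes, (load + cost, i, assigned))
    return sorted(boxes, key=lambda b: b[1])

jobs = [("reco-1", 5), ("reco-2", 3), ("sim-1", 4), ("ana-1", 2)]
for load, i, assigned in schedule(jobs, 2):
    print(f"box{i}: load={load} jobs={assigned}")
```

With these example costs, both boxes end up with a load of 7, showing how the strategy balances work across the farm.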
</div>
<div class="clickable" data-image="">
<a href="#"><p><u>Virtual Machine</u></p></a>
<p>
Virtual machines are not new: IBM was already producing them in the 1970s.
Virtualisation enables a computing environment to run multiple independent systems at the same time,
abstracting the physical hardware and creating large aggregated pools of logical resources: CPUs, memory, disks, file storage, applications and networking.
</p>
</div>
<div class="clickable" data-image=" images/computing/farms2.jpg , images/computing/sun.png, images\computing\sun.JPG, images/computing/farms.jpg, ">
<a href="#"><p><u>PC farms</u></p></a>
<p>
With the introduction of Linux, PC technology could replace the more expensive proprietary Unix boxes, which allowed SHIFT to expand into the present computer centre.
This data-centre model was then
adopted by essentially all the major computer companies, such as Google (founded 1998), Microsoft and IBM.
Sergio Cittolin tells us the story of CERN needing ever more computational power, and
how crazy it sounded at the time: when planning the computing for the LHC, the initial requirement of 10 TFLOPS of CPU power seemed unimaginable.
But when the film Titanic came out in 1997, it was made public that 5 TFLOPS of CPU power had been used for the visual effects: such power was therefore achievable, and there was hope.
On display, a Sun Ultra 30 workstation model from 1995.
</p>
</div>
<div class="clickable" data-image=" images/computing/pc0.png ,, images\computing\mac0.JPG , images\computing\mac.JPG, images/computing/pc1.png ">
<a href="#"><p><u>Personal Computers</u></p></a>
<p>
Here are some examples of the first personal computers. The first portable Mac was released in 1989; with a Motorola 68000 microprocessor running at 16 MHz and 1 MB of RAM, it cost $6,500 at the time.
They revolutionised the way scientists, and not only scientists, could work. The introduction of PCs on the market made it possible for single individuals
to have access to technology that had previously been a privilege of the few.
Computing and digitisation infiltrated every field.
They were generally produced by Apple, IBM, Olivetti and Sun.
</p>
</div>
<div class="clickable" data-image="images\computing\cittolin.jpg">
<a href="#"><p><u>GPUs</u></p></a>
<p>
GPUs appeared in the 80s to make computer graphics more efficient for video games.
They are essential for analyses that involve heavy statistical calculation and need to process large amounts of data at the same time.
By the 90s, when GPUs came about, Moore's law was approaching a plateau: the number of transistors on a chip could not keep doubling, as feature sizes
had shrunk to nanometre lengths.
Nowadays, every laptop's graphics card contains them. While the CPU (Central Processing Unit) organises the tasks in a computer and can be
thought of as the computer's brain, the GPU is like the part of the eye that processes an image before the brain inverts it and lets us understand what
we are looking at.
Just as a CPU can run multiple processes by multi-threading, a GPU can run recursive and repetitive data operations at the same time; this concept is
called data parallelism, and it is at the core of vector-graphics implementations.
This is why GPUs are extremely useful for training statistical algorithms, namely deep neural networks, which need to repeat the same calculation on data thousands of times.
Software developers at CERN were building multivariate-analysis frameworks before GPUs came about.
In fact, the concept of multivariate analysis, at the core of machine learning, was already being discussed at physics conferences in the 80s.
Notably, the "ACAT" (Advanced Computing and Analysis Techniques) conference, still organised nowadays, was initially called "AI in HEP" (Artificial Intelligence in High Energy Physics);
however, scientists thought this title sounded too extravagant and opted for something less pretentious.
Even if artificial/statistical intelligence is not new,
until GPUs took over the market it was mostly theoretical. This technology, still evolving fast, provided the hardware that made the concrete application of AI
algorithms fully possible.
Cittolin (creator of the da Vinci-like illustrations of the CMS data-processing pipeline) tells us how AI's origins can be traced back to the 15th century, when Gutenberg invented the printing press.
With this method came the concepts of indexing and page numbers, introducing an intelligent way of cataloguing information, in a sense repeating the same operation
again and again. The term "intelligence" in Artificial Intelligence should not be interpreted through the Latin "intelligentia", but rather as the automatic capability
of being more efficient.
</p>
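Data parallelism can be shown in miniature: a GPU applies the same operation (a "kernel") to many data elements at once, whereas a CPU-style loop handles them one after another. This Python sketch only models the grouping of the work (Python has no real SIMD lanes); the kernel and function names are invented for illustration:

```python
# Data parallelism in miniature: the SAME per-element operation is
# applied to whole groups of elements, rather than one at a time.

def kernel(x):
    # Toy per-element operation: the ReLU activation used in
    # deep neural networks (keep positive values, zero the rest).
    return x if x > 0 else 0

def run_scalar(data):
    """CPU-style: one element after another."""
    return [kernel(x) for x in data]

def run_data_parallel(data, lanes=4):
    """GPU-style: conceptually, `lanes` elements per step. Each chunk
    stands for one group of lanes all running the same kernel."""
    out = []
    for i in range(0, len(data), lanes):
        chunk = data[i:i + lanes]             # one group of elements
        out.extend(kernel(x) for x in chunk)  # all lanes, same kernel
    return out

data = [-2, 3, 0, 7, -1, 4]
print(run_scalar(data))         # [0, 3, 0, 7, 0, 4]
print(run_data_parallel(data))  # [0, 3, 0, 7, 0, 4]
```

Training a neural network repeats operations like this kernel over millions of values, which is why the work maps so naturally onto GPU hardware.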
</div>
<div class="clickable" data-image="x">
<a href="#"><p><u>Cloud computing</u></p></a>
<p>
In cloud computing, virtual machines are hosted on remote servers and accessed through the internet.
They have nothing to do with the sky: the term "cloud" refers to a cluster of connected machines owned by large companies, which essentially sell processing power to whoever needs it.
To this day, CERN has been a proud promoter of bottom-up efforts and free solutions, and its scientists always advocate
for technologies that are home-built and not owned by third parties. This is why the Data Centre continues to exist, and continues to expand.
</p>
</div>
<div class="clickable" data-image="images/computing/qc4.png,images/computing/qc3.png , images/computing/qc5.png, images/computing/qc1.png , images/computing/qc2.png , ">
<a href="#"><p><u>Quantum Computing</u></p></a>
<p>
The Quantum Technology Initiative is involved in quantum-computing research. Quantum computing takes advantage of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data.
</p>
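Superposition can be illustrated with a toy single-qubit simulation: a qubit's state is a pair of amplitudes, the Hadamard gate turns a definite "0" into an equal superposition, and measurement probabilities are the squared magnitudes of the amplitudes. An illustrative Python sketch (the function names are invented; real quantum computers are not simulated this way, of course):

```python
# Toy single-qubit simulation: state = (amplitude of 0, amplitude of 1).
import math

def hadamard(state):
    """The Hadamard gate: maps |0> to an equal superposition of 0 and 1."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities are the squared amplitude magnitudes."""
    return tuple(abs(amp) ** 2 for amp in state)

ket0 = (1.0, 0.0)           # a definite classical "0"
plus = hadamard(ket0)       # equal superposition: 50% chance of 0 or 1
print(probabilities(ket0))
print(probabilities(plus))
# Applying Hadamard twice returns to |0>: interference, not randomness.
print(probabilities(hadamard(plus)))
</n```

The last line is the key quantum feature: the two paths interfere so that the qubit returns deterministically to "0", something no classical coin flip can do.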
</div>
</div>
</div>
</div>
</div>
<div class="right-box">
<img src="images/computing/cdc6.jpg" alt="Placeholder">
<div id="imageGrid" class="grid-container"></div>
</div>
</div>
</body>
</html>
<!-- Curiosity: a fervent supporter of CERN, Heisenberg became the first Chairman of the Laboratory's Scientific Directives Committee in October 1954, even before CERN purchased the first computer. -->