<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<script src="https://www.google.com/jsapi" type="text/javascript"></script>
<script type="text/javascript">google.load("jquery", "1.3.2");</script>
<style type="text/css">
body {
font-family: "Titillium Web", "HelveticaNeue-Light", "Helvetica Neue Light", "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif;
font-weight: 300;
font-size: 17px;
margin-left: auto;
margin-right: auto;
width: 980px;
}
h1 {
font-weight:300;
line-height: 1.15em;
}
h2 {
font-size: 1.75em;
}
a:link,a:visited {
color: #B6486F;
text-decoration: none;
}
a:hover {
color: #208799;
}
h1, h2, h3 {
text-align: center;
}
h1 {
font-size: 40px;
font-weight: 500;
}
h2 {
font-weight: 400;
margin: 16px 0px 4px 0px;
}
.paper-title {
padding: 16px 0px 16px 0px;
}
section {
margin: 32px 0px 32px 0px;
text-align: justify;
clear: both;
}
.col-5 {
width: 20%;
float: left;
}
.col-4 {
width: 25%;
float: left;
}
.col-3 {
width: 33%;
float: left;
}
.col-2 {
width: 50%;
float: left;
}
.col-1 {
width: 100%;
float: left;
}
.row, .author-row, .affil-row {
overflow: auto;
}
.author-row, .affil-row {
font-size: 26px;
}
.row {
margin: 16px 0px 16px 0px;
}
.authors {
font-size: 26px;
}
.affil-row {
margin-top: 16px;
}
.teaser {
max-width: 100%;
}
.text-center {
text-align: center;
}
.screenshot {
width: 256px;
border: 1px solid #ddd;
}
.screenshot-el {
margin-bottom: 16px;
}
hr {
height: 1px;
border: 0;
border-top: 1px solid #ddd;
margin: 0;
}
.material-icons {
vertical-align: -6px;
}
p {
line-height: 1.25em;
}
.caption {
font-size: 16px;
/*font-style: italic;*/
color: #666;
text-align: center;
margin-top: 4px;
margin-bottom: 10px;
}
video {
display: block;
margin: auto;
}
figure {
display: block;
margin: auto;
margin-top: 10px;
margin-bottom: 10px;
}
#bibtex pre {
font-size: 14px;
background-color: #eee;
padding: 16px;
}
.blue {
color: #2c82c9;
font-weight: bold;
}
.orange {
color: #d35400;
font-weight: bold;
}
.flex-row {
display: flex;
flex-flow: row wrap;
justify-content: space-around;
padding: 0;
margin: 0;
list-style: none;
}
.paper-btn {
position: relative;
text-align: center;
display: inline-block;
margin: 8px;
padding: 8px 8px;
border-width: 0;
outline: none;
border-radius: 2px;
background-color: #B6486F;
color: white !important;
font-size: 20px;
width: 100px;
font-weight: 600;
}
.paper-btn-parent {
display: flex;
justify-content: center;
margin: 16px 0px;
}
.paper-btn:hover {
opacity: 0.85;
}
.container {
margin-left: auto;
margin-right: auto;
padding-left: 16px;
padding-right: 16px;
}
.venue {
/*color: #B6486F;*/
font-size: 30px;
}
</style>
<script type="text/javascript" src="../js/hidebib.js"></script>
<link href='https://fonts.googleapis.com/css?family=Titillium+Web:400,600,400italic,600italic,300,300italic' rel='stylesheet' type='text/css'>
<head>
<title>Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction</title>
<meta property="og:description" content="Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction"/>
<link href="https://fonts.googleapis.com/css2?family=Material+Icons" rel="stylesheet">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction">
<meta name="twitter:description" content="Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction">
<meta name="twitter:image" content="">
</head>
<body>
<div class="container">
<div class="paper-title">
<h1>Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction</h1>
</div>
<div id="authors">
<center>
<div class="author-row">
<div class="col-4 text-center"><a href="https://zhentao-liu.github.io/">Zhentao Liu</a><sup>1</sup></div>
<div class="col-4 text-center"><a href="https://yuffish.github.io/">Yu Fang</a><sup>1</sup></div>
<div class="col-4 text-center"><a href="https://enigma-li.github.io/">Changjian Li</a><sup>4</sup></div>
<div class="col-4 text-center"><a href="http://hanwu.website/">Han Wu</a><sup>1</sup></div>
<div class="col-4 text-center"><a href="https://liuyuan-pal.github.io/">Yuan Liu</a><sup>5</sup></div>
<div class="col-3 text-center"><a href="https://idea.bme.shanghaitech.edu.cn/">Dinggang Shen</a><sup>1,2,3</sup></div>
<div class="col-3 text-center"><a href="https://shanghaitech-impact.github.io/">Zhiming Cui</a><sup>1</sup></div>
</div>
<center>
<div class="col-1 text-center">
<span style="font-size:20px"><sup>1</sup>School of Biomedical Engineering &amp; State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China</span>
</div>
<div class="col-1 text-center">
<span style="font-size:20px"><sup>2</sup>Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China</span>
</div>
<div class="col-1 text-center">
<span style="font-size:20px"><sup>3</sup>Shanghai Clinical Research and Trial Center, Shanghai, China</span>
</div>
<div class="col-1 text-center">
<span style="font-size:20px"><sup>4</sup>School of Informatics, The University of Edinburgh, Edinburgh, UK</span>
</div>
<div class="col-1 text-center">
<span style="font-size:20px"><sup>5</sup>Department of Computer Science, The University of Hong Kong, Hong Kong, China</span>
</div>
</center>
</center>
<br>
<div class="affil-row">
<div class="venue text-center"><b>IEEE Transactions on Medical Imaging, 2024</b></div>
</div>
<br>
<div style="clear: both">
<div class="paper-btn-parent">
<a class="paper-btn" href="https://ieeexplore.ieee.org/document/10705334">
<span class="material-icons"></span>
Paper
</a>
<a class="paper-btn" href="https://arxiv.org/abs/2303.14739">
<span class="material-icons"></span>
arXiv
</a>
<a class="paper-btn" href="https://github.com/ShanghaiTech-IMPACT/Geometry-Aware-Attenuation-Learning-for-Sparse-View-CBCT-Reconstruction/">
<span class="material-icons"></span>
Code
</a>
</div></div>
</div>
<section id="abstract">
<h2>Abstract</h2>
<hr>
<center><img width="100%" src="./image/DRR.png" style="margin-top: 20px; margin-bottom: 3px;"></center>
<p class="caption">
CBCT scanning and reconstruction. The CBCT scanning process (a) generates a sequence of 2D X-ray projections (b), which are then used to reconstruct the 3D CBCT image (c).
</p>
<div class="flex-row">
<p>Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging. Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image, leading to considerable radiation exposure. This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses. While recent advances, including deep learning and neural rendering algorithms, have made strides in this area, these methods either produce unsatisfactory results or suffer from the time inefficiency of per-case optimization. In this paper, we introduce a novel geometry-aware encoder-decoder framework to solve this problem. Our framework starts by encoding multi-view 2D features from various 2D X-ray projections with a 2D CNN encoder. Leveraging the geometry of CBCT scanning, it then back-projects the multi-view 2D features into 3D space to formulate a comprehensive volumetric feature map, followed by a 3D CNN decoder to recover the 3D CBCT image. Importantly, our approach respects the geometric relationship between the 3D CBCT image and its 2D X-ray projections during the feature back-projection stage, and leverages prior knowledge learned from the data population. This ensures its adaptability in dealing with extremely sparse-view inputs without individual training, such as scenarios with only 5 or 10 X-ray projections. Extensive evaluations on two simulated datasets and one real-world dataset demonstrate the exceptional reconstruction quality and time efficiency of our method.</p>
</div>
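As a toy illustration of the imaging model shown in the figure above, the sketch below simulates X-ray projections as line integrals of an attenuation volume. It assumes a parallel-beam simplification with axis-aligned views (a real CBCT scanner uses cone-beam geometry with a rotating point source); the `project` function and the cube phantom are hypothetical names introduced here for illustration only.

```python
import numpy as np

# Parallel-beam simplification of the X-ray forward model: each pixel of a
# projection is the line integral of the attenuation volume along one ray,
# p = sum_i mu_i * dl. Axis-aligned views stand in for scanner angles.

def project(volume, axis, voxel_size=1.0):
    """Line-integral projection of a (D, H, W) attenuation volume."""
    return volume.sum(axis=axis) * voxel_size

# A toy 3D phantom: a uniform attenuating cube inside an empty volume.
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0

# Three orthogonal "projections" of the phantom.
views = [project(vol, axis=a) for a in (0, 1, 2)]
```

Rays through the cube's centre accumulate 16 unit-attenuation voxels, so the central pixel of each projection reads 16.0, while rays missing the cube read 0.0.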
</section>
<section id="Methodology">
<h2>Methodology</h2>
<hr>
<center><img width="100%" src="./image/CBCT_recon_TMI.png" style="margin-top: 20px; margin-bottom: 3px;"></center>
<p class="caption">
Overview of our proposed method. A 2D CNN encoder first extracts feature representations from multi-view X-ray projections. Then, we build a 3D feature map by feature back projection and adaptive feature fusing. Finally, this 3D feature map is fed into a 3D CNN decoder to produce the final CBCT image.
</p>
<div class="flex-row">
<p>Sparse-view CBCT reconstruction is a highly ill-posed problem with twofold challenges: (1) How to bridge the dimension gap between multi-view 2D X-ray projections and the CBCT image; (2) How to solve information insufficiency introduced by extremely sparse-view input. In this study, we introduce a geometry-aware encoder-decoder framework to solve this task efficiently. It seamlessly integrates the multi-view consistency of neural rendering and the generalization ability of deep learning, effectively addressing the challenges mentioned above. Specifically, we first adopt a 2D convolutional neural network (CNN) encoder to extract multi-view 2D features from different X-ray projections. Then, in aligning with the geometry of CBCT scanning, we back-project multi-view 2D features into 3D space, which properly bridges the dimension gap with multi-view consistency. Particularly, as different views offer varying degrees of information, an adaptive feature fusion strategy is further introduced to aggregate these multi-view features. Consequently, a 3D volumetric feature is constructed and then decoded into 3D CBCT image with a 3D CNN decoder. Our framework's inherent geometry awareness ensures accurate information retrieval from multi-view X-ray projections. Moreover, by capturing prior knowledge from populations in extensive datasets, our method can generalize well across different patients without individual optimization, even with extremely sparse input views, such as 5 or 10 views. We have validated our effectiveness on two simulated datasets (dental and spine) and one real-world dataset (walnut). You may refer to the code link for the details of our datasets.<p>
</div>
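The feature back-projection step described above can be sketched in a few lines of NumPy. This is a minimal sketch under simplifying assumptions: pinhole 3x4 projection matrices, nearest-neighbour sampling in place of differentiable bilinear interpolation, and plain mean fusion in place of the paper's learned adaptive fusion; `back_project` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def back_project(feats, proj_mats, grid):
    """Lift multi-view 2D features into a 3D volumetric feature map.

    feats:     (V, C, H, W) per-view feature maps from the 2D encoder.
    proj_mats: (V, 3, 4) world-to-pixel projection matrices.
    grid:      (N, 3) voxel-centre world coordinates.
    Returns an (N, C) fused volumetric feature (reshaped to a volume outside).
    """
    V, C, H, W = feats.shape
    homo = np.concatenate([grid, np.ones((len(grid), 1))], axis=1)  # (N, 4)
    acc = np.zeros((len(grid), C))
    hits = np.zeros((len(grid), 1))
    for v in range(V):
        uvw = homo @ proj_mats[v].T            # (N, 3) homogeneous pixel coords
        uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
        px = np.round(uv).astype(int)          # nearest-neighbour sampling
        ok = (px[:, 0] >= 0) & (px[:, 0] < W) & (px[:, 1] >= 0) & (px[:, 1] < H)
        sampled = feats[v][:, px[ok, 1], px[ok, 0]]  # (C, K) features at pixels
        acc[ok] += sampled.T
        hits[ok] += 1
    # Mean fusion over the views that see each voxel (the paper instead
    # learns adaptive per-view fusion weights).
    return acc / np.maximum(hits, 1)
```

Each voxel is projected into every view with the scanner geometry, the 2D feature under its footprint is gathered, and the per-view features are fused into one volumetric feature that the 3D CNN decoder consumes.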
</section>
<section id="results">
<h2>Results</h2>
<hr>
<center><img width="100%" src="./image/results_fig1.png" style="margin-top: 20px; margin-bottom: 3px;"></center>
<p class="caption">
Qualitative comparison on case #10 from the dental dataset (axial slice). Window: [-1000, 2000] HU. For more results, please refer to our paper.
</p>
<center><img width="100%" src="./image/results_fig.png" style="margin-top: 20px; margin-bottom: 3px;"></center>
<p class="caption">
Qualitative comparison with two current SOTA methods, SNAF and DIF-Net, on case #9 from the dental dataset. From top to bottom: axial, coronal, and sagittal slices. Window: [-1000, 2000] HU. For more results, please refer to our paper.
</p>
<center><img width="100%" src="./image/results_tab.png" style="margin-top: 20px; margin-bottom: 3px;"></center>
<p class="caption">
Quantitative comparison on case #9 from the dental dataset. The best performance is shown in bold. For more results, please refer to our paper.
</p>
</section>
<section id="bibtex">
<h2>Citation</h2>
<hr>
<pre><code>@ARTICLE{SVCT,
author={Liu, Zhentao and Fang, Yu and Li, Changjian and Wu, Han and Liu, Yuan and Shen, Dinggang and Cui, Zhiming},
journal={IEEE Transactions on Medical Imaging},
title={Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction},
year={2024},
doi={10.1109/TMI.2024.3473970}
}
</code></pre>
</section>
</div>
</body>
</html>