<!DOCTYPE HTML>
<!--
Ex Machina by TEMPLATED
templated.co @templatedco
Released for free under the Creative Commons Attribution 3.0 license (templated.co/license)
-->
<html>
<head>
<title>GraphNEx</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="description" content="" />
<meta name="keywords" content="" />
<link href='https://fonts.googleapis.com/css?family=Roboto+Condensed:700italic,400,300,700' rel='stylesheet' type='text/css'>
<!--[if lte IE 8]><script src="js/html5shiv.js"></script><![endif]-->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="js/skel.min.js"></script>
<script src="js/skel-panels.min.js"></script>
<script src="js/init.js"></script>
<noscript>
<link rel="stylesheet" href="css/skel-noscript.css" />
<link rel="stylesheet" href="css/style.css" />
<link rel="stylesheet" href="css/style-desktop.css" />
<link rel="stylesheet" href="css/style-mobile.css" />
</noscript>
<!--[if lte IE 8]><link rel="stylesheet" href="css/ie/v8.css" /><![endif]-->
<!--[if lte IE 9]><link rel="stylesheet" href="css/ie/v9.css" /><![endif]-->
<link rel="shortcut icon" type="image/png" href="images/favicon.png"/>
</head>
<body class="no-sidebar">
<!-- Header -->
<div id="header">
<div class="container">
<!-- Logo -->
<div id="logo">
<h1><a href="index.html"><img src="images/GraphNEx_logo.png" alt="GraphNEx logo" height="65"></a></h1>
</div>
<!-- Nav -->
<nav id="nav">
<ul>
<li><a href="index.html">Main</a></li>
<li class="active"><a href="achievements.html">Achievements</a></li>
<li><a href="team.html">Team</a></li>
<li><a href="publications.html">Publications</a></li>
<li><a href="blog.html">Blog</a></li>
<li><a href="events.html">Events</a></li>
</ul>
</nav>
</div>
</div>
<!-- Header -->
<!-- Banner -->
<div id="banner">
<div class="container">
</div>
</div>
<!-- /Banner -->
<!-- Main -->
<div id="page">
<!-- Main -->
<div id="main" class="container">
<div class="row">
<div class="12u">
<section>
<header>
<h2>Achievements</h2>
</header>
<header>
<h3>Methods</h3>
</header>
<ul class="style1">
<li>
<div class="container-li">
<div class="text">
<i>Explainability Value Proposition Canvas (xVPC)</i><br>
A boundary object to facilitate the interaction between AI and HCI experts for designing user-centred XAI solutions.
xVPC borrows from the value proposition canvas broadly used by entrepreneurs to create new services and products.
The GraphNEx canvas is useful not only for research and innovation in XAI but also for collaborative design-thinking activities in computer and data science education.
The canvas also supported the user-centred design of interfaces that help the average citizen understand what private information they disclose when sharing pictures online.
Compared to the original VPC, xVPC highlights a) end-user expectations expressed in terms of insights to be created and uncertainties to be reduced thanks to the XAI solution being designed,
and b) actionable knowledge provided by the XAI solution through methods and interfaces targeting a predefined end-user segment,
such as AI experts, domain-specific practitioners, or the general public.<br>
[<a href="https://infoscience.epfl.ch/record/297198" target="_blank">publication</a>]
[<a href="https://graphnex.github.io/xvpc/" target="_blank">more details</a>]
</div>
<div class="image-li">
<img src="images/xVPC.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>Novel harmonic analysis on directed graphs with random walk operator: From Fourier analysis to wavelets</i><br>
Signals on (strongly) connected directed graphs are analysed with a novel harmonic method that takes
advantage of the random walk operator. The resulting multi-scale analyses have been validated on semi-supervised
learning and signal modelling problems.<br>
[<a href="https://anr.hal.science/hal-03858081v1" target="_blank">publication</a>]
</div>
<div class="image-li">
<img src="images/harmonic.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>Simple way to learn metrics to compare signals on attributed graphs</i><br>
A new Simple Graph Metric Learning (SGML) model, based on Simple Graph Convolutional Neural Networks (SGCN)
and elements of optimal transport theory, builds an appropriate distance from a database of labelled (attributed) graphs
and improves the performance of simple classification algorithms such as k-NN. The model has few trainable parameters, and
the distance can be trained quickly while maintaining good performance.<br>
[<a href="https://arxiv.org/abs/2209.12727" target="_blank">publication</a>]
</div>
<div class="image-li">
<img src="images/sgml.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>Graph Privacy Advisor (GPA)</i><br>
A pipeline that uses existing convolutional neural networks to identify concepts (objects and scenes) and
initialise their features in a graph (e.g., object cardinality);
a graph neural network to update and propagate the features between nodes;
and an MLP-based classifier to predict if an image is private or public.
Building on and improving an existing method (<a href="https://doi.org/10.1016/j.patcog.2020.107360" target="_blank">GIP</a>),
GPA models the nodes as the object categories plus two class nodes,
and the edges as the binary co-occurrence of the nodes in at least one image of the training set.
GPA outperforms the state-of-the-art GIP in classification performance
on three existing datasets for image privacy classification:
<a href="https://zenodo.org/records/4568971" target="_blank">PicAlert</a> (+1.6 pp for Macro F1-score),
<a href="https://zenodo.org/records/6406870" target="_blank">PrivacyAlert</a> (+5.3 pp for Macro F1-score), and
<a href="https://doi.org/10.1016/j.patcog.2020.107360" target="_blank">Image Privacy Dataset</a> (+4.3 pp for Macro F1-score).
Compared to <a href="https://doi.org/10.1016/j.patcog.2020.107360" target="_blank">GIP</a>,
GPA has the advantage of using a small-size feature vector (2 elements) instead of a 4096-dimensional vector.<br>
[<a href="https://doi.org/10.48550/arXiv.2210.11169" target="_blank">publication</a>]
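The binary co-occurrence edges described above can be illustrated with a minimal sketch (a toy data layout where each image is a set of object-category indices; this is not the GPA implementation):

```python
import numpy as np

def cooccurrence_adjacency(images, num_categories):
    """Binary adjacency: A[i, j] = 1 iff categories i and j co-occur
    in at least one image of the training set."""
    A = np.zeros((num_categories, num_categories), dtype=np.uint8)
    for cats in images:
        for a in cats:
            for b in cats:
                if a != b:
                    A[a, b] = 1  # symmetric by construction of the double loop
    return A

# Toy training set: each image is the set of object-category indices it contains
images = [{0, 2}, {1, 2}, {0}]
A = cooccurrence_adjacency(images, 3)
print(A[0, 2], A[0, 1])  # → 1 0
```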
</div>
<div class="image-li">
<img src="images/GPA_pipeline.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>Human interpretable features for Privacy Protection</i><br>
A set of human-interpretable features was defined and validated on two image privacy datasets:
<a href="https://zenodo.org/records/4568971" target="_blank">PicAlert</a> and
<a href="https://zenodo.org/records/6406870" target="_blank">PrivacyAlert</a>.
Using these human-interpretable features with classical machine learning algorithms, such as logistic regression and multi-layer perceptrons,
achieves comparable performance (only 2.5 percentage points (pp) less) to using high-dimensional deep features extracted
by recent convolutional neural networks (ResNet- and ConvNeXt-based models) or vision transformers (Swin models).
Moreover, the use of these selected features together with deep features improves the classification performance by 1 pp.<br>
[<a href="https://doi.org/10.48550/arXiv.2310.19582" target="_blank">publication</a>]
</div>
<div class="image-li">
<img src="images/human-interpretable-features.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>Adaptive Neighbourhood Graph Neural Network (AN-GNN)</i><br>
A metric learning method that constructs and refines a dynamic graph neural network (AN-GNN) from acoustic features.
AN-GNN achieves 96.4% retrieval accuracy on the Cyberlioz dataset for music information retrieval, compared with 38.5% for a Euclidean metric
and 86.0% for a multilayer perceptron.
AN-GNN demonstrates the benefits of graph-based models in audio-based tasks.<br>
[<a href="https://qmro.qmul.ac.uk/xmlui/handle/123456789/90297" target="_blank">publication</a>]
</div>
<div class="image-li">
<img src="images/AN-GNN.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>New explainability performance measure</i><br>
An explainability performance measure that estimates the minimal number of features necessary
for a prediction when the explanations take the form of a list of features (nodes or groups of nodes)
ranked in order of importance for the prediction.<br>
[<a href="https://hal.science/hal-04226971/" target="_blank">publication</a>]
</div>
<div class="image-li">
<img src="images/GRETSI.png">
</div>
</div>
</li>
</ul>
<br>
<header>
<h3>Software</h3>
</header>
<ul class="style1">
<li>
<div class="container-li">
<div class="text">
<i>Graph Privacy Advisor (GPA)</i><br>
A pipeline for predicting whether an image is private or public. GPA uses a scene classifier
(a convolutional neural network pre-trained on the Places365 dataset), a trainable fully connected layer
that transforms the predicted scene logits into the class node features (logits), and an object detector
(a YOLOv5 model pre-trained on the COCO dataset with 80 object categories plus a background class)
to localise objects in an image and count their instances.
The transformed scene information and the cardinality of the localised object categories are used by GPA
as visual clues for a graph neural network (a gated graph neural network followed by a modified graph attention network)
to classify whether an image is public or private. The software provides the training and testing code for GPA
on different public datasets (PicAlert, PrivacyAlert, Image Privacy Dataset),
and demo code to run GPA on user-provided images.<br>
[<a href="https://github.com/smartcameras/GPA" target="_blank">open-source code</a>]
</div>
<div class="image-li">
<img src="images/GPA_pipeline.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>XAI for Genomics</i><br>
A benchmark of machine learning models on real and simulated datasets for predicting
phenotype from gene expression data, together with an evaluation of their explainability using different scores.
The public repository includes 3 datasets with real samples from TCGA (PanCan, BRCA, KIRC) and
5 datasets with simulated data (SIMU1, SIMU2, SimuA, SimuB, SimuC);
4 machine learning models: logistic regression (LR), multilayer perceptron (MLP), diffusion + logistic regression (DiffuseLR),
and diffusion + multilayer perceptron (DiffuseMLP); correlation graphs over all features computed from training examples;
and 4 explainability performance measures: prediction gaps with features removed in descending order of importance (PGI),
in ascending order of importance (PGU), or in random order (PGR), and feature agreement (FA).
Comparative results in terms of classification accuracy and explainability across all models and datasets are provided for reproducibility.<br>
[<a href="https://github.com/mbonto/XAI_for_genomics" target="_blank">open-source code</a>]
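The prediction-gap measures can be illustrated with a minimal sketch (a simplified PGI/PGU with a hypothetical `predict` API and a toy logistic model; not the repository code):

```python
import numpy as np

def prediction_gap(predict, x, ranking, baseline=0.0):
    """Mean absolute change in the model output as features are
    progressively masked in the order given by `ranking`."""
    p_full = predict(x)
    masked = x.copy()
    gaps = []
    for idx in ranking:
        masked[idx] = baseline           # remove the next feature
        gaps.append(abs(p_full - predict(masked)))
    return float(np.mean(gaps))

# Toy logistic model; feature 0 matters most
w = np.array([2.0, 0.5, 0.1])
predict = lambda v: 1.0 / (1.0 + np.exp(-(w @ v)))
x = np.ones(3)
pgi = prediction_gap(predict, x, [0, 1, 2])  # descending importance (PGI)
pgu = prediction_gap(predict, x, [2, 1, 0])  # ascending importance (PGU)
print(pgi > pgu)  # → True: a faithful ranking yields a larger PGI than PGU
```

Intuitively, a faithful importance ranking produces a large gap when the most important features are removed first (PGI) and a small gap when they are removed last (PGU).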
</div>
<div class="image-li">
<img src="images/GRETSI.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>Adaptive Neighborhood Graph Neural Network</i><br>
Public release of the open-source code of AN-GNN for music retrieval based on human similarity judgements.
PSGNN uses a pre-trained network (an OpenL3 model trained on both video and audio data from AudioSet) to extract embeddings
from the given audio files and constructs a graph from the embeddings in a batch, based on a novel adaptation of the Gaussian RBF kernel.
The node embeddings are further refined by a stack of graph convolution layers and then clustered through a proxy
anchor loss objective. The model is trained to predict the cluster label (19 clusters) for each input audio file and achieves 96.4% accuracy on the task.<br>
[<a href="https://github.com/cyrusvahidi/psgnn" target="_blank">open-source code</a>]
</div>
<div class="image-li">
<img src="images/AN-GNN.png">
</div>
</div>
</li>
</ul>
<br>
<header>
<h3 id="interfaces">Interfaces</h3>
</header>
<ul class="style1">
<li>
<div class="container-li">
<div class="text">
<i>You and Your Images</i><br>
In collaboration with the Royal College Academy (UK), “You and Your Images” engages the community on the privacy risks
of uploading an image online and increases people’s awareness. After answering a set of questions, people can upload an image,
receive a set of keywords describing it, and grasp how this information may be exploited by a service provider.
A demo of the interactive website was presented at the Victoria and Albert (V&amp;A) Museum in London,
during the Friday Late event on 23 September 2022 and the Digital Design Weekend on 24-25 September 2022.
Both events were well attended, with an estimated 2,000+ visitors overall, and over 100 participants, from young professionals
to data-privacy experts, took part in the “You and Your Images” activity.
Participants commented positively, noting that the activity helped them better understand
how much information images reveal and how this information could be used.<br>
[<a href="https://www.visualeaks.org/youandyourimages/" target="_blank">web interface</a>]
</div>
<div class="image-li">
<img src="images/you-and-your-images.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>Privacy Advisor for XAI: User study and word-cloud visualisation</i><br>
An interface that, given an input image, shows a naive user a word cloud of the concepts predicted by a general-purpose classifier.
The interface was used in a user study assessing the perceived privacy degree of an image (privacy awareness)
and user satisfaction (quality and utility of the interface), focusing on the usefulness of the word cloud when reviewing the initial evaluation.
The results of this study suggest that using a word cloud to display keywords representing privacy concepts present
in the input image could make users more aware of an image’s potentially private nature.<br>
[<a href="https://privacy-advisor.netlify.app" target="_blank">web interface</a>]
</div>
<div class="image-li">
<img src="images/privacyadvisor_userstudy.png">
</div>
</div>
</li>
<li>
<div class="container-li">
<div class="text">
<i>G-Interface: Visualising and manipulating prior knowledge graphs for Privacy Protection and System Genetics</i><br>
G-Interface, based on the <a href="https://js.cytoscape.org/" target="_blank">Cytoscape.js</a> library,
visualises knowledge-graph data related to models developed within GraphNEx for the Privacy Protection and System Genetics
use cases. G-Interface provides a menu to filter and manipulate the graph, enabling model refinement.
The software, or part of it (e.g., word clouds, layouts), can be integrated into other platforms, projects, and experiments.<br>
[<a href="https://graphnex.github.io/g-interface/" target="_blank">web interface</a>]
[<a href="https://github.com/graphnex/g-interface" target="_blank">open-source code</a>]
</div>
<div class="image-li">
<img src="images/g-interface.png">
</div>
</div>
</li>
</ul>
<br>
<header>
<h3>Other</h3>
</header>
<ul class="style1">
<li>
<div class="container-li">
<div class="text">
<i>The 2022 Intelligent Sensing Winter School on XAI Sensing</i><br>
Organised in collaboration with the <a href="https://cis.eecs.qmul.ac.uk/" target="_blank">Centre for Intelligent Sensing</a> at Queen Mary University of London,
the Winter School hosted 4 tutorials and 8 talks from international experts presenting their research related to
Explainable Artificial Intelligence (XAI) and Interpretable Machine Learning.
The audience was also invited to submit an expression of interest for a short (5-10 minute) presentation related to their work, and 6 people gave a short talk on the last day.
The event received more than 600 registrations, with peak attendance of around 200 participants.<br>
[<a href="http://cis.eecs.qmul.ac.uk/school2022.html" target="_blank">webpage</a>]
</div>
<div class="image-li">
<img src="images/cis-winter-school.png">
</div>
</div>
</li>
</ul>
</section>
</div>
</div>
</div>
<!-- Main -->
</div>
<!-- /Main -->
<!-- Featured -->
<div id="featured">
<div class="container">
<div class="divider"></div>
</div>
</div>
<!-- /Featured -->
<div id="footer">
<div class="container">
<div class="row">
<section>
<h2>Partners</h2>
<ul class="style5">
<li><a href="https://www.qmul.ac.uk/" target="_blank"><img src="images/QMUL_logo.png" alt="QMUL logo" style="padding:5px 55px 5px 5px"></a></li>
<li><a href="http://www.ens-lyon.fr/" target="_blank"><img src="images/ENS_de_Lyon_logo.png" alt="ENS de Lyon logo" style="padding:5px 55px 5px 5px;"></a></li>
<li><a href="https://www.epfl.ch/en/" target="_blank"><img src="images/EPFL_logo.png" alt="EPFL logo" style="padding:5px 55px 5px 5px;"></a></li>
</ul>
</section>
</div>
<div class="row">
<section>
<h2>Sponsors</h2>
<ul class="style5">
<li><a href="https://epsrc.ukri.org/" target="_blank"><img src="images/EPSRC_logo.png" alt="EPSRC logo" style="padding:5px 55px 5px 5px"></a></li>
<li><a href="https://anr.fr/en/" target="_blank"><img src="images/anr_logo.png" alt="ANR logo" style="padding:5px 55px 5px 5px"></a></li>
<li><a href="https://www.snf.ch/en" target="_blank"><img src="images/SNF_logo.png" alt="SNF logo" style="padding:5px 55px 5px 5px"></a></li>
<li><a href="https://www.chistera.eu/" target="_blank"><img src="images/Chistera_logo.png" alt="Chistera logo" style="padding:5px 55px 5px 5px"></a></li>
</ul>
</section>
</div>
</div>
</div>
<!-- Copyright -->
<div id="copyright" class="container">
Copyright © GraphNEx 2021-2024
</div>
</body>
</html>