Commit e6e63ce

committed: adding note to application pop up and section.
1 parent: e2112d9

11 files changed
Lines changed: 79 additions & 32 deletions

File tree

File renamed without changes.
Binary file changed (5.98 MB).
Binary file changed (1.13 MB).
Binary file not shown (-84.1 KB).
Binary file changed (933 KB).

_site/img/member_images/ghost.png
Binary file not shown (-243 KB).
Binary file not shown (-897 KB).

_site/index.html

Lines changed: 27 additions & 4 deletions
@@ -47,7 +47,19 @@
       class="w-32 h-32 object-contain" alt="Logo">
     </div>
     <h2 class="text-2xl font-bold text-blue-800 mb-4">Join Our Lab</h2>
-    <p class="text-gray-600 mb-6">For those interested in joining our lab, please fill out the <strong>Application Form</strong>.</p>
+    <p class="text-gray-600 mb-4">For those interested in joining our lab, please fill out the <strong>Application Form</strong>.</p>
+
+    <!-- Important Notice in Popup -->
+    <div class="mb-6 p-4 bg-red-50 border-l-4 border-red-500 rounded-r-xl text-left">
+      <p class="text-xs md:text-sm text-red-600 font-bold flex items-start gap-2">
+        <i class="fas fa-exclamation-triangle mt-0.5 flex-shrink-0"></i>
+        <span>
+          Only applicants who fill out the official form will have their applications reviewed.
+          Direct emails will be ignored.
+        </span>
+      </p>
+    </div>
+
     <a href="https://docs.google.com/forms/d/e/1FAIpQLScUGAM_k4n7rT736ThnAe5jn2owVe2KFO7cib3fHsiL6I099g/viewform?usp=send_form"
        target="_blank" rel="noopener noreferrer"
        class="inline-block bg-blue-600 hover:bg-blue-700 text-white font-bold py-3.5 px-10 rounded-full shadow-lg transform transition hover:-translate-y-1">
@@ -135,8 +147,8 @@ <h3 class="feature-title">Security & Privacy</h3>
   <div class="relative group">
     <div class="absolute inset-0 bg-blue-200 rounded-2xl blur-xl opacity-0 group-hover:opacity-40 transition-opacity duration-500"></div>
     <img loading="lazy"
-      src="img/dash_logo/Join_our_lab.png"
-      onerror="this.src=getImg('img/dash_logo/Join_our_lab.png')"
+      src="img/dash_logo/Join_our_lab_new.png"
+      onerror="this.src=getImg('img/dash_logo/Join_our_lab_new.png')"
       alt="Join Our Lab"
       class="relative max-w-[180px] md:max-w-full h-auto drop-shadow-2xl transition-transform duration-500 group-hover:scale-105"
   />
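The `onerror` handler above retries the image through a `getImg()` helper defined elsewhere in the site's JavaScript, which this diff does not show. A minimal sketch of what such a helper might do, assuming it resolves a relative image path against a site base URL (the `SITE_BASE` value here is an illustrative assumption, not taken from the commit):

```javascript
// Hypothetical sketch of a getImg() fallback helper; the real
// implementation is not part of this diff. Assumption: it rewrites a
// relative image path against a configured base URL so the browser can
// retry the image from an absolute location.
const SITE_BASE = "https://example.github.io"; // assumed base, for illustration

function getImg(relativePath) {
  // Strip a leading "./" or "/" so the join is predictable.
  const clean = relativePath.replace(/^\.?\//, "");
  return `${SITE_BASE}/${clean}`;
}
```

Note that if `getImg()` returns another broken URL, the `onerror` handler fires again; a production version would typically clear `onerror` before reassigning `src` to avoid a retry loop.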
@@ -157,10 +169,21 @@ <h3 class="text-3xl md:text-4xl font-black text-gray-900 mb-4 leading-tight">
     Shape the Future of <span class="text-blue-600">AI & Security</span> with us.
   </h3>

-  <p class="text-lg text-gray-600 mb-8 leading-relaxed max-w-2xl">
+  <p class="text-lg text-gray-600 mb-6 leading-relaxed max-w-2xl">
     We are looking for dedicated individuals whose research interests align with Computer Vision, Anomaly Detection, and Applied Data Science.
     <strong>Coding proficiency and passion for research are highly valued.</strong>
   </p>
+
+  <!-- IMPORTANT NOTICE IN RED -->
+  <div class="mb-8 p-4 bg-red-50 border-l-4 border-red-500 rounded-r-xl inline-block text-left">
+    <p class="text-sm md:text-base text-red-600 font-bold flex items-start gap-2">
+      <i class="fas fa-exclamation-triangle mt-1 flex-shrink-0"></i>
+      <span>
+        Only applicants who fill out this form will have their applications reviewed.
+        Emails sent directly will be ignored; only Google Forms submissions are accepted.
+      </span>
+    </p>
+  </div>

   <div class="flex flex-col sm:flex-row items-center gap-4 justify-center md:justify-start">
     <a href="https://forms.gle/RYCUasAUbsFhJtyb6"

_site/js/membersdata.js

Lines changed: 6 additions & 6 deletions
Some generated files are not rendered by default.

_site/js/publications_data.js

Lines changed: 21 additions & 20 deletions
@@ -1,24 +1,5 @@
 const PUBLICATIONS_DATA_LOCAL = [
-  {
-    "title": "Fitting Image Diffusion Models on Video Datasets",
-    "authors": [
-      "Juhun Lee",
-      "Simon S. Woo"
-    ],
-    "venue_full": "Workshop on International Conference on Computer Vision",
-    "venue": "ICCV",
-    "track": "Workshop Paper",
-    "Factor": [
-      "",
-      0
-    ],
-    "year": 2026,
-    "links": {
-      "conf": "https://iccv.thecvf.com/"
-    },
-    "img": "/img/Publications/2026_ICCVW_Juhun.png",
-    "abstract": "Image diffusion models are trained on independently sampled static images. While this is the bedrock task protocol in generative modeling, capturing the temporal world through the lens of static snapshots is information-deficient by design. This limitation leads to slower convergence, limited distributional coverage, and reduced generalization. In this work, we propose a simple and effective training strategy that leverages the temporal inductive bias present in continuous video frames to improve diffusion training. Notably, the proposed method requires no architectural modification and can be seamlessly integrated into standard diffusion training pipelines. We evaluate our method on the HandCo dataset, where hand-object interactions exhibit dense temporal coherence and subtle variations in finger articulation often result in semantically distinct motions. Empirically, our method accelerates convergence by over 2x and achieves lower FID on both training and validation distributions. It also improves generative diversity by encouraging the model to capture meaningful temporal variations. We further provide an optimization analysis showing that our regularization reduces the gradient variance, which contributes to faster convergence."
-  },
+
   {
     "title": "ICR-NET: Robust Deepfake Detection under Temporal Corruption",
     "authors": [
@@ -111,6 +92,26 @@ const PUBLICATIONS_DATA_LOCAL = [
     "img": "/img/Publications/WWW2026short_yurim.png",
     "abstract": "As pretrained models are increasingly shared on the web, ensuring that models can forget or delete sensitive, copyrighted, or private information upon request has become crucial. Machine unlearning has been proposed to address this issue. However, current evaluations for unlearning methods rely on output-based metrics, which cannot verify whether information is completely deleted or merely suppressed at the representation level, where suppression is insufficient for true unlearning. To address this gap, we propose a novel restoration-based analysis framework that uses Sparse Autoencoders to identify class-specific expert features in intermediate layers and applies inference-time steering to quantitatively distinguish between suppression and deletion. Applying our framework to 12 major unlearning methods in image classification tasks, we find that most methods achieve high restoration rates of unlearned information, indicating that they only suppress information at the decision-boundary level, while preserving semantic features in intermediate representations. Notably, even retraining from pretrained checkpoints shows high restoration, revealing that pretrained feature hierarchies persist. These results demonstrate that representation-level retention poses significant risks overlooked by output-based metrics, highlighting the need for new unlearning evaluation criteria. We propose new evaluation guidelines that prioritize representation-level verification, especially for privacy-critical applications in the pretrained model era."
   },
+  {
+    "title": "Fitting Image Diffusion Models on Video Datasets",
+    "authors": [
+      "Juhun Lee",
+      "Simon S. Woo"
+    ],
+    "venue_full": "Workshop on International Conference on Computer Vision",
+    "venue": "ICCV",
+    "track": "Workshop Paper",
+    "Factor": [
+      "",
+      0
+    ],
+    "year": 2025,
+    "links": {
+      "conf": "https://iccv.thecvf.com/"
+    },
+    "img": "/img/Publications/2025_ICCVW_Juhun.png",
+    "abstract": "Image diffusion models are trained on independently sampled static images. While this is the bedrock task protocol in generative modeling, capturing the temporal world through the lens of static snapshots is information-deficient by design. This limitation leads to slower convergence, limited distributional coverage, and reduced generalization. In this work, we propose a simple and effective training strategy that leverages the temporal inductive bias present in continuous video frames to improve diffusion training. Notably, the proposed method requires no architectural modification and can be seamlessly integrated into standard diffusion training pipelines. We evaluate our method on the HandCo dataset, where hand-object interactions exhibit dense temporal coherence and subtle variations in finger articulation often result in semantically distinct motions. Empirically, our method accelerates convergence by over 2x and achieves lower FID on both training and validation distributions. It also improves generative diversity by encouraging the model to capture meaningful temporal variations. We further provide an optimization analysis showing that our regularization reduces the gradient variance, which contributes to faster convergence."
+  },
   {
     "title": "Self-Disclosure of Mental Health via Deepfakes: Testing the Effects of Self-Deepfakes on Affective Resistance and Intention to Seek Mental Health Support",
     "authors": [

0 commit comments