Publications

Preprints (arXiv)

[P3] Aligning Language Models to Explicitly Handle Ambiguity

[P2] Hyper-CL: Conditioning Sentence Representations with Hypernetworks

[P1] Analysis of Multi-Source Language Training in Cross-Lingual Transfer


International Conferences

2024

[C17] BlendX: Complex Multi-Intent Detection with Blended Patterns

2023

[C16] X-SNS: Cross-Lingual Transfer Prediction through Sub-Network Similarity

[C15] Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP

[C14] Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners

2022

[C13] Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations

[C12] Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble

[C11] Revisiting the Practical Effectiveness of Constituency Parse Extraction from Pre-trained Language Models


Before 2022

[C10] Multilingual Chart-based Constituency Parse Extraction from Pre-trained Language Models

[C9] Self-Guided Contrastive Learning for BERT Sentence Representations

[C8] Heads-up! Unsupervised Constituency Parsing via Self-Attention Heads

[C7] Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction

[C6] Cell-aware Stacked LSTMs for Modeling Sentences

[C5] Don't Just Scratch the Surface: Enhancing Word Representations for Korean with Hanja

[C4] Intrinsic Evaluation of Grammatical Information within Word Embeddings

[C3] A Cross-Sentence Latent Variable Model for Semi-Supervised Text Sequence Matching

[C2] Dynamic Compositionality in Recursive Neural Networks with Structure-aware Tag Representations

[C1] Element-wise Bilinear Interaction for Sentence Matching

International Workshops

[W6] Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator

[W5] HYU at SemEval-2022 Task 2: Effective Idiomaticity Detection with Consideration at Different Levels of Contextualization

[W4] IDS at SemEval-2020 Task 10: Does Pre-trained Language Model Know What to Emphasize?

[W3] Summary Level Training of Sentence Rewriting for Abstractive Summarization

[W2] SNU_IDS at SemEval-2018 Task 12: Sentence Encoder with Contextualized Vectors for Argument Reasoning Comprehension

[W1] A Syllable-based Technique for Word Embeddings of Korean Words

Domestic Conferences

[D5] 대조학습 기반 문장표현 방법론 개선을 위한 공통 오류 분석 및 앙상블 기법
(Enhancing Sentence Representations with Common Error Analysis and Ensemble Techniques in Contrastive Learning)

[D4] 효과적인 한국어 교차언어 전송을 위한 특성 연구
(Research on Features for Effective Cross-Lingual Transfer in Korean)

[D3] MAdapter: 효율적인 중간 층 도입을 통한 Adapter 구조 개선
(MAdapter: A Refinement of Adapters by Augmenting Efficient Middle Layers)

[D2] 원천 언어 다각화를 통한 교차 언어 전이 성능 향상
(Enhanced Zero-Shot Cross-Lingual Transfer with the Diversification of Source Languages)

[D1] 한국어 문장 표현을 위한 비지도 대조 학습 방법론의 비교 및 분석
(Comparison and Analysis of Unsupervised Contrastive Learning Approaches for Korean Sentence Representations)