Research
Research Agenda
Any research topic related to natural language processing falls within the interests of HYU NLP Lab members.
In particular, we have recently been paying more attention to topics such as:
Effective and efficient utilization of large language models & spreading the benefits of scale to resource-constrained environments
Parameter-efficient transfer learning
In-context learning and prompt engineering
Knowledge distillation, pruning, quantization, etc.
LLMs with longer context
Ensuring that the benefits of NLP are equally accessible to everyone
Democratization (reverse-engineering) of related technologies
Multilingual language models & cross-lingual transfer & Korean NLP
Research related to bias, fairness, safety, ethics, etc.
Code generation
Sustainable updating of knowledge
Continual learning
Retrieval-augmented generation
Models augmented with tools
Building domain/task-specific (customized) specialists
Domain adaptation for temporal/distributional shift
Knowledge editing
Text representation learning and related applications
Contrastive learning
Dense retrieval & open-domain question answering
Multi-label classification
Interpretability and explainable AI (XAI) for NLP
Probing: analysis of the inner workings of existing approaches and models
Fusion of linguistic theories and neural approaches
Construction of evaluation protocols from diverse (e.g., linguistic, societal) perspectives
Natural language generation
Consistent and factual generation without hallucination
Detection of machine-generated text
Dialogue systems
Multimodality
Combination of visual (image, video), textual (including multilingualism & programming languages), tabular, and vocal features
Projects
Research as Principal Investigator
Research on Complex-Intent Models for Advanced Speech Recognition, Hyundai Motor Company (Hyundai NGV), 2024-2026
Research on Natural Language-Based Complex-Intent Algorithms, Hyundai Motor Company (Hyundai NGV), 2023-2024
Analysis and Development of Efficient Methodologies for Learning High-Quality Sentence Representations Based on Large Language Models, National Research Foundation of Korea (NRF), 2022-2024
Advancement of Pre-trained Language Model Architectures to Improve Vehicle-Domain Language Understanding in a Speech Recognition NLP Engine, Hyundai Motor Company (Hyundai NGV), 2022-2023
Collaborative Research
Hanyang University Artificial Intelligence Graduate School Support Program, Institute of Information & Communications Technology Planning & Evaluation (IITP), 2021-
BK21-FOUR Education and Research Group for AI Innovative Talent, Department of Computer Science, Hanyang University, National Research Foundation of Korea (NRF), 2022-
Big Data-Based Real-Time Monitoring of Social Environments / Social Simulation System for Risk Management in a Hyper-Connected Society, National Research Foundation of Korea (NRF), 2022-
Artificial Intelligence Semiconductor Graduate School (Hanyang University), Institute of Information & Communications Technology Planning & Evaluation (IITP), 2023-
Lab Seminar
Visit here!
Lab History
(24/02/20) One paper (BlendX: Complex Multi-Intent Detection with Blended Patterns) has been accepted for presentation at LREC-COLING 2024. Big congrats to Yejin, Jungyeon, and Kangsan!
(24/02/15) Jinhyeon, Young Hyun, Taejun, and Seong Hoon have graduated with their Master's degrees. We wish them all the best!
(23/11/22) One paper has been accepted to KSC 2023. Congrats to Jii!
(23/10/08) Two papers (X-SNS: Cross-Lingual Transfer Prediction through Sub-Network Similarity & Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP) have been accepted at EMNLP 2023 (Findings). Notably, X-SNS is our first internal project to be showcased at a major conference. Congrats to Taejun, Jinhyeon, Deokyoung, and Seong Hoon. See you in Singapore!
(23/09/25) Two papers have been accepted to HCLT 2023. Congrats to Jinhyeon and Taejun!
(23/06/30) Updated the contents regarding our journey to KCC 2023, which took place on Jeju Island. At the conference, Seong Hoon presented his work titled Enhanced Zero-Shot Cross-Lingual Transfer with the Diversification of Source Languages.
(22/11/25) One paper (Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners) has been accepted to AAAI 2023. Congrats!
(22/10/16) Taeuk, Kangsan, Jinhyeon, Minjin, and Kyumin participated in COLING 2022.
(22/10/07) Two papers (Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations and Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble) for which Taeuk participated as a corresponding author have been accepted to EMNLP 2022 (1 Main, 1 Findings). They are the results of a collaboration with SNU and Naver. See you in Abu Dhabi!
(22/10/04) One paper (Comparison and Analysis of Unsupervised Contrastive Learning Approaches for Korean Sentence Representations) has been accepted to HCLT 2022 (authors: Young Hyun, Kyumin, Minjin, Jii, Kangsan, and Taeuk). Congrats!
(22/08/17) Revisiting the Practical Effectiveness of Constituency Parse Extraction from Pre-trained Language Models (author: Taeuk) has been accepted to COLING 2022. See you soon in Gyeongju!
(22/05/22) Taeuk has been invited to serve as an Area Chair for the Unsupervised and Weakly-supervised Methods Track of EMNLP 2022.
(22/04/13) HYU at SemEval-2022 Task 2: Effective Idiomaticity Detection with Consideration at Different Levels of Contextualization (authors: Youngju and Taeuk) has been accepted to SemEval 2022. Congrats!
(21/10/15) Taeuk has been nominated to serve as an Action Editor for the ACL Rolling Review (a new reviewing platform for *ACL conferences)!
(21/08/26) One paper (Multilingual Chart-based Constituency Parse Extraction from Pre-trained Language Models) has been accepted to EMNLP 2021 (Findings).
(21/08/03) Taeuk presented his work Self-Guided Contrastive Learning for BERT Sentence Representations at ACL 2021.