Posts by Collection

portfolio

publications

A Unified Framework for Multi-Domain CTR Prediction via Large Language Models

Published in ACM Transactions on Information Systems (TOIS), 2024

Uni-CTR leverages Large Language Models together with pluggable domain networks to address the seesaw phenomenon and the scalability challenges in multi-domain CTR prediction, achieving state-of-the-art performance across diverse scenarios.

Citation: Zichuan Fu, Xiangyang Li, Chuhan Wu, Yichao Wang, Kuicai Dong, Xiangyu Zhao, Mengchen Zhao, Huifeng Guo, and Ruiming Tang. 2024. A Unified Framework for Multi-Domain CTR Prediction via Large Language Models. ACM Trans. Inf. Syst. Just Accepted (October 2024). https://doi.org/10.1145/3698878
Download Paper

LLM4MSR: An LLM-Enhanced Paradigm for Multi-Scenario Recommendation

Published in CIKM’24 (Full Research Paper track), Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, 2024

LLM4MSR enhances multi-scenario recommendation by leveraging an LLM for knowledge extraction and hierarchical meta networks for scenario-aware modeling, improving performance and interpretability without fine-tuning the LLM while maintaining deployment efficiency (a generic sketch of the meta-network pattern follows below).

Citation: Yuhao Wang, Yichao Wang, Zichuan Fu, Xiangyang Li, Wanyu Wang, Yuyang Ye, Xiangyu Zhao, Huifeng Guo, and Ruiming Tang. 2024. LLM4MSR: An LLM-Enhanced Paradigm for Multi-Scenario Recommendation. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM '24). Association for Computing Machinery, New York, NY, USA, 2472–2481. https://doi.org/10.1145/3627673.3679743
Download Paper
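
As a rough illustration of the hierarchical-meta-network idea mentioned above, the sketch below shows a generic hypernetwork that generates scenario-specific parameters to modulate features from a shared backbone. This is a minimal sketch of the general pattern, not LLM4MSR's actual architecture; the names (ScenarioMetaNet, scenario_dim, hidden_dim) are hypothetical.

```python
# Minimal sketch of a scenario-aware meta network (hypernetwork):
# a small network generates per-scenario parameters that modulate
# features from a shared recommendation backbone. Generic pattern
# only -- not LLM4MSR's exact design; all names are hypothetical.
import torch
import torch.nn as nn

class ScenarioMetaNet(nn.Module):
    def __init__(self, num_scenarios: int, scenario_dim: int, hidden_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.scenario_emb = nn.Embedding(num_scenarios, scenario_dim)
        # Meta layers emit the weight matrix and bias of a
        # scenario-specific linear transform over shared features.
        self.weight_gen = nn.Linear(scenario_dim, hidden_dim * hidden_dim)
        self.bias_gen = nn.Linear(scenario_dim, hidden_dim)

    def forward(self, shared_feats: torch.Tensor, scenario_ids: torch.Tensor):
        # shared_feats: (batch, hidden_dim) from the shared backbone.
        z = self.scenario_emb(scenario_ids)                  # (batch, scenario_dim)
        w = self.weight_gen(z).view(-1, self.hidden_dim, self.hidden_dim)
        b = self.bias_gen(z)                                 # (batch, hidden_dim)
        # Apply the generated per-sample linear transform.
        out = torch.bmm(shared_feats.unsqueeze(1), w).squeeze(1) + b
        return torch.relu(out)

# Usage: refine shared features for a batch served in scenario 2.
meta = ScenarioMetaNet(num_scenarios=4, scenario_dim=16, hidden_dim=32)
feats = torch.randn(8, 32)
ids = torch.full((8,), 2, dtype=torch.long)
scenario_feats = meta(feats, ids)  # (8, 32)
```

The generated transform could equally be a gating vector or a low-rank update; the point of the pattern is that scenario knowledge parameterizes the shared model instead of duplicating it per scenario.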

Model Merging for Knowledge Editing

Published in ACL’25 (Industry Track), Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, 2025

A two-stage framework that combines robust supervised fine-tuning with model merging for efficient knowledge editing in LLMs, preserving general capabilities while outperforming existing editing methods.

Citation:
Download Paper

Training-free LLM Merging for Multi-task Learning

Published in ACL’25, Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, 2025

Hi-Merging is a training-free method that merges specialized LLMs into a unified multi-task model via hierarchical pruning and scaling, preserving individual strengths while minimizing parameter conflicts across languages and tasks (a generic illustration of pruning-and-scaling merging follows below).

Citation:
Download Paper
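
To make the pruning-and-scaling idea concrete, here is a minimal, hedged sketch of training-free merging via task vectors: each expert's delta from the shared base is magnitude-pruned and scaled before being summed into the merged weights, which is the general recipe behind methods of this kind. It is not the exact Hi-Merging algorithm; merge_models, density, and scales are hypothetical names.

```python
# Illustrative sketch of training-free model merging via task-vector
# pruning and scaling (a generic technique, NOT the exact Hi-Merging
# algorithm). All function and parameter names are hypothetical.
import torch

def merge_models(base_state, expert_states, density=0.2, scales=None):
    """Merge fine-tuned expert checkpoints into one model, no training.

    base_state:    state_dict of the shared pretrained model
    expert_states: list of state_dicts fine-tuned on different tasks
    density:       fraction of task-vector entries kept per tensor
    scales:        optional per-expert scaling coefficients
    """
    scales = scales or [1.0 / len(expert_states)] * len(expert_states)
    merged = {k: v.clone() for k, v in base_state.items()}
    for name, base_w in base_state.items():
        if not torch.is_floating_point(base_w):
            continue  # skip non-float buffers such as integer position ids
        for expert, scale in zip(expert_states, scales):
            delta = expert[name] - base_w  # "task vector" for this tensor
            # Magnitude pruning: keep only the largest-magnitude deltas,
            # which reduces interference between experts when summed.
            k = max(1, int(density * delta.numel()))
            threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
            delta = torch.where(delta.abs() >= threshold, delta,
                                torch.zeros_like(delta))
            merged[name] += scale * delta
    return merged
```

A typical call would be merged = merge_models(base.state_dict(), [m.state_dict() for m in experts], density=0.2), after which the merged state dict is loaded back into the base architecture; "hierarchical" variants tune density and scale per layer or per module rather than globally.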

talks

teaching
