Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Published in Journal 1, 2010
This paper is about the number 2. The number 3 is left for future work.
Citation: Your Name, You. (2010). "Paper Title Number 2." Journal 1. 1(2).
Download Paper | Download Slides
Published in Journal 1, 2015
This paper is about the number 3. The number 4 is left for future work.
Citation: Your Name, You. (2015). "Paper Title Number 3." Journal 1. 1(3).
Download Paper | Download Slides
Published in ACM Transactions on Information Systems (TOIS), 2024
Uni-CTR leverages large language models (LLMs) together with pluggable domain-specific networks to address the seesaw phenomenon and the scalability challenges of multi-domain click-through rate (CTR) prediction, achieving state-of-the-art performance across diverse scenarios (a minimal sketch of the architecture follows this entry).
Citation: Zichuan Fu, Xiangyang Li, Chuhan Wu, Yichao Wang, Kuicai Dong, Xiangyu Zhao, Mengchen Zhao, Huifeng Guo, and Ruiming Tang. 2024. A Unified Framework for Multi-Domain CTR Prediction via Large Language Models. ACM Trans. Inf. Syst. Just Accepted (October 2024). https://doi.org/10.1145/3698878
Download Paper
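To make the architecture concrete, here is a minimal, hypothetical PyTorch sketch of the Uni-CTR idea: one shared LLM-style backbone encodes textualized features for every domain, and a small pluggable tower per domain produces that domain's CTR. All names and sizes here (SharedBackbone, DomainTower, the toy Transformer encoder standing in for the LLM) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Stand-in for the LLM: encodes textualized features shared by all domains."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids):                  # (batch, seq_len)
        h = self.encoder(self.embed(token_ids))    # (batch, seq_len, dim)
        return h.mean(dim=1)                       # pooled representation

class DomainTower(nn.Module):
    """Small domain-specific head; one tower is 'plugged in' per domain."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, h):
        return torch.sigmoid(self.mlp(h)).squeeze(-1)   # predicted CTR in [0, 1]

backbone = SharedBackbone()
towers = nn.ModuleDict({"news": DomainTower(), "ads": DomainTower()})

tokens = torch.randint(0, 1000, (8, 16))   # toy batch of token ids from one domain
ctr = towers["ads"](backbone(tokens))      # route through that domain's tower
print(ctr.shape)                           # torch.Size([8])
```

Under this reading, supporting a new domain amounts to registering one more tower in the ModuleDict while the backbone stays shared, which is the pluggability the description refers to.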
Published in CIKM’24 (Full Research Paper track), Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, 2024
LLM4MSR enhances multi-scenario recommendation by leveraging an LLM for knowledge extraction and hierarchical meta networks for adaptation, achieving improved performance and interpretability without fine-tuning the LLM while maintaining deployment efficiency (a toy sketch of the meta-network idea follows this entry).
Citation: Yuhao Wang, Yichao Wang, Zichuan Fu, Xiangyang Li, Wanyu Wang, Yuyang Ye, Xiangyu Zhao, Huifeng Guo, and Ruiming Tang. 2024. LLM4MSR: An LLM-Enhanced Paradigm for Multi-Scenario Recommendation. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM '24). Association for Computing Machinery, New York, NY, USA, 2472–2481. https://doi.org/10.1145/3627673.3679743
Download Paper
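As a rough illustration of the meta-network idea, the sketch below assumes the frozen LLM's extracted knowledge is summarized as a fixed-size scenario vector, and a meta network maps that vector to the weights of a small per-scenario adaptation layer. The dimensions and names (MetaNetwork, DIM, KNOW) are invented for the example and do not come from the paper.

```python
import torch
import torch.nn as nn

DIM = 32    # recommender representation size (assumed)
KNOW = 16   # size of the LLM-derived scenario knowledge vector (assumed)

class MetaNetwork(nn.Module):
    """Maps scenario knowledge to the weights of a per-scenario adaptation layer."""
    def __init__(self):
        super().__init__()
        self.gen = nn.Linear(KNOW, DIM * DIM + DIM)

    def forward(self, scenario_knowledge):          # (KNOW,)
        params = self.gen(scenario_knowledge)
        W = params[: DIM * DIM].view(DIM, DIM)      # generated weight matrix
        b = params[DIM * DIM :]                     # generated bias
        return W, b

meta = MetaNetwork()
scenario_vec = torch.randn(KNOW)        # stand-in for frozen-LLM-extracted knowledge
W, b = meta(scenario_vec)

user_repr = torch.randn(8, DIM)             # batch of backbone representations
adapted = torch.relu(user_repr @ W.T + b)   # scenario-specific transformation
print(adapted.shape)                        # torch.Size([8, 32])
```

Because only the meta network and the backbone recommender train, the LLM itself stays frozen, which is consistent with the "without LLM fine-tuning" claim above.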
Published in ACL’25 (Industry Track), Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, 2025
A two-stage framework for efficient knowledge editing in LLMs that combines robust supervised fine-tuning with model merging, preserving general capabilities while outperforming existing editing methods (a hedged sketch of the merging stage follows this entry).
Citation:
Download Paper
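A hedged sketch of the second stage: after supervised fine-tuning a copy of the model on the edited knowledge, its weights are merged back into the original so that general capabilities are preserved. Plain parameter-wise interpolation is shown purely for illustration; merge_state_dicts and alpha are invented names, and the paper's actual merging procedure may differ.

```python
import torch
import torch.nn as nn

def merge_state_dicts(base, edited, alpha=0.5):
    """Parameter-wise interpolation: (1 - alpha) * base + alpha * edited."""
    return {k: (1 - alpha) * base[k] + alpha * edited[k] for k in base}

base_model = nn.Linear(4, 4)     # stand-in for the original LLM
edited_model = nn.Linear(4, 4)   # stand-in for the copy fine-tuned on edits

merged = merge_state_dicts(base_model.state_dict(),
                           edited_model.state_dict(), alpha=0.3)
base_model.load_state_dict(merged)   # merged weights blend both behaviors
```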
Published in ACL’25, Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, 2025
Hi-Merging is a training-free method that merges specialized LLMs into a unified multi-task model through hierarchical pruning and scaling, preserving each model's individual strengths while minimizing parameter conflicts across languages and tasks (an illustrative sketch of delta-parameter merging follows this entry).
Citation:
Download Paper
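The sketch below illustrates training-free delta-parameter merging in the spirit of Hi-Merging: each specialist's difference from the shared base is pruned to its largest-magnitude entries (reducing parameter conflicts), scaled, and added back onto the base. The keep_ratio and scale knobs and the flat per-tensor pruning are assumptions made for this example; the paper's hierarchical, layer-wise schedule is not reproduced here.

```python
import torch
import torch.nn as nn

def prune_and_scale(delta, keep_ratio=0.2, scale=1.0):
    """Keep only the largest-magnitude fraction of a delta tensor, then scale it."""
    flat = delta.abs().flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.topk(k).values.min()
    return scale * delta * (delta.abs() >= threshold)

def merge(base, specialists, keep_ratio=0.2, scale=1.0):
    """Add each specialist's pruned, scaled delta from the base onto the base."""
    merged = {k: v.clone() for k, v in base.items()}
    for sd in specialists:
        for k in merged:
            merged[k] += prune_and_scale(sd[k] - base[k], keep_ratio, scale)
    return merged

base = nn.Linear(8, 8)       # stand-in for the shared base LLM
chat_lm = nn.Linear(8, 8)    # specialist 1 (e.g., one language)
math_lm = nn.Linear(8, 8)    # specialist 2 (e.g., one task)

merged_sd = merge(base.state_dict(), [chat_lm.state_dict(), math_lm.state_dict()])
base.load_state_dict(merged_sd)   # unified multi-task model
```

No gradient step occurs anywhere in this procedure, which is what makes this family of merging methods training-free.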
Published:
This is a description of your talk, which is a Markdown file that can be formatted with Markdown like any other post.
Published:
This is a description of your conference proceedings talk; note the different value of the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.