Sitemap
A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.
Pages
Posts
Future Blog Post
This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
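For reference, a minimal sketch of that setting, assuming the standard Jekyll configuration file name (_config.yml); the rest of your configuration stays unchanged:

    # _config.yml (Jekyll site configuration)
    future: false   # hide posts dated in the future until their publish date arrives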
Blog Post number 4
This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 3
This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 2
This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 1
This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Portfolio
Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Publications
Rethinking Learning Rate Tuning in the Era of Large Language Models
Published in 2023 IEEE 5th International Conference on Cognitive Machine Intelligence (CogMI), 2023
This paper explores the challenges of learning rate tuning for Large Language Models (LLMs) and introduces LRBench++ for benchmarking.
Recommended citation: Jin, H., Wei, W., Wang, X., Zhang, W., & Wu, Y. (2023). "Rethinking Learning Rate Tuning in the Era of Large Language Models." 2023 IEEE 5th International Conference on Cognitive Machine Intelligence (CogMI), 112-121.
Download Paper
DA-MoE: Towards Dynamic Expert Allocation for Mixture-of-Experts Models
Published in arXiv Preprint, 2024
This paper proposes DA-MoE, a novel dynamic router mechanism for Mixture-of-Experts (MoE) models, enabling efficient expert allocation based on token importance.
Recommended citation: Aghdam, M. A., Jin, H., & Wu, Y. (2024). "DA-MoE: Towards Dynamic Expert Allocation for Mixture-of-Experts Models." arXiv Preprint. arXiv:2409.06669.
Download Paper
CE-CoLLM: Efficient and Adaptive Large Language Models Through Cloud-Edge Collaboration
Published in arXiv Preprint, 2024
This paper proposes CE-CoLLM, a novel cloud-edge collaboration framework for efficient and adaptive inference of Large Language Models (LLMs).
Recommended citation: Jin, H., & Wu, Y. (2024). "CE-CoLLM: Efficient and Adaptive Large Language Models Through Cloud-Edge Collaboration." arXiv Preprint. arXiv:2411.02829.
Download Paper
Talks
Talk 1 on Relevant Topic in Your Field
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay, markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.