Zishun Yu

Computer Science,
University of Illinois at Chicago
Chicago, IL 60607
E-mail: zyu32 [@] uic [DOT] edu
[LinkedIn] [Kaggle] [Google Scholar]


  • 2024-05: Joining Meta GenAI @ Menlo Park as a research scientist intern.
  • 2024-05: Presented at ICLR, Vienna, Austria.
  • 2024-04: One paper accepted at UAI 2024.
  • 2024-03: Awarded travel support from ICLR 2024, see you in Vienna. 🇦🇹
  • 2024-02: Presented at ALT, San Diego, California.
  • 2024-01: One paper accepted at ICLR 2024 as spotlight presentation.
  • 2023-12: One paper accepted at ALT 2024.
  • 2023-11: One paper accepted at FMDM workshop @ NeurIPS.
  • 2023-09: Continuing my internship with ByteDance (remotely) @ Chicago.
  • 2023-08: Presented at ICML, Honolulu, Hawaii. 🏝
  • 2023-05: Joining ByteDance @ Bellevue as a research scientist intern.
  • 2023-04: One paper accepted at ICML 2023.
  • 2023-01: Received Kaggle Competition Master title.
  • 2023-01: Won a gold medal 🥇 for Kaggle annual optimization competition 2022.
  • 2022-12: Presented at NeurIPS, New Orleans, Louisiana.
  • 2022-09: One paper accepted at NeurIPS 2022.
  • 2022-05: One paper accepted at UAI 2022.
  • 2022-01: Won a silver medal 🥈 for Kaggle annual optimization competition 2021.
  • 2021-09: One paper accepted at IEEE Trans. ITS.
  • 2021-08: Starting as a Ph.D. student in CS at UIC.
  • 2021-05: Received NSF TRIPODS graduate fellowship (summer 2021).
  • 2021-05: Received M.Sc. in IEOR from UIC.
  • 2021-03: Won a bronze medal 🥉 for Kaggle RANZCR CLiP competition.
  • 2021-01: Won a silver medal 🥈 for Kaggle annual optimization competition 2020.
  • 2018-09: Presented at INFORMS annual meeting, Phoenix, Arizona.

About Me

I am a computer science Ph.D. student at the University of Illinois Chicago (UIC), fortunate to be advised by Prof. Xinhua Zhang. I do reinforcement learning (RL) and large language model (LLM) research. My current research focuses on developing principled LLM methods through the lens of RL.

  • Research interests: My research spans many aspects of RL, including (but not limited to) RL theory [ALT24], fine-tuning [ICML23], RLxLLM (fine-tuning/alignment) [ICLR24], robustness [UAI24], and other applications [IEEE21]. I am also broadly interested in machine learning problems such as (certifiably) robust (graph) learning [NeurIPS22] and optimal transport [UAI22].

  • Experience: I interned with ByteDance in Bellevue during the summer of ‘23, working on code-LLM fine-tuning from the perspective of off-policy learning. I will spend the summer of ‘24 with Meta GenAI, working on RLxLLMs. I earned my M.Sc. from UIC IEOR and my B.Eng. from HUST, and also did an undergraduate research visit to York University, Canada.

  • Kaggle: Outside of research, I compete on Kaggle and earned the Kaggle Competition Master title (top ~1% of all Kaggle users) in ‘22. I mainly enter Kaggle’s combinatorics competitions, which are usually mixed integer programming problems. (I have earned a medal in every combinatorics competition I’ve entered.)

  • I (occasionally) hike, trek, and lift for fun.