Hi, this is the home page of Nan Jiang (姜楠). I am a machine learning researcher. My core research area is reinforcement learning (RL). I care about sample efficiency, and use ideas from statistical learning theory to analyze and develop RL algorithms.
Prospective students: please read this note.
I am open to collaboration on applying RL to domain X: please read this note.
Experience
2018 – Now | Assistant Professor, CS @ UIUC
2017 – 2018 | Postdoc Researcher, MSR NYC
2011 – 2017 | PhD, CSE @ UMich

Contact
CV (Sept 2022)
Twitter: nanjiang_cs; also on Mastodon
Email: nanjiang at illinois dot edu
Office: 3322 Siebel Center
Selected Papers

Reinforcement Learning in Low-Rank MDPs with Density Features [arXiv]
(ICML-23) Audrey Huang, Jinglin Chen, Nan Jiang.
Clean results via a novel error-induction analysis that tames error exponentiation.
Offline Reinforcement Learning with Realizability and Single-policy Concentrability [arXiv]
(COLT-22) Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason D. Lee.
Behavior regularization is the key to avoiding degenerate saddle points under function approximation.
Adversarially Trained Actor Critic for Offline Reinforcement Learning [arXiv]
(ICML-22, Outstanding Paper Runner-up) Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal.
Bellman-consistent pessimism meets the robust policy improvement of imitation learning.
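A hedged sketch of the objective, in my own notation (the symbols μ, β, and ℰ below are illustrative labels, not copied from the paper): the actor maximizes and an adversarially trained critic minimizes

    \hat{\pi} \in \arg\max_{\pi} \min_{f \in \mathcal{F}} \; \mathbb{E}_{(s,a)\sim\mu}\big[ f(s,\pi) - f(s,a) \big] \;+\; \beta\,\mathcal{E}_{\mu}(f,\pi),

where \mu is the offline data distribution, f(s,\pi) := \mathbb{E}_{a'\sim\pi(\cdot|s)}[f(s,a')], \mathcal{E}_{\mu}(f,\pi) estimates the Bellman error of f with respect to \pi, and \beta \ge 0 controls the degree of pessimism. The first term alone (relative pessimism, i.e., the advantage of \pi over the behavior policy under the critic) is what yields the robust policy improvement guarantee.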
Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning [arXiv, code]
(NeurIPS-21) Siyuan Zhang, Nan Jiang.
BVFT shows promising empirical performance for offline policy selection.
On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function [arXiv]
(COLT-21) Gellert Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvári.
A cute tensorization trick for the generative model + linear V* setting.
Batch Value-function Approximation with Only Realizability [arXiv, talk]
(ICML-21) Tengyang Xie, Nan Jiang.
Learning Q* from a realizable and otherwise arbitrary function class, which was believed to be impossible.
Minimax Weight and Q-Function Learning for Off-Policy Evaluation [arXiv]
(ICML-20) Masatoshi Uehara, Jiawei Huang, Nan Jiang.
Learning importance weights and value functions from each other, with connections to many old and new algorithms in RL.
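A hedged sketch of the weight-learning (MWL) direction, in my own notation: learn w(s,a) \approx d^{\pi}(s,a)/d^{D}(s,a) by driving a Bellman-flow residual to zero against an adversarial class of Q-like test functions,

    \hat{w} \in \arg\min_{w\in\mathcal{W}} \max_{q\in\mathcal{Q}} \Big( \mathbb{E}_{(s,a,s')\sim D}\big[ w(s,a)\big(\gamma\, q(s',\pi) - q(s,a)\big) \big] + (1-\gamma)\,\mathbb{E}_{s_0\sim d_0}\big[ q(s_0,\pi) \big] \Big)^2,

where q(s,\pi) := \mathbb{E}_{a'\sim\pi(\cdot|s)}[q(s,a')], and then estimate J(\pi) \approx \mathbb{E}_{(s,a,r)\sim D}[\hat{w}(s,a)\, r]. The Q-learning direction (MQL) swaps the roles, learning q against an adversarial class of weight functions.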
Information-Theoretic Considerations in Batch Reinforcement Learning [pdf, poster, MSR talk, Simons talk]
(ICML-19) Jinglin Chen, Nan Jiang.
Revisiting some fundamental aspects of value-based RL.
Contextual Decision Processes with Low Bellman Rank are PAC-Learnable [ICML version, arXiv, errata, poster, talk video]
(ICML-17) Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire.
A new and general theory of exploration in RL with function approximation.
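A hedged sketch of the key quantity (my paraphrase; notation is illustrative): for a roll-in policy \pi and a candidate value function f with greedy policy \pi_f, define the average Bellman error at level h,

    \mathcal{E}_h(\pi, f) \;=\; \mathbb{E}\big[\, f(x_h, a_h) - r_h - f(x_{h+1}, \pi_f(x_{h+1})) \;\big|\; a_{1:h-1}\sim\pi,\ a_h\sim\pi_f \,\big].

Treating \mathcal{E}_h as a matrix indexed by (\pi, f), the Bellman rank bounds its rank across levels, and the sample complexity of the paper's algorithm (OLIVE) scales polynomially with this rank rather than with the number of states.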
Doubly Robust Off-policy Value Evaluation for Reinforcement Learning [pdf, poster]
(ICML-16) Nan Jiang, Lihong Li.
Simple and effective improvement of importance sampling via control variates.
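For the curious, here is a minimal Python sketch of the per-trajectory doubly robust estimate (not the paper's code; all names and signatures below are illustrative), which uses an approximate Q-function as a control variate for step-wise importance sampling:

```python
# Minimal sketch of a per-trajectory doubly robust (DR) off-policy value
# estimate; names and signatures are illustrative, not from the paper's code.
def dr_estimate(trajectory, actions, pi_e, pi_b, q_hat, gamma=1.0):
    """
    trajectory: list of (s, a, r) tuples collected by the behavior policy.
    actions:    finite list of available actions.
    pi_e, pi_b: pi(a, s) -> probability of taking action a in state s
                under the evaluation / behavior policy.
    q_hat:      q_hat(s, a) -> approximate Q-value of the evaluation policy.
    """
    v_dr = 0.0
    # Backward recursion: V_DR <- V_hat(s) + rho * (r + gamma * V_DR - Q_hat(s, a)),
    # where rho = pi_e(a|s) / pi_b(a|s) is the per-step importance ratio and
    # V_hat(s) = E_{a ~ pi_e}[Q_hat(s, a)] serves as the control variate.
    for s, a, r in reversed(trajectory):
        rho = pi_e(a, s) / pi_b(a, s)
        v_hat = sum(pi_e(b, s) * q_hat(s, b) for b in actions)
        v_dr = v_hat + rho * (r + gamma * v_dr - q_hat(s, a))
    return v_dr  # average over many trajectories for the final estimate
```

With correct behavior probabilities the estimate is unbiased regardless of the quality of Q-hat; a good Q-hat simply cancels much of the variance introduced by the importance weights.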