Delta and DeltaAI @ NeurIPS

NeurIPS 2025

At least 36 posters, spotlights, and workshop papers by researchers who used Delta and DeltaAI to develop new AI and machine learning methods are being presented at NeurIPS 2025.

The researchers were provided access to the NSF-funded Delta and DeltaAI resources through the national ACCESS CI program, the national NAIRR program, and the Illinois Computes program.

If we missed your NeurIPS contribution, please let us know by emailing help@ncsa.illinois.edu.

Posters

Inpainting the Neural Picture: Inferring Unrecorded Brain Area Dynamics from Multi-Animal Datasets
Ji Xia, Yizi Zhang, Shuqi Wang, Genevera I. Allen, Liam Paninski, Cole Lincoln Hurwitz, Kenneth D. Miller

Retrospective In-Context Learning for Temporal Credit Assignment with Large Language Models
Wentse Chen, Jiayu Chen, Fahim Tajwar, Hao Zhu, Xintong Duan, Ruslan Salakhutdinov, Jeff Schneider

Reasoning as an Adaptive Defense for Safety
Taeyoun Kim, Fahim Tajwar, Aditi Raghunathan, Aviral Kumar

REN: Fast and Efficient Region Encodings from Patch-Based Image Encoders
Savya Khosla, Sethuraman T V, Barnett Lee, Alex Schwing, Derek Hoiem

Space Group Equivariant Crystal Diffusion
Rees Chang, Angela Pak, Alex Guerra, Ni Zhan, Nick Richardson, Elif Ertekin, Ryan P Adams

Frame In-N-Out: Unbounded Controllable Image-to-Video Generation
Boyang Wang, Xuweiyi Chen, Matheus Gadelha, Zezhou Cheng

LabelAny3D: Label Any Object 3D in the Wild
Jin Yao, Radowan Mahmud Redoy, Sebastian Elbaum, Matthew B. Dwyer, Zezhou Cheng

NoPo-Avatar: Generalizable and Animatable Avatars from Sparse Inputs without Human Poses
Jing Wen, Alex Schwing, Shenlong Wang

ASGO: Adaptive Structured Gradient Optimization
Kang An, Yuxing Liu, Rui Pan, Yi Ren, Shiqian Ma, Donald Goldfarb, Tong Zhang

Optimizing Chain-of-Thought Reasoners via Gradient Variance Minimization in Rejection Sampling and RL
Jiarui Yao, Yifan Hao, Hanning Zhang, Hanze Dong, Wei Xiong, Nan Jiang, Tong Zhang

The Unreasonable Effectiveness of Entropy Minimization in LLM Reasoning
Shivam Agarwal, Zimin Zhang, Lifan Yuan, Jiawei Han, Hao Peng

DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation
Jiashuo Sun, Xianrui Zhong, Sizhe Zhou, Jiawei Han

L²M: Mutual Information Scaling Law for Long-Context Language Modeling
Zhuo Chen, Oriol Mayné i Comas, Zhuotao Jin, Di Luo, Marin Soljacic

Reinforcement Learning Finetunes Small Subnetworks in Large Language Models
Sagnik Mukherjee, Lifan Yuan, Dilek Hakkani-Tür, Hao Peng

RAST: Reasoning Activation in LLMs via Small-model Transfer
Siru Ouyang, Xinyu Zhu, Zilin Xiao, Minhao Jiang, Yu Meng, Jiawei Han

MR. Video: MapReduce as an Effective Principle for Long Video Understanding
Ziqi Pang, Yu-Xiong Wang

Self-Guided Hierarchical Exploration for Generalist Foundation Model Web Agents
Qianlan Yang, Xiangjun Wang, Danielle Perszyk, Yu-Xiong Wang

Training Language Models to Reason Efficiently
Daman Arora, Andrea Zanette

HoloScene: Simulation‑Ready Interactive 3D Worlds from a Single Video
Hongchi Xia, Chih-Hao Lin, Hao-Yu Hsu, Quentin Leboutet, Katelyn Gao, Michael Paulitsch, Benjamin Ummenhofer, Shenlong Wang

One Token per Highly Selective Frame: Towards Extreme Compression for Long Video Understanding
Zheyu Aqa Zhang, Ziqi Pang, Shixing Chen, Xiang Hao, Vimal Bhat, Yu-Xiong Wang

Equi-mRNA: Protein Translation Equivariant Encoding for mRNA Language Models
Mehdi Yazdani-Jahromi, Ali Khodabandeh Yalabadi, Ozlem Garibay

Taming Hyperparameter Sensitivity in Data Attribution: Practical Selection Without Costly Retraining
Weiyi Wang, Junwei Deng, Yuzheng Hu, Shiyuan Zhang, Xirui Jiang, Runting Zhang, Han Zhao, Jiaqi W. Ma

Fast Training of Large Kernel Models with Delayed Projections
Amirhesam Abedsoltan, Siyuan Ma, Parthe Pandit, Mikhail Belkin

Compute-Optimal Scaling for Value-Based Deep RL
Preston Fu, Oleh Rybkin, Zhiyuan Zhou, Michal Nauman, Pieter Abbeel, Sergey Levine, Aviral Kumar

All You Need is One: Capsule Prompt Tuning with a Single Vector
Yiyang Liu, James Chenhao Liang, Heng Fan, Wenhao Yang, Yiming Cui, Xiaotian Han, Lifu Huang, Dongfang Liu, Qifan Wang, Cheng Han

HyGen: Efficient LLM Serving via Elastic Online-Offline Request Co-location
Ting Sun, Penghan Wang, Fan Lai

Physics-Constrained Flow Matching: Sampling Generative Models with Hard Constraints
Utkarsh Utkarsh, Pengfei Cai, Alan Edelman, Rafael Gomez-Bombarelli, Christopher Vincent Rackauckas

Seeds of Structure: Patch PCA Reveals Universal Compositional Cues in Diffusion Models
Qingsong Wang, Zhengchao Wan, Mikhail Belkin, Yusu Wang

Can NeRFs “See” without Cameras?
Chaitanya Amballa, Yu-Lin Wei, Sattwik Basu, Zhijian Yang, Mehmet Ergezer, Romit Roy Choudhury

Spotlights

ARECHO: Autoregressive Evaluation via Chain-Based Hypothesis Optimization for Speech Multi-Metric Estimation
Jiatong Shi, Yifan Cheng, Bo-Hao Su, Hye-jin Shim, Jinchuan Tian, Samuele Cornell, Yiwen Zhao, Siddhant Arora, Shinji Watanabe

RGB-Only Supervised Camera Parameter Optimization in Dynamic Scenes
Fang Li, Hao Zhang, Narendra Ahuja

Towards Understanding the Mechanisms of Classifier-Free Guidance
Xiang Li, Rongrong Wang, Qing Qu

The Best Instruction-Tuning Data are Those That Fit
Dylan Zhang, Qirun Dai, Hao Peng

Proxy-SPEX: Sample-Efficient Interpretability via Sparse Feature Interactions in LLMs
Landon Butler, Abhineet Agarwal, Justin Singh Kang, Yigit Efe Erginbas, Bin Yu, Kannan Ramchandran

Corporate Needs You to Find the Difference: Revisiting Submodular and Supermodular Ratio Optimization Problems
Elfarouk Harb, Yousef Yassin, Chandra Chekuri

Workshops

Quantifying the Role of OpenFold Components in Protein Structure Prediction
Tyler L. Hayes, Giri P Krishnan

References

NeurIPS 2025 at OpenReview

Delta & DeltaAI
1205 W. Clark St.
Urbana, Illinois 61801
Email: info@ncsa.illinois.edu