Ning Ding
Latest
Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization
Free Process Rewards without Process Labels
How to Synthesize Text Data without Model Collapse?
MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding
Technologies on Effectiveness and Efficiency: A Survey of State Space Models
Process Reinforcement through Implicit Rewards
Advancing LLM Reasoning Generalists with Preference Trees
OpenPRM: Building Open-domain Process-based Reward Models with Preference Trees
Automating Exploratory Proteomics Research via Language Models
Empowering Private Tutoring by Chaining Large Language Models
Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention
UltraMedical: Building Specialized Generalists in Biomedicine
Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding
Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process
CoGenesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following
Generative AI for Complex Scenarios: Language Models are Sequence Processors
CRaSh: Clustering, Removing, and Sharing Enhance Fine-tuning without Full Large Language Model
Enhancing Chat Language Models by Scaling High-quality Instructional Conversations
Sparse Low-rank Adaptation of Pre-trained Language Models