
… namely CLEVER, which is augmentation-free and mitigates biases at the inference stage. Specifically, we train a claim-evidence fusion model and a claim-only model …
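The snippet describes training a claim-evidence fusion model alongside a claim-only model and debiasing at inference time. A minimal Python sketch of one way such inference-stage debiasing can work; the subtraction rule, the alpha weight, and the model interfaces are illustrative assumptions, not details taken from the snippet:

import torch

def debiased_verdict(fusion_model, claim_only_model, claim, evidence, alpha=1.0):
    """Inference-stage debiasing sketch: subtract the claim-only model's
    (bias-capturing) logits from the claim-evidence fusion logits.
    alpha and the subtraction rule are assumptions for illustration."""
    with torch.no_grad():
        fused = fusion_model(claim, evidence)  # logits over verdict classes
        bias = claim_only_model(claim)         # logits from the claim alone
    return (fused - alpha * bias).argmax(-1)   # debiased prediction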
Alias-Free Mamba Neural Operator - OpenReview
Sep 25, 2024 · Functionally, MambaNO achieves a clever balance between global integration, facilitated by the state space model of Mamba that scans the entire function, and local integration, …
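The global/local split the snippet describes can be made concrete with a schematic block: a per-channel linear recurrence standing in for Mamba's selective scan, summed with a small convolution for the local part. This is purely illustrative and not the MambaNO architecture:

import torch
import torch.nn as nn

class GlobalLocalBlock(nn.Module):
    """Schematic global/local split: a linear recurrence over the sequence
    (a crude stand-in for Mamba's selective state space scan) plus a small
    convolution for local integration. Illustrative only, not MambaNO."""
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.decay = nn.Parameter(torch.zeros(channels))  # per-channel gate

    def forward(self, x):                      # x: (batch, channels, length)
        a = torch.sigmoid(self.decay).view(1, -1)
        h = torch.zeros_like(x[..., 0])
        outs = []
        for t in range(x.shape[-1]):           # recurrence = global scan
            h = a * h + x[..., t]
            outs.append(h)
        return torch.stack(outs, dim=-1) + self.local(x)  # global + local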
Measuring Mathematical Problem Solving With the MATH Dataset
Oct 18, 2021 · To find the limits of Transformers, we collected 12,500 math problems. While a three-time IMO gold medalist got 90%, GPT-3 models got ~5%, with accuracy increasing slowly.
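For context on how figures like the ~5% accuracy are computed: MATH solutions mark the final answer in \boxed{...}, so evaluation is exact match on the extracted answer. A minimal sketch; the regex is simplistic and ignores nested braces:

import re

def extract_boxed(solution: str):
    """Return the last \\boxed{...} answer in a LaTeX solution string,
    or None. Simplified: does not handle nested braces."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", solution)
    return matches[-1] if matches else None

def exact_match_accuracy(predictions, references):
    """Fraction of problems whose extracted final answers match."""
    hits = sum(extract_boxed(p) == extract_boxed(r)
               for p, r in zip(predictions, references))
    return hits / len(references)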
Weakly-Supervised Affordance Grounding Guided by Part-Level...
Jan 22, 2025 · In this work, we focus on the task of weakly supervised affordance grounding, where a model is trained to identify affordance regions on objects using human-object …
Large Language Models are Human-Level Prompt Engineers
Feb 1, 2023 · We propose an algorithm for automatic instruction generation and selection for large language models with human-level performance.
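The algorithm the snippet refers to generates candidate instructions from input-output demonstrations and selects the one that scores best on held-out examples. A minimal sketch; llm (prompt -> text) and score (output, target -> float) are hypothetical stand-ins, not the paper's API, and the seed template is an assumption:

def ape_select(llm, demos, eval_set, score, n_candidates=20):
    """Automatic prompt engineering, schematically: propose candidate
    instructions from demonstrations, then keep the best-scoring one.
    `llm` and `score` are hypothetical interfaces for illustration."""
    seed = ("I gave a friend an instruction. Based on the instruction, "
            "they produced these input-output pairs:\n")
    seed += "\n".join(f"Input: {x} Output: {y}" for x, y in demos)
    seed += "\nThe instruction was:"
    candidates = [llm(seed) for _ in range(n_candidates)]     # generation
    return max(candidates,                                    # selection
               key=lambda inst: sum(score(llm(f"{inst}\nInput: {x}\nOutput:"), y)
                                    for x, y in eval_set))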
Reasoning of Large Language Models over Knowledge Graphs with...
Jan 22, 2025 · While large language models (LLMs) have made significant progress in processing and reasoning over knowledge graphs, current methods suffer from a high non-retrieval rate.
Faster Cascades via Speculative Decoding | OpenReview
Jan 22, 2025 · Cascades and speculative decoding are two common approaches to improving language models' inference efficiency. Both approaches interleave two models, but via …
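Of the two, speculative decoding is the easier to sketch: a small model drafts a few tokens cheaply and the large model verifies them in one parallel pass. The interfaces below (small_lm returning a greedy next token, large_lm returning its own greedy token for every draft position) are hypothetical, and the paper's cascade combination is not reproduced here:

def speculative_decode(small_lm, large_lm, prefix, k=4, max_len=128):
    """Greedy speculative decoding sketch. The small model drafts k tokens
    autoregressively; the large model checks all draft positions at once
    and the longest agreeing prefix is accepted. On the first disagreement
    the large model's token is substituted. Hypothetical interfaces."""
    tokens = list(prefix)
    while len(tokens) < max_len:
        draft = []
        for _ in range(k):                        # cheap drafting
            draft.append(small_lm(tokens + draft))
        verified = large_lm(tokens, draft)        # one parallel pass
        for drafted, checked in zip(draft, verified):
            tokens.append(checked)
            if drafted != checked:                # reject rest of the draft
                break
    return tokens[:max_len]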
Thieves on Sesame Street! Model Extraction of BERT-based APIs
Dec 19, 2019 · Finally, we study two defense strategies against model extraction—membership classification and API watermarking—which, while successful against some adversaries, can …
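API watermarking as a defense is often realized by answering a small, secret subset of queries with a deliberately perturbed label and logging those pairs, so a suspected extracted model can later be tested for having memorized them. A minimal sketch under that reading; the hashing scheme, secret, and rate are assumptions:

import hashlib

SECRET = b"server-side-watermark-key"            # hypothetical secret

def is_watermark_query(text: str, rate: float = 0.001) -> bool:
    """Deterministically mark a tiny, secret-keyed fraction of queries."""
    h = hashlib.sha256(SECRET + text.encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64 < rate

def serve(model_predict, text: str, n_labels: int) -> int:
    """Return the model's label, except on watermark queries, where the
    label is rotated to a wrong one; those (query, label) pairs would be
    logged to later test a suspect model for memorization."""
    label = model_predict(text)
    if is_watermark_query(text):
        label = (label + 1) % n_labels           # deliberately perturbed
    return label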
Diffusion Generative Modeling for Spatially Resolved Gene...
Jan 22, 2025 · Spatial Transcriptomics (ST) allows a high-resolution measurement of RNA sequence abundance by systematically connecting cell morphology depicted in Hematoxylin …
Not All Tokens Are What You Need for Pretraining | OpenReview
Sep 25, 2024 · Previous language model pre-training methods have uniformly applied a next-token prediction loss to all training tokens. Challenging this norm, we posit that "Not all tokens …
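The abstract's premise, choosing which tokens contribute to the next-token loss, can be sketched as per-token loss masking: score tokens by their excess loss relative to a reference model and train only on the highest-scoring fraction. The scoring rule and keep ratio are illustrative assumptions in the spirit of the abstract:

import torch
import torch.nn.functional as F

def selective_lm_loss(logits, ref_logits, targets, keep_ratio=0.6):
    """Per-token cross-entropy for the trained and a reference model;
    only the keep_ratio fraction of tokens with the largest excess loss
    (trained minus reference) contributes to the training loss.
    logits/ref_logits: (batch, seq, vocab); targets: (batch, seq)."""
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    with torch.no_grad():
        ref_ce = F.cross_entropy(ref_logits.transpose(1, 2), targets,
                                 reduction="none")
        excess = ce.detach() - ref_ce             # harder than reference?
        k = max(1, int(keep_ratio * excess.numel()))
        threshold = excess.flatten().topk(k).values.min()
        mask = (excess >= threshold).float()      # 1 = token kept
    return (ce * mask).sum() / mask.sum()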