Papers
- MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
- RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
- Differentially Private Heavy Hitter Detection using Federated Analytics
- Llama 2: Open Foundation and Fine-Tuned Chat Models
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
- Neuralangelo: High-Fidelity Neural Surface Reconstruction
- Orca: Progressive Learning from Complex Explanation Traces of GPT-4
- AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
- TransAct: Transformer-based Realtime User Action Model for Recommendation at Pinterest
- IMAGEBIND: One Embedding Space To Bind Them All
- ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models
- Voyager: An Open-Ended Embodied Agent with Large Language Models
- QLoRA: Efficient Finetuning of Quantized LLMs
- Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture
- Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- Segment Anything
- Sequential Attention: Making AI Models Leaner and Faster Without Sacrificing Accuracy
- DAMO-YOLO: A Report on Real-Time Object Detection Design
- VECO 2.0: Cross-lingual Language Model Pre-training with Multi-granularity Contrastive Learning
- Sparks of Artificial General Intelligence: Early experiments with GPT-4
- Generative Agents: Interactive Simulacra of Human Behavior
- Regression Transformer enables concurrent sequence regression and generation for molecular language modelling
- CAMEL: Communicative Agents for “Mind” Exploration of Large Language Models
- Reflexion: Language Agents with Verbal Reinforcement Learning
- Application-Agnostic Language Modeling for On-Device ASR
- Language Is Not All You Need: Aligning Perception with Language Models
- LLaMA: Open and Efficient Foundation Language Models
- Adding Conditional Control to Text-to-Image Diffusion Models
- Toolformer: Language Models Can Teach Themselves to Use Tools
- Flow Matching for Generative Modeling
- Why we built an AI supercomputer in the cloud
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video
- Large language models generate functional protein sequences across diverse families
- Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers
- Constitutional AI: Harmlessness from AI Feedback
- Robust Speech Recognition via Large-Scale Weak Supervision
- Stable Diffusion with Core ML on Apple Silicon
- Fast Inference from Transformers via Speculative Decoding
- Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
- GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers
- wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
- data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
- ReAct: Synergizing Reasoning and Acting in Language Models
- PaLM: Scaling Language Modeling with Pathways
- Monolith: Real Time Recommendation System With Collisionless Embedding Table
- Rethinking Personalized Ranking at Pinterest: An End-to-End Approach
- Toy Models of Superposition
- AudioLM: A Language Modeling Approach to Audio Generation
- Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
- Prompt Tuning for Generative Multimodal Pretrained Models
