Papers
- The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
- DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks
- AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
- Improving Language Understanding by Generative Pre-Training (GPT-1)
- Language Models are Unsupervised Multitask Learners
- Mask R-CNN
- Billion-scale similarity search with GPUs
- Learning with Privacy at Scale
- Proximal Policy Optimization Algorithms
- Attention Is All You Need
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
- Bag of Tricks for Efficient Text Classification
- Concrete Problems in AI Safety
- Deep Residual Learning for Image Recognition
- Performance of Large Language Models in Answering Critical Care Medicine Questions
