Why can GPT learn in-context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers
Read the paper on arXiv or as a webpage.

Paper's code repository.