Meta (out-of-context) learning in neural networks

Outline

  • They identify a phenomenon they name “meta out-of-context learning” (meta OCL) that LLMs exhibit.
  • They suggest that meta OCL leads LLMs to more readily internalize the semantic content of text when it appears to come from trustworthy sources.
  • They say that LLMs then use the relevant abstractions even when those abstractions are not present in context.
  • They demonstrate meta OCL in a synthetic computer-vision (CV) setting.
  • They propose two hypotheses for the emergence of meta OCL.
  • They reflect on what these results imply about the capabilities of future AI systems.