Notable LLMs

See also 👉 Llamas 🦙

Instruction Tuning / Supervised Finetuning

Retrieval Augmented Generation (RAG)

Chain of Thought

Chain-of-Thought (CoT) prompting induces language models to produce intermediate reasoning steps before the final answer, leveraging in-context learning. It was introduced (AFAIK) in Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al., 2022), in the context of arithmetic reasoning, i.e. wordy numeracy problems like:

Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?

A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
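
The prompt deliberately ends with the unanswered question: the worked exemplar cues the model to imitate the step-by-step format. A minimal sketch of few-shot CoT prompting, with `llm` standing in as a hypothetical completion function (not any particular API) and the answer-extraction regex assuming the "The answer is N." convention from the exemplar:

```python
import re

# One worked exemplar (question + step-by-step rationale); the model is
# expected to imitate this reasoning format for the new question.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the exemplar and end on 'A:' to cue a chain of thought."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

def extract_answer(completion: str) -> str | None:
    """Pull the final number out of a 'The answer is N.'-style completion."""
    match = re.search(r"The answer is (-?\d+)", completion)
    return match.group(1) if match else None

prompt = build_cot_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch "
    "and bought 6 more, how many apples do they have?"
)
# `llm` is a hypothetical stand-in for whatever completion API you use.
# A CoT completion might read: "The cafeteria started with 23 apples.
# They used 20, leaving 3. They bought 6 more, so 3 + 6 = 9.
# The answer is 9."
# print(extract_answer(llm(prompt)))  # -> "9"
```

The extraction step matters in practice: because the model emits its reasoning before the answer, you can't just take the first token of the completion.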

Tokenisation

Tokenisation has its own note 👉 Tokenisation

Watermarking of (Large) Language Models

Agents