Transparency

Sources — Every claim, traced

“Official sources only” is a claim that needs proof. Here it is. Every academic paper, official documentation page, company technical report, and institutional publication cited across the AI Atlas — listed by guide, linked directly. 116 sources across 21 guides. All verified April 2026.

116 cited sources · 19 academic papers · 21 guides covered · 0 blog posts or opinion

Source policy

Every factual claim in the AI Atlas is grounded in one of the following: a peer-reviewed or preprint academic paper (linked to arXiv or the original journal), official company documentation or technical reports (linked to the company’s own domain), or verified institutional research from government bodies, UNESCO, or academic institutions. Opinion pieces, blog posts, and secondary sources are not used as primary citations.

Where a company has not published technical papers — Midjourney, ElevenLabs, and Runway do not — this is stated explicitly in the guide. The source listed is the most authoritative available: company documentation, API references, or research published on the company’s own domain.

19 peer-reviewed and preprint academic papers (arXiv)
12+ official company documentation domains
4 institutional sources (UNESCO, DfE, QAA, ICO)
0 blog posts, opinion pieces, or secondary sources used as citations

Academic and research papers

The foundational peer-reviewed papers that underpin modern AI — cited in the technical depth sections of each guide. All links go directly to arXiv or the original journal.

"Attention Is All You Need" — Vaswani et al., 2017
arXiv preprint / peer-reviewed
arxiv.org/abs/1706.03762
"BERT: Pre-training of Deep Bidirectional Transformers" — Devlin et al., 2018
arXiv preprint / peer-reviewed
arxiv.org/abs/1810.04805
"Scaling Laws for Neural Language Models" — Kaplan et al., 2020
arXiv preprint / peer-reviewed
arxiv.org/abs/2001.08361
"Language Models are Few-Shot Learners" (GPT-3) — Brown et al., 2020
arXiv preprint / peer-reviewed
arxiv.org/abs/2005.14165
"Learning Transferable Visual Models From Natural Language Supervision" (CLIP) — Radford et al., 2021
arXiv preprint / peer-reviewed
arxiv.org/abs/2103.00020
"LoRA: Low-Rank Adaptation of Large Language Models" — Hu et al., 2021
arXiv preprint / peer-reviewed
arxiv.org/abs/2106.09685
"High-Resolution Image Synthesis with Latent Diffusion Models" — Rombach et al., 2022
arXiv preprint / peer-reviewed
arxiv.org/abs/2112.10752
"Training Language Models to Follow Instructions" (InstructGPT) — Ouyang et al., 2022
arXiv preprint / peer-reviewed
arxiv.org/abs/2203.02155
"Training Compute-Optimal Large Language Models" (Chinchilla) — Hoffmann et al., 2022
arXiv preprint / peer-reviewed
arxiv.org/abs/2203.15556
"Constitutional AI: Harmlessness from AI Feedback" — Bai et al., 2022
arXiv preprint / peer-reviewed
arxiv.org/abs/2212.08073
"LLaMA: Open and Efficient Foundation Language Models" — Touvron et al., 2023
arXiv preprint / peer-reviewed
arxiv.org/abs/2302.13971
"GPT-4 Technical Report" — OpenAI, 2023
arXiv preprint / peer-reviewed
arxiv.org/abs/2303.08774
"QLoRA: Efficient Finetuning of Quantized LLMs" — Dettmers et al., 2023
arXiv preprint / peer-reviewed
arxiv.org/abs/2305.14314
"Llama 2: Open Foundation and Fine-Tuned Chat Models" — Touvron et al., 2023
arXiv preprint / peer-reviewed
arxiv.org/abs/2307.09288
"Mistral 7B" — Jiang et al., 2023
arXiv preprint / peer-reviewed
arxiv.org/abs/2310.06825
"Gemini: A Family of Highly Capable Multimodal Models" — Gemini Team, Google, 2023
arXiv preprint / peer-reviewed
arxiv.org/abs/2312.11805
"Mixtral 8x7B — Sparse Mixture of Experts" — Jiang et al., 2024
arXiv preprint / peer-reviewed
arxiv.org/abs/2401.04088
"Gemini 1.5: Unlocking Multimodal Understanding Across Millions of Tokens of Context" — Gemini Team, Google, 2024
arXiv preprint / peer-reviewed
arxiv.org/abs/2403.05530
"The Llama 3 Herd of Models" — Llama Team, Meta, 2024
arXiv preprint / peer-reviewed
arxiv.org/abs/2407.21783

All sources by guide

Official documentation, API references, company technical reports, and institutional guidance — organised by the guide in which each source is cited. Click any guide name to go directly to that guide.

AI Assistants

ChatGPT
openai.com/chatgpt/pricing
"GPT-4 Technical Report" — OpenAI, 2023 (arxiv.org)
"Training Language Models to Follow Instructions" (InstructGPT) — Ouyang et al., 2022 (arxiv.org)
openai.com/index/hello-gpt-4o
openai.com/index/learning-to-reason-with-llms
platform.openai.com/docs/overview
openai.com/safety/preparedness
openai.com/chatgpt
OpenAI Research (openai.com)
API Documentation (platform.openai.com)

Claude
anthropic.com/claude
docs.anthropic.com
"Constitutional AI: Harmlessness from AI Feedback" — Bai et al., 2022 (arxiv.org)
transformer-circuits.pub
anthropic.com/responsible-scaling-policy
anthropic.com/claude/model-cards
anthropic.com/news/computer-use
docs.anthropic.com/en/docs/about-claude/models
Interpretability research (transformer-circuits.pub)

Gemini
gemini.google.com
one.google.com
"Gemini: A Family of Highly Capable Multimodal Models" — Gemini Team, Google, 2023 (arxiv.org)
"Gemini 1.5: Unlocking Multimodal Understanding Across Millions of Tokens of Context" — Gemini Team, Google, 2024 (arxiv.org)
blog.google/technology/google-deepmind
ai.google.dev/gemini-api/docs
Google AI Developer (ai.google.dev)

Microsoft Copilot
microsoft.com/copilot
learn.microsoft.com — Copilot Architecture
learn.microsoft.com — Copilot Privacy
learn.microsoft.com/azure/ai-services/openai
copilot.microsoft.com
M365 Copilot docs (learn.microsoft.com)

Perplexity
perplexity.ai/pro
perplexity.ai/hub/blog
docs.perplexity.ai
perplexity.ai
Perplexity blog (perplexity.ai)

Grok
github.com/xai-org/grok-1
x.ai/blog/grok-os
x.ai/blog/grok-2
docs.x.ai
grok.com
xAI blog (x.ai)
Open Source Models

Llama
"LLaMA: Open and Efficient Foundation Language Models" — Touvron et al., 2023 (arxiv.org)
"Llama 2: Open Foundation and Fine-Tuned Chat Models" — Touvron et al., 2023 (arxiv.org)
"The Llama 3 Herd of Models" — Llama Team, Meta, 2024 (arxiv.org)
github.com/meta-llama/llama-models
ai.meta.com/blog/llama-4
"LoRA: Low-Rank Adaptation of Large Language Models" — Hu et al., 2021 (arxiv.org)
"QLoRA: Efficient Finetuning of Quantized LLMs" — Dettmers et al., 2023 (arxiv.org)
github.com/meta-llama/llama-models — LICENSE
ai.meta.com/llama
GitHub — model cards (github.com)
Ollama, run locally (ollama.com)
Hugging Face PEFT (huggingface.co)

Mistral
docs.mistral.ai
"Mistral 7B" — Jiang et al., 2023 (arxiv.org)
"Mixtral 8x7B — Sparse Mixture of Experts" — Jiang et al., 2024 (arxiv.org)
mistral.ai
Le Chat (chat.mistral.ai)
Image, Video and Audio AI

Midjourney
midjourney.com/account
"High-Resolution Image Synthesis with Latent Diffusion Models" — Rombach et al., 2022 (arxiv.org)
"Learning Transferable Visual Models From Natural Language Supervision" (CLIP) — Radford et al., 2021 (arxiv.org)
docs.midjourney.com
midjourney.com
Community showcase (midjourney.com)

DALL-E 3
openai.com/dall-e-3
openai.com/papers/dall-e-3.pdf (cdn.openai.com)
platform.openai.com/docs/guides/images

Sora
sora.com
openai.com/research/video-generation-models-as-world-simulators

ElevenLabs
elevenlabs.io/pricing
elevenlabs.io/docs
elevenlabs.io/docs/models
elevenlabs.io
Safety policy (elevenlabs.io)

Runway
runwayml.com/pricing
runwayml.com/research/gen-3-alpha
docs.runwayml.com
runwayml.com
Research blog (runwayml.com)
Productivity AI

Copilot in M365
microsoft.com/microsoft-365/copilot
learn.microsoft.com — Copilot Architecture
Data protection commitments (learn.microsoft.com)
learn.microsoft.com — Copilot Studio
Microsoft Learn docs (learn.microsoft.com)

Gemini for Workspace
workspace.google.com/products/gemini
workspace.google.com/security
notebooklm.google.com/about
NotebookLM (notebooklm.google.com)
Vertex AI (cloud.google.com)

Notion AI
notion.so/pricing
notion.so/blog/notion-ai
notion.so/help/notion-ai
notion.so privacy overview (notion.so)
notion.so
Understanding AI

AI History
Turing, "Computing Machinery and Intelligence" — Mind, Vol. LIX, No. 236, 1950 (academic.oup.com)
"Attention Is All You Need" — Vaswani et al., 2017 (arxiv.org)
"Language Models are Few-Shot Learners" (GPT-3) — Brown et al., 2020 (arxiv.org)
nature.com
NeurIPS proceedings (proceedings.neurips.cc)
"BERT: Pre-training of Deep Bidirectional Transformers" — Devlin et al., 2018 (arxiv.org)
"Training Language Models to Follow Instructions" (InstructGPT) — Ouyang et al., 2022 (arxiv.org)

Start Here
"Attention Is All You Need" — Vaswani et al., 2017 (arxiv.org)
"Training Language Models to Follow Instructions" (InstructGPT) — Ouyang et al., 2022 (arxiv.org)
"Language Models are Few-Shot Learners" (GPT-3) — Brown et al., 2020 (arxiv.org)
"Scaling Laws for Neural Language Models" — Kaplan et al., 2020 (arxiv.org)
"Constitutional AI: Harmlessness from AI Feedback" — Bai et al., 2022 (arxiv.org)

AI Glossary
"Constitutional AI: Harmlessness from AI Feedback" — Bai et al., 2022 (arxiv.org)
"Attention Is All You Need" — Vaswani et al., 2017 (arxiv.org)
"Training Compute-Optimal Large Language Models" (Chinchilla) — Hoffmann et al., 2022 (arxiv.org)
"Learning Transferable Visual Models From Natural Language Supervision" (CLIP) — Radford et al., 2021 (arxiv.org)
Use Cases and For You

AI for Students
qaa.ac.uk — Academic Integrity

AI for Teachers
gov.uk
UNESCO (unesdoc.unesco.org)

Sources are reviewed and updated as guides are revised. All links on this page were verified in April 2026. If you find a broken link, a missing citation, or believe any claim is incorrectly attributed, please contact us via the About page.