Code Llama was trained on a 16k-token context window. In addition, all three model variants (Code Llama, Code Llama - Python, and Code Llama - Instruct) received additional long-context fine-tuning, allowing them to handle contexts of up to 100,000 tokens.

Notable Code Llama descendants include NexusRaven and Phind CodeLlama.

Further reading:

- Introducing Code Llama, a state-of-the-art large language model for coding (Meta AI): https://ai.meta.com/blog/code-llama-large-language-model-coding/
- codellama (Code Llama) org on Hugging Face: https://huggingface.co/codellama
- Llama 2 learns to code (Hugging Face blog): https://huggingface.co/blog/codellama
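The Hugging Face org linked above hosts the released checkpoints. As a minimal sketch of putting one to work with the `transformers` library, assuming the `codellama/CodeLlama-7b-hf` model id (one of several published variants) and illustrative generation settings:

```python
# Minimal sketch: load a Code Llama checkpoint from the Hugging Face Hub
# and complete a code prompt. Model id and settings are assumptions chosen
# for illustration, not the only supported configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # assumed checkpoint from the codellama org
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short completion of the function body. Thanks to the
# long-context fine-tuning described above, much larger prompts (up to
# ~100k tokens) remain usable, subject to available memory.
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

In practice you would pick the variant that matches your use case (the Python-specialized or instruction-tuned checkpoints, or a larger size) and add hardware-appropriate loading options; the sketch keeps the defaults for brevity.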