Mixtral 8x7B Instruct

  • Developed by
    • Mistral AI
  • Model type
    • Multilingual sparse Mixture-of-Experts (MoE) model
  • Task
    • Text Generation
  • Model description
    • The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative sparse Mixture-of-Experts model. It outperforms Llama 2 70B on most benchmarks. A usage sketch follows below.
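
A minimal sketch of running text generation with the instruct model via the Hugging Face `transformers` library. The model ID `mistralai/Mixtral-8x7B-Instruct-v0.1`, the chat-template call, and the generation settings are assumptions for illustration and are not specified by this card.

```python
# Sketch: load Mixtral 8x7B Instruct and generate a reply to a single user message.
# Assumes the Hugging Face model ID "mistralai/Mixtral-8x7B-Instruct-v0.1" and enough
# GPU memory (or CPU offload via device_map="auto") to host the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # assumed model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available devices
    torch_dtype="auto",  # use the checkpoint's native precision
)

# Format a single-turn chat prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Explain what a sparse Mixture-of-Experts model is."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Generate a completion and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```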