Meta: Llama 4 Maverick

Provided by Meta

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It supports multilingual text and image input, and produces multilingual text and code output across 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. Maverick features early fusion for native multimodality and a 1 million token context window. It was trained on a curated mixture of public, licensed, and Meta-platform data, covering ~22 trillion tokens, with a knowledge cutoff in August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications requiring advanced multimodal understanding and high model throughput.

Supports streaming output, with a maximum of 16K tokens

Model Features

Input Modalities

  • text
  • image
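
Since image input is supported, the sketch below shows one way to pass an image alongside text. It assumes the endpoint accepts OpenAI-style content parts ("text" and "image_url" objects inside the content array); the image URL and prompt are placeholders, not values taken from this page.

# Image-input request sketch (assumes OpenAI-style content parts; URL and prompt are placeholders)
curl -X POST "https://chat.chatsking.com/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "llama-4-maverick-17B-128e-instruct-fp8",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image." },
          { "type": "image_url", "image_url": { "url": "https://example.com/photo.jpg" } }
        ]
      }
    ],
    "max_tokens": 1000
  }'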

Model Parameters

Max Tokens: 16K
Default Temperature: 0.7
Streaming Output: Supported
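
Since streaming output is supported, a request can ask for tokens incrementally. The sketch below assumes the endpoint honors the OpenAI-style "stream": true flag and emits server-sent events; that flag is an assumption, not something stated on this page.

# Streaming request sketch (assumes the OpenAI-style "stream": true flag and SSE output)
curl -N -X POST "https://chat.chatsking.com/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "llama-4-maverick-17B-128e-instruct-fp8",
    "messages": [{ "role": "user", "content": "Write a one-line summary of MoE models." }],
    "max_tokens": 1000,
    "temperature": 0.7,
    "stream": true
  }'

The -N flag disables curl's output buffering so chunks are printed as they arrive.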

Pricing

Credit Pricing

Text input: 0 credits
Image input: 0 credits
Message output: 0 credits

API Pricing

Input price: $0.150 / 1K tokens
Output price: $0.600 / 1K tokens
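
At these rates, the cost of a request can be estimated from its token counts: cost = input_tokens / 1000 × $0.150 + output_tokens / 1000 × $0.600. The awk sketch below works through one hypothetical example; the token counts are made up for illustration.

# Hypothetical cost estimate at the listed rates ($0.150/1K input, $0.600/1K output)
awk 'BEGIN {
  input_tokens = 2000; output_tokens = 500                          # hypothetical usage
  cost = input_tokens / 1000 * 0.150 + output_tokens / 1000 * 0.600
  printf "Estimated cost: $%.3f\n", cost                            # prints: Estimated cost: $0.600
}'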

Usage Examples

API Call Example
curl -X POST "https://chat.chatsking.com/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "llama-4-maverick-17B-128e-instruct-fp8",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ],
    "max_tokens": 1000,
    "temperature": 0.7
  }'