Xiaomi MiMo

https://platform.xiaomimimo.com/#/docs

tip

We support ALL Xiaomi MiMo models. Just prefix the model name with xiaomi_mimo/, i.e. model=xiaomi_mimo/<any-model-on-xiaomi-mimo>, when sending LiteLLM requests.

API Key

# env variable
import os

os.environ['XIAOMI_MIMO_API_KEY'] = "your-api-key"
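
Alternatively, the key can be passed per request instead of via the environment. A minimal sketch using litellm's api_key parameter (the model name and placeholder key here are illustrative):

from litellm import completion

# pass the Xiaomi MiMo key directly instead of reading XIAOMI_MIMO_API_KEY
response = completion(
    model="xiaomi_mimo/mimo-v2-flash",
    messages=[{"role": "user", "content": "Hello"}],
    api_key="your-xiaomi-mimo-api-key",  # placeholder, substitute your real key
)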

Sample Usage

from litellm import completion
import os

os.environ['XIAOMI_MIMO_API_KEY'] = ""
response = completion(
    model="xiaomi_mimo/mimo-v2-flash",
    messages=[
        {
            "role": "user",
            "content": "What's the weather like in Boston today in Fahrenheit?",
        }
    ],
    max_tokens=1024,
    temperature=0.3,
    top_p=0.95,
)
print(response)
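
The response object follows the OpenAI chat-completion shape, so the generated text can be read off the first choice. Continuing the example above, a short sketch of the standard access pattern:

# extract just the assistant's reply text
print(response.choices[0].message.content)

# token accounting is available on the usage object
print(response.usage.total_tokens)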

Sample Usage - Streaming

from litellm import completion
import os

os.environ['XIAOMI_MIMO_API_KEY'] = ""
response = completion(
    model="xiaomi_mimo/mimo-v2-flash",
    messages=[
        {
            "role": "user",
            "content": "What's the weather like in Boston today in Fahrenheit?",
        }
    ],
    stream=True,
    max_tokens=1024,
    temperature=0.3,
    top_p=0.95,
)

for chunk in response:
    print(chunk)
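
Each streamed chunk carries an incremental delta rather than the full message. A minimal sketch of assembling the streamed text, replacing the print loop above (the delta content can be None on the final chunk, hence the guard):

full_text = ""
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        full_text += delta
print(full_text)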

Usage with LiteLLM Proxy Server

Here's how to call a Xiaomi MiMo model with the LiteLLM Proxy Server

  1. Modify the config.yaml

model_list:
  - model_name: my-model
    litellm_params:
      model: xiaomi_mimo/<your-model-name> # add the xiaomi_mimo/ prefix to route to the Xiaomi MiMo provider
      api_key: api-key # your Xiaomi MiMo API key

  2. Start the proxy

$ litellm --config /path/to/config.yaml

  3. Send Request to LiteLLM Proxy Server

import openai

client = openai.OpenAI(
    api_key="sk-1234",  # pass litellm proxy key, if you're using virtual keys
    base_url="http://0.0.0.0:4000"  # litellm-proxy-base url
)

response = client.chat.completions.create(
    model="my-model",
    messages=[
        {
            "role": "user",
            "content": "what llm are you"
        }
    ],
)

print(response)
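
Because the proxy exposes an OpenAI-compatible endpoint, streaming works the same way through it. A brief sketch reusing the client above (model name and proxy URL are the placeholders from the config):

# stream tokens through the proxy instead of waiting for the full response
stream = client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "what llm are you"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        print(delta, end="", flush=True)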

Supported Models

Model Name    | Usage
mimo-v2-flash | completion(model="xiaomi_mimo/mimo-v2-flash", messages)