MiniCPM-o-2.6
MiniCPM-o is the latest series of end-side multimodal LLMs (MLLMs), upgraded from the MiniCPM-V series. These models accept images, video, text, and audio as input and produce high-quality text and speech output in an end-to-end fashion.

minicpm-o-2.6 WebDemo Deployment

Environment Setup

The base environment is as follows:

----------------
ubuntu 22.04
Python 3.12.3
cuda 12.1
pytorch 2.3.0
----------------

The demo can run on P40, 3090, or RTX 40-series GPUs.

Open a terminal or create a new Jupyter .ipynb file, switch pip to a mirror source to speed up downloads, and install the ModelScope dependency:

pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

pip install modelscope==1.20.0

Install the remaining dependencies manually with pip:

pip install Pillow==10.1.0 torch==2.3.1 torchaudio==2.3.1 torchvision==0.18.1 transformers==4.44.2 sentencepiece==0.2.0 vector-quantize-pytorch==1.18.5 vocos==0.1.0 accelerate==1.2.1 timm==0.9.10 soundfile==0.12.1 librosa==0.9.0 decord moviepy fastapi uvicorn python-multipart streamlit
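
Before moving on, you can optionally verify that the key packages installed correctly and that PyTorch can see the GPU. A minimal check (the versions printed should match the ones pinned above):

import torch
import transformers

# Confirm the installed versions and CUDA visibility
print("torch:", torch.__version__)                # expected 2.3.1
print("transformers:", transformers.__version__)  # expected 4.44.2
print("CUDA available:", torch.cuda.is_available())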

Model Download

Download the MiniCPM-o 2.6 model files.

In the root directory (the /workspace path), create a model_download.py file, paste in the following content, and remember to save the file. Then run python model_download.py to start the download. The model is about 18 GB, and the download usually takes 5-30 minutes.

from modelscope import snapshot_download

# Remember to change cache_dir to your own directory path
model_dir = snapshot_download("OpenBMB/MiniCPM-o-2_6", cache_dir="/workspace", revision="master")
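
Once the download finishes, you can sanity-check that the weights landed where expected. A small optional sketch, assuming the cache_dir used above (the model id maps to the OpenBMB/MiniCPM-o-2_6 subdirectory):

import os

model_dir = "/workspace/OpenBMB/MiniCPM-o-2_6"
# Walk the model directory and report the total size (should be roughly 18 GB)
total = sum(os.path.getsize(os.path.join(root, f))
            for root, _, files in os.walk(model_dir) for f in files)
print(f"{len(os.listdir(model_dir))} entries, total size: {total / 1e9:.1f} GB")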

Code Preparation

In the root directory (the /workspace path), create a minicpm-o-2.6WebDemo_streamlit.py file, paste in the following content, and remember to save the file. The code below is thoroughly commented; if anything is unclear, feel free to open an issue.

import os.path

import streamlit as st
import torch
from PIL import Image
from decord import VideoReader, cpu
import numpy as np
from transformers import AutoModel, AutoTokenizer

# Model path - make sure this path exists and contains the correct model files
model_path = "/workspace/OpenBMB/MiniCPM-o-2_6"
upload_path = "/workspace/upload"

# Create the upload directory if it does not exist
os.makedirs(upload_path, exist_ok=True)

# Display names for user and assistant in the chat UI
U_NAME = "User"
A_NAME = "Assistant"

# Configure the Streamlit page
st.set_page_config(
    page_title="Self-LLM MiniCPM-V-2_6 Streamlit",
    page_icon=":robot:",
    layout="wide"
)

# Load the model and tokenizer (cached for performance)
@st.cache_resource
def load_model_and_tokenizer():
    print(f"load_model_and_tokenizer from {model_path}")
    model = (AutoModel.from_pretrained(model_path,
                                       trust_remote_code=True,
                                       attn_implementation="sdpa").
             to(dtype=torch.bfloat16))
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    return model, tokenizer


# Initialize the model and tokenizer if they are not in the session state yet
if "model" not in st.session_state:
    st.session_state.model, st.session_state.tokenizer = load_model_and_tokenizer()
    st.session_state.model.eval().cuda()
    print("model and tokenizer loaded successfully!")

# Initialize chat history and media tracking in the session state
if "chat_history" not in st.session_state:
    st.session_state.chat_history = []
    st.session_state.uploaded_image_list = []
    st.session_state.uploaded_image_num = 0
    st.session_state.uploaded_video_list = []
    st.session_state.uploaded_video_num = 0
    st.session_state.response = ""

# Sidebar configuration

# Create a title and link in the sidebar
with st.sidebar:
    st.title("[开源大模型使用指南](https://github.com/datawhalechina/self-llm.git)")

# Create the main title and caption
st.title("💬 MiniCPM-V-2_6 ChatBot")
st.caption("🚀 A streamlit chatbot powered by Self-LLM")

# Slider for the maximum generation length (0-4096, default 2048)
max_length = st.sidebar.slider("max_length", 0, 4096, 2048, step=2)

# Generation parameter settings
repetition_penalty = st.sidebar.slider("repetition_penalty", 0.0, 2.0, 1.05, step=0.01)
top_k = st.sidebar.slider("top_k", 0, 100, 100, step=1)
top_p = st.sidebar.slider("top_p", 0.0, 1.0, 0.8, step=0.01)
temperature = st.sidebar.slider("temperature", 0.0, 1.0, 0.7, step=0.01)

# Button to clear the chat history and free memory
buttonClean = st.sidebar.button("清除会话历史", key="clean")
if buttonClean:
    # Reset all session-state variables
    st.session_state.chat_history = []
    st.session_state.uploaded_image_list = []
    st.session_state.uploaded_image_num = 0
    st.session_state.uploaded_video_list = []
    st.session_state.uploaded_video_num = 0
    st.session_state.response = ""

    # Clear the CUDA cache if a GPU is available
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

    # Refresh the UI
    st.rerun()

# Display the chat history with the appropriate formatting
for i, message in enumerate(st.session_state.chat_history):
    if message["role"] == "user":
        with st.chat_message(name="user", avatar="user"):
            if message["image"] is not None:
                st.image(message["image"], caption="用户上传的图片", width=512, use_container_width=False)
                continue
            elif message["video"] is not None:
                st.video(message["video"], format="video/mp4", loop=False, autoplay=False, muted=True)
                continue
            elif message["content"] is not None:
                st.markdown(message["content"])
    else:
        with st.chat_message(name="model", avatar="assistant"):
            st.markdown(message["content"])

# Mode selection dropdown
selected_mode = st.sidebar.selectbox("选择模式", ["文本", "单图片", "多图片", "视频"])

# Supported image formats
image_type = [".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".webp"]

# Single-image mode
if selected_mode == "单图片":
    uploaded_image = st.sidebar.file_uploader("上传单张图片", key=1, type=image_type,
                                              accept_multiple_files=False)
    if uploaded_image is not None:
        st.image(uploaded_image, caption="用户上传的图片", width=512, use_container_width=False)
        st.session_state.chat_history.append({"role": "user", "content": None, "image": uploaded_image, "video": None})
        st.session_state.uploaded_image_list = [uploaded_image]
        st.session_state.uploaded_image_num = 1

# Multi-image mode
if selected_mode == "多图片":
    uploaded_image_list = st.sidebar.file_uploader("上传多张图片", key=2, type=image_type,
                                                   accept_multiple_files=True)
    uploaded_image_num = len(uploaded_image_list)

    if uploaded_image_list is not None and uploaded_image_num > 0:
        for img in uploaded_image_list:
            st.image(img, caption="用户上传的图片", width=512, use_container_width=False)
            st.session_state.chat_history.append({"role": "user", "content": None, "image": img, "video": None})
        st.session_state.uploaded_image_list = uploaded_image_list
        st.session_state.uploaded_image_num = uploaded_image_num

# Supported video formats
video_type = [".mp4", ".mkv", ".mov", ".avi", ".flv", ".wmv", ".webm", ".m4v"]

# Important: to handle larger video files, launch with:
# streamlit run ./minicpm-o-2.6WebDemo_streamlit.py --server.maxUploadSize 1024
# Streamlit's default 200 MB upload limit may not be enough for video
# Adjust the size according to the available GPU memory

# Video mode
if selected_mode == "视频":
    uploaded_video = st.sidebar.file_uploader("上传单个视频文件",
                                              key=3,
                                              type=video_type,
                                              accept_multiple_files=False)
    if uploaded_video is not None:
        try:
            # Build the save path for the uploaded video
            video_filename = os.path.basename(uploaded_video.name)
            uploaded_video_path = os.path.join(upload_path, video_filename)

            # Write the video file to disk
            with open(uploaded_video_path, "wb") as vf:
                vf.write(uploaded_video.getbuffer())

            # Display the video and update the session state
            st.video(uploaded_video_path)
            st.session_state.chat_history.append({"role": "user", "content": None, "image": None, "video": uploaded_video_path})
            st.session_state.uploaded_video_list = [uploaded_video_path]
            st.session_state.uploaded_video_num = 1

        except Exception as e:
            st.error(f"处理视频时出错:{str(e)}")
            print(f"错误详情:{str(e)}")

# Maximum number of frames for video processing - reduce this value if you hit CUDA out-of-memory errors
MAX_NUM_FRAMES = 64

def encode_video(video_path):
    """
    Encode a video by sampling frames at a fixed rate and converting them to an image array.
    Uses uniform sampling so that longer videos fit within memory limits.
    """
    def uniform_sample(frame_indices, num_samples):
        # Compute evenly spaced sample positions across the frame index list
        gap = len(frame_indices) / num_samples
        sampled_idxs = np.linspace(gap / 2, len(frame_indices) - gap / 2, num_samples, dtype=int)
        return [frame_indices[i] for i in sampled_idxs]

    # Initialize the video reader on the CPU
    vr = VideoReader(video_path, ctx=cpu(0))

    # Sample frames at 1 FPS (stride of one second's worth of frames)
    sample_fps = round(vr.get_avg_fps() / 1)
    frame_idx = list(range(0, len(vr), sample_fps))

    # If there are more frames than the maximum, sample them uniformly
    if len(frame_idx) > MAX_NUM_FRAMES:
        frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)

    # Convert the frames to PIL images
    frames = vr.get_batch(frame_idx).asnumpy()
    frames = [Image.fromarray(frame.astype("uint8")) for frame in frames]

    print("帧数:", len(frames))
    return frames

# Chat input handling
user_text = st.chat_input("请输入您的问题")
if user_text is not None:
    if user_text.strip() == "":
        st.warning("输入消息不能为空!", icon="⚠️")
    else:
        # Display the user message
        with st.chat_message(U_NAME, avatar="user"):
            st.session_state.chat_history.append({
                "role": "user",
                "content": user_text,
                "image": None,
                "video": None
            })
            st.markdown(f"{U_NAME}: {user_text}")

        # Generate a response with the model
        model = st.session_state.model
        tokenizer = st.session_state.tokenizer
        content_list = []  # list holding the model's input content
        imageFile = None

        with st.chat_message(A_NAME, avatar="assistant"):
            # Handle the different input modes
            if selected_mode == "单图片":
                print("使用单图片模式")
                if len(st.session_state.chat_history) > 1 and len(st.session_state.uploaded_image_list) >= 1:
                    uploaded_image = st.session_state.uploaded_image_list[-1]
                    if uploaded_image:
                        imageFile = Image.open(uploaded_image).convert("RGB")
                        content_list.append(imageFile)
                else:
                    print("单图片模式:未找到图片")

            elif selected_mode == "多图片":
                print("使用多图片模式")
                if len(st.session_state.chat_history) > 1 and st.session_state.uploaded_image_num >= 1:
                    for uploaded_image in st.session_state.uploaded_image_list:
                        imageFile = Image.open(uploaded_image).convert("RGB")
                        content_list.append(imageFile)
                else:
                    print("多图片模式:未找到图片")

            elif selected_mode == "视频":
                print("使用视频模式")
                if len(st.session_state.chat_history) > 1 and st.session_state.uploaded_video_num == 1:
                    uploaded_video_path = st.session_state.uploaded_video_list[-1]
                    if uploaded_video_path:
                        with st.spinner("正在编码视频,请稍候..."):
                            frames = encode_video(uploaded_video_path)
                else:
                    print("视频模式:未找到视频")

            # Configure the generation parameters
            params = {
                "sampling": True,
                "top_p": top_p,
                "top_k": top_k,
                "temperature": temperature,
                "repetition_penalty": repetition_penalty,
                "max_new_tokens": max_length,
                "stream": True
            }

            # Set up the inputs according to the selected mode
            if st.session_state.uploaded_video_num == 1 and selected_mode == "视频":
                msgs = [{"role": "user", "content": frames + [user_text]}]
                # Video-mode specific parameters
                params["max_inp_length"] = 4352  # maximum input length for video mode
                params["use_image_id"] = False  # disable image IDs
                params["max_slice_nums"] = 1  # reduce this value if high-resolution video causes CUDA out-of-memory
            else:
                content_list.append(user_text)
                msgs = [{"role": "user", "content": content_list}]

            print("content_list:", content_list)  # debug info
            print("params:", params)  # debug info

            # Generate and display the model response
            with st.spinner("AI正在思考..."):
                response = model.chat(image=None, msgs=msgs, context=None, tokenizer=tokenizer, **params)
            st.session_state.response = st.write_stream(response)
            st.session_state.chat_history.append({
                "role": "model",
                "content": st.session_state.response,
                "image": None,
                "video": None
            })

        # Visual divider
        st.divider()

Run the Demo

Run the following command in the terminal to start the streamlit service; server.port can be changed to a different port.

streamlit run minicpm-o-2.6WebDemo_streamlit.py --server.address 0.0.0.0 --server.port 1111

The left side is the sidebar, where you can set parameters and switch between modes (text, single image, multiple images, video); the right side is the chat interface. Type a question and send it, and the model will generate an answer based on your question and the uploaded images or video.

Generation Parameter Notes

Here is a brief explanation of the tunable parameters:

temperature (sampling temperature)

  • Range: 0.0-1.0
  • Controls sampling randomness: higher values make generation more random; lower values make it more deterministic
  • Suggested values: 0.7-0.9

top_p (nucleus sampling)

  • Range: 0.0-1.0
  • Samples only from the smallest set of tokens whose cumulative probability exceeds top_p
  • Truncation-style control with a dynamic probability threshold

top_k (top-k sampling)

  • Range: positive integers
  • Samples only from the k highest-probability tokens
  • Fixed-size truncation

max_new_tokens (generation length)

  • Upper bound on the number of newly generated tokens
  • Suggested for dialogue: 1024-2048
  • Suggested for long-form text: 2048-4096

Best practices

  • Creative writing: temperature=0.8, top_p=0.9 (or top_k=50)
  • Factual answers: temperature=0.3, top_p=0.1 (or top_k=10)
  • Code generation: temperature=0.2, top_p=0.9 (or top_k=20)
  • Tips
    • Use either top_k or top_p, not both at once
    • Suggested top_k range: 10-50; smaller values give more conservative output
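
To make the parameters above concrete, here is an illustrative sketch (not the model's actual sampling implementation) of how temperature, top_k, and top_p jointly truncate and reshape a next-token distribution:

import torch

def sample_next_token(logits, temperature=0.7, top_k=100, top_p=0.8):
    # Temperature: values < 1 sharpen the distribution, values > 1 flatten it
    probs = torch.softmax(logits / temperature, dim=-1)
    # top_k: keep only the k most likely tokens (torch.topk returns them sorted)
    topk_probs, topk_idx = torch.topk(probs, top_k)
    # top_p: keep the smallest prefix whose cumulative probability reaches top_p
    cum = torch.cumsum(topk_probs, dim=-1)
    keep = (cum - topk_probs) < top_p  # a token stays if the mass before it is < top_p
    topk_probs = topk_probs * keep
    topk_probs = topk_probs / topk_probs.sum()  # renormalize the surviving tokens
    # Sample one token id from the truncated distribution
    return topk_idx[torch.multinomial(topk_probs, 1)].item()

logits = torch.randn(32000)  # stand-in vocabulary logits for demonstration
print(sample_next_token(logits))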

MiniCPM-o-2.6 Multimodal Speech Capabilities

First, switch pip to a mirror source for faster downloads and install the dependency packages:

pip install accelerate==1.2.1 timm==0.9.10 soundfile==0.12.1 librosa==0.9.0 vector-quantize-pytorch==1.18.5 vocos==0.1.0

In the root directory (the /workspace path), create a voice_demo.py file, paste in the following content, and remember to save the file.

import os

import torch
from PIL import Image
from modelscope import AutoModel, AutoTokenizer
import librosa

# Load the omni model; by default this initializes the vision, audio, and TTS modules
# To load only the vision model, set init_audio=False and init_tts=False
# To load only the audio model, set init_vision=False
model = AutoModel.from_pretrained(
    "/workspace/OpenBMB/MiniCPM-o-2_6/",
    trust_remote_code=True,
    attn_implementation="sdpa",  # use sdpa or flash_attention_2
    torch_dtype=torch.bfloat16,
    init_vision=True,
    init_audio=True,
    init_tts=True
)

model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("/workspace/OpenBMB/MiniCPM-o-2_6/", trust_remote_code=True)

# Besides the vision modules, the TTS processor and vocos also need to be initialized
model.init_tts()

# Make sure the output directory for generated audio exists
os.makedirs("/workspace/result", exist_ok=True)

# Voice mimicking #
mimick_prompt = "请重复每个用户的讲话内容,包括语音风格和内容。"
audio_input, _ = librosa.load("/workspace/demo/minick.wav", sr=16000, mono=True)
msgs = [{"role": "user", "content": [mimick_prompt, audio_input]}]

res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    temperature=0.3,
    generate_audio=True,
    output_audio_path="/workspace/result/output.wav",  # save the TTS result to output_audio_path
)
print(res)

# Speech generation #

# Speech Generation Task Prompt:
#     Human Instruction-to-Speech: see https://voxinstruct.github.io/VoxInstruct/
#     Example:
#         # 在新闻中,一个年轻男性兴致勃勃地说:“祝福亲爱的祖国母亲美丽富强!”他用低音调和低音量,慢慢地说出了这句话。
#         # Delighting in a surprised tone, an adult male with low pitch and low volume comments: One even gave my little dog a biscuit. This dialogue takes place at a leisurely pace, delivering a sense of excitement and surprise in the context.
#
#     Voice Cloning or Voice Conversion: in this mode, the model acts like a TTS model.

# Human Instruction-to-Speech:
task_prompt = "在新闻中,一个年轻男性兴致勃勃地说:“祝福亲爱的祖国母亲美丽富强!”他用低音调和低音量,慢慢地说出了这句话。"  # try writing your own Human Instruction-to-Speech prompt (voice creation)
msgs = [{"role": "user", "content": [task_prompt]}]  # you can also try asking the same audio question

res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    generate_audio=True,
    temperature=0.3,
    output_audio_path="/workspace/result/result.wav",
)
print(res)
# IPython.display.Audio("/workspace/result/result.wav")
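
If you are working in a notebook, you can also inspect the generated file directly. An optional check, assuming the output path used in the script above:

import soundfile as sf

# Read the generated waveform and report its duration and sample rate
audio, sr = sf.read("/workspace/result/result.wav")
print(f"{len(audio) / sr:.2f} s at {sr} Hz")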

Run the following command in the terminal; the generated audio will then appear in the result folder.

python voice_demo.py

If GPU memory is insufficient, close background processes occupying the GPU or restart the instance.
