3.6. CodeGeeX

CodeGeeX2-6B

Model Download

  • url: CodeGeeX2-6B

  • branch: main

  • commit id: 3cb3f8fa305c8188c6c997d0be2ccc4b87ba6f7f

Download all contents under the path given by the url above into a folder named codegeex2-6b.
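One way to do this while pinning the exact commit listed above is a git-lfs clone followed by a checkout. This is a sketch; the repository url placeholder below must be replaced with the url linked above:

```shell
# Sketch: substitute the actual repository url from the link above.
git lfs install
git clone <url of CodeGeeX2-6B repo> codegeex2-6b
cd codegeex2-6b
# Pin the commit id given in the download info above.
git checkout 3cb3f8fa305c8188c6c997d0be2ccc4b87ba6f7f
```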

Tokenizer Modification

Modify the __init__ function of the ChatGLMTokenizer class in codegeex2-6b/tokenization_chatglm.py to the following (note that SPTokenizer is now constructed before the super().__init__() call):

    def __init__(self,
                 vocab_file,
                 padding_side="left",
                 clean_up_tokenization_spaces=False,
                 **kwargs):
        self.tokenizer = SPTokenizer(vocab_file)
        super().__init__(padding_side=padding_side,
                         clean_up_tokenization_spaces=clean_up_tokenization_spaces,
                         **kwargs)
        self.name = "GLMTokenizer"

        self.vocab_file = vocab_file
        # self.tokenizer = SPTokenizer(vocab_file)
        self.special_tokens = {
            "<bos>": self.tokenizer.bos_id,
            "<eos>": self.tokenizer.eos_id,
            "<pad>": self.tokenizer.pad_id
        }
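The point of this patch is ordering: the base tokenizer's __init__ may invoke hooks (such as vocabulary lookups) that rely on self.tokenizer, so the SPTokenizer must already exist when super().__init__() runs. A minimal, self-contained sketch of the pattern (class and attribute names here are illustrative, not from the real code):

```python
class Base:
    def __init__(self, **kwargs):
        # The base initializer calls a hook that subclasses implement.
        self.vocab_size = self.get_vocab_size()


class Broken(Base):
    def __init__(self):
        super().__init__()               # hook runs before backend exists -> AttributeError
        self.backend = {"a": 0, "b": 1}

    def get_vocab_size(self):
        return len(self.backend)


class Fixed(Base):
    def __init__(self):
        self.backend = {"a": 0, "b": 1}  # set the backend first, as in the patch above
        super().__init__()

    def get_vocab_size(self):
        return len(self.backend)


try:
    Broken()
except AttributeError as e:
    print("Broken:", e)

print("Fixed vocab_size:", Fixed().vocab_size)
```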

Batch Offline Inference

python3.8 -m vllm_utils.benchmark_test \
 --model=[path of codegeex2-6b] \
 --demo=cc \
 --output-len=512 \
 --dtype=float16

Performance Test

python3.8 -m vllm_utils.benchmark_test --perf \
 --model=[path of codegeex2-6b] \
 --max-model-len=8192 \
 --tokenizer=[path of codegeex2-6b] \
 --input-len=128 \
 --output-len=1024 \
 --num-prompts=64 \
 --block-size=64 \
 --dtype=float16
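The run above generates 64 prompts x 1024 output tokens; overall decode throughput can be estimated from the reported wall time. A small, hypothetical helper (the function name is mine, not part of vllm_utils):

```python
def decode_throughput(num_prompts: int, output_len: int, total_seconds: float) -> float:
    """Generated tokens per second across the whole batch."""
    return num_prompts * output_len / total_seconds


# e.g. 64 prompts x 1024 output tokens finishing in 40 s
print(decode_throughput(64, 1024, 40.0))  # -> 1638.4
```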

Notes:

  • the max-model-len supported by this model is 8k;

  • input-len, output-len, and num-prompts can be adjusted as needed;

  • when output-len is set to 1, the latency reported in the output is the time_to_first_token_latency.
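The last note gives a practical recipe: a run with output-len=1 measures the time to first token, and combining it with a longer run yields the average per-token decode latency. A hypothetical sketch (numbers below are made up for illustration):

```python
def per_token_latency(total_latency: float, ttft: float, output_len: int) -> float:
    """Average decode latency per token after the first, in the same time units."""
    assert output_len > 1
    return (total_latency - ttft) / (output_len - 1)


ttft = 0.35    # latency from a run with --output-len=1 (i.e. time_to_first_token_latency)
total = 10.59  # latency from a run with --output-len=1024
print(per_token_latency(total, ttft, 1024))
```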