PaddleNLP environment setup:
conda create -n paddle-test python=3.9
conda activate paddle-test
python -m pip install paddlepaddle-gpu==2.6.1.post112 -f https://www.paddlepaddle.org.cn/whl/windows/mkl/avx/stable.html
(paddle-test) (venv) PS D:\work\论文写作\邮件\PaddleNLP-develop> python
Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM
D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\_distutils_hack\__init__.py:36: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
[2024-11-19 15:48:45,051] [ INFO] - The `unk_token` parameter needs to be defined: we use `eos_token` by default.
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
[2024-11-19 15:48:51,240] [ INFO] - We are using <class 'paddlenlp.transformers.qwen2.modeling.Qwen2ForCausalLM'> to load 'Qwen/Qwen2-0.5B'.
[2024-11-19 15:48:51,241] [ INFO] - Loading configuration file C:\Users\Win11\.paddlenlp\models\Qwen/Qwen2-0.5B\config.json
[2024-11-19 15:48:51,241] [ INFO] - Loading weights file from cache at C:\Users\Win11\.paddlenlp\models\Qwen/Qwen2-0.5B\model.safetensors
[2024-11-19 15:48:55,345] [ INFO] - Loaded weights file from disk, setting weights to model.
W1119 15:48:56.299374 25568 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.6, Runtime API Version: 11.2
W1119 15:48:56.797859 25568 dynamic_loader.cc:285] Note: [Recommend] copy cudnn into CUDA installation directory. For instance, download cudnn-10.0-windows10-x64-v7.6.5.32.zip from NVIDIA's official website,
then, unzip it and copy it into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0
You should do this according to your CUDA installation directory and CUDNN version.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\work\论文写作\邮件\PaddleNLP-develop\paddlenlp\transformers\auto\modeling.py", line 794, in from_pretrained
    return cls._from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\paddlenlp\transformers\auto\modeling.py", line 342, in _from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\paddlenlp\transformers\model_utils.py", line 2463, in from_pretrained
    model = cls(config, *init_args, **model_kwargs)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\paddlenlp\transformers\utils.py", line 289, in __impl__
    init_func(self, *args, **kwargs)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\paddlenlp\transformers\qwen2\modeling.py", line 1242, in __init__
    self.qwen2 = Qwen2Model(config)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\paddlenlp\transformers\utils.py", line 289, in __impl__
    init_func(self, *args, **kwargs)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\paddlenlp\transformers\qwen2\modeling.py", line 897, in __init__
    self.embed_tokens = nn.Embedding(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\nn\layer\common.py", line 1496, in __init__
    self.weight = self.create_parameter(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\nn\layer\layers.py", line 781, in create_parameter
    return self._helper.create_parameter(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\layer_helper_base.py", line 430, in create_parameter
    return self.main_program.global_block().create_parameter(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\framework.py", line 4381, in create_parameter
    initializer(param, self)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\nn\initializer\initializer.py", line 40, in __call__
    return self.forward(param, block)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\nn\initializer\xavier.py", line 135, in forward
    out_var = _C_ops.uniform(
RuntimeError: (PreconditionNotMet) The third-party dynamic library (cudnn64_8.dll) that Paddle depends on is not configured correctly. (error code is 126)
  Suggestions:
  1. Check if the third-party dynamic library (e.g. CUDA, CUDNN) is installed correctly and its version is matched with paddlepaddle you installed.
  2. Configure third-party dynamic library environment variables as follows:
  - Linux: set LD_LIBRARY_PATH by `export LD_LIBRARY_PATH=...`
  - Windows: set PATH by `set PATH=XXX;`
  (at ..\paddle\phi\backends\dynload\dynamic_loader.cc:312)
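The failure here is a version mismatch rather than a broken install: the 2.6.1.post112 wheel is a CUDA 11.2 build that tries to load cuDNN 8 (`cudnn64_8.dll`), while this machine has a CUDA 12.6 driver and a cuDNN 9 install (visible in the later log). The DLL name in the error message encodes the cuDNN major version Paddle expects; a tiny helper (hypothetical, not part of Paddle) makes that concrete:

```python
import re

def required_cudnn_major(dll_name: str) -> int:
    """Extract the cuDNN major version implied by a Windows cuDNN DLL name,
    e.g. 'cudnn64_8.dll' means the cuDNN 8.x series is required."""
    m = re.match(r"cudnn64_(\d+)\.dll$", dll_name)
    if not m:
        raise ValueError(f"not a cuDNN DLL name: {dll_name}")
    return int(m.group(1))

print(required_cudnn_major("cudnn64_8.dll"))  # -> 8
```

So the missing `cudnn64_8.dll` cannot be satisfied by the installed cuDNN 9, which ships `cudnn64_9.dll`; either install cuDNN 8 for the CUDA 11.2 wheel, or (as below) install a wheel built against the CUDA/cuDNN already present.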
import paddle
paddle.utils.run_check()
(paddle-test) (venv) PS D:\work\论文写作\邮件\PaddleNLP-develop> python
Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import paddle
>>> paddle.utils.run_check()
Running verify PaddlePaddle program ...
I1119 15:56:13.232272 21360 program_interpreter.cc:212] New Executor is Running.
W1119 15:56:13.264072 21360 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.6, Runtime API Version: 11.2
W1119 15:56:13.264580 21360 dynamic_loader.cc:285] Note: [Recommend] copy cudnn into CUDA installation directory. For instance, download cudnn-10.0-windows10-x64-v7.6.5.32.zip from NVIDIA's official website,
then, unzip it and copy it into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0
You should do this according to your CUDA installation directory and CUDNN version.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\utils\install_check.py", line 273, in run_check
    _run_static_single(use_cuda, use_xpu, use_custom, custom_device_name)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\utils\install_check.py", line 150, in _run_static_single
    exe.run(startup_prog)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\executor.py", line 1746, in run
    res = self._run_impl(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\executor.py", line 1952, in _run_impl
    ret = new_exe.run(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\executor.py", line 831, in run
    tensors = self._new_exe.run(
RuntimeError: In user code:
  File "<stdin>", line 1, in <module>
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\utils\install_check.py", line 273, in run_check
    _run_static_single(use_cuda, use_xpu, use_custom, custom_device_name)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\utils\install_check.py", line 135, in _run_static_single
    input, out, weight = _simple_network()
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\utils\install_check.py", line 31, in _simple_network
    weight = paddle.create_parameter(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\tensor\creation.py", line 228, in create_parameter
    return helper.create_parameter(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\layer_helper_base.py", line 444, in create_parameter
    self.startup_program.global_block().create_parameter(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\framework.py", line 4381, in create_parameter
    initializer(param, self)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\nn\initializer\initializer.py", line 40, in __call__
    return self.forward(param, block)
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\nn\initializer\constant.py", line 84, in forward
    op = block.append_op(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\framework.py", line 4467, in append_op
    op = Operator(
  File "D:\work\论文写作\邮件\PaddleNLP-develop\venv\lib\site-packages\paddle\base\framework.py", line 3016, in __init__
    for frame in traceback.extract_stack():
PreconditionNotMetError: The third-party dynamic library (cudnn64_8.dll) that Paddle depends on is not configured correctly. (error code is 126)
  Suggestions:
  1. Check if the third-party dynamic library (e.g. CUDA, CUDNN) is installed correctly and its version is matched with paddlepaddle you installed.
  2. Configure third-party dynamic library environment variables as follows:
  - Linux: set LD_LIBRARY_PATH by `export LD_LIBRARY_PATH=...`
  - Windows: set PATH by `set PATH=XXX;`
  (at ..\paddle\phi\backends\dynload\dynamic_loader.cc:312)
  [operator < fill_constant > error]
>>>
Switch to a different PaddlePaddle build (the old CUDA 11.2 wheel is commented out, replaced by the cu123 wheel matching the installed CUDA 12.x / cuDNN 9):
#python -m pip install paddlepaddle-gpu==2.6.1.post112 -f https://www.paddlepaddle.org.cn/whl/windows/mkl/avx/stable.html
python -m pip install paddlepaddle-gpu==3.0.0b1 -i https://www.paddlepaddle.org.cn/packages/stable/cu123/
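After reinstalling, the CUDA and cuDNN versions the wheel was built against can be read back from `paddle.version` (these attributes exist in recent PaddlePaddle releases; the exact output depends on the wheel installed):

```shell
# Print the installed Paddle version and its build-time CUDA/cuDNN versions
python -c "import paddle; print(paddle.version.full_version)"
python -c "import paddle; print(paddle.version.cuda(), paddle.version.cudnn())"
```

For the cu123 wheel these should report CUDA 12.3 and cuDNN 9.x, matching the `run_check` log below.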
(paddle-test) (venv) PS D:\work\论文写作\邮件\PaddleNLP-develop> python
Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import paddle
>>> paddle.utils.run_check()
Running verify PaddlePaddle program ...
I1119 16:00:49.811331 25164 program_interpreter.cc:243] New Executor is Running.
W1119 16:00:49.811331 25164 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.6, Runtime API Version: 12.3
W1119 16:00:49.812327 25164 gpu_resources.cc:164] device: 0, cuDNN Version: 9.0.
I1119 16:00:50.964934 25164 interpreter_util.cc:648] Standalone Executor is Used.
PaddlePaddle works well on 1 GPU.
PaddlePaddle is installed successfully! Let's start deep learning with PaddlePaddle now.
>>> from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
[2024-11-19 16:05:28,468] [ INFO] - The `unk_token` parameter needs to be defined: we use `eos_token` by default.
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
[2024-11-19 16:05:36,929] [ INFO] - We are using <class 'paddlenlp.transformers.qwen2.modeling.Qwen2ForCausalLM'> to load 'Qwen/Qwen2-0.5B'.
[2024-11-19 16:05:36,929] [ INFO] - Loading configuration file C:\Users\Win11\.paddlenlp\models\Qwen/Qwen2-0.5B\config.json
[2024-11-19 16:05:36,934] [ INFO] - Loading weights file from cache at C:\Users\Win11\.paddlenlp\models\Qwen/Qwen2-0.5B\model.safetensors
[2024-11-19 16:05:39,705] [ INFO] - Loaded weights file from disk, setting weights to model.
[2024-11-19 16:05:49,260] [    INFO] - All model checkpoint weights were used when initializing Qwen2ForCausalLM.
[2024-11-19 16:05:49,260] [ WARNING] - Some weights of Qwen2ForCausalLM were not initialized from the model checkpoint at Qwen/Qwen2-0.5B and are newly initialized: ['lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2024-11-19 16:05:49,261] [ INFO] - Loading configuration file C:\Users\Win11\.paddlenlp\models\Qwen/Qwen2-0.5B\generation_config.json
>>> input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
>>> outputs = model.generate(**input_features, max_length=128)
>>> print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
[' 我是一个AI语言模型,我可以回答各种问题,包括但不限于:天气、新闻、历史、文化、科学、教育、娱乐等。请问您有什么需要了解的吗?']
>>>
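Once the cu123 build passes `run_check`, the interactive steps above can be collected into one standalone script. This is a sketch of exactly the session shown: it assumes a working GPU install of PaddlePaddle 3.0.0b1 plus PaddleNLP, and downloads the Qwen2-0.5B weights on first run:

```python
# Minimal end-to-end run of the session above: load Qwen2-0.5B via PaddleNLP
# and generate one reply. Requires a GPU build of PaddlePaddle and PaddleNLP.
from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")

# return_tensors="pd" asks the tokenizer for Paddle tensors
input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
outputs = model.generate(**input_features, max_length=128)
print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
```

Note the WARNING about `lm_head.weight` being newly initialized is expected for this checkpoint and does not prevent generation.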