(Notes) InternLM (书生·浦语) LLM Practical Camp, Session 3 (Batch 11) – Foundation Island, Level 5: XTuner Fine-Tuning a Personal Assistant

Camp handbook: https://aicarrier.feishu.cn/wiki/ZcgkwqteZi9s4ZkYr0Gcayg1n1g?open_in_browser=true
Course video: https://www.bilibili.com/video/BV1tz421B72y/
Course documentation:
https://github.com/InternLM/Tutorial/tree/camp3/docs/L1/XTuner
Assignment: https://github.com/InternLM/Tutorial/blob/camp3/docs/L1/XTuner/task.md
Dev machine platform: https://studio.intern-ai.org.cn/
Dev machine platform introduction: https://aicarrier.feishu.cn/wiki/GQ1Qwxb3UiQuewk8BVLcuyiEnHe


Preparation

# 1. Create a virtual environment
# Create a dev machine with the Cuda12.2-conda image
# Clone the tutorial repo
mkdir -p /root/InternLM/Tutorial
git clone -b camp3 https://github.com/InternLM/Tutorial /root/InternLM/Tutorial
# Create the virtual environment
conda create -n xtuner0121 python=3.10 -y
# Activate it (note: all subsequent steps must run inside this environment)
conda activate xtuner0121
# Install the required libraries
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia -y
# Install the other dependencies
pip install transformers==4.39.3
pip install streamlit==1.36.0

# 2. Install XTuner
# Create a directory for the source code
mkdir -p /root/InternLM/code
cd /root/InternLM/code
git clone -b v0.1.21 https://github.com/InternLM/XTuner /root/InternLM/code/XTuner
# Enter the source directory
cd /root/InternLM/code/XTuner
conda activate xtuner0121
# Run the install
pip install -e '.[deepspeed]'
# A faster install via a PyPI mirror:
# pip install -e '.[deepspeed]' -i https://mirrors.aliyun.com/pypi/simple/

# 3. Prepare the model
# Create a directory to hold all fine-tuning materials; all later steps run in this path
mkdir -p /root/InternLM/XTuner
cd /root/InternLM/XTuner
mkdir -p Shanghai_AI_Laboratory
ln -s /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b Shanghai_AI_Laboratory/internlm2-chat-1_8b
# Use tree to inspect the directory structure
apt-get install -y tree
tree -l
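
Before moving on, it can be worth confirming that the symlinked model actually loads. The snippet below is my own minimal sanity check, not part of the course materials; it assumes the symlink created above:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True, torch_dtype=torch.float16).cuda()
# generate a few tokens to prove the weights are usable
inputs = tokenizer('你好', return_tensors='pt').to('cuda')
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
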
# 4. xtuner_streamlit_demo.py
# tools/xtuner_streamlit_demo.py
import copy
import warnings
from dataclasses import asdict, dataclass
from typing import Callable, List, Optional

import streamlit as st
import torch
from torch import nn
from transformers.generation.utils import (LogitsProcessorList,
                                           StoppingCriteriaList)
from transformers.utils import logging

from transformers import AutoTokenizer, AutoModelForCausalLM  # isort: skip

logger = logging.get_logger(__name__)
model_name_or_path = "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b"


@dataclass
class GenerationConfig:
    # this config is used for chat to provide more diversity
    max_length: int = 2048
    top_p: float = 0.75
    temperature: float = 0.1
    do_sample: bool = True
    repetition_penalty: float = 1.000


@torch.inference_mode()
def generate_interactive(
    model,
    tokenizer,
    prompt,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor],
                                                List[int]]] = None,
    additional_eos_token_id: Optional[int] = None,
    **kwargs,
):
    inputs = tokenizer([prompt], padding=True, return_tensors='pt')
    input_length = len(inputs['input_ids'][0])
    for k, v in inputs.items():
        inputs[k] = v.cuda()
    input_ids = inputs['input_ids']
    _, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
    if generation_config is None:
        generation_config = model.generation_config
    generation_config = copy.deepcopy(generation_config)
    model_kwargs = generation_config.update(**kwargs)
    bos_token_id, eos_token_id = (  # noqa: F841  # pylint: disable=W0612
        generation_config.bos_token_id,
        generation_config.eos_token_id,
    )
    if isinstance(eos_token_id, int):
        eos_token_id = [eos_token_id]
    if additional_eos_token_id is not None:
        eos_token_id.append(additional_eos_token_id)
    has_default_max_length = kwargs.get(
        'max_length') is None and generation_config.max_length is not None
    if has_default_max_length and generation_config.max_new_tokens is None:
        warnings.warn(
            f"Using 'max_length''s default ({repr(generation_config.max_length)}) \
                to control the generation length. "
            'This behaviour is deprecated and will be removed from the \
                config in v5 of Transformers -- we'
            ' recommend using `max_new_tokens` to control the maximum \
                length of the generation.',
            UserWarning,
        )
    elif generation_config.max_new_tokens is not None:
        generation_config.max_length = generation_config.max_new_tokens + \
            input_ids_seq_length
        if not has_default_max_length:
            logger.warn(  # pylint: disable=W4902
                f"Both 'max_new_tokens' (={generation_config.max_new_tokens}) "
                f"and 'max_length'(={generation_config.max_length}) seem to "
                "have been set. 'max_new_tokens' will take precedence. "
                'Please refer to the documentation for more information. '
                '(https://huggingface.co/docs/transformers/main/'
                'en/main_classes/text_generation)',
                UserWarning,
            )

    if input_ids_seq_length >= generation_config.max_length:
        input_ids_string = 'input_ids'
        logger.warning(
            f"Input length of {input_ids_string} is {input_ids_seq_length}, "
            f"but 'max_length' is set to {generation_config.max_length}. "
            'This can lead to unexpected behavior. You should consider'
            " increasing 'max_new_tokens'.")

    # 2. Set generation parameters if not already defined
    logits_processor = logits_processor if logits_processor is not None \
        else LogitsProcessorList()
    stopping_criteria = stopping_criteria if stopping_criteria is not None \
        else StoppingCriteriaList()

    logits_processor = model._get_logits_processor(
        generation_config=generation_config,
        input_ids_seq_length=input_ids_seq_length,
        encoder_input_ids=input_ids,
        prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
        logits_processor=logits_processor,
    )

    stopping_criteria = model._get_stopping_criteria(
        generation_config=generation_config,
        stopping_criteria=stopping_criteria)
    logits_warper = model._get_logits_warper(generation_config)

    unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
    scores = None
    while True:
        model_inputs = model.prepare_inputs_for_generation(
            input_ids, **model_kwargs)
        # forward pass to get next token
        outputs = model(
            **model_inputs,
            return_dict=True,
            output_attentions=False,
            output_hidden_states=False,
        )

        next_token_logits = outputs.logits[:, -1, :]

        # pre-process distribution
        next_token_scores = logits_processor(input_ids, next_token_logits)
        next_token_scores = logits_warper(input_ids, next_token_scores)

        # sample
        probs = nn.functional.softmax(next_token_scores, dim=-1)
        if generation_config.do_sample:
            next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
        else:
            next_tokens = torch.argmax(probs, dim=-1)

        # update generated ids, model inputs, and length for next step
        input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
        model_kwargs = model._update_model_kwargs_for_generation(
            outputs, model_kwargs, is_encoder_decoder=False)
        unfinished_sequences = unfinished_sequences.mul(
            (min(next_tokens != i for i in eos_token_id)).long())

        output_token_ids = input_ids[0].cpu().tolist()
        output_token_ids = output_token_ids[input_length:]
        for each_eos_token_id in eos_token_id:
            if output_token_ids[-1] == each_eos_token_id:
                output_token_ids = output_token_ids[:-1]
        response = tokenizer.decode(output_token_ids)

        yield response
        # stop when each sentence is finished
        # or if we exceed the maximum length
        if unfinished_sequences.max() == 0 or stopping_criteria(
                input_ids, scores):
            break


def on_btn_click():
    del st.session_state.messages


@st.cache_resource
def load_model():
    model = (AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        trust_remote_code=True).to(torch.bfloat16).cuda())
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,
                                              trust_remote_code=True)
    return model, tokenizer


def prepare_generation_config():
    with st.sidebar:
        max_length = st.slider('Max Length',
                               min_value=8,
                               max_value=32768,
                               value=2048)
        top_p = st.slider('Top P', 0.0, 1.0, 0.75, step=0.01)
        temperature = st.slider('Temperature', 0.0, 1.0, 0.1, step=0.01)
        st.button('Clear Chat History', on_click=on_btn_click)

    generation_config = GenerationConfig(max_length=max_length,
                                         top_p=top_p,
                                         temperature=temperature)

    return generation_config


user_prompt = '<|im_start|>user\n{user}<|im_end|>\n'
robot_prompt = '<|im_start|>assistant\n{robot}<|im_end|>\n'
cur_query_prompt = '<|im_start|>user\n{user}<|im_end|>\n\
<|im_start|>assistant\n'


def combine_history(prompt):
    messages = st.session_state.messages
    meta_instruction = ('')
    total_prompt = f"<s><|im_start|>system\n{meta_instruction}<|im_end|>\n"
    for message in messages:
        cur_content = message['content']
        if message['role'] == 'user':
            cur_prompt = user_prompt.format(user=cur_content)
        elif message['role'] == 'robot':
            cur_prompt = robot_prompt.format(robot=cur_content)
        else:
            raise RuntimeError
        total_prompt += cur_prompt
    total_prompt = total_prompt + cur_query_prompt.format(user=prompt)
    return total_prompt


def main():
    # torch.cuda.empty_cache()
    print('load model begin.')
    model, tokenizer = load_model()
    print('load model end.')

    st.title('InternLM2-Chat-1.8B')

    generation_config = prepare_generation_config()

    # Initialize chat history
    if 'messages' not in st.session_state:
        st.session_state.messages = []

    # Display chat messages from history on app rerun
    for message in st.session_state.messages:
        with st.chat_message(message['role'], avatar=message.get('avatar')):
            st.markdown(message['content'])

    # Accept user input
    if prompt := st.chat_input('What is up?'):
        # Display user message in chat message container
        with st.chat_message('user'):
            st.markdown(prompt)
        real_prompt = combine_history(prompt)
        # Add user message to chat history
        st.session_state.messages.append({
            'role': 'user',
            'content': prompt,
        })

        with st.chat_message('robot'):
            message_placeholder = st.empty()
            for cur_response in generate_interactive(
                    model=model,
                    tokenizer=tokenizer,
                    prompt=real_prompt,
                    additional_eos_token_id=92542,
                    **asdict(generation_config),
            ):
                # Display robot response in chat message container
                message_placeholder.markdown(cur_response + '▌')
            message_placeholder.markdown(cur_response)
        # Add robot response to chat history
        st.session_state.messages.append({
            'role': 'robot',
            'content': cur_response,  # pylint: disable=undefined-loop-variable
        })
        torch.cuda.empty_cache()


if __name__ == '__main__':
    main()
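
One detail worth noting in the script: generate_interactive is called with additional_eos_token_id=92542. That value should be the id of InternLM2's '<|im_end|>' stop token; a quick check of my own to confirm it:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b',
    trust_remote_code=True)
print(tok.convert_tokens_to_ids('<|im_end|>'))  # expected: 92542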

Chatting with the Model Before Fine-Tuning

conda activate xtuner0121
streamlit run /root/InternLM/Tutorial/tools/xtuner_streamlit_demo.py
# In a local PowerShell, forward the port (the -p value is your own dev machine's SSH port)
ssh -CNg -L 8501:127.0.0.1:8501 root@ssh.intern-ai.org.cn -p 34911
# Browser: http://127.0.0.1:8501

Instruction-Following Fine-Tuning

Dataset Preparation

cd /root/InternLM/XTuner
mkdir -p datas
touch datas/assistant.json
cd /root/InternLM/XTuner
touch xtuner_generate_assistant.py
cd /root/InternLM/XTuner
cp /root/InternLM/Tutorial/tools/xtuner_generate_assistant.py ./
# xtuner_generate_assistant.py
import json

# Set the user's name
name = '伍鲜同志'
# Number of times to repeat the seed data
n = 8000

# Initialize the data: two seed conversations
# ("Please introduce yourself" / "What do you do in the camp")
data = [
    {"conversation": [{"input": "请介绍一下你自己", "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的1.8B大模型哦".format(name)}]},
    {"conversation": [{"input": "你在实战营做什么", "output": "我在这里帮助{}完成XTuner微调个人小助手的任务".format(name)}]}
]

# Append the two seed conversations to the data list n times
for i in range(n):
    data.append(data[0])
    data.append(data[1])

# Write the data to 'datas/assistant.json'
with open('datas/assistant.json', 'w', encoding='utf-8') as f:
    # json.dump writes the list as JSON
    # ensure_ascii=False keeps the Chinese characters readable
    # indent=4 pretty-prints the file
    json.dump(data, f, ensure_ascii=False, indent=4)
# Change name to your own (line 4 of the script):
- name = '伍鲜同志'
+ name = "朱娅梅"
# Generate the data file
cd /root/InternLM/XTuner
conda activate xtuner0121
python xtuner_generate_assistant.py
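
After the script runs, a quick check (my own addition, not from the course) that datas/assistant.json contains what we expect: the 2 seed records plus 2 × 8000 appended copies.

import json

with open('datas/assistant.json', encoding='utf-8') as f:
    data = json.load(f)
print(len(data))                   # expected: 2 + 2 * 8000 = 16002
print(data[0]['conversation'][0])  # the first input/output pair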

Modifying the Config File

# List the built-in configs that match internlm2
conda activate xtuner0121
xtuner list-cfg -p internlm2
# Copy the chosen preset config into the working directory
cd /root/InternLM/XTuner
conda activate xtuner0121
xtuner copy-cfg internlm2_chat_1_8b_qlora_alpaca_e3 .

# Changes to make in internlm2_chat_1_8b_qlora_alpaca_e3_copy.py:
#######################################################################
#                          PART 1  Settings                           #
#######################################################################
- pretrained_model_name_or_path = 'internlm/internlm2-chat-1_8b'
+ pretrained_model_name_or_path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'

- alpaca_en_path = 'tatsu-lab/alpaca'
+ alpaca_en_path = 'datas/assistant.json'

evaluation_inputs = [
-    '请给我介绍五个上海的景点', 'Please tell me five scenic spots in Shanghai'
+    '请介绍一下你自己', 'Please introduce yourself'
]

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
-   dataset=dict(type=load_dataset, path=alpaca_en_path),
+   dataset=dict(type=load_dataset, path='json', data_files=dict(train=alpaca_en_path)),
    tokenizer=tokenizer,
    max_length=max_length,
-   dataset_map_fn=alpaca_map_fn,
+   dataset_map_fn=None,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)
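
The dataset change swaps the remote Alpaca dataset for the local JSON file, and dataset_map_fn becomes None because assistant.json is already in XTuner's conversation format, so no Alpaca-to-conversation remapping is needed. A quick illustration of what the loader sees (my own sketch, assuming the file generated earlier):

from datasets import load_dataset

ds = load_dataset('json', data_files=dict(train='datas/assistant.json'))
print(ds['train'][0])
# e.g. {'conversation': [{'input': '请介绍一下你自己', 'output': '我是...的小助手...'}]}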

The complete config file after all the changes:

# configs/internlm2_chat_1_8b_qlora_alpaca_e3_copy.py
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
                            LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)

from xtuner.dataset import process_hf_dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import alpaca_map_fn, template_map_fn_factory
from xtuner.engine.hooks import (DatasetInfoHook, EvaluateChatHook,
                                 VarlenAttnArgsToMessageHubHook)
from xtuner.engine.runner import TrainLoop
from xtuner.model import SupervisedFinetune
from xtuner.parallel.sequence import SequenceParallelSampler
from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
pretrained_model_name_or_path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'
use_varlen_attn = False

# Data
alpaca_en_path = 'datas/assistant.json'
prompt_template = PROMPT_TEMPLATE.internlm2_chat
max_length = 2048
pack_to_max_length = True

# parallel
sequence_parallel_size = 1

# Scheduler & Optimizer
batch_size = 1  # per_device
accumulative_counts = 16
accumulative_counts *= sequence_parallel_size
dataloader_num_workers = 0
max_epochs = 3
optim_type = AdamW
lr = 2e-4
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1  # grad clip
warmup_ratio = 0.03

# Save
save_steps = 500
save_total_limit = 2  # Maximum checkpoints to keep (-1 means unlimited)

# Evaluate the generation performance during the training
evaluation_freq = 500
SYSTEM = SYSTEM_TEMPLATE.alpaca
evaluation_inputs = [
    '请介绍一下你自己', 'Please introduce yourself'
]

#######################################################################
#                      PART 2  Model & Tokenizer                      #
#######################################################################
tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path=pretrained_model_name_or_path,
    trust_remote_code=True,
    padding_side='right')

model = dict(
    type=SupervisedFinetune,
    use_varlen_attn=use_varlen_attn,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        trust_remote_code=True,
        torch_dtype=torch.float16,
        quantization_config=dict(
            type=BitsAndBytesConfig,
            load_in_4bit=True,
            load_in_8bit=False,
            llm_int8_threshold=6.0,
            llm_int8_has_fp16_weight=False,
            bnb_4bit_compute_dtype=torch.float16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4')),
    lora=dict(
        type=LoraConfig,
        r=64,
        lora_alpha=16,
        lora_dropout=0.1,
        bias='none',
        task_type='CAUSAL_LM'))

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
    dataset=dict(type=load_dataset, path='json', data_files=dict(train=alpaca_en_path)),
    tokenizer=tokenizer,
    max_length=max_length,
    dataset_map_fn=None,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)

sampler = SequenceParallelSampler \
    if sequence_parallel_size > 1 else DefaultSampler

train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=alpaca_en,
    sampler=dict(type=sampler, shuffle=True),
    collate_fn=dict(type=default_collate_fn, use_varlen_attn=use_varlen_attn))

#######################################################################
#                    PART 4  Scheduler & Optimizer                    #
#######################################################################
# optimizer
optim_wrapper = dict(
    type=AmpOptimWrapper,
    optimizer=dict(
        type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
    clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
    accumulative_counts=accumulative_counts,
    loss_scale='dynamic',
    dtype='float16')

# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md  # noqa: E501
param_scheduler = [
    dict(
        type=LinearLR,
        start_factor=1e-5,
        by_epoch=True,
        begin=0,
        end=warmup_ratio * max_epochs,
        convert_to_iter_based=True),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        begin=warmup_ratio * max_epochs,
        end=max_epochs,
        convert_to_iter_based=True)
]

# train, val, test setting
train_cfg = dict(type=TrainLoop, max_epochs=max_epochs)

#######################################################################
#                           PART 5  Runtime                           #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        every_n_iters=evaluation_freq,
        evaluation_inputs=evaluation_inputs,
        system=SYSTEM,
        prompt_template=prompt_template)
]

if use_varlen_attn:
    custom_hooks += [dict(type=VarlenAttnArgsToMessageHubHook)]

# configure default hooks
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type=IterTimerHook),
    # print log every 10 iterations.
    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
    # enable the parameter scheduler.
    param_scheduler=dict(type=ParamSchedulerHook),
    # save checkpoint per `save_steps`.
    checkpoint=dict(
        type=CheckpointHook,
        by_epoch=False,
        interval=save_steps,
        max_keep_ckpts=save_total_limit),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type=DistSamplerSeedHook),
)

# configure environment
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)

# set visualizer
visualizer = None

# set log level
log_level = 'INFO'

# load from which checkpoint
load_from = None

# whether to resume training from the loaded checkpoint
resume = False

# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)

# set log processor
log_processor = dict(by_epoch=False)
cd /root/InternLM/XTuner
cp /root/InternLM/Tutorial/configs/internlm2_chat_1_8b_qlora_alpaca_e3_copy.py ./
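
A side note on reading this config: mmengine describes every object lazily as dict(type=..., **kwargs) and builds it at runtime, roughly like the sketch below (a simplification of my own, not mmengine's actual registry code):

from torch.optim import AdamW

cfg = dict(type=AdamW, lr=2e-4, betas=(0.9, 0.999), weight_decay=0)
builder = cfg['type']
kwargs = {k: v for k, v in cfg.items() if k != 'type'}
# at build time the framework effectively calls builder(params, **kwargs),
# i.e. AdamW(model.parameters(), lr=2e-4, betas=(0.9, 0.999), weight_decay=0)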

Launch Fine-Tuning

cd /root/InternLM/XTuner
conda activate xtuner0121
xtuner train ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py

Model Conversion

cd /root/InternLM/XTuner
conda activate xtuner0121
# Grab the most recently saved .pth checkpoint
pth_file=`ls -t ./work_dirs/internlm2_chat_1_8b_qlora_alpaca_e3_copy/*.pth | head -n 1`
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
xtuner convert pth_to_hf ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py ${pth_file} ./hf
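
To sanity-check the conversion (my own sketch, not from the course): the ./hf directory should now be a standalone LoRA adapter that peft can attach to the base model.

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b',
    trust_remote_code=True, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, './hf')  # attach the adapter
print(model.peft_config)  # should show a LoraConfig entry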

Model Merging

cd /root/InternLM/XTuner
conda activate xtuner0121
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
xtuner convert merge /root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b ./hf ./merged --max-shard-size 2GB
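
Before wiring the merged model into Streamlit, a quick command-line smoke test (my own sketch; it assumes InternLM2's remote code exposes the chat() helper, which the official checkpoints do):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = '/root/InternLM/XTuner/merged'
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, trust_remote_code=True, torch_dtype=torch.float16).cuda().eval()
response, _ = model.chat(tokenizer, '请介绍一下你自己', history=[])
print(response)  # should now answer as the fine-tuned personal assistant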

Modify xtuner_streamlit_demo.py and Launch

# Edit line 18 of the script directly:
- model_name_or_path = "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b"
+ model_name_or_path = "/root/InternLM/XTuner/merged"
conda activate xtuner0121
streamlit run /root/InternLM/Tutorial/tools/xtuner_streamlit_demo.py
# PowerShell port forwarding (the -p value is your own dev machine's SSH port)
ssh -CNg -L 8501:127.0.0.1:8501 root@ssh.intern-ai.org.cn -p 43551
# Browser: http://127.0.0.1:8501
