Deep Learning Notes 17: Coffee Bean Classification with TensorFlow

  • 🍨 This post is a study log from the 365-day deep learning training camp
  • 🍖 Original author: K同学啊 (tutoring and custom projects available)

一、My Environment

1. Language: Python 3.9

2. IDE: PyCharm

3. Deep learning framework: TensorFlow 2.10.0

二、GPU Setup

       This step can be skipped if you are running on a CPU.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only the first one
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

三、Importing the Data

import pathlib

data_dir = "./data/"
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*/*.jpg')))
print("Total number of images:", image_count)
# Total number of images: 1200
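The glob pattern `'*/*/*.jpg'` assumes a two-level layout under `data/`: a split folder, then a class folder, then the image files. This can be sanity-checked on a throwaway directory tree; the folder and file names below are made up for illustration:

```python
import pathlib
import tempfile

# Build a temporary tree mimicking data/<split>/<class>/<image>.jpg
root = pathlib.Path(tempfile.mkdtemp())
for split in ("train", "test"):
    for cls in ("Dark", "Green", "Light", "Medium"):
        d = root / split / cls
        d.mkdir(parents=True)
        for i in range(3):
            (d / f"bean_{i}.jpg").touch()

# Same pattern as above: <split>/<class>/<image>.jpg
image_count = len(list(root.glob('*/*/*.jpg')))
print("Total number of images:", image_count)  # 2 splits x 4 classes x 3 files = 24
```

If your images sit one level shallower or deeper, the pattern needs the matching number of `*/` segments.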

四、Data Preprocessing

batch_size = 32
img_height = 224
img_width = 224

"""
For a detailed introduction to image_dataset_from_directory(), see:
https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "./data/train/",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "./data/test/",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

class_names = train_ds.class_names
print(class_names)

Output:

['Dark', 'Green', 'Light', 'Medium']

五、Visualizing the Images

import matplotlib.pyplot as plt

plt.figure(figsize=(10, 4))  # figure width 10, height 4
for images, labels in train_ds.take(1):
    for i in range(10):
        ax = plt.subplot(2, 5, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")
plt.show()

Output: a 2 x 5 grid of the first ten training images, each titled with its class name (figure omitted).

Check the data again:

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

Output:

(32, 224, 224, 3)
(32,)

六、Configuring the Dataset

  • shuffle(): shuffles the data; for details see https://zhuanlan.zhihu.com/p/42417456
  • prefetch(): prefetches data to speed up the input pipeline
  • cache(): caches the dataset in memory to speed up training

import numpy as np
from tensorflow.keras import layers

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds   = val_ds.map(lambda x, y: (normalization_layer(x), y))

# inspect the normalized data
image_batch, labels_batch = next(iter(val_ds))
first_image = image_batch[0]
print(np.min(first_image), np.max(first_image))

七、Building the Model

from tensorflow.keras import layers, models, Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout

def VGG16(nb_classes, input_shape):
    input_tensor = Input(shape=input_shape)
    # 1st block
    x = Conv2D(64, (3,3), activation='relu', padding='same', name='block1_conv1')(input_tensor)
    x = Conv2D(64, (3,3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2,2), strides=(2,2), name='block1_pool')(x)
    # 2nd block
    x = Conv2D(128, (3,3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3,3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2,2), strides=(2,2), name='block2_pool')(x)
    # 3rd block
    x = Conv2D(256, (3,3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3,3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3,3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2,2), strides=(2,2), name='block3_pool')(x)
    # 4th block
    x = Conv2D(512, (3,3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3,3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3,3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2,2), strides=(2,2), name='block4_pool')(x)
    # 5th block
    x = Conv2D(512, (3,3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3,3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3,3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2,2), strides=(2,2), name='block5_pool')(x)
    # fully connected head
    x = Flatten()(x)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    output_tensor = Dense(nb_classes, activation='softmax', name='predictions')(x)

    model = Model(input_tensor, output_tensor)
    return model

model = VGG16(len(class_names), (img_width, img_height, 3))
model.summary()

Output:

_________________________________________________________________
Layer (type)                Output Shape              Param #
=================================================================
input_1 (InputLayer)        [(None, 224, 224, 3)]     0
block1_conv1 (Conv2D)       (None, 224, 224, 64)      1792
block1_conv2 (Conv2D)       (None, 224, 224, 64)      36928
block1_pool (MaxPooling2D)  (None, 112, 112, 64)      0
block2_conv1 (Conv2D)       (None, 112, 112, 128)     73856
block2_conv2 (Conv2D)       (None, 112, 112, 128)     147584
block2_pool (MaxPooling2D)  (None, 56, 56, 128)       0
block3_conv1 (Conv2D)       (None, 56, 56, 256)       295168
block3_conv2 (Conv2D)       (None, 56, 56, 256)       590080
block3_conv3 (Conv2D)       (None, 56, 56, 256)       590080
block3_pool (MaxPooling2D)  (None, 28, 28, 256)       0
block4_conv1 (Conv2D)       (None, 28, 28, 512)       1180160
block4_conv2 (Conv2D)       (None, 28, 28, 512)       2359808
block4_conv3 (Conv2D)       (None, 28, 28, 512)       2359808
block4_pool (MaxPooling2D)  (None, 14, 14, 512)       0
block5_conv1 (Conv2D)       (None, 14, 14, 512)       2359808
block5_conv2 (Conv2D)       (None, 14, 14, 512)       2359808
block5_conv3 (Conv2D)       (None, 14, 14, 512)       2359808
block5_pool (MaxPooling2D)  (None, 7, 7, 512)         0
flatten (Flatten)           (None, 25088)             0
fc1 (Dense)                 (None, 4096)              102764544
fc2 (Dense)                 (None, 4096)              16781312
predictions (Dense)         (None, 4)                 16388
=================================================================
Total params: 134,276,932
Trainable params: 134,276,932
Non-trainable params: 0
_________________________________________________________________
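As a side note, the per-layer counts in this summary can be reproduced by hand: a Conv2D layer has k·k·c_in·c_out weights plus c_out biases, and a Dense layer has n_in·n_out weights plus n_out biases. A quick check against a few rows of the table (a small illustrative sketch, not part of the training code):

```python
def conv2d_params(k, c_in, c_out):
    # k x k kernels over c_in input channels, c_out filters, one bias each
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

print(conv2d_params(3, 3, 64))          # block1_conv1: 1792
print(conv2d_params(3, 64, 64))         # block1_conv2: 36928
print(dense_params(7 * 7 * 512, 4096))  # fc1: 102764544
print(dense_params(4096, 4))            # predictions: 16388
```

The pooling, flatten, and input layers have no trainable parameters, which is why their Param # column is 0.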

八、Compiling the Model

        A few more settings are needed before the model is ready for training. They are added in the compile step:

  • Loss function (loss): measures how far the model's predictions are from the true labels during training.
  • Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
  • Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are classified correctly.
# set the initial learning rate
initial_learning_rate = 1e-4

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=30,   # note: this counts steps, not epochs!
    decay_rate=0.92,  # each decay multiplies the lr by decay_rate
    staircase=True)

# set the optimizer
# (note: lr_schedule is defined but never passed in below, so Adam uses the
#  constant initial_learning_rate; pass learning_rate=lr_schedule to apply the decay)
opt = tf.keras.optimizers.Adam(learning_rate=initial_learning_rate)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
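For reference, with `staircase=True` the `ExponentialDecay` schedule multiplies the learning rate by `decay_rate` once per completed window of `decay_steps` steps. It can be reproduced with plain arithmetic (an illustrative sketch only; note that the code above creates the schedule but hands Adam the constant `initial_learning_rate`):

```python
initial_learning_rate = 1e-4
decay_steps = 30
decay_rate = 0.92

def staircase_lr(step):
    # with staircase=True, the exponent only increases every decay_steps steps
    return initial_learning_rate * decay_rate ** (step // decay_steps)

print(staircase_lr(0))   # no decay applied yet
print(staircase_lr(29))  # still within the first window
print(staircase_lr(30))  # decayed once: 1e-4 * 0.92
print(staircase_lr(60))  # decayed twice: 1e-4 * 0.92 ** 2
```

With 30 steps per epoch in this run, `decay_steps=30` amounts to one decay per epoch.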

九、Training the Model

epochs = 20

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)

Output:

Epoch 1/20
30/30 [==============================] - 38s 592ms/step - loss: 1.3814 - accuracy: 0.2573 - val_loss: 1.3019 - val_accuracy: 0.3083
Epoch 2/20
30/30 [==============================] - 15s 486ms/step - loss: 1.0376 - accuracy: 0.4719 - val_loss: 0.6470 - val_accuracy: 0.7458
Epoch 3/20
30/30 [==============================] - 14s 475ms/step - loss: 0.6289 - accuracy: 0.6542 - val_loss: 0.4882 - val_accuracy: 0.7500
Epoch 4/20
30/30 [==============================] - 15s 485ms/step - loss: 0.4762 - accuracy: 0.7979 - val_loss: 1.0989 - val_accuracy: 0.8000
Epoch 5/20
30/30 [==============================] - 14s 479ms/step - loss: 0.6664 - accuracy: 0.7260 - val_loss: 0.5444 - val_accuracy: 0.7750
Epoch 6/20
30/30 [==============================] - 14s 474ms/step - loss: 0.3893 - accuracy: 0.8448 - val_loss: 0.2358 - val_accuracy: 0.8875
Epoch 7/20
30/30 [==============================] - 14s 476ms/step - loss: 0.3163 - accuracy: 0.8969 - val_loss: 0.3107 - val_accuracy: 0.8667
Epoch 8/20
30/30 [==============================] - 14s 474ms/step - loss: 0.2634 - accuracy: 0.9062 - val_loss: 0.1829 - val_accuracy: 0.9333
Epoch 9/20
30/30 [==============================] - 14s 476ms/step - loss: 0.1136 - accuracy: 0.9646 - val_loss: 0.1342 - val_accuracy: 0.9458
Epoch 10/20
30/30 [==============================] - 14s 477ms/step - loss: 0.0828 - accuracy: 0.9760 - val_loss: 0.0664 - val_accuracy: 0.9833
Epoch 11/20
30/30 [==============================] - 14s 476ms/step - loss: 0.0683 - accuracy: 0.9729 - val_loss: 0.2063 - val_accuracy: 0.9458
Epoch 12/20
30/30 [==============================] - 14s 473ms/step - loss: 0.0537 - accuracy: 0.9823 - val_loss: 0.0288 - val_accuracy: 0.9917
Epoch 13/20
30/30 [==============================] - 14s 472ms/step - loss: 0.0404 - accuracy: 0.9865 - val_loss: 0.2180 - val_accuracy: 0.9458
Epoch 14/20
30/30 [==============================] - 14s 472ms/step - loss: 0.0382 - accuracy: 0.9917 - val_loss: 0.0738 - val_accuracy: 0.9750
Epoch 15/20
30/30 [==============================] - 14s 474ms/step - loss: 0.0152 - accuracy: 0.9969 - val_loss: 0.0499 - val_accuracy: 0.9750
Epoch 16/20
30/30 [==============================] - 15s 485ms/step - loss: 0.3555 - accuracy: 0.9167 - val_loss: 0.0507 - val_accuracy: 0.9875
Epoch 17/20
30/30 [==============================] - 15s 485ms/step - loss: 0.1555 - accuracy: 0.9552 - val_loss: 0.1155 - val_accuracy: 0.9667
Epoch 18/20
30/30 [==============================] - 15s 489ms/step - loss: 0.0767 - accuracy: 0.9688 - val_loss: 0.0613 - val_accuracy: 0.9875
Epoch 19/20
30/30 [==============================] - 15s 482ms/step - loss: 0.0432 - accuracy: 0.9812 - val_loss: 0.0915 - val_accuracy: 0.9750
Epoch 20/20
30/30 [==============================] - 14s 475ms/step - loss: 0.0367 - accuracy: 0.9906 - val_loss: 0.0337 - val_accuracy: 0.9833

十、Model Evaluation

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Replacing the Fully Connected Layers with Global Average Pooling

  • Drastically reduces the number of parameters (in the original network, the fully connected layers account for roughly 89% of all parameters: 119,562,244 of 134,276,932)
  • Acts as a structural regularizer, helping to prevent overfitting

Model summary and training results after the change:
_________________________________________________________________
Layer (type)                Output Shape              Param #
=================================================================
input_1 (InputLayer)        [(None, 224, 224, 3)]     0
block1_conv1 (Conv2D)       (None, 224, 224, 64)      1792
block1_conv2 (Conv2D)       (None, 224, 224, 64)      36928
block1_pool (MaxPooling2D)  (None, 112, 112, 64)      0
block2_conv1 (Conv2D)       (None, 112, 112, 128)     73856
block2_conv2 (Conv2D)       (None, 112, 112, 128)     147584
block2_pool (MaxPooling2D)  (None, 56, 56, 128)       0
block3_conv1 (Conv2D)       (None, 56, 56, 256)       295168
block3_conv2 (Conv2D)       (None, 56, 56, 256)       590080
block3_conv3 (Conv2D)       (None, 56, 56, 256)       590080
block3_pool (MaxPooling2D)  (None, 28, 28, 256)       0
block4_conv1 (Conv2D)       (None, 28, 28, 512)       1180160
block4_conv2 (Conv2D)       (None, 28, 28, 512)       2359808
block4_conv3 (Conv2D)       (None, 28, 28, 512)       2359808
block4_pool (MaxPooling2D)  (None, 14, 14, 512)       0
block5_conv1 (Conv2D)       (None, 14, 14, 512)       2359808
block5_conv2 (Conv2D)       (None, 14, 14, 512)       2359808
block5_conv3 (Conv2D)       (None, 14, 14, 512)       2359808
block5_pool (MaxPooling2D)  (None, 7, 7, 512)         0
global_average_pooling2d (GlobalAveragePooling2D)  (None, 512)  0
predictions (Dense)         (None, 4)                 2052
=================================================================
Total params: 14,716,740
Trainable params: 14,716,740
Non-trainable params: 0
_________________________________________________________________
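The parameter savings shown in the two summaries can be verified by hand: the convolutional backbone is identical in both variants, the three Dense layers of the original head are replaced by a parameter-free GlobalAveragePooling2D, and only a small 512 → 4 Dense layer remains (numbers below are taken from the summaries in this post):

```python
# Convolutional backbone, identical in both variants
# (first summary total minus its fully connected head)
conv_params = 14_714_688

# Original head on the flattened 7*7*512 feature map: fc1, fc2, predictions
fc_head = (7*7*512 * 4096 + 4096) + (4096 * 4096 + 4096) + (4096 * 4 + 4)

# GAP head: GlobalAveragePooling2D has no parameters, then a 512 -> 4 Dense layer
gap_head = 512 * 4 + 4

print(conv_params + fc_head)   # 134276932, matches the first summary
print(conv_params + gap_head)  # 14716740, matches the second summary
print(round(fc_head / (conv_params + fc_head), 2))  # FC head share: 0.89
```

This is where the roughly 89% figure for the fully connected head's share of the parameters comes from.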
Epoch 1/20
30/30 [==============================] - 36s 561ms/step - loss: 1.3824 - accuracy: 0.2552 - val_loss: 1.3368 - val_accuracy: 0.2125
Epoch 2/20
30/30 [==============================] - 14s 451ms/step - loss: 1.2286 - accuracy: 0.3667 - val_loss: 0.9773 - val_accuracy: 0.5500
Epoch 3/20
30/30 [==============================] - 14s 452ms/step - loss: 0.8348 - accuracy: 0.6021 - val_loss: 0.7338 - val_accuracy: 0.6625
Epoch 4/20
30/30 [==============================] - 14s 450ms/step - loss: 0.6489 - accuracy: 0.7333 - val_loss: 0.8191 - val_accuracy: 0.6542
Epoch 5/20
30/30 [==============================] - 14s 451ms/step - loss: 0.6889 - accuracy: 0.7188 - val_loss: 0.4738 - val_accuracy: 0.8167
Epoch 6/20
30/30 [==============================] - 14s 452ms/step - loss: 0.3798 - accuracy: 0.8479 - val_loss: 0.3068 - val_accuracy: 0.8667
Epoch 7/20
30/30 [==============================] - 14s 453ms/step - loss: 0.3275 - accuracy: 0.8906 - val_loss: 0.2464 - val_accuracy: 0.9000
Epoch 8/20
30/30 [==============================] - 14s 460ms/step - loss: 0.4658 - accuracy: 0.8271 - val_loss: 0.6661 - val_accuracy: 0.7500
Epoch 9/20
30/30 [==============================] - 14s 462ms/step - loss: 0.2678 - accuracy: 0.9031 - val_loss: 0.2194 - val_accuracy: 0.9208
Epoch 10/20
30/30 [==============================] - 14s 456ms/step - loss: 0.2523 - accuracy: 0.9187 - val_loss: 0.2138 - val_accuracy: 0.9250
Epoch 11/20
30/30 [==============================] - 14s 460ms/step - loss: 0.1870 - accuracy: 0.9354 - val_loss: 0.2064 - val_accuracy: 0.9125
Epoch 12/20
30/30 [==============================] - 14s 456ms/step - loss: 0.2718 - accuracy: 0.9135 - val_loss: 0.6631 - val_accuracy: 0.7500
Epoch 13/20
30/30 [==============================] - 14s 458ms/step - loss: 0.3490 - accuracy: 0.8740 - val_loss: 0.1596 - val_accuracy: 0.9458
Epoch 14/20
30/30 [==============================] - 14s 463ms/step - loss: 0.1525 - accuracy: 0.9563 - val_loss: 0.1226 - val_accuracy: 0.9625
Epoch 15/20
30/30 [==============================] - 14s 454ms/step - loss: 0.1136 - accuracy: 0.9656 - val_loss: 0.2463 - val_accuracy: 0.8958
Epoch 16/20
30/30 [==============================] - 14s 453ms/step - loss: 0.0945 - accuracy: 0.9646 - val_loss: 0.2166 - val_accuracy: 0.9250
Epoch 17/20
30/30 [==============================] - 14s 453ms/step - loss: 0.1903 - accuracy: 0.9333 - val_loss: 0.0848 - val_accuracy: 0.9625
Epoch 18/20
30/30 [==============================] - 14s 455ms/step - loss: 0.1039 - accuracy: 0.9729 - val_loss: 0.1146 - val_accuracy: 0.9542
Epoch 19/20
30/30 [==============================] - 14s 453ms/step - loss: 0.0801 - accuracy: 0.9781 - val_loss: 0.0763 - val_accuracy: 0.9708
Epoch 20/20
30/30 [==============================] - 14s 453ms/step - loss: 0.0769 - accuracy: 0.9750 - val_loss: 0.0492 - val_accuracy: 0.9708

十一、Summary

       This week I used the TensorFlow framework to build a VGG16 network for coffee bean classification, learned how to construct the VGG16 model, and learned how to make the model lighter without sacrificing accuracy: replacing the fully connected layers with global average pooling drastically reduces the parameter count.
