Week R3: Heart Disease Prediction with an RNN (TensorFlow)

>- **🍨 This article is a study-log post from the [🔗365天深度学习训练营] program**
>- **🍖 Original author: [K同学啊]**

🍺 Requirements:

  1. Find and fix the problems in the Week 8 program (the answers are given in this article).
  2. Understand how a recurrent neural network (RNN) is built.
  3. Reach 87% accuracy on the test set.

🍻 Stretch goal (optional):

  1. Reach 89% accuracy on the test set.

Previous posts in this series: Deep Learning Summary

🚀 My environment:

  • Language: Python 3.11.7
  • IDE: Jupyter Notebook
  • Deep learning framework: TensorFlow 2.13.0

The code flowchart is shown below:

I. What Is an RNN

A traditional neural network has a fairly simple structure: input layer – hidden layer – output layer.

The biggest difference between an RNN and a traditional network is that at every step the RNN carries the previous step's output into the next step's hidden layer, so past and present are trained together. As shown below:

Here is a concrete example of how an RNN works. A user says "what time is it?"; the network first splits the sentence into five basic units (four words plus a question mark).

The five units are then fed into the RNN in order. First "what" goes in, producing output 01.

Next, "time" goes in, producing output 02.

Notice that when "time" is fed in, the earlier output from "what" also influences output 02 (in the figure, half of the hidden layer is black).

By the same logic, every earlier input influences all later outputs (the final circle contains all of the earlier colors).

When the network classifies the intent, it only needs the output of the final step, 05, as shown below:
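The unrolling described above can be sketched in a few lines of NumPy (a toy sketch with made-up shapes and random weights, not the model built below): the same weight matrices are reused at every step, and each step's hidden state mixes the current input with the previous hidden state, which is how earlier words end up influencing the final output.

```python
import numpy as np

rng = np.random.default_rng(42)

input_dim, units = 3, 4
W_x = rng.normal(size=(input_dim, units))  # input -> hidden weights (shared across all steps)
W_h = rng.normal(size=(units, units))      # hidden -> hidden (recurrent) weights
b = np.zeros(units)

def rnn_forward(sequence):
    """Run a SimpleRNN-style forward pass; return the final hidden state."""
    h = np.zeros(units)  # initial hidden state
    for x_t in sequence:  # one step per token ("what", "time", ...)
        h = np.tanh(x_t @ W_x + h @ W_h + b)  # new state depends on input AND previous state
    return h

sequence = rng.normal(size=(5, input_dim))  # five steps, e.g. four words + "?"
out = rnn_forward(sequence)
print(out.shape)  # (4,)
```

Perturbing the very first input changes the final hidden state, mirroring the figure where output 05 carries information from every earlier word.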

II. Preparation

1. Set up the GPU

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")
gpus

Output:

[]

My TensorFlow install does not include GPU support, so the list is empty.

2. Load the data

About the dataset:

●1) age: age in years
●2) sex: sex
●3) cp: chest pain type (4 values)
●4) trestbps: resting blood pressure
●5) chol: serum cholesterol (mg/dl)
●6) fbs: fasting blood sugar > 120 mg/dl
●7) restecg: resting electrocardiographic results (values 0, 1, 2)
●8) thalach: maximum heart rate achieved
●9) exang: exercise-induced angina
●10) oldpeak: ST depression induced by exercise relative to rest
●11) slope: slope of the peak exercise ST segment
●12) ca: number of major vessels (0–3) colored by fluoroscopy
●13) thal: 0 = normal; 1 = fixed defect; 2 = reversible defect
●14) target: 0 = lower chance of a heart attack, 1 = higher chance of a heart attack

import pandas as pd
import numpy as np

df = pd.read_csv(r"D:\THE MNIST DATABASE\RNN\R3\heart.csv")  # raw string so the backslashes are not treated as escapes
df

Output:

3. Check the data

# check for missing values
df.isnull().sum()

Output:

age         0
sex         0
cp          0
trestbps    0
chol        0
fbs         0
restecg     0
thalach     0
exang       0
oldpeak     0
slope       0
ca          0
thal        0
target      0
dtype: int64

III. Data Preprocessing

1. Split training and test sets

On the relationship between the test set and the validation set:

(1) The validation set does not take part in the gradient-descent training process; strictly speaking, it is never used to update the model's parameters.

(2) In a broader sense, though, the validation set does take part in a "manual tuning" loop: based on how the model performs on the validation data after each epoch, we decide whether to stop training early, and we adjust hyperparameters such as the learning rate and batch size according to how performance evolves.

(3) So we can say the validation set does participate in training, but without making the model overfit it.
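The early-stop decision in point (2) boils down to a patience counter. A minimal sketch in plain Python (the helper name is mine; in Keras the same idea is packaged as the tf.keras.callbacks.EarlyStopping callback):

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch index at which training would stop, or None.

    Training stops once the validation loss has failed to improve
    on its best value for `patience` consecutive epochs.
    """
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch
    return None

# validation loss improves, then stalls for 3 epochs -> stop at epoch 5
print(early_stop_epoch([0.60, 0.50, 0.45, 0.46, 0.47, 0.48]))  # 5
```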

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

x = df.iloc[:, :-1]
y = df.iloc[:, -1]

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=1)
x_train.shape, y_train.shape

Output:

((272, 13), (272,))

2. Standardization

# standardize each feature column to a standard normal distribution
# note: standardization is applied per column
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)

x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], 1)
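What fit_transform and transform do here can be reproduced in a few lines of NumPy (a sketch on made-up data; the real StandardScaler adds more bookkeeping): the column-wise mean and standard deviation are estimated from the training data only, then applied unchanged to the test data so that no test-set statistics leak into preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=5.0, scale=2.0, size=(272, 13))  # stand-in for x_train
test = rng.normal(loc=5.0, scale=2.0, size=(31, 13))    # stand-in for x_test

# "fit": column-wise mean and std from the TRAINING data only
mu = train.mean(axis=0)
sigma = train.std(axis=0)

# "transform": apply the training statistics to both splits
train_std = (train - mu) / sigma
test_std = (test - mu) / sigma

print(np.allclose(train_std.mean(axis=0), 0.0, atol=1e-9))  # True
print(np.allclose(train_std.std(axis=0), 1.0))              # True
```

The test columns end up close to, but not exactly, zero mean and unit variance, which is expected: they were scaled with the training set's statistics.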

IV. Build the RNN Model

Function signature:

tf.keras.layers.SimpleRNN(units, activation='tanh', use_bias=True,
    kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal',
    bias_initializer='zeros', kernel_regularizer=None, recurrent_regularizer=None,
    bias_regularizer=None, activity_regularizer=None, kernel_constraint=None,
    recurrent_constraint=None, bias_constraint=None, dropout=0.0,
    recurrent_dropout=0.0, return_sequences=False, return_state=False,
    go_backwards=False, stateful=False, unroll=False, **kwargs)

Key parameters:

●units: positive integer, dimensionality of the output space.
●activation: activation function to use. Default: hyperbolic tangent (tanh). If None is passed, no activation is applied (i.e. linear activation: a(x) = x).
●use_bias: boolean, whether the layer uses a bias vector.
●kernel_initializer: initializer for the kernel weight matrix, used for the linear transformation of the inputs (see initializers).
●recurrent_initializer: initializer for the recurrent_kernel weight matrix, used for the linear transformation of the recurrent state (see initializers).
●bias_initializer: initializer for the bias vector (see initializers).
●dropout: float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, SimpleRNN

model = Sequential()
model.add(SimpleRNN(200, input_shape=(13, 1), activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()

Output:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 simple_rnn (SimpleRNN)      (None, 200)               40400
 dense (Dense)               (None, 100)               20100
 dense_1 (Dense)             (None, 1)                 101
=================================================================
Total params: 60601 (236.72 KB)
Trainable params: 60601 (236.72 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
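The parameter counts in this summary can be verified by hand: a SimpleRNN layer holds an input kernel, a recurrent kernel, and a bias, giving units × (input_dim + units + 1) parameters, while a Dense layer has out × (in + 1). A quick check (helper names are mine):

```python
def simple_rnn_params(units, input_dim):
    # kernel (input_dim x units) + recurrent kernel (units x units) + bias (units)
    return units * (input_dim + units + 1)

def dense_params(n_in, n_out):
    # weight matrix (n_in x n_out) + bias (n_out)
    return n_out * (n_in + 1)

rnn = simple_rnn_params(200, 1)    # 200 * (1 + 200 + 1) = 40400
d1 = dense_params(200, 100)        # 100 * 201 = 20100
d2 = dense_params(100, 1)          # 1 * 101 = 101
print(rnn, d1, d2, rnn + d1 + d2)  # 40400 20100 101 60601
```

These match the summary line by line, including the 60601 total.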

V. Compile the Model

opt = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
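For reference, binary cross-entropy compares the sigmoid output p with the 0/1 label y as -(y·log p + (1-y)·log(1-p)), averaged over the batch. A NumPy sketch on made-up values (clipping p to avoid log(0), as numerical implementations typically do):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # clip predictions away from 0 and 1 so the logs stay finite
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.3])
print(round(binary_crossentropy(y_true, y_pred), 4))  # 0.1976
```

Confident, correct predictions drive the loss toward 0; confident, wrong ones blow it up, which is exactly the gradient signal a binary classifier needs.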

VI. Train the Model

epochs = 100
history = model.fit(x_train, y_train,
                    epochs=epochs,
                    batch_size=128,
                    validation_data=(x_test, y_test),
                    verbose=1)

Output:

Epoch 1/100
3/3 [==============================] - 1s 140ms/step - loss: 0.6778 - accuracy: 0.7206 - val_loss: 0.6456 - val_accuracy: 0.8710
Epoch 2/100
3/3 [==============================] - 0s 16ms/step - loss: 0.6669 - accuracy: 0.7721 - val_loss: 0.6315 - val_accuracy: 0.8710
Epoch 3/100
3/3 [==============================] - 0s 16ms/step - loss: 0.6576 - accuracy: 0.7794 - val_loss: 0.6184 - val_accuracy: 0.8387
Epoch 4/100
3/3 [==============================] - 0s 16ms/step - loss: 0.6489 - accuracy: 0.7794 - val_loss: 0.6051 - val_accuracy: 0.8710
Epoch 5/100
3/3 [==============================] - 0s 16ms/step - loss: 0.6399 - accuracy: 0.7721 - val_loss: 0.5921 - val_accuracy: 0.8710
Epoch 6/100
3/3 [==============================] - 0s 16ms/step - loss: 0.6309 - accuracy: 0.7757 - val_loss: 0.5785 - val_accuracy: 0.8710
Epoch 7/100
3/3 [==============================] - 0s 16ms/step - loss: 0.6221 - accuracy: 0.7757 - val_loss: 0.5644 - val_accuracy: 0.8710
Epoch 8/100
3/3 [==============================] - 0s 16ms/step - loss: 0.6133 - accuracy: 0.7757 - val_loss: 0.5502 - val_accuracy: 0.8710
Epoch 9/100
3/3 [==============================] - 0s 15ms/step - loss: 0.6037 - accuracy: 0.7757 - val_loss: 0.5355 - val_accuracy: 0.8710
Epoch 10/100
3/3 [==============================] - 0s 17ms/step - loss: 0.5932 - accuracy: 0.7794 - val_loss: 0.5197 - val_accuracy: 0.8387
Epoch 11/100
3/3 [==============================] - 0s 16ms/step - loss: 0.5832 - accuracy: 0.7831 - val_loss: 0.5031 - val_accuracy: 0.8387
Epoch 12/100
3/3 [==============================] - 0s 17ms/step - loss: 0.5717 - accuracy: 0.7941 - val_loss: 0.4862 - val_accuracy: 0.8387
Epoch 13/100
3/3 [==============================] - 0s 16ms/step - loss: 0.5595 - accuracy: 0.7941 - val_loss: 0.4670 - val_accuracy: 0.8387
Epoch 14/100
3/3 [==============================] - 0s 14ms/step - loss: 0.5462 - accuracy: 0.8015 - val_loss: 0.4456 - val_accuracy: 0.8710
Epoch 15/100
3/3 [==============================] - 0s 16ms/step - loss: 0.5318 - accuracy: 0.8088 - val_loss: 0.4226 - val_accuracy: 0.8710
Epoch 16/100
3/3 [==============================] - 0s 16ms/step - loss: 0.5166 - accuracy: 0.8015 - val_loss: 0.3984 - val_accuracy: 0.8387
Epoch 17/100
3/3 [==============================] - 0s 16ms/step - loss: 0.5000 - accuracy: 0.8015 - val_loss: 0.3757 - val_accuracy: 0.8387
Epoch 18/100
3/3 [==============================] - 0s 16ms/step - loss: 0.4845 - accuracy: 0.8088 - val_loss: 0.3550 - val_accuracy: 0.8387
Epoch 19/100
3/3 [==============================] - 0s 16ms/step - loss: 0.4694 - accuracy: 0.8088 - val_loss: 0.3355 - val_accuracy: 0.8710
Epoch 20/100
3/3 [==============================] - 0s 17ms/step - loss: 0.4545 - accuracy: 0.8015 - val_loss: 0.3177 - val_accuracy: 0.8710
Epoch 21/100
3/3 [==============================] - 0s 16ms/step - loss: 0.4425 - accuracy: 0.8015 - val_loss: 0.3035 - val_accuracy: 0.8710
Epoch 22/100
3/3 [==============================] - 0s 15ms/step - loss: 0.4350 - accuracy: 0.8015 - val_loss: 0.2928 - val_accuracy: 0.8710
Epoch 23/100
3/3 [==============================] - 0s 15ms/step - loss: 0.4264 - accuracy: 0.8015 - val_loss: 0.2856 - val_accuracy: 0.8710
Epoch 24/100
3/3 [==============================] - 0s 16ms/step - loss: 0.4199 - accuracy: 0.7978 - val_loss: 0.2840 - val_accuracy: 0.9032
Epoch 25/100
3/3 [==============================] - 0s 17ms/step - loss: 0.4175 - accuracy: 0.8088 - val_loss: 0.2795 - val_accuracy: 0.9032
Epoch 26/100
3/3 [==============================] - 0s 15ms/step - loss: 0.4127 - accuracy: 0.8051 - val_loss: 0.2726 - val_accuracy: 0.9032
Epoch 27/100
3/3 [==============================] - 0s 16ms/step - loss: 0.4080 - accuracy: 0.8088 - val_loss: 0.2675 - val_accuracy: 0.8710
Epoch 28/100
3/3 [==============================] - 0s 17ms/step - loss: 0.4088 - accuracy: 0.8125 - val_loss: 0.2663 - val_accuracy: 0.9032
Epoch 29/100
3/3 [==============================] - 0s 16ms/step - loss: 0.4026 - accuracy: 0.8235 - val_loss: 0.2671 - val_accuracy: 0.9032
Epoch 30/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3955 - accuracy: 0.8125 - val_loss: 0.2701 - val_accuracy: 0.9032
Epoch 31/100
3/3 [==============================] - 0s 15ms/step - loss: 0.3929 - accuracy: 0.8162 - val_loss: 0.2703 - val_accuracy: 0.9032
Epoch 32/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3905 - accuracy: 0.8199 - val_loss: 0.2686 - val_accuracy: 0.9032
Epoch 33/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3877 - accuracy: 0.8199 - val_loss: 0.2631 - val_accuracy: 0.9032
Epoch 34/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3832 - accuracy: 0.8235 - val_loss: 0.2568 - val_accuracy: 0.9032
Epoch 35/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3843 - accuracy: 0.8162 - val_loss: 0.2560 - val_accuracy: 0.9032
Epoch 36/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3809 - accuracy: 0.8199 - val_loss: 0.2577 - val_accuracy: 0.9032
Epoch 37/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3752 - accuracy: 0.8199 - val_loss: 0.2602 - val_accuracy: 0.9032
Epoch 38/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3718 - accuracy: 0.8309 - val_loss: 0.2629 - val_accuracy: 0.9032
Epoch 39/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3694 - accuracy: 0.8235 - val_loss: 0.2622 - val_accuracy: 0.9032
Epoch 40/100
3/3 [==============================] - 0s 15ms/step - loss: 0.3666 - accuracy: 0.8272 - val_loss: 0.2601 - val_accuracy: 0.9032
Epoch 41/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3655 - accuracy: 0.8309 - val_loss: 0.2594 - val_accuracy: 0.9032
Epoch 42/100
3/3 [==============================] - 0s 15ms/step - loss: 0.3643 - accuracy: 0.8346 - val_loss: 0.2587 - val_accuracy: 0.9032
Epoch 43/100
3/3 [==============================] - 0s 19ms/step - loss: 0.3600 - accuracy: 0.8382 - val_loss: 0.2610 - val_accuracy: 0.9032
Epoch 44/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3568 - accuracy: 0.8382 - val_loss: 0.2637 - val_accuracy: 0.9032
Epoch 45/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3560 - accuracy: 0.8346 - val_loss: 0.2608 - val_accuracy: 0.9032
Epoch 46/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3527 - accuracy: 0.8382 - val_loss: 0.2563 - val_accuracy: 0.9032
Epoch 47/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3506 - accuracy: 0.8382 - val_loss: 0.2541 - val_accuracy: 0.9032
Epoch 48/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3482 - accuracy: 0.8419 - val_loss: 0.2542 - val_accuracy: 0.9032
Epoch 49/100
3/3 [==============================] - 0s 14ms/step - loss: 0.3457 - accuracy: 0.8419 - val_loss: 0.2560 - val_accuracy: 0.9032
Epoch 50/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3418 - accuracy: 0.8456 - val_loss: 0.2558 - val_accuracy: 0.9032
Epoch 51/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3401 - accuracy: 0.8529 - val_loss: 0.2554 - val_accuracy: 0.9032
Epoch 52/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3381 - accuracy: 0.8529 - val_loss: 0.2577 - val_accuracy: 0.9032
Epoch 53/100
3/3 [==============================] - 0s 18ms/step - loss: 0.3354 - accuracy: 0.8529 - val_loss: 0.2608 - val_accuracy: 0.9032
Epoch 54/100
3/3 [==============================] - 0s 13ms/step - loss: 0.3337 - accuracy: 0.8603 - val_loss: 0.2611 - val_accuracy: 0.9032
Epoch 55/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3318 - accuracy: 0.8603 - val_loss: 0.2628 - val_accuracy: 0.9032
Epoch 56/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3302 - accuracy: 0.8640 - val_loss: 0.2666 - val_accuracy: 0.9032
Epoch 57/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3292 - accuracy: 0.8603 - val_loss: 0.2669 - val_accuracy: 0.9032
Epoch 58/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3261 - accuracy: 0.8640 - val_loss: 0.2655 - val_accuracy: 0.9032
Epoch 59/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3231 - accuracy: 0.8640 - val_loss: 0.2669 - val_accuracy: 0.9032
Epoch 60/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3219 - accuracy: 0.8640 - val_loss: 0.2701 - val_accuracy: 0.9032
Epoch 61/100
3/3 [==============================] - 0s 15ms/step - loss: 0.3207 - accuracy: 0.8676 - val_loss: 0.2714 - val_accuracy: 0.9032
Epoch 62/100
3/3 [==============================] - 0s 18ms/step - loss: 0.3168 - accuracy: 0.8640 - val_loss: 0.2727 - val_accuracy: 0.9032
Epoch 63/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3150 - accuracy: 0.8640 - val_loss: 0.2709 - val_accuracy: 0.9032
Epoch 64/100
3/3 [==============================] - 0s 16ms/step - loss: 0.3139 - accuracy: 0.8860 - val_loss: 0.2688 - val_accuracy: 0.8710
Epoch 65/100
3/3 [==============================] - 0s 15ms/step - loss: 0.3130 - accuracy: 0.8934 - val_loss: 0.2700 - val_accuracy: 0.8710
Epoch 66/100
3/3 [==============================] - 0s 15ms/step - loss: 0.3118 - accuracy: 0.8824 - val_loss: 0.2725 - val_accuracy: 0.8710
Epoch 67/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3077 - accuracy: 0.8897 - val_loss: 0.2765 - val_accuracy: 0.8710
Epoch 68/100
3/3 [==============================] - 0s 18ms/step - loss: 0.3050 - accuracy: 0.8934 - val_loss: 0.2801 - val_accuracy: 0.9032
Epoch 69/100
3/3 [==============================] - 0s 17ms/step - loss: 0.3022 - accuracy: 0.8897 - val_loss: 0.2821 - val_accuracy: 0.9032
Epoch 70/100
3/3 [==============================] - 0s 13ms/step - loss: 0.3002 - accuracy: 0.8897 - val_loss: 0.2837 - val_accuracy: 0.9032
Epoch 71/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2981 - accuracy: 0.8934 - val_loss: 0.2852 - val_accuracy: 0.8710
Epoch 72/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2973 - accuracy: 0.8860 - val_loss: 0.2870 - val_accuracy: 0.8710
Epoch 73/100
3/3 [==============================] - 0s 12ms/step - loss: 0.2962 - accuracy: 0.8860 - val_loss: 0.2873 - val_accuracy: 0.8710
Epoch 74/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2954 - accuracy: 0.8897 - val_loss: 0.2849 - val_accuracy: 0.8710
Epoch 75/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2909 - accuracy: 0.8860 - val_loss: 0.2865 - val_accuracy: 0.9032
Epoch 76/100
3/3 [==============================] - 0s 16ms/step - loss: 0.2867 - accuracy: 0.8897 - val_loss: 0.2942 - val_accuracy: 0.9032
Epoch 77/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2888 - accuracy: 0.8824 - val_loss: 0.3043 - val_accuracy: 0.9032
Epoch 78/100
3/3 [==============================] - 0s 15ms/step - loss: 0.2932 - accuracy: 0.8713 - val_loss: 0.3046 - val_accuracy: 0.9032
Epoch 79/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2871 - accuracy: 0.8824 - val_loss: 0.2997 - val_accuracy: 0.8710
Epoch 80/100
3/3 [==============================] - 0s 18ms/step - loss: 0.2799 - accuracy: 0.8787 - val_loss: 0.2997 - val_accuracy: 0.8387
Epoch 81/100
3/3 [==============================] - 0s 16ms/step - loss: 0.2790 - accuracy: 0.8860 - val_loss: 0.2980 - val_accuracy: 0.8387
Epoch 82/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2782 - accuracy: 0.8934 - val_loss: 0.2978 - val_accuracy: 0.8387
Epoch 83/100
3/3 [==============================] - 0s 18ms/step - loss: 0.2758 - accuracy: 0.9007 - val_loss: 0.3000 - val_accuracy: 0.8710
Epoch 84/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2752 - accuracy: 0.8897 - val_loss: 0.3061 - val_accuracy: 0.9032
Epoch 85/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2794 - accuracy: 0.8787 - val_loss: 0.3143 - val_accuracy: 0.9032
Epoch 86/100
3/3 [==============================] - 0s 14ms/step - loss: 0.2809 - accuracy: 0.8750 - val_loss: 0.3126 - val_accuracy: 0.9032
Epoch 87/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2748 - accuracy: 0.8824 - val_loss: 0.3098 - val_accuracy: 0.8710
Epoch 88/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2704 - accuracy: 0.8860 - val_loss: 0.3121 - val_accuracy: 0.8387
Epoch 89/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2697 - accuracy: 0.8860 - val_loss: 0.3158 - val_accuracy: 0.8387
Epoch 90/100
3/3 [==============================] - 0s 16ms/step - loss: 0.2668 - accuracy: 0.8897 - val_loss: 0.3152 - val_accuracy: 0.8387
Epoch 91/100
3/3 [==============================] - 0s 14ms/step - loss: 0.2619 - accuracy: 0.8971 - val_loss: 0.3177 - val_accuracy: 0.8387
Epoch 92/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2609 - accuracy: 0.8897 - val_loss: 0.3207 - val_accuracy: 0.8387
Epoch 93/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2616 - accuracy: 0.8934 - val_loss: 0.3216 - val_accuracy: 0.8387
Epoch 94/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2604 - accuracy: 0.8897 - val_loss: 0.3249 - val_accuracy: 0.8387
Epoch 95/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2569 - accuracy: 0.8971 - val_loss: 0.3252 - val_accuracy: 0.8387
Epoch 96/100
3/3 [==============================] - 0s 13ms/step - loss: 0.2556 - accuracy: 0.9007 - val_loss: 0.3199 - val_accuracy: 0.8387
Epoch 97/100
3/3 [==============================] - 0s 17ms/step - loss: 0.2542 - accuracy: 0.9081 - val_loss: 0.3121 - val_accuracy: 0.8387
Epoch 98/100
3/3 [==============================] - 0s 16ms/step - loss: 0.2553 - accuracy: 0.9081 - val_loss: 0.3088 - val_accuracy: 0.8387
Epoch 99/100
3/3 [==============================] - 0s 16ms/step - loss: 0.2509 - accuracy: 0.9081 - val_loss: 0.3095 - val_accuracy: 0.9032
Epoch 100/100
3/3 [==============================] - 0s 16ms/step - loss: 0.2504 - accuracy: 0.8971 - val_loss: 0.3119 - val_accuracy: 0.9032

VII. Model Evaluation

import matplotlib.pyplot as plt

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(14, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')

plt.show()

Output:

scores = model.evaluate(x_test, y_test, verbose=0)
print("%s:%.2f%%" % (model.metrics_names[1], scores[1] * 100))

Output:

accuracy:90.32%
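The accuracy that model.evaluate reports is equivalent to thresholding the sigmoid outputs at 0.5 and comparing against the labels. A NumPy sketch with made-up predictions (not the model's actual outputs):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_prob = np.array([0.92, 0.08, 0.61, 0.45, 0.30, 0.77, 0.52, 0.12, 0.85, 0.40])

y_pred = (y_prob > 0.5).astype(int)   # threshold the sigmoid output
accuracy = (y_pred == y_true).mean()  # fraction of matching labels
print(f"accuracy:{accuracy * 100:.2f}%")  # accuracy:80.00%
```

For a medical task like this one, it can also be worth looking beyond accuracy at false negatives specifically (cases with the disease that are predicted healthy), since the two error types carry very different costs.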

VIII. Takeaways

I learned what an RNN is and built a simple RNN model in TensorFlow. This run happened to reach 90% test accuracy. I also tried adjusting the learning rate and other settings to push the accuracy higher, but the results fell short of expectations; I will revisit this with more sophisticated network architectures later in the course.

