Problem C of the 2023 "Higher Education Press Cup" China Undergraduate Mathematical Contest in Modeling: A Python Code Walkthrough

Contents

    • Problem 1
      • 1.1 Different categories and individual items of vegetables may be correlated with one another; analyze the distribution patterns of, and the relationships among, the sales volumes of the vegetable categories and individual items.
        • Data preprocessing
          • Merging the data
          • Extracting year and month
          • Monthly mean sales per vegetable category
        • Seasonal time series decomposition
          • STL decomposition
          • Additive decomposition
          • Multiplicative decomposition
        • ARIMA
        • LSTM

import pandas as pd

path = '/home/shiyu/Desktop/path_acdemic/ant/数模/历年题目/2023/'
d1 = pd.read_excel(path + '附件1.xlsx')
d2 = pd.read_excel(path + '附件2.xlsx')
d3 = pd.read_excel(path + '附件3.xlsx')
d4 = pd.read_excel(path + '附件4.xlsx',sheet_name='Sheet1')
print(d1.shape)
print(d2.shape)
print(d3.shape)
print(d4.shape)
(251, 4)
(878503, 7)
(55982, 3)
(251, 3)

Problem 1

1.1 Different categories and individual items of vegetables may be correlated with one another; analyze the distribution patterns of, and the relationships among, the sales volumes of the vegetable categories and individual items.


Data preprocessing
Merging the data
d1['分类名称'].value_counts()
分类名称
花叶类      100
食用菌       72
辣椒类       45
水生根茎类     19
茄类        10
花菜类        5
Name: count, dtype: int64
import pandas as pd
d12 = pd.merge(d2, d1, on='单品编码')
d3.columns = ['销售日期'] + list(d3.columns[1:3])
d123 = pd.merge(d12, d3, on=['单品编码','销售日期'])
d1234 = pd.merge(d123, d4, on=['单品编码','单品名称'])
d1234.shape
(878503, 12)
Extracting year and month
d1234['月份'] = d1234['销售日期'].dt.month
d1234['月份'] = d1234['月份'].astype(str).str.zfill(2)
d1234['年份'] = d1234['销售日期'].dt.year
d1234['年月'] = d1234['年份'].astype(str) + '-' + d1234['月份'].astype(str) 
Monthly mean sales per vegetable category
def my_group(category):
    d_sub = d1234[d1234['分类名称'] == category]
    # monthly mean of the sales volume for this category
    sale_by_month = pd.DataFrame(d_sub.groupby(['年月'])['销量(千克)'].mean())
    sale_by_month.columns = [category + c for c in sale_by_month.columns]
    sale_by_month['年月'] = sale_by_month.index
    return sale_by_month
sale_by_month_leaves = my_group('花叶类')
sale_by_month_mushroom = my_group('食用菌')
sale_by_month_pepper = my_group('辣椒类')
sale_by_month_water = my_group('水生根茎类')
sale_by_month_eggplant = my_group('茄类')
sale_by_month_cauliflower = my_group('花菜类')
from functools import reduce
dfs = [sale_by_month_leaves, sale_by_month_mushroom, sale_by_month_pepper, sale_by_month_water, sale_by_month_eggplant, sale_by_month_cauliflower]
sale_by_month_all = reduce(lambda left,right: pd.merge(left,right), dfs)
sale_by_month_all.head()
   花叶类销量(千克)       年月  食用菌销量(千克)  辣椒类销量(千克)  水生根茎类销量(千克)  茄类销量(千克)  花菜类销量(千克)
0   0.464680  2020-07   0.308806   0.280185     0.418734  0.580838   0.473726
1   0.483167  2020-08   0.334804   0.309298     0.533321  0.549105   0.455973
2   0.500742  2020-09   0.351644   0.301242     0.557913  0.543880   0.464073
3   0.529107  2020-10   0.458446   0.292424     0.651536  0.536834   0.510383
4   0.625763  2020-11   0.553853   0.322914     0.643466  0.484198   0.535812
df = pd.DataFrame(None, columns=['年月', '销量','蔬菜品类'], index=range(sale_by_month_all.shape[0]*6))
df['销量'] = list(sale_by_month_all.iloc[:,0]) + list(sale_by_month_all.iloc[:,2]) + list(sale_by_month_all.iloc[:,3]) + list(sale_by_month_all.iloc[:,4]) + list(sale_by_month_all.iloc[:,5]) + list(sale_by_month_all.iloc[:,6])
df['年月'] = list(sale_by_month_all.iloc[:,1]) * 6
names = [sale_by_month_all.columns[0]] + list(sale_by_month_all.columns[2:7])
df['蔬菜品类'] = [x for x in names for i in range(sale_by_month_all.shape[0])]
df.head(3)
        年月        销量       蔬菜品类
0  2020-07  0.464680  花叶类销量(千克)
1  2020-08  0.483167  花叶类销量(千克)
2  2020-09  0.500742  花叶类销量(千克)
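The column-by-column reshaping above can also be expressed with pandas.melt. A minimal alternative sketch, assuming the sale_by_month_all frame built above; it produces the same long-format table as the manual construction of df:

# alternative long-format reshape with pandas.melt (illustrative sketch)
value_cols = [c for c in sale_by_month_all.columns if c != '年月']
df_melt = sale_by_month_all.melt(id_vars='年月', value_vars=value_cols,
                                 var_name='蔬菜品类', value_name='销量')
df_melt.head(3)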
import plotly.express as px

fig = px.line(df, x="年月", y="销量", color='蔬菜品类', title='各蔬菜品类月销量随时间变化')
# center title
fig.update_layout(title_x=0.5)
# remove background color
fig.update_layout({
'plot_bgcolor': 'rgba(0, 0, 0, 0)',
'paper_bgcolor': 'rgba(0, 0, 0, 0)',
})
fig.show()

[Figure: line chart of monthly mean sales for each vegetable category over time]
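The question also asks about the relationships among categories. A minimal sketch of such a correlation analysis on the monthly means computed above (an illustrative addition that only uses the sale_by_month_all frame):

# pairwise correlation of monthly mean sales across categories (illustrative sketch)
import plotly.express as px
corr_cols = [c for c in sale_by_month_all.columns if c != '年月']
corr = sale_by_month_all[corr_cols].astype(float).corr(method='spearman')
print(corr.round(2))

# optional heatmap of the correlation matrix
fig = px.imshow(corr, title='各蔬菜品类月销量相关系数')
fig.update_layout(title_x=0.5)
fig.show()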

Seasonal time series decomposition
sale_by_month_all.head(3)
   花叶类销量(千克)       年月  食用菌销量(千克)  辣椒类销量(千克)  水生根茎类销量(千克)  茄类销量(千克)  花菜类销量(千克)
0   0.464680  2020-07   0.308806   0.280185     0.418734  0.580838   0.473726
1   0.483167  2020-08   0.334804   0.309298     0.533321  0.549105   0.455973
2   0.500742  2020-09   0.351644   0.301242     0.557913  0.543880   0.464073

The decomposition below uses the 水生根茎类 (aquatic root vegetables) series as the example.
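The code below selects this series by position with iloc[:,4]; selecting it by its column name is equivalent and a bit more explicit:

# equivalent selection of the 水生根茎类 series by column name
water_series = sale_by_month_all['水生根茎类销量(千克)']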

STL decomposition

https://www.geo.fu-berlin.de/en/v/soga-py/Advanced-statistics/time-series-analysis/Seasonal-decompositon/STL-decomposition/index.html

import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose, STL
import matplotlib.pyplot as plt

stl = STL(sale_by_month_all.iloc[:,4], period=12)
res = stl.fit()
data = {'trend': res.trend, 'seasonality': res.seasonal, 'residuals': res.resid}
res_stl = pd.DataFrame(data)
res_stl.head()
      trend  seasonality  residuals
0  0.644373    -0.247768   0.022129
1  0.642466    -0.132287   0.023142
2  0.640681    -0.059934  -0.022835
3  0.639011     0.018400  -0.005875
4  0.637452     0.007495  -0.001481
# use a font with Chinese glyphs so that labels render correctly on Linux
plt.rcParams['font.family'] = 'WenQuanYi Micro Hei' 
plt.rcParams['figure.dpi'] = 300
plt.rcParams['savefig.dpi'] = 300

fig = res.plot()

[Figure: STL decomposition of the 水生根茎类 monthly sales (observed, trend, seasonal, residual)]

import scipy.stats as stats

plt.figure(figsize=(18, 6))
stats.probplot(res.resid, dist="norm", plot=plt)
plt.title("QQ-Plot")
plt.show()

[Figure: QQ-plot of the STL residuals]

# histogram plot
plt.figure(figsize=(9, 3))
plt.hist(res.resid)
plt.title("Residuals")
plt.show()

[Figure: histogram of the STL residuals]

Additive decomposition
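The additive model assumes the observed series is the sum of its components, y_t = T_t + S_t + R_t (trend plus seasonal plus residual).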
add = seasonal_decompose(sale_by_month_all.iloc[:,4], period=12, model="additive")
data = {'trend': add.trend, 'seasonality': add.seasonal, 'residuals': add.resid}
res_add = pd.DataFrame(data)
res_add.iloc[6:10,:]
      trend  seasonality  residuals
6  0.619041     0.168495  -0.011543
7  0.622029     0.143684   0.092367
8  0.627388     0.047414   0.054634
9  0.632759    -0.025056   0.054265
add.plot()

[Figure: additive decomposition components]

Multiplicative decomposition
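The multiplicative model instead assumes y_t = T_t × S_t × R_t, which is why the seasonal and residual values in the table below are factors close to 1 rather than deviations around 0.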
multi = seasonal_decompose(sale_by_month_all.iloc[:,4], period=12, model="multiplicative")
data = {'trend': multi.trend, 'seasonality': multi.seasonal, 'residuals': multi.resid}
res_multi = pd.DataFrame(data)
res_multi.iloc[6:10,:]
      trend  seasonality  residuals
6  0.619041     1.259177   0.995523
7  0.622029     1.221443   1.129390
8  0.627388     1.072259   1.084304
9  0.632759     0.963760   1.085500
multi.plot()

[Figure: multiplicative decomposition components]

ARIMA

https://machinelearningmastery.com/arima-for-time-series-forecasting-with-python/

Fit and forecast the trend component obtained from the STL decomposition with an ARIMA model.

res_stl.head()
      trend  seasonality  residuals
0  0.644373    -0.247768   0.022129
1  0.642466    -0.132287   0.023142
2  0.640681    -0.059934  -0.022835
3  0.639011     0.018400  -0.005875
4  0.637452     0.007495  -0.001481
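Since the ARIMA model below is fitted to this trend series with one order of differencing (d = 1), a quick stationarity check can motivate that choice. A minimal sketch using an augmented Dickey-Fuller test on the trend component:

# ADF stationarity check on the STL trend component (illustrative sketch)
from statsmodels.tsa.stattools import adfuller
adf_stat, pvalue, *_ = adfuller(res_stl['trend'])
print('ADF statistic: %.3f, p-value: %.3f' % (adf_stat, pvalue))
# a large p-value suggests the trend is non-stationary, so differencing (d >= 1) is reasonable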
import datetime
from matplotlib import pyplot

series = res_stl['trend']
series.plot()

[Figure: STL trend component over time]

from pandas.plotting import autocorrelation_plot
autocorrelation_plot(series)

[Figure: autocorrelation plot of the trend series]
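To guide the choice of the AR and MA orders, the ACF and PACF of the differenced trend can also be inspected; a minimal sketch (the order (10,1,0) used below is the choice made in this walkthrough):

# ACF/PACF of the once-differenced trend to guide (p, q) selection (illustrative sketch)
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
diffed = series.diff().dropna()
fig, axes = plt.subplots(2, 1, figsize=(9, 6))
plot_acf(diffed, lags=12, ax=axes[0])    # significant lags hint at the MA order q
plot_pacf(diffed, lags=12, ax=axes[1])   # significant lags hint at the AR order p
plt.tight_layout()
plt.show()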

from statsmodels.tsa.arima.model import ARIMA
model = ARIMA(series, order=(10,1,0))
model_fit = model.fit()
print(model_fit.summary())
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                  trend   No. Observations:                   36
Model:                ARIMA(10, 1, 0)   Log Likelihood                 222.352
Date:                Thu, 01 Aug 2024   AIC                           -422.704
Time:                        20:01:12   BIC                           -405.596
Sample:                             0   HQIC                          -416.798
                                 - 36                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1          1.4304      0.096     14.970      0.000       1.243       1.618
ar.L2         -0.2795      0.077     -3.638      0.000      -0.430      -0.129
ar.L3          0.0147      0.078      0.187      0.851      -0.139       0.169
ar.L4         -0.0643      0.045     -1.417      0.156      -0.153       0.025
ar.L5         -0.1505      0.046     -3.291      0.001      -0.240      -0.061
ar.L6         -0.0301      0.071     -0.422      0.673      -0.170       0.110
ar.L7         -0.0039      0.066     -0.059      0.953      -0.132       0.125
ar.L8          0.0468      0.039      1.198      0.231      -0.030       0.123
ar.L9          0.0183      0.034      0.537      0.591      -0.048       0.085
ar.L10         0.0070      0.104      0.067      0.946      -0.198       0.212
sigma2      1.491e-07   1.57e-08      9.498      0.000    1.18e-07     1.8e-07
===================================================================================
Ljung-Box (L1) (Q):                   0.08   Jarque-Bera (JB):               449.77
Prob(Q):                              0.78   Prob(JB):                         0.00
Heteroskedasticity (H):               0.09   Skew:                             3.52
Prob(H) (two-sided):                  0.00   Kurtosis:                        19.09
===================================================================================
from pandas import DataFrame
# line plot of residuals
residuals = DataFrame(model_fit.resid)
residuals.plot()
pyplot.show()

[Figure: line plot of the ARIMA residuals]

# summary stats of residuals
print(residuals.describe())
               0
count  36.000000
mean    0.017927
std     0.107392
min    -0.001907
25%    -0.000042
50%     0.000025
75%     0.000113
max     0.644373
import warnings
warnings.filterwarnings("ignore")
from numpy import sqrt
from sklearn.metrics import mean_squared_error

X = series.values
train, test = X, X
history = [x for x in train]
predictions = list()

# walk-forward validation
for t in range(len(test)):
    model = ARIMA(history, order=(5,1,0))
    model_fit = model.fit()
    output = model_fit.forecast()
    yhat = output[0]
    predictions.append(yhat)
    obs = test[t]
    history.append(obs)
    print('predicted=%f, expected=%f' % (yhat, obs))

# evaluate forecasts
rmse = sqrt(mean_squared_error(test, predictions))
print('Test RMSE: %.3f' % rmse)
predicted=0.746828, expected=0.644373
predicted=0.639646, expected=0.642466
predicted=0.637131, expected=0.640681
predicted=0.636508, expected=0.639011
predicted=0.638260, expected=0.637452
predicted=0.636941, expected=0.636011
predicted=0.635828, expected=0.634695
predicted=0.634524, expected=0.633510
predicted=0.633352, expected=0.632476
predicted=0.632332, expected=0.631610
predicted=0.631483, expected=0.630988
predicted=0.630880, expected=0.630979
predicted=0.630907, expected=0.633298
predicted=0.633330, expected=0.636138
predicted=0.636276, expected=0.639619
predicted=0.639865, expected=0.643978
predicted=0.644346, expected=0.649266
predicted=0.649768, expected=0.655252
predicted=0.655890, expected=0.661718
predicted=0.662507, expected=0.668408
predicted=0.669351, expected=0.675055
predicted=0.676136, expected=0.681361
predicted=0.682541, expected=0.687128
predicted=0.688354, expected=0.692332
predicted=0.693554, expected=0.697262
predicted=0.698456, expected=0.701650
predicted=0.702786, expected=0.705999
predicted=0.707089, expected=0.710305
predicted=0.711366, expected=0.714555
predicted=0.715603, expected=0.718749
predicted=0.719792, expected=0.722890
predicted=0.723909, expected=0.726981
predicted=0.728039, expected=0.731031
predicted=0.732095, expected=0.735043
predicted=0.736114, expected=0.739024
predicted=0.740101, expected=0.742980
Test RMSE: 0.017
# plot forecasts against actual outcomes
pyplot.plot(test)
pyplot.plot(predictions, color='red')
pyplot.show()

[Figure: walk-forward forecasts (red) against the actual trend values]

LSTM

https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
# fix random seed for reproducibility
tf.random.set_seed(7)
df = d1234.iloc[:,3]
df = np.array(df).reshape(-1,1)

# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
df_norm = scaler.fit_transform(df)

# split into train and test sets
train_size = int(len(df_norm) * 0.7)
test_size = len(df_norm) - train_size
train, test = df_norm[0:train_size,:], df_norm[train_size:len(df_norm),:]
print(len(train), len(test))
614952 263551
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)
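With look_back=1 the function simply pairs each value with the one that follows it; a tiny worked example on a toy array:

# illustrative toy example of create_dataset with look_back=1
toy = np.array([[0.1], [0.2], [0.3], [0.4]])
X_toy, y_toy = create_dataset(toy, look_back=1)
print(X_toy)   # [[0.1] [0.2]]
print(y_toy)   # [0.2 0.3]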
# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# reshape input to be [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))

# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
Epoch 1/100
1/1 - 1s - loss: 0.0026 - 611ms/epoch - 611ms/step
Epoch 2/100
1/1 - 0s - loss: 0.0024 - 2ms/epoch - 2ms/step
Epoch 3/100
1/1 - 0s - loss: 0.0023 - 2ms/epoch - 2ms/step
Epoch 4/100
1/1 - 0s - loss: 0.0021 - 2ms/epoch - 2ms/step
Epoch 5/100
1/1 - 0s - loss: 0.0020 - 2ms/epoch - 2ms/step
Epoch 6/100
1/1 - 0s - loss: 0.0019 - 2ms/epoch - 2ms/step
Epoch 7/100
1/1 - 0s - loss: 0.0017 - 1ms/epoch - 1ms/step
Epoch 8/100
1/1 - 0s - loss: 0.0016 - 1ms/epoch - 1ms/step
Epoch 9/100
1/1 - 0s - loss: 0.0015 - 1ms/epoch - 1ms/step
Epoch 10/100
1/1 - 0s - loss: 0.0014 - 1ms/epoch - 1ms/step
Epoch 11/100
1/1 - 0s - loss: 0.0013 - 1ms/epoch - 1ms/step
Epoch 12/100
1/1 - 0s - loss: 0.0012 - 1ms/epoch - 1ms/step
Epoch 13/100
1/1 - 0s - loss: 0.0011 - 2ms/epoch - 2ms/step
Epoch 14/100
1/1 - 0s - loss: 9.5703e-04 - 2ms/epoch - 2ms/step
Epoch 15/100
1/1 - 0s - loss: 8.6722e-04 - 2ms/epoch - 2ms/step
Epoch 16/100
1/1 - 0s - loss: 7.8267e-04 - 2ms/epoch - 2ms/step
Epoch 17/100
1/1 - 0s - loss: 7.0334e-04 - 2ms/epoch - 2ms/step
Epoch 18/100
1/1 - 0s - loss: 6.2916e-04 - 2ms/epoch - 2ms/step
Epoch 19/100
1/1 - 0s - loss: 5.6005e-04 - 1ms/epoch - 1ms/step
Epoch 20/100
1/1 - 0s - loss: 4.9593e-04 - 1ms/epoch - 1ms/step
Epoch 21/100
1/1 - 0s - loss: 4.3668e-04 - 1ms/epoch - 1ms/step
Epoch 22/100
1/1 - 0s - loss: 3.8218e-04 - 2ms/epoch - 2ms/step
Epoch 23/100
1/1 - 0s - loss: 3.3229e-04 - 2ms/epoch - 2ms/step
Epoch 24/100
1/1 - 0s - loss: 2.8686e-04 - 2ms/epoch - 2ms/step
Epoch 25/100
1/1 - 0s - loss: 2.4572e-04 - 2ms/epoch - 2ms/step
Epoch 26/100
1/1 - 0s - loss: 2.0869e-04 - 2ms/epoch - 2ms/step
Epoch 27/100
1/1 - 0s - loss: 1.7558e-04 - 2ms/epoch - 2ms/step
Epoch 28/100
1/1 - 0s - loss: 1.4619e-04 - 2ms/epoch - 2ms/step
Epoch 29/100
1/1 - 0s - loss: 1.2030e-04 - 2ms/epoch - 2ms/step
Epoch 30/100
1/1 - 0s - loss: 9.7691e-05 - 2ms/epoch - 2ms/step
Epoch 31/100
1/1 - 0s - loss: 7.8142e-05 - 1ms/epoch - 1ms/step
Epoch 32/100
1/1 - 0s - loss: 6.1421e-05 - 1ms/epoch - 1ms/step
Epoch 33/100
1/1 - 0s - loss: 4.7296e-05 - 2ms/epoch - 2ms/step
Epoch 34/100
1/1 - 0s - loss: 3.5534e-05 - 2ms/epoch - 2ms/step
Epoch 35/100
1/1 - 0s - loss: 2.5905e-05 - 2ms/epoch - 2ms/step
Epoch 36/100
1/1 - 0s - loss: 1.8181e-05 - 2ms/epoch - 2ms/step
Epoch 37/100
1/1 - 0s - loss: 1.2141e-05 - 2ms/epoch - 2ms/step
Epoch 38/100
1/1 - 0s - loss: 7.5720e-06 - 2ms/epoch - 2ms/step
Epoch 39/100
1/1 - 0s - loss: 4.2689e-06 - 2ms/epoch - 2ms/step
Epoch 40/100
1/1 - 0s - loss: 2.0387e-06 - 2ms/epoch - 2ms/step
Epoch 41/100
1/1 - 0s - loss: 7.0006e-07 - 2ms/epoch - 2ms/step
Epoch 42/100
1/1 - 0s - loss: 8.5575e-08 - 1ms/epoch - 1ms/step
Epoch 43/100
1/1 - 0s - loss: 4.2072e-08 - 1ms/epoch - 1ms/step
Epoch 44/100
1/1 - 0s - loss: 4.3148e-07 - 2ms/epoch - 2ms/step
Epoch 45/100
1/1 - 0s - loss: 1.1312e-06 - 1ms/epoch - 1ms/step
Epoch 46/100
1/1 - 0s - loss: 2.0341e-06 - 1ms/epoch - 1ms/step
Epoch 47/100
1/1 - 0s - loss: 3.0485e-06 - 1ms/epoch - 1ms/step
Epoch 48/100
1/1 - 0s - loss: 4.0975e-06 - 1ms/epoch - 1ms/step
Epoch 49/100
1/1 - 0s - loss: 5.1187e-06 - 2ms/epoch - 2ms/step
Epoch 50/100
1/1 - 0s - loss: 6.0630e-06 - 2ms/epoch - 2ms/step
Epoch 51/100
1/1 - 0s - loss: 6.8934e-06 - 2ms/epoch - 2ms/step
Epoch 52/100
1/1 - 0s - loss: 7.5847e-06 - 4ms/epoch - 4ms/step
Epoch 53/100
1/1 - 0s - loss: 8.1211e-06 - 3ms/epoch - 3ms/step
Epoch 54/100
1/1 - 0s - loss: 8.4957e-06 - 3ms/epoch - 3ms/step
Epoch 55/100
1/1 - 0s - loss: 8.7090e-06 - 3ms/epoch - 3ms/step
Epoch 56/100
1/1 - 0s - loss: 8.7674e-06 - 3ms/epoch - 3ms/step
Epoch 57/100
1/1 - 0s - loss: 8.6819e-06 - 2ms/epoch - 2ms/step
Epoch 58/100
1/1 - 0s - loss: 8.4674e-06 - 2ms/epoch - 2ms/step
Epoch 59/100
1/1 - 0s - loss: 8.1409e-06 - 2ms/epoch - 2ms/step
Epoch 60/100
1/1 - 0s - loss: 7.7213e-06 - 2ms/epoch - 2ms/step
Epoch 61/100
1/1 - 0s - loss: 7.2276e-06 - 2ms/epoch - 2ms/step
Epoch 62/100
1/1 - 0s - loss: 6.6792e-06 - 2ms/epoch - 2ms/step
Epoch 63/100
1/1 - 0s - loss: 6.0943e-06 - 2ms/epoch - 2ms/step
Epoch 64/100
1/1 - 0s - loss: 5.4902e-06 - 2ms/epoch - 2ms/step
Epoch 65/100
1/1 - 0s - loss: 4.8822e-06 - 2ms/epoch - 2ms/step
Epoch 66/100
1/1 - 0s - loss: 4.2841e-06 - 2ms/epoch - 2ms/step
Epoch 67/100
1/1 - 0s - loss: 3.7074e-06 - 2ms/epoch - 2ms/step
Epoch 68/100
1/1 - 0s - loss: 3.1617e-06 - 1ms/epoch - 1ms/step
Epoch 69/100
1/1 - 0s - loss: 2.6544e-06 - 1ms/epoch - 1ms/step
Epoch 70/100
1/1 - 0s - loss: 2.1909e-06 - 1ms/epoch - 1ms/step
Epoch 71/100
1/1 - 0s - loss: 1.7745e-06 - 1ms/epoch - 1ms/step
Epoch 72/100
1/1 - 0s - loss: 1.4072e-06 - 2ms/epoch - 2ms/step
Epoch 73/100
1/1 - 0s - loss: 1.0890e-06 - 1ms/epoch - 1ms/step
Epoch 74/100
1/1 - 0s - loss: 8.1904e-07 - 1ms/epoch - 1ms/step
Epoch 75/100
1/1 - 0s - loss: 5.9503e-07 - 1ms/epoch - 1ms/step
Epoch 76/100
1/1 - 0s - loss: 4.1391e-07 - 1ms/epoch - 1ms/step
Epoch 77/100
1/1 - 0s - loss: 2.7203e-07 - 1ms/epoch - 1ms/step
Epoch 78/100
1/1 - 0s - loss: 1.6523e-07 - 2ms/epoch - 2ms/step
Epoch 79/100
1/1 - 0s - loss: 8.9118e-08 - 2ms/epoch - 2ms/step
Epoch 80/100
1/1 - 0s - loss: 3.9202e-08 - 1ms/epoch - 1ms/step
Epoch 81/100
1/1 - 0s - loss: 1.1048e-08 - 1ms/epoch - 1ms/step
Epoch 82/100
1/1 - 0s - loss: 4.0064e-10 - 1ms/epoch - 1ms/step
Epoch 83/100
1/1 - 0s - loss: 3.2746e-09 - 1ms/epoch - 1ms/step
Epoch 84/100
1/1 - 0s - loss: 1.6036e-08 - 1ms/epoch - 1ms/step
Epoch 85/100
1/1 - 0s - loss: 3.5442e-08 - 1ms/epoch - 1ms/step
Epoch 86/100
1/1 - 0s - loss: 5.8693e-08 - 1ms/epoch - 1ms/step
Epoch 87/100
1/1 - 0s - loss: 8.3416e-08 - 1ms/epoch - 1ms/step
Epoch 88/100
1/1 - 0s - loss: 1.0769e-07 - 1ms/epoch - 1ms/step
Epoch 89/100
1/1 - 0s - loss: 1.3003e-07 - 1ms/epoch - 1ms/step
Epoch 90/100
1/1 - 0s - loss: 1.4932e-07 - 1ms/epoch - 1ms/step
Epoch 91/100
1/1 - 0s - loss: 1.6482e-07 - 1ms/epoch - 1ms/step
Epoch 92/100
1/1 - 0s - loss: 1.7613e-07 - 1ms/epoch - 1ms/step
Epoch 93/100
1/1 - 0s - loss: 1.8309e-07 - 1ms/epoch - 1ms/step
Epoch 94/100
1/1 - 0s - loss: 1.8579e-07 - 1ms/epoch - 1ms/step
Epoch 95/100
1/1 - 0s - loss: 1.8451e-07 - 2ms/epoch - 2ms/step
Epoch 96/100
1/1 - 0s - loss: 1.7963e-07 - 1ms/epoch - 1ms/step
Epoch 97/100
1/1 - 0s - loss: 1.7169e-07 - 1ms/epoch - 1ms/step
Epoch 98/100
1/1 - 0s - loss: 1.6123e-07 - 1ms/epoch - 1ms/step
Epoch 99/100
1/1 - 0s - loss: 1.4884e-07 - 1ms/epoch - 1ms/step
Epoch 100/100
1/1 - 0s - loss: 1.3510e-07 - 1ms/epoch - 1ms/step
<keras.callbacks.History at 0x7f1d50766be0>
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = np.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = np.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
1/1 [==============================] - 0s 192ms/step
1/1 [==============================] - 0s 10ms/step
Train Score: 0.06 RMSE
Test Score: 0.19 RMSE
# shift train predictions for plotting
trainPredictPlot = np.empty_like(df_norm)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict

# shift test predictions for plotting
testPredictPlot = np.empty_like(df_norm)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(df_norm)-1, :] = testPredict

# plot baseline and predictions
plt.plot(scaler.inverse_transform(df_norm))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()

[Figure: LSTM predictions for the train and test segments overlaid on the original series]
