Battery Remaining Useful Life (RUL) Estimation Based on a Variational Autoencoder (VAE)

Importing modules

import os
import math
import itertools
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
from keras import layers
from sklearn.svm import SVR
from tensorflow import keras
from keras import backend as K
import matplotlib.pyplot as plt
from keras.regularizers import l2
from keras.regularizers import l1_l2
from keras.models import load_model
from keras.callbacks import ReduceLROnPlateau
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split, cross_val_score, cross_validate
from keras.callbacks import Callback, EarlyStopping, ModelCheckpoint, TensorBoard, LambdaCallback
from keras.layers import Input, Dense, Lambda, LSTM, RepeatVector, Bidirectional, Masking, Dropout, BatchNormalization

Reading the data

# Data reading
data_dir = './original_data/'
# Specify the file path and column names
file_path = os.path.join(data_dir, 'battery_RUL.txt')
column_names = ['unit_nr', 'time_cycles', 's_discharge_t', 's_decrement_3.6-3.4V',
                's_max_voltage_discharge', 's_min_voltage_charge', 'Time_at_4.15V_s',
                's_time_constant_current', 's_charging_time', 'RUL']

# Read the text file into a DataFrame
df = pd.read_csv(file_path, sep=r'\s+', header=None, skiprows=0, names=column_names)

# Calculate the unique number of batteries
unique_batteries = df['unit_nr'].nunique()
print("\n Unique number of batteries:", unique_batteries)

# Check for missing values
if df.isnull().any().any():
    print("\n There are missing values in the DataFrame.")

# Print the shape of the DataFrame
print("\n DataFrame shape:", df.shape)

# Print DataFrame info
print(df.info())

# Print the first few rows of the DataFrame
print("\n First few rows of the DataFrame:")
df.head()

Unique number of batteries: 14

DataFrame shape: (15064, 10)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 15064 entries, 0 to 15063
Data columns (total 10 columns):
 #   Column                   Non-Null Count  Dtype
---  ------                   --------------  -----
 0   unit_nr                  15064 non-null  float64
 1   time_cycles              15064 non-null  float64
 2   s_discharge_t            15064 non-null  float64
 3   s_decrement_3.6-3.4V     15064 non-null  float64
 4   s_max_voltage_discharge  15064 non-null  float64
 5   s_min_voltage_charge     15064 non-null  float64
 6   Time_at_4.15V_s          15064 non-null  float64
 7   s_time_constant_current  15064 non-null  float64
 8   s_charging_time          15064 non-null  float64
 9   RUL                      15064 non-null  int64
dtypes: float64(9), int64(1)
memory usage: 1.1 MB
None

First few rows of the DataFrame:

[Figure: preview of the first rows of the DataFrame]

Data preprocessing

# Filter rows where 'time_cycles' is equal to 1
df_at_cycle_1 = df[df['time_cycles'] == 1]

# Create a bar chart with all battery units on the X-axis
plt.figure(figsize=(12, 6))
plt.bar(df_at_cycle_1['unit_nr'], df_at_cycle_1['RUL'])
plt.xlabel('Battery Number')
plt.ylabel('RUL at Cycle 1')
plt.title('RUL Values at the Beginning of Cycle 1 for Each Battery Unit')
plt.xticks(df_at_cycle_1['unit_nr'])  # Set X-axis ticks explicitly
plt.tight_layout()
plt.show()

[Figure: RUL at cycle 1 for each battery unit]

plt.figure(figsize = (8,8))
sns.heatmap(df.corr(),annot=True, cbar=False, cmap='Blues', fmt='.1f')

[Figure: correlation heatmap of all features]

Correlation between RUL and:

Max. Voltage Dischar. (V) is 0.8

Min. Voltage Charg. (V) is -0.8

Time at 4.15V (s) is 0.2

Cycle index is -1.0

Discharge Time (s), Decrement 3.6-3.4V (s), Time constant current (s), and Charging time (s) are approximately 0.

The correlations between Time at 4.15V and these four features are 0.8, 0.5, 0.6, and 0.7, respectively.
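The values quoted above can be read off programmatically; a minimal sketch, assuming the df loaded in the data-reading step:

# List each feature's correlation with RUL, ordered by absolute strength
rul_corr = df.corr()['RUL'].drop('RUL')
print(rul_corr.reindex(rul_corr.abs().sort_values(ascending=False).index).round(1))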

df1=df.drop(['s_discharge_t', 's_decrement_3.6-3.4V', 's_time_constant_current','s_charging_time'], axis=1)
plt.figure(figsize = (4,4))
sns.heatmap(df1.corr(),annot=True, cbar=False, cmap='Blues', fmt='.1f')

[Figure: correlation heatmap after dropping the four uncorrelated features]

df1.head()

[Figure: preview of the first rows of df1]

def exponential_smoothing(df, sensors, n_samples, alpha=0.4):
    df = df.copy()
    # first, take the exponential weighted mean per battery unit
    df[sensors] = df.groupby('unit_nr', group_keys=True)[sensors].apply(
        lambda x: x.ewm(alpha=alpha).mean()).reset_index(level=0, drop=True)

    # second, drop first n_samples of each unit_nr to reduce filter delay
    def create_mask(data, samples):
        result = np.ones_like(data)
        result[0:samples] = 0
        return result

    mask = df.groupby('unit_nr')['unit_nr'].transform(create_mask, samples=n_samples).astype(bool)
    df = df[mask]
    return df
#-----------------------------------------------------------------------------------------------------------------------
def data_standardization(df, sensors):
    df = df.copy()
    # Apply StandardScaler to the sensor data
    scaler = StandardScaler()
    df[sensors] = scaler.fit_transform(df[sensors])
    return df
# MMS_X = MinMaxScaler()
# mms_y = MinMaxScaler()
#-----------------------------------------------------------------------------------------------------------------------
def gen_train_data(df, sequence_length, columns):
    data = df[columns].values
    num_elements = data.shape[0]
    # -1 and +1 because of Python indexing
    for start, stop in zip(range(0, num_elements - (sequence_length - 1)),
                           range(sequence_length, num_elements + 1)):
        yield data[start:stop, :]
#-----------------------------------------------------------------------------------------------------------------------
def gen_data_wrapper(df, sequence_length, columns, unit_nrs=np.array([])):
    if unit_nrs.size <= 0:
        unit_nrs = df['unit_nr'].unique()
    data_gen = (list(gen_train_data(df[df['unit_nr'] == unit_nr], sequence_length, columns))
                for unit_nr in unit_nrs)
    data_array = np.concatenate(list(data_gen)).astype(np.float32)
    return data_array
#-----------------------------------------------------------------------------------------------------------------------
def gen_labels(df, sequence_length, label):
    data_matrix = df[label].values
    num_elements = data_matrix.shape[0]
    # -1 because we predict the RUL of the last row in the sequence, not the next row
    return data_matrix[sequence_length - 1:num_elements, :]
#-----------------------------------------------------------------------------------------------------------------------
def gen_label_wrapper(df, sequence_length, label, unit_nrs=np.array([])):
    if unit_nrs.size <= 0:
        unit_nrs = df['unit_nr'].unique()
    label_gen = [gen_labels(df[df['unit_nr'] == unit_nr], sequence_length, label)
                 for unit_nr in unit_nrs]
    label_array = np.concatenate(label_gen).astype(np.float32)
    return label_array
#---------Original code------------------------------------------------------------------------------------------------------
def gen_test_data(df, sequence_length, columns, mask_value):
    if df.shape[0] < sequence_length:
        data_matrix = np.full(shape=(sequence_length, len(columns)), fill_value=mask_value)  # pad
        idx = data_matrix.shape[0] - df.shape[0]
        data_matrix[idx:, :] = df[columns].values  # fill with available data
    else:
        data_matrix = df[columns].values
    # specifically yield the last possible sequence
    stop = data_matrix.shape[0]
    start = stop - sequence_length
    for i in list(range(1)):
        yield data_matrix[start:stop, :]
#---------------------------------------------------------------------------------------------------------------------------
def change_test_index(df, initial_unit_number, start_index, end_index, min_rows, max_rows):
    df.reset_index(drop=True, inplace=True)
    y_test = []  # Initialize an empty list to store y_test values
    y_test.append(df.loc[0, 'RUL'])
    while end_index < len(df):
        # Calculate the number of rows to be assigned to the current unit_nr
        num_rows = min_rows + (end_index % (max_rows - min_rows + 1))
        end_index = end_index + num_rows
        # Update the unit_nr for the current block of rows
        df.loc[start_index:end_index, 'unit_nr'] = initial_unit_number
        # Update the "time_cycles" column starting from the first row of the next block
        time_cycle_to_change = end_index + 1
        df.loc[time_cycle_to_change, 'time_cycles'] = 1
        # Append the RUL value at the start of the next block to y_test
        y_test.append(df.loc[time_cycle_to_change, 'RUL'])
        # Update the starting and ending index for the next block of rows
        start_index = end_index + 1
        initial_unit_number += 1
    # Drop rows with NaN values at the end of DataFrame 'df'
    df.dropna(axis=0, how='any', inplace=True)
    # Drop any NaN values at the end of list 'y_test'
    while len(y_test) > 0 and pd.isnull(y_test[-1]):
        y_test.pop()
    return df, y_test
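As a quick sanity check of the windowing logic, here is a minimal sketch on hypothetical toy data (the toy frame below is illustrative only, not part of the original pipeline): with 5 rows and sequence_length=3, gen_train_data yields 3 overlapping windows of shape (3, 1).

toy = pd.DataFrame({'unit_nr': [1] * 5, 'v': [0.1, 0.2, 0.3, 0.4, 0.5]})
windows = list(gen_train_data(toy, sequence_length=3, columns=['v']))
print(len(windows), windows[0].shape)  # -> 3 (3, 1)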

Getting the data

def get_data(df, sensors, sequence_length, alpha):
    # List of battery units
    battery_units = range(1, 15)
    sensor_names = ['s_{}'.format(i + 1) for i in range(0, 10)]

    # Define the number of batteries for training and testing
    train_batteries = 12
    test_batteries = 2

    # Extract the batteries for training and testing
    train_units = battery_units[:train_batteries]
    test_units = battery_units[train_batteries:train_batteries + test_batteries]

    # Create the training and testing datasets
    train = df[df['unit_nr'].isin(train_units)].copy()  # Use copy to avoid the SettingWithCopyWarning
    test = df[df['unit_nr'].isin(test_units)].copy()  # Use copy to avoid the SettingWithCopyWarning

    X_test_pre, y_test = change_test_index(test, 13, 0, 0, 230, 240)
    # y_test = pd.Series(y_test)  # Convert y_test list to a pandas Series

    # remove unused sensors
    drop_sensors = [element for element in sensor_names if element not in sensors]

    # Apply standardization to the training and testing data using data_standardization
    standard_train = data_standardization(train, sensors)
    standard_test = data_standardization(test, sensors)

    # Exponential smoothing of training and testing data
    X_train_pre = exponential_smoothing(standard_train, sensors, 0, alpha)
    X_test_pre = exponential_smoothing(standard_test, sensors, 0, alpha)

    # Train-validation split
    gss = GroupShuffleSplit(n_splits=1, train_size=0.85, random_state=42)
    # Generate the train/val split by iterating over the train and val units
    for train_unit, val_unit in gss.split(X_train_pre['unit_nr'].unique(),
                                          groups=X_train_pre['unit_nr'].unique()):
        train_unit = X_train_pre['unit_nr'].unique()[train_unit]  # gss returns indexes and index starts at 1
        val_unit = X_train_pre['unit_nr'].unique()[val_unit]

        x_train = gen_data_wrapper(X_train_pre, sequence_length, sensors, train_unit)
        y_train = gen_label_wrapper(X_train_pre, sequence_length, ['RUL'], train_unit)
        x_val = gen_data_wrapper(X_train_pre, sequence_length, sensors, val_unit)
        y_val = gen_label_wrapper(X_train_pre, sequence_length, ['RUL'], val_unit)

    # create sequences for test
    test_gen = (list(gen_test_data(X_test_pre[X_test_pre['unit_nr'] == unit_nr], sequence_length, sensors, -99.))
                for unit_nr in X_test_pre['unit_nr'].unique())
    x_test = np.concatenate(list(test_gen)).astype(np.float32)

    return x_train, y_train, x_val, y_val, x_test, y_test

Training callbacks

# --------------------------------------- TRAINING CALLBACKS  ---------------------------------------
class save_latent_space_viz(Callback):
    def __init__(self, model, data, target):
        self.model = model
        self.data = data
        self.target = target

    def on_train_begin(self, logs={}):
        self.best_val_loss = 100000

    def on_epoch_end(self, epoch, logs=None):
        encoder = self.model.layers[0]
        if logs.get('val_loss') < self.best_val_loss:
            self.best_val_loss = logs.get('val_loss')
            viz_latent_space(encoder, self.data, self.target, epoch, True, False)


def get_callbacks(model, data, target):
    model_callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=30),
        ModelCheckpoint(filepath='./checkpoints/checkpoint', monitor='val_loss', mode='min',
                        verbose=1, save_best_only=True, save_weights_only=True),
        TensorBoard(log_dir='./logs'),
        save_latent_space_viz(model, data, target)
    ]
    return model_callbacks


def viz_latent_space(encoder, data, targets=[], epoch='Final', save=False, show=True):
    z, _, _ = encoder.predict(data)
    plt.figure(figsize=(3, 3))  # Smaller figsize value to reduce the plot size
    if len(targets) > 0:
        plt.scatter(z[:, 1], z[:, 0], c=targets)
    else:
        plt.scatter(z[:, 1], z[:, 0])
    plt.xlabel('z - dim 1')
    plt.ylabel('z - dim 2')
    plt.colorbar()
    if show:
        plt.show()
    if save:
        plt.savefig('./images/latent_space_epoch' + str(epoch) + '.png')
    return z

Finding the optimal learning rate

# ----------------------------------------- FIND OPTIMAL LR  ----------------------------------------
class LRFinder:
    """
    Cyclical LR, code tailored from:
    https://towardsdatascience.com/estimating-optimal-learning-rate-for-a-deep-neural-network-ce32f2556ce0
    """

    def __init__(self, model):
        self.model = model
        self.losses = []
        self.lrs = []
        self.best_loss = 1e9

    def on_batch_end(self, batch, logs):
        # Log the learning rate
        lr = K.get_value(self.model.optimizer.lr)
        self.lrs.append(lr)

        # Log the loss
        loss = logs['loss']
        self.losses.append(loss)

        # Check whether the loss got too large or NaN
        if batch > 5 and (math.isnan(loss) or loss > self.best_loss * 4):
            self.model.stop_training = True
            return

        if loss < self.best_loss:
            self.best_loss = loss

        # Increase the learning rate for the next batch
        lr *= self.lr_mult
        K.set_value(self.model.optimizer.lr, lr)

    def find(self, x_train, y_train, start_lr, end_lr, batch_size=64, epochs=1, **kw_fit):
        # If x_train contains data for multiple inputs, use length of the first input.
        # Assumption: the first element in the list is single input; NOT a list of inputs.
        N = x_train[0].shape[0] if isinstance(x_train, list) else x_train.shape[0]

        # Compute number of batches and LR multiplier
        num_batches = epochs * N / batch_size
        self.lr_mult = (float(end_lr) / float(start_lr)) ** (float(1) / float(num_batches))

        # Save the weights so they can be restored after the LR sweep
        initial_weights = self.model.get_weights()
        # Remember the original learning rate
        original_lr = K.get_value(self.model.optimizer.lr)
        # Set the initial learning rate
        K.set_value(self.model.optimizer.lr, start_lr)

        callback = LambdaCallback(on_batch_end=lambda batch, logs: self.on_batch_end(batch, logs))

        self.model.fit(x_train, y_train,
                       batch_size=batch_size, epochs=epochs,
                       callbacks=[callback],
                       **kw_fit)

        # Restore the weights to the state before model fitting
        self.model.set_weights(initial_weights)
        # Restore the original learning rate
        K.set_value(self.model.optimizer.lr, original_lr)

    def find_generator(self, generator, start_lr, end_lr, epochs=1, steps_per_epoch=None, **kw_fit):
        if steps_per_epoch is None:
            try:
                steps_per_epoch = len(generator)
            except (ValueError, NotImplementedError):
                raise ValueError('`steps_per_epoch=None` is only valid for a generator based on the '
                                 '`keras.utils.Sequence` class. Please specify `steps_per_epoch` '
                                 'or use the `keras.utils.Sequence` class.')
        self.lr_mult = (float(end_lr) / float(start_lr)) ** (float(1) / float(epochs * steps_per_epoch))

        # Save the weights so they can be restored after the LR sweep
        initial_weights = self.model.get_weights()
        # Remember the original learning rate
        original_lr = K.get_value(self.model.optimizer.lr)
        # Set the initial learning rate
        K.set_value(self.model.optimizer.lr, start_lr)

        callback = LambdaCallback(on_batch_end=lambda batch, logs: self.on_batch_end(batch, logs))

        self.model.fit_generator(generator=generator,
                                 epochs=epochs,
                                 steps_per_epoch=steps_per_epoch,
                                 callbacks=[callback],
                                 **kw_fit)

        # Restore the weights to the state before model fitting
        self.model.set_weights(initial_weights)
        # Restore the original learning rate
        K.set_value(self.model.optimizer.lr, original_lr)

    def plot_loss(self, n_skip_beginning=10, n_skip_end=5, x_scale='log'):
        """
        Plots the loss.
        Parameters:
            n_skip_beginning - number of batches to skip on the left.
            n_skip_end - number of batches to skip on the right.
        """
        plt.ylabel("loss")
        plt.xlabel("learning rate (log scale)")
        plt.plot(self.lrs[n_skip_beginning:-n_skip_end], self.losses[n_skip_beginning:-n_skip_end])
        plt.xscale(x_scale)
        plt.show()

    def plot_loss_change(self, sma=1, n_skip_beginning=10, n_skip_end=5, y_lim=(-0.01, 0.01)):
        """
        Plots rate of change of the loss function.
        Parameters:
            sma - number of batches for simple moving average to smooth out the curve.
            n_skip_beginning - number of batches to skip on the left.
            n_skip_end - number of batches to skip on the right.
            y_lim - limits for the y axis.
        """
        derivatives = self.get_derivatives(sma)[n_skip_beginning:-n_skip_end]
        lrs = self.lrs[n_skip_beginning:-n_skip_end]
        plt.ylabel("rate of loss change")
        plt.xlabel("learning rate (log scale)")
        plt.plot(lrs, derivatives)
        plt.xscale('log')
        plt.ylim(y_lim)
        plt.show()

    def get_derivatives(self, sma):
        assert sma >= 1
        derivatives = [0] * sma
        for i in range(sma, len(self.lrs)):
            derivatives.append((self.losses[i] - self.losses[i - sma]) / sma)
        return derivatives

    def get_best_lr(self, sma, n_skip_beginning=10, n_skip_end=5):
        derivatives = self.get_derivatives(sma)
        best_der_idx = np.argmin(derivatives[n_skip_beginning:-n_skip_end])
        return self.lrs[n_skip_beginning:-n_skip_end][best_der_idx]

Results

# --------------------------------------------- RESULTS  --------------------------------------------
def get_model(path):
    saved_VRAE_model = load_model(path, compile=False)
    # return encoder, regressor
    return saved_VRAE_model.layers[1], saved_VRAE_model.layers[2]


def evaluate(y_true, y_hat, label='test'):
    mse = mean_squared_error(y_true, y_hat)
    rmse = np.sqrt(mse)
    variance = r2_score(y_true, y_hat)
    print('{} set RMSE:{}, R2:{}'.format(label, rmse, variance))
    return rmse, variance


def score(y_true, y_hat):
    res = 0
    for true, hat in zip(y_true, y_hat):
        subs = hat - true
        if subs < 0:
            res = res + np.exp(-subs / 10)[0] - 1
        else:
            res = res + np.exp(subs / 13)[0] - 1
    print("score: ", res)


def results(path, x_train, y_train, x_test, y_test):
    # Get model
    encoder, regressor = get_model(path)
    # Latent space
    train_mu = viz_latent_space(encoder, x_train, y_train)
    test_mu = viz_latent_space(encoder, x_test, y_test)
    # Evaluate
    y_hat_train = regressor.predict(train_mu)
    y_hat_test = regressor.predict(test_mu)
    evaluate(y_train, y_hat_train, 'train')
    evaluate(y_test, y_hat_test, 'test')
    score(y_test, y_hat_test)
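Written out, the score helper above sums an asymmetric exponential penalty over the prediction errors d_i = ŷ_i − y_i, with the constants 10 and 13 taken directly from the code:

s = \sum_i \begin{cases} e^{-d_i/10} - 1, & d_i < 0 \\ e^{d_i/13} - 1, & d_i \ge 0 \end{cases}

so with these constants, errors on the early side (d_i < 0) grow slightly more steeply than late-side errors of the same magnitude.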

Model architecture

class Sampling(keras.layers.Layer):
    """Uses (z_mean, sigma) to sample z, the vector encoding a degradation trajectory."""

    def call(self, inputs):
        mu, sigma = inputs
        batch = tf.shape(mu)[0]
        dim = tf.shape(mu)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return mu + tf.exp(0.5 * sigma) * epsilon

class RVE(keras.Model):
    def __init__(self, encoder, regressor, decoder=None, **kwargs):
        super(RVE, self).__init__(**kwargs)
        self.encoder = encoder
        self.regressor = regressor
        self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
        self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")
        self.reg_loss_tracker = keras.metrics.Mean(name="reg_loss")
        self.decoder = decoder
        if self.decoder is not None:
            self.reconstruction_loss_tracker = keras.metrics.Mean(name="reconstruction_loss")

    @property
    def metrics(self):
        if self.decoder is not None:
            return [
                self.total_loss_tracker,
                self.kl_loss_tracker,
                self.reg_loss_tracker,
                self.reconstruction_loss_tracker,
            ]
        else:
            return [
                self.total_loss_tracker,
                self.kl_loss_tracker,
                self.reg_loss_tracker,
            ]

    def train_step(self, data):
        x, target_x = data
        with tf.GradientTape() as tape:
            # KL loss
            mu, sigma, z = self.encoder(x)
            kl_loss = -0.5 * (1 + sigma - tf.square(mu) - tf.exp(sigma))
            kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
            # Regressor
            reg_prediction = self.regressor(z)
            reg_loss = tf.reduce_mean(keras.losses.mse(target_x, reg_prediction))
            # Reconstruction
            if self.decoder is not None:
                reconstruction = self.decoder(z)
                reconstruction_loss = tf.reduce_mean(keras.losses.mse(x, reconstruction))
                total_loss = kl_loss + reg_loss + reconstruction_loss
                self.reconstruction_loss_tracker.update_state(reconstruction_loss)
            else:
                total_loss = kl_loss + reg_loss

        grads = tape.gradient(total_loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))

        self.total_loss_tracker.update_state(total_loss)
        self.kl_loss_tracker.update_state(kl_loss)
        self.reg_loss_tracker.update_state(reg_loss)
        return {
            "loss": self.total_loss_tracker.result(),
            "kl_loss": self.kl_loss_tracker.result(),
            "reg_loss": self.reg_loss_tracker.result(),
        }

    def test_step(self, data):
        x, target_x = data
        # KL loss
        mu, sigma, z = self.encoder(x)
        kl_loss = -0.5 * (1 + sigma - tf.square(mu) - tf.exp(sigma))
        kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
        # Regressor
        reg_prediction = self.regressor(z)
        reg_loss = tf.reduce_mean(keras.losses.mse(target_x, reg_prediction))
        # Reconstruction
        if self.decoder is not None:
            reconstruction = self.decoder(z)
            reconstruction_loss = tf.reduce_mean(keras.losses.mse(x, reconstruction))
            total_loss = kl_loss + reg_loss + reconstruction_loss
        else:
            total_loss = kl_loss + reg_loss

        return {
            "loss": total_loss,
            "kl_loss": kl_loss,
            "reg_loss": reg_loss,
        }
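For reference, the Sampling layer and the kl_loss term in RVE.train_step follow the standard diagonal-Gaussian VAE formulation, with the Dense output named sigma playing the role of the log-variance (hence the tf.exp(0.5 * sigma) above):

z = \mu + \epsilon \odot e^{\frac{1}{2}\log\sigma^2}, \quad \epsilon \sim \mathcal{N}(0, I)

D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu, \sigma^2 I)\,\|\,\mathcal{N}(0, I)\right) = -\frac{1}{2}\sum_{j}\left(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\right)

which is exactly the expression summed over axis=1 and averaged over the batch in train_step and test_step.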

# Set hyperparameters
sequence_length = 200
alpha = 0.2

# Load and preprocess data
sensors = ['s_max_voltage_discharge', 's_min_voltage_charge', 'Time_at_4.15V_s']  # Define the sensors
# Call get_data to get the necessary data
x_train, y_train, x_val, y_val, x_test, y_test = get_data(df1, sensors, sequence_length, alpha)

# from scipy.signal import savgol_filter
# # Apply Savitzky-Golay filter
# x_val_smoothed = savgol_filter(x_val, window_length=4, polyorder=2, axis=0)

# Set up the network parameters:
timesteps = x_train.shape[1]
input_dim = x_train.shape[2]
intermediate_dim = 32
batch_size = 256
latent_dim = 2
masking_value = -99  # used to mask padded values in sequences shorter than sequence_length
kernel_regularizer = l1_l2(l1=0.001, l2=0.001)
dropout_rate = 0.1

# --------------------------------- Encoder --------------------------------------
inputs = Input(shape=(timesteps, input_dim,), name='encoder_input')
mask = Masking(mask_value=masking_value)(inputs)
h = Bidirectional(LSTM(intermediate_dim))(mask)  # LSTM encoding

mu = Dense(latent_dim, kernel_regularizer=kernel_regularizer)(h)  # VAE z layer (mean)
mu = Dropout(dropout_rate)(mu)
sigma = Dense(latent_dim, kernel_regularizer=kernel_regularizer)(h)  # VAE z layer (log-variance)
sigma = Dropout(dropout_rate)(sigma)
z = Sampling()([mu, sigma])

# Instantiate the encoder model:
encoder = keras.Model(inputs, [mu, sigma, z], name='encoder')

# ------------------------------- Regressor --------------------------------------
reg_latent_inputs = Input(shape=(latent_dim,), name='z_sampling_reg')
reg_intermediate = Dense(16, activation='tanh', kernel_regularizer=kernel_regularizer)(reg_latent_inputs)
reg_intermediate = BatchNormalization()(reg_intermediate)
reg_intermediate = Dropout(dropout_rate)(reg_intermediate)
reg_outputs = Dense(1, name='reg_output', kernel_regularizer=kernel_regularizer)(reg_intermediate)
reg_outputs = Dropout(dropout_rate)(reg_outputs)

# Instantiate the regressor model:
regressor = keras.Model(reg_latent_inputs, reg_outputs, name='regressor')

print("Shape of x_train:", x_train.shape)
print("Shape of y_train:", y_train.shape)
print("Shape of x_val:", x_val.shape)
print("Shape of y_val:", y_val.shape)
print("Shape of x_test:", x_test.shape)
print("Shape of y_test:", len(y_test))

Shape of x_train: (8795, 200, 3)
Shape of y_train: (8795, 1)
Shape of x_val: (1758, 200, 3)
Shape of y_val: (1758, 1)
Shape of x_test: (10, 200, 3)
Shape of y_test: 10

rve = RVE(encoder, regressor)
lr_finder = LRFinder(rve)
rve.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0000001))
# with the learning rate growing exponentially from 0.0000001 to 1
lr_finder.find(x_train, y_train, start_lr=0.0000001, end_lr=1, batch_size=batch_size, epochs=10)
# Plot the loss
lr_finder.plot_loss()
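Besides reading the plot visually, the learning rate at the steepest loss decrease can also be pulled out programmatically; a small sketch using the get_best_lr method defined above (sma=5 is an arbitrary smoothing choice, not from the original):

best_lr = lr_finder.get_best_lr(sma=5)
print("Suggested learning rate:", best_lr)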

[Figures: LR finder loss vs. learning rate curves]

Training

# Instantiate the RVE model
rve = RVE(encoder, regressor)
# Compile the RVE model with the Adam optimizer
rve.compile(optimizer=keras.optimizers.Adam(0.01))
# Define the early stopping callback
early_stopping = EarlyStopping(monitor='loss', min_delta=3, patience=5, verbose=1, mode='min', restore_best_weights=True)
# Call get_data to get the necessary data
x_train, y_train, x_val, y_val, x_test, y_test = get_data(df1, sensors, sequence_length, alpha)
# Fit the RVE model with the callbacks
rve.fit(x_train, y_train, epochs=500, batch_size=batch_size, validation_data=(x_val, y_val), callbacks=[early_stopping])

RUL estimation

train_mu = viz_latent_space(rve.encoder, np.concatenate((x_train, x_val)), np.concatenate((y_train, y_val)))
test_mu = viz_latent_space(rve.encoder, x_test, y_test)

# Evaluate
y_hat_train = rve.regressor.predict(train_mu)
y_hat_test = rve.regressor.predict(test_mu)

evaluate(np.concatenate((y_train, y_val)), y_hat_train, 'train')
evaluate(y_test, y_hat_test, 'test')

[Figure: latent space of the training set, colored by RUL]

[Figure: latent space of the test set, colored by RUL]

330/330 [==============================] - 0s 1ms/step
1/1 [==============================] - 0s 20ms/step
train set RMSE:26.346721649169922, R2:0.9899616368349312
test set RMSE:246.51723899515346, R2:0.5192274671464132

(246.51723899515346, 0.5192274671464132)
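For completeness, the asymmetric score helper from the Results section can be applied to the same test predictions; a minimal sketch (the reshape keeps the [0] indexing inside score() valid, since y_test here is a plain list):

score(np.array(y_test).reshape(-1, 1), y_hat_test)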

About the author: Ph.D. in Engineering; reviewer for journals including Mechanical Systems and Signal Processing, 中国电机工程学报 (Proceedings of the CSEE), and 控制与决策 (Control and Decision). Areas of expertise: modern signal processing, machine learning, deep learning, digital twins, time-series analysis, equipment defect detection, anomaly detection, and intelligent fault diagnosis and prognostics and health management (PHM).
