- Generating sample dataset
- Building the model
- Training the model and checking the accuracy
- Predicting test data
- Source code listing
We start by loading the keras package.
library(keras)
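If the keras package has not been installed yet, it can typically be set up first as sketched below (a minimal sketch; install_keras() provisions the TensorFlow backend and assumes a working Python environment is available).
install.packages("keras")
install_keras()   # installs the TensorFlow backend via reticulate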
Generating sample dataset
First, we'll create a sample regression dataset for this tutorial.
set.seed(123)
N = 450
n = seq(1:N)
a = n/10+4*sin(n/10)+sample(-1:6,N,replace=T)+rnorm(N)
b = n/8+4*sin(n/10)+sample(-3:3,N,replace=T)+rnorm(N)
c = n/6+4*sin(n/10)+sample(-5:1,N,replace=T)+rnorm(N)
y = (a+b+c)/3+rnorm(N)
plot(n, c, col="orange", pch=20, cex=.9)
points(n, a, col="blue", pch=20, cex=.9)
points(n, b, col="green", pch=20, cex=.9)
points(n, y, col="red", type = "l",lwd=2)
The red line is the y output, and the remaining dots are the x input series.
We need to convert the x input data into a matrix.
x = as.matrix(data.frame(a,b,c))
y = as.matrix(y)
head(x)
            a          b          c
[1,] 1.559084 -3.4897805 -0.6143715
[2,] 5.290081 -2.7466366  1.3203803
[3,] 2.764863 -1.1433255 -3.1107472
[4,] 8.842324  0.6967253  1.1809927
[5,] 7.402110  0.6273192 -1.7591250
[6,] 3.813864  2.3840024  3.4351513
Building the model
Next, we'll create a keras sequential model.
model = keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = 3) %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 1, activation = "linear")
model %>% compile(
  loss = "mse",
  optimizer = "adam",
  metrics = list("mean_absolute_error")
)
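Passing "adam" as a string uses the optimizer's default settings. If you need explicit control over the learning rate, compile() can also take an optimizer object. A sketch (the 0.001 value is simply the default; older keras versions use lr instead of learning_rate):
model %>% compile(
  loss = "mse",
  optimizer = optimizer_adam(learning_rate = 0.001),   # explicit learning rate (assumed default value)
  metrics = list("mean_absolute_error")
)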
model %>% summary()
_________________________________________________________________________________
Layer (type)                        Output Shape                      Param #
=================================================================================
dense_166 (Dense)                   (None, 64)                        256
_________________________________________________________________________________
dense_167 (Dense)                   (None, 32)                        2080
_________________________________________________________________________________
dense_168 (Dense)                   (None, 1)                         33
=================================================================================
Total params: 2,369
Trainable params: 2,369
Non-trainable params: 0
_________________________________________________________________________________
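The parameter counts in the summary follow directly from the layer sizes: a dense layer with n inputs and m units has n*m weights plus m biases. A quick check in R:
# dense layer parameters = inputs * units + units (bias terms)
3*64 + 64    # 256  (first hidden layer)
64*32 + 32   # 2080 (second hidden layer)
32*1 + 1     # 33   (output layer)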
Training the model and checking the accuracy
Next, we'll fit the model with the x and y data and check the accuracy.
model %>% fit(x, y, epochs = 100,verbose = 0)
scores = model %>% evaluate(x, y, verbose = 0)
print(scores)
$loss
[1] 0.9134331
$mean_absolute_error
[1] 0.7495633
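If you want to monitor how the loss evolves during training, you can optionally keep the history object returned by fit() and hold out part of the data for validation. A sketch (the 20% validation split is an arbitrary choice, not part of the tutorial's main flow, and it continues training the already-fitted model):
history = model %>% fit(x, y, epochs = 100, validation_split = 0.2, verbose = 0)
plot(history)   # training vs. validation loss and MAE curves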
Next, we'll predict the x data and compare the predictions with the original y values in a plot.
y_pred = model %>% predict(x)
x_axes = seq(1:length(y_pred))
plot(x_axes, y, type="l", col="red")
lines(x_axes, y_pred, col="blue")
legend("topleft", legend=c("y-original", "y-predicted"),
col=c("red", "blue"), lty=1,cex=0.8)
Predicting test data
Next, we'll split the dataset into train and test parts, train the model again, and predict the test data.
train_x = x[1:400,]
test_x = x[401:N,]
train_y = y[1:400]
test_y = y[401:N]
model %>% fit(train_x,train_y,epochs = 100,verbose = 0)
y_pred = model %>% predict(test_x)
Finally, we'll plot the original test y values against the predicted ones.
x_axes = seq(1:length(y_pred))
plot(x_axes, test_y, col="red", type="l")
lines(x_axes, y_pred, col="blue")
legend("topleft", legend=c("y-original", "y-predicted"),
col=c("red", "blue"), lty=1,cex=0.8)
In this tutorial, we've briefly learned how to fit regression data with a Keras neural network model in R. The full source code is listed below.
Source code listing
library(keras)
set.seed(123)
N = 450
n = seq(1:N)
a = n/10+4*sin(n/10)+sample(-1:6,N,replace=T)+rnorm(N)
b = n/8+4*sin(n/10)+sample(-3:3,N,replace=T)+rnorm(N)
c = n/6+4*sin(n/10)+sample(-5:1,N,replace=T)+rnorm(N)
y = (a+b+c)/3+rnorm(N)
plot(n, c, col="orange", pch=20, cex=.9)
points(n, a, col="blue", pch=20, cex=.9)
points(n, b, col="green", pch=20, cex=.9)
points(n, y, col="red", type = "l",lwd=2)
x = as.matrix(data.frame(a,b,c))
y = as.matrix(y)
head(x)
model = keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = 3) %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 1, activation = "linear")
model %>% compile(
  loss = "mse",
  optimizer = "adam",
  metrics = list("mean_absolute_error")
)
model %>% summary()
model %>% fit(x, y, epochs = 100,verbose = 0)
scores = model %>% evaluate(x, y, verbose = 0)
print(scores)
y_pred = model %>% predict(x)
x_axes = seq(1:length(y_pred))
plot(x_axes, y, type="l", col="red")
lines(x_axes, y_pred, col="blue")
legend("topleft", legend=c("y-original", "y-predicted"),
col=c("red", "blue"), lty=1,cex=0.8)
# split into train and test parts
train_x = x[1:400,]
test_x = x[401:N,]
train_y = y[1:400]
test_y = y[401:N]
model %>% fit(train_x,train_y,epochs = 100,verbose = 0)
y_pred = model %>% predict(test_x)
x_axes = seq(1:length(y_pred))
plot(x_axes, test_y, col="red", type="l")
lines(x_axes, y_pred, col="blue")
legend("topleft", legend=c("y-original", "y-predicted"),
col=c("red", "blue"), lty=1,cex=0.8)
References:
- https://keras.rstudio.com/index.html
- https://keras.rstudio.com/reference/keras_model_sequential.html