# Gaussian process

PHYSBO performs Bayesian optimization while running Gaussian process regression.

Therefore, it is possible to run Gaussian process regression given training data, and to predict test data using the trained model.

In this section, the procedure is introduced.

## Preparation of search candidate data

In this tutorial, the problem of finding a stable interface structure for Cu is used as an example. Although evaluating the objective function, i.e., performing the structural relaxation calculation, actually takes on the order of several hours per run, values that have already been evaluated are used instead. For details of the problem setup, see the following reference:

1. S. Kiyohara, H. Oda, K. Tsuda and T. Mizoguchi, “Acceleration of stable interface structure searching using a kriging approach”, Jpn. J. Appl. Phys. 55, 045502 (2016).

Save the dataset file s5-210.csv into the subdirectory data, and load the dataset from this file as follows:

[1]:

import physbo

import numpy as np

def load_data():
    A = np.asarray(np.loadtxt('data/s5-210.csv', skiprows=1, delimiter=','))
    X = A[:, 0:3]
    t = -A[:, 3]
    return X, t

X, t = load_data()
X = physbo.misc.centering(X)
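
As a rough illustration of what the centering step does (this is a sketch of column-wise standardization, not PHYSBO's actual implementation), each descriptor column is shifted to zero mean and scaled to unit variance:

```python
import numpy as np

def centering(X):
    # Column-wise standardization: zero mean, unit variance per descriptor
    return (X - X.mean(axis=0)) / X.std(axis=0)

X_raw = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])
X_centered = centering(X_raw)
```

Standardizing the descriptors this way keeps all input dimensions on a comparable scale, which makes a single kernel length scale meaningful.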


## Defining training data

A randomly selected 10% of the target data will be used as training data, and another randomly selected 10% will be used as test data.

[2]:

N = len(t)
Ntrain = int(N*0.1)
Ntest = min(int(N*0.1), N-Ntrain)

id_all   = np.random.choice(N, N, replace=False)
id_train  = id_all[0:Ntrain]
id_test = id_all[Ntrain:Ntrain+Ntest]

X_train = X[id_train]
X_test = X[id_test]

t_train = t[id_train]
t_test = t[id_test]

print("Ntrain =", Ntrain)
print("Ntest =", Ntest)

Ntrain = 1798
Ntest = 1798


## Learning and Prediction of Gaussian Processes

The following steps are used to train the Gaussian process and predict the test data.

1. Generate a Gaussian process model.

2. Train the model using X_train (parameters of the training data) and t_train (objective function values of the training data).

3. Predict the test data (X_test) using the trained model.
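
As background, the three steps above can be sketched in plain NumPy for a toy one-dimensional problem. This is only an illustration of the underlying posterior formulas with fixed hyperparameters; PHYSBO additionally learns the hyperparameters during step 2, and the length scale `ell` and `noise` level below are arbitrary choices for this sketch:

```python
import numpy as np

def rbf(A, B, ell=0.3):
    # Gaussian (squared-exponential) kernel between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_fit_predict(X_tr, t_tr, X_te, noise=1e-4):
    # Step 2: "training" amounts to building (and solving with) the kernel matrix
    K = rbf(X_tr, X_tr) + noise * np.eye(len(X_tr))
    # Step 3: posterior mean and variance at the test points
    Ks = rbf(X_tr, X_te)
    fmean = Ks.T @ np.linalg.solve(K, t_tr)
    fvar = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return fmean, fvar

X_tr = np.linspace(0.0, 1.0, 30).reshape(-1, 1)
t_tr = np.sin(2 * np.pi * X_tr[:, 0])
X_te = np.array([[0.25], [0.75]])
fmean, fvar = gp_fit_predict(X_tr, t_tr, X_te)
```

With dense training data the posterior mean closely interpolates the true function, and the posterior variance shrinks toward the noise level near the training points.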

Definition of covariance (Gaussian)

[3]:

cov = physbo.gp.cov.gauss(X_train.shape[1], ard=False)


Definition of mean value

[4]:

mean = physbo.gp.mean.const()


Definition of likelihood function (Gaussian)

[5]:

lik = physbo.gp.lik.gauss()


Generation of a Gaussian Process Model

[6]:

gp = physbo.gp.model(lik=lik, mean=mean, cov=cov)
config = physbo.misc.set_config()


Learning a Gaussian process model.

[7]:

gp.fit(X_train, t_train, config)

Start the initial hyper parameter searching ...
Done

Start the hyper parameter learning ...
0 -th epoch marginal likelihood 17312.31220145003
50 -th epoch marginal likelihood 6291.292745798703
100 -th epoch marginal likelihood 3269.1167759139516
150 -th epoch marginal likelihood 1568.3930580794922
200 -th epoch marginal likelihood 664.2847129159145
250 -th epoch marginal likelihood -249.28468708456558
300 -th epoch marginal likelihood -869.7604930929888
350 -th epoch marginal likelihood -1316.6809532065581
400 -th epoch marginal likelihood -1546.1623851368954
450 -th epoch marginal likelihood -1660.7298135295766
500 -th epoch marginal likelihood -1719.5056128528097
Done



Output the parameters in the learned Gaussian process.

[8]:

gp.print_params()



likelihood parameter =   [-2.81666924]
mean parameter in GP prior:  [-1.05939674]
covariance parameter in GP prior:  [-0.91578975 -2.45544347]



Calculating the mean (predicted value) and variance of the test data

[9]:

gp.prepare(X_train, t_train)
fmean = gp.get_post_fmean(X_train, X_test)
fcov = gp.get_post_fcov(X_train, X_test)


Results of prediction

[10]:

fmean

[10]:

array([-1.00420815, -1.10923758, -0.97840623, ..., -1.00323733,
-0.97015759, -1.11076236])


Results of covariance

[11]:

fcov

[11]:

array([0.00056069, 0.00075529, 0.00043006, ..., 0.0016925 , 0.00070103,
0.00073499])


Output the mean squared error of the prediction

[12]:

np.mean((fmean-t_test)**2)

[12]:

0.008107085662147708
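
Each entry of fcov is the posterior variance at the corresponding test point, so its square root serves as an error bar on fmean. For example, using the first three values printed above:

```python
import numpy as np

# Posterior mean and variance for the first three test points (values from the output above)
fmean = np.array([-1.00420815, -1.10923758, -0.97840623])
fcov = np.array([0.00056069, 0.00075529, 0.00043006])

fstd = np.sqrt(fcov)          # predictive standard deviation
lower = fmean - 1.96 * fstd   # approximate 95% credible interval
upper = fmean + 1.96 * fstd
```

Intervals like these are what the acquisition functions in Bayesian optimization trade off against the predicted mean.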


## Prediction by trained models

Read the parameters of the trained model as gp_params and make predictions using them.

By storing gp_params and training data (X_train, t_train), prediction by the trained model is possible.

Prepare the learned parameters (must be done immediately after learning)

[13]:

#Prepare the learned parameters as a 1D array
gp_params = np.append(np.append(gp.lik.params, gp.prior.mean.params), gp.prior.cov.params)

gp_params

[13]:

array([-2.81666924, -1.05939674, -0.91578975, -2.45544347])
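
Since the trained state is fully determined by gp_params together with the training data, they can be serialized and restored later, for example with np.savez. In the sketch below, the zero arrays are stand-ins for the real training data, and an in-memory buffer stands in for a file on disk:

```python
import io
import numpy as np

gp_params = np.array([-2.81666924, -1.05939674, -0.91578975, -2.45544347])
X_train = np.zeros((5, 3))   # stand-in for the actual training inputs
t_train = np.zeros(5)        # stand-in for the actual objective values

# In practice pass a filename such as 'gp_model.npz' instead of the buffer
buf = io.BytesIO()
np.savez(buf, gp_params=gp_params, X_train=X_train, t_train=t_train)
buf.seek(0)

loaded = np.load(buf)
gp_params_restored = loaded['gp_params']
```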


Prepare a model similar to the one used for training as gp

[14]:

# Definition of covariance (Gaussian)
cov = physbo.gp.cov.gauss(X_train.shape[1], ard=False)

# Definition of mean value
mean = physbo.gp.mean.const()

# Definition of likelihood function (Gaussian)
lik = physbo.gp.lik.gauss()

# Generation of a Gaussian process model
gp = physbo.gp.model(lik=lik, mean=mean, cov=cov)


Set the learned parameters into the model and calculate the mean (predicted value) and variance of the test data

[15]:

#Input learned parameters into the Gaussian process.
gp.set_params(gp_params)

#Calculate the mean (predicted value) and variance of the test data
gp.prepare(X_train, t_train)
fmean = gp.get_post_fmean(X_train, X_test)
fcov = gp.get_post_fcov(X_train, X_test)


Results of prediction

[16]:

fmean

[16]:

array([-1.00420815, -1.10923758, -0.97840623, ..., -1.00323733,
-0.97015759, -1.11076236])


Results of covariance

[17]:

fcov

[17]:

array([0.00056069, 0.00075529, 0.00043006, ..., 0.0016925 , 0.00070103,
0.00073499])


Output the mean squared error of the prediction

[18]:

np.mean((fmean-t_test)**2)

[18]:

0.008107085662147708


Note: In the example above, we used the same pre-registered X to make predictions.
If you want to use the trained model to make predictions for parameters X_new that are not included in X,
first obtain the mean (X_mean) and standard deviation (X_std) of the data X used for training,
and then normalize X_new as (X_new - X_mean) / X_std.
Also, X must be given in ndarray format.
Therefore, if X_new is a single data point, it must be converted to an ndarray.
For example, if X_new is a real number, you should replace X_new as
X_new = np.array(X_new).reshape(1)
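
A minimal sketch of the normalization described in this note (the array X below is a small stand-in for the actual candidate data before centering, and reshape(1, -1) is used because X_new here is a vector of three descriptors rather than a single real number):

```python
import numpy as np

# Stand-in for the raw (uncentered) candidate data X
X = np.array([[0.0, 2.0, 4.0],
              [2.0, 4.0, 6.0],
              [4.0, 6.0, 8.0]])
X_mean = X.mean(axis=0)
X_std = X.std(axis=0)

# A new data point, normalized in the same way as X
X_new = np.array([1.0, 3.0, 5.0])
X_new_normalized = (X_new - X_mean) / X_std

# A single point needs a leading sample axis before being passed to prediction
X_new_2d = X_new_normalized.reshape(1, -1)
```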