This Jupyter notebook is available in the ISSP Data Repository (develop branch).
Running PHYSBO interactively
You can run PHYSBO interactively in the following way:
Get the next parameter to run from PHYSBO
Get the evaluation values outside of PHYSBO
Register the evaluation values into PHYSBO
This mode is suitable, for example, in the following cases:
You want to perform an experiment manually and feed the evaluation values back to PHYSBO.
You want to control the execution flexibly, for example by running the simulator in a separate process.
Preparation of search candidate data
As in the previous tutorials, save the dataset file s5-210.csv and load the dataset from it as follows:
[1]:
import physbo
import numpy as np
def load_data():
    A = np.asarray(np.loadtxt('s5-210.csv', skiprows=1, delimiter=','))
    X = A[:, 0:3]
    t = -A[:, 3]
    return X, t

X, t = load_data()
X = physbo.misc.centering(X)
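The candidate matrix is passed through physbo.misc.centering before the search. Assuming this standardizes each column to zero mean and unit variance (an assumption about the library's behavior, not confirmed here), the effect can be sketched in plain NumPy:

```python
import numpy as np

# Hypothetical stand-in for physbo.misc.centering, assuming it
# standardizes each column of the candidate matrix.
def centering_sketch(X):
    return (X - X.mean(axis=0)) / X.std(axis=0)

X_demo = np.array([[1.0, 10.0],
                   [2.0, 20.0],
                   [3.0, 30.0]])
Xc = centering_sketch(X_demo)
print(Xc.mean(axis=0))  # each column now has (near-)zero mean
print(Xc.std(axis=0))   # and unit standard deviation
```

Putting all descriptors on a common scale like this keeps any single column from dominating the Gaussian-process kernel distances.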
Definition of the simulator
[2]:
class Simulator:
    def __init__(self):
        _, self.t = load_data()

    def __call__(self, action):
        return self.t[action]
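Calling the simulator with an array of action IDs returns the corresponding evaluation values via NumPy fancy indexing. A minimal stand-in (MockSimulator is a hypothetical name, not part of PHYSBO) shows the pattern:

```python
import numpy as np

# Minimal stand-in mirroring the Simulator above: evaluation values are
# precomputed, and indexing with an array of action IDs returns the
# matching values.
class MockSimulator:
    def __init__(self, t):
        self.t = np.asarray(t)

    def __call__(self, action):
        return self.t[action]

sim = MockSimulator([0.5, -1.2, 0.9, -0.3])
print(sim(np.array([1, 3])))  # -> [-1.2 -0.3]
```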
Executing optimization
[3]:
# Set policy
policy = physbo.search.discrete.Policy(test_X=X)
# Set seed
policy.set_seed(0)
In each search step, the following processes are performed.
Running random_search or bayes_search with max_num_probes=1 and simulator=None to get the action IDs (parameters).
Getting the evaluation values for the array of actions with t = simulator(actions).
Registering the evaluation values for the action IDs (parameters) with policy.write(actions, t).
Showing the history with physbo.search.utility.show_search_results.
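The steps above can be sketched as a generic propose-evaluate-register loop. DummyPolicy and dummy_simulator below are hypothetical stand-ins for illustration only; they are not part of the PHYSBO API:

```python
import random

# Schematic version of the interactive loop: propose an action,
# evaluate it outside the policy, then register the result.
class DummyPolicy:
    def __init__(self, num_candidates):
        self.unevaluated = list(range(num_candidates))
        self.history = []

    def propose(self):                  # step 1: get an action ID
        i = random.randrange(len(self.unevaluated))
        return self.unevaluated.pop(i)

    def write(self, action, value):     # step 3: register the result
        self.history.append((action, value))

def dummy_simulator(action):            # step 2: evaluate outside the policy
    return -(action - 5) ** 2

random.seed(0)
policy = DummyPolicy(num_candidates=10)
for _ in range(4):
    action = policy.propose()
    value = dummy_simulator(action)
    policy.write(action, value)
print(policy.history)                   # step 4: inspect the history
```

In PHYSBO itself, step 1 is random_search or bayes_search with simulator=None, and step 4 is show_search_results.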
In the following, we will perform two random sampling steps (1st and 2nd) and two Bayesian optimization proposals (3rd and 4th steps).
[4]:
simulator = Simulator()

''' 1st step (random sampling) '''
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 2nd step (random sampling) '''
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 3rd step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 4th step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
interactive mode starts ...
current best f(x) = -1.070602 (best action = 15673)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
current best f(x) = -1.070602 (best action = 15673)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -3.530887145012274
50 -th epoch marginal likelihood -3.5308860688302293
100 -th epoch marginal likelihood -3.530887353880805
150 -th epoch marginal likelihood -3.53088736635786
200 -th epoch marginal likelihood -3.530887366409382
250 -th epoch marginal likelihood -3.530887366409449
300 -th epoch marginal likelihood -3.530887366409449
350 -th epoch marginal likelihood -3.5308873664094484
400 -th epoch marginal likelihood -3.5308873664094484
450 -th epoch marginal likelihood -3.5308873664094484
500 -th epoch marginal likelihood -3.530887366409448
Done
current best f(x) = -0.993712 (best action = 11874)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -3.9335555735844907
50 -th epoch marginal likelihood -3.9335591785815924
100 -th epoch marginal likelihood -3.9335601169323873
150 -th epoch marginal likelihood -3.9335601169714645
200 -th epoch marginal likelihood -3.9335601170149537
250 -th epoch marginal likelihood -3.9335601170150656
300 -th epoch marginal likelihood -3.933560117015065
350 -th epoch marginal likelihood -3.9335601170150656
400 -th epoch marginal likelihood -3.9335601170150634
450 -th epoch marginal likelihood -3.933560117015065
500 -th epoch marginal likelihood -3.9335601170150656
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
Suspend and restart
You can suspend the optimization process and restart it later by saving the following objects (predictor, training, and history) to external files.
predictor: Prediction model of the objective function
training: Data used to train the predictor (physbo.Variable object)
history: History of optimization runs (physbo.search.discrete.History object)
[5]:
policy.save(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')
[6]:
# delete policy
del policy
# load policy
policy = physbo.search.discrete.Policy(test_X=X)
policy.load(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')
''' 5th step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

# It is also possible to specify the predictor and training data explicitly.
''' 6th step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1,
                              predictor=policy.predictor, training=policy.training,
                              simulator=None, score='EI', interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -5.282317837949537
50 -th epoch marginal likelihood -5.314394752726803
100 -th epoch marginal likelihood -5.333676604278033
150 -th epoch marginal likelihood -5.3441484571091165
200 -th epoch marginal likelihood -5.34938823279445
250 -th epoch marginal likelihood -5.351839835112543
300 -th epoch marginal likelihood -5.35293349984645
350 -th epoch marginal likelihood -5.353415283021384
400 -th epoch marginal likelihood -5.3536398281059405
450 -th epoch marginal likelihood -5.3537614274261145
500 -th epoch marginal likelihood -5.353841697978967
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
f(x)=-1.033129 (action = 8509)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -7.1246867953581425
50 -th epoch marginal likelihood -7.125240732973937
100 -th epoch marginal likelihood -7.125738402977025
150 -th epoch marginal likelihood -7.1263205708923465
200 -th epoch marginal likelihood -7.127007900107862
250 -th epoch marginal likelihood -7.127824732094982
300 -th epoch marginal likelihood -7.128801985843515
350 -th epoch marginal likelihood -7.129979143770722
400 -th epoch marginal likelihood -7.13140679352988
450 -th epoch marginal likelihood -7.133149836164179
500 -th epoch marginal likelihood -7.135291360142487
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
f(x)=-1.033129 (action = 8509)
f(x)=-1.061677 (action = 11352)