This Jupyter notebook is available in the ISSP Data Repository (develop branch).
Running PHYSBO interactively
You can run PHYSBO interactively by repeating the following three steps (a minimal sketch of this loop is given below):
1. Get the next parameter to run from PHYSBO.
2. Get the evaluation values outside of PHYSBO.
3. Register the evaluation values into PHYSBO.
This mode is suitable, for example, in the following cases:
- You want to perform an experiment manually and feed the evaluation values to PHYSBO.
- You want to control the execution flexibly, such as running the simulator in a separate process.
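Concretely, the interactive loop has the following shape. This is a minimal sketch: the toy X and the evaluate function are hypothetical stand-ins for your own candidates and experiment, while the physbo calls are the ones used throughout this tutorial.

import numpy as np
import physbo

# Hypothetical candidate matrix; in this tutorial it is loaded from s5-210.csv.
X = np.random.rand(1000, 3)

def evaluate(actions):
    # Hypothetical stand-in for a manual experiment or an external simulator.
    return -np.sum((X[actions] - 0.5) ** 2, axis=1)

policy = physbo.search.discrete.Policy(test_X=X)
policy.set_seed(0)

for step in range(10):
    # 1. Get the next action IDs from PHYSBO; simulator=None makes the
    #    search methods return the actions instead of evaluating them.
    if step < 2:
        actions = policy.random_search(max_num_probes=1, simulator=None)
    else:
        actions = policy.bayes_search(
            max_num_probes=1, simulator=None, score="EI", interval=0, num_rand_basis=5000
        )
    # 2. Get the evaluation values outside of PHYSBO.
    t = evaluate(actions)
    # 3. Register the evaluation values into PHYSBO.
    policy.write(actions, t)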
[1]:
import numpy as np
import physbo
Preparation of search candidate data
As in the previous tutorials, save the dataset file s5-210.csv and load the dataset from this file as follows:
[2]:
def load_data():
    A = np.asarray(np.loadtxt('s5-210.csv', skiprows=1, delimiter=','))
    X = A[:, 0:3]
    t = -A[:, 3]
    return X, t

X, t = load_data()
X = physbo.misc.centering(X)
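Here physbo.misc.centering standardizes the descriptor matrix X. As a rough sketch of what that means, assuming standardization of each column to zero mean and unit variance (an assumption for illustration, not the library's exact implementation):

import numpy as np

def centering_sketch(X):
    # Assumed behavior: shift each column to zero mean and scale it
    # to unit variance. A sketch, not physbo.misc.centering itself.
    return (X - X.mean(axis=0)) / X.std(axis=0)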
Definition of the simulator
Unlike in Basic Usage of PHYSBO, here you need to define your own Simulator class. In this example, since we know all the objective function values t in advance, we can simply return t[action].
[3]:
class Simulator:
    def __init__(self, t):
        self.t = t

    def __call__(self, action):
        return self.t[action]

simulator = Simulator(t)
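Because t is a NumPy array, the simulator accepts a single action ID as well as an array of IDs (via NumPy fancy indexing). For example:

# A single action ID returns a single objective value.
y0 = simulator(0)                     # same as t[0]
# An array of action IDs returns an array of objective values,
# which is how the search loop below uses it.
ys = simulator(np.array([0, 1, 2]))   # same as t[[0, 1, 2]]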
Executing optimization
[4]:
# Set policy
policy = physbo.search.discrete.Policy(test_X=X)

# Set seed
policy.set_seed(0)
In each search step, the following processes are performed:
1. Running random_search or bayes_search with max_num_probes=1, simulator=None to get the action IDs (parameters).
2. Getting the evaluation values for the array of actions by t = simulator(actions).
3. Registering the evaluation values for the action IDs (parameters) with policy.write(actions, t).
4. Showing the history with physbo.search.utility.show_search_results.
In the following, we perform two random sampling steps (the 1st and 2nd steps) and two Bayesian optimization proposals (the 3rd and 4th steps).
[5]:
# 1st step (random sampling)
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
# 2nd step (random sampling)
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
# 3rd step (Bayesian optimization)
actions = policy.bayes_search(
max_num_probes=1, simulator=None, score="EI", interval=0, num_rand_basis=5000
)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
# 4th step (Bayesian optimization)
actions = policy.bayes_search(
max_num_probes=1, simulator=None, score="EI", interval=0, num_rand_basis=5000
)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
interactive mode starts ...
0001-th step: f(x) = -1.070602 (action=15673)
current best f(x) = -1.070602 (best action=15673)
current best f(x) = -1.070602 (best action = 15673)
list of simulation results
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -3.530887145012274
50 -th epoch marginal likelihood -3.5308860688302293
100 -th epoch marginal likelihood -3.530887353880805
150 -th epoch marginal likelihood -3.53088736635786
200 -th epoch marginal likelihood -3.530887366409382
250 -th epoch marginal likelihood -3.530887366409449
300 -th epoch marginal likelihood -3.530887366409449
350 -th epoch marginal likelihood -3.5308873664094484
400 -th epoch marginal likelihood -3.5308873664094484
450 -th epoch marginal likelihood -3.5308873664094484
500 -th epoch marginal likelihood -3.530887366409448
Done
current best f(x) = -0.993712 (best action = 11874)
list of simulation results
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -3.9335555735844907
50 -th epoch marginal likelihood -3.9335591785815924
100 -th epoch marginal likelihood -3.9335601169323873
150 -th epoch marginal likelihood -3.9335601169714645
200 -th epoch marginal likelihood -3.9335601170149546
250 -th epoch marginal likelihood -3.9335601170150643
300 -th epoch marginal likelihood -3.9335601170150643
350 -th epoch marginal likelihood -3.933560117015066
400 -th epoch marginal likelihood -3.9335601170150643
450 -th epoch marginal likelihood -3.933560117015067
500 -th epoch marginal likelihood -3.9335601170150647
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
Suspend and restart
You can suspend and restart the optimization process by saving the following three objects (predictor, training, and history) to external files.
- predictor: Prediction model of the objective function
- training: Data used to train the predictor (physbo.Variable object)
- history: History of optimization runs (physbo.search.discrete.History object)
[6]:
policy.save(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')
[7]:
# delete policy
del policy
# load policy
policy = physbo.search.discrete.Policy(test_X=X)
policy.load(
file_history="history.npz",
file_training="training.npz",
file_predictor="predictor.dump",
)
""" 5-th step (bayesian optimization) """
actions = policy.bayes_search(
max_num_probes=1, simulator=None, score="EI", interval=0, num_rand_basis=5000
)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
# It is also possible to specify predictor and training separately.
""" 6-th step (bayesian optimization) """
actions = policy.bayes_search(
max_num_probes=1,
predictor=policy.predictor,
training=policy.training,
simulator=None,
score="EI",
interval=0,
num_rand_basis=5000,
)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -5.282317837949536
50 -th epoch marginal likelihood -5.314394752726803
100 -th epoch marginal likelihood -5.333676604278032
150 -th epoch marginal likelihood -5.3441484571091165
200 -th epoch marginal likelihood -5.349388232794448
250 -th epoch marginal likelihood -5.3518398351125445
300 -th epoch marginal likelihood -5.352933499846449
350 -th epoch marginal likelihood -5.353415283021384
400 -th epoch marginal likelihood -5.3536398281059405
450 -th epoch marginal likelihood -5.353761427426113
500 -th epoch marginal likelihood -5.353841697978968
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
f(x)=-1.033129 (action = 8509)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -7.1246867953581425
50 -th epoch marginal likelihood -7.125240732973937
100 -th epoch marginal likelihood -7.125738402977026
150 -th epoch marginal likelihood -7.126320570892345
200 -th epoch marginal likelihood -7.127007900107862
250 -th epoch marginal likelihood -7.127824732094982
300 -th epoch marginal likelihood -7.1288019858435145
350 -th epoch marginal likelihood -7.129979143770723
400 -th epoch marginal likelihood -7.131406793529882
450 -th epoch marginal likelihood -7.133149836164178
500 -th epoch marginal likelihood -7.135291360142488
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
f(x)=-1.033129 (action = 8509)
f(x)=-1.061677 (action = 11352)