This Jupyter notebook is available in the ISSP Data Repository (v3.0.0 branch).

Running PHYSBO interactively

You can run PHYSBO interactively in the following way:

  1. Get the next parameter to run from PHYSBO

  2. Get the evaluation values outside of PHYSBO

  3. Register the evaluation values into PHYSBO

This mode is suitable, for example, in the following cases:

  • You want to perform an experiment manually and give the evaluation values to PHYSBO.

  • You want to control the execution flexibly, such as running the simulator in a separate process.
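
In code, this interaction reduces to a loop around the policy object. The following skeleton is a sketch of the pattern built up in this tutorial; `evaluate` is a hypothetical stand-in for your experiment or external simulator:

# Skeleton of the interactive loop (see below for a runnable version).
# `evaluate` is a hypothetical placeholder for your own measurement process.
for step in range(num_steps):
    actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0)  # 1. get candidates
    t = evaluate(actions)     # 2. evaluate the candidates outside of PHYSBO
    policy.write(actions, t)  # 3. register the evaluation values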

Preparation of search candidate data

As in the previous tutorials, save the dataset file s5-210.csv and load the dataset from it as follows:

[1]:
import physbo

import numpy as np


def load_data():
    # The first three columns of s5-210.csv are the input parameters X;
    # the fourth is the objective value, negated because PHYSBO maximizes.
    A = np.asarray(np.loadtxt('s5-210.csv', skiprows=1, delimiter=','))
    X = A[:, 0:3]
    t = -A[:, 3]
    return X, t


X, t = load_data()
X = physbo.misc.centering(X)
Cythonized version of physbo is used
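
Here physbo.misc.centering standardizes each column of the candidate matrix so that all parameters are on a comparable scale. As a rough sketch of what it computes (the actual library implementation may differ in details, e.g. in its handling of constant columns):

# Roughly equivalent standardization: zero mean and unit variance per column.
# A sketch for illustration, not PHYSBO's implementation.
X_manual = (X - X.mean(axis=0)) / X.std(axis=0)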

Definition of the simulator

[2]:
class Simulator:
    def __init__(self):
        _, self.t = load_data()

    def __call__(self, action):
        # Return the objective value(s) for the given action ID(s).
        return self.t[action]
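
Because self.t is a NumPy array, the simulator accepts either a single action ID or an array of IDs; the policy methods used below return an array. A quick usage sketch:

sim = Simulator()
print(sim(0))                    # objective value for action ID 0
print(sim(np.array([0, 1, 2])))  # values for several action IDs at once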

Executing optimization

[3]:
# Set policy
policy = physbo.search.discrete.Policy(test_X=X)

# Set seed
policy.set_seed(0)

In each search step, the following processes are performed.

  1. Running random_search or bayes_search with max_num_probes=1 and simulator=None to get the next action IDs (parameters).

  2. Getting the evaluation values for the proposed actions by t = simulator(actions).

  3. Registering the evaluation values for the action IDs (parameters) with policy.write(actions, t).

  4. Showing the history with physbo.search.utility.show_search_results.

In the following, we will perform two random sampling steps (1st and 2nd) and two Bayesian optimization steps (3rd and 4th).

[4]:
simulator = Simulator()

''' 1st step (random sampling) '''
actions = policy.random_search(max_num_probes=1, simulator=None)
t  = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 2nd step (random sampling) '''
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 3rd step (bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0,  num_rand_basis = 5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 4-th step (bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0,  num_rand_basis = 5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
interactive mode starts ...

current best f(x) = -1.070602 (best action = 15673)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)


current best f(x) = -1.070602 (best action = 15673)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)


Start the initial hyper parameter searching ...
Done

Start the hyper parameter learning ...
0 -th epoch marginal likelihood -3.530887145012235
50 -th epoch marginal likelihood -3.530886068830362
100 -th epoch marginal likelihood -3.530887353880807
150 -th epoch marginal likelihood -3.5308873663578604
200 -th epoch marginal likelihood -3.530887366409382
250 -th epoch marginal likelihood -3.5308873664094484
300 -th epoch marginal likelihood -3.5308873664094484
350 -th epoch marginal likelihood -3.530887366409448
400 -th epoch marginal likelihood -3.530887366409449
450 -th epoch marginal likelihood -3.530887366409449
500 -th epoch marginal likelihood -3.530887366409448
Done

current best f(x) = -0.993712 (best action = 11874)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)


Start the initial hyper parameter searching ...
Done

Start the hyper parameter learning ...
0 -th epoch marginal likelihood -3.9335555735844907
50 -th epoch marginal likelihood -3.933559178581591
100 -th epoch marginal likelihood -3.9335601169323873
150 -th epoch marginal likelihood -3.933560116971464
200 -th epoch marginal likelihood -3.9335601170149546
250 -th epoch marginal likelihood -3.933560117015064
300 -th epoch marginal likelihood -3.933560117015065
350 -th epoch marginal likelihood -3.933560117015066
400 -th epoch marginal likelihood -3.933560117015066
450 -th epoch marginal likelihood -3.933560117015066
500 -th epoch marginal likelihood -3.933560117015065
Done

current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
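
For longer runs, the four unrolled steps above can be written as one loop with equivalent behavior; a minimal sketch:

for step in range(4):
    if step < 2:
        # steps 1-2: random sampling
        actions = policy.random_search(max_num_probes=1, simulator=None)
    else:
        # steps 3-4: Bayesian optimization
        actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI',
                                      interval=0, num_rand_basis=5000)
    t = simulator(actions)
    policy.write(actions, t)
    physbo.search.utility.show_search_results(policy.history, 10)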


Suspend and restart

You can suspend and restart the optimization process by saving the following three objects to external files.

  • predictor: Prediction model of the objective function

  • training: Data used to train the predictor (physbo.Variable object)

  • history: History of optimization runs (physbo.search.discrete.History object)

[5]:
policy.save(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')
[6]:
# delete policy
del policy

# load policy
policy = physbo.search.discrete.Policy(test_X=X)
policy.load(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')
# (when restarting in a new process, re-create the simulator as well:
#  simulator = Simulator())

''' 5th step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI',
                              interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

# It is also possible to pass the predictor and training data explicitly.
''' 6th step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1,
                              predictor=policy.predictor, training=policy.training,
                              simulator=None, score='EI',
                              interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
Start the initial hyper parameter searching ...
Done

Start the hyper parameter learning ...
0 -th epoch marginal likelihood -5.282317837949537
50 -th epoch marginal likelihood -5.314394752726803
100 -th epoch marginal likelihood -5.333676604278033
150 -th epoch marginal likelihood -5.344148457109117
200 -th epoch marginal likelihood -5.349388232794448
250 -th epoch marginal likelihood -5.351839835112543
300 -th epoch marginal likelihood -5.352933499846448
350 -th epoch marginal likelihood -5.353415283021385
400 -th epoch marginal likelihood -5.3536398281059405
450 -th epoch marginal likelihood -5.3537614274261145
500 -th epoch marginal likelihood -5.353841697978967
Done

current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
f(x)=-1.033129 (action = 8509)


Start the initial hyper parameter searching ...
Done

Start the hyper parameter learning ...
0 -th epoch marginal likelihood -7.1246867953581425
50 -th epoch marginal likelihood -7.125240732973938
100 -th epoch marginal likelihood -7.125738402977026
150 -th epoch marginal likelihood -7.126320570892344
200 -th epoch marginal likelihood -7.127007900107863
250 -th epoch marginal likelihood -7.127824732094982
300 -th epoch marginal likelihood -7.1288019858435145
350 -th epoch marginal likelihood -7.129979143770724
400 -th epoch marginal likelihood -7.131406793529879
450 -th epoch marginal likelihood -7.133149836164181
500 -th epoch marginal likelihood -7.135291360142487
Done

current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
f(x)=-1.033129 (action = 8509)
f(x)=-1.061677 (action = 11352)
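
Finally, the search history can be inspected directly, for example to extract the best value found after each evaluation. The sketch below assumes the export_all_sequence_best_fx method of the history object, as used in the PHYSBO basic-usage tutorial; check your installed version if it differs:

# Best value and corresponding action ID after each evaluation
# (export_all_sequence_best_fx is assumed, as in the basic PHYSBO tutorial).
best_fx, best_actions = policy.history.export_all_sequence_best_fx()
print(best_fx[-1], best_actions[-1])  # best result so far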

