Running PHYSBO interactively

You can run PHYSBO interactively in the following way:

  1. Get the next parameter to run from PHYSBO

  2. Get the evaluation values outside of PHYSBO

  3. Register the evaluation values into PHYSBO

This mode is suitable, for example, in the following cases (a minimal sketch of the loop is shown after this list):

  • You want to perform an experiment manually and give the evaluation values to PHYSBO.

  • You want to control the execution flexibly, such as running the simulator in a separate process.
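
Before walking through the full tutorial below, here is a minimal, self-contained sketch of this propose / evaluate / register loop on a toy one-dimensional candidate set. The names X_demo, evaluate, and demo_policy are illustrative only and not part of PHYSBO.

[ ]:
import numpy as np

import physbo

# Toy candidate set: 101 points on [-1, 1]
X_demo = np.linspace(-1.0, 1.0, 101).reshape(-1, 1)


def evaluate(actions):
    # Hypothetical stand-in for your experiment or simulator:
    # return one objective value per proposed action ID.
    return -X_demo[actions, 0] ** 2


demo_policy = physbo.search.discrete.policy(test_X=X_demo)
demo_policy.set_seed(0)

for step in range(3):
    actions = demo_policy.random_search(max_num_probes=1, simulator=None)  # 1. get next parameters
    t = evaluate(actions)                                                  # 2. evaluate outside PHYSBO
    demo_policy.write(actions, t)                                          # 3. register the values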

Preparation of search candidate data

As in the previous tutorials, save the dataset file s5-210.csv into the subdirectory data, and load the dataset from this file as follows:

[1]:
import numpy as np

import physbo


def load_data():
    # The first three columns are the descriptors X; the fourth is the
    # objective value, negated because PHYSBO maximizes the objective.
    A = np.asarray(np.loadtxt('data/s5-210.csv', skiprows=1, delimiter=','))
    X = A[:, 0:3]
    t = -A[:, 3]
    return X, t


X, t = load_data()
X = physbo.misc.centering(X)  # normalize each descriptor column
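
As a quick sanity check (not in the original notebook), physbo.misc.centering is expected to standardize each descriptor column, so the column means should be near zero and the standard deviations near one:

[ ]:
# Sanity check on the centered descriptors
# (assumes centering standardizes each column)
print(X.mean(axis=0))
print(X.std(axis=0))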

Definition of simulator

[2]:
class simulator:
    def __init__(self):
        # Preload all objective values so that each call is just a lookup
        _, self.t = load_data()

    def __call__(self, action):
        # Return the objective value(s) for the given action ID(s)
        return self.t[action]
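
As a quick usage check (again not in the original notebook), the simulator can be called directly with a list of action IDs and returns the corresponding objective values:

[ ]:
# Quick check: query the objective values of a few candidates directly
sim = simulator()
print(sim([0, 1, 2]))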

Executing optimization

[3]:
# Set policy
policy = physbo.search.discrete.policy(test_X=X)

# Set seed
policy.set_seed(0)

In each search step, the following processes are performed.

  1. Running random_search or bayes_search with max_num_probes=1, simulator=None to get action IDs (parameters).

  2. Getting the evaluation values for the proposed actions by t = simulator(actions).

  3. Registering the evaluation values for the action IDs (parameters) with policy.write(actions, t).

  4. Showing the history with physbo.search.utility.show_search_results.

In the following, we will perform two random sampling steps (the 1st and 2nd steps) and two Bayesian optimization steps (the 3rd and 4th steps).

[ ]:
simulator = simulator()

''' 1st step (random sampling) '''
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 2nd step (random sampling) '''
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 3rd step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

''' 4th step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
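
At any point you can also ask the history for the best value found so far. The following uses history.export_all_sequence_best_fx(), which returns the running best objective value and the corresponding action ID at each step (as in the other PHYSBO tutorials):

[ ]:
# Running best objective value and action ID after each registered step
best_fx, best_actions = policy.history.export_all_sequence_best_fx()
print('best value so far:', best_fx[-1])
print('best action ID so far:', best_actions[-1])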

Suspend and restart

You can suspend and restart the optimization process by saving the following three objects to external files: predictor, training, and history.

  • predictor: Prediction model of the objective function

  • training: Data used to train the predictor (physbo.variable object)

  • history: History of optimization runs (physbo.search.discrete.results.history object)

[5]:
policy.save(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')
[ ]:
# delete policy
del policy

# load policy
policy = physbo.search.discrete.policy(test_X=X)
policy.load(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')

''' 5th step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)

# It is also possible to specify predictor and training separately.
''' 6th step (Bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1,
                              predictor=policy.predictor, training=policy.training,
                              simulator=None, score='EI', interval=0, num_rand_basis=5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
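
To confirm that the reloaded policy carried over all earlier evaluations, you can inspect how many searches the history has registered; total_num_search is assumed here to be the history attribute holding that count:

[ ]:
# Quick check: the restored history should count all registered steps
# (total_num_search is assumed to hold the number of registered evaluations)
print(policy.history.total_num_search)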