Restart calculations by reading existing calculation results

Using the following flow, you can read in existing action IDs (parameters) and their evaluation values and run PHYSBO:

  1. Load an external file and read the existing action IDs (parameters) and their evaluation values.

  2. Register the action ID (parameter) and evaluation value to PHYSBO.

  3. Get the parameters for the next execution from PHYSBO.

This flow is useful when a single PHYSBO process cannot be kept running long enough to finish the search, for example due to time constraints, and the optimization therefore cannot be performed interactively.

Preparing the search candidate data

As in the previous tutorials, save the dataset file s5-210.csv into the subdirectory data, and load the dataset from this file as follows:

[2]:
import physbo

import numpy as np


def load_data():
    A = np.asarray(np.loadtxt('data/s5-210.csv', skiprows=1, delimiter=','))
    X = A[:, 0:3]
    t = -A[:, 3]
    return X, t

X, t = load_data()
X = physbo.misc.centering(X)
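Here, physbo.misc.centering standardizes each descriptor column of X. A minimal NumPy sketch of what this step does — my reading of the function, stated as an assumption, not its actual implementation — is:

```python
import numpy as np

def centering(X):
    """Column-wise standardization, assumed equivalent to
    physbo.misc.centering: subtract each column's mean and
    divide by its standard deviation."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / std

# Example: three candidate points with two descriptors each.
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
Xc = centering(X)
```

After this transformation every column has zero mean and unit standard deviation, which puts descriptors of different scales on an equal footing for the Gaussian-process kernel.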

Preparing the precomputed data

The load_data function above loads all of X and t. Here, to stand in for precomputed results, we take a random sample of 20 action IDs and their evaluation values.

[4]:
import random
random.seed(0)
calculated_ids = random.sample(range(t.size), 20)
print(calculated_ids)
t_initial = t[calculated_ids]
[12623, 13781, 1326, 8484, 16753, 15922, 13268, 9938, 15617, 11732, 7157, 16537, 4563, 9235, 4579, 3107, 8208, 17451, 4815, 10162]
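Since the point of this workflow is to restart from stored results, the sampled IDs and values would in practice be persisted between runs. A minimal sketch with NumPy — the file name calculated.csv and the example IDs/values are hypothetical, not part of the tutorial:

```python
import numpy as np

# Hypothetical precomputed results standing in for calculated_ids / t_initial.
calculated_ids = [3, 7, 11]
t_initial = np.array([0.5, -1.2, 0.8])

# Save the action IDs and evaluation values as two CSV columns.
np.savetxt("calculated.csv",
           np.column_stack([calculated_ids, t_initial]),
           delimiter=",", header="action_id,t", comments="")

# On restart, read the file back and split it into IDs and values.
loaded = np.loadtxt("calculated.csv", delimiter=",", skiprows=1, ndmin=2)
ids = loaded[:, 0].astype(int).tolist()
values = loaded[:, 1]
```

The ndmin=2 argument keeps the array two-dimensional even when only a single result has been recorded, so the column slicing works for any number of rows.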

Register the action IDs (parameters) and evaluation values with PHYSBO

Pass calculated_ids and t_initial (= t[calculated_ids]) as a list to the initial_data argument of policy.

[5]:
# set policy
policy = physbo.search.discrete.policy(test_X=X, initial_data=[calculated_ids, t_initial])

# set seed
policy.set_seed(0)

Get the next parameter to be executed from PHYSBO

Perform Bayesian optimization to obtain the next candidate point.

[6]:
actions = policy.bayes_search(max_num_probes=1, simulator=None, score="TS", interval=0, num_rand_basis=5000)
print(actions, X[actions])
Start the initial hyper parameter searching ...
Done

Start the hyper parameter learning ...
0 -th epoch marginal likelihood -20.09302189053099
50 -th epoch marginal likelihood -23.11964735598211
100 -th epoch marginal likelihood -24.83020118385076
150 -th epoch marginal likelihood -25.817906570042602
200 -th epoch marginal likelihood -26.42342027124426
250 -th epoch marginal likelihood -26.822598600211865
300 -th epoch marginal likelihood -27.10872736571494
350 -th epoch marginal likelihood -27.331572599126865
400 -th epoch marginal likelihood -27.517235815448124
450 -th epoch marginal likelihood -27.67892333553869
500 -th epoch marginal likelihood -27.82299469827059
Done

[73] [[-1.6680279  -1.46385011  1.68585446]]

Perform the external calculation for the obtained candidate point, and record the action ID and its evaluation value in a file. Bayesian optimization then advances by repeating the cycle: read the file, register the data with PHYSBO, run the search, and obtain the next candidate point.
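This cycle can be sketched with two small file helpers. The file name results.csv and the helpers append_result / load_results are hypothetical conveniences, not part of the PHYSBO API:

```python
import numpy as np

RESULTS_FILE = "results.csv"  # hypothetical file holding finished evaluations

def append_result(action_id, value, path=RESULTS_FILE):
    """Append one (action ID, evaluation value) pair once an
    external calculation has finished."""
    with open(path, "a") as f:
        f.write(f"{action_id},{value}\n")

def load_results(path=RESULTS_FILE):
    """Read all recorded pairs back in the [ids, values] form
    accepted by the initial_data argument of policy."""
    data = np.loadtxt(path, delimiter=",", ndmin=2)
    return [data[:, 0].astype(int).tolist(), data[:, 1]]

# Demo round trip: start from an empty file, record two evaluations, reload.
open(RESULTS_FILE, "w").close()
append_result(73, -0.98)
append_result(128, -0.75)
restart_data = load_results()
```

On each restart, a script following this sketch would call physbo.search.discrete.policy(test_X=X, initial_data=load_results()), run bayes_search as above, submit the returned action to the external calculation, and call append_result with its evaluation value before exiting.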