This Jupyter notebook file is available from the ISSP Data Repository (v3.0.0 branch).
Running interactively
PHYSBO can be run interactively using the following workflow (a minimal sketch of this loop is given after the examples below):

1. Get the next parameter to evaluate from PHYSBO.
2. Obtain the evaluation value outside of PHYSBO.
3. Register the evaluation value with PHYSBO.

This is suitable, for example, for cases such as:

- You want to perform experiments by hand and feed the resulting evaluation values to PHYSBO.
- You want flexible control over execution, such as running the simulator in a separate process.
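As a rough illustration of this loop, here is a hypothetical sketch (not part of the original notebook) in which the evaluation value is simply typed in by hand; the candidate matrix X is assumed to be prepared as in the next section.

# Hypothetical sketch of the interactive loop with manual (human) evaluation.
# X is assumed to be the candidate matrix prepared in the next section.
import numpy as np
import physbo

policy = physbo.search.discrete.Policy(test_X=X)
policy.set_seed(0)

for step in range(3):
    # 1. ask PHYSBO for the next candidate (action ID) without running a simulator
    actions = policy.random_search(max_num_probes=1, simulator=None)
    # 2. evaluate the candidate outside of PHYSBO, e.g. by a hand-made experiment
    value = float(input(f"evaluation value for action {actions[0]}: "))
    # 3. register the evaluation value with PHYSBO
    policy.write(actions, np.array([value]))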
Preparation of the search candidate data
As in the previous tutorials, save the dataset file s5-210.csv and load it as follows.
[1]:
import physbo
import numpy as np

def load_data():
    A = np.asarray(np.loadtxt('s5-210.csv', skiprows=1, delimiter=','))
    X = A[:, 0:3]
    t = -A[:, 3]
    return X, t

X, t = load_data()
X = physbo.misc.centering(X)
Cythonized version of physbo is used
Definition of the simulator
[2]:
class Simulator:
    def __init__(self):
        _, self.t = load_data()

    def __call__(self, action):
        return self.t[action]
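As a quick, hypothetical check (not part of the original notebook): calling a Simulator instance with an array of action IDs returns the corresponding evaluation values.

# Hypothetical usage check: passing an array of action IDs returns
# the corresponding (sign-flipped) objective values from the dataset.
simulator = Simulator()
print(simulator(np.array([0, 1, 2])))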
Performing the optimization
[3]:
# set up the policy
policy = physbo.search.discrete.Policy(test_X=X)
# set the random seed
policy.set_seed(0)
Each search step performs the following operations:

1. Run random_search or bayes_search with max_num_probes=1, simulator=None to obtain the action ID (parameter) to evaluate next.
2. Obtain the evaluation value (as an array) with t = simulator(actions).
3. Register the evaluation value for the action ID (parameter) with policy.write(actions, t).
4. Display the search history with physbo.search.utility.show_search_results.
In the following, random sampling is performed twice (1st and 2nd steps), followed by two proposals by Bayesian optimization (3rd and 4th steps).
[4]:
simulator = Simulator()
''' 1st step (random sampling) '''
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
''' 2nd step (random sampling) '''
actions = policy.random_search(max_num_probes=1, simulator=None)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
''' 3rd step (bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis = 5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
''' 4th step (bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis = 5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
interactive mode starts ...
current best f(x) = -1.070602 (best action = 15673)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
current best f(x) = -1.070602 (best action = 15673)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -3.530887145012235
50 -th epoch marginal likelihood -3.530886068830362
100 -th epoch marginal likelihood -3.530887353880807
150 -th epoch marginal likelihood -3.5308873663578604
200 -th epoch marginal likelihood -3.530887366409382
250 -th epoch marginal likelihood -3.5308873664094484
300 -th epoch marginal likelihood -3.5308873664094484
350 -th epoch marginal likelihood -3.530887366409448
400 -th epoch marginal likelihood -3.530887366409449
450 -th epoch marginal likelihood -3.530887366409449
500 -th epoch marginal likelihood -3.530887366409448
Done
current best f(x) = -0.993712 (best action = 11874)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -3.9335555735844907
50 -th epoch marginal likelihood -3.933559178581591
100 -th epoch marginal likelihood -3.9335601169323873
150 -th epoch marginal likelihood -3.933560116971464
200 -th epoch marginal likelihood -3.9335601170149546
250 -th epoch marginal likelihood -3.933560117015064
300 -th epoch marginal likelihood -3.933560117015065
350 -th epoch marginal likelihood -3.933560117015066
400 -th epoch marginal likelihood -3.933560117015066
450 -th epoch marginal likelihood -3.933560117015066
500 -th epoch marginal likelihood -3.933560117015065
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
Suspending and resuming
By saving the following predictor, training, and history objects to external files, the optimization process can be suspended and resumed later (a sketch of resuming in a fresh session follows this list).

- predictor: the prediction model of the objective function
- training: the data used to train the predictor (a physbo.Variable object)
- history: the history of the optimization run (a physbo.search.discrete.History object)
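The cells below save and reload the state within the same session. As an additional hypothetical sketch, resuming in a completely fresh Python session would look roughly as follows; the candidate data has to be rebuilt before the saved files are loaded.

# Hypothetical resumption in a new Python session / separate script.
import numpy as np
import physbo

def load_data():
    A = np.asarray(np.loadtxt('s5-210.csv', skiprows=1, delimiter=','))
    return A[:, 0:3], -A[:, 3]

# rebuild the (centered) candidate matrix exactly as before
X, t = load_data()
X = physbo.misc.centering(X)

# restore the saved optimization state
policy = physbo.search.discrete.Policy(test_X=X)
policy.load(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')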
[5]:
policy.save(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')
[6]:
# delete the policy
del policy
# load the saved policy
policy = physbo.search.discrete.Policy(test_X=X)
policy.load(file_history='history.npz', file_training='training.npz', file_predictor='predictor.dump')
''' 5th step (bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1, simulator=None, score='EI', interval=0, num_rand_basis = 5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
# predictor and training can also be specified explicitly
''' 6th step (bayesian optimization) '''
actions = policy.bayes_search(max_num_probes=1,
predictor=policy.predictor, training=policy.training,
simulator=None, score='EI', interval=0, num_rand_basis = 5000)
t = simulator(actions)
policy.write(actions, t)
physbo.search.utility.show_search_results(policy.history, 10)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -5.282317837949537
50 -th epoch marginal likelihood -5.314394752726803
100 -th epoch marginal likelihood -5.333676604278033
150 -th epoch marginal likelihood -5.344148457109117
200 -th epoch marginal likelihood -5.349388232794448
250 -th epoch marginal likelihood -5.351839835112543
300 -th epoch marginal likelihood -5.352933499846448
350 -th epoch marginal likelihood -5.353415283021385
400 -th epoch marginal likelihood -5.3536398281059405
450 -th epoch marginal likelihood -5.3537614274261145
500 -th epoch marginal likelihood -5.353841697978967
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
f(x)=-1.033129 (action = 8509)
Start the initial hyper parameter searching ...
Done
Start the hyper parameter learning ...
0 -th epoch marginal likelihood -7.1246867953581425
50 -th epoch marginal likelihood -7.125240732973938
100 -th epoch marginal likelihood -7.125738402977026
150 -th epoch marginal likelihood -7.126320570892344
200 -th epoch marginal likelihood -7.127007900107863
250 -th epoch marginal likelihood -7.127824732094982
300 -th epoch marginal likelihood -7.1288019858435145
350 -th epoch marginal likelihood -7.129979143770724
400 -th epoch marginal likelihood -7.131406793529879
450 -th epoch marginal likelihood -7.133149836164181
500 -th epoch marginal likelihood -7.135291360142487
Done
current best f(x) = -0.985424 (best action = 8061)
list of simulation results
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=0.000000 (action = 0)
f(x)=-1.070602 (action = 15673)
f(x)=-1.153410 (action = 16489)
f(x)=-0.993712 (action = 11874)
f(x)=-0.985424 (action = 8061)
f(x)=-1.033129 (action = 8509)
f(x)=-1.061677 (action = 11352)
[ ]:
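As a possible next step in the empty cell above, the per-step cells could be collapsed into loops for longer runs. This is a minimal hypothetical sketch using only the calls introduced in this notebook; the step counts are arbitrary.

# Hypothetical consolidation of the interactive steps into loops.
for step in range(2):
    actions = policy.random_search(max_num_probes=1, simulator=None)
    policy.write(actions, simulator(actions))

for step in range(5):
    actions = policy.bayes_search(max_num_probes=1, simulator=None,
                                  score='EI', interval=0, num_rand_basis=5000)
    policy.write(actions, simulator(actions))

physbo.search.utility.show_search_results(policy.history, 20)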