Iterated Local Search From Scratch in Python
This article is republished from the WeChat official account "Python中文社區(qū)", by 沂水寒城. To republish, please contact the Python中文社區(qū) account.
Iterated local search is a stochastic global optimization algorithm. It involves repeatedly applying a local search algorithm to modified versions of a good solution found previously. In this way, it is like a smarter version of stochastic hill climbing with random restarts.
The intuition behind the algorithm is that random restarts can help locate many local optima in a problem, and that better local optima are often close to other local optima. Therefore, modest perturbations of existing local optima may locate better, or even the best, solutions to an optimization problem.
In this tutorial, you will discover how to implement the iterated local search algorithm from scratch. After completing this tutorial, you will know:
- Iterated local search is a stochastic global search optimization algorithm that is a smarter version of stochastic hill climbing with random restarts.
- How to implement stochastic hill climbing with random restarts from scratch.
- How to implement and apply the iterated local search algorithm to a nonlinear objective function.
 
Tutorial Overview
This tutorial is divided into five parts. They are:
- What Is Iterated Local Search
- Ackley Objective Function
- Stochastic Hill Climbing Algorithm
- Stochastic Hill Climbing With Random Restarts
- Iterated Local Search Algorithm
 
What Is Iterated Local Search
Iterated local search, or ILS for short, is a stochastic global search optimization algorithm. It is related to stochastic hill climbing and stochastic hill climbing with random restarts.
Stochastic hill climbing is a local search algorithm that involves making random modifications to an existing solution and accepting a modification only if it produces a better result than the current working solution.
Local search algorithms in general can get stuck in local optima. One approach to address this is to restart the search from a new, randomly selected starting point. The restart procedure can be performed many times; it may be triggered after a fixed number of function evaluations, or when no further improvement is seen for a given number of algorithm iterations. This algorithm is called stochastic hill climbing with random restarts.
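The stagnation-triggered restart variant mentioned above can be sketched in a few lines. The following is a minimal illustration on a simple one-dimensional multimodal function; it is not part of the tutorial's later code, and the function name, objective, and hyperparameter values are arbitrary choices for the sketch.

```python
from numpy import cos, pi
from numpy.random import rand, randn, seed

# simple 1-d multimodal objective with global minimum 0 at x=0
def objective(x):
    return x**2 + 10.0 - 10.0 * cos(2.0 * pi * x)

# hill climb that restarts from a random point whenever 'patience'
# iterations pass without any improvement (a stagnation trigger)
def hillclimb_stagnation_restarts(objective, lo, hi, n_total, step_size, patience):
    best, best_eval = None, float('inf')
    x = lo + rand() * (hi - lo)  # random start
    x_eval = objective(x)
    since_improvement = 0
    for _ in range(n_total):
        # gaussian perturbation, clipped to the bounds
        candidate = min(max(x + randn() * step_size, lo), hi)
        candidate_eval = objective(candidate)
        if candidate_eval < x_eval:
            x, x_eval = candidate, candidate_eval
            since_improvement = 0
        else:
            since_improvement += 1
        # track the best solution seen across all restarts
        if x_eval < best_eval:
            best, best_eval = x, x_eval
        # no improvement for a while: restart from a new random point
        if since_improvement >= patience:
            x = lo + rand() * (hi - lo)
            x_eval = objective(x)
            since_improvement = 0
    return best, best_eval

seed(1)
best, score = hillclimb_stagnation_restarts(objective, -5.0, 5.0, 5000, 0.1, 100)
print('f(%.5f) = %.5f' % (best, score))
```

The fixed-budget trigger used later in this tutorial is simpler; the stagnation trigger spends fewer iterations on basins that have already converged.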
Iterated local search is similar to stochastic hill climbing with random restarts, except that rather than choosing a random starting point for each restart, a point is chosen as a modified version of the best point found so far during the broader search.
The perturbation of the best solution so far acts like a large jump to a new region of the search space, whereas the perturbations made by the stochastic hill climbing algorithm are much smaller and confined to a specific region of the search space.
This allows the search to be performed at two levels. The hill climbing algorithm is the local search, getting the most out of a specific candidate solution or region of the search space, and the restart approach allows different regions of the search space to be explored. In this way, the iterated local search algorithm explores multiple local optima in the search space, increasing the likelihood of locating the global optimum.
Iterated local search was proposed for combinatorial optimization problems such as the traveling salesman problem (TSP), although it can be applied to continuous function optimization by using different step sizes in the search space: smaller steps for the hill climbing and larger steps for the random restarts.
Now that we are familiar with the iterated local search algorithm, let's explore how to implement it from scratch.
Ackley Objective Function
First, let's define a challenging optimization problem as the basis for implementing the iterated local search algorithm. The Ackley function is an example of a multimodal objective function that has a single global optimum and multiple local optima in which a local search might get stuck. As such, a global optimization technique is required. It is a two-dimensional objective function that has a global optimum at [0,0], which evaluates to 0.0. The example below implements the Ackley function and creates a three-dimensional surface plot showing the global optimum and the multiple local optima.
```python
# ackley multimodal function
from numpy import arange
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy import meshgrid
from matplotlib import pyplot

# objective function
def objective(x, y):
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
```
 
Running the example creates a surface plot of the Ackley function showing the vast number of local optima.
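As a quick sanity check on the implementation (not shown in the tutorial itself), we can confirm numerically that the global optimum at [0, 0] evaluates to 0.0 and that a nearby local optimum is strictly worse:

```python
from numpy import exp, sqrt, cos, e, pi

# ackley objective, as defined above
def objective(x, y):
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# the global optimum should evaluate to (approximately) zero
print(objective(0.0, 0.0))  # ~0.0
# the local optimum near [1, 1] is strictly worse
print(objective(1.0, 1.0))  # ~3.6254
```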
We will use this as the basis for implementing and comparing a simple stochastic hill climbing algorithm, stochastic hill climbing with random restarts, and finally iterated local search. We would expect a stochastic hill climbing algorithm to get stuck easily in local minima. We would expect stochastic hill climbing with restarts to find many local minima, and, if configured appropriately, we would expect iterated local search to perform better than either approach on this problem.
Stochastic Hill Climbing Algorithm
Core to the iterated local search algorithm is a local search, and in this tutorial we will use the stochastic hill climbing algorithm for this purpose.
The stochastic hill climbing algorithm involves first generating a random starting point as the current working solution, then generating perturbed versions of the current working solution and accepting them if they are better than the current working solution.
Given that we are working on a continuous optimization problem, a solution is a vector of values to be evaluated by the objective function, in this case a point in a two-dimensional space bounded by -5 and 5. We can generate a random point by sampling the search space with a uniform probability distribution. For example:
```python
# generate a random point in the search space
solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
```
 
We can generate perturbed versions of the current working solution using a Gaussian probability distribution, with the mean set to the current values in the solution and a standard deviation controlled by a hyperparameter that determines how far the search is allowed to explore from the current working solution.
We will refer to this hyperparameter as "step_size". For example:
```python
# generate a perturbed version of a current working solution
candidate = solution + randn(len(bounds)) * step_size
```
 
Importantly, we must check that generated solutions are within the search space.
This can be achieved with a custom function named in_bounds() that takes a candidate solution and the bounds of the search space and returns True if the point is in the search space, False otherwise.
```python
# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True
```
 
This function can then be called during the hill climb to confirm that new points are within the bounds of the search space, and if not, new points can be generated.
Tying this together, the hillclimbing() function below implements the stochastic hill climbing local search algorithm. It takes the objective function, the bounds of the problem, the number of iterations, and the step size as arguments, and returns the best solution and its evaluation.
```python
# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate an initial point
    solution = None
    while solution is None or not in_bounds(solution, bounds):
        solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
            # report progress
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]
```
 
We can test this algorithm on the Ackley function.
We will fix the seed for the pseudorandom number generator to ensure we get the same results each time the code is run.
The algorithm will be run for 1,000 iterations with a step size of 0.05 units; both hyperparameters were chosen after a little trial and error.
At the end of the run, we will report the best solution found.
```python
# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iterations = 1000
# define the maximum step size
step_size = 0.05
# perform the hill climbing search
best, score = hillclimbing(objective, bounds, n_iterations, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))
```
 
Tying this together, the complete example of applying the stochastic hill climbing algorithm to the Ackley objective function is listed below.
```python
# hill climbing search of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate an initial point
    solution = None
    while solution is None or not in_bounds(solution, bounds):
        solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
            # report progress
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iterations = 1000
# define the maximum step size
step_size = 0.05
# perform the hill climbing search
best, score = hillclimbing(objective, bounds, n_iterations, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))
```
 
Running the example performs the stochastic hill climbing search on the objective function. Each improvement found during the search is reported, and the best solution is then reported at the end of the search.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.
In this case, we can see about 13 improvements during the search and a final solution of approximately f(-0.981, 1.965), which evaluates to about 5.381, far from f(0.0, 0.0) = 0.
```
>0 f([-0.85618854 2.1495965 ]) = 6.46986
>1 f([-0.81291816 2.03451957]) = 6.07149
>5 f([-0.82903902 2.01531685]) = 5.93526
>7 f([-0.83766043 1.97142393]) = 5.82047
>9 f([-0.89269139 2.02866012]) = 5.68283
>12 f([-0.8988359 1.98187164]) = 5.55899
>13 f([-0.9122303 2.00838942]) = 5.55566
>14 f([-0.94681334 1.98855174]) = 5.43024
>15 f([-0.98117198 1.94629146]) = 5.39010
>23 f([-0.97516403 1.97715161]) = 5.38735
>39 f([-0.98628044 1.96711371]) = 5.38241
>362 f([-0.9808789 1.96858459]) = 5.38233
>629 f([-0.98102417 1.96555308]) = 5.38194
Done!
f([-0.98102417 1.96555308]) = 5.381939
```
 
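The step size of 0.05 was chosen by trial and error; that tuning can be made explicit with a small sweep. The sketch below reuses the tutorial's objective, in_bounds(), and hillclimbing() functions (with the progress printing removed) and tries a few step sizes from the same seed; the particular values swept are arbitrary choices for the sketch.

```python
from numpy import asarray, exp, sqrt, cos, e, pi
from numpy.random import randn, rand, seed

# ackley objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    for d in range(len(bounds)):
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# the tutorial's hill climber, with the progress printing removed
def hillclimbing(objective, bounds, n_iterations, step_size):
    solution = None
    while solution is None or not in_bounds(solution, bounds):
        solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    solution_eval = objective(solution)
    for _ in range(n_iterations):
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        candidate_eval = objective(candidate)
        if candidate_eval <= solution_eval:
            solution, solution_eval = candidate, candidate_eval
    return solution, solution_eval

bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
scores = {}
for step_size in [0.01, 0.05, 0.1, 0.5]:
    seed(1)  # same starting point for each setting
    _, score = hillclimbing(objective, bounds, 1000, step_size)
    scores[step_size] = score
    print('step_size=%.2f -> f = %.5f' % (step_size, score))
```

Too small a step converges slowly; too large a step rarely gets accepted near an optimum, so a sweep like this makes the trade-off visible.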
Stochastic Hill Climbing With Random Restarts
Stochastic hill climbing with random restarts involves running the stochastic hill climbing algorithm repeatedly and keeping track of the best solution found. First, let's modify the hillclimbing() function to take the starting point of the search rather than generating it randomly. This will help later when we implement the iterated local search algorithm.
```python
# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]
```
 
Next, we can implement the random restarts algorithm by calling the hillclimbing() function repeatedly a fixed number of times. On each call, we generate a new, randomly selected starting point for the hill climbing search.
```python
# generate a random initial point for the search
start_pt = None
while start_pt is None or not in_bounds(start_pt, bounds):
    start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
# perform a stochastic hill climbing search
solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
```
 
We can then inspect the result and keep it if it is better than any result we have seen so far during the search.
```python
# check for new best
if solution_eval < best_eval:
    best, best_eval = solution, solution_eval
    print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
```
 
Tying this together, the random_restarts() function implements the stochastic hill climbing algorithm with random restarts.
```python
# hill climbing with random restarts algorithm
def random_restarts(objective, bounds, n_iter, step_size, n_restarts):
    best, best_eval = None, 1e+10
    # enumerate restarts
    for n in range(n_restarts):
        # generate a random initial point for the search
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]
```
 
We can then apply this algorithm to the Ackley objective function. In this case, we will limit the number of random restarts to 30, chosen arbitrarily.
The complete example is listed below.
```python
# hill climbing search with random restarts of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]

# hill climbing with random restarts algorithm
def random_restarts(objective, bounds, n_iter, step_size, n_restarts):
    best, best_eval = None, 1e+10
    # enumerate restarts
    for n in range(n_restarts):
        # generate a random initial point for the search
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iter = 1000
# define the maximum step size
step_size = 0.05
# total number of random restarts
n_restarts = 30
# perform the hill climbing search
best, score = random_restarts(objective, bounds, n_iter, step_size, n_restarts)
print('Done!')
print('f(%s) = %f' % (best, score))
```
 
Running the example performs stochastic hill climbing with random restarts on the Ackley objective function. Each time an improved overall solution is discovered, it is reported, and the final best solution found by the search is summarized.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.
In this case, we can see three improvements during the search, and the best solution found is approximately f(0.002, 0.002), which evaluates to about 0.009, much better than a single run of the hill climbing algorithm.
```
Restart 0, best: f([-0.98102417 1.96555308]) = 5.38194
Restart 2, best: f([1.96522236 0.98120013]) = 5.38191
Restart 4, best: f([0.00223194 0.00258853]) = 0.00998
Done!
f([0.00223194 0.00258853]) = 0.009978
```
 
Next, let's look at how to implement the iterated local search algorithm.
Iterated Local Search Algorithm
The iterated local search algorithm is a modified version of stochastic hill climbing with random restarts. The important difference is that the starting point for each application of the stochastic hill climbing algorithm is a perturbed version of the best point found so far.
We can implement this algorithm using the random_restarts() function as a starting point. On each restart iteration, instead of a random starting point, we can generate a modified version of the best solution found so far. This can be achieved with a step size hyperparameter, just like the one used in the stochastic hill climber. In this case, a larger step size value will be used, given the need for larger perturbations in the search space.
```python
# generate an initial point as a perturbed version of the last best
start_pt = None
while start_pt is None or not in_bounds(start_pt, bounds):
    start_pt = best + randn(len(bounds)) * p_size
```
 
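To get a feel for why the perturbation size must be much larger than the hill-climbing step size, a quick sketch (illustrative values only, matching the 0.05 and 1.0 used in this tutorial) can compare the typical magnitude of Gaussian moves at the two scales:

```python
from numpy import absolute
from numpy.random import randn, seed

seed(1)
# draw many gaussian steps at each scale and compare typical sizes
small_steps = absolute(randn(10000) * 0.05)  # hill-climbing moves
large_steps = absolute(randn(10000) * 1.0)   # restart perturbations
print('mean |step| at step_size=0.05: %.4f' % small_steps.mean())
print('mean |step| at p_size=1.00:    %.4f' % large_steps.mean())
```

The restart perturbation is typically about 20 times larger, which is what lets it jump out of one local optimum's basin into another.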
Tying this together, the iterated_local_search() function is defined below.
```python
# iterated local search algorithm
def iterated_local_search(objective, bounds, n_iter, step_size, n_restarts, p_size):
    # define starting point
    best = None
    while best is None or not in_bounds(best, bounds):
        best = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate current best point
    best_eval = objective(best)
    # enumerate restarts
    for n in range(n_restarts):
        # generate an initial point as a perturbed version of the last best
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = best + randn(len(bounds)) * p_size
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]
```
 
We can then apply the algorithm to the Ackley objective function. In this case, we will use a larger step size value of 1.0 for the random restarts, chosen after a little trial and error.
The complete example is listed below.
```python
# iterated local search of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]

# iterated local search algorithm
def iterated_local_search(objective, bounds, n_iter, step_size, n_restarts, p_size):
    # define starting point
    best = None
    while best is None or not in_bounds(best, bounds):
        best = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate current best point
    best_eval = objective(best)
    # enumerate restarts
    for n in range(n_restarts):
        # generate an initial point as a perturbed version of the last best
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = best + randn(len(bounds)) * p_size
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iter = 1000
# define the maximum step size
s_size = 0.05
# total number of random restarts
n_restarts = 30
# perturbation step size
p_size = 1.0
# perform the iterated local search
best, score = iterated_local_search(objective, bounds, n_iter, s_size, n_restarts, p_size)
print('Done!')
print('f(%s) = %f' % (best, score))
```
 
Running the example performs iterated local search on the Ackley objective function.
Each time an improved overall solution is discovered, it is reported, and the final best solution found by the search is summarized at the end of the run.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.
In this case, we can see four improvements during the search, and the best solution found is two very small inputs close to zero, which evaluate to about 0.0003, better than either a single run of the hill climber or the hill climber with restarts.
```
Restart 0, best: f([-0.96775653 0.96853129]) = 3.57447
Restart 3, best: f([-4.50618519e-04 9.51020713e-01]) = 2.57996
Restart 5, best: f([ 0.00137423 -0.00047059]) = 0.00416
Restart 22, best: f([ 1.16431936e-04 -3.31358206e-06]) = 0.00033
Done!
f([ 1.16431936e-04 -3.31358206e-06]) = 0.000330
```
 
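The "results may vary" note above suggests comparing average outcomes over repeated runs. The sketch below (not part of the tutorial) reruns hill climbing with restarts and iterated local search from several seeds and averages the best scores; the reduced budgets (200 iterations, 10 restarts) and the number of seeds are arbitrary choices to keep the run short.

```python
from numpy import asarray, exp, sqrt, cos, e, pi, mean
from numpy.random import randn, rand, seed

# ackley objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

def in_bounds(point, bounds):
    for d in range(len(bounds)):
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

def random_point(bounds):
    return bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])

# silent hill climber from a given start point
def hillclimbing(objective, bounds, n_iter, step_size, start_pt):
    solution, solution_eval = start_pt, objective(start_pt)
    for _ in range(n_iter):
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        candidate_eval = objective(candidate)
        if candidate_eval <= solution_eval:
            solution, solution_eval = candidate, candidate_eval
    return solution, solution_eval

# restarts from fresh random points
def random_restarts(objective, bounds, n_iter, step_size, n_restarts):
    best, best_eval = None, float('inf')
    for _ in range(n_restarts):
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, random_point(bounds))
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
    return best, best_eval

# restarts from perturbed versions of the best point so far
def iterated_local_search(objective, bounds, n_iter, step_size, n_restarts, p_size):
    best = random_point(bounds)
    best_eval = objective(best)
    for _ in range(n_restarts):
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = best + randn(len(bounds)) * p_size
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
    return best, best_eval

bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
restart_scores, ils_scores = [], []
for s in range(3):
    seed(s)
    restart_scores.append(random_restarts(objective, bounds, 200, 0.05, 10)[1])
    seed(s)
    ils_scores.append(iterated_local_search(objective, bounds, 200, 0.05, 10, 1.0)[1])
print('restarts: mean best f = %.5f' % mean(restart_scores))
print('ils:      mean best f = %.5f' % mean(ils_scores))
```

Averaging over seeds gives a fairer picture of the two restart strategies than any single run.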