Imperial Algorithm (IA): A Novel Metaheuristics Inspired From The War
PROJECT DESCRIPTION:
I have been working on producing metaheuristic-based intelligent algorithms, which belong to the fields of computational intelligence, soft computing, and artificial intelligence.
"Imperial Algorithm: A New Metaheuristics Inspired from The War" is the one that has attracted interest from some researchers and practitioners.
In our experiments, this algorithm outperforms several other well-known algorithms.
SHARED DATA/PROGRAMS/PAPER:
To support open science and disclosure of the project, I have decided to publish the code below.
For a further description of the algorithm, please read the journal paper directly (currently under review). Below is the Python code.
# PYTHON CODE
import time
import numpy as np
np.random.seed(0)
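# NOTE: The objective function fun() and the constant NUM_OF_PARAMS are not defined
# in this listing; they are problem-specific. The placeholder below is a minimal
# sketch (my assumption, not part of the original code) so the script runs end to
# end: the sphere function f(x) = sum_j x_j^2, evaluated row-wise. Replace both
# with your own problem definition.
def fun(x):
    # x has shape (n_individuals, D); returns one fitness value per row
    return np.sum(np.square(x), axis=1)
NUM_OF_PARAMS = 2  # placeholder dimensionality; set this to your problem size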
# IA parameters ----------> CHANGEABLE
N = 20          # population size
max_iter = 100  # number of iterations
runmax = 5      # maximum number of runs (for multiple runs)
M = int(N / 5)  # number of imperials; each imperial is suggested to have 5 individuals
D = NUM_OF_PARAMS  # dimension, i.e., the number of decision variables
Lb = -2 * np.ones(D)  # lower bounds
Ub = 2 * np.ones(D)   # upper bounds
# Initialization
w = 1     # initial inertia
wmax = 1  # maximum inertia weight
wmin = 0  # minimum inertia weight
convIA = []  # stores the average fitness for the convergence history
pos = np.zeros((N, D))  # positions of the swarm
f_ans = np.zeros(runmax)         # best value of each run
f_gbest = np.zeros((runmax, D))  # best decision variables of each run
comp_timeIA = []  # records the computational time per run
r = 0  # desertion-phase flag: triggers a restart when the search is stuck
# Updating equation: move Xi toward Xj with a distance-dependent attractiveness beta
def update(Xi, Xj):
    dist = np.sqrt(np.sum(np.square(Xj - Xi)))  # Euclidean distance between Xi and Xj
    beta = np.pi ** (-0.01 * dist)  # attractiveness; 0.01 is a constant value
    Xnew = Xi + beta * (Xj - Xi)
    return Xnew
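# Illustration (not part of the original listing): update(np.zeros(2), np.ones(2))
# has dist = sqrt(2) and beta = pi**(-0.01*sqrt(2)) ~ 0.984, so Xi moves about 98%
# of the way toward Xj; beta shrinks as the distance grows, so far-away targets
# pull more weakly than nearby ones.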
# MAIN CODE
for run in range(runmax):  # multiple runs
    # store values used to produce the convergence history
    fminval1 = []
    faverage1 = []
    it = 1  # iteration counter
    start = time.time()  # start recording the computational time
    # initialization with DRG for pos[] and vel[]: imperial k samples the k-th sub-range of [0, 1)
    for k in range(M):
        for i in range(int(N * k / M), int((k + 1) / M * N)):
            for j in range(D):
                pos[i, j] = Lb[j] + (Ub[j] - Lb[j]) * np.random.uniform(k / M, (k + 1) / M)
    vel = 0 * pos  # initialize the velocities to 0
    out = fun(pos)  # compute the objective function F() values
    while it <= max_iter:
        # update P_best/warrior and G_best/king
        if it == 1 or r == 1:  # r == 1 means we arrive here from the restart strategy
            r = 0
            # initialize the best (min) pbest values
            pbestval = np.copy(out)
            pbest = np.copy(pos)
            gbest = np.zeros((M, D))
            gbestval = np.zeros(M)
            for k in range(M):
                # update the gbest values of each imperial
                index = np.argmin(pbestval[int(N * k / M):int((k + 1) / M * N)]) + int(N * k / M)  # ---> CHANGEABLE. If maximization use: np.argmax
                gbest[k] = pbest[index]        # store the position of the king
                gbestval[k] = pbestval[index]  # store the fitness value of the king
        else:  # if the current iteration is worse than previous ones, keep the previous bests and update only the improved individuals
            # update the best (min) pbest values
            ar = np.where(out < pbestval)
            pbestval[ar] = out[ar]
            pbest[ar] = pos[ar]
            for k in range(M):
                # update the gbest values of each imperial
                index = np.argmin(pbestval[int(N * k / M):int((k + 1) / M * N)]) + int(N * k / M)  # ---> CHANGEABLE. If maximization use: np.argmax
                gbest[k] = pbest[index]
                gbestval[k] = pbestval[index]
        # calculate the inertia weight w (linearly decreasing)
        w = wmax - (it / max_iter) * (wmax - wmin)
        # Strategize Phase: update velocity and position, moving each warrior closer to its king
        for k in range(M):
            for i in range(int(N * k / M), int((k + 1) / M * N)):
                for j in range(D):
                    # update vel[] here
                    vel[i, j] = w * vel[i, j] + np.random.rand() * (gbest[k][j] - pos[i, j])
                    # update the position
                    pos[i, j] = vel[i, j] + pos[i, j]
                # pick one warrior per imperial, place it at equal coordinates to spy, and also check the bounds
                if i == int(N * k / M) and out[i] != gbestval[k]:
                    pos[i] = np.clip(np.random.choice(pos[i]), Lb, Ub)
                pos[i] = np.clip(pos[i], Lb, Ub)
                out[i] = fun(np.expand_dims(pos[i], axis=0))
                # Spy and Attack Phase: each individual in an imperial moves closer to another, randomly chosen stronger imperial
                m = np.random.choice(np.delete(np.arange(M), k))  # select another imperium
                m2 = np.random.choice(range(int(N * m / M), int((m + 1) / M * N)))  # select a warrior
                # if imperium m is stronger/more attractive (the lower the better when minimizing),
                # use the offensive strategy, where the warrior spies/attacks; otherwise apply the defensive strategy (do nothing)
                if out[i] == gbestval[k]:  # individual i is the king of imperial k
                    if out[i] > gbestval[m]:  # CHANGEABLE: '>' is for the min case and '<' is for the max case
                        # find Xnew from Xi and Xj
                        pos[i] = update(pos[i], gbest[m])
                        # check that Xnew is within the bounds
                        pos[i] = np.clip(pos[i], Lb, Ub)
                else:
                    if out[i] > pbestval[m2]:  # CHANGEABLE: '>' is for the min case and '<' is for the max case
                        # find Xnew from Xi and Xj
                        pos[i] = update(pos[i], pbest[m2])
                        # check that Xnew is within the bounds
                        pos[i] = np.clip(pos[i], Lb, Ub)
        # compute the objective function
        out = fun(pos)
        # store the minimum F() value for the convergence curve
        fminval1.append(np.min(pbestval))  # ---> CHANGEABLE. If maximization use: np.max
        faverage1.append(np.mean(out))
        # Desertion Phase: apply the restart strategy to all particles except the best king
        if it > 1:
            if faverage1[-2] == faverage1[-1]:  # the average fitness has stagnated
                pos = Lb + (Ub - Lb) * np.random.random((N, D))
                r = 1
                # keep the best king among all imperials
                index = np.argmin(pbestval)  # ---> CHANGEABLE. If maximization use: np.argmax
                pos[index] = pbest[index]
                out = fun(pos)
        it = it + 1
    # record the computational time of this run
    end = time.time()
    comp_timeIA.append(end - start)
    # end of this run: store the results (outer loop over multiple runs)
    convIA.append(fminval1)  # store the convergence history of each run
    f_ans[run] = fminval1[-1]
    f_gbest[run] = pbest[np.argmin(pbestval)]  # ---> CHANGEABLE. If maximization use: np.argmax
IA = f_ans
convIA = np.mean(convIA, axis=0)  # mean fitness of the convergence history over all runs
bfun = np.min(f_ans)  # best value over all runs ---> CHANGEABLE. If maximization use: np.max
brun = np.argmin(f_ans)  # index of the best value over all runs ---> CHANGEABLE. If maximization use: np.argmax
bestX = f_gbest[brun]  # decision variables of the best value over all runs
# Print all results. NOTE: "Best" and "Worst" depend on whether it is a maximization or minimization problem
print("Best:", bfun, "Mean:", np.mean(f_ans), "Worst:", np.max(f_ans), "Std:", np.std(f_ans), "\nBest Solution:", bestX, "\nAvg. Time (s):", np.mean(comp_timeIA))
General War Behavior
In the past, several imperia existed at the same time.
In each imperium, there is a king and warriors. There is also a disobedient warrior who does not follow the king's instructions.
Before a war starts, the warriors go to the king to strategize and decide which imperium should be attacked.
Then they march on the other imperia to strike, and war breaks out in the battlespace.
After a certain time, the war stops, and the warriors desert and move to other places.
They then re-strategize and strike again, and so on.
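Mapped onto the code above: each imperium is one of the M sub-populations, the king is its best solution gbest[k], and the warriors are the remaining individuals with their personal bests pbest. The Strategize Phase is the inertia-weighted velocity update that pulls each warrior toward its king; the Spy and Attack Phase is the update() move toward a stronger imperium's king or warrior; and desertion is the restart strategy that reinitializes all positions except the best king once the average fitness stagnates. The disobedient warrior plausibly corresponds to the one warrior per imperium that is relocated to equal coordinates to spy.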