Cooperatively Coevolving Particle Swarms for Large Scale Optimization
Abstract
CCPSO2 is an improvement of CCPSO; it retains the latter's random grouping technique for handling interacting variables.
On complex multimodal optimization problems, CCPSO2 clearly outperforms state-of-the-art optimizers such as sep-CMA-ES, two existing PSO algorithms, and a cooperatively coevolving differential evolution algorithm, even though its performance on unimodal problems is less satisfactory.
Ⅰ. Introduction
PSO is an approximate algorithm for finding satisfactory solutions to optimization problems. However, it suffers from the curse of dimensionality, and algorithms that can solve high-dimensional optimization problems are rare.
Potter and De Jong proposed the CCEA framework (CC: cooperative coevolution), which decomposes the variables in a divide-and-conquer fashion: each group is optimized iteratively and the resulting sub-solutions are then recombined. Each subcomponent is one-dimensional, however, so the method performs poorly on nonseparable problems (those with many interacting variables).
Based on the CC model, Van den Bergh and Engelbrecht proposed two models, CPSO-Sk and CPSO-Hk; however, these were only applied to low-dimensional problems of 30-190 dimensions.
Yang et al. proposed a decomposition technique called random grouping, which gives interacting variables a better chance of being placed in the same group when updating.
An adaptive weighting scheme was also proposed for fine-tuning solutions.
CCPSO adopts both ① random grouping and ② adaptive weighting. It mainly targets high-dimensional nonseparable problems and scales up to 1000 dimensions.
The authors' study shows that, compared with ①, ② contributes almost nothing. Combining these findings with further changes yields CCPSO2.
Differences between CCPSO2 and CCPSO:
New model:
① Sample around the personal best and the neighborhood best using Cauchy and Gaussian distributions.
② Use a ring topology.
③ Use a new scheme to update the personal best and the global best.
④ Drop adaptive weighting.
⑤ Determine the size of each group dynamically at run time.
⑥ Compare CCPSO2 with sep-CMA-ES, two existing PSO algorithms, and a CC differential evolution algorithm.
⑦ Extend the benchmark functions up to 2000 dimensions.
Advantages:
① "Cauchy/Gaussian sampling" + "ring topology" improve search ability.
② No need to specify the group size manually; it is chosen adaptively.
Ⅱ. Cooperative Coevolution
CCGA
Potter and De Jong proposed CCGA, based on the GA, creating two successive models: CCGA-1 and CCGA-2.
CCGA-1 performs better than the GA on separable problems, but worse on nonseparable ones.
CCGA-2 performs better than the GA on both, but was tested only up to 30 dimensions.
FEP
FEP combined with the CC model scales to 1000 dimensions on standard benchmark functions, but performs extremely poorly on one of the nonseparable problems, fully exposing the flaw of Potter and De Jong's decomposition (one dimension per subcomponent)!
CPSO-Sk & CPSO-Hk
The whole set of variables is split into k parts, each of which is not necessarily one-dimensional.
Random Grouping of Variables
CPSO-Sk fixes a single decomposition for the whole run: the variables are split once into k groups of s one-dimensional variables each (n = k × s). This is problematic:
→ Two interacting variables may never end up in the same group.
Random grouping:
Regroup dynamically, e.g. once per cycle (still into k groups of s variables each). Then, assuming the per-cycle probability that two given interacting variables fall into the same group is 0.1 (e.g. k = 10 groups), the probability that they share a group at least once over 50 cycles is:
P(X ≥ 1) = 1 − P(X = 0) = 1 − C(50, 0) · (0.1)^0 · (0.9)^50 ≈ 0.9948
This greatly improves the reliability of solving nonseparable problems!
Ⅲ. Particle Swarm Optimization
The basic PSO algorithm
The CPSO algorithms: CPSO-Sk and CPSO-Hk
CPSO-Sk: pure divide-and-conquer. One n-dimensional swarm → k swarms of s dimensions each (k × s = n).
CPSO-Hk: combines CPSO-Sk with the basic PSO, alternating between them and exchanging the global best.
Ⅳ. NEW CCPSO2 ALGORITHM
A
(1) Gaussian-based PSO:
Using the Gaussian distribution alone → limited search space
Use a combined Cauchy-Gaussian sampling scheme instead.
(2) lbest PSO based on a ring topology: each particle only knows its neighborhood's information → slower convergence, which helps on multimodal optimization problems.
B
PSO based on Cauchy and Gaussian sampling:
Gaussian distribution → light tails, narrower search (exploitation)
Cauchy distribution → heavy tails, wider search (exploration)
C
Dynamically changing group size
Pseudocode of CCPSO2:
At initialization, a value of k is chosen by hand (s = n/k).
After each iteration, if the global best was improved, keep the current k and continue with it. Otherwise, choose another k from the set S for the next iteration; the more often a given k has succeeded in improving the global best, the higher its probability of being chosen when a new k is needed. (The set S itself must be specified by the user.)
In theory, the more frequently random grouping is applied, the better for solving high-dimensional nonseparable problems.
interacting variables: variables that depend on one another (the source of nonseparability)
D
CCPSO2
Ⅴ. EXPERIMENTAL STUDIES
...
My own implementation of CCPSO2 is released here:
/******************************************************/
/**************   CCPSO2 algorithm  *******************/
/**************  concurrency version  *****************/
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <math.h>
#include <time.h>
#include <sys/time.h>
#include <unistd.h>
#include "test_funcs.h"
#include "./CEC2010/Benchmarks.h"
#include "./CEC2010/Header.h"
#define randf (double)rand()/RAND_MAX
#include "boost/random.hpp"
#include "boost/random/uniform_real.hpp"
#include "boost/random/cauchy_distribution.hpp"
#include "boost/random/normal_distribution.hpp"
#include <iostream>
#include <fstream>
#include <sstream>
using namespace boost;
using namespace std;

// Set up the generator and the Cauchy / Gaussian distributions
kreutzer1986 engine(randf);
cauchy_distribution<double> distribution1(0, 1);
normal_distribution<double> distribution2(0, 1);
variate_generator<kreutzer1986&, cauchy_distribution<double> > CAUCHY(engine, distribution1);
variate_generator<kreutzer1986&, normal_distribution<double> > GAUSSIAN(engine, distribution2);

/******************* Variables **********************/
const int MAX_DIM = 2000;
const double Xmax = 500;
const int n = 1000;               // problem dimensionality
const int sw = 30;                // swarm size
int k, s;                         // number of groups, group size (n = k * s)
int func_num;
bool improved;
const int SIZE = 10;
const int iteration_time = 500;
int S[SIZE] = {5, 10, 20, 25, 50, 100, 200, 250, 500, 1000};

// X -> current position of each particle
// Y -> pbest position of each particle
// N -> lbest position of each particle
// G -> gbest position of the n-dimensional swarm
double X[sw][n];
double Y[sw][n];
double N[sw][n];
double G[n];

// permutation array used by shuffle
int list[MAX_DIM];

// functions:
void init();
int select();
void shuffle();
void fly();
void updatePOS();
void updateLbest(int, int);
void check(int par_inde, int begin_col, int end_col);
double func(int func_number, int dim, double *x);
double Simple(int dim, double *x);
void assign1(double *, double *, int, int);
void assign2(double[][n], double *, int, int, int);
void assign3(double[][n], double[][n], int, int, int);
void assign4(double[][n], double[][n], int, int, int, int);

/****************** Function implementation ********************/
void init() {
    for (int i = 0; i < n; i++)
        list[i] = i;
    double interval = 2 * Xmax;
    for (int i = 0; i < sw; i++) {
        for (int j = 0; j < n; j++) {
            X[i][j] = -Xmax + interval * randf;
            N[i][j] = Y[i][j] = X[i][j];
        }
    }
    assign2(X, G, 0, 0, n - 1);
}

// valid inde range: 0 ~ SIZE-1
int select() {
    int inde = int(rand() % SIZE);
    return S[inde];
}

// clamp particle par_inde to the search bounds [-Xmax, Xmax]
void check(int par_inde, int begin_col, int end_col) {
    for (int j = begin_col; j <= end_col; j++) {
        if (X[par_inde][j] > Xmax)  X[par_inde][j] = Xmax;
        if (X[par_inde][j] < -Xmax) X[par_inde][j] = -Xmax;
    }
}

// Schwefel-style test function (bug fix: index 0 .. dim-1, not 1 .. dim)
double Simple(int dim, double *x) {
    double Sum = 0;
    for (int i = 0; i < dim; i++)
        Sum += -1 * x[i] * sin(sqrt(fabs(x[i])));
    return Sum;
}

double func(int func_number, int dim, double *x) {
    switch (func_number) {
    case 1:
        return Simple(dim, x);
    default:
        printf("Error : Function number out of range\n");
        exit(0);
    }
}

// randomly permute the columns of X, Y, N and G with the same
// permutation (Fisher-Yates shuffle of the index list)
double tmp_store[sw][n];
void shuffle() {
    int i, j;
    int fr = 0, to = n - 1;
    for (i = fr; i <= to; i++) {
        int select = int(i + (to - i + 1) * randf);
        int tmp = list[select];
        list[select] = list[i];
        list[i] = tmp;
    }
    // X
    for (i = 0; i < sw; i++)
        for (j = 0; j < n; j++)
            tmp_store[i][j] = X[i][list[j]];
    for (i = 0; i < sw; i++)
        for (j = 0; j < n; j++)
            X[i][j] = tmp_store[i][j];
    // Y
    for (i = 0; i < sw; i++)
        for (j = 0; j < n; j++)
            tmp_store[i][j] = Y[i][list[j]];
    for (i = 0; i < sw; i++)
        for (j = 0; j < n; j++)
            Y[i][j] = tmp_store[i][j];
    // G
    for (j = 0; j < n; j++)
        tmp_store[0][j] = G[list[j]];
    for (j = 0; j < n; j++)
        G[j] = tmp_store[0][j];
    // N
    for (i = 0; i < sw; i++)
        for (j = 0; j < n; j++)
            tmp_store[i][j] = N[i][list[j]];
    for (i = 0; i < sw; i++)
        for (j = 0; j < n; j++)
            N[i][j] = tmp_store[i][j];
}

// both 1-dimension arrays
void assign1(double *source, double *dest, int fr, int to) {
    for (int i = fr; i <= to; i++)
        dest[i] = source[i];
}

// 2-dimension source row k into 1-dimension dest
void assign2(double source[][n], double *dest, int k, int fr, int to) {
    for (int i = fr; i <= to; i++)
        dest[i] = source[k][i];
}

// both 2-dimension arrays, same row k
void assign3(double source[][n], double dest[][n], int k, int fr, int to) {
    for (int i = fr; i <= to; i++)
        dest[k][i] = source[k][i];
}

// 2-dimension arrays, different rows
void assign4(double source[][n], double dest[][n], int source_row, int dest_row, int fr, int to) {
    for (int i = fr; i <= to; i++)
        dest[dest_row][i] = source[source_row][i];
}

// particles fly
// void fly(Benchmarks *fp) {
void fly() {
    int fr, to;
    double tmp1[n];
    double tmp2[n];
    for (int j = 1; j <= k; j++) {
        // update pbests and record the best particle of the jth swarm
        int inde = -1;
        double min_f = func(func_num, n, G);
        // double min_f = fp->compute(G);
        fr = (j - 1) * s;
        to = fr + s - 1;
        for (int i = 0; i < sw; i++) {
            assign1(G, tmp1, 0, n - 1);
            assign1(G, tmp2, 0, n - 1);
            assign2(X, tmp1, i, fr, to);   // bug fix: row index is i, not k
            assign2(Y, tmp2, i, fr, to);
            double f1 = func(func_num, n, tmp1);
            double f2 = func(func_num, n, tmp2);
            // double f1 = fp->compute(tmp1);
            // double f2 = fp->compute(tmp2);
            if (f1 < f2) {
                assign3(X, Y, i, fr, to);  // new position beats pbest: update pbest
                if (f1 < min_f) {
                    min_f = f1;
                    inde = i;
                    improved = 1;
                }
            }
        }
        // update the gbest of the swarm
        if (inde != -1) {
            assign2(X, G, inde, fr, to);
            improved = 1;
        }
        // update the lbest of the swarm
        // updateLbest(fr, to, fp);
        updateLbest(fr, to);
    }
}

void updatePOS() {
    int fr, to;
    for (int j = 1; j <= k; j++) {
        for (int i = 0; i < sw; i++) {
            fr = (j - 1) * s;
            to = fr + s - 1;
            // Cauchy: sample around the pbest
            if (randf <= 0.5)
                for (int t = fr; t <= to; t++)
                    X[i][t] = Y[i][t] + CAUCHY() * fabs(Y[i][t] - N[i][t]);
            // Gaussian: sample around the lbest
            else
                for (int t = fr; t <= to; t++)
                    X[i][t] = N[i][t] + GAUSSIAN() * fabs(Y[i][t] - N[i][t]);
        }
    }
    for (int j = 0; j < sw; j++)
        check(j, 0, n - 1);
}

// update the lbest of the particles (ring topology: left / self / right)
// void updateLbest(int fr, int to, Benchmarks *fp) {
void updateLbest(int fr, int to) {
    int l_inde, m_inde, r_inde;
    int inde = -1;
    double tmpL[n];
    double tmpM[n];
    double tmpR[n];
    double fL, fM, fR;
    for (int i = 0; i < sw; i++) {
        m_inde = i;
        if (m_inde == 0) {
            l_inde = sw - 1;
            r_inde = m_inde + 1;
        } else if (m_inde == sw - 1) {
            l_inde = m_inde - 1;
            r_inde = 0;
        } else {
            l_inde = m_inde - 1;
            r_inde = m_inde + 1;
        }
        assign1(G, tmpL, 0, n - 1);
        assign1(G, tmpM, 0, n - 1);
        assign1(G, tmpR, 0, n - 1);
        assign2(Y, tmpL, l_inde, fr, to);
        assign2(Y, tmpM, m_inde, fr, to);
        assign2(Y, tmpR, r_inde, fr, to);
        fL = func(func_num, n, tmpL);
        fM = func(func_num, n, tmpM);
        fR = func(func_num, n, tmpR);
        // fL = fp->compute(tmpL);
        // fM = fp->compute(tmpM);
        // fR = fp->compute(tmpR);
        // pick the best (smallest) of the three neighbours
        if (fL < fM) {
            inde = l_inde;
            if (fL > fR) inde = r_inde;
        } else {
            inde = m_inde;
            if (fM > fR) inde = r_inde;
        }
        assign4(Y, N, inde, m_inde, fr, to);
    }
}

/*
int main() {
    Benchmarks *fp = NULL;
    generateFuncObj(1);
    srand((unsigned)time(NULL));
    init();
    improved = false;
    func_num = 1;
    for (int t = 0; t < iteration_time; t++) {
        if (improved == false) {
            s = select();
            k = n / s;
        }
        shuffle();
        improved = false;
        fly(fp);
        updatePOS();
        printf("%lg\n", func(1, n, G));
        // printf("%lg\n", fp->compute(G));
    }
    return 0;
}
*/

int main() {
    func_num = 1;
    srand((unsigned)time(NULL));
    init();
    improved = false;
    for (int t = 0; t < iteration_time; t++) {
        if (improved == false) {
            s = select();   // no improvement: re-sample the group size
            k = n / s;
        }
        shuffle();          // random grouping via column permutation
        improved = false;
        fly();
        updatePOS();
        printf("%lg\n", func(1, n, G));
    }
    return 0;
}