The SMO Algorithm: Platt's Original Algorithm and Its Optimized Variants

Platt's paper: [PDF] Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines | Semantic Scholar

Algorithm improvement: [PDF] Improvements to Platt's SMO Algorithm for SVM Classifier Design | Semantic Scholar

Code (my implementation of Platt's SMO, the improved SMO algorithm, and libsvm): yanzhi0922/SVM: 2024.07.12 (github.com)

Datasets: LIBSVM Data: Classification, Regression, and Multi-label (ntu.edu.tw)
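The LIBSVM site serves data in the sparse "label index:value" text format. A minimal loading sketch, assuming scikit-learn is available; the file name a1a is just one of the datasets hosted there, adjust it to whatever you download:

from sklearn.datasets import load_svmlight_file

# Each line of a LIBSVM-format file looks like: "<label> <index>:<value> <index>:<value> ..."
# load_svmlight_file returns a sparse feature matrix and a dense label vector.
X, y = load_svmlight_file("a1a")   # path to a downloaded dataset file (assumed)
print(X.shape, y.shape)            # sparse feature matrix and label vector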

SMO pseudocode from Platt's original paper:

target = desired output vector
point = training point matrix

procedure takeStep(i1,i2)
    if (i1 == i2) return 0
    alph1 = Lagrange multiplier for i1
    y1 = target[i1]
    E1 = SVM output on point[i1] - y1 (check in error cache)
    s = y1*y2
    Compute L, H via equations (13) and (14)
    if (L == H)
        return 0
    k11 = kernel(point[i1],point[i1])
    k12 = kernel(point[i1],point[i2])
    k22 = kernel(point[i2],point[i2])
    eta = k11+k22-2*k12
    if (eta > 0)
    {
        a2 = alph2 + y2*(E1-E2)/eta
        if (a2 < L) a2 = L
        else if (a2 > H) a2 = H
    }
    else
    {
        Lobj = objective function at a2=L
        Hobj = objective function at a2=H
        if (Lobj < Hobj-eps)
            a2 = L
        else if (Lobj > Hobj+eps)
            a2 = H
        else
            a2 = alph2
    }
    if (|a2-alph2| < eps*(a2+alph2+eps))
        return 0
    a1 = alph1+s*(alph2-a2)
    Update threshold to reflect change in Lagrange multipliers
    Update weight vector to reflect change in a1 & a2, if SVM is linear
    Update error cache using new Lagrange multipliers
    Store a1 in the alpha array
    Store a2 in the alpha array
    return 1
endprocedure

procedure examineExample(i2)
    y2 = target[i2]
    alph2 = Lagrange multiplier for i2
    E2 = SVM output on point[i2] - y2 (check in error cache)
    r2 = E2*y2
    if ((r2 < -tol && alph2 < C) || (r2 > tol && alph2 > 0))
    {
        if (number of non-zero & non-C alpha > 1)
        {
            i1 = result of second choice heuristic (section 2.2)
            if takeStep(i1,i2)
                return 1
        }
        loop over all non-zero and non-C alpha, starting at a random point
        {
            i1 = identity of current alpha
            if takeStep(i1,i2)
                return 1
        }
        loop over all possible i1, starting at a random point
        {
            i1 = loop variable
            if takeStep(i1,i2)
                return 1
        }
    }
    return 0
endprocedure

main routine:
    numChanged = 0;
    examineAll = 1;
    while (numChanged > 0 | examineAll)
    {
        numChanged = 0;
        if (examineAll)
            loop I over all training examples
                numChanged += examineExample(I)
        else
            loop I over examples where alpha is not 0 & not C
                numChanged += examineExample(I)
        if (examineAll == 1)
            examineAll = 0
        else if (numChanged == 0)
            examineAll = 1
    }
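To make the core update concrete, here is a minimal NumPy sketch of takeStep for a linear kernel. It follows the pseudocode above but is not the code from the repository linked earlier: X, y, alpha, b, C and eps are names I introduce here, and the eta <= 0 branch (evaluating the objective at L and H) is skipped for brevity.

import numpy as np

def svm_output(X, y, alpha, b, x):
    # f(x) = sum_i alpha_i * y_i * K(x_i, x) - b, with a linear kernel K(u, v) = u.v
    return float(np.sum(alpha * y * (X @ x))) - b

def take_step(X, y, alpha, b, C, i1, i2, eps=1e-3):
    # Jointly optimize alpha[i1] and alpha[i2]; returns (changed, new_b).
    if i1 == i2:
        return False, b
    y1, y2 = y[i1], y[i2]
    alph1, alph2 = alpha[i1], alpha[i2]
    E1 = svm_output(X, y, alpha, b, X[i1]) - y1
    E2 = svm_output(X, y, alpha, b, X[i2]) - y2
    s = y1 * y2

    # Bounds L, H on the new alpha[i2] (equations (13) and (14) in the paper)
    if y1 != y2:
        L, H = max(0.0, alph2 - alph1), min(C, C + alph2 - alph1)
    else:
        L, H = max(0.0, alph2 + alph1 - C), min(C, alph2 + alph1)
    if L == H:
        return False, b

    k11, k12, k22 = X[i1] @ X[i1], X[i1] @ X[i2], X[i2] @ X[i2]
    eta = k11 + k22 - 2.0 * k12
    if eta <= 0:
        return False, b   # the eta <= 0 case of the pseudocode is omitted in this sketch

    a2 = float(np.clip(alph2 + y2 * (E1 - E2) / eta, L, H))
    if abs(a2 - alph2) < eps * (a2 + alph2 + eps):
        return False, b
    a1 = alph1 + s * (alph2 - a2)

    # Threshold update: the b1/b2 rule from the paper, using the old errors E1, E2
    b1 = E1 + y1 * (a1 - alph1) * k11 + y2 * (a2 - alph2) * k12 + b
    b2 = E2 + y1 * (a1 - alph1) * k12 + y2 * (a2 - alph2) * k22 + b
    if 0 < a1 < C:
        new_b = b1
    elif 0 < a2 < C:
        new_b = b2
    else:
        new_b = (b1 + b2) / 2.0

    alpha[i1], alpha[i2] = a1, a2
    return True, new_b

The examineExample and main-routine loops from the pseudocode would then call take_step on the heuristically chosen pairs until no multiplier changes.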

The algorithms above share the same basic principle and produce the same results; the improvements are improvements in time complexity, and libsvm has the best time complexity of the three.
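Much of the speed-up in the second paper comes from how the pair of multipliers is chosen: instead of Platt's single-threshold heuristic with an error cache, it maintains the bounds b_up and b_low and selects a maximally violating pair. A rough sketch of that selection rule, with F denoting the vector of SVM outputs without the threshold minus the targets, and with variable names of my own choosing:

import numpy as np

def select_violating_pair(F, alpha, y, C, tol=1e-3):
    # F[i] = sum_j alpha_j * y_j * K(x_i, x_j) - y_i  (no threshold term)
    # I_up: alpha[i] can still increase the output; I_low: it can still decrease it.
    I_up  = np.where(((alpha < C) & (y == 1)) | ((alpha > 0) & (y == -1)))[0]
    I_low = np.where(((alpha < C) & (y == -1)) | ((alpha > 0) & (y == 1)))[0]
    if len(I_up) == 0 or len(I_low) == 0:
        return None
    i_up  = I_up[np.argmin(F[I_up])]    # attains b_up  = min F over I_up
    i_low = I_low[np.argmax(F[I_low])]  # attains b_low = max F over I_low
    # Optimality holds when b_low <= b_up + 2*tol; otherwise (i_up, i_low) violates KKT the most.
    if F[i_low] > F[i_up] + 2 * tol:
        return i_up, i_low
    return None

Repeatedly running the two-multiplier update on such pairs until select_violating_pair returns None is, roughly, the improved working-set selection of that paper; libsvm builds on the same style of selection and adds kernel caching and shrinking on top.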
