What does "iterative method optimization" mean in Chinese?

iterative method optimization, explained
迭代法最優化 (iterative-method optimization)
  • iterative : 迭代的 (iterative)
  • method : n. 1. 方法,方式;順序 (method, way; order). 2. (思想、言談上的)條理,規律,秩序 (orderliness, system in thought or speech). 3. 【生物學】分類法 (biology: taxonomy). 4. 〈M 〉【戲劇】... (theatre: the Method ...)
  • optimization : n. 最佳化,最優化 (optimization).
  1. Secondly, the method applies a linear iterative procedure in the calculation, with SVD as the main tool, thus avoiding complex nonlinear optimization processes.

    The implementation is a stepwise linear iterative computation built on singular value decomposition, which avoids the complex nonlinear optimization steps of traditional projective reconstruction methods.
  2. An iterative particle swarm algorithm is proposed for the robust optimization problem of batch processes without state-independent and end-point constraints; it combines the iterative method with the particle swarm optimization algorithm.

    Abstract: For the robust optimization problem of batch processes without state-independent constraints or terminal constraints, the iterative method is combined with the particle swarm optimization algorithm and an iterative particle swarm algorithm is proposed.
  3. Conventional clustering-criterion-based algorithms are a kind of local search method that uses an iterative hill-climbing technique to find an optimal solution, and they have two severe defects: sensitivity to the initial data and a tendency to get trapped in local minima.

    Conventional clustering algorithms based on clustering criteria are in essence local search algorithms; they use an iterative hill-climbing technique to look for the optimal solution, and therefore suffer from the fatal drawbacks of being sensitive to initialization and easily falling into local minima. (A minimal sketch of such an iterative clustering loop appears after the examples.)
  4. Among the computational methods for the TPBVP, in order to reduce the difficulties involved in solving a TPBVP via adjoint variables, we discuss a direct method in which the state and control variables are indirectly parameterized. The method employs a recently developed direct optimization technique that uses a piecewise polynomial representation for the state and control variables, thus converting the optimal control problem into a nonlinear programming problem that can be solved numerically; this makes the initial iterative variables easier to determine.

    Among the numerical solution methods, in order to reduce the difficulties that the adjoint variables bring to solving the two-point boundary value problem, we mainly discuss a direct method that parameterizes the state and control variables. This method uses a recently developed direct optimization technique that represents the state and control variables with piecewise polynomials, so that the optimal control problem is converted into a nonlinear programming problem that can be solved numerically, which makes the initial iteration values much easier to choose.
  5. The solution methods for support vector machines, including the quadratic programming method, the chunking method, the decomposition method, the sequential minimal optimization method, the iterative solution method based on the Lagrange function known as the Lagrangian support vector machine, and the Newton method based on a smoothing technique, are studied systematically.

    These mainly include the quadratic programming solution method for support vector machines, the chunking method, the decomposition method, the sequential minimal optimization method, the iterative solution method based on the Lagrange function (the Lagrangian support vector machine), and the Newton solution method based on smoothing.
  6. In image reconstruction based on unconstrained optimization, the variable metric method, the steepest descent method, and the conjugate gradient method were applied to image reconstruction to improve iterative efficiency and reconstruction quality, and their strengths and weaknesses were analyzed.

    Abstract: In image reconstruction formulated as an unconstrained optimization problem, in order to improve iterative efficiency and the quality of the reconstructed image, the variable metric method is for the first time applied to image reconstruction.
  7. A dynamic approach to the minimization subproblem in the augmented Lagrange multiplier (ALM) method is discussed, and a neural network iterative algorithm is then proposed for general constrained nonlinear optimization.

    Although solving with the augmented Lagrange multiplier method avoids the drawback of the penalty parameter growing without bound, it also introduces a subproblem that is difficult to solve.
  8. At present, constrained optimization methods may be classified into two classes. One is the search method, which first determines whether the current point is optimal; if it is not, a search direction is chosen and, along that direction, the next iterate is found so that the objective function or a merit function decreases.

    Current constrained optimization algorithms can be divided into two broad classes. One class consists of search algorithms: such an algorithm first checks whether the current point is optimal, and if not, it determines a search direction and then finds a point along that direction at which the objective function or a merit function decreases. These algorithms are generally descent algorithms, such as the feasible direction method and the constrained variable metric method. (A minimal sketch of this descent-type search loop appears after the examples.)
  9. However, PSO converges slowly in the later iterative phase and easily falls into local optima. At present, scholars improve the basic PSO mainly in three ways: discretizing the algorithm, increasing the convergence speed, and enhancing particle diversity. In this paper, I put forward two methods aimed at the problem of obtaining a local rather than the global best result, and I modify the basic PSO using the last of these approaches. Some scholars have put forward repeated initialization, so I select the best result obtained after running several times as a parameter of the update formula. First, the particles are placed into several small regions, ensuring that every region holds at least one particle; second, each region's particles have a probability of transferring to other regions. Although this increases the running time, it enhances particle diversity and decreases the probability of converging far from the global best result. NERMS (Network Educational Resource Management System) is one of the research projects in the science and technology development plan of Jilin Province. The aim of NERMS is to organize and manage twelve kinds of network educational resources effectively, so that people can share and obtain them easily and efficiently and the development of network education is accelerated.

    However, the particle swarm algorithm still has the following shortcomings. First, in multimodal cases the swarm may miss the global optimum, move away from the region of the optimal solution, and finally obtain only a local optimum. Second, when the algorithm converges, all particles swarm toward the best solution and become alike, so the diversity of solutions among particles is lost, convergence slows markedly in the later stage, and once a certain precision is reached the algorithm can no longer improve. This paper proposes two improvements to the original particle swarm algorithm: (1) after the algorithm has iterated for a certain number of generations, the global best solution found so far (called the stage-best solution in this paper) is used as an additional parameter of the velocity update formula for the remaining iterations; (2) in every iteration, each particle other than the best one has a certain probability of "mutating" into a region more than one step away, with the mutating particle randomly generating a step length in each dimension. (A minimal PSO update sketch appears after the examples.)
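
The "iterative hill-climbing" clustering criticized in example 3 can be made concrete with a Lloyd-style k-means loop. This is only a minimal sketch, not code from the cited work: the synthetic data, the value of k, and the random seeds are illustrative assumptions, and re-running it with different seeds can give different final clusterings, which is exactly the initialization sensitivity the example describes.

```python
# Minimal Lloyd-style k-means as one concrete instance of iterative
# hill-climbing clustering (illustrative sketch; all data and parameters assumed).
import numpy as np

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc, 0.3, size=(30, 2))       # three synthetic blobs
                  for loc in ([0, 0], [3, 0], [0, 3])])

def kmeans(x, k=3, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), size=k, replace=False)]    # random initial centers
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        labels = np.argmin(((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # update step: each center moves to the mean of its assigned points
        new_centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):                # converged to a (possibly local) optimum
            break
        centers = new_centers
    return centers, labels

centers, labels = kmeans(data)
print(np.round(centers, 2))    # roughly the three blob centers for a favourable seed
```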
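
Example 8 describes the generic search-method loop: test whether the current point is optimal, pick a search direction, and decrease the objective (or a merit function) along it. The following is a minimal sketch of that loop for an unconstrained toy objective only; the quadratic function, the gradient-norm tolerance, and the backtracking constants are illustrative assumptions, and the constrained variants named in the text (e.g. the feasible direction method) would add a feasibility check at each step.

```python
# Minimal descent-type search loop (illustrative sketch, unconstrained case only).
import numpy as np

def f(x):
    return (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2   # toy quadratic objective (assumed)

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)])

def descent_search(x0, tol=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:                      # step 1: is the current point (near) optimal?
            break
        d = -g                                           # step 2: choose a descent (search) direction
        t = 1.0
        # step 3: backtrack until the objective really decreases along the direction
        while f(x + t * d) > f(x) - 1e-4 * t * (g @ g):
            t *= 0.5
        x = x + t * d                                    # next iterate along the search direction
    return x

print(descent_search([5.0, 5.0]))                        # approaches the minimizer (1, -2)
```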
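
Examples 2 and 9 both rely on the standard particle swarm optimization update. Below is a minimal sketch of one plain PSO run on a toy sphere function; the inertia weight, acceleration coefficients, swarm size, and test function are illustrative assumptions rather than values from the cited papers, and the "stage-best" and "mutation" improvements of example 9 are not implemented here.

```python
# Minimal plain PSO loop (illustrative sketch; all parameters assumed).
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                        # toy objective: global minimum 0 at the origin
    return float(np.sum(x * x))

def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                    # each particle's best position so far
    pbest_val = np.array([sphere(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()            # swarm-wide best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + pull toward personal best + pull toward global best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([sphere(p) for p in pos])
        improved = vals < pbest_val                       # update personal bests
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()        # update global best
    return gbest, sphere(gbest)

print(pso())   # best position close to the origin, best value close to 0
```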