How do you say 梯度下降 in English?

Pinyin: [tī dù xià jiàng]
梯度下降 in English
gradient descent
  • 梯 : Ⅰ noun 1. (ladder; stairs) ladder; stairs; steps; staircase 2. (surname) a surname Ⅱ adjective (shaped like a staircase...
  • 度 : verb [literary] (to surmise; estimate) surmise; estimate
  • 下 : verb 1. (used after a verb, indicating movement from a high place to a low one) 2. (used after a verb, indicating that there is room to hold something) 3. (used after a verb, indicating the completion or result of an action)
  • 降 : verb 1. (to surrender) surrender; capitulate 2. (to subdue) subdue; vanquish; tame
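The example sentences below all describe methods built on this term. As a minimal illustration of gradient descent itself, here is a one-variable sketch; the objective f(x) = (x - 3)^2, the learning rate, and the iteration count are illustrative choices, not from this entry:

```python
# Minimal gradient descent sketch: minimize f(x) = (x - 3)^2.
# Objective, learning rate, and iteration count are illustrative.

def grad(x):
    """Derivative of f(x) = (x - 3)^2."""
    return 2 * (x - 3)

x = 0.0   # starting point
lr = 0.1  # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)  # step against the gradient

print(round(x, 4))  # x approaches the minimizer x = 3
```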
  1. A coupled learning algorithm for feedforward neural networks combining gradient search and chaotic optimization search

    基於規則的前饋神經網路的混沌梯度下降耦合學習演算法
  2. A soft-sensor modeling algorithm based on an improved fuzzy neural network is presented. The normalized average output membership functions are defined as fuzzy basis functions for the defuzzification calculation. To improve convergence, some parameters of the fuzzy neural network are trained by the Levenberg-Marquardt algorithm, and the others by the gradient descent algorithm

    提出了一種改進的模糊神經網路軟測量建模方法,採用規則化的平均輸出隸屬函數作為模糊基函數進行反模糊化運算;在訓練網路時,部分參數採用Levenberg-Marquardt演算法來訓練,另一部分採用一階梯度下降法。
  3. Compared with the classical BP algorithm, the robust adaptive BP algorithm has the following advantages: (1) it increases training accuracy by using both relative and absolute residuals to adjust the weights; (2) it improves robustness and the convergence rate by incorporating robust statistics, using the samples' relative residuals to construct the energy function and thereby suppress the effect of heavily noise-corrupted samples on training; (3) it avoids entrapment in local minima and reaches the global optimum by making the learning rate a function of the error and the error gradient, so that the weight-update learning rate adapts to the instantaneous training error, overcoming the classical BP algorithm's tendency to become trapped in local minima

    與基本BP演算法相比,本文提出的魯棒自適應BP演算法具有以下優點: ( 1 )與魯棒統計技術相結合,通過訓練樣本相對偏差的大小,確定不同訓練樣本對能量函數的貢獻,來抑制含高噪聲干擾樣本對網路訓練的不良影響,從而增強訓練的魯棒性,提高網路訓練的收斂速度; ( 2 )採用相對偏差和絕對偏差兩種偏差形式對權值進行調整,提高了網路的訓練精度; ( 3 )在採用梯度下降演算法對權值進行調整的基礎上,通過將學習速率設為訓練誤差及誤差的特殊函數,使學習速率依賴于網路訓練時誤差瞬時的變化而自適應的改變,從而可以克服基本BP演算法容易陷入局部極小區域的弊端,使訓練過程能夠很快的「跳出」局部極小區域而達到全局最優。
  4. ANFIS, based on Takagi and Sugeno's fuzzy model, has the advantage of being linear in its parameters; thus conventional adaptive methods can be efficiently utilized to estimate them

    由於節點參數是線性的,用梯度下降和最小二乘的混合學習演算法來調節參數,減少了運算量,加快了收斂速度。
  5. Finally, a soft-sensor model of the melt index in a polymerization reaction is established with the proposed method. Simulation results show that, in contrast to the traditional fuzzy neural network, the proposed method is not sensitive to initial parameters and has good convergence and prediction precision

    最後用該建模方法建立了聚合反應中熔融指數的軟測量模型,並與完全基於梯度下降的模糊神經網路軟測量模型進行比較。結果表明改進的模糊神經網路對初始值的選擇不敏感,並且具有很好的收斂性,同時還能達到指定的預測精度,很適合工程應用。
  6. The weights are trained with the gradient descent method. The growth algorithm of the BVS and the restricted-memory recursive formula for network training are also derived

    利用梯度下降法對網路的權值進行訓練,並且推導了BVS的增長演算法,以及網路訓練的限制記憶遞推公式。
  7. To meet the requirements and characteristics of this track, namely paragraph-based retrieval in which the user may supply a relevant document before querying, the Rocchio model and the vector space model are merged to compute the relevance between query and document. Gradient descent is then used to train the parameters of the Rocchio model, and finally the documents, ranked by paragraph-level relevance, are returned

    本文針對文本檢索大會子項目的要求和基於段落的,用戶查詢時可能提供一篇相關文章的查詢特點,首先將Rocchio模型和向量空間演算法結合起來把握用戶需求並計算文檔與查詢的相關度,再使用梯度下降技術來訓練模型中的參數,最後依據查詢和段落層的相關度,使用基於段落切分的方法返回包含用戶查詢的最相關文章。
  8. The transition zone quite often shows a decrease in temperature gradient, owing to the ability of water to absorb up to a third more heat than rock

    由於水具有比巖石能多吸收三分之一熱量的能力,因此,這類有變化的地層通常都顯示出地溫梯度下降的趨勢。
  9. Firstly, the gradient boosting theory and the weight-free regression algorithm based on it are introduced; then experimental results on a practical problem are presented

    首先對以損失函數梯度下降為原理的樣本無權值演算法進行了闡述,並給出了一個實際問題的模擬結果。
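Sentence 9 describes gradient boosting as gradient descent on a loss function. A heavily simplified sketch of that idea for squared loss, where the negative gradient of the loss equals the residuals; the data and the constant "weak learner" are illustrative simplifications, not the cited algorithm:

```python
# Gradient boosting as gradient descent in function space:
# for squared loss L = 1/2 (y - F)^2, the negative gradient is the
# residual y - F, so each boosting stage fits the current residuals.
# Here each "weak learner" is just the residual mean (a constant),
# an illustrative simplification.

y = [1.0, 2.0, 3.0, 4.0]
F = [0.0] * len(y)  # initial model
lr = 0.5            # shrinkage (learning rate)

for _ in range(20):
    residuals = [yi - fi for yi, fi in zip(y, F)]  # negative gradient
    step = sum(residuals) / len(residuals)         # constant weak learner
    F = [fi + lr * step for fi in F]

print([round(f, 2) for f in F])  # predictions approach the target mean 2.5
```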
  10. Secondly, by combining the temporal-difference prediction technique with the genetic algorithm, a novel hybrid genetic neural network for controlling nonlinear chaotic systems, based on small perturbations and gradient descent learning, is presented (known as the HYGANN strategy)

    2. 研究了一種將暫態誤差預測技術、小擾動控制技術、梯度下降法和遺傳演算法(GA)融合起來控制非線性混沌系統的復合遺傳神經網路方法(簡稱HYGANN法)。
  11. The main contributions of this dissertation are summarized as follows: (1) an ILC approach combining feedforward with current feedback is developed based on optimal feedback control and the gradient method. A sufficient condition that guarantees convergence is given for linear systems. The design procedure can employ LQR, H2, or H∞ approaches to improve the convergence rate of learning over iterations

    本文的主要成果有: 1 、在開閉環綜合迭代學習控制結構的基礎上,分析了利用梯度下降法設計前饋迭代學習控制器時,為保證演算法的收斂性,閉環控制系統應該滿足的充分條件,並依據提高演算法收斂速率的優化條件,給出了基於LQR 、 H_2和H_∞等優化控制技術的迭代學習控制演算法的設計方法。
  12. The self-tuning fuzzy control algorithm, comprising a back-propagation-based fuzzy controller and the gradient descent method, is applied to reduce the pitch motion of a SWATH ship with stabilizing control fins

    控制模式為一自調模糊控制,包括應用一類神經倒傳遞觀念的模糊控制器及梯度下降法來減少縱搖運動。
  13. A Gaussian function is determined by its center and width; therefore, we derive new supervised gradient descent algorithms to tune these parameters of the Gaussian functions in both the "if" part and the "then" part

    高斯型函數是由它的中心和寬度兩個參數決定的,所以,我們給出新的有監督的梯度下降演算法來調整如果部分高斯函數與則部分的高斯函數的中心和寬度。
  14. A cost function is constructed by considering the independence of the improper sources, and an online separation algorithm is then derived via gradient descent

    結合非常態信號的獨立性,構造出代價函數,利用梯度下降法推導出在線盲分離演算法。
  15. Natural gradient descent is a new optimization method proposed in a special space, the Riemannian space. The parameter space in blind source separation is a Riemannian space, and the natural gradient has many advantages over the standard gradient

    自然梯度下降法是一種新的最優化方法,自然梯度下降法是在黎曼空間提出的,自然梯度相比于標準梯度有很多優點。
  16. But it has intrinsic defects, such as slow convergence and entrapment in local minima, because the negative gradient method is adopted for weight adjustment. An improved RBF network, which has advantages in function approximation, classification, and learning speed, is introduced, and its sensitivity is also analysed

    但BP網路在用於函數逼近時,權值的調節採用的是負梯度下降法,這種調節方法有它的局限性,如收斂速度慢和容易陷入局部極小等缺點。
  17. Based on the gradient descent rule, the BP (back propagation) algorithm is a local optimization algorithm

    BP演算法基於梯度下降原理,是一種局部尋優演算法。
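Sentence 17's point, that a gradient descent rule only performs local optimization, can be seen in a tiny sketch; the two-minimum objective, the start point, and the step size are hypothetical choices, not from the source:

```python
# Why plain gradient descent (as in BP) is a *local* optimizer:
# f(x) = (x^2 - 1)^2 + 0.3x has a global minimum near x = -1.04
# and a shallower local minimum near x = +0.96.
# The objective and step size are illustrative.

def grad(x):
    """Derivative of f(x) = (x^2 - 1)^2 + 0.3x."""
    return 4 * x * (x**2 - 1) + 0.3

x = 1.0  # start in the basin of the *local* minimum
for _ in range(1000):
    x -= 0.01 * grad(x)

# Descent settles in the nearby local minimum (near 0.96)
# instead of the global one.
print(round(x, 2))
```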
  18. Different training algorithms, namely the Levenberg-Marquardt algorithm and the gradient descent algorithm with an adaptive learning rate and momentum, are compared in this paper. According to the engineering requirements, the dimensions of the IDC can be designed using the trained ANN model and the GA

    將自適應調整學習率並加入動量因子的梯度下降法和Levenberg-Marquardt訓練演算法的訓練結果做了比較分析,同時引入了性能函數的改進形式。
  19. A new optimal selection cluster algorithm (OSCA) is presented in this paper, and a new hybrid algorithm is obtained by combining OSCA with the least squares method and the gradient descent algorithm. The hybrid algorithm can simultaneously solve the identification of the new fuzzy neural network model's structure and parameters

    提出一種新型優選聚類演算法,並將該演算法與最小二乘法和梯度下降法相結合,形成一種新的混合學習演算法。該演算法能同時解決上述新型模糊神經網路模型結構和參數的辨識問題,進一步提高了模型的辨識精度。
  20. The parameters of the local model can either be adjusted by gradient descent within a neighborhood, together with the SOFM weights, or estimated by least-squares estimation (LSE)

    局部模型的參數既可和映射網路權值一起在鄰域內採用梯度下降法修正,也可結合最小二乘法得到其最佳估計。
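Sentence 20 offers two ways to fit a local model's parameters. A sketch contrasting them on a one-feature linear model y = w*x + b; the data, step size, and iteration count are illustrative choices, not from the source:

```python
# Fitting y = w*x + b two ways: iterative gradient descent on squared
# error, and the closed-form least-squares estimate (normal equations).
# Data, learning rate, and iteration count are illustrative.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # exactly y = 2x + 1

# Option 1: gradient descent on mean squared error
w, b = 0.0, 0.0
lr = 0.05
for _ in range(5000):
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w, b = w - lr * dw, b - lr * db

# Option 2: least-squares estimate (normal equations, one feature)
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
w_ls = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b_ls = (sy - w_ls * sx) / n

# Both routes recover the generating line w = 2, b = 1.
print(round(w, 3), round(b, 3), round(w_ls, 3), round(b_ls, 3))
```

The closed-form estimate is exact in one step, while gradient descent approaches the same answer iteratively, which is the trade-off the sentence alludes to.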