local minimum — meaning in Chinese

local minimum — explained
局部最小值 (local minimum)
  • local : adj. 1. of a place; regional; local. 2. partial, localized. 3. provincial; parochial; one-sided. 4. [postal] within the city, local; [rail…
  • minimum : n. (pl. minimums, minima) minimum; lowest; least allowable limit; [math] minimum (value). the irreducible minimum — that which cannot be reduced…
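Before the example sentences, a minimal sketch may make the term concrete: plain gradient descent on a function with two basins stops in whichever minimum its starting point leads to, and only one of them is global. The function, step size, and starting points below are invented for illustration:

```python
# f(x) = (x^2 - 1)^2 + 0.3x has two minima: a global one near x = -1.04
# and a shallower *local* minimum near x = +0.96.

def f(x):
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    # Derivative of f: 4x(x^2 - 1) + 0.3
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=2000):
    # Plain fixed-step gradient descent.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_global = descend(-2.0)   # lands near -1.04 (global minimum)
x_local = descend(2.0)     # lands near +0.96 (local minimum)
print(f(x_global) < f(x_local))  # True: the left basin is lower
```

Both runs stop at a point where the gradient vanishes, but which basin is found depends entirely on the initial point — exactly the "falling into a local minimum" that the sentences below complain about.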
  1. Traditional neural networks (BP networks) have long suffered from three hard-to-overcome drawbacks in network training and design: slow training, a tendency for training to sink into local minima, and poor generalization by the trained networks.

    Traditional neural networks (BP networks) have long been hampered in training and design by three hard-to-overcome defects, namely slow training, training that easily falls into local minima, and poor generalization of what the network learns.
  2. Based on the clustering property of the basis functions of sparse coding, a basis-function initialization method using the fuzzy c-means algorithm is proposed to help the energy function of sparse coding converge to a local minimum better suited to recognition. Experimental results show that both the discriminability and the sparseness of the features are improved.

    Basis functions initialized by fuzzy c-means clustering allow the energy function of sparse coding to converge to a local minimum more favorable for recognition; experiments show that both the discriminability and the sparseness of the features are improved.
  3. A stationary point which is neither a local maximum nor a local minimum point is called a saddle point.

    A stationary point that is neither a local maximum nor a local minimum is called a saddle point.
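The saddle point defined in sentence 3 can be checked numerically on the textbook example f(x, y) = x² − y² (the function and step size here are our own illustration, not from the source):

```python
# f(x, y) = x^2 - y^2 has a stationary point at the origin: both partial
# derivatives (2x, -2y) vanish there. Yet it is neither a local maximum
# nor a local minimum: f rises along the x-axis and falls along the y-axis.

def f(x, y):
    return x * x - y * y

eps = 1e-3  # a small step away from the origin

rises_along_x = f(eps, 0) > f(0, 0)   # True: not a local maximum
falls_along_y = f(0, eps) < f(0, 0)   # True: not a local minimum
print(rises_along_x, falls_along_y)   # True True -> a saddle point
```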
  4. Blade design optimization problems are multimodal and discontinuous, so gradient-based numerical optimization algorithms perform inefficiently on them and drop into local minima prematurely. Hence exploratory algorithms such as GAs are required for global exploration.

    Fitting the complex response relationship with the RSM method smooths the "noise" in the design space, which keeps numerical optimization methods from falling into local extrema and yields good robustness and adaptability.
  5. The learning process of an RBF network is much faster and easier than that of a BP network, and it has no local-minimum regions.

    The training time of a radial basis function network is far shorter than that of a back-propagation network, and it has no local-minimum problem.
  6. Thirdly, considering that BP neural networks are strong in local search but weak in global optimization, while GA-based neural networks are weak in local search but strong in global optimization, the paper proposes a new algorithm combining GA with BP, referred to as a hybrid intelligent learning algorithm, and applies it to optimizing the connection weights of feedforward neural networks.

    Third, given that BP networks have strong local search but weak global search, and genetic-algorithm-based neural networks the reverse, this paper proposes a hybrid intelligent learning method that combines the advantages of the BP and genetic algorithms, and applies it to optimizing the connection weights of multilayer feedforward networks.
  7. The minimum payment for the legal working hours of an employee in FFEs must not be lower than the local minimum wage standard.

    An employee's minimum wage for statutory working hours must not be lower than the local minimum wage standard.
  8. Conventional clustering-criterion-based algorithms are a kind of local search that uses an iterative hill-climbing technique to find an optimal solution; they have two severe defects: sensitivity to the initial data and a tendency to get stuck in local minima.

    Traditional clustering-criterion-based algorithms are essentially local search algorithms that use an iterative hill-climbing technique to seek the optimum; they suffer from the fatal defects of being sensitive to initialization and easily falling into local minima.
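The defect described in sentence 8 — a local search whose answer depends on initialization — can be sketched with a hand-rolled 1-D Lloyd's (k-means) iteration; the data points and starting centers below are made up for illustration:

```python
# Two runs of the same k-means iteration on the same data, differing only
# in the initial centers, settle into different fixed points with
# different clustering costs: two local minima of one objective.

def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Update step: each center becomes its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    # Total within-cluster squared distance.
    cost = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, cost

data = [0, 1, 9, 10, 20]
_, cost_a = kmeans_1d(data, [0, 20])   # converges to cost 82.0
_, cost_b = kmeans_1d(data, [0, 10])   # converges to cost 74.5
print(cost_a != cost_b)  # True: different initializations, different local minima
```

Neither run moves once it reaches its fixed point, so the worse partition is never escaped — the "sensitivity to initialization" the sentence names.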
  9. The standard BP algorithm has been used in many fields and has solved many practical problems. However, it has many limitations: it easily falls into local minima during convergence, its convergence is very slow, and no method for setting its structural and operational parameters has been widely accepted.

    The standard BP algorithm is widely applied and has solved many practical problems, but it also has drawbacks: it easily falls into local minima during convergence, converges very slowly, and lacks accepted theoretical guidance for choosing the network's structural parameters (number of hidden layers and hidden units) and operational parameters (step size, choice of nonlinear function).
  10. From the SE algorithm, the relationship between the local minima and the equalizer delay is derived, and it is more accurate than the formulation in terms of the system delay.

    Concerning the delay local-minimum problem of the linear equalizer, the local-minimum relation of the equalizer with respect to the equalizer delay is derived from the super-exponential algorithm, correcting the relation previously expressed with respect to the system delay.
  11. An initialization method that reaches the local minima at different equalizer delays is proposed for BTEA and SE. A comparative study over several UWAC channels with different zero locations demonstrates the equivalence of the different initialization methods for the least-mean-square (LMS) algorithm, BTEA, SE, and CMA.

    Although initialization of the constant-modulus blind equalization algorithm remains an acknowledged open problem, this paper uses several underwater acoustic channels with different zero locations to compare and demonstrate the equivalence of different weight-vector initializations for the adaptive LMS algorithm, the tricepstrum algorithm, the super-exponential algorithm, and the constant-modulus algorithm.
  12. The simulation results indicate that sample clustering raises the speed and precision of training for the fuzzy modular networks, and that the slow training and local-minimum problems that BP networks encounter in fault diagnosis of complex boilers are avoided.

    Because the fuzzy modular neural network model built in this paper for boiler fault diagnosis clusters its samples, experiments show that its training speed and precision improve markedly, and it effectively resolves the slow convergence and easy entrapment in local minima that BP networks suffer in fault diagnosis of complex boiler systems.
  13. However, a neural network easily falls into local minima and searches globally poorly, whereas the genetic algorithm (GA) is a capable global searcher. A genetic neural network combines the GA's global optimization with the neural network's nonlinearity and rapid convergence.

    But neural networks easily fall into local minima and have weak global search, while genetic algorithms search globally well; a genetic neural network combines the two, retaining the GA's global optimization while keeping the neural network's nonlinearity and fast convergence.
  14. But it has intrinsic defects such as slow convergence and local minima, because the negative-gradient method is adopted for weight adjustment. An improved RBF network is introduced that has advantages in function approximation, classification, and learning rate, and its sensitivity is also analysed.

    When a BP network is used for function approximation, its weights are adjusted by negative gradient descent, a method with limitations such as slow convergence and easy entrapment in local minima.
  15. Moreover, this method is robust to variation in SNR and avoids overfitting and local minima in the neural network. A satisfactory recognition rate for signals is achieved with less training data.

    Over a wide range of signal-to-noise ratios, this method achieves a satisfactory recognition rate with relatively little training data.
  16. Neural networks have been studied for many years and have succeeded in many fields, including pattern recognition and data mining. Unfortunately, many problems, such as local minima, overtraining, and generalization, are encountered in their theory, design, and applications.

    Neural networks have achieved much in pattern recognition, function approximation, data mining, and other fields, but difficulties remain in both theory and application, such as local minima, overfitting, underfitting, and generalization ability.
  17. In training the back-propagation neural network, a parameter-adaptive method that automatically adjusts the learning rate and momentum factor is employed, to keep the system error from settling into a local minimum and to accelerate convergence. The network's structure is then further optimized: results are given on the selection of hidden layers and neurons and on a re-learning strategy. Comparing the sum of squared errors of this algorithm with that of the conventional BP algorithm shows that the approximation accuracy and generalization ability of the network are greatly improved.

    In training the feedforward network, a parameter-adaptive method self-adjusts the learning rate and momentum factor, so that the system error avoids local minima and the network converges faster. An experimental approach to optimizing the BP network's structure is proposed, with results on choosing the number of hidden layers and nodes and on introducing a re-learning strategy. Comparing this algorithm's sum of squared prediction errors with the conventional BP algorithm's confirms that the network's approximation accuracy and generalization ability are both greatly improved.
  18. Experimental results show that the modified BP algorithm not only shortens training time and raises efficiency, but also meets the error goal and improves generalization. It can therefore avoid local minima to some degree and achieve global optimization.

    Comparative study and experiments on the improved BP models show that the modified algorithm shortens learning time and raises learning efficiency; it not only meets the error target but also improves the network's generalization, avoiding the local-minimum problem in learning to some degree and achieving global optimization.
  19. … to local-minimum areas. With the modulation function's ability to mask unreliable areas such as noise and local shadow, and its ability to handle under-sampled areas successfully, the new algorithm can be applied to complex phase unwrapping.

    The new algorithm not only retains the modulation function's own ability to handle unreliable wrapped-phase regions such as local shadow and noise, but also handles under-sampled regions well, making it suitable for unwrapping complex phase fields.
  20. Compared with a plain BP network, this diagnosis avoids the local minima that BP networks fall into and is more accurate and reliable; it also prevents missed and mistaken diagnoses of rotating-rectifier faults.

    Compared with a pure BP neural network, it avoids the local minima that BP networks tend to fall into; the diagnostic results are more accurate and reliable, effectively preventing missed and mistaken diagnoses of rotating-rectifier faults.