What does "hidden parameter" mean in Chinese?

hidden parameter: explanation
隱參量 (hidden parameter)
  • hidden : adj. 隱藏的 (hidden); 秘密的 (secret); 神秘的 (mysterious). A hidden danger 隱患. A hidden meaning 言外之意. A hidden micropho...
  • parameter : n. 1. [Mathematics] 參數, 變數 (parameter, variable); 參詞; 參項. 2. [Physics] 參量 (parameter); (結晶體的) 標軸 (axial parameter of a crystal). 3. [Archaic, Astronomy] 通徑 (latus rectum). vt. -ize 使參數化 (to parameterize).
  1. In the second layer, the k-nearest neighbor algorithm is first used to determine the search scope, and the parameters of the hidden-layer neuron functions are then evolved within this scope; the least-squares method is used to compute the connection weights between the hidden layer and the output layer. (A small least-squares sketch follows the examples below.)

    其中在第二級演化中,先用最小鄰聚法確定搜索空間,然後再在此空間內進行演化,其中用最小二乘法來確定從隱層到輸出層的連接權值。
  2. Firstly, this thesis presents the factors that influence the generalization ability of a neural network. To improve the network's generalization ability and its capacity for dynamic, adaptive knowledge acquisition, a new structure-adaptive neural network model based on a genetic algorithm is proposed to optimize the network's structural parameters, including the number of hidden-layer nodes, the training epochs, and the initial weights. Secondly, by building an integrated neural network and introducing data-fusion techniques, the completeness and precision of the acquired knowledge are greatly improved. Then, to deal with the incompleteness and uncertainty that arise during knowledge acquisition, a rough-set-based knowledge acquisition method is explored to extract rules for an intelligent diagnosis expert system by filling in missing values, eliminating unnecessary attributes, discretizing continuous attributes, reducing redundancy, and extracting rules. Finally, rough set theory and the neural network are combined into an RNN (rough neural network) model for knowledge acquisition, in which rough set theory performs the preprocessing and the neural network acts as the dynamic knowledge acquirer; the RNN greatly improves the speed and quality of knowledge acquisition.

    本文首先討論了影響神經網路的泛化能力的因素,提出了一種新的結構自適應神經網路學習演算法,在新方法中,採用了遺傳演算法對神經網路的結構參數(隱層節點數、訓練精度、初始權值)進行優化,大大提高了神經網路的泛化能力和知識動態獲取自適應能力;其次,構造集成神經網路,引入數據融合演算法,實現了基於集成神經網路的融合診斷,有效地提高了知識獲取的全面性、完善性及精度;然後,針對知識獲取過程中所存在的不確定性、不完備性等問題,探討了運用粗糙集理論的知識獲取方法,通過缺損數據補齊、連續數據的離散、沖突消除、冗餘信息約簡、知識規則抽取等一系列的演算法實現了智能診斷的知識規則獲取;最後,將粗糙集理論與神經網路相結合,研究了粗糙集-神經網路的知識獲取方法。
  3. The main factors of the probabilistic neural network (PNN) that influence its classification performance, including the number of hidden neurons, the hidden center vectors, and the smoothing parameter, are analyzed, and the XOR problem is implemented with a PNN. A new supervised learning algorithm for the PNN is developed: learning vector quantization is used to cluster the training samples, and a genetic algorithm (GA) is used to train the network's smoothing parameters and the hidden center vectors that determine the hidden neurons. Simulation results show that the method outperforms other unsupervised PNN learning algorithms in classification accuracy.

    本文主要分析了pnn隱層神經元個數,隱中心矢量,平滑參數等要素對網路分類效果的影響,並用pnn實現了異或邏輯問題;提出了一種新的pnn有監督學習演算法:用學習矢量量化對各類訓練樣本進行聚類,對平滑參數和距離各類模式中心最近的聚類點構造區域,並採用遺傳演算法在構造的區域內訓練網路,實驗表明:該演算法在分類效果上優于其它pnn學習演算法
  4. In training the back-propagation neural network, a parameter-adaptive method that automatically adjusts the learning rate and the inertia (momentum) factor is employed to keep the systematic error from becoming trapped in a local minimum and to accelerate the network's convergence. For further optimization of the network structure, research results are given on the selection of the hidden layers and neurons and on a re-learning strategy. Comparing the sum of squared deviations of this algorithm with that of the conventional BP algorithm shows that the approximation accuracy and the generalization ability of the network are greatly improved. (A sketch of the adaptive learning-rate and momentum update follows the examples below.)

    在對前饋神經網路的訓練中,使用參數自適應方法實現了學習率、慣性因子的自我調節,以避免系統誤差陷入局部最小,加快網路的收斂速度;提出了優化bp網路結構的實驗研究方法,並給出了有關隱含層數和節點數選擇以及再學習策略引進的研究結果。將該演算法同傳統bp演算法的預測偏差平方和進行比較,結果證實網路的逼近精度及泛化能力均得到了極大的提高和改善。
  5. Watch out for hidden temporaries created by parameter conversions. One good way to avoid this is to make ctors explicit when possible. (A C++ sketch of this guideline follows the examples below.)

    時刻注意因為參數轉換操作而產生隱藏的臨時對象。一個避免它的好辦法就是盡可能顯式的使用構造函數。
  6. If the object passed in as a parameter has hidden columns, those columns are not propagated to the result set sent to the client

    對象具有隱藏的列,這些隱藏列不會傳播到發送到客戶端的結果集中。
  7. In Visual Basic, and it can be represented by a delegate type that exposes this hidden parameter

    表示) ,它可以由公開此隱藏參數的委託類型表示。
  8. The method invocation involves the hidden cost of parameter construction. (A C++ sketch of this cost follows the examples below.)

    方法調用涉及隱藏的參數結構開銷。
  9. Applying the three damage identification techniques above to simply supported beams and cantilever beams confirms that the damage localization method based on the flexibility difference can give vague or wrong locations, while the curvature mode method does not depend on the structural parameters before the damage but still carries the hidden risk of vague localization for small damage.

    將以上三種損傷識別技術應用於簡支梁、懸臂梁的模擬損傷識別,證實了基於柔度差值的損傷定位方法存在模糊或者錯誤定位的問題,曲率模態法可以不依賴損傷前的結構參數,但對于小損傷也存在模糊定位的隱患。
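A note on example 1: computing the connection weights from the hidden layer to the output layer by least squares means solving H·w ≈ y, where H holds the hidden-layer outputs for each training sample and y holds the targets. The C++ sketch below shows only that step, for a made-up two-hidden-unit, single-output case solved through the normal equations; it is an illustration, not the cited thesis's actual algorithm.

```cpp
#include <cstdio>

// Minimal sketch: given hidden-layer outputs H (one row per sample) and
// target outputs y, the hidden-to-output weights w are the least-squares
// solution of H * w = y. For two hidden units the 2x2 normal equations
// (H^T H) w = H^T y can be solved in closed form. All data are made up.
int main() {
    constexpr int N = 4;  // training samples
    double H[N][2] = {{0.2, 0.9}, {0.5, 0.7}, {0.8, 0.3}, {0.9, 0.1}};
    double y[N]    = {1.0, 0.8, 0.4, 0.2};

    // Accumulate A = H^T H (2x2, symmetric) and b = H^T y (2x1).
    double a00 = 0, a01 = 0, a11 = 0, b0 = 0, b1 = 0;
    for (int i = 0; i < N; ++i) {
        a00 += H[i][0] * H[i][0];
        a01 += H[i][0] * H[i][1];
        a11 += H[i][1] * H[i][1];
        b0  += H[i][0] * y[i];
        b1  += H[i][1] * y[i];
    }

    // Solve the 2x2 system A * w = b by Cramer's rule.
    double det = a00 * a11 - a01 * a01;
    double w0 = (b0 * a11 - a01 * b1) / det;
    double w1 = (a00 * b1 - a01 * b0) / det;
    std::printf("hidden-to-output weights: w0 = %.4f, w1 = %.4f\n", w0, w1);
    return 0;
}
```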
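A note on example 4: one common form of a parameter-adaptive update is to add a momentum (inertia) term to each weight change and to grow the learning rate while the error keeps falling, shrinking it when the error rises. The sketch below applies that heuristic to a toy quadratic; the adaptation factors (1.05 and 0.7) and the momentum value are illustrative assumptions, not values taken from the cited work.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Minimal sketch of a gradient step with a momentum (inertia) term and a
// heuristic learning-rate adaptation driven by the error trend.
struct Trainer {
    double lr = 0.1;        // learning rate
    double momentum = 0.5;  // inertia factor
    double prevError = 1e30;
    std::vector<double> prevDelta;

    void step(std::vector<double>& w, const std::vector<double>& grad, double error) {
        if (prevDelta.empty()) prevDelta.assign(w.size(), 0.0);
        // Adapt the learning rate from the error trend (illustrative factors).
        if (error < prevError) lr *= 1.05;  // still improving: speed up
        else                   lr *= 0.7;   // error rose: slow down
        prevError = error;
        // Weight update: gradient term plus a fraction of the previous change.
        for (std::size_t i = 0; i < w.size(); ++i) {
            double delta = -lr * grad[i] + momentum * prevDelta[i];
            w[i] += delta;
            prevDelta[i] = delta;
        }
    }
};

int main() {
    // Toy problem: minimize E(w) = w0^2 + w1^2, whose gradient is 2w.
    std::vector<double> w = {1.0, -2.0};
    Trainer trainer;
    for (int epoch = 0; epoch < 50; ++epoch) {
        std::vector<double> grad = {2 * w[0], 2 * w[1]};
        double error = w[0] * w[0] + w[1] * w[1];
        trainer.step(w, grad, error);
    }
    std::printf("final weights: (%f, %f)\n", w[0], w[1]);
    return 0;
}
```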
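A note on example 5: in C++, a single-argument constructor that is not marked explicit lets the compiler build a hidden temporary to convert a call argument into the parameter type. Declaring the constructor explicit forces the conversion to be spelled out at the call site. The class Name below is purely hypothetical.

```cpp
#include <iostream>
#include <string>

class Name {
public:
    // Marked explicit so the compiler will not silently construct a hidden
    // temporary Name from a std::string argument.
    explicit Name(const std::string& s) : value_(s) {}
    const std::string& str() const { return value_; }
private:
    std::string value_;
};

void print(const Name& n) { std::cout << n.str() << '\n'; }

int main() {
    std::string s = "alice";
    // print(s);       // without 'explicit' this would compile by creating a
                       // hidden temporary Name; with 'explicit' it is an error
    print(Name(s));    // the conversion is now visible at the call site
    return 0;
}
```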
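A note on example 8: one frequent source of this hidden cost in C++ is a by-value parameter, which copy-constructs its argument on every call even though nothing at the call site suggests it. The sketch below contrasts pass-by-value with pass-by-const-reference; the function names are made up.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Pass by value: every call copy-constructs the whole vector, a cost that is
// invisible at the call site.
std::size_t countByValue(std::vector<std::string> words) {
    return words.size();
}

// Pass by const reference: no hidden copy is made.
std::size_t countByRef(const std::vector<std::string>& words) {
    return words.size();
}

int main() {
    std::vector<std::string> words(100000, "parameter");
    std::cout << countByValue(words) << '\n';  // copies 100000 strings first
    std::cout << countByRef(words) << '\n';    // same result, no copy
    return 0;
}
```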