What does "training set" mean in Chinese?

training set: definition
訓練集 — the standard Chinese term for "training set"
  • training : n. training, coaching, practice; exercise; the breaking-in (of horses, etc.); the aiming (of guns, cameras, etc.); [horticulture] the training of plants. be in ...
  • set : SET = safe electronic transaction (a commercial transaction in which payment is made by credit card over the Internet). n. ...
  1. The paper limits the text to be extracted to training materials in the civil aviation domain; by collecting sentences whose typical patterns describe knowledge points in those materials, the original corpus, i.e. the training set, is constructed.
  2. We use control charts to characterize the state of the security environment and data mining to construct intrusion-detection strategies; the latter includes pattern mining, pattern consolidation, and pattern comparison. On this basis we construct the attribute set and training set needed to classify network data.
  3. SLIQ and SPRINT both operate on a training set of fixed size.
  4. Tree pruning reduces the fluctuation caused by noise present in the training set.
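As an illustration of the pruning idea in example 4, the sketch below uses scikit-learn's cost-complexity pruning on a noisy synthetic dataset; the library, dataset, and ccp_alpha value are assumptions for illustration, not the pruning method of the cited work.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y injects label noise so that the unpruned tree overfits the training set.
X, y = make_classification(n_samples=400, n_features=8, flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

unpruned = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_tr, y_tr)  # cost-complexity pruning
print("unpruned test accuracy:", unpruned.score(X_te, y_te))
print("pruned test accuracy:  ", pruned.score(X_te, y_te))
```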
  5. A training set and a testing set were constructed to verify the effectiveness of the model.
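Example 5 simply describes splitting data into a training set and a testing set; a minimal sketch, assuming scikit-learn, a synthetic dataset, and an 80/20 split:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
# Hold out 20% of the data as the testing set; the rest is the training set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```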
  6. Secondly, to optimize the system's settings, some important issues in short-term speech recognition are discussed, such as the number of Markov-chain states, the size of the training set, and the number of Gaussian mixtures. Thirdly, noise-robust parameters are studied in depth: the noise robustness of static and higher-order dynamic feature parameters is compared experimentally, yielding a feature form that performs relatively well in ordinary noisy environments.
  7. In multi-instance learning, the training set comprises labeled bags composed of unlabeled instances, and the task is to predict the labels of unseen bags.
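Example 7 describes the multi-instance setting: bags carry labels, the instances inside them do not. The sketch below shows the data layout and a toy max-score rule for labeling an unseen bag; the scoring rule and data are assumptions, not the method of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each training example is a bag: (array of unlabeled instances, one bag label).
train_bags = [
    (rng.normal(size=(5, 3)), 0),
    (rng.normal(size=(4, 3)) + 2.0, 1),
]

# Toy scoring: similarity of an instance to the mean of positive-bag instances.
positive_mean = np.vstack([bag for bag, label in train_bags if label == 1]).mean(axis=0)

def predict_bag(bag, threshold=-1.0):
    """Label a bag positive if any instance is close enough to the positive mean."""
    scores = [-np.linalg.norm(x - positive_mean) for x in bag]
    return int(max(scores) > threshold)

unseen_bag = rng.normal(size=(6, 3)) + 2.0
print("predicted label of unseen bag:", predict_bag(unseen_bag))
```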
  8. Selection of classification attributes from a web-page training set based on rough sets.
  9. Meanwhile, the SVM parameter-selection method and the representations of the different models are studied. (5) Three SVM-based flood-forecast models are presented: an SVM model with a fixed training set, a dynamic recursive SVM model with a fixed-length training set, and a dynamic recursive SVM model with memory; their performance on real cases shows good application prospects.
  10. In the operation stage, an ensemble classifier is trained for the current user; its training set consists of the user's genuine signatures together with genuine signatures of users already registered in the system, which serve as random forgeries.
  11. Firstly, several representative decision-tree induction algorithms are discussed: ID3, which uses information gain to select a splitting attribute when partitioning the training set; C4.5, which can handle numeric attributes and missing values; CART, which uses the Gini index for attribute selection and induces a binary tree; PUBLIC, which integrates pruning into the tree-building phase; an interactive method, which brings artificial intelligence and human-computer interaction into decision-tree induction; and SLIQ and SPRINT, which scale beyond main-memory limits and are easily parallelized. The advantages and disadvantages of these algorithms are also presented.
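Of the criteria listed in example 11, ID3's information gain is easy to show concretely; a minimal sketch with an assumed toy table of attribute values and class labels:

```python
from collections import Counter
from math import log2

def entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def information_gain(rows, labels, attr):
    """Entropy of the whole training set minus the weighted entropy after splitting on attr."""
    gain = entropy(labels)
    for value in set(row[attr] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Assumed toy training set: two attributes (outlook, temperature) and a class label.
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "no", "yes", "yes"]
best = max(range(len(rows[0])), key=lambda a: information_gain(rows, labels, a))
print("ID3 splits on attribute index", best)  # attribute 0 separates the classes perfectly
```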
  12. Choosing the basis-function centers is a key factor in applying RBF networks. Since randomly selecting the radii often leads to slow training, we propose a center-selection method based on statistical information from the training set, with a nearest-neighbour learning algorithm used to determine the centers. The method is applied to credit-approval prediction, and the results indicate its effectiveness.
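The work in example 12 determines RBF centers with a nearest-neighbour learning algorithm; as a stand-in, the sketch below places centers at k-means centroids of the training set and derives the radius from center spacing rather than choosing it at random. The library, data, and radius heuristic are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))  # assumed toy training set

# Place the k basis-function centers at cluster centroids of the training set.
k = 10
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train).cluster_centers_

# Radius heuristic from center spacing, instead of a randomly chosen value.
pairwise = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
sigma = pairwise[pairwise > 0].mean() / np.sqrt(2 * k)

def rbf_features(X):
    """Gaussian RBF activations of the inputs with the selected centers."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

print(rbf_features(X_train[:3]).shape)  # (3, 10)
```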
  13. It is observed experimentally and algorithmically that when the training data are noisy and overlapping, many support vectors have Lagrange multipliers at the upper bound. If it were known beforehand which examples are bound support vectors, those examples could be removed from the training set and their multipliers fixed at the upper bound; with fewer free variables to optimize, this method promises to reduce training time.
  14. Finally, a decision-tree classifier whose core algorithm is a scalable ID3 algorithm is developed in Microsoft Visual C++ 6.0. Real training sets are used to test the classifier, and the experiments show that it builds decision trees successfully and scales well.
  15. All the GFA models produced corrosion rates very close to the experimental values for the training set (see figure).
  16. In the training stage, two ensemble classifiers are trained separately, with and without simple-forgery samples in the training set, and a mapping function is built between the parameters of the two classifiers.
  17. In many cases, however, traditional algorithms are hard to apply or perform poorly, for example on noisy data, redundant information, incomplete data, or sparse data in a database; neural networks can acquire knowledge from a training set.
  18. In this dissertation, we propose an improved genetic algorithm and use it to search the sample space; the near-optimal representative subset of the training set that it finds is then used as the training set to classify the evaluation set.
  19. Limiting ourselves to nets with no hidden nodes, but possibly having more than one output node, let p be an element of the training set and t(p, n) the corresponding target of output node n.
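For the setting in example 19 (no hidden nodes, possibly several output nodes, target t(p, n) for element p at output node n), one plausible training step is a delta-rule update; the linear activation, squared-error loss, and learning rate below are assumptions for illustration.

```python
import numpy as np

def train_step(W, p, targets, lr=0.1):
    """One gradient-descent step; W has one weight row per output node."""
    y = W @ p                      # linear output of every output node for element p
    error = targets - y            # t(p, n) - y(p, n) for each output node n
    W += lr * np.outer(error, p)   # delta-rule weight update
    return W, 0.5 * np.sum(error ** 2)

W = np.zeros((2, 3))               # 2 output nodes, 3 input features, no hidden nodes
p = np.array([1.0, 0.5, -1.0])     # one element of the training set
targets = np.array([1.0, 0.0])     # t(p, 0) and t(p, 1)

for _ in range(20):
    W, loss = train_step(W, p, targets)
print("squared error after 20 steps:", round(loss, 6))
```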
  20. In view of the scarcity of fault samples in the process industry, support vector machines (SVMs), which generalize well from a limited training set, are used to identify the projection-coefficient matrix of the TE process, and satisfactory results are obtained.