超平面 的英文怎麼說

中文拼音 [chāopíngmiàn]
超平面 英文
hyperplane
  • 超 : Ⅰ動詞1 (越過; 高出) exceed; surpass; overtake 2 (在某個范圍以外; 不受限制) transcend; go beyo...
  • 平 : Ⅰ形容詞1 (沒有高低凹凸; 不傾斜) flat; level; even; smooth 2 (高度相同; 不相上下) on the same l...
  • 面 : Ⅰ名詞1 (頭的前部; 臉) face 2 (物體的表面) surface; top 3 (外露的一層或正面) outside; the ri...
  1. The auxiliary hyperplane method for finding section intersection lines

    輔助超平面法求截交線
  2. Research on the distance from a point to a hyperplane in Euclidean space (a distance sketch follows the example list)

    歐氏空間中點到超平面的距離研究
  3. The separating hyperplane of traditional support vector machines is sensitive to noise and outliers

    摘要傳統的支持向量機分類超平面對噪聲和野值非常敏感。
  4. When traditional support vector machines separate data containing noise, the resulting hyperplane is often not the optimal one

    使用傳統的支持向量機對含有噪聲的數據分類時,所得到的超平面往往不是最優超平面
  5. SVM maps input vectors nonlinearly into a high-dimensional feature space and constructs the optimal separating hyperplane in that space to realize modulation recognition

    支撐矢量機把各個識別特徵映射到一個高維空間,並在高維空間中構造最優識別超平面分類數據,實現通信信號的調制識別。
  6. The multiple-hyperplane classifier, which is investigated in terms of the complexity of the optimization problem and the generalization performance, is a straightforward extension of the optimal separating hyperplane classifier

    多超平面分類器從優化問題的復雜度和泛化能力兩方面進行研究,是最優分離超平面分類器一種顯而易見的擴展。
  7. For this problem, a separating hyperplane is constructed on the principle of maximizing the distance between the two class centers, and a novel support vector machine, called the maximal class-center margin support vector machine (MCCM-SVM), is designed

    為了解決這個問題,本文以兩個類中心距離最大為準則建立分類超平面,構造一個新的支持向量機,稱作類中心最大間隔支持向量機。
  8. The conclusion is that if a set of points in n-space is cut by a hyperplane, then applying the perceptron training algorithm will eventually result in a weight distribution that defines a TLU whose hyperplane makes the wanted cut (a perceptron sketch follows the example list)

    )下的結論是,如果n維空間的點集被超平面切割,那麼感知器的培訓演算法的應用將會最終導致權系數的分配,從而定義了一個tlu ,它的超平面會進行需要的分割。
  9. By mapping input data into a high-dimensional feature space in which an optimal separating hyperplane is built, SVM offers many advantages for small-sample, nonlinear, and high-dimensional pattern recognition problems, as well as other machine-learning problems such as function fitting

    Svm的基本思想是通過非線性變換將輸入空間變換到一個高維空間,然後在這個新的空間中求取最優分類超平面。它在解決小樣本、非線性及高維模式識別問題中表現出許多特有的優勢,並能夠推廣應用到函數擬合等其他機器學習問題中。
  10. In Chapter 4 we obtain the Helly number for hyperplane transversals to translates of a convex cube in R^d, where we prove that the Helly number for such families is 5 when d = 2, and is greater than or equal to d + 3 when d ≥ 3

    在第4章中我們探討了R^d中超平面橫截單位立方體平移形成的集族的Helly數,證得當d = 2時此Helly數為5,當d ≥ 3時此Helly數不小於d + 3。
  11. This paper consists of three parts, as follows. 1. A nonsmooth convex program is relaxed to a smooth convex program by using a cutting plane constructed from a subgradient, and an algorithm based on this cutting plane is presented. In this way, a cutting plane algorithm for semidefinite programming and its convergence are provided

    利用次微分的概念給出了一種非光滑凸規劃割的構造技巧,找到了半定規劃可行域的一個支撐超平面,從而給出了求解半定規劃的一種割演算法
  12. Some new ideas are proposed in this thesis based on SVM and ICA. Firstly, a modified SVM method based on posterior probability is given, which corrects the classification hyperplane from the original one; a better classification result is obtained without searching for the best quadratic programming algorithm, and large-scale training sets are reduced to small-scale training sets at the same time. Secondly, ICA is applied in the preprocessing stage of the character images to be recognized, for the purpose of feature extraction and dimensionality reduction

    本文在系統研究svm和ica的基礎上提出了以下新的觀點:其一是採用了引入后驗概率的修正svm方法,它在原分類超平面的基礎上不斷修正分類超平面,提高分類正確率,從而避免了尋找最優二次規劃的麻煩,同時將大規模訓練樣本集化為小規模訓練樣本集;其二是應用獨立分量分析ica對需要進行識別的字元圖像預處理,提取字元特徵,降低輸入數據的維數,從而可以為下一步的svm識別過程提供好的數據集,用以提高識別率和識別速度。
  13. Here the author emphasizes non-linear neural networks applied to data mining. The neural networks currently studied are mostly linear networks based on hyperplanes; such networks usually require long training times and their structure is hard to interpret

    目前的神經網路研究基本上是基於超平面的線性神經網路,通常這種網路存在著學習時間長,網路結構不容易理解等問題。
  14. Based on an analysis of the cause of this problem, this paper proposes weighted support vector machine algorithms; the algorithm overcomes the drawback that the standard support vector machine algorithm cannot treat each sample flexibly, and compensates for the unfavorable impact caused by this bias

    通過比較它們各自的優缺點等情況,為提出新的支持向量機演算法做了理論準備。 ( 2 )介紹了支持向量機演算法的思想,以及超平面的區別。
  15. If these points can be cut by a hyperplane (in other words, an n-dimensional geometric figure corresponding to the line in the above example), then there is a set of weights and a threshold that define a TLU whose classifications match this cut

    如果這些點可以被超平面(換句話說,對應于上面示例中的線的n維幾何外形)切割,那麼就有一組權系數和一個閾值來定義其分類剛好與這個切割相匹配的TLU。
  16. A geometric transversal is defined to be an affine subspace (such as a point, a line, a plane, or a hyperplane) intersecting every member of a given family. In Part I we discuss three kinds of such problems. In Chapter 2 we discuss point transversals to a family of translates of a convex set in the plane, where we prove a famous conjecture of Grünbaum's by a concrete and straightforward method for some special cases

    如果一仿射子空間(如一個點,一條直線,一個平面,或一個超平面)與一給定集族的每一個元都相交,則我們稱該仿射子空間為該給定集族的一個幾何橫截(點橫截,直線橫截,平面橫截等) ,也稱該仿射子空間橫截該給定集族。
  17. The separating plane with maximal margin is the optimal separating hyperplane, which has good generalization ability. Finding the optimal separating hyperplane leads to a quadratic programming problem, which is a special optimization problem; after optimization every vector is assigned a weight, and a vector whose weight is not zero is called a support vector (a linear-SVM sketch follows the example list)

    而尋找最優分類超平面需要解決二次規劃這樣一個特殊的優化問題,通過優化,每個向量(樣本)被賦予一個權值,權值不為0的向量稱為支持向量,分類超平面是由支持向量構造的。
  18. Orthogonal matrices have special structures, and every row vector of such a matrix can be taken as a point, which may be parametrized on the n-sphere. Through studying the structure of orthogonal matrices, the author finds a parametrized matrix that can express all orthogonal matrices. By analysing the orthogonality between the related hyperplanes and the number of required parameters, we establish the completeness of this representation

    自然的,這些點可以用其球坐標,即與各坐標軸的夾角來參數化,作者通過觀察正交矩陣的幾何結構,最終找到了任意維數的隨機正交矩陣的參數表示方法,通過分析相關超平面之間的垂直關系和參數化正交矩陣需要的參數個數,論證了這種表示的完備性。
  19. In the control procedure, the PNN learns online and provides signals to the FNNC. One of the algorithms used to train the networks is the genetic algorithm, which is based on Darwinian evolution. To avoid premature convergence or getting trapped in a hyperplane, the crossover and mutation rates are adapted automatically to improve search efficiency

    在神經網路離線訓練時應用了基於達爾文進化論的遺傳演算法,為了解決一般遺傳演算法的早期收斂和陷入超平面等問題,採取對交叉和變異率自適應調整的方法來提高搜索效率。
  20. In contrast to other variable structure control methods, variable structure terminal sliding mode control (VSTSMC) avoids the local sliding modes that exist on each switching hyperplane. Therefore, the system states can enter the terminal sliding mode directly, with the desired dynamic characteristics

    變結構最終滑動模態控制(variable structure terminal sliding mode control, VSTSMC)方法與其它滑動模態變結構控制方法相比,避免了在各個切換超平面上存在局部滑動模態的情況。
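
To make example 2 concrete, here is a minimal Python sketch (function name and data are hypothetical) of the point-to-hyperplane distance in Euclidean space: for the hyperplane {x : w·x + b = 0}, the distance from a point x0 is |w·x0 + b| / ||w||.

    import numpy as np

    def point_to_hyperplane_distance(x0, w, b):
        # Euclidean distance from the point x0 to the hyperplane {x : w.x + b = 0}.
        w = np.asarray(w, dtype=float)
        x0 = np.asarray(x0, dtype=float)
        return abs(np.dot(w, x0) + b) / np.linalg.norm(w)

    # Example: the plane x + y + z - 3 = 0 lies at distance sqrt(3) from the origin.
    print(point_to_hyperplane_distance([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], -3.0))  # ~1.732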
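
For examples 8 and 15, this is a minimal perceptron-training sketch, assuming a toy linearly separable dataset (all names and values are hypothetical). When the points can be cut by a hyperplane, the update loop eventually finds weights w and a bias b such that sign(w·x + b) reproduces that cut.

    import numpy as np

    def train_perceptron(X, y, epochs=100, lr=1.0):
        # Perceptron training; y holds labels in {-1, +1}.
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        b = 0.0
        for _ in range(epochs):
            errors = 0
            for xi, yi in zip(X, y):
                if yi * (np.dot(w, xi) + b) <= 0:  # misclassified point: move the hyperplane toward it
                    w += lr * yi * xi
                    b += lr * yi
                    errors += 1
            if errors == 0:  # a full pass with no mistakes: w.x + b = 0 separates the classes
                break
        return w, b

    X = np.array([[2.0, 2.0], [1.5, 1.0], [0.0, 0.0], [-1.0, 0.5]])
    y = np.array([1, 1, -1, -1])
    w, b = train_perceptron(X, y)
    print(w, b, np.sign(X @ w + b))  # the predicted signs should match y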
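
For examples 3-5, 9, and 17, the sketch below fits a linear support vector machine with scikit-learn on hypothetical two-cluster data; it only illustrates the general idea of an optimal (maximal-margin) separating hyperplane and its support vectors, not the specific methods proposed in those theses.

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical data: two linearly separable Gaussian clusters in the plane.
    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(20, 2) + [2.0, 2.0], rng.randn(20, 2) - [2.0, 2.0]])
    y = np.array([1] * 20 + [-1] * 20)

    # A linear SVM solves a quadratic program for the maximal-margin hyperplane w.x + b = 0.
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X, y)

    w, b = clf.coef_[0], clf.intercept_[0]
    print("normal vector w:", w, "bias b:", b)
    # Only the samples with nonzero weights in the dual solution are support vectors;
    # they alone determine the separating hyperplane.
    print("number of support vectors:", len(clf.support_vectors_))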