How to say 神經元函數 in English
Chinese pinyin: [shén jīng yuán hán shù]
神經元函數 in English: neuron function

- 神: I. noun 1. (deity) god; deity; divinity 2. (spirit; vigor) spirit; mind 3. (air; expression) expression; l...
- 經: verb [textiles] (to comb spun yarn or thread into warp) warp
- 函: noun 1. [literary] (case; envelope) case; envelope 2. (letter) letter 3. (surname) a surname
- 數: adverb (repeatedly) frequently; repeatedly
- 神經: nerve; nervus
- 函數: [mathematics] function; 函數計算機 function computer; 函數計算器 function calculator; 函數運算 functional operation
Example sentences:
Furthermore, owing to the symmetry restriction of traditional radial basis function networks (RBFN) with Gaussian basis functions, an asymmetric Gaussian basis function (AGBF) is proposed to construct a fully adaptive AGBF network (AGBFN). Because the asymmetric Gaussian function has greater variability and malleability than the traditional one, it gives the AGBFN higher flexibility and lets it approximate the true result more easily.
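The asymmetric Gaussian basis function described above can be sketched as a Gaussian whose width differs on the two sides of its center; the function name and parameterization below are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

def agbf(x, center, sigma_left, sigma_right):
    """Asymmetric Gaussian basis function: a Gaussian whose width differs
    on the two sides of the center, giving more shape flexibility than
    the symmetric Gaussian kernel of a standard RBF network."""
    sigma = np.where(x < center, sigma_left, sigma_right)
    return np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

# Response is 1 at the center and decays faster on the narrow side.
x = np.array([-1.0, 0.0, 1.0])
print(agbf(x, center=0.0, sigma_left=0.5, sigma_right=2.0))
```

With `sigma_left < sigma_right` the left tail falls off faster than the right one, which is the extra "malleability" the sentence refers to.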
針對這一問題,文中提出了一種全自適應的不對稱高斯基函數網路( agbfn )結構,網路的基函數採用具有不對稱寬度的偽高斯函數,和常規的高斯函數相比,具有更大的可變性和延展性,從而使得隱層神經元在函數近似上具有更高的適應性,提高了神經網路的學習能力。

An empirical equation summarized from many experiments is used to determine the number of hidden-layer neurons, and a new sum-of-squared-errors function is adopted. Its characteristic is that the error weights assigned to possible outliers are smaller; accordingly, the influence of outlier errors is reduced, which makes it easier to fit the true functional relation.
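A loss of this kind, keeping the squared-error form while downweighting large residuals, can be sketched as follows; the weighting rule and the threshold `c` are illustrative choices of mine, not the paper's exact formula:

```python
import numpy as np

def robust_sse(errors, c=1.0):
    """Sum of squared errors with reduced weight on large residuals
    (hypothetical form). Residuals with |error| > c get weight
    c/|error| < 1, so likely outliers contribute less to the loss."""
    a = np.abs(np.asarray(errors, dtype=float))
    w = np.where(a > c, c / np.maximum(a, c), 1.0)
    return float(np.sum(w * np.asarray(errors, dtype=float) ** 2))

print(robust_sse([0.1, 0.2, 5.0]))  # the 5.0 outlier contributes 5.0, not 25.0
```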
本系統針對bp演算法的局限性,給出了一種優化的bp演算法,採用經過大量實驗總結出的經驗公式來確定隱層神經元的個數,並選取了一種新的誤差平方和函數,該函數的特點是對一些可能的異常點的誤差權值設計的較小,從而降低了異常值誤差帶來的影響,便於模擬出真實的函數關系。

First we introduce its basic idea, typical activation functions, learning rules and its main applications: classification and clustering, associative memory, and optimization.
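For reference, a few of the activation functions usually listed as "typical" can be written out directly; the selection here is generic, not the specific set used in the cited text:

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: squashes any input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes any input into (-1, 1).
    return np.tanh(x)

def step(x):
    # Hard threshold: the classic McCulloch-Pitts neuron output.
    return np.where(x >= 0, 1.0, 0.0)

print(sigmoid(0.0), tanh(0.0), float(step(0.0)))  # 0.5 0.0 1.0
```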
我們先簡要介紹了人工神經元網路的基本思想,典型激活函數,學習規則和主要應用:分類聚類,聯想記憶和優化。

It was shown that the inhibition activated by weak noise could be GABAergic. From these findings we hypothesize that the random-oscillation input induced by noise in the cochlea can be integrated in the central auditory nuclei, tuning the response of sound-sensitive neurons to sound stimuli into an optimal state for intensity coding.
根據這些變化可以推測,這種背景噪聲的生物學作用可能是通過弱噪聲所引起的耳蝸隨機共振的輸入,在上行過程中經各級聽覺核團的整合,將中樞聲敏感神經元調定在一種準備狀態,並定型放電率函數和調制神經元對聲強的編碼。

This thesis studies the dynamic features, such as convergence and periodicity, of a class of discrete-time neural network models with two neurons. The neurons' signal-transmission function in this model is a three-piece constant function with the following behaviour: if a neuron's signal is active between a and b, it exerts a constant excitatory effect on the other neuron; if the signal is lower than a, it exerts a constant inhibitory effect; if the signal is higher than b, it has no effect.
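The three-piece constant transmission function just described can be written down directly; the output constants -1, +1 and 0 are placeholders for the constant inhibition, constant excitation and no-effect levels:

```python
def transmission(signal, a, b):
    """Three-piece constant transmission function:
    below a        -> constant inhibition  (-1, placeholder value)
    between a, b   -> constant excitation  (+1, placeholder value)
    above b        -> no effect            (0)"""
    if signal < a:
        return -1
    elif signal <= b:
        return 1
    else:
        return 0

print([transmission(s, a=0.2, b=0.8) for s in (0.0, 0.5, 1.0)])  # [-1, 1, 0]
```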
本文研究了一類二元離散人工神經網路模型的解的收斂性及周期解的存在性等動力學特徵。該模型的神經元信號傳遞函數是三段常數不連續函數。這種信號傳遞函數表明如果某神經元的信號在a與b之間活躍,則它對另一個神經元產生恆定的激勵效果,如果某神經元的信號低於a,則它對另一個神經元產生恆定的抑制效果,如果某神經元的信號高於b,則它對另一個神經元不產生作用。

In the symmetry analysis of the Hopfield neural network with Hebbian learning, we study the dynamical behaviour of the state space under the action of the isometric transformation group G = Z2 ? n, and prove the invariance of the energy orientation of the state space under the action of G. We find that the symmetry relationship of the network is S_X = S_W = S_H when the neurons' activation function is odd, where S_X is the symmetry of the pattern set X under the Hebbian learning rule, S_H is the symmetry of the network, and S_W is the symmetry of the network's weight matrix W.
…S_n為手段,研究了網路狀態空間在群G作用下各點的運動情況,證明了群G作用下的不變性。證明了當神經元的激活函數f為奇函數時,Hebb法則下存儲樣本集X的對稱性S_X、網路對稱性S_H以及連接矩陣對稱性S_W三者之間滿足S_X = S_W = S_H的關系;同時,我們還證明了:網路穩定態集V_f同一S_H軌道中的兩個穩定態的動力學行為(能量和吸引域大小)相同;兩個等距網路H和H1 = g ? H…

Importance of optimizing the types of neuron activation functions
優化神經元激活函數類型的重要性

Our conditions and estimates are formulated in terms of the network parameters, the neurons' activation functions and the associated equilibrium point; hence they are easily checkable. These results are believed to be significant and useful for the design and application of delayed Hopfield neural networks.
這些條件和估計的公式是由網路參數、神經元激活函數以及相應的平衡點構成,所以它們很容易使用,相信這些結果對于帶時間延遲的hopfield神經網路的設計和應用具有一定的重要性和使用價值。

By analyzing the limitations of traditional neural networks, this paper presents an intelligent neuron model based on linearly independent functions, and analyzes the knowledge-storage capacity of the intelligent neuron.
在分析傳統神經網路缺陷基礎上,運用線性獨立函數構建了智能神經元模型,並對這種神經元的知識存儲能力進行了理論分析。

Firstly, the basic theory of artificial neural networks is introduced, including the neuron model, the basic structure of artificial neural networks, and learning methods. The essentials of BP networks, with their function-approximation capability, and of RBF networks, together with implementation issues, are analyzed, and the learning algorithms of BP and RBF networks are studied.
論文首先闡述了神經網路的基本理論,包括神經元模型,神經網路的基本結構和神經網路的學習方法;分析了具有函數逼近能力的bp網路和徑向基函數( rbf )網路實質、技術實現問題,並研究了bp和rbf網路學習演算法。

A voltage-mode synapse circuit with a simple structure, high speed and high precision was designed first. Then a BiCMOS-based circuit called the verdict-converting switch (VCS) was put forward as a core component of MTNCs. On this basis, a switch-level approach to designing multi-threshold verdicting function (MTVF) circuits was proposed.
先提出了一種結構簡單,線性度好,併兼顧精度的突觸電路,然後提出一種基於bicmos工藝的判別轉換開關電路,在此基礎上,結合限幅電壓開關理論,提出從開關級設計多閾值神經元閾值判別函數電路的一般方法。

Because real-valued and complex-valued systems differ, different neuron models are adopted for the two systems. In addition, this paper designs two kinds of transfer functions for different input signals. The new algorithm remedies flaws of the original one, such as its small application domain and the difficulty of choosing parameters.
文中針對實數和復數系統的差異提出了兩種不同的網路神經元結構,並且根據傳輸信號的差異設計了兩種傳輸函數,彌補了原來演算法應用范圍小、參數不易選取的缺陷。

The application of ADALINE in recognition of mine water quality types
具有線性功能函數的神經元在礦井水質類型識別中的應用

Thus it can be seen that research on the topological structures, function-approximation properties and learning algorithms of procedure neural network models is quite significant.
研究過程神經元網路模型的拓撲結構,函數逼近性質,學習演算法等具有十分普遍的意義。

Hence the advantage of MTNs over STNs is shown by the fact that networks need fewer neurons when built from MTNs than from STNs. In addition, the literal, AND and OR operations, the three basic operations of ternary logic, were each implemented by a single MTN; from these basic MTNs, an arbitrary ternary function can be realized by a network.
利用這一方法,用一個多閾值神經元即實現了需三個單閾值神經元方能實現的異或運算,由此大幅減少了神經元個數;用一個多閾值神經元分別實現了三值邏輯中的文字、與、或三種基本運算,由這三種基本運算的多閾值神經元,可組成實現任意三值函數的多閾值神經元網路,由於提高了單個神經元信息處理的能力,使神經網路可實現復雜的多值邏輯,性能得以提高。

The feedback procedure neural network structure is studied. In the research on the PNN algorithm, the concept of orthogonal basis functions is applied to convert the integral operation into a summation, so that complicated aggregation in the time domain is avoided.
在過程神經元網路的演算法研究中,應用函數空間正交基的概念,可將積分運算元變換為求和運算元,從而有效避免了繁雜的時域聚合運算。

This network's structure is similar to the RBF neural network rather than to the conventional ellipsoidal-unit neural network. In the new network, ellipsoidal unit functions are used in the hidden layer, and the hidden nodes are fully connected to the output nodes. A rough k-means method is used to obtain the centers of the ellipsoidal basis functions, and a way of deciding the threshold is given.
本文改進了一種橢球單元神經網路,它與經典橢球網路的結構不同,而與rbf神經網路結構類似:它的隱層節點採用橢球單元函數,代替了rbf網路的高斯基函數,並且用粗糙k -均值方法求取橢球基函數的中心,給出了確定初始閾值的方法。

Using trajectory-following error and steering busyness, two indices from the evaluation of steering stability, a quadratic performance-index function for neuron learning was established, and the connection weights of the single-neuron controller were tuned with a gradient-descent algorithm.
利用操縱穩定性評價中的軌跡跟隨誤差和方向盤忙碌程度的評價指標,建立了神經元學習的二次型性能指標函數,並採用梯度下降演算法實現了單神經元控制器的連接權值的調整。

The inner product of the mapped values of the original data in feature space is replaced by a kernel function, and in the KSOM algorithm each neuron's weight vector can be initialized and updated through its vector of combination coefficients, which yields intuitive and simple iteration formulas.
該演算法以核函數代替原始數據在特徵空間中映射值的內積,並且神經元權值向量的初始化和更新都可由其組合系數向量表示,從而獲得了直觀而簡單的迭代公式。

In particular, when the number of hidden-layer neurons equals the number of training patterns, a global minimum with zero cost function is obtained.
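The zero-cost special case can be illustrated with a small RBF-style interpolation: with one Gaussian hidden unit per training pattern, the hidden-layer matrix is square and generically invertible, so output weights fitting the targets exactly exist. The Gaussian width and random data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))   # 5 training patterns in R^2
y = rng.standard_normal(5)        # 5 target values

# Hidden-layer responses: one Gaussian unit centered on each pattern,
# so Phi is a square 5x5 matrix (width 1.0 is an arbitrary choice).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Phi = np.exp(-d2 / 2.0)

# Solving Phi @ w = y gives output weights with zero training error:
# a global minimum of the sum-of-squares cost.
w = np.linalg.solve(Phi, y)
print(np.max(np.abs(Phi @ w - y)))  # numerically ~0
```

With fewer hidden units than patterns the system is overdetermined and the residual is generally nonzero, which is why the equal-count case is singled out.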
一個特例是當隱層神經元的個數與樣本個數相等時,就可以求得代價函數值為0的全局最小點。