What does "cache system" mean in Chinese?

cache system explained
超高速存儲器系統 (an ultra-high-speed memory system)
  • cache: n. 1. A hiding place (used by explorers etc. to store provisions or equipment); a secret store. 2. The things stored or hidden. 3. [Computing] high-speed cache memory. vt. 1. To store up; hide away; hoard. 2. [Computing] to store … to the hard disk.
  • system: n. 1. A system; a scheme of classification; an organization; equipment, apparatus. 2. A way; a method; a procedure. 3. An institution; a doctrine. 4. Order, regularity. 5. ...
  1. In the late 1980s and early 1990s, the Coda file system premiered a different cache manager

    在上個世紀80年代末90年代初, coda文件系統首次發布了一個不同的緩存管理器:
  2. The job of cache coherency is done partly by the hardware and partly by the operating system

    保證高速緩存一致性的工作由硬體和操作系統共同分擔。
  3. The control system includes the following units: a video decoding unit, a data format conversion unit, an FPGA controller, a cache unit, and a D/A monitoring unit. These self-designed control units, together with the row and column power-supply units, make up the complete FED driving system, driving the 25-inch sample and realizing color video display. The 25-inch VGA sample thus fabricated can display video images, with a brightness of 400 cd/m2, a contrast ratio of 1000:1, and 256 gray levels

    本文介紹了fed驅動系統的工作原理,重點論述了基於fpga的vga級彩色fed新型驅動控制系統的研製,這種新型fed驅動控制系統主要包括視頻解碼電路、數據格式轉換電路、 fpga控制電路、數據緩存電路和d / a監控電路,配合后級列灰度調制單元和行掃描單元,組成完整的fed驅動系統,可以驅動25英寸vga級fed顯示屏,實現彩色視頻顯示,樣機亮度達400cd / m2 、對比度為1000 : 1 ,灰度等級為256級。
  4. Similarly, according to the DSP data access pattern and the design philosophy of MD32, a data memory combining a data cache and data RAM is proposed. In the MD32 RTOS, the memory management system is one of the most important parts

    同時,針對dsp中數據訪問的特點和md32同時兼顧dsp與risc特性的設計特點,設計了包含數據ram和數據cache的數據存儲系統,並對給出了數據存儲系統的訪問規則。
  5. Their write cache whenever the system is rebooted or suspended

    它們的寫高速緩存。
  6. The overhead of this request can make a system unscalable if the validating parser does not cache the schema definitions

    如果確認解析器沒有對模式定義進行高速緩存,這種請求的開銷會使系統失去可擴展性。
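A common way to avoid the overhead described in example 6 is to compile each schema once and reuse it. The Java sketch below is not taken from the cited text; the class and method names are ours, and it only illustrates memoising compiled schema definitions with the standard javax.xml.validation API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

import org.xml.sax.SAXException;

// Illustrative only: compile each schema once and reuse the thread-safe Schema object.
public final class SchemaCache {

    private static final Map<String, Schema> COMPILED = new ConcurrentHashMap<>();

    // Returns the cached Schema for the location, parsing the definition only on first use.
    public static Schema schemaFor(String schemaLocation) {
        return COMPILED.computeIfAbsent(schemaLocation, location -> {
            try {
                return SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                                    .newSchema(new StreamSource(location));
            } catch (SAXException e) {
                throw new IllegalStateException("Cannot parse schema: " + location, e);
            }
        });
    }

    // Validators are cheap to create from a cached Schema and should not be shared between threads.
    public static javax.xml.validation.Validator validatorFor(String schemaLocation) {
        return schemaFor(schemaLocation).newValidator();
    }
}
```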
  7. By implementing kernel-level file calls and a caching mechanism on the client side, this newly proposed distributed network file system provides seamless network file access and reduces the performance loss caused by network transmission. Using the concept of a logical block server, it provides reliable data block storage and implements redundant storage. Using the concept of an index server, it greatly lowers the server and network overhead of data access and realizes load-balanced computing

    在客戶端通過實現內核級文件的調用和緩沖機制,實現了文件的無縫網路存取,並減少由於網路傳輸帶來的性能下降的影響;利用邏輯塊服務器實現邏輯塊的冗餘存取,實現數據塊的安全存放;利用索引服務器進行負載均衡計算,實現資料存取的較低網路和服務器開銷;利用索引服務器實現服務器組的零管理,使該系統具有高效性、穩定性和可伸縮性。
  8. First we test and verify the consistency between the original design and the data acquisition system with the FIFO cache; then we quantitatively analyze the performance of the data acquisition system with the FIFO cache, and the results are satisfactory

    在數據採集系統的改進設計和實現中:首先對加入fifo緩存后數據採集系統工作的一致性進行了驗證;然後對加入緩存后系統的工作性能進行了定量分析得到了較為理想的結果。
  9. In the preprocessing stage, user and session identification methods often adopt heuristic algorithms because of the presence of caches and proxies, which introduces uncertainty into the data source. The CPPC algorithm avoids this limitation and needs no complicated hash data structure. In this algorithm, a UserID-URL relevance matrix is constructed: similar customer groups are discovered by measuring the similarity between column vectors, relevant web pages are obtained by measuring the similarity between row vectors, and frequent access paths can be discovered by further processing the latter. Experiments show the effectiveness of the algorithm. In the fourth part, this thesis brings some key techniques of data mining into web usage mining and, combining the characteristics of relational databases, designs and implements a web usage mining system, WLGMS, with visualization functions. It can provide the user with decision support and has good practicability

    本文演算法避免了這個缺陷,且不需要復雜的hash數據結構,通過構造一個userid - url關聯矩陣,對列向量進行相似性分析得到相似客戶群體,對行向量進行相似性度量獲得相關web頁面,對後者再進一步處理得到頻繁訪問路徑。實驗結果表明了演算法的有效性。第四是本文將傳統數據挖掘過程中的各種關鍵技術,引入到對web使用信息的挖掘活動中,結合關系數據庫的特點設計並實現了一個具有可視化功能的web使用挖掘系統wlgms 。
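As a rough illustration of the matrix step in example 9, the following Java sketch (all names and the toy data are ours, not from the thesis) builds a URL-by-user count matrix and compares column vectors to group similar customers and row vectors to relate web pages, using cosine similarity.

```java
// Toy illustration: rows are URLs, columns are user IDs, cells are visit counts.
public final class UserUrlMatrix {

    // Cosine similarity between two equally sized vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Similar customers: compare two column vectors (one column per user).
    static double userSimilarity(double[][] m, int user1, int user2) {
        double[] c1 = new double[m.length], c2 = new double[m.length];
        for (int url = 0; url < m.length; url++) {
            c1[url] = m[url][user1];
            c2[url] = m[url][user2];
        }
        return cosine(c1, c2);
    }

    // Related web pages: compare two row vectors (one row per URL).
    static double pageSimilarity(double[][] m, int url1, int url2) {
        return cosine(m[url1], m[url2]);
    }

    public static void main(String[] args) {
        double[][] visits = {        // 3 URLs x 3 users
                {3, 0, 2},
                {1, 1, 0},
                {0, 4, 0},
        };
        System.out.println(userSimilarity(visits, 0, 2));  // users 0 and 2 browse alike
        System.out.println(pageSimilarity(visits, 0, 1));  // URLs 0 and 1 attract similar users
    }
}
```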
  10. After discussing its data migration strategies and scalability, we propose a dynamic data migration algorithm and compare it with the disk cache in the clustered system

    本文討論了它的數據遷移策略和系統擴展性問題,並與集群式磁盤cache進行比較,提出了一種動態數據遷移演算法。
  11. This paper discusses the MSU's design, implementation, and verification, implements the integration of the "LongTeng R1" system, and studies the optimization of the instruction cache

    本課題組設計的「龍騰r1 」微處理器晶元,指令系統與motorola公司的powerpc603e兼容,體系結構自主設計。
  12. In the design, we make use of two EDA tools, MAX+PLUS II and Protel 99. Because a complex programmable logic device (CPLD) is used, the original hardware circuit can be kept untouched in the design and realization of the counting card, so it inherits the advantages of its predecessor. In order to quantitatively analyze the performance of the data acquisition system with the FIFO cache, we introduce queueing theory to build a mathematical model for testing its performance

    在設計中藉助了max + plusii和protel99兩個eda設計軟體。由於採用了復雜可編程邏輯器件cpld ,使得在計數卡的設計和實現中不用更改原硬體電路,對原設計的優點有很好的繼承。在驗證系統改進性能時,引入排隊論建立了數學模型對系統的工作性能進行定量分析,證明其達到了設計要求。
  13. There are three layers in deltafile: the virtual file system, the logical file system, and the buffer cache. Besides, deltafile provides a POSIX API interface

    為了對用戶透明的實現多種不同文件系統支持的功能, deltafile中採用了虛擬文件系統和邏輯文件系統相分離的體系結構。
  14. The premise of assuring QoS is to provide sufficient resources to meet demands. We come up with a method to partition the resources of a web cluster among classes that takes each class's resource demand and priority requirement into account: the system processing time or average access rate of each class is summed periodically, and its resource demand is evaluated using the stretch factor as the performance metric. The numbered nodes are then assigned in order to the classes ordered by priority, which helps to maintain data locality and improves the memory cache hit rate

    資源滿足需求是實現服務質量保證的前提,為滿足業務類動態的資源需求,我們提出一種支持業務類優先級和資源需求的資源劃分方法,通過按周期對業務類請求處理時間或平均訪問率進行統計,以響應擴展因子為質量指標對業務類預期的資源需求作出評估,採取按主機編號有序地分配給按優先級排序的業務類,減少業務類資源變動和提高主存cache命中率。
  15. Different performance parameters of OBS are presented: the burst loss rate, the length of the BHP cache, the lowest service rate, and the end-to-end delay in the OBS system

    通過模擬, obs核心路由器的突發包丟失率、 bhp緩存長度、最低服務速率和系統端到端時延等性能指標得到了確定。
  16. Using the DistributedMap interface, which is a simple interface to the dynamic cache, J2EE applications and system components can cache and share Java objects by storing a reference to the object in the cache

    Distributedmap介面是用於動態緩存的簡單介面,使用此介面,通過將引用存儲到緩存中的對象, j2ee應用程序和系統組件可以緩存和共享java對象。
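Example 16 describes reference-based object caching. The sketch below is a plain-Java analogue of that idea, not the WebSphere DistributedMap API: components publish a reference into a shared map, so every reader sees the same object rather than a copy.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Plain-Java analogue (not the WebSphere API): share objects by reference through one map.
public final class SharedObjectCache {

    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    public static void put(String key, Object value) {
        CACHE.put(key, value);                 // stores a reference, not a copy
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(String key) {
        return (T) CACHE.get(key);
    }

    public static void main(String[] args) {
        // Component A publishes a reference to a shared, thread-safe list.
        List<String> hotItems = new CopyOnWriteArrayList<>();
        hotItems.add("item-42");
        put("hotItems", hotItems);

        // Component B retrieves the very same object.
        List<String> seenByB = get("hotItems");
        System.out.println(seenByB == hotItems);   // true: one shared reference
    }
}
```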
  17. Operating system name and version, processor vendor/model/version/speed/cache size, number of processors, total physical memory, total virtual memory, devices, service type/protocol/port, and so forth

    操作系統名稱、版本號、處理器提供商/類型/版本/速率/緩存大小、處理器數量、物理內存總量、虛存總量、設備、服務類型/協議/埠號,等等。
  18. The original design of the counting data acquisition system carried over the approach used under the DOS operating system, but the multitasking Windows operating system is fundamentally different from DOS. Because the system transfers data by programmed query (polling), and the acquisition and transfer process cannot be interrupted, this data transfer mode under Windows will be disturbed by interrupts from other programs and lose the requested data. Therefore we improved the original design of the CD400BX ICT data acquisition system: we shifted the data transfer mode from programmed query to hardware interrupts, and installed a FIFO cache in front of the data channel, in order to guarantee the stability and reliability of data acquisition and transfer

    由於計數式數據採集系統的原設計方案沿用了以前dos操作系統下的設計思路,而windows操作系統作為多任務操作系統其運行機制和dos有根本的區別。由於系統採用的數據傳送方式為程序查詢方式,而數據採集和傳送的過程都不能被中斷。在windows操作系統下這種數據傳送方式會因為其他程序的中斷造成數據丟失。
  19. The special replacement algorithm of this information cache ensures that the nodes in the cache are the k most overloaded nodes and the k most underloaded nodes in the cluster system

    信息cache通過特殊的替換演算法,保證兩個cache中的信息分別為系統中負載最大的k個超載節點的信息和負載最小的k個欠載節點的信息。
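A minimal sketch of the bounded load-information cache described in example 19, assuming load is a single number per node (the class, field, and method names are ours): the replacement rule keeps only the k highest and the k lowest load reports.

```java
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;
import java.util.stream.Collectors;

// Toy sketch: keep only the k most overloaded and k most underloaded node reports.
public final class LoadInfoCache {

    record NodeLoad(String node, double load) {}

    private static final Comparator<NodeLoad> BY_LOAD =
            Comparator.comparingDouble(NodeLoad::load).thenComparing(NodeLoad::node);

    private final int k;
    private final TreeSet<NodeLoad> overloaded = new TreeSet<>(BY_LOAD.reversed()); // highest load first
    private final TreeSet<NodeLoad> underloaded = new TreeSet<>(BY_LOAD);           // lowest load first

    public LoadInfoCache(int k) {
        this.k = k;
    }

    // Replacement rule: admit the report, then evict whatever falls outside the top/bottom k.
    // (Repeated reports from the same node are not deduplicated in this simplified version.)
    public void report(String node, double load) {
        NodeLoad entry = new NodeLoad(node, load);
        overloaded.add(entry);
        underloaded.add(entry);
        while (overloaded.size() > k) overloaded.pollLast();
        while (underloaded.size() > k) underloaded.pollLast();
    }

    public List<String> overloadedNodes() {
        return overloaded.stream().map(NodeLoad::node).collect(Collectors.toList());
    }

    public List<String> underloadedNodes() {
        return underloaded.stream().map(NodeLoad::node).collect(Collectors.toList());
    }
}
```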
  20. Then the structure and function of each component of the cache system are shown. Client semantic caching differs from traditional object caching in granularity and query processing. To get better cache hits in a semantic cache, a cache replacement strategy called LWI (least weight item) is proposed

    由於客戶語義緩存的粒度及基於緩存的查詢處理與傳統以頁或元組對象為粒度的緩存不同,因此本文提出語義緩存最小權值項lwi替換策略。
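Example 20 names the LWI (least weight item) strategy but the entry does not define its weight. The following generic sketch (names are ours) shows only the eviction pattern: when the cache is full, the item with the smallest caller-supplied weight is dropped.

```java
import java.util.HashMap;
import java.util.Map;

// Generic "evict the least-weight item" pattern; the weight function is left to the caller.
public final class LeastWeightCache<K, V> {

    private final int capacity;
    private final Map<K, V> items = new HashMap<>();
    private final Map<K, Double> weights = new HashMap<>();

    public LeastWeightCache(int capacity) {
        this.capacity = capacity;
    }

    public void put(K key, V value, double weight) {
        if (!items.containsKey(key) && items.size() >= capacity) {
            // Replacement: drop the item that currently carries the smallest weight.
            K victim = weights.entrySet().stream()
                    .min(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElseThrow();
            items.remove(victim);
            weights.remove(victim);
        }
        items.put(key, value);
        weights.put(key, weight);
    }

    public V get(K key) {
        return items.get(key);
    }
}
```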