What does "cache memory system" mean in Chinese?

cache memory system explained
超高速緩沖存儲器系統 (high-speed cache memory system)
  • cache : n. 1. a hiding place (e.g. where explorers store food and equipment); a hidden store. 2. the things so stored. 3. [Computing] cache memory. vt. 1. to store up; to hide away; to hoard. 2. [Computing] to store … to disk; to cache.
  • memory : n. 1. memory; the power of remembering; [Automation] a memory (storage device); a method of information storage; storage capacity. 2. a recollection. 3. commemoration; remembrance. 4. posthumous reputation. 5. the span of time that can be recalled.
  • system : n. 1. a system; a scheme of classification; an organization; equipment, apparatus. 2. a way; a method; a procedure. 3. an institution; a doctrine. 4. order, regularity. 5. ...
  1. Similarly, according to the DSP data access pattern and the design philosophy of the MD32, a data memory system combining a data cache and data RAM is proposed. In the MD32 RTOS, the memory management system is one of the most important parts.

    Meanwhile, in view of the data access characteristics of DSP applications and the MD32's design goal of combining DSP and RISC features, a data memory system containing data RAM and a data cache is designed, and the access rules of this data memory system are given.
  2. The premise for assuring QoS is providing sufficient resources to meet demand. We propose a method for partitioning the resources of a web cluster among service classes that takes each class's resource demand and priority into account: the system processing time or average access rate of each class is summed periodically, and its resource demand is estimated with the stretch factor as the performance metric. The nodes, numbered in order, are then assigned to the classes ordered by priority, which helps maintain data locality and improves main-memory cache hits.

    Meeting resource demand is the prerequisite for guaranteeing quality of service. To satisfy the dynamic resource demands of the service classes, we propose a resource partitioning method that supports class priorities and resource demands: the request processing time or average access rate of each class is collected periodically, each class's expected resource demand is estimated with the response stretch factor as the quality metric, and the hosts, ordered by number, are assigned to the classes ordered by priority, which reduces resource churn among classes and improves the main-memory cache hit rate.
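    A minimal sketch of the node-assignment step described in item 2 (purely illustrative; the class names, demand shares, and proportional split are assumptions, not taken from the quoted work), in Python:

      # Assign numbered nodes, in order, to service classes sorted by priority,
      # giving each class a contiguous share proportional to its estimated demand.
      def assign_nodes(num_nodes, classes):
          # classes: list of (name, priority, demand_share); higher priority served first
          classes = sorted(classes, key=lambda c: c[1], reverse=True)
          total = sum(share for _, _, share in classes)
          assignment, next_node = {}, 0
          for name, _, share in classes:
              count = max(1, round(num_nodes * share / total))
              count = min(count, num_nodes - next_node)  # stay within the node pool
              assignment[name] = list(range(next_node, next_node + count))
              next_node += count
          return assignment

      # Example: 8 numbered nodes split among three classes with assumed demand shares.
      print(assign_nodes(8, [("gold", 3, 0.5), ("silver", 2, 0.3), ("bronze", 1, 0.2)]))

    Keeping each class's nodes contiguous is what preserves data locality and keeps each node's working set, and hence its main-memory cache hit rate, stable.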
  3. Operating system name and version, processor vendor/model/version/speed/cache size, number of processors, total physical memory, total virtual memory, devices, service type/protocol/port, and so forth.

    Operating system name, version number, processor vendor/type/version/speed/cache size, number of processors, total physical memory, total virtual memory, devices, service type/protocol/port number, and so on.
  4. The emphasis then shifts to the design and implementation of the memory system. Before the detailed discussion, some background is introduced, such as memory classification and hierarchy, address space, access control, caches, memory coherence, and common memory system implementation methods. The architecture of the target memory system is then presented and divided by function into two subsystems: the virtual memory management subsystem and the memory access subsystem.

    This paper then turns to the design and implementation of the memory system of the target VLIW processor. It first analyzes the factors involved in memory system design, including the memory devices and their hierarchy, address space, access control, the cache, memory coherence, and common implementation approaches, then presents the overall framework of the target memory system and divides it by function into two subsystems: virtual memory management and memory access.
  5. 6. Several improvements are made to the daily log system: the daily log data is written to memory using a cache, mitigating the problem of insufficient DOM space; log overflow is avoided by using a daily log queue; and mutual exclusion between logging and log sending is handled.

    6. The following improvements are made to the log system: cache techniques are used to write the log data in memory, easing the shortage of DOM space; a log queue is used to avoid log overflow; and mutually exclusive access to the log data by the logger and the log-sending program is implemented.
  6. In the first part, this paper discusses the key problems in designing the architecture of each component, including why we choose partitioned register files, use a 2-way set-associative data cache with a write-back policy, and add scratch-pad SRAM to the original memory system, and how their parameters are determined. A memory configuration based on this discussion is then presented.

    This paper first describes the design and implementation of each memory in the DPC, discusses in detail the choice of a partitioned register file structure and proposes four rules for configuring the register file parameters, describes the trade-offs in choosing the data cache capacity and policies, and explains the advantages of pairing scratch-pad SRAM with the cache.
  7. In practice, the file system calls the kernel cache manager, which satisfies requests from an in-memory cache when possible and makes recursive calls back into the file system driver to fill cache buffers.

    In practice, the file system calls the kernel cache manager; when possible, this manager serves requests from the in-memory cache, and it recursively calls the file system driver to fill the cache buffers.
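    A minimal sketch of that read path (hypothetical names; a real kernel cache manager is far more involved), in Python:

      # Serve reads from an in-memory cache when possible; on a miss, call back
      # into the (stand-in) file system driver to fill the cache buffer.
      class CacheManager:
          def __init__(self, fs_driver_read):
              self.buffers = {}                  # (path, block) -> bytes
              self.fs_driver_read = fs_driver_read

          def read_block(self, path, block):
              key = (path, block)
              if key not in self.buffers:        # cache miss: fill the buffer
                  self.buffers[key] = self.fs_driver_read(path, block)
              return self.buffers[key]           # hit, or freshly filled buffer

      # Example with a fake "driver" that fabricates block contents.
      cm = CacheManager(lambda path, block: f"<data {path}:{block}>".encode())
      print(cm.read_block("/etc/hosts", 0))      # miss: goes through the driver
      print(cm.read_block("/etc/hosts", 0))      # hit: served from memory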
  8. The method caches data from the backend corporate database and manages it using main-memory database techniques, thereby forming an in-memory relational database management system.

    It can cache the data of the backend database and rely on a main-memory database to manage that data, forming an in-memory database management system.
  9. Since a cache memory system can reduce the need for main memory access, it greatly reduces the potential memory access contention in shared memory multiprocessor systems

    The cache can be viewed as a buffer adapter between main memory and the CPU; with the help of the cache, speed matching between DRAM memory and the CPU can be accomplished efficiently.
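    For illustration (not part of the quoted sources), the standard average-memory-access-time model makes the point: $\text{AMAT} = t_{\text{hit}} + r_{\text{miss}} \cdot t_{\text{penalty}}$. With an assumed 1 ns cache hit time, a 5% miss rate, and a 60 ns main-memory penalty, AMAT is about $1 + 0.05 \times 60 = 4$ ns, and only 5% of references reach main memory at all, which is what eases contention in a shared-memory multiprocessor.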
  10. This algorithm can improve the hit rate of the backend servers' main-memory caches, thus increasing the performance of the whole cluster system.

    This algorithm can raise the main-memory cache hit rate of the backend servers and thereby improve the performance of the entire cluster system.
  11. Based on an investigation of memory access behavior, through experiments with the SPEC CPU2000 benchmarks running on the Godson-2 processor, this dissertation proposes and evaluates several policies that significantly improve the performance of the cache and memory system. The proposed techniques increase memory access bandwidth while decreasing access latency, so the IPC of the processor is increased.

    Starting from the goals of raising the processor's IPC and optimizing its memory access latency and bandwidth, and drawing on an analysis of the memory access behavior of the Godson-2 (Loongson 2) processor running the SPEC CPU2000 benchmarks, this thesis studies memory system performance optimization, proposes a series of optimization techniques for the memory system, and evaluates and analyzes the proposed techniques.
  12. To resolve the issue of the system's execution efficiency, in addition to a well-designed program structure, techniques such as using a cache memory can give your protection management interface program the best execution efficiency.

    To solve the execution-efficiency problem, besides a well-crafted program structure, an approach similar to a cache memory can be used, so that the protection management interface program achieves the best execution efficiency.
  13. Cache items with this priority level are the most likely to be deleted from the cache as the server frees system memory

    When the server frees system memory, cache items with this priority level are the most likely to be deleted from the cache.
  14. Cache items with this priority level are the least likely to be deleted from the cache as the server frees system memory

    When the server frees system memory, cache items with this priority level are the least likely to be deleted from the cache.
  15. Cache items with this priority level are less likely to be deleted as the server frees system memory than those assigned a

    When the server frees system memory, cache items with this priority level are less likely to be deleted than items assigned a
  16. Cache items with this priority level are more likely to be deleted from the cache as the server frees system memory than items assigned a

    When the server frees system memory, cache items with this priority level are more likely to be deleted from the cache than items assigned a
  17. The cache items with this priority level will not be automatically deleted from the cache as the server frees system memory

    When the server frees system memory, cache items with this priority level will not be automatically deleted from the cache.
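    Items 13-17 describe priority-based eviction (the wording resembles the documentation of ASP.NET's CacheItemPriority levels). A minimal sketch of the idea (illustrative only; not the actual ASP.NET implementation), in Python:

      # Evict lower-priority items first when memory must be freed;
      # items marked NOT_REMOVABLE are never evicted automatically.
      LOW, NORMAL, HIGH, NOT_REMOVABLE = 1, 2, 3, 4

      class PriorityCache:
          def __init__(self):
              self.items = {}                    # key -> (priority, value)

          def insert(self, key, value, priority=NORMAL):
              self.items[key] = (priority, value)

          def free_memory(self, count):
              # Candidates sorted so the lowest priority is dropped first.
              removable = sorted(
                  (k for k, (p, _) in self.items.items() if p != NOT_REMOVABLE),
                  key=lambda k: self.items[k][0])
              for key in removable[:count]:
                  del self.items[key]

      cache = PriorityCache()
      cache.insert("a", 1, LOW)
      cache.insert("b", 2, HIGH)
      cache.insert("c", 3, NOT_REMOVABLE)
      cache.free_memory(2)                       # drops "a", then "b"; never "c"
      print(sorted(cache.items))                 # ['c']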
  18. Hardware/software co-verification is carried out to guarantee the correctness of the design. In the hardware design of the memory system, the appropriate memory capacity, SRAM blocks, associativity, and placement of the cache in the pipeline are selected according to the system specification.

    In the hardware design of the memory system, the performance targets were always used as the basis; the difficulties of choosing the memory capacity, the SRAM bank specifications, the associativity, and the placement of the cache in the pipeline were overcome, and an instruction memory system that meets the required targets was designed.
  19. The scheme adopts pipelining and parallel techniques, together with a fast FIFO as a cache, instead of direct programming operations, which dramatically increases the transfer speed; an FPGA (field-programmable gate array) implements the system's complex control logic, making it highly integrated, flexible, and fast; and a 386EX-based embedded system running the VxWorks real-time operating system replaces the microcontroller-based system, simplifying the hardware design, enhancing the overall performance of the SSR, and making the system easier to apply to future projects.

    The design adopts pipelining and parallel techniques, and uses a fast FIFO buffer instead of programming the flash directly, greatly increasing the rate at which the flash chips store data; FPGA technology implements the system's main control logic, giving high integration, good flexibility, and high speed; and a 386EX-based embedded system with the VxWorks embedded real-time operating system replaces the microcontroller system and its programming, improving overall system performance, easing the hardware design burden, and giving future development of the system good continuity.
  20. Allocates video memory as a cache of system memory

    Allocates video memory as a cache for system memory.