
Yazar "Tuncer, Kadir" seçeneğine göre listele

Now showing 1 - 2 of 2
  • Item
    Reducing Performance Impact of Process Variation For Data Caches
    (IEEE, 2013) Kadayif, Ismail; Tuncer, Kadir
    With ever finer-grained process technologies, it is becoming extremely difficult to keep critical physical device parameters, including channel length, gate oxide thickness, and dopant ion concentration, within desired bounds. Variations in these parameters can lead to dramatic variations in access latencies in Static Random Access Memory (SRAM) devices: different lines of the same cache may have different access latencies. A simple solution to this problem is to adopt the worst-case latency paradigm. While this egalitarian cache management is simple, it may introduce significant performance overhead for data cache accesses. To overcome varying access latencies across different data cache lines, we employ a small table storing the access latencies of cache lines. This table is accessed during data cache access to give the hardware a hint about how long to wait for data to become available.
  • Item
    Reducing performance impact of process variation for data caches
    (IEEE Computer Society, 2013) Kadayif, Ismail; Tuncer, Kadir
    With ever finer-grained process technologies, it is becoming extremely difficult to keep critical physical device parameters, including channel length, gate oxide thickness, and dopant ion concentration, within desired bounds. Variations in these parameters can lead to dramatic variations in access latencies in Static Random Access Memory (SRAM) devices: different lines of the same cache may have different access latencies. A simple solution to this problem is to adopt the worst-case latency paradigm. While this egalitarian cache management is simple, it may introduce significant performance overhead for data cache accesses. To overcome varying access latencies across different data cache lines, we employ a small table storing the access latencies of cache lines. This table is accessed during data cache access to give the hardware a hint about how long to wait for data to become available. © 2013 The Chamber of Turkish Electrical Engineers-Bursa.
