
Time-Predictable Architectures

Specifications
Hardcover, 190 pp. | English
John Wiley & Sons, 2013
ISBN13: 9781848215931
Part of the FOCUS Series
Expected delivery time: approximately 16 working days

Summary

This book is concerned with processor architectures suitable for embedded real-time systems. Real-time embedded software requires increasingly higher performance, so the authors consider processors that implement advanced mechanisms such as pipelining, out-of-order execution, branch prediction, cache memories, multithreading, multicore architectures, etc. The authors investigate the time predictability of such schemes.

Specifications

ISBN13: 9781848215931
Language: English
Binding: hardcover
Number of pages: 190

Table of contents

PREFACE ix
CHAPTER 1. REAL-TIME SYSTEMS AND TIME PREDICTABILITY 1
1.1. Real-time systems 1
1.1.1. Introduction 1
1.1.2. Soft, firm and hard real-time systems 4
1.1.3. Safety standards 6
1.1.4. Examples 7
1.2. Time predictability 15
1.3. Book outline 16
CHAPTER 2. TIMING ANALYSIS OF REAL-TIME SYSTEMS 19
2.1. Real-time task scheduling 19
2.1.1. Task model 19
2.1.2. Objectives of task scheduling algorithms 20
2.1.3. Mono-processor scheduling for periodic tasks 21
2.1.4. Scheduling sporadic and aperiodic tasks 23
2.1.5. Multiprocessor scheduling for periodic tasks 23
2.2. Task-level analysis 24
2.2.1. Flow analysis: identifying possible paths 25
2.2.2. Low-level analysis: determining partial execution times 27
2.2.3. WCET computation 29
2.2.4. WCET analysis tools 32
2.2.5. Alternative approaches to WCET analysis 32
2.2.6. Time composability 35
CHAPTER 3. CURRENT PROCESSOR ARCHITECTURES 37
3.1. Pipelining 37
3.1.1. Pipeline effects 38
3.1.2. Modeling for timing analysis 41
3.1.3. Recommendations for predictability 49
3.2. Superscalar architectures 49
3.2.1. In-order execution 50
3.2.2. Out-of-order execution 52
3.2.3. Modeling for timing analysis 55
3.2.4. Recommendations for predictability 56
3.3. Multithreading 57
3.3.1. Time-predictability issues raised by multithreading 58
3.3.2. Time-predictable example architectures 60
3.4. Branch prediction 62
3.4.1. State-of-the-art branch prediction 62
3.4.2. Branch prediction in real-time systems 64
3.4.3. Approaches to branch prediction modeling 65
CHAPTER 4. MEMORY HIERARCHY 69
4.1. Caches 71
4.1.1. Organization of cache memories 71
4.1.2. Static analysis of the behavior of caches 74
4.1.3. Recommendations for timing predictability 81
4.2. Scratchpad memories 87
4.2.1. Scratchpad RAM 87
4.2.2. Data scratchpad 87
4.2.3. Instruction scratchpad 88
4.3. External memories 93
4.3.1. Static RAM 93
4.3.2. Dynamic RAM 97
4.3.3. Flash memory 103
CHAPTER 5. MULTICORES 105
5.1. Impact of resource sharing on time predictability 105
5.2. Timing analysis for multicores 106
5.2.1. Analysis of temporal/bandwidth sharing 107
5.2.2. Analysis of spatial sharing 110
5.3. Local caches 111
5.3.1. Coherence techniques 112
5.3.2. Discussion on timing analyzability 115
5.4. Conclusion 121
5.5. Time-predictable architectures 121
5.5.1. Uncached accesses to shared data 121
5.5.2. On-demand coherent cache 123
CHAPTER 6. EXAMPLE ARCHITECTURES 127
6.1. The multithreaded processor Komodo 127
6.1.1. The Komodo architecture 128
6.1.2. Integrated thread scheduling 130
6.1.3. Guaranteed percentage scheduling 131
6.1.4. The jamuth IP core 132
6.1.5. Conclusion 134
6.2. The JOP architecture 134
6.2.1. Conclusion 136
6.3. The PRET architecture 136
6.3.1. PRET pipeline architecture 136
6.3.2. Instruction set extension 137
6.3.3. DDR2 memory controller 137
6.3.4. Conclusion 138
6.4. The multi-issue CarCore processor 138
6.4.1. The CarCore architecture 139
6.4.2. Layered thread scheduling 140
6.4.3. CarCore thread scheduling algorithms 142
6.4.4. Conclusion 146
6.5. The MERASA multicore processor 146
6.5.1. The MERASA architecture 147
6.5.2. The MERASA processor core 148
6.5.3. Interconnection bus 149
6.5.4. Memory hierarchy 149
6.5.5. Conclusion 150
6.6. The T-CREST multicore processor 151
6.6.1. The Patmos processor core 151
6.6.2. The T-CREST interconnect 152
6.6.3. Conclusion 153
6.7. The parMERASA manycore processor 154
6.7.1. System overview 154
6.7.2. Memory hierarchy 155
6.7.3. Communication infrastructure 157
6.7.4. Peripheral devices and interrupt system 159
6.7.5. Conclusion 161
BIBLIOGRAPHY 163
INDEX 179
