Performance and Energy Trade-offs for 3D IC NoC Interconnects and Architectures

A cycle-accurate simulator implementing the dense 3D mesh architectures with core counts of 64 and 256 is used for the experiments. The switches are modeled with input arbitration, output arbitration, and routing stages [3]. Each switch has 8 virtual channels (VCs) to prevent deadlock, and 16 buffers per switch, enabling multiple flits to be routed at once. Energy metrics are calculated using a 2.5 GHz global clock, and all simulations are run for 5000 cycles, with energy and performance metrics collected after the 1000th cycle to allow the network to settle. Wireline links are designed to transfer an entire flit in a single cycle unless the link is too long; in that case, FIFO buffers are inserted so that flits can still be transferred between stages in a single cycle. The simulations are run with flit sizes of both 32 and 64 bits, and all simulations use a packet size of 64 flits. The system is provisioned with enough wires to transmit a single flit in one cycle: with 32-bit flits there are 32 data wires per link, and with 64-bit flits there are 64 data wires per network link. The wormhole routing table is constructed using a hop-based Dijkstra algorithm.
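The final sentence names a hop-based Dijkstra construction; the sketch below illustrates one way to build such a next-hop table for a 4x4x4 (64-core) mesh, assuming unit-cost links. Function names and the table layout are ours, not the paper's.

```python
# Minimal sketch of a hop-based Dijkstra routing-table build for a 3D mesh.
# All names (mesh_neighbors, build_routing_table) are illustrative; the paper
# does not specify its implementation.
import heapq

def mesh_neighbors(node, dims):
    """Yield the axis-aligned neighbors of a node in an X x Y x Z mesh."""
    x, y, z = node
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < dims[0] and 0 <= ny < dims[1] and 0 <= nz < dims[2]:
            yield (nx, ny, nz)

def build_routing_table(src, dims):
    """Return {dest: next_hop} using Dijkstra with unit (hop) edge costs."""
    dist, next_hop = {src: 0}, {}
    heap = [(0, src, src)]          # (hops, node, first hop taken from src)
    while heap:
        d, node, first = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr in mesh_neighbors(node, dims):
            nd = d + 1              # hop-based: every link costs one hop
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                # The first hop out of src determines the table entry.
                next_hop[nbr] = nbr if node == src else first
                heapq.heappush(heap, (nd, nbr, next_hop[nbr]))
    return next_hop

table = build_routing_table((0, 0, 0), dims=(4, 4, 4))   # 64-core dense mesh
print(table[(3, 3, 3)])   # first hop toward the far corner
```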

Reliability-Performance Trade-offs in Photonic NoC Architectures

The 2DFT with full path multiplicity, which also has dedicated single-hop photonic links between all nodes, has higher energy than the Corona PNoC because of its underlying blocking electronic mesh architecture, which the header flit uses to set up the path. Despite the multi-cycle and multi-hop links in both the MSB and NMSB PNoC, the NMSB has lower packet energy than the MSB PNoC due to its non-blocking links. The NMSB provides multiple links between source and destination through the use of special segmented buses. This helps alleviate the problem of skewed traffic distributions, and hence the NMSB architecture achieves lower average packet energy than the Clos and MSB PNoC in all the application-based traffic scenarios studied here.

Design Trade-offs for reliable On-Chip Wireless Interconnects in NoC Platforms

There have been many NoC architectures proposed. In [2], the authors list the most prominent interconnect architectures suggested so far, which include SPIN (Scalable, Programmable and Integrated Network), CLICHE (Chip-Level Integration of Communicating Heterogeneous Elements), torus, folded torus, octagon, and Butterfly Fat-Tree (BFT). However, if all of these topologies are implemented as completely wired interconnects, none of them would be scalable beyond a certain point. This is because, as technology shrinks, delay and power dissipation on traditional metal wires become the limiting factor in performance compared to gate delays. Also, as wires become thinner, they become more susceptible to noise and thus less reliable.

Monetary policy with sectoral trade offs

Along with major differences in the time span over which they yield consumer utility, durable and non-durable consumption goods are characterized by deep peculiarities in their production and price-setting. Such structural traits are paramount to the monetary transmission mechanism (Barsky et al., 2007) and need to be accounted for when designing realistic multi-sector economies. From a normative viewpoint, the literature available to date has extensively reported that sectoral heterogeneity presents the central bank with a nontrivial trade-off. With a single instrument, the policy maker cannot replicate the frictionless equilibrium allocation in each sector of the economy. This principle applies whenever sectoral discrepancies concern at least one of the following characteristics: price rigidity (Aoki, 2001), durability of different consumption goods (Erceg and Levin, 2006), inter-sectoral trade of input materials (Huang and Liu, 2005; Petrella and Santoro, 2011). All these factors are widely recognized to be major determinants of the relative price of goods produced by different sectors, which in turn exerts a strong influence on aggregate inflation (Reis and Watson, 2010). Therefore, drawing predictions based on single-sector models fails to reflect the underlying sources of aggregate inflation dynamics. The present study addresses these issues from a normative perspective, integrating the main sources of sectoral heterogeneity into a two-sector New Keynesian economy.

Assessing trade-offs between energy consumption and security in sensor networks: simulations or testbeds?

For the attackers: if any position with the realistic radio channel is too particular/friendly, the only choice is to build an ideal position ... Exaggerating Unfriendliness of the Execut[r]


Trade-offs of ETFs - An Examination of Clean and Dirty Exchange Traded Funds in the Energy Sector

Moreover, it is interesting to note that while the standard deviation is slightly higher for the clean portfolio than for the dirty portfolio, the differences in terms of beta are larger. For the dirty portfolio, with a beta closer to 1, this implies that over the period measured in this study the returns of the dirty portfolio move more closely with the market (Bodie et al., 2014). Such a value is in line with the research by Buetow and Henderson (2012) showing that the majority of ETFs traded on U.S. exchanges track the returns of their benchmark indices closely. The notion that ETFs tracking oil prices follow their underlying asset closely gives more weight to this possibility (Ivanov, 2013). A possible explanation for the differences could be that many of the companies in the oil sector are much larger than companies in the renewable energy sector. Oil companies, due to their size, are therefore more likely than their counterparts to drive the market in a certain direction. Hence, the beta close to 1 could at least partially be a result of this relationship between the market and the large oil companies. As one relevant example, Exxon Mobil Corporation (XOM) has been the largest company in the world in terms of market capitalization and is an energy company engaged in the production and exploration of crude oil and natural gas (Reuters). Also, given that the clean energy sector is younger and smaller, higher volatility compared to the larger, "dirty", energy sector is reasonable to assume.
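For readers outside finance, beta here is the regression slope of portfolio returns on market returns. A minimal sketch of how such a figure is computed (the return series are made up for illustration, not the study's data):

```python
# Illustrative beta estimate: beta = Cov(r_p, r_m) / Var(r_m).
# The return series below are hypothetical; the study's actual data differ.
import numpy as np

portfolio_returns = np.array([0.012, -0.004, 0.021, 0.008, -0.011])  # hypothetical
market_returns    = np.array([0.010, -0.006, 0.018, 0.009, -0.010])  # hypothetical

beta = (np.cov(portfolio_returns, market_returns, ddof=1)[0, 1]
        / np.var(market_returns, ddof=1))
print(f"beta = {beta:.2f}")  # a value near 1 moves roughly one-for-one with the market
```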

Mitigating Turkey's trilemma trade offs

Then, our analysis continues by revealing a role for central bank foreign reserves and required reserves in mitigating the trilemma tradeoffs – we show that foreign reserves to GDP ratio[r]


On the use of fake sources for source location privacy: trade offs between energy and privacy

Figure 6b shows an increase in the capture ratio of approximately 100%, to an overall value of around 80%, over FS2 with unique messages. This increase can be explained by the simple coordination between attackers. If an attacker has a fake source within its operating quadrant, it will receive the corresponding fake messages and prevent the other attackers from reacting to the associated fake source, even when they receive the same fake messages. An attacker A will be located in a quadrant that contains the real source. As other attackers will potentially receive fake messages before they are received by A, A can move towards the real source whilst dropping messages from fake sources, i.e., A will not be perturbed by the fake messages. The energy consumption associated with the attacker is expected to increase in the context of multiple coordinating attackers, and this is corroborated by Figure 6d, which shows an increase in attacker energy compared to Figure 4d.
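A minimal sketch of the coordination rule as we read it from the excerpt: the attacker whose quadrant contains a fake source "claims" it, and the other attackers then drop that source's fake messages. The quadrant model and all names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the attacker-coordination rule described above (our reading,
# not the paper's code): the attacker whose quadrant contains a fake source
# claims it, and the other attackers then drop that source's fake messages.
from collections import namedtuple

Msg = namedtuple("Msg", "fake source_id source_pos")

def quadrant(pos):
    """Map an (x, y) position to one of four quadrants around the origin."""
    x, y = pos
    return (x >= 0, y >= 0)

class Attacker:
    def __init__(self, pos, claimed):
        self.pos = pos
        self.claimed = claimed      # shared set of claimed fake-source ids

    def handle(self, msg):
        """React to a message; return True if the attacker moves toward it."""
        if msg.fake:
            if quadrant(msg.source_pos) == quadrant(self.pos):
                self.claimed.add(msg.source_id)   # claim: it's in my quadrant
            if msg.source_id in self.claimed:
                return False                      # drop: another attacker handles it
        return True                               # real (or unclaimed) traffic

claimed = set()
a1, a2 = Attacker((5, 5), claimed), Attacker((-5, 5), claimed)
fake = Msg(fake=True, source_id="f1", source_pos=(6, 7))  # in a1's quadrant
a1.handle(fake)                     # a1 claims the fake source
print(a2.handle(fake))              # False: a2 is not perturbed by it
```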

Communication–Computation Trade-offs in PIR

We evaluate our new PIR construction, MulPIR, which uses somewhat-homomorphic encryption (SHE), and compare it against SealPIR. MulPIR enables a trade-off of computation for communication, reducing the communication of SealPIR by 80% while roughly doubling the computation. We also provide the first empirical evaluation of PIR with recursion level beyond three (see Appendix D). Surprisingly, we observe that a higher recursion level does not necessarily improve communication. This is due to the fact that lattice-based HE encryptions have a complex relationship between parameter sizes, support for homomorphic operations, and number of encryption slots. While recursion improves complexity when the database size increases beyond the number of encryption slots in a ciphertext, increasing the database size requires support for more homomorphic operations, which leads to larger parameters and more slots. In our experiments, Gentry–Ramzan PIR always achieves the best communication complexity but comes with a significant computation cost that can be prohibitive in some settings. However, we show that in terms of monetary cost, Gentry–Ramzan can outperform all other PIR approaches considered when database elements are small.
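To make the recursion trade-off concrete, here is a back-of-the-envelope sketch under the simplifying assumption that recursion level d views an N-element database as a d-dimensional hypercube, so the client uploads roughly d * N^(1/d) selection ciphertexts. It deliberately ignores the SHE parameter growth the excerpt describes, which is exactly why deeper recursion can stop helping in practice.

```python
# Back-of-the-envelope upload cost of recursive PIR, assuming level d views
# an N-element database as a d-dimensional hypercube: ~ d * N**(1/d)
# selection ciphertexts. Ignores the SHE parameter growth noted above.
import math

def upload_ciphertexts(n_items, depth):
    side = math.ceil(n_items ** (1.0 / depth))   # hypercube side length
    return depth * side                          # one selection vector per dimension

for d in (1, 2, 3, 4):
    print(f"recursion level {d}: ~{upload_ciphertexts(1_000_000, d)} ciphertexts")
```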

Trade offs in designing High Performance Digital Adder based on Heterogeneous Architecture

Adders are commonly used in various electronic applications, e.g., digital signal processing, where they implement algorithms such as FIR and IIR filtering. In the past, the major challenge for the VLSI designer was to reduce chip area using efficient optimization techniques. The next phase was to increase the speed of operation to achieve fast calculations; today's microprocessors, for example, execute millions of instructions per second. Speed of operation is one of the major constraints in designing DSP processors. Now, since most of today's commercial electronic products are portable, such as mobile phones and laptops, and require longer battery life, a great deal of research is devoted to reducing power consumption. There are therefore three performance parameters over which a VLSI designer has to optimize a design: area, speed, and power. It is very difficult to satisfy all constraints in a particular design, so depending on the demand or application, some compromise between constraints has to be made.
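As a toy illustration of the area/speed trade-off in adder design (our own first-order unit-gate estimates, not figures from the paper), compare a ripple-carry adder, whose delay grows linearly in the word width, with a carry-lookahead-style adder, whose delay grows roughly logarithmically at the cost of extra area:

```python
# First-order area/delay estimates for two n-bit adder styles (illustrative
# unit-gate numbers, not the paper's data): ripple-carry delay grows linearly
# with width, carry-lookahead roughly logarithmically but with more area.
import math

def ripple_carry(n):
    # one full adder per bit; the carry ripples through all n stages
    return {"area_gates": 5 * n, "delay_gates": 2 * n + 1}

def carry_lookahead(n):
    # tree-structured carry computation: log-depth, larger area
    return {"area_gates": 9 * n, "delay_gates": 4 * math.ceil(math.log2(n)) + 2}

for n in (8, 32, 64):
    print(n, "bits:", "RCA", ripple_carry(n), "CLA", carry_lookahead(n))
```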

Connecting the Speed-Accuracy Trade-Offs in Sensorimotor Control and Neurophysiology Reveals Diversity Sweet Spots in Layered Control Architectures

... the reaching time and target width allows faster speed to be achieved with a small decrement in accuracy. On the other hand, from Chapter 2, we can observe that the speed-accuracy tradeoffs (SATs) of the hardware implementing control can be much more severe. Improving either speed or accuracy in nerve signaling or muscle actuation requires profligate biological resources [185]; as a consequence, only a few types of nerves and muscles are built to be both fast and accurate (Fig. 2.1a). In this chapter, we build upon the theory presented in Section 5 to study how nature de-constrains neurophysiological hardware constraints in sensorimotor control. These results show that diversity between hardware components can be exploited to achieve both fast and accurate control performance despite being implemented using slow or inaccurate hardware. Such "diversity sweet spots" (DSSs) are ubiquitous in biology and technology, and are arguably the central benefit of layered architectures. DSSs explain why large heterogeneities exist in biological components and also systematize how systems engineers routinely create fast and accurate technologies from imperfect hardware.
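The truncated opening sentence appears to reference a Fitts's-law-style speed-accuracy relation between reaching time and target width. For readers unfamiliar with it, the standard form is given below; this is background we supply, not the thesis's own notation.

```latex
% Fitts's law, standard form (background we supply; not the thesis's notation):
% movement time MT grows with distance D and shrinks with target width W,
% so widening the target permits faster reaches at a small accuracy cost.
\[
  MT = a + b\,\log_2\!\left(\frac{2D}{W}\right)
\]
```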

Exploration of 2D EDA tool impact on the 3D MPSoC architectures performance

The need for higher-performance devices to enable more complex applications continues to drive the growth of electronic design, especially in the mobile markets. 3D integration is one of the feasible technologies for increasing system performance and device integration by stacking multiple dies interconnected using through-silicon vias (TSVs). NoC-based Multiprocessor System on Chip (MPSoC) architecture has become the primary technology for providing the higher performance needed to support more complex applications. In this paper, we perform an exploration and analysis of the impact of 2D EDA tool parameters on the performance of 3D MPSoC architectures (3D Mesh MPSoC and heterogeneous 3D MPSoC stacking) in terms of timing and power characteristics. Exploration results show that the 2D EDA tool parameters have a stronger impact on timing performance than on power consumption. Furthermore, it is also shown that the heterogeneous 3D MPSoC architecture has a smaller footprint, higher speed, and lower power consumption than the 3D Mesh MPSoC for the same number of processing elements, suggesting that it is a better design approach given the limited capability of 2D EDA tools for 3D design.

Multihop relaying and multiple antenna techniques: performance trade offs in cellular systems

It has been observed that a cellular capacity wall of 350 Mb/s/cell [6] is on the horizon. Therefore, it is necessary to use smaller cells in order to achieve higher spectral efficiency over an area (b/s/Hz/km²). One method of achieving this is to divide the larger cell, typically 1 to 2 kilometers in radius, into smaller subcells in which relay stations (RSs) serve the mobile stations (MSs) closest to them. Numerous researchers have looked at various approaches to MH relaying in cellular systems [7-12]. Two proposals under consideration for 4G IMT-Advanced [13-15], IEEE 802.16m [16,17] and LTE-Advanced [18,19], will include relaying as an option. Clearly, relaying requires more complicated system-level algorithms (the medium access control (MAC) layer and higher) in order to achieve good results in a network of wireless stations. Also, MH relaying requires additional system resources (time or frequency slots), and hence the spectral efficiency (measured in b/s/Hz) may suffer under some conditions. It seems natural to combine MIMO and relaying techniques to improve the performance of a cellular system, but it is necessary to determine how well they work together and what trade-offs exist in combining them. In addition, it is necessary to use a system model that accurately captures the radio frequency (RF) propagation of a typical cellular system.
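To see why smaller cells raise area spectral efficiency, a quick illustrative calculation (our own, with made-up numbers, not the paper's results): shrinking the cell radius by a factor k reuses the spectrum roughly k² times per unit area, while relaying's extra slots discount the per-link b/s/Hz.

```python
# Illustrative area-spectral-efficiency arithmetic (made-up numbers):
# shrinking cell radius by k reuses spectrum ~k**2 times per unit area;
# a relaying overhead factor then discounts per-link b/s/Hz.
import math

def area_spectral_efficiency(link_bps_per_hz, cell_radius_km, relay_overhead=1.0):
    """b/s/Hz/km**2 for one (sub)cell, discounted by relaying slot overhead."""
    cell_area = math.pi * cell_radius_km ** 2
    return (link_bps_per_hz / relay_overhead) / cell_area

base = area_spectral_efficiency(2.0, 1.0)                        # single 1 km cell
split = area_spectral_efficiency(2.0, 0.25, relay_overhead=2.0)  # R/4 subcells, 2 slots
print(f"{base:.2f} vs {split:.2f} b/s/Hz/km^2")   # smaller cells still win
```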

Trade-offs in System of Systems Acquisition

The aim of the acquisition scenario is to perform a trade-off between satisfying the three objectives with the limited budget. There are also several concerns that need to be considered in the acquisition scenario. Currently, due to political pressures, it is not possible to obtain approval for the acquisition of more troops. In light of this, the head of the SAS training division has suggested rolling out SAS training more widely to make better use of the existing troops. Another major concern is that the contract for the maintenance of the L118 Light Guns is about to expire. To keep the existing L118 Light Guns in service or to acquire more L118 Light Guns will require the contract to be renewed at considerable expense. Additionally, the manufacturer of the Mobile Artillery Battlefield Radar system, which so far has had limited roll-out, has filed for bankruptcy, meaning that the existing systems are unmaintainable. The systems could be replaced with an alternative system currently in service in a friendly allied nation; however, this is an expensive option. The UK MoD acquisition budget to be allocated to this acquisition scenario is £185 million, made as four payments over a three-year period. Any potential solutions to the acquisition scenario need to deal with all of the above concerns simultaneously rather than trying to solve the concerns one at a time.

STG NoC: A Tool for Generating Energy Optimized Custom Built NoC Topology

Network on Chip (NoC) has emerged as a viable solution to the complex communication requirements of the constantly evolving System on Chip (SoC). The communication-centric architecture of NoC can be optimized across a variety of parameters as per the design requirements. With the development of customized applications, the inclination has shifted from regular architectures to irregular topologies, which leaves researchers with a larger spectrum of optimization parameters. Many heuristic methods have been explored, as the optimization problems encountered are NP-hard. This paper presents a customized topology generator, STG-NoC, which implements a heuristic technique based on simulated annealing to achieve the objective of energy optimization.
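As a generic illustration of the simulated-annealing approach named above, here is a minimal sketch with an invented energy model and move; STG-NoC's actual cost function and neighborhood moves are not described in this excerpt.

```python
# Generic simulated-annealing skeleton for topology energy optimization.
# The energy model and the neighbor move are invented for illustration;
# the excerpt does not describe STG-NoC's actual internals.
import math, random

def anneal(initial, energy, neighbor, t0=1.0, cooling=0.995, steps=10_000):
    state, e = initial, energy(initial)
    best, best_e, t = state, e, t0
    for _ in range(steps):
        cand = neighbor(state)
        ce = energy(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if ce < e or random.random() < math.exp((e - ce) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling            # geometric cooling schedule
    return best, best_e

# Toy usage: states are link-length lists; energy is total wire length.
random.seed(0)
links = [random.randint(1, 9) for _ in range(8)]
best, cost = anneal(
    links,
    energy=sum,
    neighbor=lambda s: [max(1, v + random.choice((-1, 1))) for v in s],
)
print(best, cost)
```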

Contactless Test Access Mechanism for 3D IC

The main focus of this work was to devise a contactless test access mechanism (TAM) for TSV-based 3D ICs. To this end, we explored all three coupling options and concluded that radiative coupling is not feasible due to area constraints. However, both capacitive and inductive coupling provided viable options for contactless probing. We characterized a TSV in the High Frequency Structure Simulator (HFSS) environment and extracted lumped models for both inductive and capacitive coupling. The extracted SPICE models were used in the Advanced Design System (ADS) environment for circuit-level simulation. Simulations were performed to characterize the wireless communication links and the bond wires. The effect of different data rates on the reconstructed data was analyzed. Cross-talk, probe size, and misalignment between the probe and TSVs were characterized through simulations. A prototype was fabricated and tested to verify the performance of the wireless links. Experimental measurements on the fabricated prototype indicate that the proposed contactless solutions can be used for TSV probing. The results of this research project have been published in one IEEE Transactions article and two conference papers. A journal article covering the proposed inductive link has been submitted to the IEEE Transactions on Instrumentation and Measurement (TIM) for review.

Iterative receivers combining MIMO detection with turbo decoding: performance complexity trade offs

Linear equalization consists of a linear filter according to the zero-forcing (ZF) or the minimum mean square error (MMSE) criterion [7]. These algorithms have low complexity but suffer from unsatisfactory performance. On the other hand, interference-cancellation-based algorithms use an estimate of previously detected symbols to cancel their interference from the received signal, such as ordered successive interference cancellation (OSIC), also referred to as VBLAST [8]. However, their performance suffers from error propagation in the decision feedback loop. The signal detection can also be transformed into a tree-search problem [9-13]. The sphere decoder (SD) is an efficient tree-search-based method that limits the search space of the ML solution to the symbols that lie inside a hyper-sphere. The sphere decoder performs a depth-first search to efficiently find the best solution and achieve near-optimal performance. However, it suffers from variable throughput depending on the noise levels and channel conditions [16,17]. Moreover, the sequential nature of the tree search makes it unsuitable for parallel implementation. The breadth-first K-Best decoder [14] and the fixed sphere decoder (FSD) [15] were thus proposed to obtain constant throughput and reduce hardware complexity at the cost of some performance loss. Recently, many efforts have been made in the design of soft-input soft-output (SISO) MIMO detectors in order to achieve high throughput and low computational complexity. An improved VBLAST (I-VBLAST) for SISO detection was proposed in [18,19]. In addition, a SISO detector based on MMSE interference cancellation (MMSE-IC) was proposed in [20,21]. The list sphere decoder (LSD) was proposed in [22] as a variant of the sphere decoder that provides soft outputs. Subsequently, in [23], a list sequential decoder based on a metric-first search strategy was proposed for the iterative process. The single tree-search (STS) algorithm [24,25] was proposed ...
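For reference, the two linear detectors named at the start of the excerpt differ only in a regularization term. A minimal sketch using the textbook formulas (our own toy dimensions; not the paper's code):

```python
# Textbook ZF and MMSE linear MIMO detectors (our illustration, not the
# paper's code): ZF inverts the channel outright; MMSE adds a noise-variance
# regularizer that tames noise amplification at low SNR.
import numpy as np

def zf_detect(H, y):
    # ZF filter: W = (H^H H)^{-1} H^H, the channel pseudo-inverse
    return np.linalg.solve(H.conj().T @ H, H.conj().T @ y)

def mmse_detect(H, y, noise_var):
    # MMSE filter: W = (H^H H + sigma^2 I)^{-1} H^H
    n_tx = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx),
                           H.conj().T @ y)

rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = np.sign(rng.normal(size=4)) + 0j            # BPSK symbols
y = H @ x + 0.1 * rng.normal(size=4)            # noisy received vector
print(np.sign(zf_detect(H, y).real), np.sign(mmse_detect(H, y, 0.01).real))
```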

Executing Bag of Distributed Tasks on the Cloud: investigating the trade offs between performance and cost

While data transfer times can be reduced for BoDT on the cloud, a different kind of problem emerges, since a user pays for the cloud resources required by the tasks. Often more resources will boost the performance of a task, but this comes at a monetary cost. Hence, there is a trade-off between the performance gained by acquiring more resources on the cloud for executing a task and the budget available for executing the task on the cloud. For example, the performance of the news-aggregation BoDT applications considered above can be maximised by gathering more resources to increase parallelism. However, this is more expensive and may not be cost-effective.
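A toy version of that trade-off (entirely illustrative prices and an assumed Amdahl-style speedup; the thesis's actual model is not given in this excerpt): pick the VM count that minimises total cost while still meeting a deadline.

```python
# Toy performance/cost trade-off (illustrative numbers, assumed Amdahl-style
# speedup; not the thesis's model): more VMs finish the bag faster but cost
# more per hour, so total cost = price * VMs * runtime has a sweet spot
# once a deadline is imposed.
def runtime_hours(n_vms, serial_fraction=0.05, base_hours=10.0):
    # Amdahl-style speedup: the parallel part scales, the serial part does not.
    return base_hours * (serial_fraction + (1 - serial_fraction) / n_vms)

def total_cost(n_vms, price_per_vm_hour=0.10):
    return price_per_vm_hour * n_vms * runtime_hours(n_vms)

deadline = 2.0  # hours
feasible = [n for n in range(1, 65) if runtime_hours(n) <= deadline]
best = min(feasible, key=total_cost)
print(best, "VMs:", round(runtime_hours(best), 2), "h,",
      f"${total_cost(best):.2f}")
```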

Does individual quality mask the detection of performance trade offs? A test using analyses of human physical performance

... traits associated with either maximum physical performance or motor skill function. In a similar way, Hamel and colleagues (Hamel et al., 2009) quantified individual quality using a species-specific combination of longevity, success in the last breeding opportunity before death, adult mass, and social rank. Using longitudinal data from three ungulate populations, Hamel and colleagues (Hamel et al., 2009) explored how individual quality affects the probability of detecting life-history trade-offs between current and future reproduction in females. They found that high-quality females consistently had a higher probability of reproduction that was independent of previous reproductive status (Hamel et al., 2009). However, they did detect a reproductive trade-off for female mountain goats after accounting for differences in individual quality; low-quality female goats were less likely to reproduce following years of breeding than following non-breeding (Hamel et al., 2009). In addition, offspring survival was lower in bighorn ewes after a successful breeding season than after those seasons when no lamb was produced, but this occurred only for low-quality females (Hamel et al., 2009).

Back on the Farm: The Trade-offs in Ecocritical Lives

Kristin: I think we should talk more about the notion of a "trade-off" itself, which seems to be what we're grappling with in this discussion about praxis and theory. When you read the conversations about the conflicts between scholarship and lived practice, in, say, the "Special Forum on Ecocriticism and Theory" in ISLE's Autumn 2010 issue, overwhelmingly, folks seem to concur that both praxis and theory are important. For example, Serpil Oppermann writes that theory can create the "space" for cultural and political change. But there's also a tendency to set up the praxis/theory tension as though there are only two variables. In his infamous "Woodshed" article, a spirited critique of Simon Estok's "Theorizing in a Space of Ambivalent Openness: Ecocriticism and Ecophobia," S. K. Robisch implies that if you theorize too abstractly about the environment, then you can't actually be familiar with that environment. This sets up a pretty clear dichotomy: either you're living it, or you're theorizing about it. You can't do both.

Aubrey: I think my response would be that as humans, we find it difficult not to do both. Isn't our conversation here a form of theory, for instance, as well as a lived practice involving both of us? Yet I realize that for some, this particular lived practice may not seem radical enough, or this particular form of theorizing may not seem scholarly enough.
