and provided good performance in spatial query processing. In particular, a page size of 16 KB guaranteed better performance for spatial queries with high selectivity. Regarding the cloud server, we highlight two situations. First, Microsoft Azure gives special treatment to virtual machines equipped with SSDs, resulting in higher maximum throughput and IOPS compared to virtual machines equipped with conventional disks. Accordingly, our experiments showed the best performance results on the virtual machine with an SSD in most cases. Second, the performance of spatial indices stored on virtual machines with SSDs can be improved by taking into account the intrinsic characteristics of these storage devices; the FAST-based spatial indices showed significant performance gains in this environment. Therefore, our experiments showed that flash-aware spatial indices often improve the performance of spatial indexing on SSDs, independently of the running environment, compared to the direct use of disk-based spatial indices without any additional treatment.
RAID 0, also referred to as striping, writes stripes of data across multiple disk drives. RAID 0 does not provide any data redundancy, but it offers the highest data throughput. RAID 0 breaks data into smaller blocks and then writes a block to each drive in the array. Disk striping enhances performance because multiple drives are accessed simultaneously, but the reliability of RAID Level 0 is lower than that of any of its member disk drives due to its lack of redundancy.
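As a rough illustration of the block-to-drive mapping described above, the following Python sketch computes where a logical block lands in a simple RAID 0 layout; the stripe-unit size and drive count are arbitrary example parameters, not values taken from any particular controller.

```python
def raid0_map(lba, num_drives, stripe_unit_blocks):
    """Map a logical block address to (drive index, block offset on that drive)
    for a simple RAID 0 (striping) layout.

    lba                -- logical block address as seen by the host
    num_drives         -- number of member drives in the array
    stripe_unit_blocks -- blocks written to one drive before moving to the next
    """
    stripe_number = lba // stripe_unit_blocks   # which stripe unit the block falls in
    block_in_unit = lba % stripe_unit_blocks    # position inside that stripe unit
    drive = stripe_number % num_drives          # stripe units rotate across drives
    offset = (stripe_number // num_drives) * stripe_unit_blocks + block_in_unit
    return drive, offset

# Example: 4 drives, 16-block stripe units. Consecutive logical blocks stay on
# one drive within a stripe unit, then rotate to the next drive.
for lba in (0, 15, 16, 32, 48, 64):
    print(lba, raid0_map(lba, num_drives=4, stripe_unit_blocks=16))
```

Because consecutive stripe units sit on different drives, a large sequential transfer keeps all members busy at once, which is the performance effect the paragraph describes.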
Flash-based solid-state disks (SSDs) help address some of these challenges. They have excellent read and random I/O performance and low latency, which are crucial for business applications, server virtualization, and VDI. However, due to high cost and write endurance issues, using flash-only arrays is practical for only a very limited set of applications. Consequently, data storage vendors have been promoting various hybrid combinations of flash and hard drives. Some vendors have taken a bolt-on approach, which simply layers flash on top of disk as an additional tier. This approach fails to leverage flash in a cost-effective way and does not maximize disk utilization. It also does little to simplify IT administrators’ jobs, as they now have to contend with management and data migration between multiple tiers of storage.
Although the probability of a rebuild failure is smaller for a single array, it is not smaller for the storage installation as a whole. Further, smaller arrays have many disadvantages. They require more parity drives and thus waste storage; they limit volume sizes, which risks “disk full” messages and numerous volume expansions; and they require more management intervention. Smaller arrays also deliver reduced performance, since fewer disks reduce the stripe size and limit the parallel reading and writing that is provided by accessing many disks at once.
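To make the installation-wide view concrete, the sketch below shows how per-array rebuild-failure probabilities combine across many independent arrays; the probabilities used are purely assumed illustrative figures, not measurements from the text.

```python
def installation_rebuild_risk(per_array_prob, num_arrays):
    """Probability that at least one of `num_arrays` independent arrays
    experiences a rebuild failure, given each array's individual probability."""
    return 1.0 - (1.0 - per_array_prob) ** num_arrays

# Assumed numbers only: each small array is individually safer than one large
# array, but splitting the installation into several arrays aggregates the
# risk across all of them.
p_large_array = 0.020   # assumed rebuild-failure probability for one big array
p_small_array = 0.006   # assumed probability for each of the smaller arrays
print("one large array  :", p_large_array)
print("four small arrays:", installation_rebuild_risk(p_small_array, 4))
```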
The magnetic hard disk drive was first created by IBM scientists in 1956 (Noyes & Dickinson, 1956, p. 42). The original setup featured 50 disks spaced 0.3 inches apart to allow space for reading and writing via magnetic heads (Fig. 2.1, Noyes & Dickinson, 1956, p. 42). Each disk had a magnetic coating, and the device utilized a motor to rotate the disks for access (Noyes & Dickinson, 1956, p. 43). Modern hard disk drives are composed of fewer disks (Fig. 2.2) but are capable of reading and writing far more information at much higher speeds (Anderson, Dykes, & Riedel, 2003, p. 247). However, the response times of these drives have fallen behind those of processors due to rotational latency (Ekker, Coughlin, & Handy, 2009, p. 2). The drives also suffer from the natural “wear and tear associated with mechanical devices” (U.S. Patent No. 5,459,850, 1995), which culminated in the creation of a storage reliability method called Redundant Arrays of Inexpensive Disks (RAID), discussed later in this chapter, to combat the unreliability of hard disk drives (Patterson, Gibson, & Katz, 1988, p. 110).
The use of asynchronous motors, particularly those with squirrel-cage rotors, has increased tremendously since their invention. They are used as actuators in many types of industrial processes, in robotics, in household appliances (generally single-phase) and in other similar applications. Their ever-increasing popularity can be attributed primarily to their simple design, robust construction, cost effectiveness, high efficiency, reliability and good self-starting capability [1-3]. The analysis of the induction motor is carried out in steady state, whereby the machine is modeled as a second-order electromechanical system.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.
Internal SAS cables are narrower than internal parallel SCSI cables. The connectors vary in size depending on the number of links they support, from single-link connectors to 4-wide (or larger) connectors. Internal fan-out cables (shown in the next figure) let you attach four disk drives to a single 4-wide connector.
A SAS expander device literally expands the number of end devices that you can connect together. Expander devices, typically embedded in a system backplane (see page 66), support large configurations of SAS end devices, including SAS cards and SAS and SATA disk drives. With expander devices, you can build large and complex storage topologies.
This SNW tutorial session is about the number one reason an IT manager would move away from HDDs and towards solid-state disks (SSDs): latency, and more specifically, low latency. Latency becomes less of just a number and more of an important metric when considering any serious performance-oriented storage solution. Today, low latency can be addressed effectively by only one particular type of storage architecture, and that is an enterprise SSD design. Latency in a technical environment is synonymous with delay. More succinctly, the latency of an SSD is how long it takes a request to complete its round trip, from the time the request enters the device to the time it leaves the device with the “payload” in tow. In a storage world where metrics such as $/GB are entrenched as a de facto standard of measurement, and $/IOPS has arisen to become a “relevant” metric, we continue to miss a critical discussion point: low latency is the most important thing that can be delivered to a performance-sensitive application or a workhorse database environment. In this session we will discuss the merits of low-latency solutions and what they mean when coupled with a high-IOPS, large-bandwidth design. From a business
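As a rough, hedged illustration of the round-trip definition above, the Python sketch below times individual random reads against an existing file and reports per-request latency. The file path and read size are placeholder assumptions, and because the OS page cache is involved (no O_DIRECT), this measures the end-to-end software round trip rather than raw device latency.

```python
import os, random, statistics, time

def sample_read_latencies(path, read_size=4096, samples=200):
    """Time individual random reads and return per-request latencies in microseconds."""
    fd = os.open(path, os.O_RDONLY)
    try:
        file_size = os.fstat(fd).st_size
        latencies = []
        for _ in range(samples):
            offset = random.randrange(0, max(1, file_size - read_size))
            start = time.perf_counter()
            os.pread(fd, read_size, offset)   # one request: enters, returns with its "payload"
            latencies.append((time.perf_counter() - start) * 1e6)
        return latencies
    finally:
        os.close(fd)

# Example usage against a placeholder file path (an assumption, adjust as needed):
# lats = sample_read_latencies("/tmp/testfile.bin")
# print("mean", statistics.mean(lats), "us, p99", sorted(lats)[int(0.99 * len(lats))], "us")
```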
AC induction motors have been widely used in industrial applications such as machine tools, steel mills and paper machines owing to the good performance provided by their solid architecture, low moment of inertia, low torque ripple and high starting torque. Several control techniques have been developed to regulate these induction motor servo drives in high-performance applications. One of the most popular techniques is the indirect field-oriented control method (Egiguren et al., 2008). The field-oriented technique guarantees the decoupling of the torque and flux control commands of the induction motor, so that the induction motor can be controlled linearly like a separately excited d.c. motor. However, the control performance of the resulting linear system is still influenced by uncertainties, which are usually composed of unpredictable parameter variations, external load disturbances, measurement noise and unmodelled nonlinear dynamics. Therefore, many studies have been conducted on motor drives in order to preserve performance under these parameter variations and external load disturbances, using approaches such as nonlinear control, optimal control, variable-structure system control, adaptive control, neural control and predictive control (Egiguren et al., 2008; Marino et al., 1998).
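To make the decoupling claim concrete, the standard textbook relations for indirect field-oriented control can be written as below (these are generic relations assumed here, not equations taken from the cited works), with the rotor flux aligned to the d-axis so that the q-axis rotor flux is zero.

```latex
% Standard indirect field-oriented control relations (textbook form, assumed here):
% p: pole pairs, L_m: magnetizing inductance, L_r, R_r: rotor inductance and resistance,
% \tau_r = L_r / R_r: rotor time constant, i_{ds}, i_{qs}: stator d/q-axis currents.
\begin{align}
  \psi_{dr} &= L_m\, i_{ds}
    && \text{(steady state: flux set by the d-axis current)} \\
  T_e &= \frac{3}{2}\, p\, \frac{L_m}{L_r}\, \psi_{dr}\, i_{qs}
    && \text{(torque proportional to } i_{qs} \text{ once } \psi_{dr} \text{ is constant)} \\
  \omega_{sl} &= \frac{L_m}{\tau_r\, \psi_{dr}}\, i_{qs}
    && \text{(slip frequency that enforces the field orientation)}
\end{align}
```

Holding the rotor flux constant through the d-axis current leaves torque a linear function of the q-axis current alone, which is the sense in which the drive behaves like a separately excited d.c. motor.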
Data growth is one of the tallest hurdles facing the enterprise data center, and managing that data has become an increasingly difficult proposition. It is not a question of “if” a disk drive will fail; it is a matter of when a drive will fail. It is an issue of how much data has been lost and how quickly the staff can recover it. How many backups can be taken and preserved before the data center runs out of backup capacity? What is the recovery point objective (RPO) of your enterprise? How quickly does the data need to be recovered in order to satisfy your recovery time objective (RTO)? Due to the growth of backup data and the time that has been committed to recovering it, in many instances tape may no longer be a viable alternative for backup in your data center.
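As a back-of-the-envelope illustration of the capacity and RPO/RTO questions above, the following Python sketch uses purely assumed numbers to estimate how many restore points fit in a backup pool, what RPO the backup schedule implies, and whether a given restore rate meets a target RTO.

```python
def backup_planning(backup_size_tb, pool_capacity_tb, backup_interval_hours,
                    restore_rate_tb_per_hour, rto_hours):
    """Rough, illustrative backup math: restore points that fit in the pool,
    the RPO implied by the backup interval, and whether a full restore meets the RTO."""
    restore_points = int(pool_capacity_tb // backup_size_tb)    # full copies that fit
    rpo_hours = backup_interval_hours                           # worst-case data-loss window
    restore_time = backup_size_tb / restore_rate_tb_per_hour    # time to restore one copy
    return {
        "restore_points": restore_points,
        "rpo_hours": rpo_hours,
        "restore_time_hours": restore_time,
        "meets_rto": restore_time <= rto_hours,
    }

# Assumed example figures only: 20 TB per backup, 200 TB pool, nightly backups,
# 2 TB/hour restore throughput, 8-hour RTO target.
print(backup_planning(20, 200, 24, 2, 8))
```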
This feature is designed for fault-tolerant logical drives (RAID 0, 1, 5, 6, and 10). It is generally recommended to use physical drives of the same size in your disk arrays. When this is not possible, physical drives of different sizes will still work, but the system must adjust for the size differences by reducing, or coercing, the capacity of the larger drives to match the smaller ones. You can choose to enable Capacity Coercion and select any one of the four methods.
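The four coercion methods themselves are vendor-defined and not described here. As a generic, assumed illustration of the idea, the sketch below rounds each drive's capacity down to a coercion boundary and then sizes every member of the array to the smallest coerced value.

```python
def coerce_capacities(drive_capacities_mb, boundary_mb=1024):
    """Illustrative capacity coercion: round each drive down to a boundary
    (an assumed 1 GiB granularity here), then treat every member as having the
    smallest coerced capacity so mixed-size drives can form one array."""
    coerced = [(cap // boundary_mb) * boundary_mb for cap in drive_capacities_mb]
    return min(coerced)

# Example: three drives with slightly different raw sizes (in MB).
print(coerce_capacities([953870, 953869, 976762]))   # every member treated as the smallest
```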
Murray et al. compared the performance of SVM, unsupervised clustering, the rank-sum test and the reverse arrangements test. In their subsequent work, they developed a new algorithm termed multiple-instance naive Bayes (mi-NB). They found that, on a dataset of 369 drives, the rank-sum test outperformed SVM for a certain small set of SMART attributes (28.1% failure detection at 0% FAR). When using all features, SVM achieved the best performance of 50.6% detection with 0% FAR.
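As an illustrative sketch of the metric being reported (detection rate at 0% false-alarm rate), and not a reproduction of Murray et al.'s method, the following Python snippet picks the most permissive score threshold that still flags no healthy drive and measures the resulting detection rate; the scores and labels are made-up example data.

```python
import numpy as np

def detection_at_zero_far(scores, labels):
    """Detection rate at 0% false-alarm rate (FAR).
    scores -- higher means 'more likely to fail' (e.g. a classifier output)
    labels -- 1 for drives that actually failed, 0 for healthy drives
    The threshold is set just above the highest score of any healthy drive,
    so no healthy drive is flagged (0% FAR)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    healthy_max = scores[labels == 0].max()
    detected = scores[labels == 1] > healthy_max
    return detected.mean()

# Made-up example scores for 6 healthy drives and 4 failed drives.
scores = [0.10, 0.20, 0.15, 0.30, 0.25, 0.05, 0.80, 0.90, 0.28, 0.95]
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
print("detection at 0% FAR:", detection_at_zero_far(scores, labels))  # 0.75
```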