In this chapter, we target these two regimes and attempt to find faster algorithms by pruning the search tree beyond what is done in the standard sphere decoding algorithm.
The search tree is pruned by computing lower bounds on the optimal value of the objective function as the algorithm descends the search tree. We observe a trade-off between the computational complexity required to compute a lower bound and the size of the pruned tree: the more effort we spend computing a tight lower bound, the more branches can be eliminated from the tree. Using ideas from SDP duality theory and H∞ estimation theory, we propose general frameworks for computing lower bounds on integer least-squares problems. We propose two families of algorithms, one appropriate for large problem dimensions and binary modulation, the other for moderate-size dimensions and high-order constellations. We then show how in each case these bounds can be efficiently incorporated into the sphere decoding algorithm, often resulting in a significant reduction of the expected complexity of solving the ML decoding problem, while maintaining exact ML performance.
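The pruning mechanism can be illustrated with a minimal sketch. This uses only the simple partial-cost bound of standard sphere decoding, not the SDP or H∞ bounds proposed in the chapter, and all function names are our own: after a QR factorization the objective splits level by level, so the cost accumulated over the already-fixed symbols lower-bounds the full objective, and any branch whose partial cost exceeds the incumbent can be discarded without sacrificing ML optimality.

```python
import itertools
import numpy as np

def ml_bruteforce(H, y):
    """Exhaustive ML search over x in {-1, +1}^n (reference solution)."""
    n = H.shape[1]
    best = None
    for bits in itertools.product([-1.0, 1.0], repeat=n):
        x = np.array(bits)
        cost = np.sum((y - H @ x) ** 2)
        if best is None or cost < best[0]:
            best = (cost, x)
    return best

def ml_pruned(H, y):
    """Depth-first search pruned by a partial-cost lower bound.

    With H = Q R (R upper triangular), ||y - Hx||^2 = ||Q^T y - Rx||^2,
    which accumulates one term per level; the partial sum over the
    already-fixed symbols lower-bounds the full objective.
    """
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = [np.inf, None]

    def descend(level, x, partial):
        if partial >= best[0]:          # lower bound exceeds incumbent: prune
            return
        if level < 0:
            best[0], best[1] = partial, x.copy()
            return
        for s in (-1.0, 1.0):
            x[level] = s
            r = z[level] - R[level, level:] @ x[level:]
            descend(level - 1, x, partial + r * r)

    descend(n - 1, np.zeros(n), 0.0)
    return best[0], best[1]
```

Replacing the zero bound on the remaining levels by a tighter (e.g., SDP-based) bound prunes more branches at the price of more work per node, which is exactly the trade-off discussed above.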
When surveying the distributed algorithms and network optimization literature, one cannot help but notice the significant gap between the theoretician and practitioner sides of the research community. Obviously, it can hardly be the goal of this thesis to bridge this gap. However, it is our aim to study mathematical properties from network-optimization theory and apply these to practical problems arising in wireless networks, in order to obtain simple algorithms that can be implemented on a realistic network infrastructure. For this purpose, we approach various problems, such as routing, transmission power assignment, and node activity scheduling. For most algorithms proposed in this thesis we also provide network simulations that offer insight into the operation of an algorithm. Rather than considering the algorithms in isolation, these should be seen as a step towards a mathematical toolbox of techniques for approaching problems by distributed network optimization. In this context, one interesting observation, which we have made on several occasions, is that dual problems sometimes lead to algorithms that can be implemented efficiently in a distributed setting. We believe that methods for exploiting this locality-by-duality principle are promising candidates for future-generation network protocols.
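The locality-by-duality observation can be made concrete with a toy example (the problem instance and all parameters here are hypothetical, chosen only for illustration): maximizing the sum of logarithmic user utilities over one shared link. Solving the dual by subgradient ascent decomposes the problem so that each user computes its rate locally from a single shared link price.

```python
def dual_decomposition(capacity, n_users, steps=500, lr=0.05):
    """Toy network utility maximization solved via its dual.

    Maximize sum_i log(x_i) subject to sum_i x_i <= capacity. Given a
    link price lam, each user solves max log(x) - lam*x locally, giving
    x = 1/lam; only the price must be communicated, which is what makes
    the dual algorithm naturally distributed.
    """
    lam = 1.0
    for _ in range(steps):
        x = [1.0 / lam] * n_users                          # local per-user steps
        lam = max(1e-6, lam + lr * (sum(x) - capacity))    # shared price update
    return x, lam
```

At the optimum each user receives capacity / n_users, which the price update recovers without any node knowing the other users' utilities.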
by applications in wireless communications, rather than the “wired” optical-fiber links that appear to have dominated research in optical quantum communications so far. As such, this makes the study and development of quantum MIMO systems considerably more difficult than fiber-optic quantum communications, since not much is known experimentally and theoretically about optical wireless networks. Indeed, since scattering effects are very severe in the optical regime, while building quantum wireless links at lower frequencies such as RF, microwave, or mmWave is not feasible due to the weakness of quantum effects at such low frequencies, much of the research on quantum telecommunications has focused on using guided waves to encode classical information in quantum states, mostly coherent laser modes. In our opinion, this overall situation in the nascent field of quantum communications makes it even more urgent to consider alternative system designs for wireless quantum communications capable of addressing the multiplying difficulties arising from both the use of optical quantum wireless links and the presence of dense and complex propagation environments in present-day urban settings. Moreover, it is expected that, as in classical MIMO, the use of multiple antennas and receivers can enhance the spectral efficiency of quantum MIMO links over quantum SISO.
Figure 1.1: Sensor array signal processing.
The algorithms used in the space-time processor need to be smart and adaptive in order to deal with time- and space-varying signals. In wireless receivers, optimization techniques are commonly needed. Minimizing a certain error criterion, or maximizing a certain gain function, needs to be done iteratively, in an online fashion. Numerical optimization may be the only solution, because a closed-form solution may not exist or, if it exists, may be hard to find. In multi-channel signal processing the optimization is often performed subject to matrix constraints. An elegant way to solve this type of constrained optimization problem is to view the error criterion as a multi-dimensional surface contained in the space of the free parameters. The constraint is viewed as a second surface. The feasible solutions may be found in the parameter space determined by the intersection of the two surfaces. This parameter space is usually a non-Euclidean space, i.e., a differential manifold. Consequently, powerful geometric optimization methods are needed. Classical optimization techniques operating on the usual Euclidean space suffer from low convergence speed and/or deviation from the constraint. In order to overcome these impairments, state-of-the-art Riemannian optimization algorithms may be employed. They provide efficient solutions and allow a better understanding of the problem. By using the Riemannian geometry approach, the initial constrained optimization problem is converted into an unconstrained one, in a different parameter space. The constraints are fully satisfied, in a natural way. The geometric properties of the constrained space may be exploited in order to reduce the computational complexity.
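A minimal sketch of this idea on the simplest matrix manifold, the unit sphere (the example problem is ours, not one from the text): maximizing the Rayleigh quotient x^T A x under ||x|| = 1. Instead of penalizing constraint violation, each step projects the Euclidean gradient onto the tangent space and retracts back onto the sphere, so the constraint holds exactly at every iterate.

```python
import numpy as np

def sphere_gradient_ascent(A, steps=200, lr=0.1, seed=0):
    """Maximize x^T A x over the unit sphere (dominant eigenvector).

    The Riemannian gradient is the Euclidean gradient projected onto
    the tangent plane at x; renormalizing after each step is a simple
    retraction back onto the manifold.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(steps):
        g = 2.0 * A @ x                  # Euclidean gradient of x^T A x
        g_tan = g - (g @ x) * x          # project onto tangent space at x
        x = x + lr * g_tan               # unconstrained step on the manifold
        x /= np.linalg.norm(x)           # retraction: back onto the sphere
    return x
```

The iterates never leave the constraint surface, which is the "constraints are fully satisfied, in a natural way" property described above.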
Liljana Gavrilovska, Senior Member, IEEE, and Petri Mähönen
Abstract—In this paper, we describe the design and implementation of an extensible and flexible self-optimization framework for cognitive home networks that employs cognitive wireless networking and agent-based design principles. We provide a “first-principles” derivation of its architecture based on a careful analysis of user requirements on wireless home networks (WHNs), state the related design objectives and constraints, and address those in the context of present-day and emerging radio platforms. We utilize the cognitive resource manager as the architecture of the individual agents. This architecture serves as a “constraint that de-constrains,” i.e., it allows achieving high system flexibility, while providing structural constraints to ensure robustness. We show that the designed system is capable of solving complex utility maximization problems constrained by user, operator, and regulatory policies in the crowded ISM bands. The system successfully operates on an extensible parameter configuration space across multiple protocol layers. For example, the prototype employs diverse optimization algorithms, and can also benefit from radio environment map information on primary transmitters propagated through the integrated policy mechanism. The proposed system delivers both efficient and robust radio resource management, and enables comfortable WHN experimentation by providing extensibility of sensory inputs, actuation parameters, and optimization algorithms.
3.1.1 Exhaustive search
A large class of optimal synthesis algorithms work by exhaustively enumerating all circuits and returning the lowest-cost circuit implementing the desired computation; this technique is commonly called exhaustive or brute-force search. This method is particularly popular in the circuit synthesis community as a way of optimally compiling small, commonly used gates or functions to the target gate set, and in these small instances it can be very effective. Typical exhaustive search algorithms generate circuits in increasing size or depth, combined with heuristics to reduce the number of candidates to be generated. Such a search is called a breadth-first search, as the space of circuits over a gate set G can be viewed as a tree where each branch corresponds to the next gate in the circuit. Breadth-first searches are thus naturally tuned to optimize size and depth, though they can be used indirectly to optimize other costs. Such searches also typically involve the computation of some database of circuits, where the database is computed once and loaded whenever a new circuit is needed. If the database provides efficient lookups, circuits that are already in the database can be synthesized efficiently.
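The breadth-first database idea can be sketched on a toy "gate set" (our own choice for illustration: adjacent swaps acting on a 3-wire permutation, standing in for unitaries): because states reachable with k gates are expanded before those needing k+1, the first sequence recorded for each state is guaranteed minimal, and the resulting dictionary is exactly the lookup database described above.

```python
from collections import deque

def build_database(generators, identity, max_depth):
    """Breadth-first enumeration of circuits over a gate set.

    Maps each reachable state to a shortest gate sequence producing it.
    BFS order guarantees minimality of the recorded sequence.
    """
    db = {identity: ()}
    depth = {identity: 0}
    frontier = deque([identity])
    while frontier:
        state = frontier.popleft()
        if depth[state] == max_depth:
            continue
        for name, gate in generators.items():
            nxt = gate(state)
            if nxt not in db:               # first visit = shortest circuit
                db[nxt] = db[state] + (name,)
                depth[nxt] = depth[state] + 1
                frontier.append(nxt)
    return db

# Toy gate set: adjacent swaps on a 3-wire permutation "circuit".
def swap01(p): return (p[1], p[0], p[2])
def swap12(p): return (p[0], p[2], p[1])
```

Once built, synthesizing any circuit already in the database is a single dictionary lookup, which is the efficiency claim made in the text.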
* Correspondence: email@example.com; Tel.: +90-533-3621947
Abstract: Quantum computers are machines that are designed to use quantum mechanics in order to improve upon classical computers by running quantum algorithms. One of the main applications of quantum computing is solving optimization problems. For addressing optimization problems, we can use linear programming. Linear programming is a method for obtaining the best possible outcome in a special case of mathematical programming. Application areas of this problem include resource allocation, production scheduling, parameter estimation, etc. In our study, we looked at the duality of resource allocation problems. First, we chose a real-world optimization problem and examined its solution with linear programming. Then, we restudied this problem with a quantum algorithm in order to understand whether there is a speedup in the solution. The improvement in computation is analysed and some interesting results are reported.
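A classical baseline for such a study might look as follows (this is an illustrative textbook-style production-planning instance of our own choosing, not the problem studied in the paper): solving a small resource allocation LP and its dual, and checking that strong duality holds, i.e., both optimal values coincide.

```python
from scipy.optimize import linprog

# Primal: maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18,
# x, y >= 0. linprog minimizes, so the objective is negated.
primal = linprog(c=[-3, -5],
                 A_ub=[[1, 0], [0, 2], [3, 2]],
                 b_ub=[4, 12, 18],
                 bounds=[(0, None), (0, None)])

# Dual: minimize 4u1 + 12u2 + 18u3 subject to A^T u >= c, u >= 0,
# rewritten in linprog's <= form by negating the constraints.
dual = linprog(c=[4, 12, 18],
               A_ub=[[-1, 0, -3], [0, -2, -2]],
               b_ub=[-3, -5],
               bounds=[(0, None)] * 3)
```

The dual variables are the shadow prices of the three resources, which is the duality structure the abstract refers to.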
While some small-scale exploration of quantum algorithms and heuristics for approximate optimization beyond quantum annealing has been possible through classical simulation, the exponential overhead of such simulations has greatly limited their usefulness. The next decade will see a blossoming of quantum algorithms as a broader and more flexible array of quantum computational hardware becomes available. The immediate question is: which algorithms should we prioritize to gain insight into the power and utility of quantum computers? One leading candidate is the Quantum Approximate Optimization Algorithm (QAOA), for which a number of tantalizing related results have been obtained [86, 90, 135, 228, 235, 238, 251]. As discussed in Chapter 5, QAOA facilitates low-resource implementations for unconstrained optimization problems, although the performance of QAOA for these problems remains open. It is important to derive constructions for even more general classes of problems, where we may not have good classical approximation algorithms at all, that in particular also exhibit low or modest resource requirements. Indeed, implementing such QAOA constructions to find approximate solutions may lead to the first examples of experimental quantum computers performing truly practically useful computations.
We thank Yin Tat Lee for numerous helpful discussions and anonymous reviewers for suggestions on preliminary versions of this paper. We also thank Joran van Apeldoorn, András Gilyén, Sander Gribling, and Ronald de Wolf for sharing a preliminary version of their manuscript and for their detailed feedback on a preliminary version of this paper, including identifying some mistakes in previous lower bound arguments and pointing out a minor technical issue in the evaluation of the height function in Line 6 of Algorithm 4 (and in ). This work was supported in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Quantum Algorithms Teams program. AMC also received support from the Army Research Office (MURI award W911NF-16-1-0349), the Canadian Institute for Advanced Research, and the National Science Foundation (grant CCF-1526380). XW also received support from the National Science Foundation (grants CCF-1755800 and CCF-1816695).
Lab Demonstration of Interference Detection
While the developed algorithms have been evaluated extensively using powerful simulations, the objective of this chapter is to go one step further and evaluate the performance of the developed algorithms experimentally using SDR. An SDR is a communications platform that uses software for fast prototyping of digital communications algorithms, while allowing analog transmissions over a physical medium. In this thesis, the NI USRP platforms have been utilized as the SDR. USRPs are computer-hosted software radios which provide a powerful hardware platform for lab experiments. Due to their simple programmability and low cost, they have been used extensively by research labs and universities for radio technology experiments. They can perform as radio transceivers, and the required signal processing algorithms can be programmed using generic programming languages including MATLAB, LabVIEW, and C.
years, variational quantum eigensolvers and QAOA [39, 47, 19] are favored methods for providing low-depth quantum algorithms for solving important problems in quantum simulation and optimization. Current quantum computers are limited by decoherence, hence the option to solve optimization problems using very short circuits can be enticing even if such algorithms are polynomially more expensive than alternative strategies that could possibly require long gate sequences. Since these methods are typically envisioned as being appropriate only for low-depth applications, comparably less attention is paid to the question of what their complexity would be if they were executed on a fault-tolerant quantum computer. In this paper, we consider the case that these algorithms are in fact implemented on a fault-tolerant quantum computer and show that the gradient computation step in these algorithms can be performed quadratically faster compared to earlier approaches that were tailored for pre-fault-tolerant applications.
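The gradient computation step referred to above is conventionally done one parameter at a time, e.g., via the parameter-shift rule, which evaluates the circuit at two shifted parameter values per component. A one-qubit toy version (our own simulated example, not the paper's construction, which obtains a quadratic speedup over this per-parameter approach):

```python
import numpy as np

def expectation(theta):
    """<Z> for the state RY(theta)|0> on one simulated qubit."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    Z = np.diag([1.0, -1.0])
    return psi @ Z @ psi          # equals cos(theta)

def parameter_shift_grad(theta):
    """Exact gradient of <Z> from two shifted circuit evaluations."""
    return 0.5 * (expectation(theta + np.pi / 2)
                  - expectation(theta - np.pi / 2))
```

For a circuit with d parameters this costs 2d expectation-value estimations, which is the baseline cost that a fault-tolerant gradient algorithm can improve upon.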
14], we only consider static networks, because mobility adds a whole new dimension to the problem and is out of the scope of this paper.
Unlike the previous work, we would like to design distributed algorithms that can run on wireless nodes with limited resources (i.e., bandwidth, memory, computational capacity, and power). We first use a graph-theoretic approach to solve the special case of a single multicast session using omnidirectional antennas. This graph-theoretic approach provides insight into the more general case of using directional antennas, and inspires us to produce a group of distributed algorithms. We will extend these solutions to maximize the network lifetime over multiple sessions, as well as in more realistic scenarios, for a wide range of potential civil and military applications. A straightforward approach is to use the same trees that were optimized for single-session operation for multiple-session operation.
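A common graph-theoretic starting point for such tree-based multicast is a minimum-weight spanning tree whose edge weights model transmit power. The sketch below (node positions, the power-law cost model, and the choice of Prim's algorithm are our own illustrative assumptions, not the paper's algorithm) builds such a tree rooted at the source.

```python
import heapq

def min_power_tree(coords, source=0, alpha=2.0):
    """Prim's algorithm on a toy wireless graph.

    Nodes are points in the plane; the cost of link (i, j) is the
    transmit power d(i, j)**alpha needed to cover it. Returns a
    minimum-weight spanning tree rooted at the source and its total cost.
    """
    n = len(coords)

    def cost(i, j):
        dx = coords[i][0] - coords[j][0]
        dy = coords[i][1] - coords[j][1]
        return (dx * dx + dy * dy) ** (alpha / 2)

    in_tree = {source}
    heap = [(cost(source, j), source, j) for j in range(n) if j != source]
    heapq.heapify(heap)
    tree, total = [], 0.0
    while len(in_tree) < n:
        w, i, j = heapq.heappop(heap)
        if j in in_tree:
            continue
        in_tree.add(j)
        tree.append((i, j))
        total += w
        for k in range(n):
            if k not in in_tree:
                heapq.heappush(heap, (cost(j, k), j, k))
    return tree, total
```

Distributed variants replace the global heap with local message exchanges, and the omnidirectional case additionally exploits the wireless multicast advantage (one transmission covers all neighbors within range).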
The basic optimization flow using the BBO and PSO algorithms is as follows. First, the performances of decision fusion rules such as OR and Half-Voting are compared in terms of their P_d versus P_f plots. The rule with the better system performance, i.e., a smaller P_f and a larger P_d, is then used in the optimization environment to optimize the throughput (T) of the SUs. The optimization parameters for both PSO and BBO, such as swarm size, mutation probability, weight factors, and learning factors, are defined. Solutions are then randomly initialized: with random positions for BBO, and with random positions and zero initial velocities in the case of PSO. In BBO, the features of solutions are updated based on the migration rate, and mutation, if desired, is applied. In PSO, the positions of particles are updated in terms of velocity vectors. This process continues until the desired output or termination condition is achieved. Finally, the results are compared to assess the performances of both evolutionary algorithms.
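The PSO side of this flow can be sketched as follows (parameter values, bounds, and the minimization form are our own illustrative choices; maximizing throughput corresponds to minimizing its negation). Particles start at random positions with zero velocity, as in the flow above, and velocities are then driven by personal and global bests.

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimization sketch (minimizes f)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia weight and learning factors
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]   # zero initial velocities
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]                # position update
            val = f(pos[i])
            if val < pbest_val[i]:                    # personal best update
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                   # global best update
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The BBO loop has the same outer structure, with the velocity update replaced by migration of features between solutions according to the migration rates.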
Upamanyu Madhow, Michael L. Honig and Ken Steiglitz†
In personal communications applications, users communicate via wireless with a wireline network. The wireline network tracks the current location of the user, and can therefore route messages to a user regardless of the user’s location. In addition to its impact on signaling within the wireline network, mobility tracking requires the expenditure of wireless resources as well, including the power consumption of the portable units carried by the users and the radio bandwidth used for registration and paging. Ideally, the mobility tracking scheme used for each user should depend on the user’s call and mobility pattern; the standard approach, in which all cells in a registration area are paged when a call arrives, may therefore be wasteful of wireless resources. In order to conserve these resources, the network must have the capability to page selectively within a registration area, and the user must announce his or her location more frequently. In this paper, we propose and analyze a simple model that captures this additional flexibility. Dynamic programming is used to determine an optimal announcing strategy for each user. Numerical results for a simple one-dimensional mobility model show that the optimal scheme may provide significant savings when compared to the standard approach, even when the latter is optimized by suitably choosing the registration area size on a per-user basis. Ongoing research includes computing numerical results for more complicated mobility models and determining how existing system designs might be modified to incorporate our approach.
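The flavor of the dynamic-programming formulation can be conveyed with a toy distance-based model (the state, transition, and all cost parameters below are our own illustrative assumptions, not the model of the paper): the state d is the number of cells the user may have moved since the last registration, a call arrives each step with some probability and paging then costs proportionally to d, and the user chooses between registering (paying a fixed cost and resetting d) and staying silent (letting d grow).

```python
def optimal_announce(horizon=50, p_call=0.2, page_cost=1.0,
                     reg_cost=3.0, max_d=20):
    """Finite-horizon DP for a toy distance-based announcing strategy.

    Backward value iteration over the uncertainty radius d = 1..max_d.
    Returns the value function and the register/stay-silent policy.
    """
    V = [0.0] * (max_d + 1)              # terminal costs, indexed by d
    policy = [False] * (max_d + 1)       # True = register at distance d
    for _ in range(horizon):
        newV = V[:]
        for d in range(1, max_d + 1):
            # stay silent: expected paging over d cells, uncertainty grows
            silent = p_call * page_cost * d + V[min(d + 1, max_d)]
            # register: fixed signaling cost, paging over 1 cell, reset
            register = reg_cost + p_call * page_cost * 1 + V[min(2, max_d)]
            policy[d] = register < silent
            newV[d] = min(silent, register)
        V = newV
    return V, policy
```

In this toy model the optimal policy is a distance threshold: stay silent while d is small and register once d grows large, mirroring the trade-off between announcement cost and selective-paging cost described above.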
The wireless sensor node, being a microelectronic device, can only be equipped with a limited power source (<0.5 Ah, 1.2 V). In some application scenarios, replenishment of power resources might be impossible. The lifetime of a sensor node therefore shows a strong dependence on battery lifetime. In a multi-hop sensor network, each node plays the dual role of data originator and data router. The malfunctioning of a few nodes can cause significant topological changes and might require rerouting of packets and reorganization of the entire network. Hence, power conservation and power management take on additional importance. It is for these reasons that researchers are currently focusing on the design of power-aware protocols and algorithms for sensor networks. In other mobile and ad hoc networks, power consumption has been an important design factor, but not the primary consideration, simply because power resources can be replaced by the user; the emphasis is more on quality-of-service provisioning than on power efficiency. In sensor networks, though, power efficiency is an important performance metric, directly influencing the network lifetime. Application-specific protocols can be designed by appropriately trading off other performance metrics, such as utility, delay, and throughput, against power efficiency.
As a matter of fact, the number of users and the diversity in user requirements, e.g., demanded services, channel conditions, used applications, types of mobile devices, etc., are going to keep increasing over time. For example, in a forecast provided by , the number of smartphone users in the United States will be 264.3 million by 2021, while it was 189 million in 2015. As a result of a global projection of that increase, monthly mobile data traffic will reach up to 30.6 exabytes, eightfold the 2015 figure . Such a growth in traffic will likely lead to enhancements of current concepts, such as operation at much higher frequencies beyond the currently discussed mmWave bands (<100 GHz) and deployment of even more antennas than in the presented massive-MIMO systems. In this case, the problems currently faced in these concepts will definitely become more severe. On the other hand, researchers should get ready for brand-new problems as well, arising from novel future concepts that are hard to predict for the time being. Considering these facts, potential future scenarios will lead to an increase in the aforementioned radio access flexibility requirement for standards beyond 5G. However, the majority of the current discussions on flexibility for the RATs in NR design are conducted in a limited scope, focusing only on adopting OFDM-based waveform parameters. Reviewing the user requirements along with the trend from 2G to potential 5G technologies, we believe that the definition of radio access flexibility should gain a much broader meaning in order to meet future service needs optimally. Therefore, in this chapter, we discuss some potential directions and provide our proposals on RATs that enable much more flexibility for standards beyond 5G. In order to avoid any confusion in terminology, let us define the fundamental terms of RATs in the context of our discussions and the wireless communications literature.
A quantum-dot-based quantum computer uses spins or energy levels of electrons confined in quantum dots (QDs) as qubits, which are fabricated in semiconductor materials. Because we can control the states of the qubits electrically, as we do in classical circuits, this scheme has the advantage that current semiconductor technology may be applied to the fabrication of a quantum computer.
Another recent development is the study of quantum communication complexity. If two people share quantum entanglement, as well as a classical communications channel, this permits them to send each other qubits, but does not reduce the number of bits required for transmission of classical information. However, if they both have some classical data, and they wish to compute some classical function of this data, shared quantum entanglement may help reduce the amount of purely classical communication required to compute this function. This was first shown by Cleve and Buhrman.
One possible use of quantum computing is matrix inversion. There is the potential that quantum computing could provide an exponential speedup when it comes to inverting large matrices (Preskill, 2018). There is another possible benefit of quantum computing in the area of deep learning: quantum computers could potentially be very successful at deep learning involving data sets that are themselves quantum mechanical (Preskill, 2018). One other potential area where quantum computers could be useful is quantum annealing (Preskill, 2018). Quantum annealers can be used to solve optimization problems; however, they may not outperform classical computers at these problems unless the quantum technology advances and becomes significantly more resilient to outside error caused by quantum noise (Preskill, 2018).

High-Level Quantum Programming
LAJOS HANZO (M’91–SM’92–F’08) received the F.R.Eng., FIEEE, FIET, fellow of EURASIP, and D.Sc. degrees in electronics in 1976, and the Doctorate in 1983. In 2009, he was awarded the Honorary Doctorate ‘‘Doctor Honoris Causa’’ by the Technical University of Budapest. During his 35-year career in telecommunications he has held various research and academic posts in Hungary, Germany, and the U.K. Since 1986, he has been with the School of Electronics and Computer Science, University of Southampton, Southampton, U.K., where he holds the chair in telecommunications. He has successfully supervised 80 Ph.D. students, co-authored 20 John Wiley/IEEE Press books on mobile radio communications totaling in excess of 10 000 pages, published 1300+ research entries, acted both as TPC and General Chair of IEEE conferences, presented keynote lectures, and has been awarded a number of distinctions. Currently, he is directing a 100-strong academic research team, working on a range of research projects in the field of wireless multimedia communications sponsored by industry, the Engineering and Physical Sciences Research Council, U.K., the European IST Programme and the Mobile Virtual Centre of Excellence, U.K. He is an enthusiastic supporter of industrial and academic liaison and he offers a range of industrial courses. He is also a Governor of the IEEE VTS. From 2008 to 2012, he was the Editor-in-Chief of the IEEE Press and