# WCAN Talks

An archive of WCAN Talks up to Fall 2014 is available separately.

**WCAN Talks are held in 13A EE West unless stated otherwise.**

Date | TBA |

Time-Place | 2:30 PM @ 13A EE West |

Presenter | Burak Varan |

Title | A Minimax Approach to Supervised Learning |

Abstract |
We will discuss the following paper: Farzan Farnia and David Tse, "A Minimax Approach to Supervised Learning." Abstract: Given a task of predicting Y from X, a loss function L, and a set of probability distributions Γ on (X,Y), what is the optimal decision rule minimizing the worst-case expected loss over Γ? In this paper, we address this question by introducing a generalization of the principle of maximum entropy. Applying this principle to sets of distributions with marginal on X constrained to be the empirical marginal from the data, we develop a general minimax approach for supervised learning problems which reduces to the maximum likelihood problem over generalized linear models. Through this framework, we develop two classification algorithms called the minimax SVM and the minimax Brier classifier. The minimax SVM, which is a relaxed version of the standard SVM, minimizes the worst-case 0-1 loss over the structured set of distributions, and in our numerical experiments can outperform the SVM. The minimax Brier classifier utilizes the Huber penalty function for robust classification. We also explore the application of the developed framework to robust feature selection. |

Date | 10/05/2016 |

Time-Place | 12:00 PM @ 13A EE West |

Presenter | Basak Guler |

Title | Fundamental Limits of Caching in Wireless D2D Networks |

Abstract |
We will discuss the following paper: Mingyue Ji, Giuseppe Caire, and Andreas F. Molisch, "Fundamental Limits of Caching in Wireless D2D Networks." Abstract: We consider a wireless device-to-device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an infrastructure setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this paper, we consider a D2D infrastructureless version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information-theoretic outer bound within a constant multiplicative factor. |

Date | 8/31/2016 |

Time-Place | 12:00 PM @ 13A EE West |

Presenter | Gan Chao |

Title | A Layered Caching Architecture for the Interference Channel |

Abstract |
We will discuss the following paper: Jad Hachem, Urs Niesen, and Suhas Diggavi "A Layered Caching Architecture for the Interference Channel" Abstract: Recent work has studied the benefits of caching in the interference channel, particularly by placing caches at the transmitters. In this paper, we study the two-user Gaussian interference channel in which caches are placed at both the transmitters and the receivers. We propose a separation strategy that divides the physical and network layers. While a natural separation approach might be to abstract the physical layer into several independent bit pipes at the network layer, we argue that this is inefficient. Instead, the separation approach we propose exposes interacting bit pipes at the network layer, so that the receivers observe related (yet not identical) quantities. We find the optimal strategy within this layered architecture, and we compute the degrees-of-freedom it achieves. Finally, we show that separation is optimal in regimes where the receiver caches are large. |

Date | 8/30/2016 |

Time-Place | 3:00 PM @ 333 IST |

Presenter | Paul Cuff, Department of EE, Princeton University |

Title | Secure Communication through Wiretap Channels |

Abstract |
The secrecy capacity of a wiretap channel is the maximum communication rate for which it can be guaranteed that an eavesdropper (who is listening in on the communication but is affected by noise) gains no knowledge of the message. This work provides new results for the secrecy capacity of wiretap channels with state. We begin with results on arbitrarily varying channels, giving a tight capacity characterization when the state sequence is type-constrained. Variations of Type II wiretap channels, which appear in the recent literature, are special cases of this. The talk then moves to random states known to the encoder but not necessarily the decoder (the Gelfand-Pinsker setting). For this we have a superposition coding scheme which is unusual in that the first layer of the superposition code does not contain any message. This scheme is shown to be better than previously proposed schemes. If time permits, another topic will be briefly discussed: an information-theoretic notion of differential privacy will be presented, which establishes an equivalence between differential privacy and a mutual information constraint. |

Date | 8/25/2016 |

Time-Place | 4:35 PM @ 220 Hammond |

Presenter | Osman Yagan, Department of ECE, Carnegie Mellon University |

Title | Robustness of electrical power systems against cascading failures: optimal load-capacity distributions and hardness of attacking |

Abstract |
Electrical power systems are one of the most important infrastructures that support our society. However, their vulnerabilities have raised great concern recently due to several large-scale blackouts around the world. In this talk, we will present several results concerning the robustness of power systems against cascading failures initiated by a random attack. This will be done under a simple yet useful model based on global and equal redistribution of load upon failures. We provide a comprehensive understanding of system robustness under this model by (i) deriving an expression for the final system size as a function of the size of initial attacks; (ii) deriving the critical attack size after which the system breaks down completely; (iii) showing that complete system breakdown takes place through a first-order (i.e., discontinuous) transition in terms of the attack size; and (iv) establishing the optimal load-capacity distribution that maximizes robustness. In particular, we show that robustness is maximized when the difference between the capacity and initial load is the same for all lines; i.e., when all lines have the same redundant space regardless of their initial load. This is in contrast with the intuitive and commonly used setting where the capacity of a line is a fixed factor of its initial load. We will also consider the case where an adversary can launch a targeted attack, and present several results on the hardness of attacking optimally. Time permitting, we will present several other application areas for the same model with pointers for future work. |
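
The equal-redistribution model in this abstract is simple enough to simulate directly. The sketch below is our own illustrative toy, not the speaker's code: when lines fail, their total load is shed equally onto all surviving lines, and any survivor pushed past its capacity fails in the next round.

```python
import numpy as np

def cascade_survivors(load, capacity, attacked):
    """Toy cascade under global, equal redistribution of load.

    Lines in `attacked` fail first; each round, the total load of the
    newly failed lines is split equally among the survivors, and any
    survivor whose load now exceeds its capacity fails next round.
    Returns the surviving fraction of lines.
    """
    load = np.asarray(load, dtype=float).copy()
    capacity = np.asarray(capacity, dtype=float)
    alive = np.ones(load.size, dtype=bool)
    failed = np.asarray(attacked, dtype=int)
    while failed.size:
        shed = load[failed].sum()
        alive[failed] = False
        survivors = np.flatnonzero(alive)
        if survivors.size == 0:
            return 0.0
        load[survivors] += shed / survivors.size
        failed = survivors[load[survivors] > capacity[survivors]]
    return alive.mean()
```

With capacity = load + constant slack (the distribution the talk identifies as optimal), small attacks are absorbed; with too little slack, a single failure can trigger the complete, discontinuous breakdown the talk describes.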

Date | 4/19/2016 |

Time-Place | 4:15 PM @ 160 Willard |

Presenter | Yingbin Liang, Department of EECS, Syracuse University |

Title | Kernel-based Detection of Anomalous Structures over Large Networks |

Abstract |
This talk will focus on a class of problems whose goal is to detect the existence of an anomalous object over a network. An anomalous object, if it exists, corresponds to a cluster of nodes in the network that take data samples generated by an anomalous distribution q, whereas all other nodes in the network receive samples generated by a distinct distribution p. Such a problem models a variety of applications, such as detection of an anomalous intrusion via sensor networks and detection of an anomalous segment in a DNA sequence. All previous studies of this problem have assumed parametric models, i.e., that the distributions p and q are known. Our work studies the nonparametric model, in which the distributions can be arbitrary and unknown a priori. In this talk, I will first introduce the approach that we apply, which is based on mean embedding of distributions into a reproducing kernel Hilbert space (RKHS). In particular, we adopt the maximum mean discrepancy (MMD) as a metric of distance between mean embeddings of two distributions. I will then present our construction of MMD-based tests for anomaly detection over networks and our analysis of the performance guarantees of the proposed tests. I will also present numerical results that demonstrate the performance of the proposed tests. Towards the end of the talk, I will discuss some related problems and conclude with a few future directions. |
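
MMD compares two samples through their mean embeddings in an RKHS. As a rough illustration of the basic statistic only (not the network tests constructed in the talk), here is a biased estimator of squared MMD with a Gaussian kernel for one-dimensional samples; the bandwidth choice is an arbitrary example:

```python
import numpy as np

def mmd2_biased(x, y, sigma=1.0):
    """Biased estimate of squared MMD between 1-D samples x and y,
    using a Gaussian kernel with bandwidth sigma."""
    def gram(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-d**2 / (2.0 * sigma**2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()
```

A large MMD between a candidate cluster's samples and those of the rest of the network is evidence of an anomalous distribution q, without ever specifying p or q.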

Date | 4/12/2016 |

Time-Place | 4:10 PM @ 13A EE West |

Presenter | Abdelrahman Ibrahim |

Title | Wireless Content Caching for Small Cell and D2D Networks |

Abstract |
We will discuss the following paper: Maria Gregori, Jesus Gomez-Vilardebo, Javier Matamoros, and Deniz Gunduz, "Wireless Content Caching for Small Cell and D2D Networks," IEEE Journal on Selected Areas in Communications, accepted for publication, Mar. 2016. Abstract: Fifth-generation wireless networks must provide fast and reliable connectivity while coping with the ongoing traffic growth. It is of paramount importance that the required resources, such as energy and bandwidth, do not scale with traffic. While the aggregate network traffic is growing at an unprecedented rate, users tend to request the same popular contents at different time instants. Therefore, caching the most popular contents at the network edge is a promising solution to reduce the traffic and the energy consumption over the backhaul links. In this paper, two scenarios are considered, where caching is performed either at a small base station, or directly at the user terminals, which communicate using Device-to-Device (D2D) communications. In both scenarios, joint design of the transmission and caching policies is studied when the user demands are known in advance. This joint design offers two different caching gains, namely, the pre-downloading and local caching gains. It is shown that the finite cache capacity limits the attainable gains, and creates an inherent tradeoff between the two types of gains. In this context, a continuous time optimization problem is formulated to determine the optimal transmission and caching policies that minimize a generic cost function, such as energy, bandwidth, or throughput. The jointly optimal solution is obtained by demonstrating that caching files at a constant rate is optimal, which makes it possible to reformulate the problem as a finite-dimensional convex program. The numerical results show that the proposed joint transmission and caching policy dramatically reduces the total cost, which is particularized to the total energy consumption at the Macro Base Station (MBS), as well as to the total economic cost for the service provider, when users demand economic incentives for delivering content to other users over the D2D links. |

Date | 3/29/2016 |

Time-Place | 4:45 PM @ 209 EE West |

Presenter | Ahmed Zewail |

Title | Fundamental Limits of Caching |

Abstract |
We will discuss the following paper: Mohammad Ali Maddah-Ali and Urs Niesen, "Fundamental Limits of Caching," IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2856-2867, May 2014. Abstract: Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters. |
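
The local vs. global caching gain can be made concrete with the scheme's achievable peak rate. This small sketch evaluates the two rate expressions from the paper for K users, N files, and cache size M files (valid at the scheme's corner points, where M is a multiple of N/K); it is an illustration, not the coding scheme itself:

```python
def uncoded_rate(K, N, M):
    """Conventional caching: K users each fetch the uncached (1 - M/N)
    fraction of their file, so only the local caching gain appears."""
    return K * (1 - M / N)

def coded_rate(K, N, M):
    """Maddah-Ali/Niesen achievable peak rate: the extra factor
    1 / (1 + K*M/N) is the global gain from coded multicasting."""
    return K * (1 - M / N) / (1 + K * M / N)
```

For example, with K = N = 30 and M = 10, the uncoded rate is 20 file units while the coded rate is 20/11 ≈ 1.8, an 11x improvement, i.e., on the order of the number of users as the abstract states.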

Date | 3/22/2016 |

Time-Place | 4:45 PM @ 101 EE East |

Presenter | Basak Guler |

Title | Bayesian Learning in Social Networks |

Abstract |
We will discuss the following paper: Daron Acemoglu, Munther A. Dahleh, Ilan Lobel, and Asuman Ozdaglar, "Bayesian Learning in Social Networks," The Review of Economic Studies, vol. 78, no. 4, pp. 1201-1236, 2011. Abstract: We study the (perfect Bayesian) equilibrium of a sequential learning model over a general social network. Each individual receives a signal about the underlying state of the world, observes the past actions of a stochastically generated neighbourhood of individuals, and chooses one of two possible actions. The stochastic process generating the neighbourhoods defines the network topology. We characterize pure strategy equilibria for arbitrary stochastic and deterministic social networks and characterize the conditions under which there will be asymptotic learning, that is, convergence (in probability) to the right action as the social network becomes large. We show that when private beliefs are unbounded (meaning that the implied likelihood ratios are unbounded), there will be asymptotic learning as long as there is some minimal amount of "expansion in observations". We also characterize conditions under which there will be asymptotic learning when private beliefs are bounded. |

Date | 3/15/2016 |

Time-Place | 4:45 PM @ 209 EE West |

Presenter | Mohamed Nafea |

Title | Information Theoretic Privacy |

Abstract |
We will discuss the following papers: Shahab Asoodeh, Mario Diaz, Fady Alajaji, and Tamas Linder, "Information Extraction Under Privacy Constraints," arXiv:1511.02381, Jan. 2016. Abstract: A privacy-constrained information extraction problem is considered where for a pair of correlated discrete random variables (X, Y) governed by a given joint distribution, an agent observes Y and wants to convey to a potentially public user as much information about Y as possible without compromising the amount of information revealed about X. To this end, the so-called rate-privacy function is investigated to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from Y under a privacy constraint between X and the extracted information, where privacy is measured using either mutual information or maximal correlation. Properties of the rate-privacy function are analyzed and information-theoretic and estimation-theoretic interpretations of it are presented for both the mutual information and maximal correlation privacy measures. It is also shown that the rate-privacy function admits a closed-form expression for a large family of joint distributions of (X, Y). Finally, the rate-privacy function under the mutual information privacy measure is considered for the case where (X, Y) has a joint probability density function by studying the problem where the extracted information is a uniform quantization of Y corrupted by additive Gaussian noise. The asymptotic behavior of the rate-privacy function is studied as the quantization resolution grows without bound and it is observed that not all of the properties of the rate-privacy function carry over from the discrete to the continuous case. Shahab Asoodeh, Fady Alajaji, and Tamas Linder, "Notes on Information-Theoretic Privacy," Proceedings of the 52nd Annual Allerton Conference on Communication, Control, and Computing, Allerton'14, Monticello, IL, Oct. 2014. 
Abstract: We investigate the tradeoff between privacy and utility in a situation where both privacy and utility are measured in terms of mutual information. For the binary case, we fully characterize this tradeoff in the case of perfect privacy and also give an upper bound for the case where some privacy leakage is allowed. We then introduce a new quantity which quantifies the amount of private information contained in the observable data and then connect it to the optimal tradeoff between privacy and utility. |

Date | 3/1/2016 |

Time-Place | 4:30 PM @ 209 EE West |

Presenter | Remi A. Chou |

Title | Agile Broadcast Services: Addressing the Wireless Spectrum Crunch via Coalitional Game Theory |

Abstract |
We will discuss the following paper: Nikhil Karamchandani, Paolo Minero, and Massimo Franceschetti, "Agile Broadcast Services: Addressing the Wireless Spectrum Crunch via Coalitional Game Theory," IEEE Transactions on Wireless Communications, vol. 13, no. 2, pp. 794-808, Feb. 2014. Abstract: The performance of cooperation strategies for broadcast services sharing a common wireless channel is studied in the framework of coalitional game theory. Two strategies are examined. The first represents an open sharing model where each service provider is allowed to transmit at any time but simultaneous transmissions result in interference. It is shown analytically that in the absence of coordination cost, the grand coalition formed by all providers cooperating to avoid simultaneous transmissions is both sum-rate optimal and stable. The second strategy represents an orthogonal access method where service providers are granted exclusive access to a subset of the available channels, each having a guaranteed successful transmission opportunity. In the absence of coordination cost, the grand coalition where all providers cooperate by sharing their guaranteed right to access the channel is sum-rate optimal but unstable, in the sense that some group of providers may have an incentive to deviate from the grand coalition. In the presence of coordination cost, a different scenario arises. In both models large coalitions do not form, and simulation results suggest that the open access model for large networks can lead to a regime where performance is considerably limited by interference. |

Date | 2/19/2016 |

Time-Place | 2:30 PM @ 209 EE West |

Presenter | Ebrahim MolavianJazi |

Title | A Dichotomy of Functions F(X,Y) of Correlated Sources (X,Y) from the Viewpoint of the Achievable Rate Region |

Abstract |
We will discuss the following paper: Te Sun Han and Kingo Kobayashi, "A Dichotomy of Functions F(X,Y) of Correlated Sources (X,Y) from the Viewpoint of the Achievable Rate Region," IEEE Transactions on Information Theory, vol. 33, no. 1, pp. 69-76, Jan. 1987. Abstract: Consider separate encoding of correlated sources X^n = (X_1,...,X_n), Y^n = (Y_1,...,Y_n) for the decoder to reliably reproduce the sequence of function values {F(X_i, Y_i)}_{i=1}^{n}. We establish the necessary and sufficient condition for the set of all achievable rates to coincide with the Slepian-Wolf region whenever the joint probability p(x,y) is positive for all (x,y). |

Date | 2/9/2016 |

Time-Place | 4:20 PM @ 13A EE West |

Presenter | Shiyang Leng |

Title | Optimal Resource Allocation for Pervasive Health Monitoring Systems with Body Sensor Networks |

Abstract |
We will discuss the following paper: Yifeng He, Wenwu Zhu, and Ling Guan, "Optimal Resource Allocation for Pervasive Health Monitoring Systems with Body Sensor Networks," IEEE Transactions on Mobile Computing, vol. 10, no. 11, pp. 1558-1575, Nov. 2011. Abstract: Pervasive health monitoring is an eHealth service, which plays an important role in prevention and early detection of diseases. There are two major challenges in pervasive health monitoring systems with Body Sensor Networks (BSNs). The first challenge is the sustainable power supply for BSNs. The second challenge is Quality of Service (QoS) guarantee for the delivery of data streams. In this paper, we optimize the resource allocations to provide a sustainable and high-quality service in health monitoring systems. Specifically, we formulate and solve two resource optimization problems, respectively. In the first optimization problem, steady-rate optimization problem, we optimize the source rate at each sensor to minimize the rate fluctuation with respect to the average sustainable rate, subject to the requirement of uninterrupted service. The first optimization problem is solved by a proposed analytical solution. The second optimization problem is formulated based on the optimal source rates of the sensors obtained in the steady-rate optimization problem. In the second optimization problem, we jointly optimize the transmission power and the transmission rate at each aggregator to provide QoS guarantee to data delivery. The second optimization problem is converted into a convex optimization problem, which is then solved efficiently. In the simulations, we demonstrate that the proposed optimized scheme enables the pervasive health monitoring system to provide a sustainable service with guaranteed low delay and low Packet Loss Rate (PLR) to subscribers. |

Date | 1/26/2016 |

Time-Place | 6:00 PM @ 13A EE West |

Presenter | Burak Varan |

Title | Optimal Policies for Wireless Networks With Energy Harvesting Transmitters and Receivers: Effects of Decoding Costs |

Abstract |
We will discuss the following paper: Ahmed Arafa and Sennur Ulukus, "Optimal Policies for Wireless Networks With Energy Harvesting Transmitters and Receivers: Effects of Decoding Costs," IEEE Journal on Selected Areas in Communications, vol. 33, no. 12, pp. 2611-2625, Dec. 2015. Abstract: We consider the effects of decoding costs in energy-harvesting communication systems. In our setting, receivers, in addition to transmitters, rely solely on energy harvested from nature, and need to spend some energy in order to decode their intended packets. We model the decoding energy as an increasing convex function of the rate of the incoming data. In this setting, in addition to the traditional energy causality constraints at the transmitters, we have the decoding causality constraints at the receivers, where energy spent by the receiver for decoding cannot exceed its harvested energy. We first consider the point-to-point single-user problem where the goal is to maximize the total throughput by a given deadline subject to both energy and decoding causality constraints. We show that decoding costs at the receiver can be represented as generalized data arrivals at the transmitter, thereby moving all system constraints to the transmitter side. Then, we consider several multiuser settings. We start with a two-hop network where the relay and the destination have decoding costs, and show that separable policies, where the transmitter's throughput is maximized irrespective of the relay's transmission energy profile, are optimal. Next, we consider the multiple access channel (MAC) and the broadcast channel (BC) where the transmitters and the receivers harvest energy from nature, and characterize the maximum departure region. In all multiuser settings considered, we decompose our problems into inner and outer problems. We solve the inner problems by exploiting the structure of the particular model, and solve the outer problems by water-filling algorithms. |
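
The abstract's outer problems are solved by water-filling algorithms. As background only, here is a minimal sketch of the classic water-filling primitive over parallel channels (finding the water level by bisection), not the specific variants the paper develops for its energy-harvesting settings:

```python
import numpy as np

def water_filling(noise, total_power, iters=100):
    """Allocate total_power over parallel channels with noise levels
    `noise`, maximizing sum log(1 + p_i / n_i).  The optimum has the
    form p_i = max(0, mu - n_i); bisect on the water level mu until
    the powers sum to total_power."""
    noise = np.asarray(noise, dtype=float)
    lo, hi = 0.0, noise.max() + total_power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - noise).sum() > total_power:
            hi = mu      # water level too high: pour less
        else:
            lo = mu      # water level too low: pour more
    return np.maximum(0.0, 0.5 * (lo + hi) - noise)
```

For noise levels [1, 2] and total power 3, the water level settles at 3 and the allocation is [2, 1]: more power goes to the less noisy channel.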

Date | 7/1/2015 |

Time-Place | 2:00 PM @ 13A EE West |

Presenter | Basak Guler, Abdelrahman Ibrahim, Ebrahim MolavianJazi, Mohamed Nafea, Burak Varan, and Ahmed Zewail |

Title | Selected ISIT 2015 Publications |

Abstract |
We will discuss the following papers from the proceedings of the 2015 IEEE International Symposium on Information Theory: [1] Himanshu Tyagi, Pramod Viswanath, and Shun Watanabe, "Interactive Communication for Data Exchange", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Basak Guler). [2] Dor Shaviv and Ayfer Ozgur, "Capacity of the AWGN Channel with Random Battery Recharges", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Abdelrahman Ibrahim). [3] Hiroki Koga, "On a Partition of a Finite Set and its Relationships to Encoding Tasks and the Renyi Entropy", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Ebrahim MolavianJazi). [4] Jingbo Liu, Paul Cuff, and Sergio Verdu, "One-Shot Mutual Covering Lemma and Marton's Inner Bound with a Common Message", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Mohamed Nafea). [5] Omur Ozel, Sennur Ulukus, and Pulkit Grover, "Optimal Scheduling for Energy Harvesting Transmitters under Temperature Constraints", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Burak Varan). [6] Rafael F. Schaefer, Ashish Khisti, and H. Vincent Poor, "How to Use Independent Secret Keys for Secure Broadcasting of Common Messages", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Ahmed Zewail). |

Date | 3/30/2015 |

Time-Place | 10:00 AM @ 13A EE West |

Presenter | Janis Notzel |

Title | The Arbitrarily Varying Wiretap Channel |

Abstract |
We will discuss the following papers: [1] Moritz Wiese, Janis Notzel, and Holger Boche, "The Arbitrarily Varying Wiretap Channel - Deterministic and Correlated Random Coding Capacities under the Strong Secrecy Criterion," preprint, available at arXiv:1410.8078, Oct. 2014. [2] Janis Notzel, Moritz Wiese, and Holger Boche, "The Arbitrarily Varying Wiretap Channel - Secret Randomness, Stability and Super-Activation," preprint, available at arXiv:1501.07439, Jan. 2015. Abstract: I give an overview of some recent developments concerning arbitrarily varying wiretap channels, with results mainly taken from [1] and [2]. The common thread will be the role that common randomness (CR) plays for this model. We start with the case where no CR is available. Here, we observe super-activation of the corresponding capacity as well as discontinuities. We characterize the points of discontinuity. Then, we switch attention to the case where small amounts of CR are available. Corresponding capacity formulas will be given. In that regime it is of little importance whether the eavesdropper gets to know the CR or not. If one wants to benefit from large amounts of CR, it has to be kept secret from the eavesdropper. A capacity formula for this case will be given as well. Our results generally involve multi-letter capacity formulae, with the one exception being the case where a large amount of secret common randomness is available. |

Date | 3/11/2015 |

Time-Place | 1:45 PM @ 13A EE West |

Presenter | Abdelrahman Ibrahim and Burak Varan |

Title | Optimal Power Allocation on Discrete Energy Harvesting Model |

Abstract |
We will discuss the following paper: Xiaolei Wang, Jie Gong, Congshi Hu, Sheng Zhou, and Zhisheng Niu, "Optimal Power Allocation on Discrete Energy Harvesting Model," EURASIP Journal on Wireless Communications and Networking, vol. 2015, no. 48, pp. 1-14, Mar. 2015. Abstract: This paper studies the power allocation problem in energy harvesting systems with finite battery. We adopt the discretized energy arrival and power allocation model. Hence, the service process can be modeled as a finite state Markov chain. Based on the discretized model, we analyze the stationary distribution of the Markov chain and formulate the utility maximization problem, which is then reformulated as a linear programming problem. By analyzing the linear programming problem, we provide some intuition on the structure of the optimal power allocation policy and find the condition under which the greedy power allocation is optimal. Numerical simulations show the influence of the energy arrival process on the optimal power allocation policy, and the results are consistent with our analysis. |

Date | 10/24/2014 |

Time-Place | 2:30 PM @ 13A EE West |

Presenter | Ahmed Zewail |

Title | Capacity of Gaussian Channels With Energy Harvesting and Processing Cost |

Abstract |
We will discuss the following paper: Ramachandran Rajesh, Vinod Sharma, and Pramod Viswanath, "Capacity of Gaussian Channels With Energy Harvesting and Processing Cost," IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2563-2575, May 2014. Abstract: Energy harvesting sensor nodes are gaining popularity due to their ability to improve the network lifetime and are becoming a preferred choice for supporting green communication. In this paper, we focus on communicating reliably over an additive white Gaussian noise channel using such an energy harvesting sensor node. An important part of this paper involves appropriate modeling of energy harvesting, as done via various practical architectures. Our main result is the characterization of the Shannon capacity of the communication system. The key technical challenge involves dealing with the dynamic (and stochastic) nature of the (quadratic) cost of the input to the channel. As a corollary, we find close connections between the capacity achieving energy management policies and the queueing theoretic throughput optimal policies. |

Date | 9/26/2014 |

Time-Place | 2:30 PM @ 13A EE West |

Presenter | Mohamed Nafea |

Title | Achieving AWGN Capacity Under Stochastic Energy Harvesting |

Abstract |
We will discuss the following paper: Omur Ozel and Sennur Ulukus, "Achieving AWGN Capacity Under Stochastic Energy Harvesting," IEEE Transactions on Information Theory, vol. 58, no. 10, pp. 6471-6483, Oct. 2012. Abstract: In energy harvesting communication systems, an exogenous recharge process supplies energy necessary for data transmission and the arriving energy can be buffered in a battery before consumption. We determine the information-theoretic capacity of the classical additive white Gaussian noise (AWGN) channel with an energy harvesting transmitter with an unlimited-size battery. As the energy arrives randomly and can be saved in the battery, codewords must obey cumulative stochastic energy constraints. We show that the capacity of the AWGN channel with such stochastic channel input constraints is equal to the capacity with an average power constraint equal to the average recharge rate. We provide two capacity achieving schemes: save-and-transmit and best-effort-transmit. In the save-and-transmit scheme, the transmitter collects energy in a saving phase of proper duration that guarantees that there will be no energy shortages during the transmission of code symbols. In the best-effort-transmit scheme, the transmission starts right away without an initial saving period, and the transmitter sends a code symbol if there is sufficient energy in the battery, and a zero symbol otherwise. Finally, we consider a system in which the average recharge rate is time varying in a larger time scale and derive the optimal offline power policy that maximizes the average throughput, by using majorization theory. |
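
The best-effort-transmit policy described in the abstract is easy to mimic in a few lines. This is our own toy simulation of the policy (quadratic symbol cost, unlimited battery), not the paper's capacity proof:

```python
import numpy as np

def best_effort_transmit(codeword, harvests):
    """Send code symbol x_i only if the battery holds at least x_i**2
    units of energy (quadratic transmit cost); otherwise send a zero
    symbol.  Battery capacity is unlimited, as in the paper's model."""
    battery = 0.0
    sent = np.zeros(len(codeword))
    for i, (x, energy) in enumerate(zip(codeword, harvests)):
        battery += energy            # energy harvested before slot i
        if battery >= x ** 2:
            sent[i] = x
            battery -= x ** 2
    return sent
```

Intuitively, when the codebook's average power matches the average recharge rate, the fraction of slots forced to zero becomes negligible at large blocklength, which is the idea behind the scheme's optimality.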

Date | 9/15/2014 |

Time-Place | 4:00 PM @ 13A EE West |

Presenter | Kaya Tutuncuoglu |

Title | The Capacity of Discrete-Time Memoryless Rayleigh-Fading Channels |

Abstract |
We will discuss the following paper: Ibrahim C. Abou-Faycal, Mitchell D. Trott, and Shlomo Shamai (Shitz), "The Capacity of Discrete-Time Memoryless Rayleigh-Fading Channels," IEEE Transactions on Information Theory, vol. 47, no. 4, pp. 1290-1301, May 2001. Abstract: We consider transmission over a discrete-time Rayleigh fading channel, in which successive symbols face independent fading, and where neither the transmitter nor the receiver has channel state information. Subject to an average power constraint, we study the capacity-achieving distribution of this channel and prove it to be discrete with a finite number of mass points, one of them located at the origin. We numerically compute the capacity and the corresponding optimal distribution as a function of the signal-to-noise ratio (SNR). The behavior of the channel at low SNR is studied and finally a comparison is drawn with the ideal additive white Gaussian noise channel. |

Date | 9/12/2014 |

Time-Place | 12:15 PM @ 13A EE West |

Presenter | Ebrahim MolavianJazi |

Title | On Finite Blocklength Information Theory |

Abstract |
Ebrahim will be talking about his previous work with emphasis on the following papers: Yury Polyanskiy, H. Vincent Poor, and Sergio Verdu, "Channel Coding Rate in the Finite Blocklength Regime," IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307-2359, May 2010. Abstract: This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability e is closely approximated by C - sqrt(V/n) Q^{-1}(e), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function. Ebrahim MolavianJazi and J. Nicholas Laneman, "A Finite Blocklength Perspective on Gaussian Multi-access Channels," submitted to IEEE Transactions on Information Theory, 2014, available at arXiv:1309.2343. Abstract: Motivated by the growing application of wireless multi-access networks with stringent delay constraints, we investigate the Gaussian multiple access channel (MAC) in the finite blocklength regime. Building upon information spectrum concepts, we develop several non-asymptotic inner bounds on channel coding rates over the Gaussian MAC with a given finite blocklength, positive average error probability, and maximal power constraints. Employing Central Limit Theorem (CLT) approximations, we also obtain achievable second-order coding rates for the Gaussian MAC based on an explicit expression for its dispersion matrix. We observe that, unlike the pentagon shape of the asymptotic capacity region, the second-order region has a curved shape with no sharp corners. 
A main emphasis of the paper is to provide a new perspective on the procedure of handling input cost constraints for tight achievability proofs. Contrary to the complicated achievability techniques in the literature, we show that with a proper choice of input distribution, tight bounds can be achieved via the standard random coding argument and a modified typicality decoding. In particular, we prove that codebooks generated randomly according to independent uniform distributions on the respective "power shells" perform far better than both independent and identically distributed (i.i.d.) Gaussian inputs and TDMA with power control. Interestingly, analogous to an error exponent result of Gallager, the resulting achievable region lies roughly halfway between that of the i.i.d. Gaussian inputs and that of a hypothetical "sum-power shell" input. However, dealing with such a non-i.i.d. input requires additional analysis such as a new change of measure technique and application of a Berry-Esseen CLT for functions of random variables. |
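The normal approximation C - sqrt(V/n) Q^{-1}(e) quoted above can be evaluated directly. A minimal Python sketch for the point-to-point AWGN channel, using the standard capacity and dispersion expressions; the SNR and error probability below are illustrative choices, not values from the papers.

```python
import math

def q_inv(eps):
    """Inverse of the complementary Gaussian CDF Q, via bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def awgn_normal_approx(snr, n, eps):
    """Normal approximation C - sqrt(V/n) * Q^{-1}(eps) in bits per
    channel use, with C and V the AWGN capacity and dispersion."""
    c = 0.5 * math.log2(1 + snr)
    v = snr * (snr + 2) / (2 * (snr + 1) ** 2) * math.log2(math.e) ** 2
    return c - math.sqrt(v / n) * q_inv(eps)

# illustrative: 0 dB SNR, error probability 1e-3
rates = [awgn_normal_approx(snr=1.0, n=n, eps=1e-3) for n in (100, 1000, 10000)]
print([round(r, 4) for r in rates])
```

The achievable rate climbs toward the capacity of 0.5 bit/use as n grows, with the gap shrinking like 1/sqrt(n), which is the backoff the finite-blocklength analysis quantifies.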

Date | 9/10/2014 |

Time-Place | 4:00 PM @ 13A EE West |

Presenter | Burak Varan |

Title | Distributed Interference Compensation for Wireless Networks |

Abstract |
Burak will present the following paper: Jianwei Huang, Randall A. Berry, Michael L. Honig, "Distributed Interference Compensation for Wireless Networks," IEEE Journal on Selected Areas in Communications, vol. 24, no. 5, pp. 1074-1084, May 2006. Abstract: We consider a distributed power control scheme for wireless ad hoc networks, in which each user announces a price that reflects compensation paid by other users for their interference. We present an asynchronous distributed algorithm for updating power levels and prices. By relating this algorithm to myopic best response updates in a fictitious game, we are able to characterize convergence using supermodular game theory. Extensions of this algorithm to a multichannel network are also presented, in which users can allocate their power across multiple frequency bands. |
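The price-and-power update loop in the paper can be sketched for two users on a single channel. In the toy version below (an illustrative sketch, not the paper's algorithm verbatim), each user's utility is log(1 + SINR), its announced price is its marginal utility loss per unit interference, and the best response to the other user's price has a closed form; all gains and parameters are hypothetical.

```python
def best_response_pricing(g, n0=0.1, p_max=1.0, iters=200):
    """Alternating price/power updates for a 2-user ad hoc network, in
    the spirit of Huang-Berry-Honig. g[i][j] is the channel gain from
    transmitter j to receiver i. Each user i maximizes
    log(1 + SINR_i) - price_j * g[j][i] * p_i over p_i in [0, p_max],
    then refreshes its own announced price |dU_i/dI_i|."""
    p = [p_max, p_max]
    prices = [0.0, 0.0]
    for t in range(iters):
        i = t % 2                       # users update in turn
        j = 1 - i
        interf = g[i][j] * p[j]         # interference seen at receiver i
        cost = prices[j] * g[j][i]      # marginal cost of hurting user j
        if cost > 0:
            # stationary point of log(1 + g_ii p / (n0 + I)) - cost * p
            p[i] = min(max(1.0 / cost - (n0 + interf) / g[i][i], 0.0), p_max)
        else:
            p[i] = p_max
        # announced price: marginal utility loss per unit interference
        sig = g[i][i] * p[i]
        prices[i] = sig / ((n0 + interf) * (n0 + interf + sig))
    return p

powers = best_response_pricing(g=[[1.0, 0.3], [0.3, 1.0]])
print([round(x, 3) for x in powers])
```

The supermodular-game machinery in the paper is what guarantees that such myopic best-response updates converge, even asynchronously.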

Date | 8/29/2014 |

Time-Place | 12:15 PM @ 13A EE West |

Presenter | Abdelrahman Ibrahim |

Title | On Slotted ALOHA with Opportunistic Energy Harvesting |

Abstract |
Abdelrahman will be talking about his previous work with emphasis on the following papers: Abdelrahman M. Ibrahim, Tamer ElBatt, and Amr El-Keyi, "Coverage Probability Analysis for Wireless Networks Using Repulsive Point Processes," in Proceedings of the 2013 IEEE 24th International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), Sep. 2013, pp. 1002-1007. Abstract: The recently witnessed evolution of cellular networks from a carefully planned deployment to more irregular, heterogeneous deployments of Macro, Pico and Femto-BSs motivates new analysis and design approaches. In this paper, we analyze the coverage probability in cellular networks assuming repulsive point processes for the base station deployment. In particular, we characterize, analytically using stochastic geometry, the downlink probability of coverage under a Matern hardcore point process to ensure minimum distance between the randomly located base stations. Assuming a mobile user connects to the nearest base station and Rayleigh fading, we derive two lower-bound expressions on the downlink probability of coverage that are within 4% of the simulated scenario. To validate our model, we compare the probability of coverage of the Matern hardcore topology against an actual base station deployment obtained from a public database. The comparison shows that the actual base station deployment can be fitted by setting the appropriate Matern point process density. Abdelrahman M. Ibrahim, Ozgur Ercetin, and Tamer ElBatt, "Analysis of Slotted ALOHA with Opportunistic RF Energy Harvesting," 2014. Abstract: Energy harvesting (EH) technology is a prominent solution for designing energy-efficient autonomous wireless networks. In EH-networks, we utilize the ambient energy sources to replenish the batteries of the EH-nodes. In particular, RF signals radiated by ambient transmitters become a viable new source for EH-networks. 
However, RF energy harvesting imposes new challenges in the design of wireless networks. In this work, we investigate the performance of an ALOHA random access wireless network consisting of nodes with and without RF energy harvesting capability. We consider a wireless network consisting of two source nodes, where the type I node has an unlimited energy supply and the type II node is powered by an RF energy harvesting circuit. The transmissions of the type I node are recycled by the type II node to replenish its battery. Our contribution is multi-fold: first, we propose a discrete-time stochastic process for modeling the RF energy harvesting process. Second, we generalize the stochastic dominance technique for analyzing RF EH-networks. Third, we characterize the stable throughput region for our RF EH-network under two RF energy modes. Fourth, we investigate the impact of finite-capacity batteries on the stable throughput region. |
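The two-node setup described above lends itself to a quick slot-level simulation. The following is a toy sketch, loosely in the spirit of the paper (it assumes saturated queues and a simple collision channel; all probabilities and the battery size are illustrative, not the paper's model parameters).

```python
import random

def simulate_rf_aloha(slots=100_000, q1=0.4, q2=0.6, harvest_prob=0.5,
                      battery_cap=5, seed=1):
    """Slotted ALOHA with an RF-harvesting node. The type I node (node 1)
    has unlimited energy and transmits with probability q1; the type II
    node (node 2) needs one energy unit per transmission and recycles
    node 1's transmissions to charge a finite battery. Collision channel:
    a slot succeeds only if exactly one node transmits."""
    rng = random.Random(seed)
    battery, succ1, succ2 = 0, 0, 0
    for _ in range(slots):
        tx1 = rng.random() < q1
        tx2 = battery > 0 and rng.random() < q2
        if tx2:
            battery -= 1                       # energy spent on the attempt
        if tx1 and not tx2:
            succ1 += 1
            # node 2, being silent, can harvest from node 1's RF signal
            if rng.random() < harvest_prob and battery < battery_cap:
                battery += 1
        elif tx2 and not tx1:
            succ2 += 1
    return succ1 / slots, succ2 / slots

t1, t2 = simulate_rf_aloha()
print(round(t1, 3), round(t2, 3))
```

The coupling visible here, where node 2's energy supply depends on node 1's traffic, is exactly what makes the stable throughput analysis require the stochastic dominance technique rather than treating the queues independently.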

Date | 8/25/2014 |

Time-Place | 1:00 PM @ 13A EE West |

Presenter | Mahmoud Ashour |

Title | On Cognitive Radio with Energy Harvesting |

Abstract |
Mahmoud will be talking about his previous work with emphasis on the following paper: Ahmed El Shafie, Mahmoud Ashour, Tamer Khattab, and Amr Mohamed, "On Spectrum Sharing Between Energy Harvesting Cognitive Radio Users and Primary Users," preprint, available at arXiv:1407.7267, Jul. 2014. Abstract: This paper investigates the maximum secondary throughput for a rechargeable secondary user (SU) sharing the spectrum with a primary user (PU) plugged to a reliable power supply. The SU maintains a finite energy queue and harvests energy from natural resources and primary radio frequency (RF) transmissions. We propose a power allocation policy at the PU and analyze its effect on the throughput of both the PU and SU. Furthermore, we study the impact of the bursty arrivals at the PU on the energy harvested by the SU from RF transmissions. Moreover, we investigate the impact of the rate of energy harvesting from natural resources on the SU throughput. We assume fading channels and compute exact closed-form expressions for the energy harvested by the SU under fading. Results reveal that the proposed power allocation policy along with the implemented RF energy harvesting at the SU enhance the throughput of both primary and secondary links. |