Wireless Communications & Networking Laboratory

Department of Electrical Engineering




WCAN Talks


Click here for WCAN Talks up to Fall 2014

WCAN Talks are held in 13A EE West unless stated otherwise.

Date 4/23/2019
Time-Place 4:30 PM
Presenter Malak Shah
Title Age of Information in Multihop Multicast Networks
Abstract

We will discuss the following paper:

Baturalp Buyukates, Alkan Soysal, and Sennur Ulukus. "Age of Information in Multihop Multicast Networks." arXiv:1812.10455 (2018).

Abstract: We consider the age of information in a multihop multicast network where there is a single source node sending time-sensitive updates to n^L end nodes, and L denotes the number of hops. In the first hop, the source node sends updates to n first-hop receiver nodes, and in the second hop each first-hop receiver node relays the update packets that it has received to n further users that are connected to it. This network architecture continues in further hops such that each receiver node in hop l is connected to n further receiver nodes in hop l+1. We study the age of information experienced by the end nodes, and in particular, its scaling as a function of n. We show that, using an earliest k transmission scheme in each hop, the age of information at the end nodes can be made a constant independent of n. In particular, the source node transmits each update packet to the earliest k_1 of the n first-hop nodes, and each first-hop node that receives the update relays it to the earliest k_2 out of n second-hop nodes that are connected to it and so on. We determine the optimum k_l stopping value for each hop l for arbitrary shifted exponential link delays.
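
As a rough illustration of the earliest-k idea (my own Python sketch, not the paper's analysis), the per-hop time under such a scheme is the k-th smallest of n i.i.d. shifted exponential link delays; the snippet below estimates that order statistic and shows that waiting for all n receivers grows with n while stopping at an earliest fraction of them does not. The parameters c, lam and the choice k = n/2 are illustrative only; the paper optimizes the stopping values k_l.

import numpy as np

rng = np.random.default_rng(0)

def mean_time_for_earliest_k(n, k, c=1.0, lam=1.0, trials=20000):
    """Estimate E[time until the earliest k of n links finish], delay = c + Exp(lam)."""
    delays = c + rng.exponential(1.0 / lam, size=(trials, n))
    kth_smallest = np.partition(delays, k - 1, axis=1)[:, k - 1]  # k-th smallest per trial
    return kth_smallest.mean()

for n in (10, 100, 1000):
    wait_all = mean_time_for_earliest_k(n, n)                # wait for every receiver
    wait_half = mean_time_for_earliest_k(n, max(1, n // 2))  # stop at the earliest half
    print(f"n={n:5d}  wait for all: {wait_all:5.2f}   wait for earliest n/2: {wait_half:5.2f}")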

 

Date 4/16/2019
Time-Place 4:30 PM
Presenter Malak Shah
Title A Blockchain Example for Cooperative Interference Management
Abstract

We will discuss the following paper:

Aly El Gamal, Hesham El Gamal. "A Blockchain Example for Cooperative Interference Management." arXiv:1808.015389 (2018).

Abstract: We present an example where a distributed coordinated protocol supported by a blockchain-enabled monetary mechanism leads to achieving optimal information theoretic degrees of freedom gains. The considered setting is that of a linear interference network, where cooperative transmission is allowed, but at no cost in terms of the overall backhaul load. In other words, the average number of messages assigned to a transmitter is one. We show that a simple monetary mechanism that consists only of one coin type can enable the achievability of the optimal centralized solution. The proposed greedy distributed algorithm relies on incentivizing the users to share their resources in one channel use, in return for credit they receive for maximizing their rate gains in future channel uses. This example is the first in its class and it opens the door for constructing a unified framework for blockchain-enabled monetary mechanisms for optimal interference management and spectrum sharing.

 

Date 3/26/2019
Time-Place 4:30 PM
Presenter Abdelrahman Ibrahim
Title Communication-Efficient Learning of Deep Networks from Decentralized Data
Abstract

We will discuss the following paper:

H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas. "Communication-Efficient Learning of Deep Networks from Decentralized Data." arXiv:1602.05629 (2016).

Abstract: Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent.
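
The following is a minimal federated averaging sketch in Python (my own toy, not the authors' implementation): a few simulated clients with unbalanced, shifted datasets run local gradient steps on a linear model, and a server averages the resulting models weighted by sample counts. All data, learning rates, and round counts are made up for illustration.

import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Unbalanced, non-identical client datasets (each client gets its own feature shift).
clients = []
for n_k in [20, 200, 50]:
    X = rng.normal(size=(n_k, 2)) + rng.normal(scale=0.5, size=2)
    y = X @ true_w + 0.1 * rng.normal(size=n_k)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few epochs of full-batch gradient descent on one client's squared loss."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
total = sum(len(y) for _, y in clients)
for _ in range(20):  # communication rounds
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = sum(len(y) / total * w_k for (X, y), w_k in zip(clients, local_models))

print("global model after averaging:", w_global, " (true:", true_w, ")")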

 

Date 3/12/2019
Time-Place 4:30 PM
Presenter Omar Sleem
Title Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks
Abstract

We will discuss the following paper:

Yasar Sinan Nasir, Dongning Guo. "Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks." arXiv:1808.00490 (2018).

Abstract: This work demonstrates the potential of deep reinforcement learning techniques for transmit power control in emerging and future wireless networks. Various techniques have been proposed in the literature to find near-optimal power allocations, often by solving a challenging optimization problem. Most of these algorithms are not scalable to large networks in real-world scenarios because of their computational complexity and instantaneous cross-cell channel state information (CSI) requirement. In this paper, a model-free distributively executed dynamic power allocation scheme is developed based on deep reinforcement learning. Each transmitter collects CSI and quality of service (QoS) information from several neighbors and adapts its own transmit power accordingly. The objective is to maximize a weighted sum-rate utility function, which can be particularized to achieve maximum sum-rate or proportionally fair scheduling (with weights that are changing over time). Both random variations and delays in the CSI are inherently addressed using deep Q-learning. For a typical network architecture, the proposed algorithm is shown to achieve near-optimal power allocation in real time based on delayed CSI measurements available to the agents. This work indicates that deep reinforcement learning based radio resource management can be very fast and deliver highly competitive performance, especially in practical scenarios where the system model is inaccurate and CSI delay is non-negligible.
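
To make the objective concrete, here is a small Python sketch (my own, not the paper's deep Q-learning agent) that evaluates the weighted sum-rate utility for a toy interference network and compares full-power transmission with a crude search over on/off power patterns. The channel model, weights, and noise level are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(2)
N = 5                                  # number of transmitter-receiver links
G = rng.exponential(size=(N, N))       # |h_ij|^2: gain from transmitter j to receiver i
G += np.diag(5.0 * np.ones(N))         # strengthen direct links for the toy example
noise = 1.0
w = np.ones(N)                         # per-link weights (all equal here)

def weighted_sum_rate(p):
    """Sum_i w_i * log2(1 + SINR_i) for power vector p."""
    signal = np.diag(G) * p
    interference = G @ p - signal      # received power from all other transmitters
    sinr = signal / (interference + noise)
    return float(np.sum(w * np.log2(1.0 + sinr)))

p_full = np.ones(N)                    # everyone transmits at full (unit) power
p_best = max((rng.integers(0, 2, N).astype(float) for _ in range(200)),
             key=weighted_sum_rate)    # crude search over on/off patterns
print("full power :", round(weighted_sum_rate(p_full), 3))
print("best on/off:", round(weighted_sum_rate(p_best), 3), p_best)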

 

Date 2/19/2019
Time-Place 4:30 PM @ 127 EE East
Presenter Abdelrahman Ibrahim
Title CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning
Abstract

We will discuss the following paper:

Jinhyun So, Basak Guler, A. Salman Avestimehr, Payman Mohassel. "CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning." arXiv:1902.00641 (2019).

Abstract: How to train a machine learning model while keeping the data private and secure? We present CodedPrivateML, a fast and scalable approach to this critical problem. CodedPrivateML keeps both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed workers. We characterize CodedPrivateML's privacy threshold and prove its convergence for logistic (and linear) regression. Furthermore, via experiments over Amazon EC2, we demonstrate that CodedPrivateML can provide an order of magnitude speedup (up to ~ 34×) over the state-of-the-art cryptographic approaches.

 

Date 2/5/2019
Time-Place 4:30 PM
Presenter Shiyang Leng
Title Optimizing Age of Information in Wireless Networks with Throughput Constraints
Abstract

We will discuss the following paper:

Igor Kadota, Abhishek Sinha and Eytan Modiano. "Optimizing Age of Information in Wireless Networks with Throughput Constraints." IEEE INFOCOM 2018.

Abstract: Age of Information (AoI) is a performance metric that captures the freshness of the information from the perspective of the destination. The AoI measures the time that elapsed since the generation of the packet that was most recently delivered to the destination. In this paper, we consider a single-hop wireless network with a number of nodes transmitting time-sensitive information to a Base Station and address the problem of minimizing the Expected Weighted Sum AoI of the network while simultaneously satisfying timely-throughput constraints from the nodes. We develop three low-complexity transmission scheduling policies that attempt to minimize AoI subject to minimum throughput requirements and evaluate their performance against the optimal policy. In particular, we develop a randomized policy, a Max-Weight policy and a Whittle's Index policy, and show that they are guaranteed to be within a factor of two, four and eight, respectively, away from the minimum AoI possible. In contrast, simulation results show that Max-Weight outperforms the other policies, both in terms of AoI and throughput, in every network configuration simulated, and achieves near optimal performance.

 

Date 12/4/2018
Time-Place 4:00 PM
Presenter Ahmed Zewail
Title Cache-Aided Private Information Retrieval With Partially Known Uncoded Prefetching: Fundamental Limits
Abstract

We will discuss the following paper:

Yi-Peng Wei, Karim Banawan, and Sennur Ulukus. "Cache-Aided Private Information Retrieval With Partially Known Uncoded Prefetching: Fundamental Limits." IEEE Journal on Selected Areas in Communications, vol. 36, no. 6, June 2018.

Abstract: We consider the problem of private information retrieval from N non-colluding and replicated databases, when the user is equipped with a cache that holds an uncoded fraction r of the symbols from each of the K stored messages in the databases. This model operates in a two-phase scheme, namely, the prefetching phase where the user acquires side information and the retrieval phase where the user privately downloads the desired message. In the prefetching phase, the user receives r/N uncoded fraction of each message from the nth database. This side information is known only to the nth database and unknown to the remaining databases, i.e., the user possesses partially known side information. We investigate the optimal normalized download cost D*(r) in the retrieval phase as a function of K, N, and r. We develop lower and upper bounds for the optimal download cost. The bounds match in general for the cases of very low caching ratio and very high caching ratio. We fully characterize the optimal download cost caching ratio tradeoff for K = 3. For general K, N, and r values, we show that the largest additive gap between the achievability and the converse bounds is 5/32.

 

Date 11/13/2018
Time-Place 4:35 PM @ 358 Willard
Presenter Prof. Matthieu R. Bloch
Title Towards covert and secret quantum networks
Abstract

Despite steady progress in "post-quantum" cryptography, quantum-secured communication, especially in the form of Quantum Key Distribution (QKD), remains to date the only unconditionally secure technology to distribute secret keys. Quantum communication has effectively "leaped out of the lab" as most recently demonstrated in January 2018 with the deployment of a satellite-relayed intercontinental quantum network between China and Austria, leveraging the unique possibilities offered by the Micius quantum communication satellite.

In this talk we will discuss the possibility of deploying quantum key distribution protocols that are also covert, in the sense of being provably undetectable by an adversary. While covert key generation over quantum channels is not possible under the same assumptions as QKD, we will show that, perhaps surprisingly, covert secret key generation is possible under mild assumptions regarding the quantum channels. In particular, we will provide a more nuanced perspective regarding the impossibility of covert key expansion in covert quantum key distribution. We will also discuss the construction of reconciliation algorithms for covert secret key generation, where the main challenge is to efficiently process the diffuse information that is embedded in covert signals. We show that astute signaling and coding techniques enable one to "concentrate" the information and approach the information-theoretic performance with low complexity.

 

Date 11/6/2018
Time-Place 4:00 PM
Presenter Mohamed Nafea
Title Generative Adversarial Privacy
Abstract

We will discuss the following paper:

C. Huang, P. Kairouz, X. Chen, L. Sankar, and R. Rajagopal. "Generative Adversarial Privacy." ICML (2018).

Abstract: We present a data-driven framework called generative adversarial privacy (GAP). Inspired by recent advancements in generative adversarial networks (GANs), GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. We show that for appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. We also evaluate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI face database.

 

Date 10/16/2018
Time-Place 4:00 PM
Presenter Omar Sleem
Title Real-Time Status: How Often Should One Update?
Abstract

We will discuss the following paper:

Sanjit Kaul, Roy Yates, and Marco Gruteser. "Real-Time Status: How Often Should One Update?" IEEE INFOCOM (2012).

Abstract: Increasingly ubiquitous communication networks and connectivity via portable devices have engendered a host of applications in which sources, for example people and environmental sensors, send updates of their status to interested recipients. These applications desire status updates at the recipients to be as timely as possible; however, this is typically constrained by limited network resources. In this paper, we employ a time-average age metric for the performance evaluation of status update systems. We derive general methods for calculating the age metric that can be applied to a broad class of service systems. We apply these methods to queue-theoretic system abstractions consisting of a source, a service facility and monitors, with the model of the service facility (physical constraints) a given. The queue discipline of first-come-first-served (FCFS) is explored. We show the existence of an optimal rate at which a source must generate its information to keep its status as timely as possible at all its monitors. This rate differs from those that maximize utilization (throughput) or minimize status packet delivery delay. While our abstractions are simpler than their real-world counterparts, the insights obtained, we believe, are a useful starting point in understanding and designing systems that support real time status updates.
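
A minimal discrete-event sketch in Python (my own, not the paper's analysis) that simulates the M/M/1 first-come-first-served status-update queue and estimates the time-average age for several generation rates; it illustrates the paper's point that an intermediate rate, rather than the throughput- or delay-optimal one, minimizes age. The unit service rate and simulation length are arbitrary.

import numpy as np

rng = np.random.default_rng(3)

def average_age(lam, mu=1.0, num_updates=100_000):
    """Time-average age for Poisson(lam) generation, Exp(mu) service, FCFS."""
    arrivals = np.cumsum(rng.exponential(1.0 / lam, num_updates))   # generation times
    services = rng.exponential(1.0 / mu, num_updates)
    departures = np.empty(num_updates)
    t = 0.0
    for i in range(num_updates):                                    # FCFS departures
        t = max(t, arrivals[i]) + services[i]
        departures[i] = t
    # Age is a sawtooth: just after delivery i it equals departures[i]-arrivals[i],
    # then grows linearly until delivery i+1. Integrate trapezoids between deliveries.
    lo = departures[:-1] - arrivals[:-1]
    hi = departures[1:] - arrivals[:-1]
    area = 0.5 * (lo + hi) * (departures[1:] - departures[:-1])
    return area.sum() / (departures[-1] - departures[0])

for lam in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"lambda={lam:.1f}  average age ~ {average_age(lam):.2f}")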

 

Date 10/02/2018
Time-Place 12:00 PM
Presenter Ahmed Zewail
Title Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy
Abstract

We will discuss the following paper:

Qian Yu, Netanel Raviv, Jinhyun So, and A. Salman Avestimehr. "Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy." arXiv:1806.00939 (2018).

Abstract: We consider a distributed computing scenario that involves computations over a massive dataset that is stored distributedly across multiple workers. We propose Lagrange Coded Computing, a new framework to simultaneously provide (1) resiliency against straggler workers that may prolong computations; (2) security against Byzantine (or malicious) workers that deliberately modify the computation for their benefit; and (3) (information-theoretic) privacy of the dataset amidst possible collusion of workers. Lagrange Coded Computing, which leverages the well-known Lagrange polynomial to create computation redundancy in a novel coded form across the workers, can be applied to any computation scenario in which the function of interest is an arbitrary multivariate polynomial of the input dataset, hence covering many computations of interest in machine learning. Lagrange Coded Computing significantly generalizes prior works to go beyond linear computations that have so far been the main focus in this area. We prove the optimality of Lagrange Coded Computing by developing matching converses, and empirically demonstrate its impact in mitigating stragglers and malicious workers.
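
As a toy numerical illustration of the encoding idea (my own sketch over the reals; the paper works over finite fields and general multivariate polynomials), the snippet below Lagrange-encodes K scalar data blocks, has N workers apply a degree-2 polynomial to their coded shares, and decodes the desired results from any 5 worker outputs despite two stragglers.

import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree-(len(xs)-1) interpolating polynomial at x."""
    total = 0.0
    for j in range(len(xs)):
        term = ys[j]
        for m in range(len(xs)):
            if m != j:
                term *= (x - xs[m]) / (xs[j] - xs[m])
        total += term
    return total

f = lambda x: x ** 2                   # the (polynomial) computation of interest, degree 2
K, N = 3, 7                            # K data blocks, N workers
X = np.array([4.0, 7.0, 1.0])          # the dataset (scalars for simplicity)
betas = np.array([0.0, 1.0, 2.0])      # interpolation points encoding the data
alphas = np.arange(3.0, 3.0 + N)       # evaluation points handed to the workers

# Encoder: u is the degree-(K-1) polynomial with u(beta_j) = X_j; worker i gets u(alpha_i).
coded = [lagrange_eval(betas, X, a) for a in alphas]

# Each worker applies f to its coded share; f(u(.)) has degree deg(f)*(K-1) = 4,
# so any 5 worker results suffice. Pretend workers 2 and 5 straggle.
results = {i: f(c) for i, c in enumerate(coded) if i not in (2, 5)}
avail = sorted(results)[:5]

# Decoder: interpolate f(u(.)) from the available points, then read off f(X_j) = f(u(beta_j)).
decoded = [lagrange_eval([alphas[i] for i in avail], [results[i] for i in avail], b)
           for b in betas]
print("decoded f(X):", np.round(decoded, 6), "  direct f(X):", f(X))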

 

Date 9/18/2018
Time-Place 4:00 PM
Presenter Abdelrahman Ibrahim
Title A Unified Coding Framework for Distributed Computing with Straggling Servers
Abstract

We will discuss the following paper:

Songze Li, Mohammad Ali Maddah-Ali, and A. Salman Avestimehr. "A Unified Coding Framework for Distributed Computing with Straggling Servers." arXiv:1609.01690 (2016).

Abstract: We propose a unified coded framework for distributed computing with straggling servers, by introducing a tradeoff between "latency of computation" and "load of communication" for some linear computation tasks. We show that the coded scheme of [1]-[3] that repeats the intermediate computations to create coded multicasting opportunities to reduce communication load, and the coded scheme of [4], [5] that generates redundant intermediate computations to combat against straggling servers can be viewed as special instances of the proposed framework, by considering two extremes of this tradeoff: minimizing either the load of communication or the latency of computation individually. Furthermore, the latency-load tradeoff achieved by the proposed coded framework allows one to systematically operate at any point on that tradeoff to perform distributed computing tasks. We also prove an information-theoretic lower bound on the latency-load tradeoff, which is shown to be within a constant multiplicative gap from the achieved tradeoff at the two end points.

 

Date 9/11/2018
Time-Place 4:00 PM
Presenter Shiyang Leng
Title Age-Minimal Transmission for Energy Harvesting Sensors with Finite Batteries: Online Policies
Abstract

We will discuss the following paper:

Ahmed Arafa, Jing Yang, Sennur Ulukus, and H. Vincent Poor. "Age-Minimal Transmission for Energy Harvesting Sensors with Finite Batteries: Online Policies." arXiv:1806.07271v1 (2018).

Abstract: An energy-harvesting sensor node that is sending status updates to a destination is considered. The sensor is equipped with a battery of finite size to save its incoming energy, and consumes one unit of energy per status update transmission, which is delivered to the destination instantly over an error-free channel. The setting is online in which the harvested energy is revealed to the sensor causally over time after it arrives, and the goal is to design status update transmission times (policy) such that the long term average age of information (AoI) is minimized. The AoI is defined as the time elapsed since the latest update has reached the destination. Two energy arrival models are considered: a random battery recharge (RBR) model, and an incremental battery recharge (IBR) model. In both models, energy arrives according to a Poisson process with unit rate, with values that completely fill up the battery in the RBR model, and with values that fill up the battery incrementally in a unit-by-unit fashion in the IBR model. The key approach to characterizing the optimal status update policy for both models is showing the optimality of renewal policies, in which the inter-update times follow a renewal process in a certain manner that depends on the energy arrival model and the battery size. It is then shown that the optimal renewal policy has an energy-dependent threshold structure, in which the sensor sends a status update only if the AoI grows above a certain threshold that depends on the energy available in its battery. For both the random and the incremental battery recharge models, the optimal energy-dependent thresholds are characterized explicitly, i.e., in closed-form, in terms of the optimal long term average AoI. It is also shown that the optimal thresholds are monotonically decreasing in the energy available in the battery, and that the smallest threshold, which comes into effect when the battery is full, is equal to the optimal long term average AoI.

 

Date 4/17/2018
Time-Place 4:00 PM @ 127 EE East
Presenter Mohamed Nafea
Title Information-theoretic Analysis of Generalization Capability of Learning Algorithms
Abstract

We will discuss the following paper:

Aolin Xu and Maxim Raginsky. "Information-theoretic Analysis of Generalization Capability of Learning Algorithms." arXiv:1705.07809 (2017).

Abstract: We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The bounds provide an information-theoretic understanding of generalization in learning problems, and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propose a number of methods for this purpose, among which are algorithms that regularize the ERM algorithm with relative entropy or with random noise. Our work extends and leads to nontrivial improvements on the recent results of Russo and Zou.

 

Date 4/10/2018
Time-Place 4:00 PM @ 127 EE East
Presenter Shiyang Leng
Title A Multi-Leader Stackelberg Game for Two-Hop Systems with Wireless Energy Transfer
Abstract

We will discuss the following paper:

Shiyang Leng and Aylin Yener. "A Multi-Leader Stackelberg Game for Two-Hop Systems with Wireless Energy Transfer." to appear in Proc. WCNC (2018).

Abstract: We study a two-hop network with wireless energy transfer (WET) from the source to multiple energy harvesting relays. Both the source and relays intend to transmit dedicated information to the destination. The source, without direct reliable channels to the destination, needs the relays to forward signals, while the relays are short of energy and have to harvest energy from the source to transmit their own data and relay the source's data. Relays use time division to harvest and then transmit. For the multiple access channel (MAC) from the relays to the destination, we consider both time division multiple access (TDMA) between the relays and simultaneous transmission (ST) by all relays. The source and the relays are all selfish and aim to maximize their own utility. We take a game theoretic viewpoint to model the hierarchical competition between the source and the relays. In particular, multi-leader Stackelberg games are formulated where the relays play as the leaders and the source plays as the follower. The existence and the uniqueness of Stackelberg equilibrium (SE) are analyzed, based on which algorithms are proposed to achieve the SE. The numerical results verify that the proposed algorithms improve the system performance compared to the baseline scheme.

 

Date 4/03/2018
Time-Place 4:00 PM @ 127 EE East
Presenter Ahmed Zewail
Title Efficient File Delivery for Coded Prefetching in Shared Cache Networks with Multiple Requests Per User
Abstract

We will discuss the following paper:

Haisheng Xu, Chen Gong, and Xiaodong Wang. "Efficient File Delivery for Coded Prefetching in Shared Cache Networks with Multiple Requests Per User." arXiv:1803.09408 (2018).

Abstract: We consider a centralized caching network, where a server serves several groups of users, each having a common shared fixed-size cache and requesting multiple files. An existing coded prefetching scheme is employed where each file is broken into multiple fragments and each cache stores multiple coded packets each formed by XORing fragments from different files. For such a system, we propose an efficient file delivery scheme by the server to meet the multi-requests of all user-groups. Specifically, the stored coded packets of each cache are classified into four types based on the composition of the file fragments. For each packet type, the corresponding delivery strategy is developed. The rate as well as the worst rate of the proposed delivery scheme are analyzed. We show that our caching model and delivery scheme can incorporate some existing coded caching schemes as special cases. Moreover, for the special case of uniform requests and uncoded prefetching, we make a comparison with an existing scheme, and show that our approach achieves better performance. We also provide numerical results on the delivery rate for the proposed scheme.

 

Date 3/27/2018
Time-Place 4:45 PM @ 127 EE East
Presenter Dr. Sudarshan Guruacharya
Title Self-Sustainability of Energy Harvesting Systems: Concept, Analysis, and Design
Abstract

Abstract: Ambient energy harvesting is touted as a low cost solution to prolong the life of low-powered devices, reduce the carbon footprint, and make the system self-sustainable. Most research to date has focused either on the physical aspects of the energy conversion process or on the optimal consumption policies of the harvested energy at the system level. However, although intuitively understood, to the best of our knowledge, the idea of self-sustainability is yet to be systematically studied as a performance metric. In this paper, we provide a mathematical definition of the concept of self-sustainability of an energy harvesting system, based on the complementary idea of eventual energy outage, rather than the usual energy outage. In particular, we analyze a harvest-store-consume system with infinite battery capacity, stochastic energy arrivals, and fixed energy consumption rate. Using random walk theory, we identify the necessary condition for the system to be self-sustainable. General formulas are given for the self-sustainability probability in the form of integral equations. Since these integral equations are difficult to solve analytically, an exponential upper bound for eventual energy outage is given using martingales. This bound guarantees that the eventual energy outage can be made arbitrarily small simply by increasing the initial battery energy. We then give an asymptotic formula; and for the special case when the energy arrival follows a Poisson process, we are also able to find an exact formula for the eventual energy outage probability. We also show that the harvest-store-consume system is mathematically equivalent to a GI/G/1 queueing system, which allows us to easily find the energy outage probability, in case the necessary condition for self-sustainability is violated. Monte Carlo simulations verify our analysis.
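
A rough Monte Carlo sketch in Python (my own illustration, not the speaker's analysis) of the harvest-store-consume model: unit-energy Poisson arrivals, a fixed consumption rate, and an initial battery level b0. The outage probability within a long horizon serves as a crude proxy for the eventual energy outage probability, and it shrinks as b0 grows when the harvesting rate exceeds the consumption rate. All rates and horizons are made-up values.

import numpy as np

rng = np.random.default_rng(4)

def outage_prob(b0, lam=1.2, c=1.0, horizon=500.0, trials=1000):
    outages = 0
    for _ in range(trials):
        b, t = float(b0), 0.0
        while t < horizon:
            dt = rng.exponential(1.0 / lam)      # time until the next energy arrival
            if b < c * dt:                       # battery drains before it arrives
                outages += 1
                break
            b, t = b - c * dt + 1.0, t + dt      # consume, then add one unit of energy
    return outages / trials

for b0 in (1, 2, 5, 10):
    print(f"initial battery {b0:2d}: outage probability ~ {outage_prob(b0):.3f}")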

 

Date 2/27/2018
Time-Place 4:30 PM @ 13A EE West
Presenter Abdelrahman Ibrahim
Title A Fundamental Tradeoff between Computation and Communication in Distributed Computing
Abstract

We will discuss the following paper:

Songze Li, Mohammad Ali Maddah-Ali, Qian Yu, and A. Salman Avestimehr. "A Fundamental Tradeoff between Computation and Communication in Distributed Computing." IEEE Transactions on Information Theory, vol. 64, no. 1, pp. 109-128, Jan. 2018.

Abstract: How can we optimally trade extra computing power to reduce the communication load in distributed computing? We answer this question by characterizing a fundamental tradeoff between computation and communication in distributed computing, i.e., the two are inversely proportional to each other. More specifically, a general distributed computing framework, motivated by commonly used structures like MapReduce, is considered, where the overall computation is decomposed into computing a set of "Map" and "Reduce" functions distributedly across multiple computing nodes. A coded scheme, named "coded distributed computing" (CDC), is proposed to demonstrate that increasing the computation load of the Map functions by a factor of r (i.e., evaluating each function at r carefully chosen nodes) can create novel coding opportunities that reduce the communication load by the same factor. An information-theoretic lower bound on the communication load is also provided, which matches the communication load achieved by the CDC scheme. As a result, the optimal computation-communication tradeoff in distributed computing is exactly characterized. Finally, the coding techniques of CDC are applied to the Hadoop TeraSort benchmark to develop a novel CodedTeraSort algorithm, which is empirically demonstrated to speed up the overall job execution by 1.97×-3.39×, for typical settings of interest.
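
For a feel of the tradeoff, the short Python snippet below tabulates the communication load against the computation load r, assuming the normalization L(r) = (1/r)(1 - r/K) for K nodes as I recall it from the paper; please check the paper for the exact definition and normalization.

# Quick sketch of the computation-communication tradeoff, assuming the normalized
# form L(r) = (1/r) * (1 - r/K) for K nodes and computation load r (each Map
# function computed at r nodes); verify the exact normalization in the paper.
K = 10
print(" r   communication load L(r)")
for r in range(1, K + 1):
    L = (1.0 / r) * (1.0 - r / K)
    print(f"{r:2d}   {L:.3f}")
# Doubling r roughly halves L for r << K, i.e., the inverse relationship the paper proves.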

 

Date 2/20/2018
Time-Place 4:30 PM @ 13A EE West
Presenter Shiyang Leng
Title A Learning Theoretic Approach to Energy Harvesting Communication System Optimization
Abstract

We will discuss the following paper:

Pol Blasco, Deniz Gunduz, and Mischa Dohler. "A Learning Theoretic Approach to Energy Harvesting Communication System Optimization." IEEE Transactions on Wireless Communications, vol. 12, no. 4, April 2013.

Abstract: A point-to-point wireless communication system in which the transmitter is equipped with an energy harvesting device and a rechargeable battery, is studied. Both the energy and the data arrivals at the transmitter are modeled as Markov processes. Delay-limited communication is considered assuming that the underlying channel is block fading with memory, and the instantaneous channel state information is available at both the transmitter and the receiver. The expected total transmitted data during the transmitter's activation time is maximized under three different sets of assumptions regarding the information available at the transmitter about the underlying stochastic processes. A learning theoretic approach is introduced, which does not assume any a priori information on the Markov processes governing the communication system. In addition, online and offline optimization problems are studied for the same setting. Full statistical knowledge and causal information on the realizations of the underlying stochastic processes are assumed in the online optimization problem, while the offline optimization problem assumes non-causal knowledge of the realizations in advance. Comparing the optimal solutions in all three frameworks, the performance loss due to the lack of the transmitter's information regarding the behaviors of the underlying Markov processes is quantified.

 

Date 2/01/2018
Time-Place 4:30 PM @ 13A EE West
Presenter Mohamed Nafea
Title Privacy aware learning
Abstract

We will discuss the following paper:

J. Duchi, M. Jordan, and M. Wainwright. "Privacy aware learning." Advances in Neural Information Processing Systems. 2012.

Abstract: We study statistical risk minimization problems under a version of privacy in which the data is kept confidential even from the learner. In this local privacy framework, we establish sharp upper and lower bounds on the convergence rates of statistical estimation procedures. As a consequence, we exhibit a precise tradeoff between the amount of privacy the data preserves and the utility, measured by convergence rate, of any statistical estimator.

 

Date 1/25/2018
Time-Place 4:30 PM @ 13A EE West
Presenter Ahmed Zewail
Title The capacity of private information retrieval
Abstract

We will discuss the following paper:

H. Sun and S. A. Jafar. "The Capacity of Private Information Retrieval." IEEE Transactions on Information Theory, 2017.

Abstract: In the private information retrieval (PIR) problem, a user wishes to retrieve, as efficiently as possible, one out of K messages from N non-communicating databases (each holds all K messages) while revealing nothing about the identity of the desired message index to any individual database. The information theoretic capacity of PIR is the maximum number of bits of desired information that can be privately retrieved per bit of downloaded information. For K messages and N databases, we show that the PIR capacity is (1 + 1/N + 1/N^2 + ... + 1/N^(K-1))^(-1). A remarkable feature of the capacity achieving scheme is that if we eliminate any subset of messages (by setting the message symbols to zero), the resulting scheme also achieves the PIR capacity for the remaining subset of messages.
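
For reference, the capacity expression quoted in the abstract can be evaluated directly; a short Python sketch:

# C = (1 + 1/N + 1/N^2 + ... + 1/N^(K-1))^(-1), evaluated for a few (N, K) pairs.
def pir_capacity(N, K):
    return 1.0 / sum(N ** (-k) for k in range(K))

for N in (2, 3, 5):
    for K in (2, 3, 10):
        print(f"N={N}, K={K}: capacity = {pir_capacity(N, K):.4f}")
# As K grows the capacity approaches 1 - 1/N; with many databases it approaches 1.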

 

Date 1/16/2018
Time-Place 4:30 PM @ 13A EE West
Presenter Abdelrahman Ibrahim
Title Fundamental Limits of Caching
Abstract

We will discuss the following paper:

Mohammad Ali Maddah-Ali and Urs Niesen, "Fundamental Limits of Caching," IEEE Transactions on Information Theory, May 2014.

Abstract: Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.
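
As a quick numerical illustration (my own sketch; verify the expressions against the paper), the snippet below compares the conventional delivery rate K(1 - M/N), which reflects only the local caching gain, with the coded caching rate K(1 - M/N)/(1 + KM/N) of the Maddah-Ali/Niesen scheme for N files, K users, and cache size M.

# Sketch comparing the conventional (local-caching-only) rate with the coded caching
# rate, assuming R_uncoded = K*(1-M/N) and R_coded = R_uncoded/(1+K*M/N) (please verify).
N, K = 20, 20              # N files, K users
print("  M   uncoded rate   coded rate")
for M in (0, 1, 2, 5, 10):
    uncoded = K * (1 - M / N)
    coded = uncoded / (1 + K * M / N)
    print(f"{M:3d}   {uncoded:10.2f}   {coded:9.2f}")
# The gap grows with the aggregate cache K*M -- the "global" caching gain.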

 

Date 10/11/2016
Time-Place 2:30 PM @ 13A EE West
Presenter Burak Varan
Title A Minimax Approach to Supervised Learning
Abstract

We will discuss the following paper:

Farzan Farnia and David Tse. "A Minimax Approach to Supervised Learning."

Abstract: Given a task of predicting Y from X, a loss function L, and a set of probability distributions Γ on (X,Y), what is the optimal decision rule minimizing the worst-case expected loss over Γ? In this paper, we address this question by introducing a generalization of the principle of maximum entropy. Applying this principle to sets of distributions with marginal on X constrained to be the empirical marginal from the data, we develop a general minimax approach for supervised learning problems which reduces to the maximum likelihood problem over generalized linear models. Through this framework, we develop two classification algorithms called the minimax SVM and the minimax Brier classifier. The minimax SVM, which is a relaxed version of the standard SVM, minimizes the worst-case 0-1 loss over the structured set of distributions, and in our numerical experiments can outperform the SVM. The minimax Brier classifier utilizes the Huber penalty function for a robust classification. We also explore the application of the developed framework on robust feature selection.

 

Date 10/05/2016
Time-Place 12:00 PM @ 13A EE West
Presenter Basak Guler
Title Fundamental Limits of Caching in Wireless D2D Networks
Abstract

We will discuss the following paper:

Mingyue Ji, Giuseppe Caire, and Andreas F. Molisch. "Fundamental Limits of Caching in Wireless D2D Networks."

Abstract: We consider a wireless device-to-device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an infrastructure setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this paper, we consider a D2D infrastructureless version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information theoretic outer bound within a constant multiplicative factor.

 

Date 8/31/2016
Time-Place 12:00 PM @ 13A EE West
Presenter Gan Chao
Title A Layered Caching Architecture for the Interference Channel
Abstract

We will discuss the following paper:

Jad Hachem, Urs Niesen, and Suhas Diggavi. "A Layered Caching Architecture for the Interference Channel."

Abstract: Recent work has studied the benefits of caching in the interference channel, particularly by placing caches at the transmitters. In this paper, we study the two-user Gaussian interference channel in which caches are placed at both the transmitters and the receivers. We propose a separation strategy that divides the physical and network layers. While a natural separation approach might be to abstract the physical layer into several independent bit pipes at the network layer, we argue that this is inefficient. Instead, the separation approach we propose exposes interacting bit pipes at the network layer, so that the receivers observe related (yet not identical) quantities. We find the optimal strategy within this layered architecture, and we compute the degrees-of-freedom it achieves. Finally, we show that separation is optimal in regimes where the receiver caches are large.

 

Date 8/30/2016
Time-Place 3:00 PM @ 333 IST
Presenter Paul Cuff, Department of EE, Princeton University
Title Secure Communication through Wiretap Channels
Abstract

The secrecy capacity of a wiretap channel is the maximum communication rate for which it can be guaranteed that an eavesdropper (who is listening in on the communication but is affected by noise) gains no knowledge of the message. This work provides new results for the secrecy capacity of wiretap channels with state. We begin with results on arbitrarily varying channels, giving tight capacity characterization when the state sequence is type-constrained. Variations of Type II wiretap channels, which appear in the recent literature, are special cases of this. The talk then moves to random states known to the encoder but not necessarily the decoder (the Gelfand-Pinsker setting). For this we have a superposition coding scheme which is unusual in that the first layer of the superposition code does not contain any message. This scheme is shown to be better than previously proposed schemes.

If time permits, another topic will be briefly discussed. An information theoretic notion of differential privacy will be presented, which establishes an equivalence between differential privacy and a mutual information constraint.

 

Date 8/25/2016
Time-Place 4:35 PM @ 220 Hammond
Presenter Osman Yagan, Department of ECE, Carnegie Mellon University
Title Robustness of electrical power systems against cascading failures: optimal load-capacity distributions and hardness of attacking
Abstract

Electrical power systems are one of the most important infrastructures that support our society. However, their vulnerabilities have raised great concern recently due to several large-scale blackouts around the world. In this talk, we will present several results concerning the robustness of power systems against cascading failures initiated by a random attack. This will be done under a simple yet useful model based on global and equal redistribution of load upon failures. We provide a comprehensive understanding of system robustness under this model by (i) deriving an expression for the final system size as a function of the size of initial attacks; (ii) deriving the critical attack size after which the system breaks down completely; (iii) showing that complete system breakdown takes place through a first-order (i.e., discontinuous) transition in terms of the attack size; and (iv) establishing the optimal load-capacity distribution that maximizes robustness.

In particular, we show that robustness is maximized when the difference between the capacity and initial load is the same for all lines; i.e., when all lines have the same redundant space regardless of their initial load. This is in contrast with the intuitive and commonly used setting where capacity of a line is a fixed factor of its initial load. We will also consider the case where an adversary can launch a targeted attack, and present several results on the hardness of attacking optimally. Time permitting, we will present several other application areas for the same model with pointers for future work.
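
The following toy Python sketch (my own illustration, not the speaker's model or results) mimics the equal-redistribution cascade: an initial random attack removes some lines, the shed load is split equally over the survivors, any line pushed above its capacity fails, and the process repeats. It compares a proportional free-space rule with an equal free-space rule of the same total capacity; all load distributions and parameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(5)

def cascade(load, cap, attack_frac):
    n = len(load)
    alive = np.ones(n, dtype=bool)
    alive[rng.choice(n, int(attack_frac * n), replace=False)] = False   # initial attack
    load = load.copy()
    shed = load[~alive].sum()                    # load that must be redistributed
    while alive.any():
        load[alive] += shed / alive.sum()        # equal redistribution of the shed load
        failed = alive & (load > cap)
        if not failed.any():
            return alive.mean()                  # fraction of lines that survive
        shed = load[failed].sum()
        alive &= ~failed
    return 0.0

n, a = 20_000, 0.6
load = rng.uniform(0.5, 1.5, n)
cap_prop = (1 + a) * load                 # free space proportional to the initial load
cap_equal = load + a * load.mean()        # same total capacity, equal free space per line
for frac in (0.1, 0.2, 0.3, 0.4):
    print(f"attack {frac:.1f}: proportional slack {cascade(load, cap_prop, frac):.2f}"
          f"   equal slack {cascade(load, cap_equal, frac):.2f}")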

 

Date 4/19/2016
Time-Place 4:15 PM @ 160 Willard
Presenter Yingbin Liang, Department of EECS, Syracuse University
Title Kernel-based Detection of Anomalous Structures over Large Networks
Abstract

This talk will focus on a class of problems whose goal is to detect the existence of an anomalous object over a network. An anomalous object, if it exists, corresponds to a cluster of nodes in the network that take data samples generated by an anomalous distribution q whereas all other nodes in the network receive samples generated by a distinct distribution p. Such a problem models a variety of applications such as detection of an anomalous intrusion via sensor networks and detection of an anomalous segment in a DNA sequence. All previous studies of this problem have taken parametric models, i.e., distributions p and q are known. Our work studies the nonparametric model, in which distributions can be arbitrary and unknown a priori.

In this talk, I will first introduce the approach that we apply, which is based on mean embedding of distributions into a reproducing kernel Hilbert space (RKHS). In particular, we adopt the quantity of maximum mean discrepancy (MMD) as a metric of distance between mean embeddings of two distributions. I will then present our construction of MMD-based tests for anomalous detection over networks and our analysis of performance guarantee of the proposed tests. I will finally present a number of numerical results to demonstrate our results. Towards the end of the talk, I will discuss some related problems and conclude with a few future directions.
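
As a concrete reference point, here is a minimal Python sketch (my own, not the speaker's test) of the maximum mean discrepancy with a Gaussian RBF kernel, i.e., the distance between kernel mean embeddings used in the talk; the kernel bandwidth and sample distributions are arbitrary.

import numpy as np

rng = np.random.default_rng(6)

def rbf(a, b, sigma=1.0):
    """Gaussian RBF kernel matrix between sample sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of MMD^2 between samples x ~ p and y ~ q."""
    kxx, kyy, kxy = rbf(x, x, sigma), rbf(y, y, sigma), rbf(x, y, sigma)
    m, n = len(x), len(y)
    return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
            + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
            - 2 * kxy.mean())

p = rng.normal(0.0, 1.0, size=(500, 1))          # nominal samples
q_same = rng.normal(0.0, 1.0, size=(500, 1))     # another nominal sample
q_anom = rng.normal(0.7, 1.0, size=(500, 1))     # anomalous (shifted-mean) sample
print("MMD^2(p, p'):", round(mmd2_unbiased(p, q_same), 4))   # should be near 0
print("MMD^2(p, q) :", round(mmd2_unbiased(p, q_anom), 4))   # clearly larger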

 

Date 4/12/2016
Time-Place 4:10 PM @ 13A EE West
Presenter Abdelrahman Ibrahim
Title Wireless Content Caching for Small Cell and D2D Networks
Abstract

We will discuss the following paper:

Maria Gregori, Jesus Gomez-Vilardebo, Javier Matamoros, and Deniz Gunduz, "Wireless Content Caching for Small Cell and D2D Networks," IEEE Journal on Selected Areas in Communications, accepted for publication, Mar. 2016.

Abstract: The fifth generation wireless networks must provide fast and reliable connectivity while coping with the ongoing traffic growth. It is of paramount importance that the required resources, such as energy and bandwidth, do not scale with traffic. While the aggregate network traffic is growing at an unprecedented rate, users tend to request the same popular contents at different time instants. Therefore, caching the most popular contents at the network edge is a promising solution to reduce the traffic and the energy consumption over the backhaul links. In this paper, two scenarios are considered, where caching is performed either at a small base station, or directly at the user terminals, which communicate using Device-to-Device (D2D) communications. In both scenarios, joint design of the transmission and caching policies is studied when the user demands are known in advance. This joint design offers two different caching gains, namely, the pre-downloading and local caching gains. It is shown that the finite cache capacity limits the attainable gains, and creates an inherent tradeoff between the two types of gains. In this context, a continuous time optimization problem is formulated to determine the optimal transmission and caching policies that minimize a generic cost function, such as energy, bandwidth, or throughput. The jointly optimal solution is obtained by demonstrating that caching files at a constant rate is optimal, which allows the problem to be reformulated as a finite-dimensional convex program. The numerical results show that the proposed joint transmission and caching policy dramatically reduces the total cost, which is particularised to the total energy consumption at the Macro Base Station (MBS), as well as to the total economical cost for the service provider, when users demand economical incentives for delivering content to other users over the D2D links.

 

Date 3/29/2016
Time-Place 4:45 PM @ 209 EE West
Presenter Ahmed Zewail
Title Fundamental Limits of Caching
Abstract

We will discuss the following paper:

Mohammad Ali Maddah-Ali and Urs Niesen, "Fundamental Limits of Caching," IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2856-2867, May 2014.

Abstract: Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.

 

Date 3/22/2016
Time-Place 4:45 PM @ 101 EE East
Presenter Basak Guler
Title Bayesian Learning in Social Networks
Abstract

We will discuss the following paper:

Daron Acemoglu, Munther A. Dahleh, Ilan Lobel, and Asuman Ozdaglar, "Bayesian Learning in Social Networks," The Review of Economic Studies, vol. 78, no. 4, pp. 1201-1236, 2011.

Abstract: We study the (perfect Bayesian) equilibrium of a sequential learning model over a general social network. Each individual receives a signal about the underlying state of the world, observes the past actions of a stochastically generated neighbourhood of individuals, and chooses one of two possible actions. The stochastic process generating the neighbourhoods defines the network topology. We characterize pure strategy equilibria for arbitrary stochastic and deterministic social networks and characterize the conditions under which there will be asymptotic learning, i.e., convergence (in probability) to the right action, as the social network becomes large. We show that when private beliefs are unbounded (meaning that the implied likelihood ratios are unbounded), there will be asymptotic learning as long as there is some minimal amount of "expansion in observations". We also characterize conditions under which there will be asymptotic learning when private beliefs are bounded.

 

Date 3/15/2016
Time-Place 4:45 PM @ 209 EE West
Presenter Mohamed Nafea
Title Information Theoretic Privacy
Abstract

We will discuss the following papers:

Shahab Asoodeh, Mario Diaz, Fady Alajaji, and Tamas Linder, "Information Extraction Under Privacy Constraints," arXiv:1511.02381, Jan. 2016.

Abstract: A privacy-constrained information extraction problem is considered where for a pair of correlated discrete random variables (X, Y) governed by a given joint distribution, an agent observes Y and wants to convey to a potentially public user as much information about Y as possible without compromising the amount of information revealed about X. To this end, the so-called rate-privacy function is investigated to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from Y under a privacy constraint between X and the extracted information, where privacy is measured using either mutual information or maximal correlation. Properties of the rate-privacy function are analyzed and information-theoretic and estimation-theoretic interpretations of it are presented for both the mutual information and maximal correlation privacy measures. It is also shown that the rate-privacy function admits a closed-form expression for a large family of joint distributions of (X, Y). Finally, the rate-privacy function under the mutual information privacy measure is considered for the case where (X, Y) has a joint probability density function by studying the problem where the extracted information is a uniform quantization of Y corrupted by additive Gaussian noise. The asymptotic behavior of the rate-privacy function is studied as the quantization resolution grows without bound and it is observed that not all of the properties of the rate-privacy function carry over from the discrete to the continuous case.

Shahab Asoodeh, Fady Alajaji, and Tamas Linder, "Notes on Information-Theoretic Privacy," Proceedings of the 52nd Annual Allerton Conference on Communication, Control, and Computing, Allerton'14, Monticello, IL, Oct. 2014.

Abstract: We investigate the tradeoff between privacy and utility in a situation where both privacy and utility are measured in terms of mutual information. For the binary case, we fully characterize this tradeoff in case of perfect privacy and also give an upper-bound for the case where some privacy leakage is allowed. We then introduce a new quantity which quantifies the amount of private information contained in the observable data and then connect it to the optimal tradeoff between privacy and utility.

 

Date 3/1/2016
Time-Place 4:30 PM @ 209 EE West
Presenter Remi A. Chou
Title Agile Broadcast Services: Addressing the Wireless Spectrum Crunch via Coalitional Game Theory
Abstract

We will discuss the following paper:

Nikhil Karamchandani, Paolo Minero, and Massimo Franceschetti, "Agile Broadcast Services: Addressing the Wireless Spectrum Crunch via Coalitional Game Theory," IEEE Transactions on Wireless Communications, vol. 13, no. 2, pp. 794-808, Feb. 2014.

Abstract: The performance of cooperation strategies for broadcast services sharing a common wireless channel is studied in the framework of coalitional game theory. Two strategies are examined. The first represents an open sharing model where each service provider is allowed to transmit at any time but simultaneous transmissions result in interference. It is shown analytically that in the absence of coordination cost, the grand coalition formed by all providers cooperating to avoid simultaneous transmissions is both sum-rate optimal and stable. The second strategy represents an orthogonal access method where service providers are granted exclusive access to a subset of the available channels, each having a guaranteed successful transmission opportunity. In the absence of coordination cost, the grand coalition where all providers cooperate by sharing their guaranteed right to access the channel is sum-rate optimal but unstable, in the sense that some group of providers may have an incentive to deviate from the grand coalition. In the presence of coordination cost, a different scenario arises. In both models large coalitions do not form, and simulation results suggest that the open access model for large networks can lead to a regime where performance is considerably limited by interference.

 

Date 2/19/2016
Time-Place 2:30 PM @ 209 EE West
Presenter Ebrahim MolavianJazi
Title A Dichotomy of Functions F(X,Y) of Correlated Sources (X,Y) from the Viewpoint of the Achievable Rate Region
Abstract

We will discuss the following paper:

Te Sun Han and Kingo Kobayashi, "A Dichotomy of Functions F(X,Y) of Correlated Sources (X,Y) from the Viewpoint of the Achievable Rate Region," IEEE Transactions on Information Theory, vol. 33, no. 1, pp. 69-76, Jan. 1987.

Abstract: Consider separate encoding of correlated sources X^n = (X_1,...,X_n), Y^n = (Y_1,...,Y_n) for the decoder to reliably reproduce the sequence {F(X_i, Y_i)}_{i=1}^n. We establish the necessary and sufficient condition for the set of all achievable rates to coincide with the Slepian-Wolf region whenever the probability density p(x,y) is positive for all (x,y).

 

Date 2/9/2016
Time-Place 4:20 PM @ 13A EE West
Presenter Shiyang Leng
Title Optimal Resource Allocation for Pervasive Health Monitoring Systems with Body Sensor Networks
Abstract

We will discuss the following paper:

Yifeng He, Wenwu Zhu, and Ling Guan, "Optimal Resource Allocation for Pervasive Health Monitoring Systems with Body Sensor Networks," IEEE Transactions on Mobile Computing, vol. 10, no. 11, pp. 1558-1575, Nov. 2011.

Abstract: Pervasive health monitoring is an eHealth service, which plays an important role in prevention and early detection of diseases. There are two major challenges in pervasive health monitoring systems with Body Sensor Networks (BSNs). The first challenge is the sustainable power supply for BSNs. The second challenge is Quality of Service (QoS) guarantee for the delivery of data streams. In this paper, we optimize the resource allocations to provide a sustainable and high-quality service in health monitoring systems. Specifically, we formulate and solve two resource optimization problems, respectively. In the first optimization problem, steady-rate optimization problem, we optimize the source rate at each sensor to minimize the rate fluctuation with respect to the average sustainable rate, subject to the requirement of uninterrupted service. The first optimization problem is solved by a proposed analytical solution. The second optimization problem is formulated based on the optimal source rates of the sensors obtained in the steady-rate optimization problem. In the second optimization problem, we jointly optimize the transmission power and the transmission rate at each aggregator to provide QoS guarantee to data delivery. The second optimization problem is converted into a convex optimization problem, which is then solved efficiently. In the simulations, we demonstrate that the proposed optimized scheme enables the pervasive health monitoring system to provide a sustainable service with guaranteed low delay and low Packet Loss Rate (PLR) to subscribers.

 

Date 1/26/2016
Time-Place 6:00 PM @ 13A EE West
Presenter Burak Varan
Title Optimal Policies for Wireless Networks With Energy Harvesting Transmitters and Receivers: Effects of Decoding Costs
Abstract

We will discuss the following paper:

Ahmed Arafa and Sennur Ulukus, "Optimal Policies for Wireless Networks With Energy Harvesting Transmitters and Receivers: Effects of Decoding Costs," IEEE Journal on Selected Areas in Communications, vol. 33, no. 12, pp. 2611-2625, Dec. 2015.

Abstract: We consider the effects of decoding costs in energy-harvesting communication systems. In our setting, receivers, in addition to transmitters, rely solely on energy harvested from nature, and need to spend some energy in order to decode their intended packets. We model the decoding energy as an increasing convex function of the rate of the incoming data. In this setting, in addition to the traditional energy causality constraints at the transmitters, we have the decoding causality constraints at the receivers, where energy spent by the receiver for decoding cannot exceed its harvested energy. We first consider the point-to-point single-user problem where the goal is to maximize the total throughput by a given deadline subject to both energy and decoding causality constraints. We show that decoding costs at the receiver can be represented as generalized data arrivals at the transmitter, and thereby moving all system constraints to the transmitter side. Then, we consider several multiuser settings. We start with a two-hop network where the relay and the destination have decoding costs, and show that separable policies, where the transmitter's throughput is maximized irrespective of the relay's transmission energy profile, are optimal. Next, we consider the multiple access channel (MAC) and the broadcast channel (BC) where the transmitters and the receivers harvest energy from nature, and characterize the maximum departure region. In all multiuser settings considered, we decompose our problems into inner and outer problems. We solve the inner problems by exploiting the structure of the particular model, and solve the outer problems by water-filling algorithms.

 

Date 7/1/2015
Time-Place 2:00 PM @ 13A EE West
Presenter Basak Guler, Abdelrahman Ibrahim, Ebrahim MolavianJazi, Mohamed Nafea, Burak Varan, and Ahmed Zewail
Title Selected ISIT 2015 Publications
Abstract

We will discuss the following papers from the proceedings of the 2015 IEEE International Symposium on Information Theory:

[1] Himanshu Tyagi, Pramod Viswanath, and Shun Watanabe, "Interactive Communication for Data Exchange", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Basak Guler).

[2] Dor Shaviv and Ayfer Ozgur, "Capacity of the AWGN Channel with Random Battery Recharges", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Abdelrahman Ibrahim).

[3] Hiroki Koga, "On a Partition of a Finite Set and its Relationships to Encoding Tasks and the Renyi Entropy", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Ebrahim MolavianJazi).

[4] Jingbo Liu, Paul Cuff, and Sergio Verdu, "One-Shot Mutual Covering Lemma and Marton's Inner Bound with a Common Message", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Mohamed Nafea).

[5] Omur Ozel, Sennur Ulukus, and Pulkit Grover, "Optimal Scheduling for Energy Harvesting Transmitters under Temperature Constraints", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Burak Varan).

[6] Rafael F. Schaefer, Ashish Khisti, and H. Vincent Poor, "How to Use Independent Secret Keys for Secure Broadcasting of Common Messages", in Proceedings of the 2015 IEEE ISIT, Jun. 2015 (presented by Ahmed Zewail).

 

Date 3/30/2015
Time-Place 10:00 AM @ 13A EE West
Presenter Janis Notzel
Title The Arbitrarily Varying Wiretap Channel
Abstract

We will discuss the following papers:

[1] Moritz Wiese, Janis Notzel, and Holger Boche, "The Arbitrarily Varying Wiretap Channel - Deterministic and Correlated Random Coding Capacities under the Strong Secrecy Criterion," preprint, available at arXiv:1410.8078, Oct. 2014.

[2] Janis Notzel, Moritz Wiese, and Holger Boche, "The Arbitrarily Varying Wiretap Channel - Secret Randomness, Stability and Super-Activation," preprint, available at arXiv:1501.07439, Jan. 2015.

Abstract: I give an overview of some recent developments concerning arbitrarily varying wiretap channels, with results mainly taken from [1] and [2]. The common thread will be the role that common randomness (CR) plays for this model. We start with the case where no CR is available. Here, we observe super-activation of the corresponding capacity as well as discontinuities, and we characterize the points of discontinuity. Then, we switch attention to the case where small amounts of CR are available, and give the corresponding capacity formulas. In that regime it is of little importance whether the eavesdropper gets to know the CR or not. If one wants to benefit from large amounts of CR, the CR has to be kept secret from the eavesdropper; a capacity formula for this case will be given as well. Our results generally involve multi-letter capacity formulae, the one exception being the case where a large amount of secret common randomness is available.

 

Date 3/11/2015
Time-Place 1:45 PM @ 13A EE West
Presenter Abdelrahman Ibrahim and Burak Varan
Title Optimal Power Allocation on Discrete Energy Harvesting Model
Abstract

We will discuss the following paper:

Xiaolei Wang, Jie Gong, Congshi Hu, Sheng Zhou, and Zhisheng Niu, "Optimal Power Allocation on Discrete Energy Harvesting Model," EURASIP Journal on Wireless Communications and Networking, vol. 2015, no. 48, pp. 1-14, Mar. 2015.

Abstract: This paper studies the power allocation problem in energy harvesting systems with a finite battery. We adopt a discretized energy arrival and power allocation model; hence, the service process can be modeled as a finite-state Markov chain. Based on the discretized model, we analyze the stationary distribution of the Markov chain and formulate the utility maximization problem, which is then reformulated as a linear programming problem. By analyzing the linear program, we provide some intuition on the structure of the optimal power allocation policy and find the condition under which the greedy power allocation is optimal. Numerical simulations show the influence of the energy arrival process on the optimal power allocation policy, and the results are consistent with our analysis.
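
As a toy illustration of the discretized model, the sketch below builds the battery-state Markov chain induced by the greedy policy that spends the whole battery in every slot, computes its stationary distribution, and evaluates the resulting average utility. The battery size, arrival distribution, and logarithmic utility are assumptions for illustration, not the paper's setup, and the linear-programming reformulation itself is not shown.

# Toy sketch: battery-state Markov chain under a greedy policy that spends the
# entire battery each slot. Battery levels and energy arrivals are integers.
import numpy as np

B = 4                                    # battery capacity (assumed)
arrival_pmf = {0: 0.3, 1: 0.4, 2: 0.3}   # P(energy arrival = e), assumed
utility = lambda a: np.log1p(a)          # utility of spending a units (assumed)

# Under the greedy policy the battery is emptied every slot, so the next state
# is just the (truncated) new arrival: P[b, b'] = P(min(e, B) = b').
P = np.zeros((B + 1, B + 1))
for b in range(B + 1):
    for e, q in arrival_pmf.items():
        P[b, min(e, B)] += q

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()

avg_utility = sum(pi[b] * utility(b) for b in range(B + 1))
print("stationary distribution:", np.round(pi, 3))
print("average utility of greedy policy:", round(avg_utility, 3))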

 

Date 10/24/2014
Time-Place 2:30 PM @ 13A EE West
Presenter Ahmed Zewail
Title Capacity of Gaussian Channels With Energy Harvesting and Processing Cost
Abstract

We will discuss the following paper:

Ramachandran Rajesh, Vinod Sharma, and Pramod Viswanath, "Capacity of Gaussian Channels With Energy Harvesting and Processing Cost," IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2563-2575, May 2014.

Abstract: Energy harvesting sensor nodes are gaining popularity due to their ability to improve the network lifetime and are becoming a preferred choice for supporting green communication. In this paper, we focus on communicating reliably over an additive white Gaussian noise channel using such an energy harvesting sensor node. An important part of this paper involves appropriate modeling of energy harvesting, as done via various practical architectures. Our main result is the characterization of the Shannon capacity of the communication system. The key technical challenge involves dealing with the dynamic (and stochastic) nature of the (quadratic) cost of the input to the channel. As a corollary, we find close connections between the capacity-achieving energy management policies and the queueing-theoretic throughput-optimal policies.

 

Date 9/26/2014
Time-Place 2:30 PM @ 13A EE West
Presenter Mohamed Nafea
Title Achieving AWGN Capacity Under Stochastic Energy Harvesting
Abstract

We will discuss the following paper:

Omur Ozel and Sennur Ulukus, "Achieving AWGN Capacity Under Stochastic Energy Harvesting," IEEE Transactions on Information Theory, vol. 58, no. 10, pp. 6471-6483, Oct. 2012.

Abstract: In energy harvesting communication systems, an exogenous recharge process supplies energy necessary for data transmission and the arriving energy can be buffered in a battery before consumption. We determine the information-theoretic capacity of the classical additive white Gaussian noise (AWGN) channel with an energy harvesting transmitter with an unlimited sized battery. As the energy arrives randomly and can be saved in the battery, codewords must obey cumulative stochastic energy constraints. We show that the capacity of the AWGN channel with such stochastic channel input constraints is equal to the capacity with an average power constraint equal to the average recharge rate. We provide two capacity achieving schemes: save-and-transmit and best-effort-transmit. In the save-and-transmit scheme, the transmitter collects energy in a saving phase of proper duration that guarantees that there will be no energy shortages during the transmission of code symbols. In the best-effort-transmit scheme, the transmission starts right away without an initial saving period, and the transmitter sends a code symbol if there is sufficient energy in the battery, and a zero symbol otherwise. Finally, we consider a system in which the average recharge rate is time varying in a larger time scale and derive the optimal offline power policy that maximizes the average throughput, by using majorization theory.
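
A quick way to see why best-effort-transmit works is to simulate how rarely the battery runs dry when the code's power is set slightly below the average recharge rate. The Monte Carlo sketch below is only an illustration of that intuition, with assumed i.i.d. exponential energy arrivals; it is not part of the paper's proof. It counts the fraction of code symbols that would have to be replaced by zeros.

# Monte Carlo sketch of the best-effort-transmit idea: send the intended
# Gaussian code symbol whenever the battery holds enough energy, otherwise
# send a zero. With code power slightly below the average recharge rate,
# the fraction of zeroed symbols stays small.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                        # blocklength
avg_recharge = 1.0                 # average harvested energy per channel use
code_power = 0.9 * avg_recharge    # back off slightly from the recharge rate

energy_in = rng.exponential(avg_recharge, size=n)   # assumed i.i.d. arrivals
symbols = rng.normal(0.0, np.sqrt(code_power), size=n)

battery, skipped = 0.0, 0
for e, x in zip(energy_in, symbols):
    battery += e                   # harvest first (unlimited battery)
    cost = x ** 2                  # energy needed for this symbol
    if battery >= cost:
        battery -= cost            # transmit the intended symbol
    else:
        skipped += 1               # send a zero symbol instead

print("fraction of zero symbols:", skipped / n)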

 

Date 9/15/2014
Time-Place 4:00 PM @ 13A EE West
Presenter Kaya Tutuncuoglu
Title The Capacity of Discrete-Time Memoryless Rayleigh-Fading Channels
Abstract

We will discuss the following paper:

Ibrahim C. Abou-Faycal, Mitchell D. Trott, and Shlomo Shamai (Shitz), "The Capacity of Discrete-Time Memoryless Rayleigh-Fading Channels," IEEE Transactions on Information Theory, vol. 47, no. 4, pp. 1290-1301, May 2001.

Abstract: We consider transmission over a discrete-time Rayleigh fading channel, in which successive symbols face independent fading, and where neither the transmitter nor the receiver has channel state information. Subject to an average power constraint, we study the capacity-achieving distribution of this channel and prove it to be discrete with a finite number of mass points, one of them located at the origin. We numerically compute the capacity and the corresponding optimal distribution as a function of the signal-to-noise ratio (SNR). The behavior of the channel at low SNR is studied and finally a comparison is drawn with the ideal additive white Gaussian noise channel.
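
Because the fading and noise are circularly symmetric Gaussian and there is no CSI, the output given an input of magnitude |x| is simply CN(0, 1 + |x|^2), so the mutual information of any discrete input is easy to estimate numerically. The sketch below is an illustrative Monte Carlo evaluation of a two-mass-point (on-off) input of the kind the paper discusses; the SNR and on-probability are arbitrary choices, and no optimization over the mass points is performed.

# Monte Carlo estimate of I(X;Y) in nats for a discrete (on-off) input on the
# memoryless Rayleigh-fading channel y = h*x + n, with h, n ~ CN(0,1), no CSI.
# Given x, y is CN(0, 1 + |x|^2), so the densities are available in closed form.
import numpy as np

rng = np.random.default_rng(1)
N = 500_000

# Two mass points: zero and amplitude A, chosen to meet an average power P.
P, p_on = 1.0, 0.3                   # assumed SNR and on-probability
A = np.sqrt(P / p_on)
points = np.array([0.0, A])
probs = np.array([1 - p_on, p_on])

def density(y, x):
    """p(y|x) for complex y and real nonnegative amplitude x."""
    s = 1.0 + x ** 2
    return np.exp(-np.abs(y) ** 2 / s) / (np.pi * s)

x = rng.choice(points, size=N, p=probs)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
n = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
y = h * x + n

p_y_given_x = density(y, x)
p_y = sum(q * density(y, a) for a, q in zip(points, probs))
I_nats = np.mean(np.log(p_y_given_x / p_y))
print("estimated mutual information:", round(I_nats, 4), "nats/channel use")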

 

Date 9/12/2014
Time-Place 12:15 PM @ 13A EE West
Presenter Ebrahim MolavianJazi
Title On Finite Blocklength Information Theory
Abstract

Ebrahim will be talking about his previous work with emphasis on the following papers:

Yury Polyanskiy, H. Vincent Poor, and Sergio Verdu, "Channel Coding Rate in the Finite Blocklength Regime," IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307-2359, May 2010.

Abstract: This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C - sqrt(V/n) Q^{-1}(ε), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function.
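
As a concrete instance of the C - sqrt(V/n) Q^{-1}(ε) approximation, the snippet below evaluates it for a binary symmetric channel, whose capacity and dispersion have simple closed forms; the crossover probability, blocklength, and target error probability are illustrative choices, not values from the paper.

# Normal approximation R(n, eps) ~ C - sqrt(V/n) * Qinv(eps) for a binary
# symmetric channel with crossover probability p (rates in bits/channel use).
# The channel, blocklength, and error probability are illustrative choices.
import numpy as np
from scipy.stats import norm

p, n, eps = 0.11, 500, 1e-3

h2 = lambda q: -q * np.log2(q) - (1 - q) * np.log2(1 - q)   # binary entropy
C = 1 - h2(p)                                               # capacity
V = p * (1 - p) * (np.log2((1 - p) / p)) ** 2               # channel dispersion
Qinv = norm.isf                                             # inverse Q-function

R = C - np.sqrt(V / n) * Qinv(eps)
print(f"capacity: {C:.4f} bits/use")
print(f"normal approximation at n={n}, eps={eps}: {R:.4f} bits/use")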

Ebrahim MolavianJazi and J. Nicholas Laneman, "A Finite Blocklength Perspective on Gaussian Multi-access Channels", submitted to IEEE Transactions on Information Theory, 2014, available at arXiv:1309.2343.

Abstract: Motivated by the growing application of wireless multi-access networks with stringent delay constraints, we investigate the Gaussian multiple access channel (MAC) in the finite blocklength regime. Building upon information spectrum concepts, we develop several non-asymptotic inner bounds on channel coding rates over the Gaussian MAC with a given finite blocklength, positive average error probability, and maximal power constraints. Employing Central Limit Theorem (CLT) approximations, we also obtain achievable second-order coding rates for the Gaussian MAC based on an explicit expression for its dispersion matrix. We observe that, unlike the pentagon shape of the asymptotic capacity region, the second-order region has a curved shape with no sharp corners. A main emphasis of the paper is to provide a new perspective on the procedure of handling input cost constraints for tight achievability proofs. Contrary to the complicated achievability techniques in the literature, we show that with a proper choice of input distribution, tight bounds can be achieved via the standard random coding argument and a modified typicality decoding. In particular, we prove that codebooks generated randomly according to independent uniform distributions on the respective "power shells" perform far better than both independent and identically distributed (i.i.d.) Gaussian inputs and TDMA with power control. Interestingly, analogous to an error exponent result of Gallager, the resulting achievable region lies roughly halfway between that of the i.i.d. Gaussian inputs and that of a hypothetical "sum-power shell" input. However, dealing with such a non-i.i.d. input requires additional analysis such as a new change of measure technique and application of a Berry-Esseen CLT for functions of random variables.
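
The "power shell" input mentioned above is the uniform distribution on the sphere of squared radius nP, which is easy to sample: draw an i.i.d. Gaussian vector and rescale it to the right norm. A minimal sketch, with arbitrary blocklength and power, follows.

# Drawing a codeword uniformly on the "power shell" {x : ||x||^2 = n*P}:
# normalize an i.i.d. Gaussian vector and scale it to radius sqrt(n*P).
import numpy as np

rng = np.random.default_rng(2)
n, P = 128, 2.0                       # illustrative blocklength and power

g = rng.normal(size=n)
x = np.sqrt(n * P) * g / np.linalg.norm(g)

print("per-symbol power:", np.round(np.dot(x, x) / n, 6))   # exactly P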

 

Date 9/10/2014
Time-Place 4:00 PM @ 13A EE West
Presenter Burak Varan
Title Distributed Interference Compensation for Wireless Networks
Abstract

Burak will present the following paper:

Jianwei Huang, Randall A. Berry, and Michael L. Honig, "Distributed Interference Compensation for Wireless Networks," IEEE Journal on Selected Areas in Communications, vol. 24, no. 5, pp. 1074-1084, May 2006.

Abstract: We consider a distributed power control scheme for wireless ad hoc networks, in which each user announces a price that reflects compensation paid by other users for their interference. We present an asynchronous distributed algorithm for updating power levels and prices. By relating this algorithm to myopic best response updates in a fictitious game, we are able to characterize convergence using supermodular game theory. Extensions of this algorithm to a multichannel network are also presented, in which users can allocate their power across multiple frequency bands.
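
To make the price/power exchange concrete, the following is a toy round-robin version of the idea for logarithmic utilities: each user announces a price equal to the marginal utility loss that interference causes at its receiver, and each power update maximizes the user's own utility minus the interference payments it would owe at the announced prices. The gains, noise level, and update schedule are arbitrary assumptions for illustration, not the paper's general algorithm or its convergence conditions.

# Toy sketch of distributed interference compensation with log utilities:
# u_i = log(1 + p_i*h[i,i] / (n0 + sum_{j != i} p_j*h[j,i])).
# Prices are the marginal utility loss per unit interference at each receiver;
# each best response trades the user's own rate against the payments it owes.
import numpy as np

n0, p_max = 0.1, 2.0
h = np.array([[1.0, 0.2, 0.1],        # h[i, j]: gain from transmitter i
              [0.3, 1.0, 0.2],        # to receiver j (illustrative values)
              [0.1, 0.3, 1.0]])
K = h.shape[0]
p = np.full(K, 0.5)                   # initial powers

for _ in range(200):                  # round-robin (asynchronous-style) updates
    for i in range(K):
        interf = np.array([h[:, k] @ p - h[k, k] * p[k] for k in range(K)])
        # Announced prices: -(d u_k / d I_k) at the current operating point.
        prices = (p * np.diag(h)) / ((n0 + interf) *
                                     (n0 + interf + p * np.diag(h)))
        # Best response of user i: maximize log(1 + p_i*h[i,i]/(n0 + I_i))
        # minus p_i * sum_{j != i} prices[j]*h[i, j]; closed form below.
        paid = sum(prices[j] * h[i, j] for j in range(K) if j != i)
        p[i] = np.clip(1.0 / paid - (n0 + interf[i]) / h[i, i], 0.0, p_max)

sinr = np.diag(h) * p / (n0 + np.array([h[:, k] @ p - h[k, k] * p[k]
                                        for k in range(K)]))
print("powers:", np.round(p, 3), " SINRs:", np.round(sinr, 3))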

 

Date 8/29/2014
Time-Place 12:15 PM @ 13A EE West
Presenter Abdelrahman Ibrahim
Title On Slotted ALOHA with Opportunistic Energy Harvesting
Abstract

Abdelrahman will be talking about his previous work with emphasis on the following papers:

Abdelrahman M. Ibrahim, Tamer ElBatt, Amr El-Keyi, "Coverage Probability Analysis for Wireless Networks Using Repulsive Point Processes," in Proceedings of the 2013 IEEE 24th International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), Sep. 2013, pp. 1002-1007.

Abstract: The recently witnessed evolution of cellular networks from a carefully planned deployment to more irregular, heterogeneous deployments of Macro, Pico, and Femto-BSs motivates new analysis and design approaches. In this paper, we analyze the coverage probability in cellular networks assuming repulsive point processes for the base station deployment. In particular, we characterize, analytically using stochastic geometry, the downlink probability of coverage under a Matern hardcore point process that ensures a minimum distance between the randomly located base stations. Assuming a mobile user connects to the nearest base station and Rayleigh fading, we derive two lower-bound expressions on the downlink probability of coverage that are within 4% of the simulated scenario. To validate our model, we compare the probability of coverage of the Matern hardcore topology against an actual base station deployment obtained from a public database. The comparison shows that the actual base station deployment can be fitted by setting the appropriate Matern point process density.
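
The hardcore deployment model is straightforward to sample: the standard Matern type-II construction thins a Poisson point process by keeping a point only if it carries the smallest random mark within the hardcore distance. The sketch below illustrates this with an arbitrary window, parent density, and hardcore radius; it does not reproduce the paper's coverage analysis.

# Matern type-II hardcore process by dependent thinning of a Poisson point
# process: each point gets a uniform mark and survives only if no other point
# within the hardcore distance d has a smaller mark.
import numpy as np

rng = np.random.default_rng(3)
side, lam, d = 10.0, 1.0, 0.8    # window side, parent density, hardcore distance

n_pts = rng.poisson(lam * side ** 2)
pts = rng.uniform(0.0, side, size=(n_pts, 2))
marks = rng.uniform(size=n_pts)

dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
keep = np.ones(n_pts, dtype=bool)
for i in range(n_pts):
    neighbors = (dist[i] < d) & (np.arange(n_pts) != i)
    if np.any(marks[neighbors] < marks[i]):
        keep[i] = False

print("parent points:", n_pts, " retained base stations:", keep.sum())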

Abdelrahman M. Ibrahim, Ozgur Ercetin, and Tamer ElBatt, "Analysis of Slotted ALOHA with Opportunistic RF Energy Harvesting," 2014.

Abstract: Energy harvesting (EH) technology is a prominent solution for designing energy-efficient autonomous wireless networks. In EH-networks, we utilize ambient energy sources to replenish the batteries of the EH-nodes. In particular, RF signals radiated by ambient transmitters have become a viable new source for EH-networks. However, RF energy harvesting imposes new challenges on the design of wireless networks. In this work, we investigate the performance of an ALOHA random access wireless network consisting of nodes with and without RF energy harvesting capability. We consider a wireless network consisting of two source nodes, where the type I node has an unlimited energy supply and the type II node is powered by an RF energy harvesting circuit. The transmissions of the type I node are recycled by the type II node to replenish its battery. Our contribution is multi-fold. First, we propose a discrete-time stochastic process for modeling the RF energy harvesting process. Second, we generalize the stochastic dominance technique for analyzing RF EH-networks. Third, we characterize the stable throughput region of our RF EH-network under two RF energy modes. Fourth, we investigate the impact of finite-capacity batteries on the stable throughput region.
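
A toy slotted-ALOHA simulation makes the energy coupling visible; the sketch below is a simplified caricature with assumed transmission probabilities, harvesting probability, and battery size, not the paper's exact model or its stability analysis. The type II node can transmit only when its battery is non-empty, and it recharges by harvesting the type I node's transmissions that it overhears.

# Toy slotted ALOHA with RF energy harvesting: node 1 (type I) is backlogged
# with unlimited energy; node 2 (type II) harvests one energy unit (w.p. delta)
# from node 1's transmissions it overhears while idle, and needs one unit per
# transmission. A slot with both nodes transmitting is a collision.
import numpy as np

rng = np.random.default_rng(4)
slots, q1, q2, delta, B = 100_000, 0.4, 0.5, 0.6, 5   # assumed parameters

battery, succ1, succ2 = 0, 0, 0
for _ in range(slots):
    tx1 = rng.random() < q1
    tx2 = battery > 0 and rng.random() < q2
    if tx2:
        battery -= 1
    if tx1 and not tx2:
        succ1 += 1
        if battery < B and rng.random() < delta:
            battery += 1            # node 2 harvests node 1's transmission
    elif tx2 and not tx1:
        succ2 += 1
    # if both transmit, the slot is a collision and neither packet is delivered

print("throughput node 1:", succ1 / slots, " node 2:", succ2 / slots)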

 

Date 8/25/2014
Time-Place 1:00 PM @ 13A EE West
Presenter Mahmoud Ashour
Title On Cognitive Radio with Energy Harvesting
Abstract

Mahmoud will be talking about his previous work with emphasis on the following paper:

Ahmed El Shafie, Mahmoud Ashour, Tamer Khattab, and Amr Mohamed, "On Spectrum Sharing Between Energy Harvesting Cognitive Radio Users and Primary Users," preprint, available at arXiv:1407.7267, Jul. 2014.

Abstract: This paper investigates the maximum secondary throughput for a rechargeable secondary user (SU) sharing the spectrum with a primary user (PU) plugged into a reliable power supply. The SU maintains a finite energy queue and harvests energy from natural resources and primary radio frequency (RF) transmissions. We propose a power allocation policy at the PU and analyze its effect on the throughput of both the PU and the SU. Furthermore, we study the impact of the bursty arrivals at the PU on the energy harvested by the SU from RF transmissions. Moreover, we investigate the impact of the rate of energy harvesting from natural resources on the SU throughput. We assume fading channels and compute exact closed-form expressions for the energy harvested by the SU under fading. Results reveal that the proposed power allocation policy, along with RF energy harvesting at the SU, enhances the throughput of both the primary and secondary links.