Successive Approximation Coding for Distributed Matrix Multiplication

Submitted by admin on Mon, 10/28/2024 - 01:24

Coded distributed computing was recently introduced to mitigate the effect of stragglers on distributed computing systems. This paper combines ideas of approximate and coded computing to further accelerate computation. We propose successive approximation coding (SAC) techniques that realize a tradeoff between accuracy and speed, allowing the distributed computing system to produce approximations that increase in accuracy over time. If a sufficient number of compute nodes finish their tasks, SAC exactly recovers the desired computation.
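The accuracy-over-time behavior can be sketched with a toy quantization scheme: split A into dyadic significance layers, assign one layer per worker, and sum whatever has returned. This is only an illustrative sketch (the `sac_layers` name and layering rule are invented here), not the paper's SAC construction, which additionally achieves exact recovery.

```python
import numpy as np

def sac_layers(A, num_layers=4):
    """Split a matrix with entries in [0, 1) into dyadic significance
    layers so that A ~= layers[0] + ... + layers[-1] (finest last)."""
    layers, residual = [], A.astype(float)
    for k in range(num_layers):
        step = 2.0 ** (-(k + 1))
        layer = step * np.floor(residual / step)   # keep the 2^-(k+1) bit
        layers.append(layer)
        residual = residual - layer                # left for later workers
    return layers

rng = np.random.default_rng(0)
A, B = rng.random((8, 8)), rng.random((8, 8))
exact = A @ B

# Worker k computes layers[k] @ B; the running sum of the results
# that have arrived approximates A @ B with shrinking error.
approx, errors = np.zeros_like(exact), []
for layer in sac_layers(A):
    approx += layer @ B
    errors.append(np.linalg.norm(exact - approx))
```

Each extra finished worker refines the estimate, mirroring the accuracy/speed tradeoff described above.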

Peer-to-Peer Variational Federated Learning Over Arbitrary Graphs

This paper proposes a federated supervised learning framework over a general peer-to-peer network with agents that act in a variational Bayesian fashion. The framework consists of local agents, each of which keeps a local “posterior probability distribution” over the parameters of a global model; the posterior is updated locally over time through two subroutines: 1) variational model training given (a batch of) local labeled data, and 2) asynchronous communication and model aggregation with the 1-hop neighbors.
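One concrete instance of such posterior aggregation is precision-weighted fusion of Gaussian posteriors over a scalar parameter. This is an illustrative rule only; the paper's variational update is more general.

```python
import numpy as np

def fuse_gaussian_posteriors(mus, variances):
    """Multiply Gaussian posterior densities received from 1-hop
    neighbors: the fused precision is the sum of precisions, and the
    fused mean is the precision-weighted average of the means."""
    prec = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / prec.sum()
    fused_mu = fused_var * (prec * np.asarray(mus, dtype=float)).sum()
    return fused_mu, fused_var

# Two neighbors with equally confident but conflicting posteriors
# fuse to the midpoint, with higher confidence than either alone.
mu, var = fuse_gaussian_posteriors([0.0, 2.0], [1.0, 1.0])
```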

Perfectly Secure Message Transmission Against Rational Adversaries

Secure Message Transmission (SMT) is a two-party cryptographic protocol by which a sender can securely and reliably transmit messages to a receiver using multiple channels. An adversary may corrupt a subset of the channels and mount eavesdropping and tampering attacks on them. In this work, we introduce a game-theoretic security model for SMT in which adversaries have preferences over protocol executions. We define rational “timid” adversaries, who prefer to violate the security requirements but prefer that their tampering go undetected.
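The perfect-secrecy half of SMT can be sketched with additive (XOR) sharing across channels; tamper detection, which the rational "timid" adversary model revolves around, needs additional machinery not shown here. Function names are invented for illustration.

```python
import secrets

def smt_encode(message: bytes, n_channels: int = 3):
    """Send one uniformly random share per channel; the last share is
    chosen so that all shares XOR to the message.  Any strict subset
    of channels is uniformly random, so eavesdroppers learn nothing."""
    shares = [secrets.token_bytes(len(message)) for _ in range(n_channels - 1)]
    last = bytearray(message)
    for share in shares:
        for i, byte in enumerate(share):
            last[i] ^= byte
    return shares + [bytes(last)]

def smt_decode(shares):
    """XOR all received shares to recover the message."""
    out = bytearray(len(shares[0]))
    for share in shares:
        for i, byte in enumerate(share):
            out[i] ^= byte
    return bytes(out)
```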

Communication-Efficient Distributed SGD Using Random Access for Over-the-Air Computation

In this paper, we study communication-efficient distributed stochastic gradient descent (SGD) in which users, distributed over a certain area, hold local data sets and communicate through wireless channels. Since the time for one iteration of the proposed approach is independent of the number of users, it is well suited to scalable distributed SGD.
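The scalability claim rests on the multiple-access channel summing simultaneous analog transmissions. A minimal simulation of that idea (the objective, step size, and user count are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def over_the_air_sum(gradients, noise_std=0.0):
    """Toy over-the-air computation: simultaneous analog transmissions
    superpose on the wireless channel, so the server receives the
    gradient sum in one slot regardless of the number of users."""
    total = np.sum(gradients, axis=0)
    return total + rng.normal(0.0, noise_std, total.shape)

# Distributed SGD on f(w) = sum_u ||w - c_u||^2 with 5 users.
centers = rng.normal(size=(5, 2))
w = np.zeros(2)
for _ in range(200):
    local_grads = [2.0 * (w - c) for c in centers]
    w = w - 0.05 * over_the_air_sum(local_grads) / len(centers)
# w approaches the minimizer, the mean of the users' centers.
```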

A Unified Treatment of Partial Stragglers and Sparse Matrices in Coded Matrix Computation

The overall execution time of distributed matrix computations is often dominated by slow worker nodes (stragglers) within the computation clusters. Recently, coding-theoretic techniques have been utilized to mitigate the effect of stragglers, whereby worker nodes are assigned the job of processing encoded submatrices of the original matrices. In many machine learning and optimization problems, however, the relevant matrices are sparse.
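The sparsity issue is easy to demonstrate: a dense random linear encoding, as used by many straggler-mitigation codes, mixes all input blocks and destroys their sparsity. A toy measurement, with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_block(n, density=0.05):
    """Random n x n block with roughly `density` fraction nonzeros."""
    mask = rng.random((n, n)) < density
    return mask * rng.random((n, n))

blocks = [sparse_block(100) for _ in range(10)]
# Dense random encoding: every coded submatrix mixes all 10 blocks,
# so its support is (nearly) the union of their supports.
coded = sum(rng.random() * b for b in blocks)
fill_orig = np.count_nonzero(blocks[0]) / blocks[0].size
fill_coded = np.count_nonzero(coded) / coded.size
# fill_coded is about 1 - (1 - 0.05)**10, i.e. roughly 8x denser,
# so a worker multiplying a coded block does far more nonzero work.
```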

An Optimal Transport Approach to Personalized Federated Learning

Federated learning is a distributed machine learning paradigm, which aims to train a model using the local data of many distributed clients. A key challenge in federated learning is that the data samples across the clients may not be identically distributed. To address this challenge, personalized federated learning with the goal of tailoring the learned model to the data distribution of every individual client has been proposed.

Soft BIBD and Product Gradient Codes

Gradient coding is a coding theoretic framework to provide robustness against slow or unresponsive machines, known as stragglers, in distributed machine learning applications. Recently, Kadhe et al. (2019) proposed a gradient code based on a combinatorial design, called balanced incomplete block design (BIBD), which is shown to outperform many existing gradient codes in worst-case adversarial straggling scenarios. However, parameters for which such BIBD constructions exist are very limited (Colbourn and Dinitz, 2006).
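For context, the gradient coding framework itself can be illustrated with the classic 3-worker, 1-straggler encoding matrix from Tandon et al. (2017); the BIBD codes of Kadhe et al. use different, combinatorially designed matrices but the same decode-from-survivors principle.

```python
import numpy as np

# Row i of B gives worker i's linear combination of the 3 partial
# gradients; this code tolerates any single straggler.
B = np.array([[0.5, 1.0,  0.0],
              [0.0, 1.0, -1.0],
              [0.5, 0.0,  1.0]])

def decode(returned):
    """From {worker_id: coded gradient} for the finished workers,
    find coefficients a with a^T B_S = (1, 1, 1) and recover the
    sum of all partial gradients."""
    ids = sorted(returned)
    a, *_ = np.linalg.lstsq(B[ids].T, np.ones(3), rcond=None)
    return sum(a[j] * returned[i] for j, i in enumerate(ids))

g = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 0.0])]
coded = {i: sum(B[i, j] * g[j] for j in range(3)) for i in range(3)}
# Worker 1 straggles; workers 0 and 2 suffice to recover g0+g1+g2.
full_gradient = decode({0: coded[0], 2: coded[2]})
```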

A General Coded Caching Scheme for Scalar Linear Function Retrieval

Coded caching aims to minimize the network’s peak-time communication load by leveraging the information pre-stored in the local caches at the users. The original setting by Maddah-Ali and Niesen, which considered single file retrieval, has been recently extended to general Scalar Linear Function Retrieval (SLFR) by Wan et al., who proposed a linear scheme that surprisingly achieves the same optimal load under the constraint of uncoded cache placement as in single file retrieval.
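The original Maddah-Ali and Niesen single-file setting, which SLFR generalizes, fits in a few lines for 2 users and 2 files (file contents and names are toy choices):

```python
def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

# Placement: each file is split in half; user 1 caches the first
# half of every file, user 2 the second half.
A, B = b"AAAAaaaa", b"BBBBbbbb"
A1, A2, B1, B2 = A[:4], A[4:], B[:4], B[4:]
cache1 = {"A1": A1, "B1": B1}
cache2 = {"A2": A2, "B2": B2}

# Delivery: user 1 requests A, user 2 requests B.  A single coded
# multicast A2 XOR B1 serves both, halving the uncoded load.
payload = xor(A2, B1)
file_at_user1 = cache1["A1"] + xor(payload, cache1["B1"])   # recovers A
file_at_user2 = xor(payload, cache2["A2"]) + cache2["B2"]   # recovers B
```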

On Rack-Aware Cooperative Regenerating Codes and Epsilon-MSCR Codes

In distributed storage systems, cooperative regenerating codes trade off storage for repair bandwidth in the case of multiple node failures. In rack-aware distributed storage systems, there is no cost associated with transferring symbols within a rack; hence, the repair bandwidth accounts only for cross-rack transfers. Rack-aware regenerating codes for the case of single node failures have been studied and their repair-bandwidth tradeoff characterized.

Secure Private and Adaptive Matrix Multiplication Beyond the Singleton Bound

We consider the problem of designing secure and private codes for distributed matrix-matrix multiplication. A master server owns two private matrices and hires worker nodes to help compute their product. The matrices should remain information-theoretically private from the workers. Some of the workers are malicious and return corrupted results to the master. We design a framework for security against malicious workers in private matrix-matrix multiplication. The main idea is a careful use of Freivalds’ algorithm to detect erroneous matrix multiplications.
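Freivalds' algorithm itself is a standard randomized check; a generic textbook sketch (not the paper's exact protocol) looks like this:

```python
import numpy as np

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify A @ B == C.  Each round costs O(n^2)
    (three matrix-vector products instead of one O(n^3) product);
    a wrong C passes a single round with probability at most 1/2."""
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))  # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False    # a mismatch proves C is corrupted
    return True             # correct with probability >= 1 - 2**-rounds

A = np.random.randint(0, 10, (50, 50))
B = np.random.randint(0, 10, (50, 50))
C_good = A @ B
C_bad = C_good.copy()
C_bad[0, 0] += 1            # a single corrupted entry
```

The master can run this cheap check on each worker's returned product to flag malicious results without redoing the full multiplication.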