Author: Madill, Evan
Dates: 2025-09-15; 2025-09-15; 2025-07-24; 2025-08-26
URI: http://hdl.handle.net/1993/39362

Split Learning (SL) is a promising framework for distributed machine learning in resource-constrained and privacy-sensitive settings. However, vanilla SL deployments suffer from high communication overhead, privacy leakage through intermediate activations, limited support for heterogeneous client capabilities, and poor integration with federated strategies. This thesis addresses these challenges through four primary contributions. First, we address communication efficiency by proposing a Vector Quantized Variational Autoencoder (VQ-VAE) framework that uses Lookup-Free Quantization (LFQ) to compress intermediate features; because LFQ's codebook is implicit, no codebook transmission is needed, and the scheme achieves favorable rate-distortion trade-offs. Second, we introduce PRISM (Privacy Router with Integrated Spatio-Channel Masking), an information router and optimization strategy for enhancing privacy in SL. PRISM adopts a U-shaped network, fine-grained masking, a local bypass stream, and disentangled optimization to reduce information leakage to the server while preserving task utility. Third, we propose Split Learning with Frozen Clients (SL-FC), a strategy designed for heterogeneous environments. SL-FC shows how ultra-resource-constrained "frozen" clients, capable only of forward propagation, can contribute their data to server-side training, demonstrably improving overall model performance. Finally, we evaluate the Muon optimizer for use as an outer-loop (federated) optimizer. Muon orthogonalizes momentum updates, aiming to improve convergence speed and stability compared to standard federated optimizers such as FedAvgM and FedAdam. This thesis provides theoretical analyses, rate-distortion bounds, convergence guarantees, and empirical evaluations to demonstrate the effectiveness of the proposed methods. Evaluation tasks include image classification and segmentation, with model training under non-IID data distributions.
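The orthogonalized-momentum idea behind Muon can be sketched as follows. This is an illustrative assumption, not the thesis's implementation: the function names, the cubic Newton-Schulz coefficients, and the hyperparameter defaults are all placeholders (Muon itself uses tuned polynomial coefficients and per-layer handling).

```python
import numpy as np

def newton_schulz_orthogonalize(m: np.ndarray, steps: int = 5) -> np.ndarray:
    """Approximately replace a momentum matrix with a (semi-)orthogonal one.

    Illustrative sketch of Muon-style orthogonalization using the classic
    cubic Newton-Schulz iteration (Muon uses tuned coefficients instead).
    """
    # Scale so all singular values are <= 1, which ensures convergence.
    x = m / (np.linalg.norm(m) + 1e-7)
    for _ in range(steps):
        # Each iteration pushes every nonzero singular value toward 1.
        x = 1.5 * x - 0.5 * (x @ x.T @ x)
    return x

def muon_style_step(weight, grad, momentum, beta=0.95, lr=0.02):
    """One hypothetical outer-loop update: accumulate momentum, then step
    in the direction of its orthogonalized form. Names/defaults are
    assumptions for illustration only."""
    momentum = beta * momentum + grad
    weight = weight - lr * newton_schulz_orthogonalize(momentum)
    return weight, momentum
```

The intuition is that orthogonalizing the momentum equalizes the update's singular values, preventing a few dominant directions from drowning out the rest, which is the stability argument made for Muon over FedAvgM-style raw momentum.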
Collectively, these contributions improve the practicality and privacy of existing split and federated learning systems.

Language: eng
Keywords: Machine Learning; Federated Learning; Split Learning; Feature Compression; Privacy-Preserving ML; VQ-VAE; Federated Optimization
Title: Privacy-aware distributed machine learning