What’s new in 1.2?

🧠 Federated LLM Fine-Tuning

  • Adapt LLMs to distributed data without sharing raw data, keeping training fully privacy-preserving. Powered by FlowerTune, FedOps 1.2 automates the end-to-end pipeline: task config → distributed LoRA training → model aggregation → global checkpointing (see the sketch after this list).
  • Solves: High GPU and communication overhead, data silos, and manual orchestration.
  • Delivers: Parameter-efficient tuning, minimal setup, and domain-specific LLM adaptation at scale.
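
As a rough illustration of the local side of this pipeline, here is a minimal LoRA adapter setup in the style of Hugging Face PEFT. The base model ("gpt2") and every hyperparameter are placeholder assumptions, not FedOps defaults:

    # Minimal sketch: the kind of parameter-efficient adapter trained on each
    # client. Model choice and hyperparameters are illustrative placeholders.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base model
    lora_cfg = LoraConfig(
        r=8,                        # low-rank adapter dimension
        lora_alpha=16,              # adapter scaling factor
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()  # only the small adapter trains locally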

⚙️ Enhanced Automatic Configuration and FL Server Code Generation

  • FedOps 1.2 instantly auto-generates validated configs plus runnable FL server/client code stubs from your task spec. No more manual tuning: strategy, metrics, hooks, and datasets come pre-wired, making deployment drastically faster (an illustrative server stub follows below).
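
For flavor, here is a hand-written approximation of such a server stub, using the Python API of Flower (the framework FlowerTune belongs to). The strategy and its parameters are illustrative choices, not the generated defaults:

    # Approximate shape of a generated FL server stub (illustrative only).
    import flwr as fl

    strategy = fl.server.strategy.FedAvg(
        fraction_fit=1.0,         # sample every available client each round
        min_available_clients=2,  # block until at least two clients connect
    )

    fl.server.start_server(
        server_address="0.0.0.0:8080",
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=strategy,
    )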

🔬 Advanced Federated Learning Capabilities – Now Fully Turnkey via Simple Config Flags

  • Explainable AI (XAI) Built-In

    Enable federated interpretability with Grad-CAM-based XAI to visualize model decisions on local physiological or image data. Clients generate Grad-CAM heatmaps (e.g., on MNIST), report aggregated explainability metrics (entropy, similarity), and expose per-round hooks for feature importance, drift checks, and exportable reports. Close the interpretability gap in privacy-preserving FL (see the Grad-CAM sketch after this list).

  • Intelligent Client Clustering + Hyperparameter Optimization (HPO)

    Automatically group clients by their data and behavioral signatures, then run cluster-specific HPO to unlock optimal performance even under severe non-IID conditions (see the clustering sketch after this list).

  • FedMAP Aggregation for Multimodal FL (MMFL)

    A new multimodal FL aggregation method that dynamically learns adaptive client weights from interpretable meta-features, engineered for real-world multimodal, non-IID clients (see the aggregation sketch after this list).
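
For the XAI bullet: a minimal Grad-CAM sketch in plain PyTorch, assuming a CNN classifier and a chosen convolutional target_layer. The function is illustrative, not the FedOps hook API:

    # Minimal Grad-CAM: weight a conv layer's activations by the
    # global-average-pooled gradients of the target class score.
    import torch
    import torch.nn.functional as F

    def grad_cam(model, x, target_layer, class_idx=None):
        """Return an [H, W] heatmap in [0, 1] for x of shape [1, C, H, W]."""
        acts, grads = {}, {}
        fh = target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
        bh = target_layer.register_full_backward_hook(
            lambda m, gi, go: grads.update(v=go[0]))
        logits = model(x)
        cls = int(logits.argmax(dim=1)) if class_idx is None else class_idx
        model.zero_grad()
        logits[0, cls].backward()
        fh.remove(); bh.remove()
        w = grads["v"].mean(dim=(2, 3), keepdim=True)           # pool gradients
        cam = F.relu((w * acts["v"].detach()).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)[0, 0]          # upsample to input
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)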
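For the clustering + HPO bullet: a conceptual sketch, not the FedOps interface. The meta-features, the scikit-learn KMeans grouping, and the toy evaluate objective are all invented for illustration:

    # Group clients by meta-features, then tune one knob per cluster.
    import numpy as np
    from sklearn.cluster import KMeans

    # One row per client: [label entropy, sample count, feature variance]
    client_meta = np.array([[0.4, 120.0, 1.1],
                            [0.5, 110.0, 1.0],
                            [2.1, 900.0, 3.2],
                            [2.3, 950.0, 3.0]])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(client_meta)

    def evaluate(cluster_id, lr):
        """Placeholder for a real per-cluster federated validation run."""
        return -abs(lr - 0.01)  # toy objective peaking at lr = 0.01

    # Toy HPO: pick the best learning rate independently per cluster.
    best_lr = {c: max([1e-3, 1e-2, 1e-1], key=lambda lr: evaluate(c, lr))
               for c in set(labels)}
    print(best_lr)  # one tuned learning rate per client cluster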
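For the FedMAP bullet: a deliberately simplified illustration of meta-feature-driven adaptive weighting. FedMAP learns its weighting; the fixed score function below is a toy stand-in:

    # Blend client updates with weights derived from interpretable meta-features.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def aggregate(client_params, meta_features, score_fn):
        """Weighted average of client parameter vectors."""
        weights = softmax(score_fn(meta_features))  # adaptive per-client weights
        return sum(w * p for w, p in zip(weights, client_params))

    # Toy meta-features per client: [sample count, modality coverage, val loss]
    meta = np.array([[100.0, 1.0, 0.8],
                     [500.0, 0.5, 0.4]])
    score = lambda m: np.log(m[:, 0]) - m[:, 2]  # favor data-rich, low-loss clients
    params = [np.zeros(4), np.ones(4)]
    print(aggregate(params, meta, score))        # blended global parameters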

All features ship with full guided tutorials, end-to-end examples, and production-ready use cases.

⌚ Fitbit Wearable Pipeline

  • We’ve integrated a real-world federated IoT health pipeline using Fitbit wearable data. It enables privacy-preserving sleep-quality prediction and personalized health monitoring without centralizing user data.
  • Introduces SleepLSTM, an open-source lightweight model with a 3-layer LSTM and a projection bottleneck for temporal feature learning on multivariate Fitbit signals (a rough sketch follows below).
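
A hedged sketch of the described architecture, reconstructed only from the bullet above; the feature count, hidden sizes, and classification head are assumptions, not the released SleepLSTM:

    # Sketch of a SleepLSTM-style model: 3-layer LSTM + projection bottleneck.
    import torch
    import torch.nn as nn

    class SleepLSTMSketch(nn.Module):
        def __init__(self, n_features=8, hidden=64, bottleneck=16, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=3, batch_first=True)
            self.proj = nn.Linear(hidden, bottleneck)   # projection bottleneck
            self.head = nn.Linear(bottleneck, n_classes)

        def forward(self, x):                    # x: [batch, time, n_features]
            out, _ = self.lstm(x)
            z = torch.relu(self.proj(out[:, -1]))  # last time step's features
            return self.head(z)

    model = SleepLSTMSketch()
    logits = model(torch.randn(4, 30, 8))        # 4 sequences, 30 time steps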

🖥️ Enhanced FL Server Logs & Monitoring

  • Problem Solved: Hard to track system health, performance drift, and errors in production FL runs.
  • What’s New: Deep observability with real-time insights into metrics, logs, and lifecycle state.
    • FL Status Tracking: Monitors creation → execution → termination stages, providing clear status indicators to guide next actions.
    • Live Server Operation Logs: Stream server logs directly to the web dashboard to view errors, training progress, and learning status as they happen (a minimal forwarding sketch follows below).
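
The dashboard handles all of this for you; purely to illustrate the underlying pattern, here is a hypothetical Python logging handler that forwards records to a collector (print stands in for the real transport):

    # Hypothetical log-forwarding pattern; not the FedOps implementation.
    import logging

    class DashboardHandler(logging.Handler):
        """Send each formatted log record to a dashboard collector."""
        def __init__(self, post):   # post: callable shipping text upstream
            super().__init__()
            self.post = post

        def emit(self, record):
            self.post(self.format(record))

    log = logging.getLogger("fl-server")
    log.setLevel(logging.INFO)
    log.addHandler(DashboardHandler(post=print))  # stand-in for HTTP/WebSocket
    log.info("round 3/10 complete, status=EXECUTING")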

👉 No more blind runs. Full visibility, instant debugging.

Installation Procedure & Requirements

Install the latest release from PyPI:

    pip install fedops

This release is a significant advance for researchers, engineers, and teams building privacy-first AI on distributed data, delivering production-grade orchestration, LLM adaptation, multimodal robustness, and full observability with minimal-touch deployment.

Thanks to our Contributors

We extend our sincere gratitude to the dedicated team at Gachon Cognitive Computing Lab who made this release possible.

Explore the new features and documentation on our website (https://ccl.gachon.ac.kr/fedops).

Join the Discussion: Connect with the community on our Slack channel (https://fedopshq.slack.com/join/shared_invite/zt-3h73abys7-ms07FlAVG7EP2108BzevcA#/shared-invite/email).