A modular federated learning framework implementing cognitive defence strategies based on the OODA loop and MAPE-K frameworks, with support for a range of attacks and defences.
- Modular Architecture: Separation of concerns with pluggable attacks and defences
- Multi-Client Orchestration: Automated management of 10+ client processes with resource monitoring
- Multiple Defence Strategies:
- Cognitive Defence: OODA loop and MAPE-K framework implementation
- Krum: Byzantine-robust aggregation selecting updates with minimal distance scores
- Trimmed Mean: Robust aggregation removing outliers from both ends
- Attack Simulation: Label flipping, gradient noise, model replacement, and more
- Explainable AI: Decision logging with reasoning and evidence
- Deterministic Experiments: Reproducible results with proper seeding
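To give a flavour of the attack side, a label-flipping attack can be sketched in a few lines. This is a standalone illustration only; `flip_labels` and its parameters are hypothetical, not this framework's actual API:

```python
import random

def flip_labels(labels, intensity=0.1, num_classes=10, seed=42):
    """Flip a fraction (`intensity`) of labels to a different random class.

    Illustrative sketch; not the repository's attack implementation.
    """
    rng = random.Random(seed)  # seeded for deterministic experiments
    flipped = list(labels)
    n_flip = int(len(flipped) * intensity)
    # Corrupt a random subset of indices.
    for i in rng.sample(range(len(flipped)), n_flip):
        # Pick any class other than the current one, so the label truly flips.
        choices = [c for c in range(num_classes) if c != flipped[i]]
        flipped[i] = rng.choice(choices)
    return flipped
```

Seeding the attack's RNG, as above, is what keeps adversarial experiments reproducible across runs.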
```bash
# Clone and setup
git clone https://github.com/self1am/FL_CognitiveDefence.git
cd FL_CognitiveDefence
make setup

# Run a basic experiment
make run-basic

# Run a custom experiment with your own config
python -m src.orchestration.experiment_runner --config experiments/configs/your_config.yaml
```

OODA loop-based adaptive defence with a reputation system:
- Observes client update patterns
- Orients using historical context
- Decides on weighted aggregation
- Acts with explainable decisions
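The four steps above can be sketched as a reputation-weighted aggregation rule. The function below is purely illustrative: the median-distance anomaly score, the trust update, and all names are assumptions for the sketch, not the repository's implementation:

```python
import numpy as np

def ooda_aggregate(updates, reputations, decay=0.8, anomaly_threshold=0.7):
    """One round of a toy OODA-style defence (illustrative only).

    updates:     dict client_id -> flattened update (np.ndarray)
    reputations: dict client_id -> reputation in [0, 1], mutated in place
    Returns (aggregated update, decision log).
    """
    ids = list(updates)
    vecs = np.stack([updates[c] for c in ids])

    # Observe: distance of each update from the coordinate-wise median.
    median = np.median(vecs, axis=0)
    dists = np.linalg.norm(vecs - median, axis=1)

    # Orient: normalise distances into [0, 1] anomaly scores.
    scores = dists / (dists.max() + 1e-12)

    # Decide: decay reputations toward each client's current trust level.
    log = []
    for c, s in zip(ids, scores):
        trust = 0.0 if s > anomaly_threshold else 1.0 - s
        reputations[c] = decay * reputations[c] + (1 - decay) * trust
        log.append({"client": c, "score": float(s), "reputation": reputations[c]})

    # Act: reputation-weighted average, explainable via the decision log.
    w = np.array([reputations[c] for c in ids])
    w = w / w.sum()
    return (w[:, None] * vecs).sum(axis=0), log
```

Returning the per-client log alongside the aggregate is what makes each round's decision auditable, in the spirit of the framework's explainable-AI goal.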
Byzantine-robust aggregation from the literature (Blanchard et al., NeurIPS 2017):
- Selects client update(s) with smallest distance score
- Robust against up to f Byzantine clients
- Supports both single-Krum and Multi-Krum variants
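For reference, the Krum selection rule can be sketched in a few lines of NumPy. This is a standalone illustration of the published algorithm, not the repository's code:

```python
import numpy as np

def krum(updates, num_byzantine, multi_krum=False, m=None):
    """Krum / Multi-Krum selection (Blanchard et al., NeurIPS 2017).

    updates: list of flattened client updates (np.ndarray); requires n > 2f + 2.
    Returns the selected update (Krum) or the mean of the m best (Multi-Krum).
    """
    n, f = len(updates), num_byzantine
    vecs = np.stack(updates)
    # Pairwise squared Euclidean distances between all client updates.
    diffs = vecs[:, None, :] - vecs[None, :, :]
    sq = (diffs ** 2).sum(axis=-1)
    # Each client's score: sum of distances to its n - f - 2 closest peers.
    k = n - f - 2
    scores = np.array([np.sort(np.delete(sq[i], i))[:k].sum() for i in range(n)])
    if not multi_krum:
        return vecs[int(np.argmin(scores))]
    m = m or (n - f)  # a common Multi-Krum choice for m
    best = np.argsort(scores)[:m]
    return vecs[best].mean(axis=0)
```

Because the score only sums distances to the closest peers, a lone outlier update inflates its own score without dragging honest clients' scores up.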
Byzantine-robust aggregation (Yin et al., ICML 2018):
- Removes β fraction of extreme values per dimension
- Aggregates remaining values
- Parameter-wise robustness
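The coordinate-wise trimming described above is compact enough to sketch directly (again an illustration of the published method, not the repository's code):

```python
import numpy as np

def trimmed_mean(updates, beta=0.2):
    """Coordinate-wise trimmed mean (Yin et al., ICML 2018).

    Sorts each parameter across clients, drops the beta fraction of the
    smallest and largest values, and averages what remains.
    """
    vecs = np.sort(np.stack(updates), axis=0)  # sort per coordinate
    n = vecs.shape[0]
    k = int(n * beta)  # number of values trimmed from each end
    if 2 * k >= n:
        raise ValueError("beta too large: nothing left after trimming")
    return vecs[k:n - k].mean(axis=0)
```

Because trimming happens independently per dimension, an attacker who is extreme in only some coordinates is filtered exactly in those coordinates.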
```
federated-cognitive-defence/
├── src/
│   ├── attacks/          # Attack implementations
│   ├── defences/         # Defence strategies
│   ├── clients/          # Client implementations
│   ├── server/           # Server implementations
│   ├── models/           # Neural network models
│   ├── datasets/         # Dataset handlers
│   ├── orchestration/    # Multi-client orchestration
│   └── utils/            # Utilities and configuration
├── experiments/
│   ├── configs/          # Experiment configurations
│   ├── scripts/          # Helper scripts
│   └── results/          # Experiment results
└── tests/                # Unit and integration tests
```
Experiments are configured using YAML files. Example configurations:
```yaml
experiment:
  experiment_name: "cognitive_defence_test"
  seed: 42
  num_rounds: 10
  server_address: "0.0.0.0:8080"

defence:
  strategy: "cognitive_defence"
  anomaly_threshold: 0.7
  reputation_decay: 0.8

attacks:
  - enabled: true
    attack_type: "label_flip"
    intensity: 0.1
    target_clients: [0, 1, 2]

orchestration:
  num_clients: 10
  batch_size: 3
```

Krum:

```yaml
defence:
  strategy: "krum"
  num_byzantine: 2   # Expected number of malicious clients
  multi_krum: false  # Use standard Krum (true for Multi-Krum)
```

Trimmed Mean:

```yaml
defence:
  strategy: "trimmed_mean"
  beta: 0.2  # Trim 20% from each end
```

- MacBook M1 (8GB): Orchestrator + 2-3 lightweight clients
- Azure VMs (4GB each): Server + 3-4 clients each
- Total: Support for 10+ concurrent clients
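A `defence` section like those in the YAML examples above can be parsed and merged over per-strategy defaults with PyYAML. The `DEFENCE_DEFAULTS` registry below is hypothetical, shown only to illustrate the dispatch pattern:

```python
import yaml

# Hypothetical per-strategy defaults; the real framework presumably maps
# these strategy names to defence classes under src/defences/.
DEFENCE_DEFAULTS = {
    "cognitive_defence": {"anomaly_threshold": 0.7, "reputation_decay": 0.8},
    "krum": {"num_byzantine": 0, "multi_krum": False},
    "trimmed_mean": {"beta": 0.1},
}

def parse_defence(yaml_text):
    """Read a YAML defence section and merge it over that strategy's defaults."""
    section = yaml.safe_load(yaml_text)["defence"]
    strategy = section.pop("strategy")
    params = {**DEFENCE_DEFAULTS[strategy], **section}
    return strategy, params
```

Merging over defaults keeps experiment configs short: a YAML file only needs to state the values it overrides.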
For multi-machine experiments:
```bash
# Setup distributed environment
./scripts/run_distributed.sh

# Monitor progress
tail -f logs/experiment_name.log
```

```bash
# Install development dependencies
make dev-install

# Run tests
make test

# Format code
make format

# Lint code
make lint
```

Results are automatically saved to:

- `experiments/results/`: JSON experiment summaries
- `logs/`: Detailed execution logs
- Individual client logs with training history
- FEMNIST Integration: More realistic FL dataset
- Quantum Neural Networks: PennyLane integration
- Advanced Defences: FreqFed, Median, FoolsGold
- Adaptive Attacks: Learning-based adversarial strategies
- Comparative Analysis: Benchmark defences against various attack scenarios
- Create feature branches for new functionality
- Add tests for new components
- Update documentation
- Submit pull requests
MIT License - see LICENSE file for details.