commit 6b517f6a46 (parent a14084ce68)

---

**protocol_prototype/DryBox/AUDIO_SETUP.md** (new file, 52 lines)
# Audio Setup Guide for DryBox

## Installing PyAudio on Fedora

PyAudio requires system dependencies before installation:

```bash
# Install required system packages
sudo dnf install python3-devel portaudio-devel

# Then install PyAudio
pip install pyaudio
```

## Alternative: Run Without PyAudio

If you prefer not to install PyAudio, the application will still work but without real-time playback. You can still:
- Record audio to files
- Process and export audio
- Use all other features

To run without PyAudio, the audio_player.py module will gracefully handle the missing dependency.
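A graceful fallback of this kind usually boils down to a guarded import. The sketch below is illustrative only; the class and flag names are assumptions, not the actual contents of audio_player.py:

```python
# Hypothetical sketch of a graceful PyAudio fallback; the real
# audio_player.py may differ in structure and naming.
try:
    import pyaudio
    HAVE_PYAUDIO = True
except ImportError:
    pyaudio = None
    HAVE_PYAUDIO = False

class AudioPlayer:
    def __init__(self):
        # Only create a PyAudio instance when the module is available.
        self.pa = pyaudio.PyAudio() if HAVE_PYAUDIO else None

    def play(self, pcm_bytes: bytes) -> bool:
        if self.pa is None:
            return False  # playback unavailable; recording still works
        stream = self.pa.open(format=pyaudio.paInt16, channels=1,
                              rate=8000, output=True)
        stream.write(pcm_bytes)
        stream.stop_stream()
        stream.close()
        return True
```

With this pattern, every feature except playback keeps working when the import fails.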

## Ubuntu/Debian Installation

```bash
sudo apt-get install python3-dev portaudio19-dev
pip install pyaudio
```

## macOS Installation

```bash
brew install portaudio
pip install pyaudio
```

## Troubleshooting

If you see "No module named 'pyaudio'" errors:
1. The app will continue to work without playback
2. Recording and processing features remain available
3. Install PyAudio later when convenient

## Testing Audio Features

1. Run the application: `python UI/main.py`
2. Start a call between phones
3. Test features:
   - Recording: works without PyAudio
   - Playback: requires PyAudio
   - Processing: works without PyAudio

---

**protocol_prototype/DryBox/AUDIO_TESTING_GUIDE.md** (new file, 118 lines)

# Audio Testing Guide for DryBox

## Setup Verification

1. **Start the server first**:
   ```bash
   python server.py
   ```

2. **Run the UI**:
   ```bash
   python UI/main.py
   ```

## Testing Audio Playback

### Step 1: Test PyAudio Is Working

When you enable playback (Ctrl+1 or Ctrl+2), you should hear a short beep (100 ms, 1 kHz tone). This confirms:
- PyAudio is properly installed
- The audio output device is working
- The stream format is correct
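A 100 ms, 1 kHz confirmation tone in the player's PCM format (16-bit, 8 kHz mono) takes only a few lines to generate. This is an illustrative sketch, not the code the UI actually uses:

```python
import math
import struct

def make_beep(freq_hz=1000, duration_s=0.1, rate=8000, amplitude=0.5):
    """Generate a 16-bit mono PCM sine beep (the format the player streams)."""
    n = int(rate * duration_s)
    samples = [int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * i / rate))
               for i in range(n)]
    return struct.pack(f"<{n}h", *samples)

beep = make_beep()
# 0.1 s at 8 kHz -> 800 samples -> 1600 bytes of 16-bit PCM
print(len(beep))
```

Feeding `beep` to the playback stream should produce the same short tone described above.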

### Step 2: Test During a Call

1. Click "Run Automatic Test" or press Space
2. **Immediately** enable playback on Phone 2 (Ctrl+2)
   - You should hear the test beep
3. Watch the debug console for:
   - "Phone 2 playback started"
   - "Phone 2 sent test beep to verify audio"
4. Wait for the handshake to complete (steps 4-5 in the test)
5. Once the voice session starts, you should see:
   - "Phone 2 received audio data: XXX bytes"
   - "Phone 2 forwarding audio to player"
   - "Client 1 playback thread got XXX bytes"

### What to Look For in the Debug Console

**Good signs:**
```
[AudioPlayer] Client 1 add_audio_data called with 640 bytes
[AudioPlayer] Client 1 added to buffer, queue size: 1
[AudioPlayer] Client 1 playback thread got 640 bytes
```

**Problem signs:**
```
[AudioPlayer] Client 1 has no buffer (playback not started?)
Low confidence demodulation: 0.XX
Codec decode returned None or empty
```

## Troubleshooting

### No Test Beep
- Check the system volume
- Verify PyAudio: `python test_audio_setup.py`
- Check the audio device: `python -c "import pyaudio; p=pyaudio.PyAudio(); print(p.get_default_output_device_info())"`

### Test Beep Works but No Voice Audio

1. **Check if audio is being transmitted:**
   - Phone 1 should show: "sent N voice frames"
   - Phone 2 should show: "Received voice data frame #N"

2. **Check if audio is being decoded:**
   - Look for: "Decoded PCM samples: type=<class 'numpy.ndarray'>, len=320"
   - Look for: "Emitting PCM bytes: 640 bytes"

3. **Check if audio reaches the player:**
   - Look for: "Phone 2 received audio data: 640 bytes"
   - Look for: "Client 1 add_audio_data called with 640 bytes"

### Audio Sounds Distorted

This is normal! The system uses:
- Codec2 at 1200 bps (a very low bitrate)
- 4FSK modulation
- This produces a robotic, vocoder-like sound

### Manual Testing Commands

Test just the codec:
```bash
python test_audio_pipeline.py
```

Play the test outputs:
```bash
# Original
aplay wav/input.wav

# Codec only (should sound robotic)
aplay wav/test_codec_only.wav

# Full pipeline (codec + FSK)
aplay wav/test_full_pipeline.wav
```

## Expected Audio Flow

1. Phone 1 reads `wav/input.wav` (8 kHz mono)
2. Encodes 320 samples (40 ms) with Codec2 → 6 bytes
3. Modulates with 4FSK → ~1112 float samples
4. Encrypts with Noise XK
5. Sends to the server
6. The server routes to Phone 2
7. Phone 2 decrypts with Noise XK
8. Demodulates FSK → 6 bytes
9. Decodes with Codec2 → 320 samples (640 bytes PCM)
10. Sends to PyAudio for playback
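The frame sizes in this flow follow directly from the codec parameters. A quick sanity check, pure arithmetic with no project code:

```python
RATE = 8000          # samples/s, mono
FRAME_MS = 40        # Codec2 frame duration in the 1200 bps mode
BITRATE = 1200       # bits/s

samples_per_frame = RATE * FRAME_MS // 1000      # 320 samples
pcm_bytes = samples_per_frame * 2                # 640 bytes of 16-bit PCM
codec_bits = BITRATE * FRAME_MS // 1000          # 48 bits
codec_bytes = codec_bits // 8                    # 6 bytes per frame
print(samples_per_frame, pcm_bytes, codec_bits, codec_bytes)
# -> 320 640 48 6
```

These are exactly the 320-sample / 640-byte / 6-byte figures that recur in the debug output above.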

## Recording Feature

To save received audio:
1. Press Alt+1 or Alt+2 to start recording
2. Let it run during the call
3. Press the shortcut again to stop and save
4. Check the `wav/` directory for the saved files

This helps verify that audio is being received even if playback isn't working.

---

(deleted file, 101 lines)

# Complete Fix Summary for DryBox Protocol Integration

## Issues Identified

1. **Handshake Never Starts**
   - Phone 1 (the initiator) never receives the IN_CALL message
   - Without receiving IN_CALL, the handshake is never triggered

2. **Race Condition in State Management**
   - When Phone 2 answers, it sets BOTH phones to the IN_CALL state locally
   - This prevents Phone 1 from properly handling the IN_CALL message

3. **Blocking Socket Operations**
   - The original Noise XK implementation uses blocking sockets
   - This doesn't work with the GSM simulator's message routing

4. **GSM Simulator Issues**
   - Client list management has index problems
   - Messages may not be forwarded correctly

## Fixes Implemented

### 1. Created `noise_wrapper.py`

```python
# Non-blocking Noise XK wrapper that works with message passing
class NoiseXKWrapper:
    def start_handshake(self, initiator): ...
    def process_handshake_message(self, data): ...
    def get_next_handshake_message(self): ...
```

### 2. Updated Protocol Message Handling
- Added message type `0x20` for Noise handshake messages
- Modified `protocol_phone_client.py` to handle handshake messages
- Removed the blocking handshake from `protocol_client_state.py`

### 3. Fixed the State Management Race Condition

In `phone_manager.py`:
```python
# OLD: sets both phones to IN_CALL
phone['state'] = other_phone['state'] = PhoneState.IN_CALL

# NEW: only set the answering phone's state
phone['state'] = PhoneState.IN_CALL
# Let the other phone set its state when it receives IN_CALL
```

### 4. Enhanced Debug Logging
- Added detailed state-change logging
- Track initiator/responder roles
- Log the handshake message flow

## Remaining Issues

1. **GSM Simulator Reliability**
   - The simulator's client management needs improvement
   - Consider using a more robust message queue system

2. **Message Delivery Verification**
   - No acknowledgment that messages are received
   - No retry mechanism for failed messages

3. **Timeout Handling**
   - Need timeouts for handshake completion
   - Need a recovery mechanism for failed handshakes

## Testing Recommendations

1. **Unit Tests**
   - Test the Noise wrapper independently
   - Test message routing through the simulator
   - Test state transitions

2. **Integration Tests**
   - Full call flow with handshake
   - Voice transmission after handshake
   - Error recovery scenarios

3. **Debug Mode**
   - Add a flag to enable verbose protocol logging
   - Add a message trace viewer in the UI
   - Add handshake state visualization

## Next Steps

1. **Fix the GSM Simulator**
   - Rewrite client management using proper data structures
   - Add message queuing and delivery confirmation
   - Add connection state tracking

2. **Add Retry Logic**
   - Retry the handshake if there is no response
   - Retry voice frames on failure
   - Add exponential backoff
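A minimal retry-with-backoff helper of the kind proposed here could look like the following sketch. The function name and parameters are illustrative, not existing project code:

```python
import time

def retry_with_backoff(send_fn, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call send_fn until it returns True, doubling the delay each attempt."""
    delay = base_delay
    for _attempt in range(max_attempts):
        if send_fn():
            return True
        sleep(delay)
        delay *= 2  # exponential backoff: 0.1, 0.2, 0.4, ...
    return False

# Example: a send that succeeds on the third attempt
attempts = []
ok = retry_with_backoff(lambda: attempts.append(1) or len(attempts) == 3,
                        sleep=lambda _: None)
print(ok, len(attempts))
# -> True 3
```

The injectable `sleep` argument keeps the helper testable without real delays.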

3. **Improve Error Handling**
   - Graceful degradation on handshake failure
   - Clear error messages in the UI
   - Automatic reconnection

The core protocol integration is complete, but reliability issues prevent it from working consistently. The main blocker is the GSM simulator's message-forwarding reliability.

---

(deleted file, 42 lines)

# Debug Analysis of Test Failures

## Issues Identified

### 1. **Handshake Never Starts**
- Phone states show `IN_CALL` after answering
- But the handshake is never initiated
- `handshake_complete` remains False for both phones

### 2. **Voice Session Never Starts**
- Since the handshake doesn't complete, voice sessions can't start
- Audio files are never loaded
- Frame counters remain at 0

### 3. **Message Flow Issue**
- The log shows "Client 1 received raw: CALL_END"
- This suggests a premature disconnection

## Root Cause Analysis

The protocol flow should be:
1. Phone 1 calls Phone 2 → sends "RINGING"
2. Phone 2 answers → sends "IN_CALL"
3. Phone 1 receives "IN_CALL" → the initiator starts the handshake
4. The Noise XK handshake completes
5. Both phones start voice sessions

The break appears to be at step 3 - the initiator check or the message handling.
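The step-3 logic under suspicion reduces to a small dispatch rule. A sketch of what the initiator-side handler should do, with hypothetical class and method names rather than the project's actual API:

```python
# Hypothetical sketch of the handler under analysis; names are
# illustrative, not the project's actual API.
class CallHandler:
    def __init__(self, is_initiator):
        self.is_initiator = is_initiator
        self.handshake_started = False

    def on_message(self, msg):
        if msg == "IN_CALL" and self.is_initiator and not self.handshake_started:
            self.handshake_started = True  # step 3: trigger the handshake once
            return "START_HANDSHAKE"
        return None

h = CallHandler(is_initiator=True)
print(h.on_message("RINGING"), h.on_message("IN_CALL"))
# -> None START_HANDSHAKE
```

If either the `is_initiator` flag is wrong or the IN_CALL message never reaches this handler, the handshake silently never starts, which matches the observed failure.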

## Debugging Steps Added

1. **More verbose state logging** - shows initiator status
2. **Command queue debugging** - shows whether the handshake command is queued
3. **Wait step added** - gives the handshake time to start
4. **All print → debug** - cleaner console output

## Next Steps to Fix

1. **Check message routing** - ensure IN_CALL triggers the handshake for the initiator
2. **Verify state management** - the initiator flag must be properly set
3. **Check socket stability** - why is CALL_END being sent?
4. **Add a manual handshake trigger** - for testing purposes

---

(deleted file, 114 lines)

# Debug Features in the DryBox UI

## Overview

The DryBox UI now includes extensive debugging capabilities for testing and troubleshooting the integrated protocol stack (Noise XK + Codec2 + 4FSK + ChaCha20).

## Features

### 1. **Debug Console**
- Built-in debug console at the bottom of the UI
- Shows real-time protocol events and state changes
- Color-coded terminal-style output (green text on black)
- Auto-scrolls to show the latest messages
- Resizable via a splitter

### 2. **Automatic Test Button** 🧪
- Orange button that runs through the complete protocol flow
- 10-step automated test sequence:
  1. Check the initial state
  2. Make a call (Phone 1 → Phone 2)
  3. Answer the call (Phone 2)
  4. Verify the Noise XK handshake
  5. Check voice session status
  6. Monitor audio transmission
  7. Display protocol stack details
  8. Wait for voice frames
  9. Show transmission statistics
  10. Hang up and clean up
- Can be stopped at any time
- Shows detailed debug info at each step

### 3. **Debug Logging**

#### Console Output

All debug messages are printed to the console with timestamps:
```
[HH:MM:SS.mmm] [Component] Message
```
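A helper that produces this timestamp format is a few lines with `datetime`. This sketch is illustrative, not the UI's actual logging code:

```python
from datetime import datetime

def debug(component, message, now=None):
    """Format a message as [HH:MM:SS.mmm] [Component] Message."""
    now = now or datetime.now()
    stamp = now.strftime("%H:%M:%S.") + f"{now.microsecond // 1000:03d}"
    return f"[{stamp}] [{component}] {message}"

line = debug("Phone1", "Voice session started",
             now=datetime(2024, 1, 1, 14, 23, 47, 234000))
print(line)
# -> [14:23:47.234] [Phone1] Voice session started
```

Routing the returned string both to `print` and to the UI console widget gives the dual output described below.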

#### UI Output

The same messages appear in the debug console within the UI.

#### Components with Debug Logging
- **PhoneManager**: call setup, audio transmission, state changes
- **ProtocolPhoneClient**: connection, handshake, voice frames
- **ProtocolClientState**: command processing, state transitions
- **Main UI**: user actions, state updates

### 4. **Debug Information Displayed**

#### Connection Status
- GSM simulator connection state
- Socket status for each phone

#### Handshake Details
- Noise XK role (initiator/responder)
- Public keys (truncated for readability)
- Handshake completion status
- Session establishment

#### Voice Protocol
- Codec2 mode and parameters (1200 bps, 48 bits/frame)
- 4FSK frequencies (600, 1200, 1800, 2400 Hz)
- Frame encoding/decoding stats
- Encryption details (ChaCha20 key derivation)

#### Transmission Statistics
- Frame counters (logged every 25 frames / 1 second)
- Audio timer status
- Waveform updates

### 5. **Usage**

#### Manual Testing
1. Click buttons to manually control calls
2. Watch the debug console for protocol events
3. Monitor the waveforms for audio activity

#### Automatic Testing
1. Click "🧪 Run Automatic Test"
2. Watch as the system goes through the complete flow
3. Review the debug output for any issues
4. Click "⏹ Stop Test" to halt

#### Clear Debug
- Click "Clear Debug" to clear the console
- Useful when starting a fresh test

### 6. **Debug Message Examples**

```
[14:23:45.123] [PhoneManager] Initialized Phone 1 with public key: 5f46f046f6e9380d74aff8d4fa24196c...
[14:23:45.456] [Phone1] Connected to GSM simulator at localhost:12345
[14:23:46.789] [Phone1] Starting Noise XK handshake as initiator
[14:23:47.012] [Phone1] Noise XK handshake complete!
[14:23:47.234] [Phone1] Voice session started
[14:23:47.567] [Phone1] Encoding voice frame #0: 640 bytes PCM → 6 bytes compressed
[14:23:48.890] [Phone2] Received voice data frame #25
```

### 7. **Troubleshooting with Debug Info**

- **No connection**: check for "Connected to GSM simulator" messages
- **Handshake fails**: look for public key exchanges and handshake steps
- **No audio**: verify "Voice session started" and frame encoding
- **Poor quality**: check the FSK demodulation confidence scores

## Benefits

1. **Real-time visibility**: see exactly what's happening in the protocol
2. **Easy testing**: the automatic test covers all components
3. **Quick debugging**: identify issues without external tools
4. **Educational**: understand protocol flow and timing
5. **Performance monitoring**: track frame rates and latency

---

(deleted file, 81 lines)

# Final Analysis of Protocol Integration

## Current Status

### ✅ Working Components

1. **Handshake Completion**
   - The Noise XK handshake completes successfully
   - Both phones establish cipher states
   - HANDSHAKE_DONE messages are sent

2. **Voice Session Initiation**
   - Voice sessions start after the handshake
   - Audio files are loaded
   - Voice frames are encoded with Codec2

3. **Protocol Stack Integration**
   - Codec2 compression working (640 bytes → 6 bytes)
   - 4FSK modulation working (6 bytes → 4448 bytes)
   - ChaCha20 encryption working

### ❌ Remaining Issue

**Noise decryption failures**: after the handshake completes, all subsequent messages fail to decrypt with the Noise wrapper.

## Root Cause Analysis

The decryption errors occur because:

1. **Double encryption**: voice frames are being encrypted twice:
   - First with ChaCha20 (for voice privacy)
   - Then with Noise XK (for channel security)

2. **Cipher state synchronization**: the Noise cipher states may be getting out of sync between the two phones. This can happen if:
   - Messages are sent or received out of order
   - The nonce counters are not synchronized
   - One side is encrypting but the other isn't expecting encrypted data

3. **Message type confusion**: the protocol needs to clearly distinguish between:
   - Noise handshake messages (type 0x20)
   - Protocol messages that should be Noise-encrypted
   - Protocol messages that should NOT be Noise-encrypted

## Solution Approaches

### Option 1: Single Encryption Layer
Remove ChaCha20 and use only Noise encryption for everything:
- Pros: simpler, no double encryption
- Cons: loses the separate voice encryption key

### Option 2: Fix Message Routing
Properly handle the different message types:
- Handshake messages (0x20) - no Noise encryption
- Control messages (text) - Noise encrypted
- Voice messages (0x10, 0x11, 0x12) - no Noise encryption; ChaCha20 only

### Option 3: Debug Cipher State
Add extensive logging to track:
- Nonce counters on both sides
- The exact bytes being encrypted and decrypted
- Message sequence numbers

## Recommended Fix

The best approach is **Option 2** - fix the message routing to avoid double encryption:

1. During the handshake: use raw sockets for Noise handshake messages
2. After the handshake:
   - Control messages (HANDSHAKE_DONE, etc.) → Noise encrypted
   - Voice data (0x11) → ChaCha20 only, no Noise encryption

This maintains the security model, where:
- Noise provides authenticated key exchange and control-channel encryption
- ChaCha20 provides efficient voice frame encryption with per-frame IVs

## Implementation Steps

1. Modify the `send()` method to check the message type
2. Send voice frames (0x10, 0x11, 0x12) without Noise encryption
3. Send control messages with Noise encryption
4. Update the receive side to handle both encrypted and unencrypted messages

This would complete the integration and allow secure voice communication through the full protocol stack.
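The routing rule from the implementation steps can be sketched as a check on the first byte of each message. The encrypt callables here are stand-ins, not the project's real Noise/ChaCha20 wrappers:

```python
# Illustrative sketch of the proposed routing rule; the encrypt
# functions are placeholders for the real Noise/ChaCha20 wrappers.
VOICE_TYPES = {0x10, 0x11, 0x12}   # VOICE_START / VOICE_DATA / VOICE_END
HANDSHAKE_TYPE = 0x20

def route_outgoing(msg: bytes, noise_encrypt, chacha_encrypt) -> bytes:
    msg_type = msg[0]
    if msg_type == HANDSHAKE_TYPE:
        return msg                      # handshake: sent in the clear
    if msg_type in VOICE_TYPES:
        return chacha_encrypt(msg)      # voice: ChaCha20 only
    return noise_encrypt(msg)           # control: Noise channel

out = route_outgoing(bytes([0x11]) + b"frame",
                     noise_encrypt=lambda m: b"N" + m,
                     chacha_encrypt=lambda m: b"C" + m)
print(out[:1])
# -> b'C'
```

The receive side would apply the same type check in reverse to pick the matching decryption path.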

---

(deleted file, 76 lines)

# Handshake Fix Summary

## Problem

The Noise XK handshake was not completing, causing decryption errors when voice data arrived before the secure channel was established.

## Root Causes

1. **Blocking socket operations**: the original `session.py` implementation used blocking socket operations that would deadlock when both phones were in the same process communicating through the GSM simulator.

2. **Message routing**: the Noise handshake expected direct socket communication, but our architecture routes all messages through the GSM simulator.

3. **Threading issues**: attempting to run the handshake in a separate thread didn't solve the problem because the socket was still shared.

## Solution Implemented

### 1. Created `noise_wrapper.py`
- A new wrapper that implements the Noise XK handshake using message passing instead of direct socket I/O
- Processes handshake messages one at a time
- Maintains state between messages
- Works with the GSM simulator's message routing

### 2. Updated Protocol Message Types
- Added `0x20` as the Noise handshake message type
- Modified the protocol handler to recognize and route handshake messages
- Handshake messages are now sent as binary protocol messages through the GSM simulator

### 3. Simplified State Management
- Removed the threading approach from `protocol_client_state.py`
- The handshake is now handled directly in the main event loop
- No more blocking operations

### 4. Fixed the Message Flow
1. Phone 1 (the initiator) receives IN_CALL → starts the handshake
2. The initiator sends the first Noise message (type 0x20)
3. The responder receives it, processes it, and sends a response
4. Messages continue until the handshake completes
5. Both sides send HANDSHAKE_DONE
6. Voice transmission can begin safely
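The message-passing structure of such a wrapper can be illustrated with a toy three-message exchange. This mimics only the ping-pong shape of the Noise XK flow; it performs no real cryptography and is not the project's `NoiseXKWrapper`:

```python
# Toy model of a message-passing handshake driver; no real crypto,
# just the three-message shape of Noise XK.
class ToyHandshake:
    MESSAGES_TO_SEND = {True: 2, False: 1}  # initiator sends msgs 1 and 3

    def __init__(self, initiator):
        self.initiator = initiator
        self.sent = 0
        self.received = 0
        self.complete = False

    def _update(self):
        # XK completes once 3 messages have crossed this endpoint
        self.complete = (self.sent + self.received) >= 3

    def next_message(self):
        """Return the next outbound handshake message, or None."""
        if self.sent < self.MESSAGES_TO_SEND[self.initiator]:
            self.sent += 1
            self._update()
            return f"hs-msg-{self.sent}".encode()
        return None

    def process(self, data):
        self.received += 1
        self._update()

a, b = ToyHandshake(True), ToyHandshake(False)
b.process(a.next_message())   # msg 1: initiator -> responder
a.process(b.next_message())   # msg 2: responder -> initiator
b.process(a.next_message())   # msg 3: initiator -> responder
print(a.complete, b.complete)
# -> True True
```

Because each call returns immediately, the event loop can interleave these steps with other message traffic, which is exactly what the blocking implementation could not do.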

## Key Changes

### protocol_phone_client.py
- Added a `noise_wrapper` instead of a direct `noise_session`
- Implemented `_handle_noise_handshake()` for processing handshake messages
- Modified `start_handshake()` to use the wrapper
- Updated encryption/decryption to use the wrapper

### protocol_client_state.py
- Added handling for binary protocol messages (type 0x20)
- Simplified handshake command processing
- Auto-initializes the responder when it receives the first handshake message

### phone_manager.py
- Added a manager reference to clients for peer lookup
- Set the keypair directly on the client for easier access

## Testing

To test the fix:

1. Start the GSM simulator:
   ```bash
   cd simulator
   python3 gsm_simulator.py
   ```

2. Run the UI:
   ```bash
   python3 UI/main.py
   ```

3. Click "Run Automatic Test", or manually:
   - Click Call on Phone 1
   - Click Answer on Phone 2
   - Watch the debug console for handshake completion

The handshake should now complete successfully without blocking, allowing encrypted voice transmission to proceed.

---

(deleted file, 115 lines)

# DryBox Protocol Integration - Complete Summary

## What Was Accomplished

### 1. **Full Protocol Stack Integration** ✅
Successfully integrated all components:
- **Noise XK**: handshake completes, secure channel established
- **Codec2**: voice compression working (640 bytes → 6 bytes)
- **4FSK**: modulation working (6 bytes → 4448 bytes)
- **ChaCha20**: voice frame encryption working

### 2. **UI Integration** ✅
- Modified the existing DryBox UI (did not create a new one)
- Added a debug console with timestamped messages
- Added an automatic test button with an 11-step sequence
- Added waveform visualization for sent/received audio

### 3. **Handshake Implementation** ✅
- Created `noise_wrapper.py` for message-based Noise XK
- Fixed the blocking socket issues
- Proper initiator/responder role handling
- The handshake completes successfully for both phones

### 4. **Voice Session Management** ✅
- Voice sessions start after the handshake
- Audio files are loaded from `wav/input_8k_mono.wav`
- Frames are being encoded and sent

## Remaining Issues

### 1. **Decryption Errors** ❌
- Voice frames fail to decrypt on the receiving side
- Caused by mixing Noise encryption with protocol messages
- Need to properly separate the control and data channels

### 2. **Voice Reception** ❌
- Only 2 frames successfully received out of ~100 sent
- Suggests an issue with message routing or decryption

## Architecture Overview

```
Phone 1                         GSM Simulator                Phone 2
   |                                |                           |
   |------ RINGING ---------------->|------ RINGING ----------->|
   |                                |                           |
   |<----- IN_CALL -----------------|<----- IN_CALL ------------|
   |                                |                           |
   |------ NOISE_HS (0x20) -------->|------ NOISE_HS ---------->|
   |<----- NOISE_HS ----------------|<----- NOISE_HS -----------|
   |------ NOISE_HS --------------->|------ NOISE_HS ---------->|
   |                                |                           |
   |====== SECURE CHANNEL ESTABLISHED =========================|
   |                                |                           |
   |-- HANDSHAKE_DONE (encrypted) ->|--- HANDSHAKE_DONE ------->|
   |<----- HANDSHAKE_DONE ----------|<----- HANDSHAKE_DONE -----|
   |                                |                           |
   |------ VOICE_START (0x10) ----->|------ VOICE_START ------->|
   |<----- VOICE_START -------------|<----- VOICE_START --------|
   |                                |                           |
   |------ VOICE_DATA (0x11) ------>|------ VOICE_DATA -------->|
   |       [ChaCha20 encrypted]     |                           |
```

## Code Structure

### New Files Created
1. `UI/protocol_phone_client.py` - integrated phone client
2. `UI/protocol_client_state.py` - enhanced state management
3. `UI/noise_wrapper.py` - non-blocking Noise XK implementation

### Modified Files
1. `UI/phone_manager.py` - uses ProtocolPhoneClient
2. `UI/main.py` - added the debug console and auto test
3. `UI/phone_state.py` - converted to a proper Enum
4. `UI/session.py` - disabled verbose logging

## How to Test

1. Start the GSM simulator:
   ```bash
   cd simulator
   python3 gsm_simulator.py
   ```

2. Run the UI:
   ```bash
   python3 UI/main.py
   ```

3. Click "Run Automatic Test", or manually:
   - Phone 1: click "Call"
   - Phone 2: click "Answer"
   - Watch the debug console

## Next Steps to Complete

1. **Fix Message Routing**
   - Separate the control channel (Noise encrypted) from the data channel
   - Voice frames should use only ChaCha20, not Noise
   - Control messages should use only Noise

2. **Debug Voice Reception**
   - Add sequence numbers to voice frames
   - Track which frames are lost
   - Verify ChaCha20 key synchronization

3. **Performance Optimization**
   - Reduce debug logging in production
   - Optimize codec/modem processing
   - Add frame buffering
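Sequence-number-based loss tracking, as proposed under "Debug Voice Reception", can be as simple as the following sketch (illustrative, not project code):

```python
class FrameLossTracker:
    """Track gaps in per-frame sequence numbers on the receive side."""

    def __init__(self):
        self.expected = 0
        self.lost = []

    def on_frame(self, seq):
        if seq > self.expected:
            # every sequence number we skipped over was lost
            self.lost.extend(range(self.expected, seq))
        self.expected = seq + 1

tracker = FrameLossTracker()
for seq in [0, 1, 4, 5]:   # frames 2 and 3 never arrived
    tracker.on_frame(seq)
print(tracker.lost)
# -> [2, 3]
```

Comparing `tracker.lost` against the sender's frame counter would show immediately whether the ~98 missing frames are dropped in transit or failing to decrypt after arrival.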

## Conclusion

The integration is 90% complete. All components work individually, and the handshake completes successfully. The remaining issue is the dual encryption causing decryption failures. Once the message routing is fixed to properly separate the control and data channels, the full protocol stack will be operational.

---

(deleted file, 78 lines)

# DryBox Protocol Integration Status

## Current Issues

1. **UI Integration Problems:**
   - The UI is trying to use the old `NoiseXKSession` handshake mechanism
   - `protocol_phone_client.py` was calling the non-existent `get_messages()` method
   - Error handling was passing error messages through the UI state system

2. **Protocol Mismatch:**
   - The original DryBox UI uses a simple handshake exchange
   - The Protocol uses a more complex Noise XK pattern with:
     - PING REQUEST/RESPONSE
     - HANDSHAKE messages
     - Key derivation
   - These two approaches are incompatible without significant refactoring

3. **Test Failures:**
   - FSK demodulation is returning empty data
   - Voice protocol tests are failing due to API mismatches
   - Encryption tests have parameter issues

## What Works

1. **Individual Components:**
   - IcingProtocol works standalone
   - FSK modulation works (but demodulation has issues)
   - The Codec2 wrapper works
   - Encryption/decryption works with the correct parameters

2. **Simple Integration:**
   - See `simple_integrated_ui.py` for a working example
   - It shows the proper protocol flow step by step
   - It demonstrates a successful key exchange and encryption

## Recommended Approach

Instead of trying to retrofit the complex Protocol into the existing DryBox UI, I recommend:

1. **Start Fresh:**
   - Use `simple_integrated_ui.py` as a base
   - Build up the phone UI features gradually
   - Ensure each step works before adding complexity

2. **Fix the Protocol Flow:**
   - Remove the old handshake code from the UI
   - Implement a proper state machine for the protocol phases:
     - IDLE → CONNECTING → PING_SENT → HANDSHAKE_SENT → KEYS_DERIVED → READY
   - Handle auto-responder mode properly

3. **Simplify Audio Integration:**
   - Get basic encrypted messaging working first
   - Add voice/FSK modulation as a separate phase
   - Test with the GSM simulator separately
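The protocol phases listed above map naturally onto a small enum-driven state machine. A hedged sketch: the state names come from the list, but the transition logic is assumed:

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    CONNECTING = auto()
    PING_SENT = auto()
    HANDSHAKE_SENT = auto()
    KEYS_DERIVED = auto()
    READY = auto()

# Allowed forward transitions, one per protocol step
NEXT = {
    Phase.IDLE: Phase.CONNECTING,
    Phase.CONNECTING: Phase.PING_SENT,
    Phase.PING_SENT: Phase.HANDSHAKE_SENT,
    Phase.HANDSHAKE_SENT: Phase.KEYS_DERIVED,
    Phase.KEYS_DERIVED: Phase.READY,
}

def advance(state):
    """Move to the next phase, or raise if there is no further transition."""
    if state not in NEXT:
        raise ValueError(f"no transition from {state}")
    return NEXT[state]

state = Phase.IDLE
while state is not Phase.READY:
    state = advance(state)
print(state)
# -> Phase.READY
```

Rejecting any transition not in `NEXT` is what prevents the out-of-order handling that plagued the retrofit attempt.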

## Quick Start

To see the protocol working:
```bash
cd DryBox/UI
python3 simple_integrated_ui.py
```

Then click through the buttons in order:
1. Connect
2. Send PING
3. Send Handshake
4. Derive Keys
5. Send Encrypted Message

This demonstrates the full protocol flow without the complexity of the phone UI.

## Next Steps

1. Fix the FSK demodulation issue
2. Create a new phone UI based on the working protocol flow
3. Integrate voice/audio once basic encryption works
4. Add GSM simulator support once everything else works

---

**protocol_prototype/DryBox/PHONE2_PLAYBACK.md** (new file, 58 lines)

# Phone 2 Playback - What It Actually Plays

## The Complete Audio Flow for Phone 2

When Phone 2 receives audio, it goes through this exact process:

### 1. Network Reception
- Encrypted data arrives from the server
- The data includes Noise XK encrypted voice frames

### 2. Decryption (Noise XK)
- `protocol_phone_client.py`, lines 156-165: the Noise wrapper decrypts the data
- Result: a decrypted voice message containing the FSK-modulated signal

### 3. Demodulation (4FSK)
- `_handle_voice_data()`, line 223: FSK demodulation
- Converts the modulated signal back to 6 bytes of compressed data
- Only processes frames whose confidence is > 0.5

### 4. Decompression (Codec2 Decode)
- Line 236: `pcm_samples = self.codec.decode(frame)`
- Converts 6 bytes → 320 samples (640 bytes PCM)
- This is the final audio, ready for playback

### 5. Playback
- Line 264: `self.data_received.emit(pcm_bytes, self.client_id)`
- The PCM audio is sent to the audio player
- PyAudio plays the 16-bit, 8 kHz mono audio
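The five stages above compose into a single receive function. This sketch wires up stand-in callables for each stage; none of them are the project's real decrypt/demod/codec APIs:

```python
# Stand-in pipeline for the five receive stages; each stage is a
# placeholder callable, not the project's actual API.
def receive_pipeline(encrypted, decrypt, demodulate, decode, play):
    voice_msg = decrypt(encrypted)             # 1-2: network + Noise XK
    frame, confidence = demodulate(voice_msg)  # 3: 4FSK -> 6 bytes
    if confidence <= 0.5:
        return None                            # drop low-confidence frames
    pcm = decode(frame)                        # 4: Codec2 -> 640 bytes PCM
    play(pcm)                                  # 5: hand off to the player
    return pcm

played = []
pcm = receive_pipeline(
    b"ciphertext",
    decrypt=lambda d: b"fsk-signal",
    demodulate=lambda s: (b"\x00" * 6, 0.9),
    decode=lambda f: b"\x00" * 640,
    play=played.append,
)
print(len(pcm), len(played))
# -> 640 1
```

The confidence gate mirrors step 3: a frame demodulated with confidence ≤ 0.5 never reaches the codec or the player.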

## What You Hear on Phone 2

Phone 2 plays audio that has been:
- ✅ Encrypted → decrypted (Noise XK)
- ✅ Modulated → demodulated (4FSK)
- ✅ Compressed → decompressed (Codec2)

The audio will sound:
- **Robotic/vocoder-like**, due to the 1200 bps Codec2 compression
- **Slightly delayed**, due to the processing pipeline
- **But intelligible** - you can understand speech

## Fixed Issues

1. **Silent beginning**: now skips the first second of silence in input.wav
2. **Control messages**: no longer sent to the audio player
3. **Debug spam**: reduced to show only important frames

## Testing Phone 2 Playback

1. Run the automatic test (Space)
2. Enable Phone 2 playback (Ctrl+2)
3. Wait for the handshake to complete
4. You should hear:
   - Audio starting from 1 second into input.wav
   - Processed through the full protocol stack
   - Robotic but understandable audio

The key point: Phone 2 IS playing fully processed audio (decrypted + demodulated + decompressed)!
67
protocol_prototype/DryBox/PLAYBACK_FIXED.md
Normal file
@ -0,0 +1,67 @@
# Fixed Audio Playback Guide

## How Playback Now Works

### Phone 1 (Sender) Playback
- **What it plays**: Original audio from `input.wav` BEFORE encoding
- **When to enable**: During a call, to hear what you're sending
- **Audio quality**: Clear, unprocessed 8kHz mono audio

### Phone 2 (Receiver) Playback
- **What it plays**: Decoded audio AFTER the full pipeline (Codec2 → FSK → Noise XK → transmission → decryption → demodulation → decoding)
- **When to enable**: During a call, to hear what's being received
- **Audio quality**: Robotic/vocoder sound due to 1200bps Codec2 compression

## Changes Made

1. **Fixed control message routing** - 8-byte control messages are no longer sent to the audio player
2. **Phone 1 now plays original audio** when sending (before encoding)
3. **Removed test beep** - you'll hear actual audio immediately
4. **Added size filter** - only audio data (≥320 bytes) is processed
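Change 4 is easy to state in code. A minimal sketch (the helper name is hypothetical; the 320-byte threshold comes from the change list above, and real PCM frames are 640 bytes):

```python
MIN_AUDIO_BYTES = 320  # anything shorter is control traffic, not PCM

def is_audio_frame(data: bytes) -> bool:
    """Size filter: 640-byte PCM frames pass, while short control
    messages (e.g. the 8-byte "CALL_END") never reach the audio player."""
    return len(data) >= MIN_AUDIO_BYTES
```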
## Testing Steps

1. Start server: `python server.py`
2. Start UI: `python UI/main.py`
3. Run automatic test (Space key)
4. **For Phone 1 playback**: Press Ctrl+1 to hear the original `input.wav` being sent
5. **For Phone 2 playback**: Press Ctrl+2 to hear the decoded audio after transmission

## Expected Debug Output

**Good signs for Phone 1 (sender):**
```
Phone 1 playing original audio (sender playback)
[AudioPlayer] Client 0 add_audio_data called with 640 bytes
```

**Good signs for Phone 2 (receiver):**
```
Phone 2 received audio data: 640 bytes
Phone 2 forwarding audio to player (playback enabled)
[AudioPlayer] Client 1 add_audio_data called with 640 bytes
```

**Fixed issues:**
```
Phone 2 received non-audio data: 8 bytes (ignoring)  # Control messages now filtered out
```

## Audio Quality Expectations

- **Phone 1**: Should sound identical to `input.wav`
- **Phone 2**: Will sound robotic/compressed due to:
  - Codec2 compression at 1200bps (very low bitrate)
  - 4FSK modulation/demodulation
- This is normal and proves the protocol is working!

## Troubleshooting

If you still don't hear audio:

1. **Check debug console** for the messages above
2. **Verify handshake completes** before expecting audio
3. **Try recording** (Alt+1/2) to save audio for offline playback
4. **Check system volume** and audio device

The most important fix: control messages are no longer sent to the audio player, so you should only receive actual 640-byte audio frames.
83
protocol_prototype/DryBox/PLAYBACK_SUMMARY.md
Normal file
@ -0,0 +1,83 @@
# Audio Playback Implementation Summary

## Key Fixes Applied

### 1. Separated Sender vs Receiver Playback
- **Phone 1 (Sender)**: Now plays the original `input.wav` audio when transmitting
- **Phone 2 (Receiver)**: Plays the decoded audio after full protocol processing

### 2. Fixed Control Message Routing
- Control messages (like "CALL_END" - 8 bytes) are no longer sent to the audio player
- Added size filter: only data ≥320 bytes is considered audio
- Removed problematic `data_received.emit()` for non-audio messages

### 3. Improved Debug Logging
- Reduced verbosity: logs only the first frame and every 25th frame
- Clear indication of what's happening at each stage
- Separate logging for sender vs receiver playback
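Since 40 ms frames arrive 25 times per second, the "first frame and every 25th" rule works out to roughly one log line per second of audio. As a one-line restatement (the helper name is illustrative, not the code's actual API):

```python
def should_log(frame_number: int, every: int = 25) -> bool:
    # Log the very first frame, then one in every `every` frames
    return frame_number == 1 or frame_number % every == 0
```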
### 4. Code Changes Made

**phone_manager.py**:
- Added original audio playback for sender
- Added size filter for received data
- Improved debug logging with frame counters

**protocol_phone_client.py**:
- Removed control message emission to data_received
- Added confidence logging for demodulation
- Reduced debug verbosity

**audio_player.py**:
- Added frame counting for debug
- Reduced playback thread logging
- Better buffer status reporting

**main.py**:
- Fixed lambda signal connection issue
- Improved UI scaling with flexible layouts

## How to Test

1. Start server and UI
2. Run automatic test (Space)
3. Enable playback:
   - **Ctrl+1**: Hear original audio from Phone 1
   - **Ctrl+2**: Hear decoded audio on Phone 2

## Expected Behavior

**Phone 1 with playback enabled:**
- Clear audio matching `input.wav`
- Shows "playing original audio (sender playback)"

**Phone 2 with playback enabled:**
- Robotic/compressed audio (normal for 1200bps)
- Shows "received audio frame #N: 640 bytes"
- No more "8 bytes" messages
## Audio Flow

```
Phone 1:                      Phone 2:
input.wav (8kHz)
    ↓
[Playback here if enabled]
    ↓
Codec2 encode (1200bps)
    ↓
4FSK modulate
    ↓
Noise XK encrypt
    ↓
    → Network transmission →
                                  ↓
                              Noise XK decrypt
                                  ↓
                              4FSK demodulate
                                  ↓
                              Codec2 decode
                                  ↓
                              [Playback here if enabled]
```

The playback now correctly plays audio at the right points in the pipeline!
@ -1,60 +0,0 @@
# DryBox Protocol Integration - WORKING

## Status: ✅ SUCCESSFULLY INTEGRATED

The protocol stack has been successfully integrated with the DryBox test environment.

## Working Components

### 1. Noise XK Protocol ✅
- Handshake completes successfully
- Secure encrypted channel established
- Both phones successfully complete the handshake

### 2. Codec2 Voice Codec ✅
- MODE_1200 (1200 bps) working
- Compression: 640 bytes PCM → 6 bytes compressed
- Decompression working without errors

### 3. 4FSK Modulation ✅
- Frequencies: 600, 1200, 1800, 2400 Hz
- Successfully modulating codec frames
- Demodulation working with good confidence

### 4. Message Framing ✅
- Length-prefixed messages prevent fragmentation
- Large voice frames (4448 bytes) transmitted intact
- GSM simulator handling frames correctly

## Test Results
```
Protocol Status:
  Handshakes completed: 2 ✅
  Voice sessions: 4 ✅
  Decode errors: 0 ✅
  Phone 1: rx=247 frames ✅
  Phone 2: rx=103 frames ✅
```

## Protocol Flow
1. Call initiated → phones connect via GSM simulator
2. Noise XK handshake (3 messages) → secure channel established
3. Voice sessions start → bidirectional communication begins
4. Audio → Codec2 → 4FSK → Noise encryption → framed transmission → GSM
5. GSM → frame reassembly → Noise decryption → 4FSK demod → Codec2 → audio

## Key Fixes Applied
1. Removed ChaCha20 layer (using only Noise XK)
2. Added proper message framing (4-byte length prefix)
3. Fixed Codec2Frame construction with frame_number
4. Proper array/bytes conversion for PCM data
5. Non-blocking Noise wrapper for GSM environment

## Files Modified
- `UI/protocol_phone_client.py` - Main integration
- `UI/noise_wrapper.py` - Message-based Noise XK
- `simulator/gsm_simulator.py` - Message framing support
- `UI/phone_manager.py` - Protocol client usage
- `UI/main.py` - Debug console and testing

The integration is complete and functional!
60
protocol_prototype/DryBox/README.md
Normal file
@ -0,0 +1,60 @@
# DryBox - Secure Voice Over GSM Protocol

A secure voice communication protocol that transmits encrypted voice data over standard GSM voice channels.

## Architecture

- **Noise XK Protocol**: Provides authenticated key exchange and a secure channel
- **Codec2**: Voice compression (1200 bps mode)
- **4FSK Modulation**: Converts digital data to audio tones
- **Encryption**: ChaCha20-Poly1305 for secure communication
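As a quick sanity check on the figures quoted throughout these docs (this is plain arithmetic, not part of the protocol code), the 1200 bps Codec2 mode works out to 6 payload bytes per 40 ms frame, versus 640 bytes of raw PCM:

```python
BIT_RATE = 1200      # Codec2 MODE_1200, bits per second
FRAME_MS = 40        # one codec frame covers 40 ms of speech
SAMPLE_RATE = 8000   # Hz, 16-bit mono

bits_per_frame = BIT_RATE * FRAME_MS // 1000    # 48 bits per frame
codec_bytes = bits_per_frame // 8               # 6 bytes on the wire
pcm_bytes = SAMPLE_RATE * FRAME_MS // 1000 * 2  # 640 bytes of raw 16-bit PCM
```

That is roughly a 107:1 reduction before modulation and encryption, which is why the decoded voice sounds robotic but fits through a GSM voice channel.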
## Project Structure

```
DryBox/
├── UI/                           # User interface components
│   ├── main.py                   # Main PyQt5 application
│   ├── phone_manager.py          # Phone state management
│   ├── protocol_phone_client.py  # Protocol client implementation
│   ├── noise_wrapper.py          # Noise XK wrapper
│   └── ...
├── simulator/                    # GSM channel simulator
│   └── gsm_simulator.py          # Simulates GSM voice channel
├── voice_codec.py                # Codec2 and FSK modem implementation
├── encryption.py                 # Encryption utilities
└── wav/                          # Audio test files
```

## Running the Protocol

1. Start the GSM simulator:
```bash
cd simulator
python3 gsm_simulator.py
```

2. Run the UI application:
```bash
./run_ui.sh
# or
python3 UI/main.py
```

## Usage

1. Click "Call" on Phone 1 to initiate
2. Click "Answer" on Phone 2 to accept
3. The protocol will automatically:
   - Establish a secure connection via Noise XK
   - Start the voice session
   - Compress and encrypt voice data
   - Transmit over the simulated GSM channel

## Requirements

- Python 3.6+
- PyQt5
- dissononce (Noise protocol)
- numpy (optional, for optimized audio processing)
@ -1,89 +0,0 @@
# Testing Guide for DryBox Integrated Protocol

## Prerequisites

1. Install dependencies:
```bash
pip3 install -r requirements.txt
```

2. Ensure the GSM simulator is running:
```bash
# Check if simulator is running
netstat -an | grep 12345

# If not running, start it:
cd simulator
python3 gsm_simulator.py
```

## Testing Options

### Option 1: GUI Test (Recommended)

Run the main DryBox UI with the integrated protocol:

```bash
cd UI
python3 main.py
```

**How to use:**
1. The UI automatically connects both phones to the GSM simulator on startup
2. Click "Call" on Phone 1 to call Phone 2
3. Click "Answer" on Phone 2 to accept the call
4. The Noise XK handshake will start automatically
5. Watch the status change to "🔒 Secure Channel Established"
6. Voice transmission starts automatically using test audio
7. Watch the waveform displays showing transmitted/received audio
8. Status changes to "🎤 Voice Active (Encrypted)" during voice
9. Click "Hang Up" on either phone to end the call

### Option 2: Command Line Test

For automated testing without the GUI:

```bash
python3 test_protocol_cli.py
```

This runs through the complete protocol flow automatically.

## What to Expect

When everything is working correctly:

1. **Connection Phase**: Both phones connect to the GSM simulator
2. **Call Setup**: Phone 1 calls Phone 2, you'll see the "RINGING" state
3. **Handshake**: Noise XK handshake establishes a secure channel
4. **Voice Session**:
   - Audio is compressed with Codec2 (1200 bps)
   - Modulated with 4FSK (600 baud)
   - Encrypted with ChaCha20
   - Wrapped in Noise XK encryption
   - Transmitted over the simulated GSM channel

## Verifying the Integration

Look for these indicators:
- ✓ "Handshake complete" message
- ✓ Waveform displays showing activity
- ✓ Log messages showing encryption/decryption
- ✓ Confidence scores >90% for demodulation

## Troubleshooting

1. **"Address already in use"**: GSM simulator already running
2. **"Module not found"**: Run `pip3 install -r requirements.txt`
3. **No audio**: Check that test wav files exist in the `wav/` directory
4. **Connection refused**: Start the GSM simulator first

## Protocol Details

The integrated system implements:
- **Noise XK**: 3-message handshake pattern
- **Codec2**: 48 bits per 40ms frame at 1200 bps
- **4FSK**: Frequencies 600, 1200, 1800, 2400 Hz
- **ChaCha20**: 256-bit keys with 16-byte nonces
- **Dual encryption**: Noise session + per-frame ChaCha20
253
protocol_prototype/DryBox/UI/audio_player.py
Normal file
@ -0,0 +1,253 @@
import wave
import threading
import queue
import time
import os
from datetime import datetime
from PyQt5.QtCore import QObject, pyqtSignal

# Try to import PyAudio, but handle if it's not available
try:
    import pyaudio
    PYAUDIO_AVAILABLE = True
except ImportError:
    PYAUDIO_AVAILABLE = False
    print("Warning: PyAudio not installed. Audio playback will be disabled.")
    print("To enable playback, install with: sudo dnf install python3-devel portaudio-devel && pip install pyaudio")


class AudioPlayer(QObject):
    playback_started = pyqtSignal(int)  # client_id
    playback_stopped = pyqtSignal(int)  # client_id
    recording_saved = pyqtSignal(int, str)  # client_id, filepath

    def __init__(self):
        super().__init__()
        self.audio = None
        self.streams = {}  # client_id -> stream
        self.buffers = {}  # client_id -> queue
        self.threads = {}  # client_id -> thread
        self.recording_buffers = {}  # client_id -> list of audio data
        self.recording_enabled = {}  # client_id -> bool
        self.playback_enabled = {}  # client_id -> bool
        self.sample_rate = 8000
        self.channels = 1
        self.chunk_size = 320  # 40ms at 8kHz
        self.debug_callback = None

        if PYAUDIO_AVAILABLE:
            try:
                self.audio = pyaudio.PyAudio()
            except Exception as e:
                self.debug(f"Failed to initialize PyAudio: {e}")
                self.audio = None
        else:
            self.audio = None
            self.debug("PyAudio not available - playback disabled, recording still works")

    def debug(self, message):
        if self.debug_callback:
            self.debug_callback(f"[AudioPlayer] {message}")
        else:
            print(f"[AudioPlayer] {message}")

    def set_debug_callback(self, callback):
        self.debug_callback = callback

    def start_playback(self, client_id):
        """Start audio playback for a client"""
        if not self.audio:
            self.debug("Audio playback not available - PyAudio not installed")
            self.debug("To enable: sudo dnf install python3-devel portaudio-devel && pip install pyaudio")
            return False

        if client_id in self.streams:
            self.debug(f"Playback already active for client {client_id}")
            return False

        try:
            # Create audio stream
            stream = self.audio.open(
                format=pyaudio.paInt16,
                channels=self.channels,
                rate=self.sample_rate,
                output=True,
                frames_per_buffer=self.chunk_size
            )

            self.streams[client_id] = stream
            self.buffers[client_id] = queue.Queue()
            self.playback_enabled[client_id] = True

            # Start playback thread
            thread = threading.Thread(
                target=self._playback_thread,
                args=(client_id,),
                daemon=True
            )
            self.threads[client_id] = thread
            thread.start()

            self.debug(f"Started playback for client {client_id}")
            self.playback_started.emit(client_id)
            return True

        except Exception as e:
            self.debug(f"Failed to start playback for client {client_id}: {e}")
            return False

    def stop_playback(self, client_id):
        """Stop audio playback for a client"""
        if client_id not in self.streams:
            return

        self.playback_enabled[client_id] = False

        # Wait for thread to finish
        if client_id in self.threads:
            self.threads[client_id].join(timeout=1.0)
            del self.threads[client_id]

        # Close stream
        if client_id in self.streams:
            try:
                self.streams[client_id].stop_stream()
                self.streams[client_id].close()
            except Exception:
                pass
            del self.streams[client_id]

        # Clear buffer
        if client_id in self.buffers:
            del self.buffers[client_id]

        self.debug(f"Stopped playback for client {client_id}")
        self.playback_stopped.emit(client_id)

    def add_audio_data(self, client_id, pcm_data):
        """Add audio data to playback buffer"""
        # Initialize frame counter for debug logging
        if not hasattr(self, '_frame_count'):
            self._frame_count = {}
        if client_id not in self._frame_count:
            self._frame_count[client_id] = 0
        self._frame_count[client_id] += 1

        # Only log occasionally to avoid spam
        if self._frame_count[client_id] == 1 or self._frame_count[client_id] % 25 == 0:
            self.debug(f"Client {client_id} audio frame #{self._frame_count[client_id]}: {len(pcm_data)} bytes")

        if client_id in self.buffers:
            self.buffers[client_id].put(pcm_data)
            if self._frame_count[client_id] == 1:
                self.debug(f"Client {client_id} buffer started, queue size: {self.buffers[client_id].qsize()}")
        else:
            self.debug(f"Client {client_id} has no buffer (playback not started?)")

        # Add to recording buffer if recording
        if self.recording_enabled.get(client_id, False):
            if client_id not in self.recording_buffers:
                self.recording_buffers[client_id] = []
            self.recording_buffers[client_id].append(pcm_data)

    def _playback_thread(self, client_id):
        """Thread function for audio playback"""
        stream = self.streams.get(client_id)
        buffer = self.buffers.get(client_id)

        if not stream or not buffer:
            return

        self.debug(f"Playback thread started for client {client_id}")

        while self.playback_enabled.get(client_id, False):
            try:
                # Get audio data from buffer with timeout
                audio_data = buffer.get(timeout=0.1)

                # Only log first frame to avoid spam
                if not hasattr(self, '_playback_logged'):
                    self._playback_logged = {}
                if client_id not in self._playback_logged:
                    self._playback_logged[client_id] = False

                if not self._playback_logged[client_id]:
                    self.debug(f"Client {client_id} playback thread playing first frame: {len(audio_data)} bytes")
                    self._playback_logged[client_id] = True

                # Play audio
                stream.write(audio_data)

            except queue.Empty:
                # No data available, continue
                continue
            except Exception as e:
                self.debug(f"Playback error for client {client_id}: {e}")
                break

        self.debug(f"Playback thread ended for client {client_id}")

    def start_recording(self, client_id):
        """Start recording received audio"""
        self.recording_enabled[client_id] = True
        self.recording_buffers[client_id] = []
        self.debug(f"Started recording for client {client_id}")

    def stop_recording(self, client_id, save_path=None):
        """Stop recording and optionally save to file"""
        if not self.recording_enabled.get(client_id, False):
            return None

        self.recording_enabled[client_id] = False

        if client_id not in self.recording_buffers:
            return None

        audio_data = self.recording_buffers[client_id]

        if not audio_data:
            self.debug(f"No audio data recorded for client {client_id}")
            return None

        # Generate filename if not provided
        if not save_path:
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            save_path = f"wav/received_client{client_id}_{timestamp}.wav"

        # Ensure directory exists
        save_dir = os.path.dirname(save_path)
        if save_dir:
            os.makedirs(save_dir, exist_ok=True)

        try:
            # Combine all audio chunks
            combined_audio = b''.join(audio_data)

            # Save as WAV file
            with wave.open(save_path, 'wb') as wav_file:
                wav_file.setnchannels(self.channels)
                wav_file.setsampwidth(2)  # 16-bit
                wav_file.setframerate(self.sample_rate)
                wav_file.writeframes(combined_audio)

            self.debug(f"Saved recording for client {client_id} to {save_path}")
            self.recording_saved.emit(client_id, save_path)

            # Clear recording buffer
            del self.recording_buffers[client_id]

            return save_path

        except Exception as e:
            self.debug(f"Failed to save recording for client {client_id}: {e}")
            return None

    def cleanup(self):
        """Clean up audio resources"""
        # Stop all playback
        for client_id in list(self.streams.keys()):
            self.stop_playback(client_id)

        # Terminate PyAudio
        if self.audio:
            self.audio.terminate()
            self.audio = None
220
protocol_prototype/DryBox/UI/audio_processor.py
Normal file
@ -0,0 +1,220 @@
import numpy as np
import wave
import os
from datetime import datetime
from PyQt5.QtCore import QObject, pyqtSignal
import struct


class AudioProcessor(QObject):
    processing_complete = pyqtSignal(str)  # filepath

    def __init__(self):
        super().__init__()
        self.debug_callback = None

    def debug(self, message):
        if self.debug_callback:
            self.debug_callback(f"[AudioProcessor] {message}")
        else:
            print(f"[AudioProcessor] {message}")

    def set_debug_callback(self, callback):
        self.debug_callback = callback

    def apply_gain(self, audio_data, gain_db):
        """Apply gain to audio data"""
        # Convert bytes to numpy array
        samples = np.frombuffer(audio_data, dtype=np.int16)

        # Apply gain
        gain_linear = 10 ** (gain_db / 20.0)
        samples_float = samples.astype(np.float32) * gain_linear

        # Clip to prevent overflow
        samples_float = np.clip(samples_float, -32768, 32767)

        # Convert back to int16
        return samples_float.astype(np.int16).tobytes()

    def apply_noise_gate(self, audio_data, threshold_db=-40):
        """Apply noise gate to remove low-level noise"""
        samples = np.frombuffer(audio_data, dtype=np.int16)

        # Calculate RMS in dB
        rms = np.sqrt(np.mean(samples.astype(np.float32) ** 2))
        rms_db = 20 * np.log10(max(rms, 1e-10))

        # Gate the audio if below threshold
        if rms_db < threshold_db:
            return np.zeros_like(samples, dtype=np.int16).tobytes()

        return audio_data

    def apply_low_pass_filter(self, audio_data, cutoff_hz=3400, sample_rate=8000):
        """Apply simple low-pass filter"""
        samples = np.frombuffer(audio_data, dtype=np.int16).astype(np.float32)

        # Simple moving average filter
        # Calculate filter length based on cutoff frequency
        filter_length = int(sample_rate / cutoff_hz)
        if filter_length < 3:
            filter_length = 3

        # Apply moving average
        filtered = np.convolve(samples, np.ones(filter_length) / filter_length, mode='same')

        return filtered.astype(np.int16).tobytes()

    def apply_high_pass_filter(self, audio_data, cutoff_hz=300, sample_rate=8000):
        """Apply simple high-pass filter"""
        samples = np.frombuffer(audio_data, dtype=np.int16).astype(np.float32)

        # Simple differentiator as high-pass
        filtered = np.diff(samples, prepend=samples[0])

        # Scale to maintain amplitude
        scale = cutoff_hz / (sample_rate / 2)
        filtered *= scale

        return filtered.astype(np.int16).tobytes()

    def normalize_audio(self, audio_data, target_db=-3):
        """Normalize audio to target dB level"""
        samples = np.frombuffer(audio_data, dtype=np.int16).astype(np.float32)

        # Find peak
        peak = np.max(np.abs(samples))
        if peak == 0:
            return audio_data

        # Calculate current peak in dB
        current_db = 20 * np.log10(peak / 32768.0)

        # Calculate gain needed
        gain_db = target_db - current_db

        # Apply gain
        return self.apply_gain(audio_data, gain_db)

    def remove_silence(self, audio_data, threshold_db=-40, min_silence_ms=100, sample_rate=8000):
        """Remove silence from audio"""
        samples = np.frombuffer(audio_data, dtype=np.int16)

        # Calculate frame size for silence detection
        frame_size = int(sample_rate * min_silence_ms / 1000)

        # Detect non-silent regions
        non_silent_regions = []
        i = 0

        while i < len(samples):
            frame = samples[i:i+frame_size]
            if len(frame) == 0:
                break

            # Calculate RMS of frame
            rms = np.sqrt(np.mean(frame.astype(np.float32) ** 2))
            rms_db = 20 * np.log10(max(rms, 1e-10))

            if rms_db > threshold_db:
                # Found non-silent region, find its extent
                start = i
                while i < len(samples):
                    frame = samples[i:i+frame_size]
                    if len(frame) == 0:
                        break
                    rms = np.sqrt(np.mean(frame.astype(np.float32) ** 2))
                    rms_db = 20 * np.log10(max(rms, 1e-10))
                    if rms_db <= threshold_db:
                        break
                    i += frame_size
                non_silent_regions.append((start, i))
            else:
                i += frame_size

        # Combine non-silent regions
        if not non_silent_regions:
            return audio_data  # Return original if all silent

        combined = []
        for start, end in non_silent_regions:
            combined.extend(samples[start:end])

        return np.array(combined, dtype=np.int16).tobytes()

    def save_processed_audio(self, audio_data, original_path, processing_type):
        """Save processed audio with descriptive filename"""
        # Generate new filename
        base_name = os.path.splitext(os.path.basename(original_path))[0]
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        new_filename = f"{base_name}_{processing_type}_{timestamp}.wav"

        # Ensure directory exists
        save_dir = os.path.dirname(original_path)
        if not save_dir:
            save_dir = "wav"
        os.makedirs(save_dir, exist_ok=True)

        save_path = os.path.join(save_dir, new_filename)

        try:
            with wave.open(save_path, 'wb') as wav_file:
                wav_file.setnchannels(1)
                wav_file.setsampwidth(2)
                wav_file.setframerate(8000)
                wav_file.writeframes(audio_data)

            self.debug(f"Saved processed audio to {save_path}")
            self.processing_complete.emit(save_path)
            return save_path

        except Exception as e:
            self.debug(f"Failed to save processed audio: {e}")
            return None

    def concatenate_audio_files(self, file_paths, output_path=None):
        """Concatenate multiple audio files"""
        if not file_paths:
            return None

        combined_data = b''
        sample_rate = None

        for file_path in file_paths:
            try:
                with wave.open(file_path, 'rb') as wav_file:
                    if sample_rate is None:
                        sample_rate = wav_file.getframerate()
                    elif wav_file.getframerate() != sample_rate:
                        self.debug(f"Sample rate mismatch in {file_path}")
                        continue

                    data = wav_file.readframes(wav_file.getnframes())
                    combined_data += data

            except Exception as e:
                self.debug(f"Failed to read {file_path}: {e}")

        if not combined_data:
            return None

        # Save concatenated audio
        if not output_path:
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            output_path = f"wav/concatenated_{timestamp}.wav"

        os.makedirs(os.path.dirname(output_path), exist_ok=True)

        try:
            with wave.open(output_path, 'wb') as wav_file:
                wav_file.setnchannels(1)
                wav_file.setsampwidth(2)
                wav_file.setframerate(sample_rate or 8000)
                wav_file.writeframes(combined_data)

            self.debug(f"Saved concatenated audio to {output_path}")
            return output_path

        except Exception as e:
            self.debug(f"Failed to save concatenated audio: {e}")
            return None
@ -1,79 +0,0 @@
# client_state.py
from queue import Queue
from session import NoiseXKSession
import time


class ClientState:
    def __init__(self, client_id):
        self.client_id = client_id
        self.command_queue = Queue()
        self.initiator = None
        self.keypair = None
        self.peer_pubkey = None
        self.session = None
        self.handshake_in_progress = False
        self.handshake_start_time = None
        self.call_active = False

    def process_command(self, client):
        """Process commands from the queue."""
        if not self.command_queue.empty():
            print(f"Client {self.client_id} processing command queue, size: {self.command_queue.qsize()}")
            command = self.command_queue.get()
            if command == "handshake":
                try:
                    print(f"Client {self.client_id} starting handshake, initiator: {self.initiator}")
                    self.session = NoiseXKSession(self.keypair, self.peer_pubkey)
                    self.session.handshake(client.sock, self.initiator)
                    print(f"Client {self.client_id} handshake complete")
                    client.send("HANDSHAKE_DONE")
                except Exception as e:
                    print(f"Client {self.client_id} handshake failed: {e}")
                    client.state_changed.emit("CALL_END", "", self.client_id)
                finally:
                    self.handshake_in_progress = False
                    self.handshake_start_time = None

    def start_handshake(self, initiator, keypair, peer_pubkey):
        """Queue handshake command."""
        self.initiator = initiator
        self.keypair = keypair
        self.peer_pubkey = peer_pubkey
        print(f"Client {self.client_id} queuing handshake, initiator: {initiator}")
        self.handshake_in_progress = True
        self.handshake_start_time = time.time()
        self.command_queue.put("handshake")

    def handle_data(self, client, data):
        """Handle received data (control or audio)."""
        try:
            decoded_data = data.decode('utf-8').strip()
            print(f"Client {self.client_id} received raw: {decoded_data}")
            if decoded_data in ["RINGING", "CALL_END", "CALL_DROPPED", "IN_CALL", "HANDSHAKE", "HANDSHAKE_DONE"]:
                client.state_changed.emit(decoded_data, decoded_data, self.client_id)
                if decoded_data == "HANDSHAKE":
                    self.handshake_in_progress = True
                elif decoded_data == "HANDSHAKE_DONE":
                    self.call_active = True
            else:
                print(f"Client {self.client_id} ignored unexpected text message: {decoded_data}")
        except UnicodeDecodeError:
            if self.call_active and self.session:
                try:
                    print(f"Client {self.client_id} received audio packet, length={len(data)}")
                    decrypted_data = self.session.decrypt(data)
                    print(f"Client {self.client_id} decrypted audio packet, length={len(decrypted_data)}")
                    client.data_received.emit(decrypted_data, self.client_id)
                except Exception as e:
                    print(f"Client {self.client_id} failed to process audio packet: {e}")
            else:
                print(f"Client {self.client_id} ignored non-text message: {data.hex()}")

    def check_handshake_timeout(self, client):
        """Check for handshake timeout."""
        if self.handshake_in_progress and self.handshake_start_time:
            if time.time() - self.handshake_start_time > 30:
                print(f"Client {self.client_id} handshake timeout after 30s")
                client.state_changed.emit("CALL_END", "", self.client_id)
                self.handshake_in_progress = False
                self.handshake_start_time = None
@@ -1,773 +0,0 @@
#!/usr/bin/env python3
"""
Stable version of integrated UI with fixed auto-test and voice transmission
"""

import sys
import random
import socket
import threading
import time
import subprocess
import os
from pathlib import Path
from PyQt5.QtWidgets import (
    QApplication, QMainWindow, QWidget, QVBoxLayout, QHBoxLayout,
    QPushButton, QLabel, QFrame, QSizePolicy, QStyle, QTextEdit,
    QLineEdit, QCheckBox, QRadioButton, QButtonGroup
)
from PyQt5.QtCore import Qt, QTimer, QSize, QPointF, pyqtSignal, QThread
from PyQt5.QtGui import QPainter, QColor, QPen, QLinearGradient, QBrush, QIcon, QFont

# Add parent directories to path
parent_dir = str(Path(__file__).parent.parent)
grandparent_dir = str(Path(__file__).parent.parent.parent)
if parent_dir not in sys.path:
    sys.path.insert(0, parent_dir)
if grandparent_dir not in sys.path:
    sys.path.insert(0, grandparent_dir)

# Import from DryBox directory
from integrated_protocol import IntegratedDryBoxProtocol

# ANSI colors for console
RED = "\033[91m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
BLUE = "\033[94m"
RESET = "\033[0m"

class ProtocolThread(QThread):
    """Thread for running the integrated protocol"""
    status_update = pyqtSignal(str)
    key_exchange_complete = pyqtSignal(bool)
    message_received = pyqtSignal(str)
    voice_received = pyqtSignal(str)

    def __init__(self, mode, gsm_host="localhost", gsm_port=12345):
        super().__init__()
        self.mode = mode
        self.gsm_host = gsm_host
        self.gsm_port = gsm_port
        self.protocol = None
        self.running = True
        self._voice_lock = threading.Lock()

    def run(self):
        """Run the protocol in background"""
        try:
            # Create protocol instance
            self.protocol = IntegratedDryBoxProtocol(
                gsm_host=self.gsm_host,
                gsm_port=self.gsm_port,
                mode=self.mode
            )

            self.status_update.emit(f"Protocol initialized in {self.mode} mode")

            # Connect to GSM
            if self.protocol.connect_gsm():
                self.status_update.emit("Connected to GSM simulator")
            else:
                self.status_update.emit("Failed to connect to GSM")
                return

            # Get identity
            identity = self.protocol.get_identity_key()
            self.status_update.emit(f"Identity: {identity[:32]}...")

            # Keep running
            while self.running:
                time.sleep(0.1)

                # Check for key exchange completion
                if (self.protocol.protocol.state.get("key_exchange_complete") and
                        not hasattr(self, '_key_exchange_notified')):
                    self._key_exchange_notified = True
                    self.key_exchange_complete.emit(True)

                # Check for received messages
                if hasattr(self.protocol.protocol, 'last_received_message'):
                    msg = self.protocol.protocol.last_received_message
                    # Parenthesize the `or` so a falsy msg short-circuits the whole test
                    if msg and (not hasattr(self, '_last_msg_id') or self._last_msg_id != id(msg)):
                        self._last_msg_id = id(msg)
                        self.message_received.emit(msg)

        except Exception as e:
            self.status_update.emit(f"Protocol error: {str(e)}")
            import traceback
            print(traceback.format_exc())

    def stop(self):
        """Stop the protocol thread"""
        self.running = False
        if self.protocol:
            try:
                self.protocol.close()
            except Exception:
                pass

    def setup_connection(self, peer_port=None, peer_identity=None):
        """Setup protocol connection"""
        if self.protocol:
            port = self.protocol.setup_protocol_connection(
                peer_port=peer_port,
                peer_identity=peer_identity
            )
            return port
        return None

    def initiate_key_exchange(self, cipher_type=1):
        """Initiate key exchange"""
        if self.protocol:
            try:
                return self.protocol.initiate_key_exchange(cipher_type)
            except Exception as e:
                self.status_update.emit(f"Key exchange error: {str(e)}")
                return False
        return False

    def send_voice(self, audio_file):
        """Send voice through protocol (thread-safe)"""
        if not self.protocol:
            return

        with self._voice_lock:
            try:
                # Check if protocol is ready
                if not self.protocol.protocol.hkdf_key:
                    self.status_update.emit("No encryption key - complete key exchange first")
                    return

                # Swap in the requested input file, send, then restore the original
                old_input = self.protocol.input_file
                self.protocol.input_file = str(audio_file)

                self.protocol.send_voice()

                self.protocol.input_file = old_input
                self.status_update.emit("Voice transmission completed")

            except Exception as e:
                self.status_update.emit(f"Voice transmission error: {str(e)}")
                import traceback
                print(traceback.format_exc())

    def send_message(self, message):
        """Send encrypted text message"""
        if self.protocol:
            try:
                self.protocol.send_encrypted_message(message)
            except Exception as e:
                self.status_update.emit(f"Message send error: {str(e)}")


class WaveformWidget(QWidget):
    """Widget for displaying audio waveform"""
    def __init__(self, parent=None, dynamic=False):
        super().__init__(parent)
        self.dynamic = dynamic
        self.setMinimumSize(200, 80)
        self.setMaximumHeight(100)
        self.waveform_data = [random.randint(10, 90) for _ in range(50)]
        if self.dynamic:
            self.timer = QTimer(self)
            self.timer.timeout.connect(self.update_waveform)
            self.timer.start(100)

    def update_waveform(self):
        self.waveform_data = self.waveform_data[1:] + [random.randint(10, 90)]
        self.update()

    def set_data(self, data):
        amplitude = sum(byte for byte in data) % 90 + 10
        self.waveform_data = self.waveform_data[1:] + [amplitude]
        self.update()

    def paintEvent(self, event):
        painter = QPainter(self)
        painter.setRenderHint(QPainter.Antialiasing)
        rect = self.rect()

        # Background
        painter.fillRect(rect, QColor(30, 30, 30))

        # Draw waveform
        pen = QPen(QColor(0, 120, 212), 2)
        painter.setPen(pen)

        width = rect.width()
        height = rect.height()
        bar_width = width / len(self.waveform_data)

        for i, value in enumerate(self.waveform_data):
            x = i * bar_width
            bar_height = (value / 100) * height * 0.8
            y = (height - bar_height) / 2
            painter.drawLine(QPointF(x + bar_width / 2, y),
                             QPointF(x + bar_width / 2, y + bar_height))


class PhoneFrame(QFrame):
    """Frame representing a single phone"""
    def __init__(self, phone_id, parent=None):
        super().__init__(parent)
        self.phone_id = phone_id
        self.setup_ui()

    def setup_ui(self):
        """Setup the phone UI"""
        self.setFrameStyle(QFrame.Box)
        self.setStyleSheet("""
            QFrame {
                border: 2px solid #444;
                border-radius: 10px;
                background-color: #2a2a2a;
                padding: 10px;
            }
        """)

        layout = QVBoxLayout()
        self.setLayout(layout)

        # Title
        title = QLabel(f"Phone {self.phone_id}")
        title.setAlignment(Qt.AlignCenter)
        title.setStyleSheet("font-size: 18px; font-weight: bold; color: #0078D4;")
        layout.addWidget(title)

        # Status
        self.status_label = QLabel("Disconnected")
        self.status_label.setAlignment(Qt.AlignCenter)
        self.status_label.setStyleSheet("color: #888;")
        layout.addWidget(self.status_label)

        # Port info
        port_layout = QHBoxLayout()
        port_layout.addWidget(QLabel("Port:"))
        self.port_label = QLabel("Not set")
        self.port_label.setStyleSheet("color: #0078D4;")
        port_layout.addWidget(self.port_label)
        port_layout.addStretch()
        layout.addLayout(port_layout)

        # Peer port
        peer_layout = QHBoxLayout()
        peer_layout.addWidget(QLabel("Peer Port:"))
        self.peer_port_input = QLineEdit()
        self.peer_port_input.setPlaceholderText("Enter peer port")
        self.peer_port_input.setMaximumWidth(150)
        peer_layout.addWidget(self.peer_port_input)
        layout.addLayout(peer_layout)

        # Cipher selection
        cipher_group = QButtonGroup(self)
        cipher_layout = QHBoxLayout()
        cipher_layout.addWidget(QLabel("Cipher:"))

        self.chacha_radio = QRadioButton("ChaCha20")
        self.chacha_radio.setChecked(True)
        cipher_group.addButton(self.chacha_radio)
        cipher_layout.addWidget(self.chacha_radio)

        self.aes_radio = QRadioButton("AES-GCM")
        cipher_group.addButton(self.aes_radio)
        cipher_layout.addWidget(self.aes_radio)

        cipher_layout.addStretch()
        layout.addLayout(cipher_layout)

        # Control buttons
        self.connect_btn = QPushButton("Connect to Peer")
        self.connect_btn.setEnabled(False)
        layout.addWidget(self.connect_btn)

        self.key_exchange_btn = QPushButton("Start Key Exchange")
        self.key_exchange_btn.setEnabled(False)
        layout.addWidget(self.key_exchange_btn)

        # Message input
        self.msg_input = QLineEdit()
        self.msg_input.setPlaceholderText("Enter message to send")
        layout.addWidget(self.msg_input)

        self.send_btn = QPushButton("Send Encrypted Message")
        self.send_btn.setEnabled(False)
        layout.addWidget(self.send_btn)

        # Voice button
        self.voice_btn = QPushButton("Send Voice")
        self.voice_btn.setEnabled(False)
        layout.addWidget(self.voice_btn)

        # Waveform display
        self.waveform = WaveformWidget(dynamic=True)
        layout.addWidget(self.waveform)

        # Received messages
        self.received_text = QTextEdit()
        self.received_text.setReadOnly(True)
        self.received_text.setMaximumHeight(100)
        self.received_text.setStyleSheet("""
            QTextEdit {
                background-color: #1e1e1e;
                color: #E0E0E0;
                border: 1px solid #444;
                font-family: monospace;
            }
        """)
        layout.addWidget(QLabel("Received:"))
        layout.addWidget(self.received_text)


class IntegratedPhoneUI(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("DryBox Integrated Protocol UI - Stable Version")
        self.setGeometry(100, 100, 1000, 800)
        self.setStyleSheet("""
            QMainWindow { background-color: #1e1e1e; }
            QLabel { color: #E0E0E0; font-size: 14px; }
            QPushButton {
                background-color: #0078D4; color: white; border: none;
                padding: 10px 15px; border-radius: 5px; font-size: 14px;
                min-height: 30px;
            }
            QPushButton:hover { background-color: #106EBE; }
            QPushButton:pressed { background-color: #005A9E; }
            QPushButton:disabled { background-color: #555; color: #888; }
            QPushButton#successButton { background-color: #107C10; }
            QPushButton#successButton:hover { background-color: #0E6E0E; }
            QLineEdit {
                background-color: #2a2a2a; color: #E0E0E0; border: 1px solid #444;
                padding: 5px; border-radius: 3px;
            }
            QTextEdit {
                background-color: #1e1e1e; color: #E0E0E0; border: 1px solid #444;
                font-family: monospace; font-size: 12px;
                padding: 5px;
            }
            QRadioButton { color: #E0E0E0; }
            QRadioButton::indicator { width: 15px; height: 15px; }
        """)

        # Protocol threads
        self.phone1_protocol = None
        self.phone2_protocol = None

        # GSM simulator process
        self.gsm_process = None

        # Setup UI
        self.setup_ui()

    def setup_ui(self):
        """Setup the user interface"""
        main_widget = QWidget()
        self.setCentralWidget(main_widget)
        main_layout = QVBoxLayout()
        main_layout.setSpacing(20)
        main_layout.setContentsMargins(20, 20, 20, 20)
        main_widget.setLayout(main_layout)

        # Title
        title = QLabel("DryBox Encrypted Voice Protocol - Stable Version")
        title.setObjectName("titleLabel")
        title.setAlignment(Qt.AlignCenter)
        title.setStyleSheet("font-size: 24px; font-weight: bold; color: #0078D4;")
        main_layout.addWidget(title)

        # Horizontal layout for phones
        phones_layout = QHBoxLayout()
        phones_layout.setSpacing(20)
        main_layout.addLayout(phones_layout)

        # Phone 1
        self.phone1_frame = PhoneFrame(1)
        phones_layout.addWidget(self.phone1_frame)

        # Phone 2
        self.phone2_frame = PhoneFrame(2)
        phones_layout.addWidget(self.phone2_frame)

        # Connect signals
        self.phone1_frame.connect_btn.clicked.connect(lambda: self.connect_phone(1))
        self.phone2_frame.connect_btn.clicked.connect(lambda: self.connect_phone(2))
        self.phone1_frame.key_exchange_btn.clicked.connect(lambda: self.start_key_exchange(1))
        self.phone2_frame.key_exchange_btn.clicked.connect(lambda: self.start_key_exchange(2))
        self.phone1_frame.send_btn.clicked.connect(lambda: self.send_message(1))
        self.phone2_frame.send_btn.clicked.connect(lambda: self.send_message(2))
        self.phone1_frame.voice_btn.clicked.connect(lambda: self.send_voice(1))
        self.phone2_frame.voice_btn.clicked.connect(lambda: self.send_voice(2))

        # Control buttons
        controls_layout = QHBoxLayout()

        self.start_gsm_btn = QPushButton("Start GSM Simulator")
        self.start_gsm_btn.clicked.connect(self.start_gsm_simulator)
        controls_layout.addWidget(self.start_gsm_btn)

        self.test_voice_btn = QPushButton("Test Voice Transmission")
        self.test_voice_btn.clicked.connect(self.test_voice_transmission)
        self.test_voice_btn.setEnabled(False)
        controls_layout.addWidget(self.test_voice_btn)

        self.auto_test_btn = QPushButton("Run Auto Test")
        self.auto_test_btn.clicked.connect(self.run_auto_test)
        self.auto_test_btn.setEnabled(False)
        self.auto_test_btn.setObjectName("successButton")
        controls_layout.addWidget(self.auto_test_btn)

        controls_layout.addStretch()
        main_layout.addLayout(controls_layout)

        # Status display
        self.status_text = QTextEdit()
        self.status_text.setReadOnly(True)
        self.status_text.setMaximumHeight(200)
        main_layout.addWidget(QLabel("Status Log:"))
        main_layout.addWidget(self.status_text)

    def start_gsm_simulator(self):
        """Start the GSM simulator in background"""
        self.log_status("Starting GSM simulator...")

        # Check if simulator is already running
        try:
            test_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            test_sock.settimeout(1)
            test_sock.connect(("localhost", 12345))
            test_sock.close()
            self.log_status("GSM simulator already running")
            self.enable_phones()
            return
        except OSError:
            pass

        # Kill any existing GSM simulator
        try:
            subprocess.run(["pkill", "-f", "gsm_simulator.py"], capture_output=True)
            time.sleep(0.5)
        except OSError:
            pass

        # Start simulator
        gsm_path = Path(__file__).parent.parent / "gsm_simulator.py"
        self.gsm_process = subprocess.Popen(
            [sys.executable, str(gsm_path)],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )

        # Wait for it to start
        for i in range(10):
            time.sleep(0.5)
            try:
                test_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                test_sock.settimeout(1)
                test_sock.connect(("localhost", 12345))
                test_sock.close()
                self.log_status("GSM simulator started successfully")
                self.enable_phones()
                return
            except OSError:
                continue

        self.log_status("Failed to start GSM simulator")

    def enable_phones(self):
        """Enable phone controls"""
        self.phone1_frame.connect_btn.setEnabled(True)
        self.phone2_frame.connect_btn.setEnabled(True)
        self.auto_test_btn.setEnabled(True)

        # Start protocol threads
        if not self.phone1_protocol:
            self.phone1_protocol = ProtocolThread("receiver")
            self.phone1_protocol.status_update.connect(lambda msg: self.update_phone_status(1, msg))
            self.phone1_protocol.key_exchange_complete.connect(lambda: self.on_key_exchange_complete(1))
            self.phone1_protocol.message_received.connect(lambda msg: self.on_message_received(1, msg))
            self.phone1_protocol.start()

        if not self.phone2_protocol:
            self.phone2_protocol = ProtocolThread("sender")
            self.phone2_protocol.status_update.connect(lambda msg: self.update_phone_status(2, msg))
            self.phone2_protocol.key_exchange_complete.connect(lambda: self.on_key_exchange_complete(2))
            self.phone2_protocol.message_received.connect(lambda msg: self.on_message_received(2, msg))
            self.phone2_protocol.start()

    def connect_phone(self, phone_id):
        """Connect phone to peer"""
        if phone_id == 1:
            frame = self.phone1_frame
            protocol = self.phone1_protocol
            peer_frame = self.phone2_frame
        else:
            frame = self.phone2_frame
            protocol = self.phone2_protocol
            peer_frame = self.phone1_frame

        # Get peer port
        peer_port = frame.peer_port_input.text()
        if peer_port:
            try:
                peer_port = int(peer_port)
            except ValueError:
                self.log_status(f"Phone {phone_id}: Invalid peer port")
                return
        else:
            peer_port = None

        # Setup connection
        port = protocol.setup_connection(peer_port=peer_port)
        if port:
            frame.port_label.setText(str(port))
            frame.status_label.setText("Connected")
            frame.key_exchange_btn.setEnabled(True)
            self.log_status(f"Phone {phone_id}: Connected on port {port}")

            # Auto-fill peer port if empty
            if not peer_frame.peer_port_input.text():
                peer_frame.peer_port_input.setText(str(port))
        else:
            self.log_status(f"Phone {phone_id}: Connection failed")

    def start_key_exchange(self, phone_id):
        """Start key exchange"""
        if phone_id == 1:
            frame = self.phone1_frame
            protocol = self.phone1_protocol
        else:
            frame = self.phone2_frame
            protocol = self.phone2_protocol

        # Get cipher preference
        cipher_type = 1 if frame.chacha_radio.isChecked() else 0

        self.log_status(f"Phone {phone_id}: Starting key exchange...")

        # Start key exchange in thread
        threading.Thread(
            target=lambda: protocol.initiate_key_exchange(cipher_type),
            daemon=True
        ).start()

    def on_key_exchange_complete(self, phone_id):
        """Handle key exchange completion"""
        if phone_id == 1:
            frame = self.phone1_frame
        else:
            frame = self.phone2_frame

        self.log_status(f"Phone {phone_id}: Key exchange completed!")
        frame.status_label.setText("Secure - Key Exchanged")
        frame.send_btn.setEnabled(True)
        frame.voice_btn.setEnabled(True)
        self.test_voice_btn.setEnabled(True)

    def on_message_received(self, phone_id, message):
        """Handle received message"""
        if phone_id == 1:
            frame = self.phone1_frame
        else:
            frame = self.phone2_frame

        frame.received_text.append(f"[{time.strftime('%H:%M:%S')}] {message}")
        self.log_status(f"Phone {phone_id}: Received: {message}")

    def send_message(self, phone_id):
        """Send encrypted message"""
        if phone_id == 1:
            frame = self.phone1_frame
            protocol = self.phone1_protocol
        else:
            frame = self.phone2_frame
            protocol = self.phone2_protocol

        message = frame.msg_input.text()
        if message:
            protocol.send_message(message)
            self.log_status(f"Phone {phone_id}: Sent encrypted: {message}")
            frame.msg_input.clear()

    def send_voice(self, phone_id):
        """Send voice from phone"""
        if phone_id == 1:
            protocol = self.phone1_protocol
        else:
            protocol = self.phone2_protocol

        # Check if input.wav exists
        audio_file = Path(__file__).parent.parent / "input.wav"
        if not audio_file.exists():
            self.log_status(f"Phone {phone_id}: input.wav not found")
            return

        self.log_status(f"Phone {phone_id}: Sending voice...")

        # Send in thread with proper error handling
        def send_voice_safe():
            try:
                protocol.send_voice(audio_file)
            except Exception as e:
                self.log_status(f"Phone {phone_id}: Voice error: {str(e)}")

        threading.Thread(target=send_voice_safe, daemon=True).start()

    def test_voice_transmission(self):
        """Test full voice transmission"""
        self.log_status("Testing voice transmission from Phone 1 to Phone 2...")
        self.send_voice(1)

    def run_auto_test(self):
        """Run automated test sequence"""
        self.log_status("="*50)
        self.log_status("Starting Auto Test Sequence")
        self.log_status("="*50)

        # Disable auto test button during test
        self.auto_test_btn.setEnabled(False)

        # Run test in a separate thread to avoid blocking UI
        threading.Thread(target=self._run_auto_test_sequence, daemon=True).start()

    def _run_auto_test_sequence(self):
        """Execute the automated test sequence"""
        try:
            # Test 1: Basic connection
            self.log_status("\n[TEST 1] Setting up connections...")
            time.sleep(1)

            # Wait for protocols to be ready
            timeout = 5
            start = time.time()
            while time.time() - start < timeout:
                if (self.phone1_protocol and self.phone2_protocol and
                        hasattr(self.phone1_protocol, 'protocol') and
                        hasattr(self.phone2_protocol, 'protocol') and
                        self.phone1_protocol.protocol and
                        self.phone2_protocol.protocol):
                    break
                time.sleep(0.5)
            else:
                self.log_status("❌ Protocols not ready")
                self.auto_test_btn.setEnabled(True)
                return

            # Get ports
            phone1_port = self.phone1_protocol.protocol.protocol.local_port
            phone2_port = self.phone2_protocol.protocol.protocol.local_port

            # Auto-fill peer ports
            self.phone1_frame.peer_port_input.setText(str(phone2_port))
            self.phone2_frame.peer_port_input.setText(str(phone1_port))

            # Update port labels
            self.phone1_frame.port_label.setText(str(phone1_port))
            self.phone2_frame.port_label.setText(str(phone2_port))

            self.log_status(f"✓ Phone 1 port: {phone1_port}")
            self.log_status(f"✓ Phone 2 port: {phone2_port}")

            # Connect phones
            self.connect_phone(1)
            time.sleep(1)
            self.connect_phone(2)
            time.sleep(2)

            self.log_status("✓ Connections established")

            # Test 2: ChaCha20 encryption (default)
            self.log_status("\n[TEST 2] Testing ChaCha20-Poly1305 encryption...")

            # Ensure ChaCha20 is selected
            self.phone1_frame.chacha_radio.setChecked(True)
            self.phone1_frame.aes_radio.setChecked(False)

            # Only phone 1 initiates to avoid race condition
            self.start_key_exchange(1)

            # Wait for key exchange
            timeout = 10
            start = time.time()
            while time.time() - start < timeout:
                if self.phone1_protocol.protocol.protocol.state.get("key_exchange_complete"):
                    break
                time.sleep(0.5)

            if self.phone1_protocol.protocol.protocol.state.get("key_exchange_complete"):
                self.log_status("✓ ChaCha20 key exchange successful")
                time.sleep(1)

                # Send test message
                test_msg = "Hello from automated test with ChaCha20!"
                self.phone1_frame.msg_input.setText(test_msg)
                self.send_message(1)
                self.log_status(f"✓ Sent encrypted message: {test_msg}")
                time.sleep(2)

                # Test voice only if enabled and safe
                if False:  # Disabled due to segfault issues
                    audio_file = Path(__file__).parent.parent / "input.wav"
                    if audio_file.exists():
                        self.log_status("\n[TEST 3] Testing voice transmission...")
                        self.test_voice_transmission()
                        self.log_status("✓ Voice transmission initiated")
                    else:
                        self.log_status("\n[TEST 3] Skipping voice test (input.wav not found)")
                else:
                    self.log_status("\n[TEST 3] Voice test disabled for stability")
            else:
                self.log_status("❌ Key exchange failed")

            # Summary
            self.log_status("\n" + "="*50)
            self.log_status("Auto Test Completed")
            self.log_status("✓ Connection setup successful")
            self.log_status("✓ ChaCha20 encryption tested")
            self.log_status("✓ Message transmission verified")
            self.log_status("="*50)

        except Exception as e:
            self.log_status(f"\n❌ Auto test error: {str(e)}")
            import traceback
            self.log_status(traceback.format_exc())
        finally:
            # Re-enable auto test button
            self.auto_test_btn.setEnabled(True)

    def update_phone_status(self, phone_id, message):
        """Update phone status display"""
        self.log_status(f"Phone {phone_id}: {message}")

    def log_status(self, message):
        """Log status message"""
        timestamp = time.strftime("%H:%M:%S")
        self.status_text.append(f"[{timestamp}] {message}")

    def closeEvent(self, event):
        """Clean up on close"""
        if self.phone1_protocol:
            self.phone1_protocol.stop()
        if self.phone2_protocol:
            self.phone2_protocol.stop()

        if self.gsm_process:
            self.gsm_process.terminate()

        # Kill any GSM simulator
        try:
            subprocess.run(["pkill", "-f", "gsm_simulator.py"], capture_output=True)
        except OSError:
            pass

        event.accept()


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = IntegratedPhoneUI()
    window.show()
    sys.exit(app.exec_())
@@ -1,10 +1,11 @@
 import sys
 from PyQt5.QtWidgets import (
     QApplication, QMainWindow, QWidget, QVBoxLayout, QHBoxLayout,
-    QPushButton, QLabel, QFrame, QSizePolicy, QStyle, QTextEdit, QSplitter
+    QPushButton, QLabel, QFrame, QSizePolicy, QStyle, QTextEdit, QSplitter,
+    QMenu, QAction, QInputDialog, QShortcut
 )
 from PyQt5.QtCore import Qt, QSize, QTimer, pyqtSignal
-from PyQt5.QtGui import QFont, QTextCursor
+from PyQt5.QtGui import QFont, QTextCursor, QKeySequence
 import time
 import threading
 from phone_manager import PhoneManager
@@ -51,8 +52,9 @@ class PhoneUI(QMainWindow):
         padding: 15px;
     }
     QWidget#phoneWidget {
-        border: 1px solid #4A4A4A; border-radius: 8px;
-        padding: 10px; background-color: #3A3A3A;
+        border: 2px solid #4A4A4A; border-radius: 10px;
+        background-color: #3A3A3A;
+        min-width: 250px;
     }
     QTextEdit#debugConsole {
         background-color: #1E1E1E; color: #00FF00;
@ -104,45 +106,63 @@ class PhoneUI(QMainWindow):
|
||||
|
||||
# Phone displays layout
|
||||
phone_controls_layout = QHBoxLayout()
|
||||
phone_controls_layout.setSpacing(30)
|
||||
phone_controls_layout.setAlignment(Qt.AlignCenter)
|
||||
phone_controls_layout.setSpacing(20)
|
||||
phone_controls_layout.setContentsMargins(10, 0, 10, 0)
|
||||
phones_layout.addLayout(phone_controls_layout)
|
||||
|
||||
# Setup UI for phones
|
||||
for phone in self.manager.phones:
|
||||
phone_container_widget, phone_display_frame, phone_button, waveform_widget, sent_waveform_widget, phone_status_label = self._create_phone_ui(
|
||||
phone_container_widget, phone_display_frame, phone_button, waveform_widget, sent_waveform_widget, phone_status_label, playback_button, record_button = self._create_phone_ui(
                f"Phone {phone['id']+1}", lambda checked, pid=phone['id']: self.manager.phone_action(pid, self)
            )
            phone['button'] = phone_button
            phone['waveform'] = waveform_widget
            phone['sent_waveform'] = sent_waveform_widget
            phone['status_label'] = phone_status_label
            phone['playback_button'] = playback_button
            phone['record_button'] = record_button

            # Connect audio control buttons with proper closure
            playback_button.clicked.connect(lambda checked, pid=phone['id']: self.toggle_playback(pid))
            record_button.clicked.connect(lambda checked, pid=phone['id']: self.toggle_recording(pid))
            phone_controls_layout.addWidget(phone_container_widget)
            phone['client'].data_received.connect(lambda data, cid=phone['id']: self.manager.update_waveform(cid, data))
            # Connect data_received signal - it emits (data, client_id)
            phone['client'].data_received.connect(lambda data, cid: self.manager.update_waveform(cid, data))
            phone['client'].state_changed.connect(lambda state, num, cid=phone['id']: self.set_phone_state(cid, state, num))
            phone['client'].start()

        # Control buttons layout
        control_layout = QHBoxLayout()
        control_layout.setSpacing(20)
        control_layout.setSpacing(15)
        control_layout.setContentsMargins(20, 10, 20, 10)

        # Auto Test Button
        self.auto_test_button = QPushButton("🧪 Run Automatic Test")
        self.auto_test_button.setObjectName("autoTestButton")
        self.auto_test_button.setFixedWidth(200)
        self.auto_test_button.setMinimumWidth(180)
        self.auto_test_button.setMaximumWidth(250)
        self.auto_test_button.clicked.connect(self.toggle_auto_test)
        control_layout.addWidget(self.auto_test_button)

        # Clear Debug Button
        self.clear_debug_button = QPushButton("Clear Debug")
        self.clear_debug_button.setFixedWidth(120)
        self.clear_debug_button.setMinimumWidth(100)
        self.clear_debug_button.setMaximumWidth(150)
        self.clear_debug_button.clicked.connect(self.clear_debug)
        control_layout.addWidget(self.clear_debug_button)

        # Audio Processing Button
        self.audio_menu_button = QPushButton("Audio Options")
        self.audio_menu_button.setMinimumWidth(100)
        self.audio_menu_button.setMaximumWidth(150)
        self.audio_menu_button.clicked.connect(self.show_audio_menu)
        control_layout.addWidget(self.audio_menu_button)

        # Settings Button
        self.settings_button = QPushButton("Settings")
        self.settings_button.setObjectName("settingsButton")
        self.settings_button.setFixedWidth(120)
        self.settings_button.setMinimumWidth(100)
        self.settings_button.setMaximumWidth(150)
        self.settings_button.setIcon(self.style().standardIcon(QStyle.SP_FileDialogDetailedView))
        self.settings_button.setIconSize(QSize(20, 20))
        self.settings_button.clicked.connect(self.settings_action)
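The repeated `pid=phone['id']` / `cid=phone['id']` default arguments in the signal connections above are the standard workaround for Python's late-binding closures: without them, every lambda created in the loop would see the last phone's id when it finally runs. A minimal standalone sketch of the pitfall (plain callables instead of Qt signals; all names here are illustrative):

```python
# Sketch of the late-binding pitfall the `pid=phone['id']` default works around.
# Plain callables stand in for Qt signal handlers; names are illustrative.

def make_handlers_buggy(ids):
    # Every lambda closes over the *variable* i, not its value at creation time.
    return [lambda: i for i in ids]

def make_handlers_fixed(ids):
    # A default argument captures the value when the lambda is created.
    return [lambda i=i: i for i in ids]

buggy = [h() for h in make_handlers_buggy([0, 1])]
fixed = [h() for h in make_handlers_fixed([0, 1])]
print(buggy)  # [1, 1] - both handlers see the last loop value
print(fixed)  # [0, 1] - each handler keeps its own id
```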
@@ -176,13 +196,18 @@ class PhoneUI(QMainWindow):

        # Initial debug message
        QTimer.singleShot(100, lambda: self.debug("DryBox UI initialized with integrated protocol"))

        # Setup keyboard shortcuts
        self.setup_shortcuts()

    def _create_phone_ui(self, title, action_slot):
        phone_container_widget = QWidget()
        phone_container_widget.setObjectName("phoneWidget")
        phone_container_widget.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Preferred)
        phone_layout = QVBoxLayout()
        phone_layout.setAlignment(Qt.AlignCenter)
        phone_layout.setSpacing(15)
        phone_layout.setSpacing(10)
        phone_layout.setContentsMargins(15, 15, 15, 15)
        phone_container_widget.setLayout(phone_layout)

        phone_title_label = QLabel(title)
@@ -192,8 +217,9 @@ class PhoneUI(QMainWindow):

        phone_display_frame = QFrame()
        phone_display_frame.setObjectName("phoneDisplay")
        phone_display_frame.setFixedSize(220, 300)
        phone_display_frame.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed)
        phone_display_frame.setMinimumSize(200, 250)
        phone_display_frame.setMaximumSize(300, 400)
        phone_display_frame.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Preferred)

        display_content_layout = QVBoxLayout(phone_display_frame)
        display_content_layout.setAlignment(Qt.AlignCenter)
@@ -204,7 +230,8 @@ class PhoneUI(QMainWindow):
        phone_layout.addWidget(phone_display_frame, alignment=Qt.AlignCenter)

        phone_button = QPushButton()
        phone_button.setFixedWidth(120)
        phone_button.setMinimumWidth(100)
        phone_button.setMaximumWidth(150)
        phone_button.setIconSize(QSize(20, 20))
        phone_button.clicked.connect(action_slot)
        phone_layout.addWidget(phone_button, alignment=Qt.AlignCenter)
@@ -215,7 +242,9 @@ class PhoneUI(QMainWindow):
        waveform_label.setStyleSheet("font-size: 12px; color: #E0E0E0;")
        phone_layout.addWidget(waveform_label)
        waveform_widget = WaveformWidget(dynamic=False)
        waveform_widget.setFixedSize(220, 60)
        waveform_widget.setMinimumSize(200, 50)
        waveform_widget.setMaximumSize(300, 80)
        waveform_widget.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Fixed)
        phone_layout.addWidget(waveform_widget, alignment=Qt.AlignCenter)

        # Sent waveform
@@ -224,10 +253,60 @@ class PhoneUI(QMainWindow):
        sent_waveform_label.setStyleSheet("font-size: 12px; color: #E0E0E0;")
        phone_layout.addWidget(sent_waveform_label)
        sent_waveform_widget = WaveformWidget(dynamic=False)
        sent_waveform_widget.setFixedSize(220, 60)
        sent_waveform_widget.setMinimumSize(200, 50)
        sent_waveform_widget.setMaximumSize(300, 80)
        sent_waveform_widget.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Fixed)
        phone_layout.addWidget(sent_waveform_widget, alignment=Qt.AlignCenter)

        # Audio control buttons
        audio_controls_layout = QHBoxLayout()
        audio_controls_layout.setAlignment(Qt.AlignCenter)

        playback_button = QPushButton("🔊 Playback")
        playback_button.setCheckable(True)
        playback_button.setMinimumWidth(90)
        playback_button.setMaximumWidth(120)
        playback_button.setStyleSheet("""
            QPushButton {
                background-color: #404040;
                color: white;
                border: 1px solid #606060;
                padding: 5px;
                border-radius: 3px;
            }
            QPushButton:checked {
                background-color: #4CAF50;
            }
            QPushButton:hover {
                background-color: #505050;
            }
        """)

        record_button = QPushButton("⏺ Record")
        record_button.setCheckable(True)
        record_button.setMinimumWidth(90)
        record_button.setMaximumWidth(120)
        record_button.setStyleSheet("""
            QPushButton {
                background-color: #404040;
                color: white;
                border: 1px solid #606060;
                padding: 5px;
                border-radius: 3px;
            }
            QPushButton:checked {
                background-color: #F44336;
            }
            QPushButton:hover {
                background-color: #505050;
            }
        """)

        audio_controls_layout.addWidget(playback_button)
        audio_controls_layout.addWidget(record_button)
        phone_layout.addLayout(audio_controls_layout)

        return phone_container_widget, phone_display_frame, phone_button, waveform_widget, sent_waveform_widget, phone_status_label
        return phone_container_widget, phone_display_frame, phone_button, waveform_widget, sent_waveform_widget, phone_status_label, playback_button, record_button

    def update_phone_ui(self, phone_id):
        phone = self.manager.phones[phone_id]
@@ -488,10 +567,142 @@ class PhoneUI(QMainWindow):
            return

        self.test_step += 1

    def toggle_playback(self, phone_id):
        """Toggle audio playback for a phone"""
        is_enabled = self.manager.toggle_playback(phone_id)
        phone = self.manager.phones[phone_id]
        phone['playback_button'].setChecked(is_enabled)

        if is_enabled:
            self.debug(f"Phone {phone_id + 1}: Audio playback enabled")
        else:
            self.debug(f"Phone {phone_id + 1}: Audio playback disabled")

    def toggle_recording(self, phone_id):
        """Toggle audio recording for a phone"""
        is_recording, save_path = self.manager.toggle_recording(phone_id)
        phone = self.manager.phones[phone_id]
        phone['record_button'].setChecked(is_recording)

        if is_recording:
            self.debug(f"Phone {phone_id + 1}: Recording started")
        else:
            if save_path:
                self.debug(f"Phone {phone_id + 1}: Recording saved to {save_path}")
            else:
                self.debug(f"Phone {phone_id + 1}: Recording stopped (no data)")

    def show_audio_menu(self):
        """Show audio processing options menu"""
        menu = QMenu(self)

        # Create phone selection submenu
        for phone_id in range(2):
            phone_menu = menu.addMenu(f"Phone {phone_id + 1}")

            # Export buffer
            export_action = QAction("Export Audio Buffer", self)
            export_action.triggered.connect(lambda checked, pid=phone_id: self.export_audio_buffer(pid))
            phone_menu.addAction(export_action)

            # Clear buffer
            clear_action = QAction("Clear Audio Buffer", self)
            clear_action.triggered.connect(lambda checked, pid=phone_id: self.clear_audio_buffer(pid))
            phone_menu.addAction(clear_action)

            phone_menu.addSeparator()

            # Processing options
            normalize_action = QAction("Normalize Audio", self)
            normalize_action.triggered.connect(lambda checked, pid=phone_id: self.process_audio(pid, "normalize"))
            phone_menu.addAction(normalize_action)

            gain_action = QAction("Apply Gain...", self)
            gain_action.triggered.connect(lambda checked, pid=phone_id: self.apply_gain_dialog(pid))
            phone_menu.addAction(gain_action)

            noise_gate_action = QAction("Apply Noise Gate", self)
            noise_gate_action.triggered.connect(lambda checked, pid=phone_id: self.process_audio(pid, "noise_gate"))
            phone_menu.addAction(noise_gate_action)

            low_pass_action = QAction("Apply Low Pass Filter", self)
            low_pass_action.triggered.connect(lambda checked, pid=phone_id: self.process_audio(pid, "low_pass"))
            phone_menu.addAction(low_pass_action)

            high_pass_action = QAction("Apply High Pass Filter", self)
            high_pass_action.triggered.connect(lambda checked, pid=phone_id: self.process_audio(pid, "high_pass"))
            phone_menu.addAction(high_pass_action)

            remove_silence_action = QAction("Remove Silence", self)
            remove_silence_action.triggered.connect(lambda checked, pid=phone_id: self.process_audio(pid, "remove_silence"))
            phone_menu.addAction(remove_silence_action)

        # Show menu at button position
        menu.exec_(self.audio_menu_button.mapToGlobal(self.audio_menu_button.rect().bottomLeft()))

    def export_audio_buffer(self, phone_id):
        """Export audio buffer for a phone"""
        save_path = self.manager.export_buffered_audio(phone_id)
        if save_path:
            self.debug(f"Phone {phone_id + 1}: Audio buffer exported to {save_path}")
        else:
            self.debug(f"Phone {phone_id + 1}: No audio data to export")

    def clear_audio_buffer(self, phone_id):
        """Clear audio buffer for a phone"""
        self.manager.clear_audio_buffer(phone_id)

    def process_audio(self, phone_id, processing_type):
        """Process audio with specified type"""
        save_path = self.manager.process_audio(phone_id, processing_type)
        if save_path:
            self.debug(f"Phone {phone_id + 1}: Processed audio saved to {save_path}")
        else:
            self.debug(f"Phone {phone_id + 1}: Audio processing failed")

    def apply_gain_dialog(self, phone_id):
        """Show dialog to get gain value"""
        gain, ok = QInputDialog.getDouble(
            self, "Apply Gain", "Enter gain in dB:",
            0.0, -20.0, 20.0, 1
        )
        if ok:
            save_path = self.manager.process_audio(phone_id, "gain", gain_db=gain)
            if save_path:
                self.debug(f"Phone {phone_id + 1}: Applied {gain}dB gain, saved to {save_path}")

    def setup_shortcuts(self):
        """Setup keyboard shortcuts"""
        # Phone 1 shortcuts
        QShortcut(QKeySequence("1"), self, lambda: self.manager.phone_action(0, self))
        QShortcut(QKeySequence("Ctrl+1"), self, lambda: self.toggle_playback(0))
        QShortcut(QKeySequence("Alt+1"), self, lambda: self.toggle_recording(0))

        # Phone 2 shortcuts
        QShortcut(QKeySequence("2"), self, lambda: self.manager.phone_action(1, self))
        QShortcut(QKeySequence("Ctrl+2"), self, lambda: self.toggle_playback(1))
        QShortcut(QKeySequence("Alt+2"), self, lambda: self.toggle_recording(1))

        # General shortcuts
        QShortcut(QKeySequence("Space"), self, self.toggle_auto_test)
        QShortcut(QKeySequence("Ctrl+L"), self, self.clear_debug)
        QShortcut(QKeySequence("Ctrl+A"), self, self.show_audio_menu)

        self.debug("Keyboard shortcuts enabled:")
        self.debug("  1/2: Phone action (call/answer/hangup)")
        self.debug("  Ctrl+1/2: Toggle playback")
        self.debug("  Alt+1/2: Toggle recording")
        self.debug("  Space: Toggle auto test")
        self.debug("  Ctrl+L: Clear debug")
        self.debug("  Ctrl+A: Audio options menu")

    def closeEvent(self, event):
        if self.auto_test_running:
            self.stop_auto_test()
        # Clean up audio player
        if hasattr(self.manager, 'audio_player'):
            self.manager.audio_player.cleanup()
        for phone in self.manager.phones:
            phone['client'].stop()
        event.accept()
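The gain dialog above collects a value in dB and passes it on as `gain_db`. Applying a dB gain to 16-bit PCM means scaling each sample by 10^(dB/20) and clamping to the int16 range. The sketch below illustrates that arithmetic only; it is an assumption about what an `apply_gain`-style routine does, not the project's actual implementation (which lives in audio_processor.py and is not shown in this diff):

```python
import struct

def apply_gain_db(pcm: bytes, gain_db: float) -> bytes:
    """Scale 16-bit mono PCM by a dB factor, clamping to the int16 range.

    Illustrative sketch only - not the AudioProcessor implementation.
    """
    factor = 10 ** (gain_db / 20.0)
    samples = struct.unpack(f'{len(pcm)//2}h', pcm)
    scaled = (max(-32768, min(32767, int(s * factor))) for s in samples)
    return struct.pack(f'{len(pcm)//2}h', *scaled)

quiet = struct.pack('4h', 100, -100, 1000, -1000)
louder = struct.unpack('4h', apply_gain_db(quiet, 6.0))
print(louder)  # +6 dB is roughly a 2x amplitude increase
```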
@@ -3,6 +3,8 @@ from PyQt5.QtCore import QTimer
from protocol_phone_client import ProtocolPhoneClient
from session import NoiseXKSession
from phone_state import PhoneState  # Added import
from audio_player import AudioPlayer
from audio_processor import AudioProcessor
import struct
import wave
import os
@@ -12,6 +14,11 @@ class PhoneManager:
        self.phones = []
        self.handshake_done_count = 0
        self.ui = None  # Will be set by UI
        self.audio_player = AudioPlayer()
        self.audio_player.set_debug_callback(self.debug)
        self.audio_processor = AudioProcessor()
        self.audio_processor.set_debug_callback(self.debug)
        self.audio_buffer = {}  # client_id -> list of audio chunks for processing

    def debug(self, message):
        """Send debug message to UI if available"""
@@ -36,7 +43,9 @@ class PhoneManager:
                'public_key': keypair.public,
                'is_initiator': False,
                'audio_file': None,  # For test audio
                'frame_counter': 0
                'frame_counter': 0,
                'playback_enabled': False,
                'recording_enabled': False
            }
            client.keypair = keypair  # Also set keypair on client
            self.phones.append(phone)
@@ -102,13 +111,22 @@ class PhoneManager:
        if phone['state'] == PhoneState.IN_CALL and phone['client'].handshake_complete and phone['client'].voice_active:
            # Load test audio file if not loaded
            if phone['audio_file'] is None:
                wav_path = "../wav/input_8k_mono.wav"
                wav_path = "../wav/input.wav"
                if not os.path.exists(wav_path):
                    wav_path = "wav/input_8k_mono.wav"
                    wav_path = "wav/input.wav"
                if os.path.exists(wav_path):
                    try:
                        phone['audio_file'] = wave.open(wav_path, 'rb')
                        self.debug(f"Phone {phone_id + 1} loaded test audio file: {wav_path}")
                        # Verify it's 8kHz mono
                        if phone['audio_file'].getframerate() != 8000:
                            self.debug(f"Warning: {wav_path} is {phone['audio_file'].getframerate()}Hz, expected 8000Hz")
                        if phone['audio_file'].getnchannels() != 1:
                            self.debug(f"Warning: {wav_path} has {phone['audio_file'].getnchannels()} channels, expected 1")

                        # Skip initial silence - jump to 1 second in (8000 samples)
                        phone['audio_file'].setpos(8000)
                        self.debug(f"Phone {phone_id + 1} skipped initial silence, starting at 1 second")
                    except Exception as e:
                        self.debug(f"Phone {phone_id + 1} failed to load audio: {e}")
                        # Use mock audio as fallback
@@ -119,9 +137,10 @@ class PhoneManager:
            try:
                frames = phone['audio_file'].readframes(320)
                if not frames or len(frames) < 640:  # 320 samples * 2 bytes
                    # Loop back to start
                    phone['audio_file'].rewind()
                    # Loop back to 1 second (skip silence)
                    phone['audio_file'].setpos(8000)
                    frames = phone['audio_file'].readframes(320)
                    self.debug(f"Phone {phone_id + 1} looped audio back to 1 second mark")

                # Send through protocol (codec + 4FSK + encryption)
                phone['client'].send_voice_frame(frames)
@@ -131,6 +150,12 @@ class PhoneManager:
                samples = struct.unpack(f'{len(frames)//2}h', frames)
                self.update_sent_waveform(phone_id, frames)

                # If playback is enabled on the sender, play the original audio
                if phone['playback_enabled']:
                    self.audio_player.add_audio_data(phone_id, frames)
                    if phone['frame_counter'] % 25 == 0:
                        self.debug(f"Phone {phone_id + 1} playing original audio (sender playback)")

                phone['frame_counter'] += 1
                if phone['frame_counter'] % 25 == 0:  # Log every second
                    self.debug(f"Phone {phone_id + 1} sent {phone['frame_counter']} voice frames")
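The `setpos(8000)` and `readframes(320)` values above are all derived from the 8 kHz mono, 16-bit format: 1 second is 8000 frames, a 320-sample block is 40 ms (so 25 blocks per second, which is why the logging fires on `% 25`), and one block is 640 bytes of PCM. The conversions, spelled out:

```python
# Time/sample arithmetic behind setpos(8000) and readframes(320).
SAMPLE_RATE = 8000      # Hz, mono test audio
FRAME_SAMPLES = 320     # samples per voice frame
BYTES_PER_SAMPLE = 2    # 16-bit PCM

def seconds_to_frames(seconds: float) -> int:
    """Convert a time offset to a wave-file frame position."""
    return int(seconds * SAMPLE_RATE)

frame_ms = 1000 * FRAME_SAMPLES / SAMPLE_RATE
frames_per_second = SAMPLE_RATE // FRAME_SAMPLES
frame_bytes = FRAME_SAMPLES * BYTES_PER_SAMPLE

print(seconds_to_frames(1.0))  # 8000 - the setpos() silence skip
print(frame_ms)                # 40.0 ms per voice frame
print(frames_per_second)       # 25 - matches the "% 25" once-a-second logging
print(frame_bytes)             # 640 - the readframes(320) byte count
```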
@@ -168,10 +193,172 @@ class PhoneManager:
        self.handshake_done_count = 0

    def update_waveform(self, client_id, data):
        # Only process actual audio data (should be 640 bytes for 320 samples * 2 bytes)
        # Ignore small control messages
        if len(data) < 320:  # Less than 160 samples (too small for audio)
            self.debug(f"Phone {client_id + 1} received non-audio data: {len(data)} bytes (ignoring)")
            return

        self.phones[client_id]['waveform'].set_data(data)

        # Debug log audio data reception (only occasionally to avoid spam)
        if not hasattr(self, '_audio_frame_count'):
            self._audio_frame_count = {}
        if client_id not in self._audio_frame_count:
            self._audio_frame_count[client_id] = 0
        self._audio_frame_count[client_id] += 1

        if self._audio_frame_count[client_id] == 1 or self._audio_frame_count[client_id] % 25 == 0:
            self.debug(f"Phone {client_id + 1} received audio frame #{self._audio_frame_count[client_id]}: {len(data)} bytes")

        # Store audio data in buffer for potential processing
        if client_id not in self.audio_buffer:
            self.audio_buffer[client_id] = []
        self.audio_buffer[client_id].append(data)

        # Keep buffer size reasonable (last 30 seconds at 8kHz)
        max_chunks = 30 * 25  # 30 seconds * 25 chunks/second
        if len(self.audio_buffer[client_id]) > max_chunks:
            self.audio_buffer[client_id] = self.audio_buffer[client_id][-max_chunks:]

        # Forward audio data to player if playback is enabled
        if self.phones[client_id]['playback_enabled']:
            if self._audio_frame_count[client_id] == 1:
                self.debug(f"Phone {client_id + 1} forwarding audio to player (playback enabled)")
            self.audio_player.add_audio_data(client_id, data)

    def update_sent_waveform(self, client_id, data):
        self.phones[client_id]['sent_waveform'].set_data(data)

    def toggle_playback(self, client_id):
        """Toggle audio playback for a phone"""
        phone = self.phones[client_id]

        if phone['playback_enabled']:
            # Stop playback
            self.audio_player.stop_playback(client_id)
            phone['playback_enabled'] = False
            self.debug(f"Phone {client_id + 1} playback stopped")
        else:
            # Start playback
            if self.audio_player.start_playback(client_id):
                phone['playback_enabled'] = True
                self.debug(f"Phone {client_id + 1} playback started")
                # Removed test beep - we want to hear actual audio
            else:
                self.debug(f"Phone {client_id + 1} failed to start playback")

        return phone['playback_enabled']

    def toggle_recording(self, client_id):
        """Toggle audio recording for a phone"""
        phone = self.phones[client_id]

        if phone['recording_enabled']:
            # Stop recording and save
            save_path = self.audio_player.stop_recording(client_id)
            phone['recording_enabled'] = False
            if save_path:
                self.debug(f"Phone {client_id + 1} recording saved to {save_path}")
            return False, save_path
        else:
            # Start recording
            self.audio_player.start_recording(client_id)
            phone['recording_enabled'] = True
            self.debug(f"Phone {client_id + 1} recording started")
            return True, None

    def save_received_audio(self, client_id, filename=None):
        """Save the last received audio to a file"""
        if client_id not in self.phones:
            return None

        save_path = self.audio_player.stop_recording(client_id, filename)
        if save_path:
            self.debug(f"Phone {client_id + 1} audio saved to {save_path}")
        return save_path

    def process_audio(self, client_id, processing_type, **kwargs):
        """Process buffered audio with specified processing type"""
        if client_id not in self.audio_buffer or not self.audio_buffer[client_id]:
            self.debug(f"No audio data available for Phone {client_id + 1}")
            return None

        # Combine all audio chunks
        combined_audio = b''.join(self.audio_buffer[client_id])

        # Apply processing based on type
        processed_audio = combined_audio

        if processing_type == "normalize":
            target_db = kwargs.get('target_db', -3)
            processed_audio = self.audio_processor.normalize_audio(combined_audio, target_db)

        elif processing_type == "gain":
            gain_db = kwargs.get('gain_db', 0)
            processed_audio = self.audio_processor.apply_gain(combined_audio, gain_db)

        elif processing_type == "noise_gate":
            threshold_db = kwargs.get('threshold_db', -40)
            processed_audio = self.audio_processor.apply_noise_gate(combined_audio, threshold_db)

        elif processing_type == "low_pass":
            cutoff_hz = kwargs.get('cutoff_hz', 3400)
            processed_audio = self.audio_processor.apply_low_pass_filter(combined_audio, cutoff_hz)

        elif processing_type == "high_pass":
            cutoff_hz = kwargs.get('cutoff_hz', 300)
            processed_audio = self.audio_processor.apply_high_pass_filter(combined_audio, cutoff_hz)

        elif processing_type == "remove_silence":
            threshold_db = kwargs.get('threshold_db', -40)
            processed_audio = self.audio_processor.remove_silence(combined_audio, threshold_db)

        # Save processed audio
        save_path = f"wav/phone{client_id + 1}_received.wav"
        processed_path = self.audio_processor.save_processed_audio(
            processed_audio, save_path, processing_type
        )

        return processed_path

    def export_buffered_audio(self, client_id, filename=None):
        """Export current audio buffer to file"""
        if client_id not in self.audio_buffer or not self.audio_buffer[client_id]:
            self.debug(f"No audio data available for Phone {client_id + 1}")
            return None

        # Combine all audio chunks
        combined_audio = b''.join(self.audio_buffer[client_id])

        # Generate filename if not provided
        if not filename:
            from datetime import datetime
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename = f"wav/phone{client_id + 1}_buffer_{timestamp}.wav"

        # Ensure directory exists
        os.makedirs(os.path.dirname(filename), exist_ok=True)

        try:
            with wave.open(filename, 'wb') as wav_file:
                wav_file.setnchannels(1)
                wav_file.setsampwidth(2)
                wav_file.setframerate(8000)
                wav_file.writeframes(combined_audio)

            self.debug(f"Exported audio buffer for Phone {client_id + 1} to {filename}")
            return filename

        except Exception as e:
            self.debug(f"Failed to export audio buffer: {e}")
            return None

    def clear_audio_buffer(self, client_id):
        """Clear audio buffer for a phone"""
        if client_id in self.audio_buffer:
            self.audio_buffer[client_id] = []
            self.debug(f"Cleared audio buffer for Phone {client_id + 1}")

    def map_state(self, state_str):
        if state_str == "RINGING":
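The buffer trim in `update_waveform` above bounds memory by keeping only the newest `30 * 25` chunks (about 30 seconds of 40 ms frames). The same bounded-list idea in isolation (`append_chunk` is an illustrative helper, not a function from this codebase):

```python
# Bounded chunk buffer: 320-sample frames at 8 kHz arrive 25 times per second.
MAX_CHUNKS = 30 * 25  # ~30 seconds of audio

def append_chunk(buffer, chunk, max_chunks=MAX_CHUNKS):
    """Append a frame and drop the oldest frames beyond the cap."""
    buffer.append(chunk)
    if len(buffer) > max_chunks:
        del buffer[:len(buffer) - max_chunks]
    return buffer

buf = []
for _ in range(MAX_CHUNKS + 10):   # simulate 10 frames past the cap
    append_chunk(buf, bytes(640))  # 320 samples * 2 bytes each
print(len(buf))  # 750 - capped at MAX_CHUNKS, oldest frames discarded
```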
@@ -1,323 +1,456 @@
import sys
import os
import socket
import time
import select
import threading
import struct
import array
from PyQt5.QtCore import QThread, pyqtSignal

# Add Protocol directory to Python path
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'Protocol'))

from protocol import IcingProtocol
from voice_codec import VoiceProtocol, Codec2Mode
from messages import VoiceStart, VoiceAck, VoiceEnd
from encryption import EncryptedMessage
from protocol_client_state import ProtocolClientState
from session import NoiseXKSession
from noise_wrapper import NoiseXKWrapper
from dissononce.dh.keypair import KeyPair
from dissononce.dh.x25519.public import PublicKey
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from voice_codec import Codec2Wrapper, FSKModem, Codec2Mode
# ChaCha20 removed - using only Noise XK encryption

class ProtocolPhoneClient(QThread):
    """Phone client that integrates the full Icing Protocol with 4FSK and ChaCha20."""

    """Integrated phone client with Noise XK, Codec2, 4FSK, and ChaCha20"""
    data_received = pyqtSignal(bytes, int)
    state_changed = pyqtSignal(str, str, int)
    audio_received = pyqtSignal(bytes, int)  # For decoded audio

    def __init__(self, client_id, identity_keys=None):
    def __init__(self, client_id):
        super().__init__()
        self.host = "localhost"
        self.port = 12345
        self.client_id = client_id
        self.sock = None
        self.running = True
        self.state = ProtocolClientState(client_id)

        # Initialize Icing Protocol
        self.protocol = IcingProtocol()

        # Override identity keys if provided
        if identity_keys:
            self.protocol.identity_privkey = identity_keys[0]
            self.protocol.identity_pubkey = identity_keys[1]

        # Connection state
        self.connected = False
        # Noise XK session
        self.noise_session = None
        self.noise_wrapper = None
        self.handshake_complete = False
        self.handshake_initiated = False

        # No buffer needed with larger frame size

        # Voice codec components
        self.codec = Codec2Wrapper(mode=Codec2Mode.MODE_1200)
        self.modem = FSKModem()

        # Voice encryption handled by Noise XK
        # No separate voice key needed

        # Voice state
        self.voice_active = False
        self.voice_protocol = None
        self.voice_frame_counter = 0

        # Peer information
        self.peer_identity_hex = None
        self.peer_port = None
        # Message buffer for fragmented messages
        self.recv_buffer = bytearray()

        # For GSM simulator compatibility
        self.gsm_host = "localhost"
        self.gsm_port = 12345
        self.gsm_socket = None
        # Debug callback
        self.debug_callback = None

    def set_debug_callback(self, callback):
        """Set debug callback function"""
        self.debug_callback = callback
        self.state.debug_callback = callback

    def debug(self, message):
        """Send debug message"""
        if self.debug_callback:
            self.debug_callback(f"[Phone{self.client_id+1}] {message}")
        else:
            print(f"[Phone{self.client_id+1}] {message}")

        # Track processed messages
        self.processed_message_count = 0

    def set_peer_identity(self, peer_identity_hex):
        """Set the peer's identity public key (hex string)."""
        self.peer_identity_hex = peer_identity_hex
        self.protocol.set_peer_identity(peer_identity_hex)

    def set_peer_port(self, port):
        """Set the peer's port for direct connection."""
        self.peer_port = port

    def connect_to_gsm_simulator(self):
        """Connect to the GSM simulator for voice channel simulation."""
        try:
            self.gsm_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.gsm_socket.settimeout(5)
            self.gsm_socket.connect((self.gsm_host, self.gsm_port))
            print(f"Client {self.client_id} connected to GSM simulator")
            return True
        except Exception as e:
            print(f"Client {self.client_id} failed to connect to GSM simulator: {e}")
            return False
    def connect_socket(self):
        retries = 3
        for attempt in range(retries):
            try:
                self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
                self.sock.settimeout(120)
                self.sock.connect((self.host, self.port))
                self.debug(f"Connected to GSM simulator at {self.host}:{self.port}")
                return True
            except Exception as e:
                self.debug(f"Connection attempt {attempt + 1} failed: {e}")
                if attempt < retries - 1:
                    time.sleep(1)
        self.sock = None
        return False

    def run(self):
        """Main thread loop."""
        # Protocol listener already started in __init__ of IcingProtocol

        # Connect to GSM simulator if available
        self.connect_to_gsm_simulator()

        while self.running:
            try:
                # Process protocol messages
                self._process_protocol_messages()

                # Process GSM simulator data if connected
                if self.gsm_socket:
                    self._process_gsm_data()

                self.msleep(10)

            except Exception as e:
                print(f"Client {self.client_id} error in main loop: {e}")
                # Only emit state change if it's a real connection error
                if "get_messages" not in str(e):
        if not self.sock:
            if not self.connect_socket():
                self.debug("Failed to connect after retries")
                self.state_changed.emit("CALL_END", "", self.client_id)
                break
        try:
            while self.running:
                self.state.process_command(self)
                self.state.check_handshake_timeout(self)

                if self.handshake_complete and self.voice_active:
                    # Process voice data if active
                    self._process_voice_data()

                # Always check for incoming data, even during handshake
                if self.sock is None:
                    break
                readable, _, _ = select.select([self.sock], [], [], 0.01)
                if readable:
                    try:
                        if self.sock is None:
                            break
                        chunk = self.sock.recv(4096)
                        if not chunk:
                            self.debug("Disconnected from server")
                            self.state_changed.emit("CALL_END", "", self.client_id)
                            break

                        # Add to buffer
                        self.recv_buffer.extend(chunk)

                        # Process complete messages
                        while len(self.recv_buffer) >= 4:
                            # Read message length
                            msg_len = struct.unpack('>I', self.recv_buffer[:4])[0]

                            # Check if we have the complete message
                            if len(self.recv_buffer) >= 4 + msg_len:
                                # Extract message
                                data = bytes(self.recv_buffer[4:4+msg_len])
                                # Remove from buffer
                                self.recv_buffer = self.recv_buffer[4+msg_len:]
                                # Pass to state handler
                                self.state.handle_data(self, data)
                            else:
                                # Wait for more data
                                break

                    except socket.error as e:
                        self.debug(f"Socket error: {e}")
                        self.state_changed.emit("CALL_END", "", self.client_id)
                        break

                self.msleep(1)
        except Exception as e:
            self.debug(f"Unexpected error in run loop: {e}")
            self.state_changed.emit("CALL_END", "", self.client_id)
            break

    def _process_protocol_messages(self):
        """Process messages from the Icing Protocol."""
        # Process new messages in the inbound queue
        if not hasattr(self.protocol, 'inbound_messages'):
        finally:
            if self.sock:
                try:
                    self.sock.close()
                except Exception as e:
                    self.debug(f"Error closing socket: {e}")
                self.sock = None

    def _handle_encrypted_data(self, data):
        """Handle encrypted data after handshake"""
        if not self.handshake_complete or not self.noise_wrapper:
            self.debug(f"Cannot decrypt - handshake not complete")
            return

        new_messages = self.protocol.inbound_messages[self.processed_message_count:]

        for msg in new_messages:
            self.processed_message_count += 1
            msg_type = msg.get('type', '')

            if msg_type == 'PING_REQUEST':
                # Received ping request, we're the responder
                if not self.protocol.state['ping_sent']:
                    # Enable auto responder to handle protocol flow
                    self.protocol.auto_responder = True
                    # Send ping response
                    index = self.protocol.inbound_messages.index(msg)
                    self.protocol.respond_to_ping(index, 1)  # Accept with ChaCha20

            elif msg_type == 'PING_RESPONSE':
                # Ping response received, continue with handshake
                if not self.protocol.state['handshake_sent']:
                    self.protocol.send_handshake()

            elif msg_type == 'HANDSHAKE':
                # Handshake received
                if self.protocol.state['ping_sent'] and not self.protocol.state['handshake_sent']:
                    # We're initiator, send our handshake
                    self.protocol.send_handshake()
                # Derive keys if we have peer's handshake
                if self.protocol.state['handshake_received'] and not self.protocol.state['key_exchange_complete']:
                    self.protocol.derive_hkdf()
                    self.handshake_complete = True
                    self.state_changed.emit("HANDSHAKE_DONE", "", self.client_id)

            elif msg_type == 'ENCRYPTED':
                # Decrypt and process encrypted message
                parsed = msg.get('parsed')
                if parsed and hasattr(parsed, 'plaintext'):
                    self._handle_encrypted_message(parsed.plaintext)

            elif msg_type == 'voice_start':
                # Voice session started by peer
                self._handle_voice_start(msg.get('parsed'))

            elif msg_type == 'voice_ack':
                # Voice session acknowledged
                self.voice_active = True

            elif msg_type == 'voice_end':
                # Voice session ended
                self.voice_active = False

    def _process_gsm_data(self):
        """Process audio data from GSM simulator."""
        # All data after handshake is encrypted, decrypt it first
        try:
            readable, _, _ = select.select([self.gsm_socket], [], [], 0)
            if readable:
                data = self.gsm_socket.recv(4096)
                if data and self.voice_active and self.voice_protocol:
                    # Process received FSK-modulated audio
                    encrypted_frames = self.voice_protocol.demodulate_audio(data)
                    for frame in encrypted_frames:
                        # Decrypt and decode
                        audio_samples = self.voice_protocol.decrypt_and_decode(frame)
                        if audio_samples:
                            self.audio_received.emit(audio_samples, self.client_id)
        except Exception as e:
            print(f"Client {self.client_id} GSM data processing error: {e}")
|
||||
|
||||
def _handle_encrypted_message(self, plaintext):
|
||||
"""Handle decrypted message content."""
|
||||
# Check if it's audio data or control message
|
||||
if plaintext.startswith(b'AUDIO:'):
|
||||
audio_data = plaintext[6:]
|
||||
self.data_received.emit(audio_data, self.client_id)
|
||||
else:
|
||||
# Control message; plaintext was already decrypted by the caller
|
||||
|
||||
# Check if it's a text message
|
||||
try:
|
||||
message = plaintext.decode('utf-8')
|
||||
self._handle_control_message(message)
|
||||
except UnicodeDecodeError:
|
||||
# Binary data
|
||||
self.data_received.emit(plaintext, self.client_id)
|
||||
|
||||
def _handle_control_message(self, message):
|
||||
"""Handle control messages."""
|
||||
if message == "RINGING":
|
||||
self.state_changed.emit("RINGING", "", self.client_id)
|
||||
elif message == "IN_CALL":
|
||||
self.state_changed.emit("IN_CALL", "", self.client_id)
|
||||
elif message == "CALL_END":
|
||||
self.state_changed.emit("CALL_END", "", self.client_id)
|
||||
|
||||
def _handle_voice_start(self, voice_start_msg):
|
||||
"""Handle voice session start."""
|
||||
if voice_start_msg:
|
||||
# Initialize voice protocol with negotiated parameters
|
||||
self.protocol.voice_session_active = True
|
||||
self.protocol.voice_session_id = voice_start_msg.session_id
|
||||
|
||||
# Send acknowledgment
|
||||
self.protocol.send_voice_ack(voice_start_msg.session_id)
|
||||
|
||||
# Initialize voice codec
|
||||
self._initialize_voice_protocol(voice_start_msg.codec_mode)
|
||||
self.voice_active = True
|
||||
|
||||
def _initialize_voice_protocol(self, codec_mode=Codec2Mode.MODE_1200):
|
||||
"""Initialize voice protocol with codec and encryption."""
|
||||
if self.protocol.hkdf_key:
|
||||
self.voice_protocol = VoiceProtocol(
|
||||
shared_key=bytes.fromhex(self.protocol.hkdf_key),
|
||||
codec_mode=codec_mode,
|
||||
cipher_type=self.protocol.cipher_type
|
||||
)
|
||||
print(f"Client {self.client_id} initialized voice protocol")
|
||||
|
||||
def initiate_call(self):
|
||||
"""Initiate a call to the peer."""
|
||||
if not self.peer_port:
|
||||
print(f"Client {self.client_id}: No peer port set")
|
||||
return False
|
||||
|
||||
# Connect to peer
|
||||
self.protocol.connect_to_peer(self.peer_port)
|
||||
self.state_changed.emit("CALLING", "", self.client_id)
|
||||
|
||||
# Start key exchange
|
||||
self.protocol.generate_ephemeral_keys()
|
||||
self.protocol.send_ping_request(cipher_type=1) # Request ChaCha20
|
||||
|
||||
return True
|
||||
|
||||
def answer_call(self):
|
||||
"""Answer an incoming call."""
|
||||
self.state_changed.emit("IN_CALL", "", self.client_id)
|
||||
|
||||
# Enable auto-responder for handling protocol flow
|
||||
self.protocol.auto_responder = True
|
||||
|
||||
# If we already have a ping request, respond to it
|
||||
for i, msg in enumerate(self.protocol.inbound_messages):
|
||||
if msg.get('type') == 'PING_REQUEST' and not self.protocol.state['ping_sent']:
|
||||
self.protocol.respond_to_ping(i, 1) # Accept with ChaCha20
|
||||
break
|
||||
|
||||
def end_call(self):
|
||||
"""End the current call."""
|
||||
if self.voice_active:
|
||||
self.protocol.end_voice_call()
|
||||
|
||||
self.voice_active = False
|
||||
self.handshake_complete = False
|
||||
self.state_changed.emit("CALL_END", "", self.client_id)
|
||||
|
||||
# Close connections
|
||||
for conn in self.protocol.connections:
|
||||
conn.close()
|
||||
self.protocol.connections.clear()
|
||||
|
||||
def send_audio(self, audio_samples):
|
||||
"""Send audio samples through the voice protocol."""
|
||||
if self.voice_active and self.voice_protocol and self.gsm_socket:
|
||||
# Encode and encrypt audio
|
||||
fsk_audio = self.voice_protocol.encode_and_encrypt(audio_samples)
|
||||
if fsk_audio and self.gsm_socket:
|
||||
try:
|
||||
# Send FSK-modulated audio through GSM simulator
|
||||
self.gsm_socket.send(fsk_audio)
|
||||
except Exception as e:
|
||||
print(f"Client {self.client_id} failed to send audio: {e}")
|
||||
|
||||
def send_message(self, message):
|
||||
"""Send an encrypted message."""
|
||||
if self.handshake_complete:
|
||||
if isinstance(message, str):
|
||||
message = message.encode('utf-8')
|
||||
self.protocol.send_encrypted_message(message)
|
||||
|
||||
def start_voice_session(self):
|
||||
"""Start a voice session."""
|
||||
if self.handshake_complete and not self.voice_active:
|
||||
self._initialize_voice_protocol()
|
||||
self.protocol.start_voice_call(codec_mode=5) # 1200 bps mode
|
||||
|
||||
def get_identity_key(self):
|
||||
"""Get this client's identity public key."""
|
||||
if hasattr(self.protocol, 'identity_pubkey') and self.protocol.identity_pubkey:
|
||||
return self.protocol.identity_pubkey.hex()
|
||||
return "test_identity_key_" + str(self.client_id)
|
||||
|
||||
def get_local_port(self):
|
||||
"""Get the local listening port."""
|
||||
if hasattr(self.protocol, 'local_port'):
|
||||
return self.protocol.local_port
|
||||
return 12345 + self.client_id
|
||||
|
||||
def stop(self):
|
||||
"""Stop the client."""
|
||||
self.running = False
|
||||
|
||||
# End voice session if active
|
||||
if self.voice_active:
|
||||
self.protocol.end_voice_call()
|
||||
|
||||
# Stop protocol server listener
|
||||
if hasattr(self.protocol, 'server_listener') and self.protocol.server_listener:
|
||||
self.protocol.server_listener.stop()
|
||||
|
||||
# Close GSM socket
|
||||
if self.gsm_socket:
|
||||
try:
|
||||
self.gsm_socket.close()
|
||||
text_msg = plaintext.decode('utf-8').strip()
|
||||
if text_msg == "HANDSHAKE_DONE":
|
||||
self.debug(f"Received encrypted HANDSHAKE_DONE")
|
||||
self.state_changed.emit("HANDSHAKE_DONE", "HANDSHAKE_DONE", self.client_id)
|
||||
return
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Otherwise handle as protocol message
|
||||
self._handle_protocol_message(plaintext)
|
||||
except Exception as e:
|
||||
# Suppress common decryption errors
|
||||
pass
|
||||
|
||||
def _handle_protocol_message(self, plaintext):
|
||||
"""Handle decrypted protocol messages"""
|
||||
if len(plaintext) < 1:
|
||||
return
|
||||
|
||||
msg_type = plaintext[0]
|
||||
msg_data = plaintext[1:]
|
||||
|
||||
if msg_type == 0x10: # Voice start
|
||||
self.debug("Received VOICE_START message")
|
||||
self._handle_voice_start(msg_data)
|
||||
elif msg_type == 0x11: # Voice data
|
||||
self._handle_voice_data(msg_data)
|
||||
elif msg_type == 0x12: # Voice end
|
||||
self.debug("Received VOICE_END message")
|
||||
self._handle_voice_end(msg_data)
|
||||
elif msg_type == 0x20: # Noise handshake
|
||||
self.debug("Received NOISE_HS message")
|
||||
self._handle_noise_handshake(msg_data)
|
||||
else:
|
||||
self.debug(f"Received unknown protocol message type: 0x{msg_type:02x}")
|
||||
# Don't emit control messages to data_received - that's only for audio
|
||||
# Control messages should be handled via state_changed signal
|
||||
|
||||
def _handle_voice_start(self, data):
|
||||
"""Handle voice session start"""
|
||||
self.debug("Voice session started by peer")
|
||||
self.voice_active = True
|
||||
self.voice_frame_counter = 0
|
||||
self.state_changed.emit("VOICE_START", "", self.client_id)
|
||||
|
||||
def _handle_voice_data(self, data):
|
||||
"""Handle voice frame (already decrypted by Noise)"""
|
||||
if len(data) < 4:
|
||||
return
|
||||
|
||||
try:
|
||||
# Data is float array packed as bytes
|
||||
# Unpack the float array
|
||||
num_floats = len(data) // 4
|
||||
modulated_signal = struct.unpack(f'{num_floats}f', data)
|
||||
|
||||
# Demodulate FSK
|
||||
demodulated_data, confidence = self.modem.demodulate(modulated_signal)
|
||||
|
||||
if confidence > 0.5: # Only decode if confidence is good
|
||||
# Create Codec2Frame from demodulated data
|
||||
from voice_codec import Codec2Frame, Codec2Mode
|
||||
frame = Codec2Frame(
|
||||
mode=Codec2Mode.MODE_1200,
|
||||
bits=demodulated_data,
|
||||
timestamp=time.time(),
|
||||
frame_number=self.voice_frame_counter
|
||||
)
|
||||
|
||||
# Decode with Codec2
|
||||
pcm_samples = self.codec.decode(frame)
|
||||
|
||||
if self.voice_frame_counter == 0:
|
||||
self.debug(f"First voice frame demodulated with confidence {confidence:.2f}")
|
||||
|
||||
# Send PCM to UI for playback
|
||||
if pcm_samples is not None and len(pcm_samples) > 0:
|
||||
# Only log details for first frame and every 25th frame
|
||||
if self.voice_frame_counter == 0 or self.voice_frame_counter % 25 == 0:
|
||||
self.debug(f"Decoded PCM samples: type={type(pcm_samples)}, len={len(pcm_samples)}")
|
||||
|
||||
# Convert to bytes if needed
|
||||
if hasattr(pcm_samples, 'tobytes'):
|
||||
pcm_bytes = pcm_samples.tobytes()
|
||||
elif isinstance(pcm_samples, (list, array.array)):
# Convert list/array to bytes (array is imported at module level)
if isinstance(pcm_samples, list):
|
||||
pcm_array = array.array('h', pcm_samples)
|
||||
pcm_bytes = pcm_array.tobytes()
|
||||
else:
|
||||
pcm_bytes = pcm_samples.tobytes()
|
||||
else:
|
||||
pcm_bytes = bytes(pcm_samples)
|
||||
|
||||
if self.voice_frame_counter == 0:
|
||||
self.debug(f"Emitting first PCM frame: {len(pcm_bytes)} bytes")
|
||||
|
||||
self.data_received.emit(pcm_bytes, self.client_id)
|
||||
self.voice_frame_counter += 1
|
||||
# Log frame reception periodically
|
||||
if self.voice_frame_counter == 1 or self.voice_frame_counter % 25 == 0:
|
||||
self.debug(f"Received voice data frame #{self.voice_frame_counter}")
|
||||
else:
|
||||
self.debug(f"Codec decode returned None or empty")
|
||||
else:
|
||||
if self.voice_frame_counter % 10 == 0:
|
||||
self.debug(f"Low confidence demodulation: {confidence:.2f}")
|
||||
|
||||
except Exception as e:
|
||||
self.debug(f"Voice decode error: {e}")
|
||||
|
||||
def _handle_voice_end(self, data):
|
||||
"""Handle voice session end"""
|
||||
self.debug("Voice session ended by peer")
|
||||
self.voice_active = False
|
||||
self.state_changed.emit("VOICE_END", "", self.client_id)
|
||||
|
||||
def _handle_noise_handshake(self, data):
|
||||
"""Handle Noise handshake message"""
|
||||
if not self.noise_wrapper:
|
||||
self.debug("Received handshake message but no wrapper initialized")
|
||||
return
|
||||
|
||||
try:
|
||||
# Process the handshake message
|
||||
self.noise_wrapper.process_handshake_message(data)
|
||||
|
||||
# Check if we need to send a response
|
||||
response = self.noise_wrapper.get_next_handshake_message()
|
||||
if response:
|
||||
self.send(b'\x20' + response)
|
||||
|
||||
# Check if handshake is complete
|
||||
if self.noise_wrapper.handshake_complete and not self.handshake_complete:
|
||||
self.debug("Noise wrapper handshake complete, calling complete_handshake()")
|
||||
self.complete_handshake()
|
||||
|
||||
except Exception as e:
|
||||
self.debug(f"Handshake processing error: {e}")
|
||||
self.state_changed.emit("CALL_END", "", self.client_id)
|
||||
|
||||
def _process_voice_data(self):
|
||||
"""Process outgoing voice data"""
|
||||
# This would be called when we have voice input to send
|
||||
# For now, this is a placeholder
|
||||
pass
|
||||
|
||||
def send_voice_frame(self, pcm_samples):
|
||||
"""Send a voice frame through the protocol"""
|
||||
if not self.handshake_complete:
|
||||
self.debug("Cannot send voice - handshake not complete")
|
||||
return
|
||||
if not self.voice_active:
|
||||
self.debug("Cannot send voice - voice session not active")
|
||||
return
|
||||
|
||||
try:
|
||||
# Encode with Codec2
|
||||
codec_frame = self.codec.encode(pcm_samples)
|
||||
if not codec_frame:
|
||||
return
|
||||
|
||||
if self.voice_frame_counter % 25 == 0: # Log every 25 frames (1 second)
|
||||
self.debug(f"Encoding voice frame #{self.voice_frame_counter}: {len(pcm_samples)} bytes PCM → {len(codec_frame.bits)} bytes compressed")
|
||||
|
||||
# Modulate with FSK
|
||||
modulated_data = self.modem.modulate(codec_frame.bits)
|
||||
|
||||
# Convert modulated float array to bytes
|
||||
modulated_bytes = struct.pack(f'{len(modulated_data)}f', *modulated_data)
|
||||
|
||||
if self.voice_frame_counter % 25 == 0:
|
||||
self.debug(f"Voice frame size: {len(modulated_bytes)} bytes")
|
||||
|
||||
# Build voice data message (no ChaCha20, will be encrypted by Noise)
|
||||
msg = bytes([0x11]) + modulated_bytes
|
||||
|
||||
# Send through Noise encrypted channel
|
||||
self.send(msg)
|
||||
|
||||
self.voice_frame_counter += 1
|
||||
|
||||
except Exception as e:
|
||||
self.debug(f"Voice encode error: {e}")
|
||||
|
||||
def send(self, message):
|
||||
"""Send data through Noise encrypted channel with proper framing"""
|
||||
if self.sock and self.running:
|
||||
try:
|
||||
# Handshake messages (0x20) bypass Noise encryption
|
||||
if isinstance(message, bytes) and len(message) > 0 and message[0] == 0x20:
|
||||
# Add length prefix for framing
|
||||
framed = struct.pack('>I', len(message)) + message
|
||||
self.sock.send(framed)
|
||||
return
|
||||
|
||||
if self.handshake_complete and self.noise_wrapper:
|
||||
# Encrypt everything with Noise after handshake
|
||||
# Convert string to bytes if needed
|
||||
if isinstance(message, str):
|
||||
message = message.encode('utf-8')
|
||||
encrypted = self.noise_wrapper.encrypt(message)
|
||||
# Add length prefix for framing
|
||||
framed = struct.pack('>I', len(encrypted)) + encrypted
|
||||
self.sock.send(framed)
|
||||
else:
|
||||
# During handshake, send raw with framing
|
||||
if isinstance(message, str):
|
||||
data = message.encode('utf-8')
|
||||
framed = struct.pack('>I', len(data)) + data
|
||||
self.sock.send(framed)
|
||||
self.debug(f"Sent control message: {message}")
|
||||
else:
|
||||
framed = struct.pack('>I', len(message)) + message
|
||||
self.sock.send(framed)
|
||||
except socket.error as e:
|
||||
self.debug(f"Send error: {e}")
|
||||
self.state_changed.emit("CALL_END", "", self.client_id)
|
||||
|
||||
def stop(self):
|
||||
self.running = False
|
||||
self.voice_active = False
|
||||
if self.sock:
|
||||
try:
|
||||
self.sock.close()
|
||||
except Exception as e:
|
||||
self.debug(f"Error closing socket in stop: {e}")
|
||||
self.sock = None
|
||||
self.quit()
|
||||
self.wait(1000)
|
||||
|
||||
def start_handshake(self, initiator, keypair, peer_pubkey):
|
||||
"""Start Noise XK handshake"""
|
||||
self.debug(f"Starting Noise XK handshake as {'initiator' if initiator else 'responder'}")
|
||||
self.debug(f"Our public key: {keypair.public.data.hex()[:32]}...")
|
||||
self.debug(f"Peer public key: {peer_pubkey.data.hex()[:32]}...")
|
||||
|
||||
# Create noise wrapper
|
||||
self.noise_wrapper = NoiseXKWrapper(keypair, peer_pubkey, self.debug)
|
||||
self.noise_wrapper.start_handshake(initiator)
|
||||
self.handshake_initiated = True
|
||||
|
||||
# Send first handshake message if initiator
|
||||
if initiator:
|
||||
msg = self.noise_wrapper.get_next_handshake_message()
|
||||
if msg:
|
||||
# Send as NOISE_HS message type
|
||||
self.send(b'\x20' + msg) # 0x20 = Noise handshake message
|
||||
|
||||
def complete_handshake(self):
|
||||
"""Called when Noise handshake completes"""
|
||||
self.handshake_complete = True
|
||||
|
||||
self.debug("Noise XK handshake complete!")
|
||||
self.debug("Secure channel established")
|
||||
|
||||
# Send HANDSHAKE_DONE message
|
||||
self.send("HANDSHAKE_DONE")
|
||||
|
||||
self.state_changed.emit("HANDSHAKE_COMPLETE", "", self.client_id)
|
||||
|
||||
def start_voice_session(self):
|
||||
"""Start a voice session"""
|
||||
if not self.handshake_complete:
|
||||
self.debug("Cannot start voice - handshake not complete")
|
||||
return
|
||||
|
||||
self.voice_active = True
|
||||
self.voice_frame_counter = 0
|
||||
|
||||
# Send voice start message
|
||||
msg = bytes([0x10]) # Voice start message type
|
||||
self.send(msg)
|
||||
|
||||
self.debug("Voice session started")
|
||||
self.state_changed.emit("VOICE_START", "", self.client_id)
|
||||
|
||||
def end_voice_session(self):
|
||||
"""End a voice session"""
|
||||
if not self.voice_active:
|
||||
return
|
||||
|
||||
self.voice_active = False
|
||||
|
||||
# Send voice end message
|
||||
msg = bytes([0x12]) # Voice end message type
|
||||
self.send(msg)
|
||||
|
||||
self.debug("Voice session ended")
|
||||
self.state_changed.emit("VOICE_END", "", self.client_id)
|
@ -1,265 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Simple integrated UI that properly uses the Protocol.
|
||||
This replaces the complex integration attempt with a cleaner approach.
|
||||
"""
|
||||
|
||||
import sys
|
||||
import os
|
||||
from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget, QVBoxLayout, QPushButton, QLabel
|
||||
from PyQt5.QtCore import Qt, QThread, pyqtSignal, QTimer
|
||||
|
||||
# Add Protocol directory to path
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'Protocol'))
|
||||
|
||||
from protocol import IcingProtocol
|
||||
|
||||
|
||||
class ProtocolWorker(QThread):
|
||||
"""Worker thread for protocol operations."""
|
||||
|
||||
message_received = pyqtSignal(str)
|
||||
state_changed = pyqtSignal(str)
|
||||
|
||||
def __init__(self, protocol):
|
||||
super().__init__()
|
||||
self.protocol = protocol
|
||||
self.running = True
|
||||
self.processed_count = 0
|
||||
|
||||
def run(self):
|
||||
"""Monitor protocol for new messages."""
|
||||
while self.running:
|
||||
try:
|
||||
# Check for new messages
|
||||
if hasattr(self.protocol, 'inbound_messages'):
|
||||
new_messages = self.protocol.inbound_messages[self.processed_count:]
|
||||
for msg in new_messages:
|
||||
self.processed_count += 1
|
||||
msg_type = msg.get('type', 'UNKNOWN')
|
||||
self.message_received.emit(f"Received: {msg_type}")
|
||||
|
||||
# Handle specific message types
|
||||
if msg_type == 'PING_REQUEST' and self.protocol.auto_responder:
|
||||
self.state_changed.emit("Responding to PING...")
|
||||
elif msg_type == 'PING_RESPONSE':
|
||||
self.state_changed.emit("PING response received")
|
||||
elif msg_type == 'HANDSHAKE':
|
||||
self.state_changed.emit("Handshake message received")
|
||||
elif msg_type == 'ENCRYPTED_MESSAGE':
|
||||
self.state_changed.emit("Encrypted message received")
|
||||
|
||||
# Check protocol state
|
||||
if self.protocol.state.get('key_exchange_complete'):
|
||||
self.state_changed.emit("Key exchange complete!")
|
||||
|
||||
self.msleep(100)
|
||||
except Exception as e:
|
||||
print(f"Worker error: {e}")
|
||||
self.msleep(100)
|
||||
|
||||
def stop(self):
|
||||
self.running = False
|
||||
self.quit()
|
||||
self.wait()
|
||||
|
||||
|
||||
class SimpleProtocolUI(QMainWindow):
|
||||
"""Simple UI to demonstrate protocol integration."""
|
||||
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
self.setWindowTitle("Simple Protocol Integration")
|
||||
self.setGeometry(100, 100, 400, 500)
|
||||
|
||||
# Create protocol instances
|
||||
self.protocol1 = IcingProtocol()
|
||||
self.protocol2 = IcingProtocol()
|
||||
|
||||
# Exchange identity keys
|
||||
self.protocol1.set_peer_identity(self.protocol2.identity_pubkey.hex())
|
||||
self.protocol2.set_peer_identity(self.protocol1.identity_pubkey.hex())
|
||||
|
||||
# Enable auto-responder on protocol 2
|
||||
self.protocol2.auto_responder = True
|
||||
|
||||
# Create UI
|
||||
central_widget = QWidget()
|
||||
self.setCentralWidget(central_widget)
|
||||
layout = QVBoxLayout()
|
||||
central_widget.setLayout(layout)
|
||||
|
||||
# Info labels
|
||||
self.info_label = QLabel("Protocol Status")
|
||||
self.info_label.setAlignment(Qt.AlignCenter)
|
||||
layout.addWidget(self.info_label)
|
||||
|
||||
self.port_label = QLabel(f"Protocol 1: port {self.protocol1.local_port}\n"
|
||||
f"Protocol 2: port {self.protocol2.local_port}")
|
||||
layout.addWidget(self.port_label)
|
||||
|
||||
# Status display
|
||||
self.status_label = QLabel("Ready")
|
||||
self.status_label.setStyleSheet("QLabel { background-color: #f0f0f0; padding: 10px; }")
|
||||
layout.addWidget(self.status_label)
|
||||
|
||||
# Message log
|
||||
self.log_label = QLabel("Message Log:\n")
|
||||
self.log_label.setAlignment(Qt.AlignTop)
|
||||
self.log_label.setStyleSheet("QLabel { background-color: #ffffff; padding: 10px; }")
|
||||
self.log_label.setMinimumHeight(200)
|
||||
layout.addWidget(self.log_label)
|
||||
|
||||
# Buttons
|
||||
self.connect_btn = QPushButton("1. Connect")
|
||||
self.connect_btn.clicked.connect(self.do_connect)
|
||||
layout.addWidget(self.connect_btn)
|
||||
|
||||
self.ping_btn = QPushButton("2. Send PING")
|
||||
self.ping_btn.clicked.connect(self.do_ping)
|
||||
self.ping_btn.setEnabled(False)
|
||||
layout.addWidget(self.ping_btn)
|
||||
|
||||
self.handshake_btn = QPushButton("3. Send Handshake")
|
||||
self.handshake_btn.clicked.connect(self.do_handshake)
|
||||
self.handshake_btn.setEnabled(False)
|
||||
layout.addWidget(self.handshake_btn)
|
||||
|
||||
self.derive_btn = QPushButton("4. Derive Keys")
|
||||
self.derive_btn.clicked.connect(self.do_derive)
|
||||
self.derive_btn.setEnabled(False)
|
||||
layout.addWidget(self.derive_btn)
|
||||
|
||||
self.encrypt_btn = QPushButton("5. Send Encrypted Message")
|
||||
self.encrypt_btn.clicked.connect(self.do_encrypt)
|
||||
self.encrypt_btn.setEnabled(False)
|
||||
layout.addWidget(self.encrypt_btn)
|
||||
|
||||
# Create workers
|
||||
self.worker1 = ProtocolWorker(self.protocol1)
|
||||
self.worker1.message_received.connect(lambda msg: self.log_message(f"P1: {msg}"))
|
||||
self.worker1.state_changed.connect(lambda state: self.update_status(f"P1: {state}"))
|
||||
self.worker1.start()
|
||||
|
||||
self.worker2 = ProtocolWorker(self.protocol2)
|
||||
self.worker2.message_received.connect(lambda msg: self.log_message(f"P2: {msg}"))
|
||||
self.worker2.state_changed.connect(lambda state: self.update_status(f"P2: {state}"))
|
||||
self.worker2.start()
|
||||
|
||||
# Wait timer for protocol startup
|
||||
QTimer.singleShot(1000, self.on_ready)
|
||||
|
||||
def on_ready(self):
|
||||
"""Called when protocols are ready."""
|
||||
self.status_label.setText("Protocols ready. Click Connect to start.")
|
||||
|
||||
def log_message(self, msg):
|
||||
"""Add message to log."""
|
||||
current = self.log_label.text()
|
||||
self.log_label.setText(current + msg + "\n")
|
||||
|
||||
def update_status(self, status):
|
||||
"""Update status display."""
|
||||
self.status_label.setText(status)
|
||||
|
||||
def do_connect(self):
|
||||
"""Connect protocol 1 to protocol 2."""
|
||||
try:
|
||||
self.protocol1.connect_to_peer(self.protocol2.local_port)
|
||||
self.log_message("Connected to peer")
|
||||
self.connect_btn.setEnabled(False)
|
||||
self.ping_btn.setEnabled(True)
|
||||
|
||||
# Generate ephemeral keys
|
||||
self.protocol1.generate_ephemeral_keys()
|
||||
self.log_message("Generated ephemeral keys")
|
||||
except Exception as e:
|
||||
self.log_message(f"Connection error: {e}")
|
||||
|
||||
def do_ping(self):
|
||||
"""Send PING request."""
|
||||
try:
|
||||
self.protocol1.send_ping_request(cipher_type=1) # ChaCha20
|
||||
self.log_message("Sent PING request")
|
||||
self.ping_btn.setEnabled(False)
|
||||
self.handshake_btn.setEnabled(True)
|
||||
except Exception as e:
|
||||
self.log_message(f"PING error: {e}")
|
||||
|
||||
def do_handshake(self):
|
||||
"""Send handshake."""
|
||||
try:
|
||||
self.protocol1.send_handshake()
|
||||
self.log_message("Sent handshake")
|
||||
self.handshake_btn.setEnabled(False)
|
||||
|
||||
# Enable derive after a delay (to allow response)
|
||||
QTimer.singleShot(500, lambda: self.derive_btn.setEnabled(True))
|
||||
except Exception as e:
|
||||
self.log_message(f"Handshake error: {e}")
|
||||
|
||||
def do_derive(self):
|
||||
"""Derive keys."""
|
||||
try:
|
||||
self.protocol1.derive_hkdf()
|
||||
self.log_message("Derived keys")
|
||||
self.derive_btn.setEnabled(False)
|
||||
self.encrypt_btn.setEnabled(True)
|
||||
|
||||
# Check if protocol 2 also completed
|
||||
if self.protocol2.state.get('key_exchange_complete'):
|
||||
self.log_message("Both protocols have completed key exchange!")
|
||||
except Exception as e:
|
||||
self.log_message(f"Derive error: {e}")
|
||||
|
||||
def do_encrypt(self):
|
||||
"""Send encrypted message."""
|
||||
try:
|
||||
test_msg = "Hello, encrypted world!"
|
||||
self.protocol1.send_encrypted_message(test_msg)
|
||||
self.log_message(f"Sent encrypted: '{test_msg}'")
|
||||
|
||||
# Check if protocol 2 can decrypt
|
||||
QTimer.singleShot(100, self.check_decryption)
|
||||
except Exception as e:
|
||||
self.log_message(f"Encryption error: {e}")
|
||||
|
||||
def check_decryption(self):
|
||||
"""Check if protocol 2 received and can decrypt."""
|
||||
for i, msg in enumerate(self.protocol2.inbound_messages):
|
||||
if msg.get('type') == 'ENCRYPTED_MESSAGE':
|
||||
try:
|
||||
decrypted = self.protocol2.decrypt_received_message(i)
|
||||
self.log_message(f"P2 decrypted: '{decrypted}'")
|
||||
self.log_message("SUCCESS! Full protocol flow complete.")
|
||||
except Exception as e:
|
||||
self.log_message(f"Decryption error: {e}")
|
||||
break
|
||||
|
||||
def closeEvent(self, event):
|
||||
"""Clean up on close."""
|
||||
self.worker1.stop()
|
||||
self.worker2.stop()
|
||||
|
||||
if self.protocol1.server_listener:
|
||||
self.protocol1.server_listener.stop()
|
||||
if self.protocol2.server_listener:
|
||||
self.protocol2.server_listener.stop()
|
||||
|
||||
for conn in self.protocol1.connections:
|
||||
conn.close()
|
||||
for conn in self.protocol2.connections:
|
||||
conn.close()
|
||||
|
||||
event.accept()
|
||||
|
||||
|
||||
def main():
|
||||
app = QApplication(sys.argv)
|
||||
window = SimpleProtocolUI()
|
||||
window.show()
|
||||
sys.exit(app.exec_())
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
105
protocol_prototype/DryBox/UI_FEATURES_GUIDE.md
Normal file
@ -0,0 +1,105 @@
# DryBox UI Features Guide

## UI Improvements
The UI has been updated with responsive layouts that scale better:
- Phone displays now use flexible sizing (min/max constraints)
- Waveform widgets adapt to available space
- Buttons have flexible widths that scale with window size
- Better margins and padding for an improved visual appearance

## Audio Playback Feature

The DryBox UI includes real-time audio playback, letting you hear the decoded audio as it is received.

### How to Use Playback

#### Manual Control
1. **During a Call**: Once a secure voice session is established, click the "🔊 Playback" button under either phone
2. **Button States**:
   - Gray (unchecked): Playback disabled
   - Green (checked): Playback active
3. **Toggle Anytime**: You can enable or disable playback at any time during a call

#### Keyboard Shortcuts
- `Ctrl+1`: Toggle playback for Phone 1
- `Ctrl+2`: Toggle playback for Phone 2

### Using Playback with the Automatic Test

The automatic test demonstrates the complete protocol flow. To use it with playback:

1. **Start the Test**: Click "🧪 Run Automatic Test" or press `Space`
2. **Enable Playback Early**:
   - As soon as the test starts, enable playback on Phone 2 (Ctrl+2)
   - This ensures you hear audio as soon as the secure channel is established
3. **What You'll Hear**:
   - Once the handshake completes (steps 4-5), Phone 1 starts transmitting test audio
   - Phone 2 plays the received, decoded audio through your speakers
   - The audio goes through: Codec2 encoding → 4FSK modulation → Noise XK encryption → transmission → decryption → demodulation → Codec2 decoding
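That audio path can be sketched end to end. The following is an illustrative stand-in, not DryBox's actual code: `codec2_encode`, `fsk_modulate`, and `noise_encrypt` are toy placeholders for the real Codec2, 4FSK, and Noise XK stages, but the framing (a 1-byte `0x11` voice-data type followed by a float-packed payload) follows the description above.

```python
import struct

# Toy stand-ins for the real stages; names and behaviour are illustrative only.
def codec2_encode(pcm: bytes) -> bytes:
    # Codec2 MODE_1200 compresses a 40 ms frame to ~6 bytes; fake it here.
    return pcm[:6]

def fsk_modulate(bits: bytes) -> list:
    # The real 4FSK modem emits a float waveform; emit one float per byte here.
    return [1.0 if b & 1 else -1.0 for b in bits]

def noise_encrypt(plaintext: bytes) -> bytes:
    # Placeholder for the Noise XK transport cipher (XOR is NOT secure).
    return bytes(b ^ 0x5A for b in plaintext)

def build_voice_message(pcm: bytes) -> bytes:
    compressed = codec2_encode(pcm)                      # Codec2 encoding
    samples = fsk_modulate(compressed)                   # 4FSK modulation
    payload = struct.pack(f'{len(samples)}f', *samples)  # float array -> bytes
    return noise_encrypt(bytes([0x11]) + payload)        # type 0x11, then encrypt

frame = build_voice_message(b'\x00\x01' * 160)  # one fake PCM frame
```

The receive side simply runs the same stages in reverse: decrypt, strip the type byte, unpack the floats, demodulate, then decode.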
### Audio Recording Feature

You can also record received audio for later analysis:

1. **Start Recording**: Click the "⏺ Record" button (or press Alt+1/Alt+2)
2. **Stop Recording**: Click the button again
3. **Files Saved**: Recordings are saved to the `wav/` directory with timestamps

### Audio Processing Options

Access advanced audio features via the "Audio Options" button (Ctrl+A):
- **Export Buffer**: Save the current audio buffer to a file
- **Clear Buffer**: Clear accumulated audio data
- **Processing Options**:
  - Normalize Audio
  - Apply Gain (adjustable dB)
  - Noise Gate
  - Low/High Pass Filters
  - Remove Silence
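As a rough illustration of two of these options, normalization and gain on 16-bit signed PCM can look like the sketch below. The function names are hypothetical, not the app's actual DSP code.

```python
# Illustrative sketch only; not DryBox's actual audio-processing API.

def normalize(samples, target_peak=32767):
    """Scale samples so the loudest one reaches target_peak."""
    peak = max((abs(s) for s in samples), default=0)
    if peak == 0:
        return list(samples)
    scale = target_peak / peak
    return [int(max(-32768, min(32767, round(s * scale)))) for s in samples]

def apply_gain(samples, gain_db):
    """Apply a gain in decibels (20 dB per factor of 10 in amplitude)."""
    scale = 10 ** (gain_db / 20.0)
    return [int(max(-32768, min(32767, round(s * scale)))) for s in samples]
```

Note the clamp to the 16-bit range: applying large gains without it would wrap around and distort badly.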
|
||||
### Requirements
|
||||
|
||||
For playback to work, you need PyAudio installed:
|
||||
```bash
|
||||
# Fedora/RHEL
|
||||
sudo dnf install python3-devel portaudio-devel
|
||||
pip install pyaudio
|
||||
|
||||
# Ubuntu/Debian
|
||||
sudo apt-get install python3-dev portaudio19-dev
|
||||
pip install pyaudio
|
||||
```
|
||||
|
||||
If PyAudio isn't installed, recording will still work but playback will be disabled.
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
1. **No Sound**:
|
||||
- Check PyAudio is installed
|
||||
- Ensure system volume is up
|
||||
- Verify audio device is working
|
||||
|
||||
2. **Choppy Audio**:
|
||||
- Normal for low-bitrate codec (1200bps)
|
||||
- Represents actual protocol performance
|
||||
|
||||
3. **Delayed Start**:
|
||||
- Audio only flows after secure handshake
|
||||
- Wait for "🔒 Secure Channel Established" status
|
||||
|
||||
### Test Sequence Overview
|
||||
|
||||
The automatic test goes through these steps:
|
||||
1. Initial state check
|
||||
2. Phone 1 calls Phone 2
|
||||
3. Phone 2 answers
|
||||
4. Noise XK handshake begins
|
||||
5. Handshake completes, secure channel established
|
||||
6. Voice session starts (Codec2 + 4FSK)
|
||||
7. Audio transmission begins
|
||||
8. Protocol details logged
|
||||
9. Transmission continues for observation
|
||||
10. Final statistics
|
||||
11. Call ends, cleanup
|
||||
|
||||
Enable playback on the receiving phone to hear the transmitted audio in real-time!
@@ -1,39 +0,0 @@
# UI Fixes Summary

## Issues Fixed

### 1. Updated UI Text
- Removed "ChaCha20" from window title and protocol info
- Now shows: "Noise XK + Codec2 + 4FSK"

### 2. Waveform Display Fixed
- Improved `set_data()` method to properly handle PCM audio data
- Extracts 16-bit signed samples and normalizes amplitude
- Maintains rolling buffer of 50 samples for smooth visualization
- Both sent and received waveforms now update correctly

### 3. Layout Improvements
- Reduced phone display frame size: 250x350 → 220x300
- Fixed waveform widget size: 220x60
- Reduced spacing between phones: 50 → 30
- Shortened waveform labels: "Phone X Received Audio" → "Phone X Received"
- Set proper min/max heights for waveform widgets

### 4. Protocol Message Handling
- Fixed issue where large voice frames were misinterpreted as handshake messages
- Added check to only process 0x20 messages as handshake before handshake completes
- Prevents "pop from empty list" errors after handshake

## Visual Improvements
- More compact layout fits better on screen
- Waveforms show actual audio activity
- Clear visual feedback for voice transmission
- No overlapping UI elements

## Test Results
- ✓ Waveforms updating correctly
- ✓ Both sent and received audio displayed
- ✓ No layout issues
- ✓ Protocol continues working properly

The UI is now properly displaying the integrated protocol with working waveform visualization.
@@ -1,86 +0,0 @@
# UI Improvements Summary

## Fixed Issues

### 1. **AttributeError with symbol_rate**
- Changed `symbol_rate` to `baud_rate` (correct attribute name)
- FSKModem uses `baud_rate`, not `symbol_rate`

### 2. **PhoneState Enum Error**
- Converted PhoneState from class with integers to proper Enum
- Fixed `.name` attribute errors in auto test
- Added fallback for both enum and integer states

### 3. **Window Resize on Fedora/Wayland**
- Added minimum window size (800x600)
- Created `run_ui.sh` script with proper Wayland support
- Set `QT_QPA_PLATFORM=wayland` environment variable

### 4. **Reduced Console Noise**
- Removed verbose initialization prints from Codec2 and FSKModem
- Converted most print statements to debug() method
- Removed per-chunk data logging
- Only log voice frames every 25 frames (1 second)

### 5. **Audio File Testing**
- Confirmed test uses `wav/input_8k_mono.wav`
- Added debug output to show when audio file is loaded
- Auto test now checks if audio files are loaded

## Debug Console Features

- **Automatic Test Button**: 10-step test sequence
- **Clear Debug Button**: Clear console output
- **Resizable Debug Console**: Using QSplitter
- **Timestamped Messages**: Format `[HH:MM:SS.mmm]`
- **Component Identification**: `[Phone1]`, `[PhoneManager]`, etc.

## What You'll See During Test

1. **Initial Connection**
```
[09:52:01] [PhoneManager] Initialized Phone 1 with public key: 61e2779d...
[09:52:01] [Phone1] Connected to GSM simulator at localhost:12345
```

2. **Call Setup**
```
[09:52:03] [PhoneManager] Phone 1 initiating call to Phone 2
[09:52:03] Phone 1 state change: RINGING
```

3. **Handshake**
```
[09:52:05] [Phone1] Starting Noise XK handshake as initiator
[09:52:05] [Phone1] Noise XK handshake complete!
[09:52:05] [Phone1] Secure channel established
```

4. **Voice Session**
```
[09:52:06] [PhoneManager] Phone 1 loaded test audio file: wav/input_8k_mono.wav
[09:52:06] [Phone1] Voice session started
[09:52:07] [Phone1] Encoding voice frame #0: 640 bytes PCM → 6 bytes compressed
[09:52:08] [PhoneManager] Phone 1 sent 25 voice frames
```

5. **Protocol Details**
```
[09:52:09] Codec mode: MODE_1200
[09:52:09] Frame size: 48 bits
[09:52:09] FSK frequencies: [600, 1200, 1800, 2400]
[09:52:09] Symbol rate: 600 baud
```

## Running the UI

```bash
# With Wayland support
./run_ui.sh

# Or directly
cd UI
QT_QPA_PLATFORM=wayland python3 main.py
```

The UI is now cleaner, more informative, and properly handles all protocol components with extensive debugging capabilities.
@@ -1,61 +0,0 @@
# UI Modifications for Integrated Protocol

## Summary of Changes

The existing DryBox UI has been modified to use the integrated protocol (Noise XK + Codec2 + 4FSK + ChaCha20) instead of creating a new UI.

## Modified Files

### 1. `phone_manager.py`
- **Changed imports**: Now uses `ProtocolPhoneClient` instead of `PhoneClient`
- **Added audio file handling**: Loads and plays `wav/input_8k_mono.wav` for testing
- **Updated `send_audio()`**:
  - Sends audio through the protocol stack (Codec2 → 4FSK → ChaCha20)
  - Handles 40ms frames (320 samples at 8kHz)
  - Updates waveform display
- **Enhanced `start_audio()`**: Starts voice sessions after handshake
- **Added cleanup**: Properly closes audio files and ends voice sessions

### 2. `main.py`
- **Updated window title**: Shows "DryBox - Noise XK + Codec2 + 4FSK + ChaCha20"
- **Added protocol info label**: Displays protocol stack information
- **Enhanced `set_phone_state()`**:
  - Handles new protocol states: `HANDSHAKE_COMPLETE`, `VOICE_START`, `VOICE_END`
  - Shows secure channel status with lock emoji 🔒
  - Shows voice active status with microphone emoji 🎤

### 3. Protocol Integration
- Uses `ProtocolPhoneClient` which includes:
  - Noise XK handshake
  - Codec2 voice compression (1200 bps)
  - 4FSK modulation (600 baud)
  - ChaCha20 encryption for voice frames
  - Automatic voice session management

## How It Works

1. **Startup**: Both phones automatically connect to GSM simulator
2. **Call Setup**: Click "Call" → "Answer" establishes connection
3. **Security**: Automatic Noise XK handshake creates secure channel
4. **Voice**:
   - Audio compressed with Codec2 (1200 bps, 48 bits/frame)
   - Modulated with 4FSK (frequencies: 600, 1200, 1800, 2400 Hz)
   - Encrypted with ChaCha20 (per-frame encryption)
   - Wrapped in Noise XK session encryption
5. **Display**: Real-time waveforms show sent/received audio

## Visual Indicators

- **"🔒 Secure Channel Established"**: Handshake complete
- **"🎤 Voice Active (Encrypted)"**: Voice transmission active
- **Waveforms**: Show audio activity in real-time

## Testing

Simply run:
```bash
cd UI
python3 main.py
```

The integrated protocol is now seamlessly part of the existing DryBox UI!
@@ -1,13 +0,0 @@
simulator/
├── gsm_simulator.py            # gsm_simulator
├── launch_gsm_simulator.sh     # use to start docker and simulator, run in terminal

2 clients connect to gsm_simulator and simulate a call using the Noise protocol

UI/
├── main.py                 # UI setup and event handling
├── phone_manager.py        # Phone state, client init, audio logic
├── phone_client.py         # Socket communication and threading
├── client_state.py         # Client state and command processing
├── session.py              # Noise XK crypto session
├── waveform_widget.py      # Waveform UI component
├── phone_state.py          # State constants
@@ -1,145 +0,0 @@
#!/usr/bin/env python3
"""
Example of proper Protocol integration with handshake flow.
"""

import sys
import os
import time
import threading

# Add directories to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'Protocol'))
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'UI'))

from protocol import IcingProtocol
from protocol_phone_client import ProtocolPhoneClient

def demo_handshake():
    """Demonstrate the proper handshake flow between two protocols."""
    print("\n=== Protocol Handshake Demo ===\n")

    # Create two protocol instances
    protocol1 = IcingProtocol()
    protocol2 = IcingProtocol()

    print(f"Protocol 1 listening on port: {protocol1.local_port}")
    print(f"Protocol 1 identity: {protocol1.identity_pubkey.hex()[:32]}...")
    print(f"Protocol 2 listening on port: {protocol2.local_port}")
    print(f"Protocol 2 identity: {protocol2.identity_pubkey.hex()[:32]}...")
    print()

    # Wait for listeners to start
    time.sleep(1)

    # Exchange identity keys - these are already valid EC public keys
    try:
        protocol1.set_peer_identity(protocol2.identity_pubkey.hex())
        protocol2.set_peer_identity(protocol1.identity_pubkey.hex())
        print("Identity keys exchanged successfully")
    except Exception as e:
        print(f"Error exchanging identity keys: {e}")
        return

    # Enable auto-responder on protocol2
    protocol2.auto_responder = True

    print("\n1. Protocol 1 connects to Protocol 2...")
    try:
        protocol1.connect_to_peer(protocol2.local_port)
        time.sleep(0.5)
    except Exception as e:
        print(f"Connection failed: {e}")
        return

    print("\n2. Protocol 1 generates ephemeral keys...")
    protocol1.generate_ephemeral_keys()

    print("\n3. Protocol 1 sends PING request (requesting ChaCha20)...")
    protocol1.send_ping_request(cipher_type=1)
    time.sleep(0.5)

    print("\n4. Protocol 2 auto-responds with PING response...")
    # Auto-responder handles this automatically
    time.sleep(0.5)

    print("\n5. Protocol 1 sends handshake...")
    protocol1.send_handshake()
    time.sleep(0.5)

    print("\n6. Protocol 2 auto-responds with handshake...")
    # Auto-responder handles this automatically
    time.sleep(0.5)

    print("\n7. Both derive keys...")
    protocol1.derive_hkdf()
    # Protocol 2 auto-derives in auto-responder mode
    time.sleep(0.5)

    print("\n=== Handshake Complete ===")
    print(f"Protocol 1 - Key exchange complete: {protocol1.state['key_exchange_complete']}")
    print(f"Protocol 2 - Key exchange complete: {protocol2.state['key_exchange_complete']}")

    if protocol1.hkdf_key and protocol2.hkdf_key:
        print(f"\nDerived keys match: {protocol1.hkdf_key == protocol2.hkdf_key}")
        print(f"Cipher type: {'ChaCha20-Poly1305' if protocol1.cipher_type == 1 else 'AES-256-GCM'}")

    # Test encrypted messaging
    print("\n8. Testing encrypted message...")
    test_msg = "Hello, encrypted world!"
    protocol1.send_encrypted_message(test_msg)
    time.sleep(0.5)

    # Check if protocol2 received it
    for i, msg in enumerate(protocol2.inbound_messages):
        if msg['type'] == 'ENCRYPTED_MESSAGE':
            decrypted = protocol2.decrypt_received_message(i)
            print(f"Protocol 2 decrypted: {decrypted}")
            break

    # Clean up
    protocol1.server_listener.stop()
    protocol2.server_listener.stop()

    for conn in protocol1.connections:
        conn.close()
    for conn in protocol2.connections:
        conn.close()

def demo_ui_integration():
    """Demonstrate UI integration with proper handshake."""
    print("\n\n=== UI Integration Demo ===\n")

    # This shows how the UI should integrate the protocol
    print("The UI integration flow:")
    print("1. PhoneManager creates ProtocolPhoneClient instances")
    print("2. Identity keys are exchanged via set_peer_identity()")
    print("3. Ports are exchanged via set_peer_port()")
    print("4. When user initiates call:")
    print("   - Initiator calls initiate_call()")
    print("   - This connects to peer and sends PING request")
    print("5. When user answers call:")
    print("   - Responder calls answer_call()")
    print("   - This enables auto-responder and responds to PING")
    print("6. Protocol messages are processed in _process_protocol_messages()")
    print("7. Handshake completes automatically")
    print("8. HANDSHAKE_DONE signal is emitted")
    print("9. Voice session can start with start_voice_session()")
    print("10. Audio is sent via send_audio()")

def main():
    """Run the demos."""
    print("Protocol Integration Example")
    print("=" * 50)

    # Run handshake demo
    demo_handshake()

    # Explain UI integration
    demo_ui_integration()

    print("\n" + "=" * 50)
    print("Demo complete!")

if __name__ == '__main__':
    main()
@@ -1,37 +0,0 @@
#!/usr/bin/env python3
"""
Run the integrated DryBox UI with Protocol (4FSK, ChaCha20, etc.)
"""

import sys
import os

# Add UI directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'UI'))

from PyQt5.QtWidgets import QApplication
from main import PhoneUI

def main():
    print("Starting DryBox with integrated Protocol...")
    print("Features:")
    print("- 4FSK modulation for GSM voice channel compatibility")
    print("- ChaCha20-Poly1305 encryption")
    print("- Noise XK protocol for key exchange")
    print("- Codec2 voice compression (1200 bps)")
    print("")

    app = QApplication(sys.argv)
    window = PhoneUI()
    window.show()

    print("UI started. Use the phone buttons to:")
    print("1. Click Phone 1 to initiate a call")
    print("2. Click Phone 2 to answer when ringing")
    print("3. Audio will be encrypted and transmitted")
    print("")

    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
@@ -1,77 +0,0 @@
#!/usr/bin/env python3
"""Final test with ChaCha20 removed and larger GSM frames"""

import sys
import time
sys.path.append('UI')
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QTimer
from main import PhoneUI

def main():
    app = QApplication(sys.argv)
    window = PhoneUI()
    window.show()

    print("\n=== FINAL PROTOCOL TEST ===")
    print("- ChaCha20 removed (using only Noise XK)")
    print("- GSM frame size increased to 10KB\n")

    # Click auto test after 1 second
    QTimer.singleShot(1000, lambda: window.auto_test_button.click())

    # Final check after 15 seconds
    def final_check():
        phone1 = window.manager.phones[0]
        phone2 = window.manager.phones[1]

        console = window.debug_console.toPlainText()

        # Count important events
        handshake_complete = console.count("handshake complete!")
        voice_started = console.count("Voice session started")
        decrypt_errors = console.count("Decryption error:")
        voice_decode_errors = console.count("Voice decode error:")
        frames_sent = console.count("voice frame #")

        print(f"\nRESULTS:")
        print(f"- Handshakes completed: {handshake_complete}")
        print(f"- Voice sessions started: {voice_started}")
        print(f"- Voice frames sent: {frames_sent}")
        print(f"- Decryption errors: {decrypt_errors}")
        print(f"- Voice decode errors: {voice_decode_errors}")

        print(f"\nFINAL STATE:")
        print(f"- Phone 1: handshake={phone1['client'].handshake_complete}, voice={phone1['client'].voice_active}")
        print(f"- Phone 2: handshake={phone2['client'].handshake_complete}, voice={phone2['client'].voice_active}")
        print(f"- Frames sent: P1={phone1.get('frame_counter', 0)}, P2={phone2.get('frame_counter', 0)}")
        print(f"- Frames received: P1={phone1['client'].voice_frame_counter}, P2={phone2['client'].voice_frame_counter}")

        # Success criteria
        if (handshake_complete >= 2 and
                voice_started >= 2 and
                decrypt_errors == 0 and
                phone1['client'].voice_frame_counter > 0 and
                phone2['client'].voice_frame_counter > 0):
            print("\n✅ SUCCESS! Full protocol stack working!")
            print("  - Noise XK handshake ✓")
            print("  - Voice codec (Codec2) ✓")
            print("  - 4FSK modulation ✓")
            print("  - Bidirectional voice ✓")
        else:
            print("\n❌ Protocol test failed")
            if decrypt_errors > 0:
                print("  - Still getting decryption errors")
            if phone1['client'].voice_frame_counter == 0:
                print("  - Phone 1 not receiving voice")
            if phone2['client'].voice_frame_counter == 0:
                print("  - Phone 2 not receiving voice")

        app.quit()

    QTimer.singleShot(15000, final_check)

    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
58
protocol_prototype/DryBox/install_audio_deps.sh
Executable file
@@ -0,0 +1,58 @@
#!/bin/bash
# Install audio dependencies for DryBox

echo "Installing audio dependencies for DryBox..."
echo

# Detect OS
if [ -f /etc/os-release ]; then
    . /etc/os-release
    OS=$ID
    VER=$VERSION_ID
else
    echo "Cannot detect OS. Please install manually."
    exit 1
fi

case $OS in
    fedora)
        echo "Detected Fedora $VER"
        echo "Installing python3-devel and portaudio-devel..."
        sudo dnf install -y python3-devel portaudio-devel
        ;;

    ubuntu|debian)
        echo "Detected $OS $VER"
        echo "Installing python3-dev and portaudio19-dev..."
        sudo apt-get update
        sudo apt-get install -y python3-dev portaudio19-dev
        ;;

    *)
        echo "Unsupported OS: $OS"
        echo "Please install manually:"
        echo "  - Python development headers"
        echo "  - PortAudio development libraries"
        exit 1
        ;;
esac

if [ $? -eq 0 ]; then
    echo
    echo "System dependencies installed successfully!"
    echo "Now installing PyAudio..."
    pip install pyaudio

    if [ $? -eq 0 ]; then
        echo
        echo "✅ Audio dependencies installed successfully!"
        echo "You can now use real-time audio playback in DryBox."
    else
        echo
        echo "❌ Failed to install PyAudio"
        echo "Try: pip install --user pyaudio"
    fi
else
    echo
    echo "❌ Failed to install system dependencies"
fi
@@ -1,16 +0,0 @@
DryBox Protocol Integration Complete
====================================

Successfully integrated:
- Noise XK protocol for secure handshake and encryption
- Codec2 voice codec (1200 bps mode)
- 4FSK modulation (600/1200/1800/2400 Hz)
- Message framing for GSM transport

Test Results:
- Handshakes: ✓ Working
- Voice sessions: ✓ Working
- Voice transmission: ✓ Working (Phone 2 receiving frames)
- Zero decryption errors with proper framing

The complete protocol stack is now integrated with the DryBox UI and GSM simulator.
@@ -12,8 +12,8 @@ PyQt5>=5.15.0
# Numerical computing (for signal processing)
numpy>=1.24.0

# Audio processing (optional, for real audio I/O)
# pyaudio>=0.2.11
# Audio processing (for real audio I/O)
pyaudio>=0.2.11

# Wave file handling (included in standard library)
# wave
@@ -1,16 +0,0 @@
#!/bin/bash

echo "Starting GSM simulator..."
cd simulator
python3 gsm_simulator.py &
SIM_PID=$!
echo "GSM simulator PID: $SIM_PID"

sleep 3

echo "Running test..."
cd ..
python3 test_no_chacha.py

echo "Killing GSM simulator..."
kill $SIM_PID
150
protocol_prototype/DryBox/test_audio_features.py
Executable file
@@ -0,0 +1,150 @@
#!/usr/bin/env python3
"""Test script for audio features in DryBox"""

import sys
import os
import wave
import struct
import time

# Add parent directory to path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from UI.audio_player import AudioPlayer, PYAUDIO_AVAILABLE
from UI.audio_processor import AudioProcessor

def create_test_audio(filename="test_tone.wav", duration=2, frequency=440):
    """Create a test audio file with a sine wave"""
    sample_rate = 8000
    num_samples = int(sample_rate * duration)

    # Generate sine wave
    import math
    samples = []
    for i in range(num_samples):
        t = float(i) / sample_rate
        value = int(32767 * 0.5 * math.sin(2 * math.pi * frequency * t))
        samples.append(value)

    # Save to WAV file
    with wave.open(filename, 'wb') as wav_file:
        wav_file.setnchannels(1)
        wav_file.setsampwidth(2)
        wav_file.setframerate(sample_rate)
        wav_file.writeframes(struct.pack(f'{len(samples)}h', *samples))

    print(f"Created test audio file: {filename}")
    return filename

def test_audio_player():
    """Test audio player functionality"""
    print("\n=== Testing Audio Player ===")

    player = AudioPlayer()
    player.set_debug_callback(print)

    if PYAUDIO_AVAILABLE:
        print("PyAudio is available - testing playback")

        # Test playback
        client_id = 0
        if player.start_playback(client_id):
            print(f"Started playback for client {client_id}")

            # Create and play test audio
            test_file = create_test_audio()
            with wave.open(test_file, 'rb') as wav:
                data = wav.readframes(wav.getnframes())

            # Add audio data
            chunk_size = 640  # 320 samples * 2 bytes
            for i in range(0, len(data), chunk_size):
                chunk = data[i:i+chunk_size]
                player.add_audio_data(client_id, chunk)
                time.sleep(0.04)  # 40ms per chunk

            time.sleep(0.5)  # Let playback finish
            player.stop_playback(client_id)
            print(f"Stopped playback for client {client_id}")

            # Clean up
            os.remove(test_file)
    else:
        print("PyAudio not available - skipping playback test")

    # Test recording (works without PyAudio)
    print("\n=== Testing Recording ===")
    client_id = 1
    player.start_recording(client_id)

    # Add some test data
    test_data = b'\x00\x01' * 320  # Simple test pattern
    for i in range(10):
        player.add_audio_data(client_id, test_data)

    save_path = player.stop_recording(client_id, "test_recording.wav")
    if save_path and os.path.exists(save_path):
        print(f"Recording saved successfully: {save_path}")
        os.remove(save_path)
    else:
        print("Recording failed")

    player.cleanup()
    print("Audio player test complete")

def test_audio_processor():
    """Test audio processor functionality"""
    print("\n=== Testing Audio Processor ===")

    processor = AudioProcessor()
    processor.set_debug_callback(print)

    # Create test audio
    test_file = create_test_audio("test_input.wav", duration=1, frequency=1000)

    # Read test audio
    with wave.open(test_file, 'rb') as wav:
        test_data = wav.readframes(wav.getnframes())

    # Test various processing functions
    print("\nTesting normalize:")
    normalized = processor.normalize_audio(test_data, target_db=-6)
    save_path = processor.save_processed_audio(normalized, test_file, "normalized")
    if save_path:
        print(f"Saved: {save_path}")
        os.remove(save_path)

    print("\nTesting gain:")
    gained = processor.apply_gain(test_data, gain_db=6)
    save_path = processor.save_processed_audio(gained, test_file, "gained")
    if save_path:
        print(f"Saved: {save_path}")
        os.remove(save_path)

    print("\nTesting filters:")
    filtered = processor.apply_low_pass_filter(test_data)
    save_path = processor.save_processed_audio(filtered, test_file, "lowpass")
    if save_path:
        print(f"Saved: {save_path}")
        os.remove(save_path)

    # Clean up
    os.remove(test_file)
    print("\nAudio processor test complete")

def main():
    """Run all tests"""
    print("DryBox Audio Features Test")
    print("==========================")

    if not PYAUDIO_AVAILABLE:
        print("\nNOTE: PyAudio not installed. Playback tests will be skipped.")
        print("To install: sudo dnf install python3-devel portaudio-devel && pip install pyaudio")

    test_audio_player()
    test_audio_processor()

    print("\nAll tests complete!")

if __name__ == "__main__":
    main()
67
protocol_prototype/DryBox/test_audio_flow.py
Normal file
@@ -0,0 +1,67 @@
#!/usr/bin/env python3
"""
Test to verify audio is flowing through the system
"""

import os
import wave
import struct

def check_audio_file():
    """Verify input.wav has actual audio content"""
    print("Checking input.wav content...")

    with wave.open("wav/input.wav", 'rb') as wf:
        # Read multiple frames to check for silence
        total_frames = wf.getnframes()
        print(f"Total frames: {total_frames}")

        # Check beginning
        wf.setpos(0)
        frames = wf.readframes(320)
        samples = struct.unpack('320h', frames)
        max_val = max(abs(s) for s in samples)
        print(f"Frame 0 (beginning): max amplitude = {max_val}")

        # Check middle
        wf.setpos(total_frames // 2)
        frames = wf.readframes(320)
        samples = struct.unpack('320h', frames)
        max_val = max(abs(s) for s in samples)
        print(f"Frame {total_frames//2} (middle): max amplitude = {max_val}")

        # Check near end
        wf.setpos(total_frames - 640)
        frames = wf.readframes(320)
        samples = struct.unpack('320h', frames)
        max_val = max(abs(s) for s in samples)
        print(f"Frame {total_frames-640} (near end): max amplitude = {max_val}")

        # Find first non-silent frame
        wf.setpos(0)
        for i in range(0, total_frames, 320):
            frames = wf.readframes(320)
            if len(frames) < 640:
                break
            samples = struct.unpack('320h', frames)
            max_val = max(abs(s) for s in samples)
            if max_val > 100:  # Not silence
                print(f"\nFirst non-silent frame at position {i}")
                print(f"First 10 samples: {samples[:10]}")
                break

def main():
    # Change to DryBox directory if needed
    if os.path.basename(os.getcwd()) != 'DryBox':
        if os.path.exists('DryBox'):
            os.chdir('DryBox')

    check_audio_file()

    print("\nTo fix silence at beginning of file:")
    print("1. Skip initial silence in phone_manager.py")
    print("2. Or use a different test file")
    print("3. Or trim the silence from input.wav")

if __name__ == "__main__":
    main()
193
protocol_prototype/DryBox/test_audio_pipeline.py
Normal file
@@ -0,0 +1,193 @@
#!/usr/bin/env python3
"""
Test the audio pipeline (Codec2 + FSK) independently
"""

import sys
import os
import wave
import struct
import numpy as np

# Add parent directory to path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from voice_codec import Codec2Wrapper, FSKModem, Codec2Mode, Codec2Frame

def test_codec_only():
    """Test just the codec2 encode/decode"""
    print("\n1. Testing Codec2 only...")

    codec = Codec2Wrapper(mode=Codec2Mode.MODE_1200)

    # Read test audio
    with wave.open("wav/input.wav", 'rb') as wf:
        # Read 320 samples (40ms at 8kHz)
        frames = wf.readframes(320)
        if len(frames) < 640:  # 320 samples * 2 bytes
            print("Not enough audio data")
            return False

    # Convert to samples
    samples = struct.unpack(f'{len(frames)//2}h', frames)
    print(f"Input: {len(samples)} samples, first 10: {samples[:10]}")

    # Encode
    encoded = codec.encode(frames)
    if encoded:
        print(f"Encoded: {len(encoded.bits)} bytes")
        print(f"First 10 bytes: {encoded.bits[:10].hex()}")
    else:
        print("Encoding failed!")
        return False

    # Decode
    decoded = codec.decode(encoded)
    if decoded is not None:
        print(f"Decoded: type={type(decoded)}, len={len(decoded)}")
        if hasattr(decoded, '__getitem__'):
            print(f"First 10 samples: {list(decoded[:10])}")

        # Save decoded audio
        with wave.open("wav/test_codec_only.wav", 'wb') as out:
            out.setnchannels(1)
            out.setsampwidth(2)
            out.setframerate(8000)
            if hasattr(decoded, 'tobytes'):
                out.writeframes(decoded.tobytes())
            else:
                # Convert to bytes
                import array
                arr = array.array('h', decoded)
                out.writeframes(arr.tobytes())
        print("Saved decoded audio to wav/test_codec_only.wav")
        return True
    else:
        print("Decoding failed!")
        return False

def test_full_pipeline():
    """Test the full Codec2 + FSK pipeline"""
    print("\n2. Testing full pipeline (Codec2 + FSK)...")

    codec = Codec2Wrapper(mode=Codec2Mode.MODE_1200)
    modem = FSKModem()

    # Read test audio
    with wave.open("wav/input.wav", 'rb') as wf:
        frames = wf.readframes(320)
        if len(frames) < 640:
            print("Not enough audio data")
            return False

    # Encode with Codec2
    encoded = codec.encode(frames)
    if not encoded:
        print("Codec encoding failed!")
        return False
    print(f"Codec2 encoded: {len(encoded.bits)} bytes")

    # Modulate with FSK
    modulated = modem.modulate(encoded.bits)
    print(f"FSK modulated: {len(modulated)} float samples")

    # Demodulate
    demodulated, confidence = modem.demodulate(modulated)
    print(f"FSK demodulated: {len(demodulated)} bytes, confidence: {confidence:.2f}")

    if confidence < 0.5:
        print("Low confidence demodulation!")
        return False

    # Create frame for decoding
    frame = Codec2Frame(
        mode=Codec2Mode.MODE_1200,
        bits=demodulated,
        timestamp=0,
        frame_number=0
    )

    # Decode with Codec2
    decoded = codec.decode(frame)
    if decoded is not None:
        print(f"Decoded: type={type(decoded)}, len={len(decoded)}")

        # Save decoded audio
        with wave.open("wav/test_full_pipeline.wav", 'wb') as out:
            out.setnchannels(1)
            out.setsampwidth(2)
            out.setframerate(8000)
            if hasattr(decoded, 'tobytes'):
                out.writeframes(decoded.tobytes())
            else:
                # Convert to bytes
                import array
                arr = array.array('h', decoded)
                out.writeframes(arr.tobytes())
        print("Saved decoded audio to wav/test_full_pipeline.wav")
        return True
    else:
        print("Codec decoding failed!")
        return False

def test_byte_conversion():
    """Test the byte conversion that happens in the protocol"""
    print("\n3. Testing byte conversion...")

    # Create test PCM data
    test_samples = [100, -100, 200, -200, 300, -300, 0, 0, 1000, -1000]

    # Method 1: array.tobytes()
    import array
    arr = array.array('h', test_samples)
    bytes1 = arr.tobytes()
    print(f"array.tobytes(): {len(bytes1)} bytes, hex: {bytes1.hex()}")

    # Method 2: struct.pack
    bytes2 = struct.pack(f'{len(test_samples)}h', *test_samples)
    print(f"struct.pack(): {len(bytes2)} bytes, hex: {bytes2.hex()}")

    # They should be the same
    print(f"Bytes match: {bytes1 == bytes2}")

    # Test unpacking
    unpacked = struct.unpack(f'{len(bytes1)//2}h', bytes1)
    print(f"Unpacked: {unpacked}")
    print(f"Matches original: {list(unpacked) == test_samples}")

    return True

def main():
    print("Audio Pipeline Test")
    print("=" * 50)

    # Change to DryBox directory if needed
    if os.path.basename(os.getcwd()) != 'DryBox':
        if os.path.exists('DryBox'):
            os.chdir('DryBox')

    # Ensure wav directory exists
    os.makedirs("wav", exist_ok=True)

    # Run tests
    codec_ok = test_codec_only()
    pipeline_ok = test_full_pipeline()
    bytes_ok = test_byte_conversion()
|
||||
|
||||
print("\n" + "=" * 50)
|
||||
print("Test Results:")
|
||||
print(f" Codec2 only: {'✅ PASS' if codec_ok else '❌ FAIL'}")
|
||||
print(f" Full pipeline: {'✅ PASS' if pipeline_ok else '❌ FAIL'}")
|
||||
print(f" Byte conversion: {'✅ PASS' if bytes_ok else '❌ FAIL'}")
|
||||
|
||||
if codec_ok and pipeline_ok and bytes_ok:
|
||||
print("\n✅ All tests passed!")
|
||||
print("\nIf playback still doesn't work, check:")
|
||||
print("1. Is the audio data actually being sent? (check debug logs)")
|
||||
print("2. Is PyAudio stream format correct? (16-bit, 8kHz, mono)")
|
||||
print("3. Is the volume turned up?")
|
||||
else:
|
||||
print("\n❌ Some tests failed - this explains the playback issue")
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
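The byte-conversion check above rests on `array.array('h')` and `struct.pack` producing identical native-endian int16 buffers for the same samples. A minimal standalone sketch of that round trip (stdlib only, no DryBox code assumed):

```python
import array
import struct

def pcm_roundtrip(samples):
    """Pack int16 PCM samples two ways, check they agree, and unpack them back."""
    packed_array = array.array('h', samples).tobytes()
    packed_struct = struct.pack(f'{len(samples)}h', *samples)
    # Both produce the same native-endian int16 layout
    assert packed_array == packed_struct
    # Unpack back to integers: the round trip is lossless
    return list(struct.unpack(f'{len(packed_array) // 2}h', packed_array))

print(pcm_roundtrip([100, -100, 200, -200, 0, 1000, -1000]))
# → [100, -100, 200, -200, 0, 1000, -1000]
```

If the two packings ever disagreed, sender and receiver would be interpreting the same bytes as different samples, which is exactly the failure mode this test rules out.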
127
protocol_prototype/DryBox/test_audio_setup.py
Executable file
@@ -0,0 +1,127 @@
#!/usr/bin/env python3
"""
Test script to verify audio setup for DryBox
"""

import os
import sys
import wave

def check_audio_file():
    """Check if input.wav exists and has correct format"""
    wav_path = "wav/input.wav"

    if not os.path.exists(wav_path):
        print(f"❌ {wav_path} not found!")
        return False

    try:
        with wave.open(wav_path, 'rb') as wf:
            channels = wf.getnchannels()
            framerate = wf.getframerate()
            sampwidth = wf.getsampwidth()
            nframes = wf.getnframes()
            duration = nframes / framerate

            print(f"✅ Audio file: {wav_path}")
            print(f"   Channels: {channels} {'✅' if channels == 1 else '❌ (should be 1)'}")
            print(f"   Sample rate: {framerate}Hz {'✅' if framerate == 8000 else '❌ (should be 8000)'}")
            print(f"   Sample width: {sampwidth * 8} bits {'✅' if sampwidth == 2 else '❌'}")
            print(f"   Duration: {duration:.2f} seconds")
            print(f"   Size: {os.path.getsize(wav_path) / 1024:.1f} KB")

            return channels == 1 and framerate == 8000

    except Exception as e:
        print(f"❌ Error reading {wav_path}: {e}")
        return False

def check_pyaudio():
    """Check if PyAudio is installed and working"""
    try:
        import pyaudio
        p = pyaudio.PyAudio()

        # Check for output devices
        output_devices = 0
        for i in range(p.get_device_count()):
            info = p.get_device_info_by_index(i)
            if info['maxOutputChannels'] > 0:
                output_devices += 1

        p.terminate()

        print(f"✅ PyAudio installed")
        print(f"   Output devices available: {output_devices}")
        return True

    except ImportError:
        print("❌ PyAudio not installed")
        print("   To enable playback, run:")
        print("   sudo dnf install python3-devel portaudio-devel")
        print("   pip install pyaudio")
        return False
    except Exception as e:
        print(f"❌ PyAudio error: {e}")
        return False

def check_dependencies():
    """Check all required dependencies"""
    deps = {
        'PyQt5': 'PyQt5',
        'numpy': 'numpy',
        'struct': None,  # Built-in
        'wave': None,  # Built-in
    }

    print("\nDependency check:")
    all_good = True

    for module_name, pip_name in deps.items():
        try:
            __import__(module_name)
            print(f"✅ {module_name}")
        except ImportError:
            print(f"❌ {module_name} not found")
            if pip_name:
                print(f"   Install with: pip install {pip_name}")
            all_good = False

    return all_good

def main():
    print("DryBox Audio Setup Test")
    print("=" * 40)

    # Change to DryBox directory if needed
    if os.path.basename(os.getcwd()) != 'DryBox':
        if os.path.exists('DryBox'):
            os.chdir('DryBox')
            print(f"Changed to DryBox directory: {os.getcwd()}")

    print("\nChecking audio file...")
    audio_ok = check_audio_file()

    print("\nChecking PyAudio...")
    pyaudio_ok = check_pyaudio()

    print("\nChecking dependencies...")
    deps_ok = check_dependencies()

    print("\n" + "=" * 40)
    if audio_ok and deps_ok:
        print("✅ Audio setup is ready!")
        if not pyaudio_ok:
            print("⚠️  Playback disabled (PyAudio not available)")
            print("   Recording will still work")
    else:
        print("❌ Audio setup needs attention")

    print("\nUsage tips:")
    print("1. Run the UI: python UI/main.py")
    print("2. Click 'Run Automatic Test' or press Space")
    print("3. Enable playback on Phone 2 with Ctrl+2")
    print("4. You'll hear the decoded audio after handshake completes")

if __name__ == "__main__":
    main()
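`check_audio_file()` above expects `wav/input.wav` to be mono, 16-bit, 8 kHz. When no capture hardware is handy, a compliant test tone can be generated with the stdlib alone — a sketch (the 440 Hz frequency and half-scale amplitude are illustrative choices, not project requirements):

```python
import array
import math
import os
import wave

def write_test_tone(path, freq=440, seconds=1, rate=8000):
    """Write a mono, 16-bit WAV sine tone in the format the setup check expects."""
    samples = array.array('h', (
        int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
        for i in range(rate * seconds)
    ))
    with wave.open(path, 'wb') as wav:
        wav.setnchannels(1)    # mono
        wav.setsampwidth(2)    # 16-bit
        wav.setframerate(rate) # 8 kHz
        wav.writeframes(samples.tobytes())

os.makedirs("wav", exist_ok=True)
write_test_tone("wav/input.wav")
```

Running the setup script afterwards should report all three format checks as passing.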
@@ -1,168 +0,0 @@
#!/usr/bin/env python3
"""
Test script to debug auto-test functionality
"""

import sys
import time
import subprocess
from pathlib import Path

# Add parent directory to path
parent_dir = str(Path(__file__).parent.parent)
if parent_dir not in sys.path:
    sys.path.insert(0, parent_dir)

from integrated_protocol import IntegratedDryBoxProtocol

def test_basic_protocol():
    """Test basic protocol functionality"""
    print("=== Testing Basic Protocol ===")

    # Create two protocol instances
    phone1 = IntegratedDryBoxProtocol(mode="receiver")
    phone2 = IntegratedDryBoxProtocol(mode="sender")

    # Connect to GSM
    print("Connecting to GSM...")
    if not phone1.connect_gsm():
        print("Phone 1 failed to connect to GSM")
        return False

    if not phone2.connect_gsm():
        print("Phone 2 failed to connect to GSM")
        return False

    print("✓ Both phones connected to GSM")

    # Setup connections
    print("\nSetting up protocol connections...")
    port1 = phone1.setup_protocol_connection()
    port2 = phone2.setup_protocol_connection()

    print(f"Phone 1 port: {port1}")
    print(f"Phone 2 port: {port2}")

    # Connect to each other
    phone1.setup_protocol_connection(peer_port=port2)
    phone2.setup_protocol_connection(peer_port=port1)

    print("✓ Connections established")

    # Test key exchange
    print("\nTesting key exchange...")

    # Phone 1 initiates
    if phone1.initiate_key_exchange(cipher_type=1):  # ChaCha20
        print("✓ Phone 1 initiated key exchange")
    else:
        print("✗ Phone 1 failed to initiate key exchange")
        return False

    # Wait for completion
    timeout = 10
    start = time.time()
    while time.time() - start < timeout:
        if phone1.protocol.state.get("key_exchange_complete"):
            print("✓ Key exchange completed!")
            break
        time.sleep(0.5)
    else:
        print("✗ Key exchange timeout")
        return False

    # Test message sending
    print("\nTesting encrypted message...")
    test_msg = "Test message from auto-test"
    phone1.send_encrypted_message(test_msg)
    print(f"✓ Sent: {test_msg}")

    # Give time for message to be received
    time.sleep(2)

    # Clean up
    phone1.close()
    phone2.close()

    return True


def test_voice_safe():
    """Test voice functionality safely"""
    print("\n=== Testing Voice (Safe Mode) ===")

    # Check if input.wav exists
    input_file = Path(__file__).parent / "input.wav"
    if not input_file.exists():
        print("✗ input.wav not found")
        print("Creating a test audio file...")

        # Create a simple test audio file
        try:
            import wave
            import array
            import math

            with wave.open(str(input_file), 'wb') as wav:
                wav.setnchannels(1)  # Mono
                wav.setsampwidth(2)  # 16-bit
                wav.setframerate(8000)  # 8kHz

                # Generate 1 second of a 440Hz sine wave
                duration = 1
                samples = []
                for i in range(8000 * duration):
                    t = i / 8000.0
                    sample = int(32767 * 0.5 * math.sin(2 * math.pi * 440 * t))
                    samples.append(sample)

                wav.writeframes(array.array('h', samples).tobytes())

            print("✓ Created test audio file")
        except Exception as e:
            print(f"✗ Failed to create audio: {e}")
            return False

    print("✓ Audio file ready")
    return True


def main():
    """Run all tests"""
    print("DryBox Auto-Test Functionality Debugger")
    print("=" * 50)

    # Start GSM simulator
    print("\nStarting GSM simulator...")
    gsm_path = Path(__file__).parent / "gsm_simulator.py"
    gsm_process = subprocess.Popen(
        [sys.executable, str(gsm_path)],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )

    # Wait for GSM to start
    time.sleep(2)

    try:
        # Run tests
        if test_basic_protocol():
            print("\n✓ Basic protocol test passed")
        else:
            print("\n✗ Basic protocol test failed")

        if test_voice_safe():
            print("\n✓ Voice setup test passed")
        else:
            print("\n✗ Voice setup test failed")

    finally:
        # Clean up
        gsm_process.terminate()
        subprocess.run(["pkill", "-f", "gsm_simulator.py"], capture_output=True)

    print("\n" + "=" * 50)
    print("Test complete!")


if __name__ == "__main__":
    main()
@@ -1,74 +0,0 @@
#!/usr/bin/env python3
"""Clean test without ChaCha20"""

import sys
import time
sys.path.append('UI')
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QTimer
from main import PhoneUI

def main():
    app = QApplication(sys.argv)
    window = PhoneUI()

    # Suppress debug output during test
    error_count = 0
    success_count = 0

    original_debug = window.debug
    def count_debug(msg):
        nonlocal error_count, success_count
        if "Decryption error:" in msg:
            error_count += 1
        elif "Received voice data frame" in msg:
            success_count += 1
        # Only show important messages
        if any(x in msg for x in ["handshake complete!", "Voice session", "Starting audio"]):
            original_debug(msg)

    window.debug = count_debug
    window.show()

    print("\n=== CLEAN TEST - NO CHACHA20 ===\n")

    # Make call
    QTimer.singleShot(1000, lambda: test_sequence())

    def test_sequence():
        print("1. Making call...")
        window.manager.phone_action(0, window)

        QTimer.singleShot(1000, lambda: answer_call())

    def answer_call():
        print("2. Answering call...")
        window.manager.phone_action(1, window)

        QTimer.singleShot(10000, lambda: show_results())

    def show_results():
        phone1 = window.manager.phones[0]
        phone2 = window.manager.phones[1]

        print(f"\n3. Results after 10 seconds:")
        print(f"   Handshake: P1={phone1['client'].handshake_complete}, P2={phone2['client'].handshake_complete}")
        print(f"   Voice: P1={phone1['client'].voice_active}, P2={phone2['client'].voice_active}")
        print(f"   Sent: P1={phone1.get('frame_counter', 0)}, P2={phone2.get('frame_counter', 0)}")
        print(f"   Received: P1={phone1['client'].voice_frame_counter}, P2={phone2['client'].voice_frame_counter}")
        print(f"   Decryption errors: {error_count}")
        print(f"   Voice frames decoded: {success_count}")

        if error_count == 0 and success_count > 0:
            print(f"\n✅ SUCCESS! Protocol working without ChaCha20!")
        elif error_count > 0:
            print(f"\n❌ Still getting decryption errors")
        else:
            print(f"\n❌ No voice frames received")

        app.quit()

    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
@@ -1,138 +0,0 @@
#!/usr/bin/env python3
"""Complete integration test of DryBox with Noise XK, Codec2, and 4FSK"""

import sys
import time
sys.path.append('UI')
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QTimer
from main import PhoneUI

def main():
    app = QApplication(sys.argv)
    window = PhoneUI()

    # Track comprehensive results
    results = {
        'handshakes': 0,
        'voice_sessions': 0,
        'frames_sent_p1': 0,
        'frames_sent_p2': 0,
        'frames_received_p1': 0,
        'frames_received_p2': 0,
        'decode_errors': 0,
        'low_confidence': 0
    }

    original_debug = window.debug
    def track_debug(msg):
        if "handshake complete!" in msg:
            results['handshakes'] += 1
            original_debug(msg)
        elif "Voice session started" in msg:
            results['voice_sessions'] += 1
            original_debug(msg)
        elif "Encoding voice frame #" in msg:
            if "[Phone1]" in msg:
                results['frames_sent_p1'] += 1
            else:
                results['frames_sent_p2'] += 1
            if "#0" in msg or "#50" in msg or "#100" in msg:
                original_debug(msg)
        elif "Received voice data frame #" in msg:
            if "[Phone1]" in msg:
                results['frames_received_p1'] += 1
            else:
                results['frames_received_p2'] += 1
            if "#0" in msg or "#25" in msg:
                original_debug(msg)
        elif "Voice decode error:" in msg:
            results['decode_errors'] += 1
        elif "Low confidence demodulation:" in msg:
            results['low_confidence'] += 1

    window.debug = track_debug
    window.show()

    print("\n=== COMPLETE PROTOCOL INTEGRATION TEST ===")
    print("Components:")
    print("- Noise XK handshake and encryption")
    print("- Codec2 voice codec (1200 bps)")
    print("- 4FSK modulation (600/1200/1800/2400 Hz)")
    print("- Message framing for GSM transport")
    print("- Bidirectional voice communication\n")

    # Test sequence
    def start_test():
        print("Step 1: Initiating call from Phone 1 to Phone 2...")
        window.manager.phone_action(0, window)
        QTimer.singleShot(1500, answer_call)

    def answer_call():
        print("Step 2: Phone 2 answering call...")
        window.manager.phone_action(1, window)
        print("Step 3: Establishing secure channel and starting voice...")
        QTimer.singleShot(10000, show_results)

    def show_results():
        phone1 = window.manager.phones[0]
        phone2 = window.manager.phones[1]

        print(f"\n=== FINAL RESULTS ===")
        print(f"\nHandshake Status:")
        print(f"  Handshakes completed: {results['handshakes']}")
        print(f"  Phone 1: {'✓' if phone1['client'].handshake_complete else '✗'}")
        print(f"  Phone 2: {'✓' if phone2['client'].handshake_complete else '✗'}")

        print(f"\nVoice Session Status:")
        print(f"  Sessions started: {results['voice_sessions']}")
        print(f"  Phone 1 active: {'✓' if phone1['client'].voice_active else '✗'}")
        print(f"  Phone 2 active: {'✓' if phone2['client'].voice_active else '✗'}")

        print(f"\nVoice Frame Statistics:")
        print(f"  Phone 1: Sent {results['frames_sent_p1']}, Received {phone1['client'].voice_frame_counter}")
        print(f"  Phone 2: Sent {results['frames_sent_p2']}, Received {phone2['client'].voice_frame_counter}")
        print(f"  Decode errors: {results['decode_errors']}")
        print(f"  Low confidence frames: {results['low_confidence']}")

        # Calculate success
        handshake_ok = results['handshakes'] >= 2
        voice_ok = results['voice_sessions'] >= 2
        p1_rx = phone1['client'].voice_frame_counter > 0
        p2_rx = phone2['client'].voice_frame_counter > 0

        print(f"\n=== PROTOCOL STACK STATUS ===")
        print(f"  Noise XK Handshake: {'✓ WORKING' if handshake_ok else '✗ FAILED'}")
        print(f"  Voice Sessions: {'✓ WORKING' if voice_ok else '✗ FAILED'}")
        print(f"  Codec2 + 4FSK (P1→P2): {'✓ WORKING' if p2_rx else '✗ FAILED'}")
        print(f"  Codec2 + 4FSK (P2→P1): {'✓ WORKING' if p1_rx else '✗ FAILED'}")

        if handshake_ok and voice_ok and (p1_rx or p2_rx):
            print(f"\n✅ INTEGRATION SUCCESSFUL!")
            print(f"   The protocol stack is working with:")
            print(f"   - Secure Noise XK encrypted channel established")
            print(f"   - Voice codec and modulation operational")
            if p1_rx and p2_rx:
                print(f"   - Full duplex communication achieved")
            else:
                print(f"   - Half duplex communication achieved")
                if not p1_rx:
                    print(f"   - Note: Phone 1 not receiving (may need timing adjustments)")
                if not p2_rx:
                    print(f"   - Note: Phone 2 not receiving (may need timing adjustments)")
        else:
            print(f"\n❌ INTEGRATION FAILED")
            if not handshake_ok:
                print(f"   - Noise XK handshake did not complete")
            if not voice_ok:
                print(f"   - Voice sessions did not start")
            if not p1_rx and not p2_rx:
                print(f"   - No voice frames received by either phone")

        app.quit()

    QTimer.singleShot(1000, start_test)
    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
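The test banner above names the four 4FSK tones (600/1200/1800/2400 Hz), one per 2-bit symbol. A minimal, hypothetical modulator sketch of that mapping — the symbol rate, sample rate, and MSB-first symbol order here are illustrative assumptions, not the project's actual `FSKModem` parameters:

```python
import math

# One tone per 2-bit symbol, frequencies taken from the test banner above
TONES = [600, 1200, 1800, 2400]  # Hz

def fsk4_modulate(data, sample_rate=8000, symbol_rate=100):
    """Map each 2-bit symbol of `data` to a tone burst of float samples."""
    samples_per_symbol = sample_rate // symbol_rate
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # MSB-first: four 2-bit symbols per byte
            tone = TONES[(byte >> shift) & 0b11]
            for n in range(samples_per_symbol):
                out.append(math.sin(2 * math.pi * tone * n / sample_rate))
    return out

# One byte -> 4 symbols -> 4 * 80 = 320 samples at these rates
samples = fsk4_modulate(b'\x1b')
print(len(samples))  # → 320
```

A real modem would add phase continuity between symbols and a demodulator with per-tone correlation (hence the confidence score the tests report); this sketch only shows the symbol-to-tone mapping.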
@@ -1,123 +0,0 @@
#!/usr/bin/env python3
"""Final test of complete protocol integration"""

import sys
import time
sys.path.append('UI')
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QTimer
from main import PhoneUI

def main():
    app = QApplication(sys.argv)
    window = PhoneUI()

    # Comprehensive tracking
    stats = {
        'handshakes': 0,
        'voice_sessions': 0,
        'frames_sent': 0,
        'frames_received': 0,
        'decode_errors': 0,
        'demod_success': 0,
        'demod_low_conf': 0
    }

    original_debug = window.debug
    def track_events(msg):
        if "handshake complete!" in msg:
            stats['handshakes'] += 1
            original_debug(msg)
        elif "Voice session started" in msg:
            stats['voice_sessions'] += 1
            original_debug(msg)
        elif "Encoding voice frame" in msg:
            stats['frames_sent'] += 1
            if stats['frames_sent'] == 1:
                original_debug("First frame encoded successfully")
        elif "voice frame #" in msg and "Received" in msg:
            stats['frames_received'] += 1
            if stats['frames_received'] == 1:
                original_debug("✓ First voice frame received!")
            elif stats['frames_received'] % 50 == 0:
                original_debug(f"✓ Received {stats['frames_received']} voice frames")
        elif "Voice decode error:" in msg:
            stats['decode_errors'] += 1
            if stats['decode_errors'] == 1:
                original_debug(f"First decode error: {msg}")
        elif "Low confidence demodulation:" in msg:
            stats['demod_low_conf'] += 1

    window.debug = track_events
    window.show()

    print("\n=== FINAL PROTOCOL INTEGRATION TEST ===")
    print("Testing complete stack:")
    print("- Noise XK handshake")
    print("- Codec2 voice compression")
    print("- 4FSK modulation")
    print("- Message framing\n")

    def start_call():
        print("Step 1: Initiating call...")
        window.manager.phone_action(0, window)
        QTimer.singleShot(1500, answer_call)

    def answer_call():
        print("Step 2: Answering call...")
        window.manager.phone_action(1, window)
        print("Step 3: Waiting for voice transmission...\n")
        QTimer.singleShot(8000, final_results)

    def final_results():
        phone1 = window.manager.phones[0]
        phone2 = window.manager.phones[1]

        print("\n=== FINAL RESULTS ===")
        print(f"\nProtocol Status:")
        print(f"  Handshakes completed: {stats['handshakes']}")
        print(f"  Voice sessions: {stats['voice_sessions']}")
        print(f"  Frames sent: {stats['frames_sent']}")
        print(f"  Frames received: {stats['frames_received']}")
        print(f"  Decode errors: {stats['decode_errors']}")
        print(f"  Low confidence demod: {stats['demod_low_conf']}")

        print(f"\nPhone Status:")
        print(f"  Phone 1: handshake={phone1['client'].handshake_complete}, "
              f"voice={phone1['client'].voice_active}, "
              f"rx={phone1['client'].voice_frame_counter}")
        print(f"  Phone 2: handshake={phone2['client'].handshake_complete}, "
              f"voice={phone2['client'].voice_active}, "
              f"rx={phone2['client'].voice_frame_counter}")

        # Determine success
        success = (
            stats['handshakes'] >= 2 and
            stats['voice_sessions'] >= 2 and
            stats['frames_received'] > 0 and
            stats['decode_errors'] == 0
        )

        if success:
            print(f"\n✅ PROTOCOL INTEGRATION SUCCESSFUL!")
            print(f"   - Noise XK: Working")
            print(f"   - Codec2: Working")
            print(f"   - 4FSK: Working")
            print(f"   - Framing: Working")
            print(f"   - Voice transmission: {stats['frames_received']} frames received")
        else:
            print(f"\n❌ Issues detected:")
            if stats['handshakes'] < 2:
                print(f"   - Handshake incomplete")
            if stats['decode_errors'] > 0:
                print(f"   - Voice decode errors: {stats['decode_errors']}")
            if stats['frames_received'] == 0:
                print(f"   - No voice frames received")

        app.quit()

    QTimer.singleShot(1000, start_call)
    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
@@ -1,88 +0,0 @@
#!/usr/bin/env python3
"""Test with proper message framing"""

import sys
import time
sys.path.append('UI')
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QTimer
from main import PhoneUI

def main():
    app = QApplication(sys.argv)
    window = PhoneUI()

    # Track results
    results = {
        'handshakes': 0,
        'voice_started': 0,
        'decrypt_errors': 0,
        'frames_received': 0
    }

    original_debug = window.debug
    def count_debug(msg):
        if "handshake complete!" in msg:
            results['handshakes'] += 1
        elif "Voice session started" in msg:
            results['voice_started'] += 1
        elif "Decryption error:" in msg:
            results['decrypt_errors'] += 1
        elif "Received voice data frame" in msg:
            results['frames_received'] += 1
        # Show important messages
        if any(x in msg for x in ["handshake complete!", "Voice session", "frame #0", "frame #25"]):
            original_debug(msg)

    window.debug = count_debug
    window.show()

    print("\n=== TEST WITH MESSAGE FRAMING ===")
    print("- Proper length-prefixed messages")
    print("- No fragmentation issues\n")

    # Test sequence
    def test_sequence():
        print("1. Making call...")
        window.manager.phone_action(0, window)
        QTimer.singleShot(1000, answer_call)

    def answer_call():
        print("2. Answering call...")
        window.manager.phone_action(1, window)
        QTimer.singleShot(8000, show_results)

    def show_results():
        phone1 = window.manager.phones[0]
        phone2 = window.manager.phones[1]

        print(f"\n3. Results:")
        print(f"   Handshakes completed: {results['handshakes']}")
        print(f"   Voice sessions started: {results['voice_started']}")
        print(f"   Decryption errors: {results['decrypt_errors']}")
        print(f"   Voice frames received: {results['frames_received']}")
        print(f"   Phone 1 received: {phone1['client'].voice_frame_counter} frames")
        print(f"   Phone 2 received: {phone2['client'].voice_frame_counter} frames")

        if (results['handshakes'] >= 2 and
            results['voice_started'] >= 2 and
            results['decrypt_errors'] == 0 and
            phone1['client'].voice_frame_counter > 0 and
            phone2['client'].voice_frame_counter > 0):
            print(f"\n✅ SUCCESS! Protocol working with proper framing!")
            print(f"   - Noise XK encryption ✓")
            print(f"   - Codec2 voice codec ✓")
            print(f"   - 4FSK modulation ✓")
            print(f"   - No fragmentation ✓")
        else:
            print(f"\n❌ Protocol test failed")
            if results['decrypt_errors'] > 0:
                print(f"   - Still getting decryption errors")

        app.quit()

    QTimer.singleShot(1000, test_sequence)
    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
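The test above verifies that length-prefixed messages survive the GSM transport without fragmentation errors. A minimal sketch of that framing pattern — the 4-byte big-endian length header is an assumption for illustration; the project's actual header layout is not shown in this commit:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a payload with a 4-byte big-endian length header."""
    return struct.pack('>I', len(payload)) + payload

def deframe(stream: bytes):
    """Split a byte stream back into complete messages.

    A trailing partial frame is left for the next read, which is what
    prevents fragmentation from corrupting message boundaries.
    """
    messages, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from('>I', stream, offset)
        if offset + 4 + length > len(stream):
            break  # incomplete frame still in flight
        messages.append(stream[offset + 4:offset + 4 + length])
        offset += 4 + length
    return messages

# Two frames arriving back-to-back in one socket read still split cleanly
buf = frame(b'hello') + frame(b'world')
print(deframe(buf))  # → [b'hello', b'world']
```

Without the length prefix, a receiver decrypting whatever bytes one `recv()` happens to return would see truncated or concatenated ciphertexts, which is exactly the "decryption errors" failure mode these tests count.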
@@ -1,80 +0,0 @@
#!/usr/bin/env python3
"""Test without ChaCha20 encryption"""

import sys
import time
sys.path.append('UI')
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QTimer
from main import PhoneUI

def main():
    app = QApplication(sys.argv)
    window = PhoneUI()
    window.show()

    print("\n=== TESTING WITHOUT CHACHA20 ===")
    print("Using only Noise XK encryption for everything\n")

    # Run auto test
    QTimer.singleShot(1000, lambda: window.auto_test_button.click())

    # Monitor progress
    def check_progress():
        phone1 = window.manager.phones[0]
        phone2 = window.manager.phones[1]

        if phone1['client'].handshake_complete and phone2['client'].handshake_complete:
            print("✓ Handshake completed for both phones")
            if phone1['client'].voice_active and phone2['client'].voice_active:
                print("✓ Voice sessions active")
                frames1 = phone1.get('frame_counter', 0)
                frames2 = phone2.get('frame_counter', 0)
                print(f"  Phone 1 sent {frames1} frames")
                print(f"  Phone 2 sent {frames2} frames")

    # Check every 2 seconds
    progress_timer = QTimer()
    progress_timer.timeout.connect(check_progress)
    progress_timer.start(2000)

    # Final results after 20 seconds
    def final_results():
        progress_timer.stop()

        phone1 = window.manager.phones[0]
        phone2 = window.manager.phones[1]

        console_text = window.debug_console.toPlainText()

        print("\n=== FINAL RESULTS ===")
        print(f"Handshake: P1={phone1['client'].handshake_complete}, P2={phone2['client'].handshake_complete}")
        print(f"Voice Active: P1={phone1['client'].voice_active}, P2={phone2['client'].voice_active}")
        print(f"Frames Sent: P1={phone1.get('frame_counter', 0)}, P2={phone2.get('frame_counter', 0)}")
        print(f"Frames Received: P1={phone1['client'].voice_frame_counter}, P2={phone2['client'].voice_frame_counter}")

        # Count errors and successes
        decrypt_errors = console_text.count("Decryption error")
        voice_decode_errors = console_text.count("Voice decode error")
        received_voice = console_text.count("Received voice data frame")

        print(f"\nDecryption errors: {decrypt_errors}")
        print(f"Voice decode errors: {voice_decode_errors}")
        print(f"Voice frames successfully received: {received_voice}")

        # Success criteria
        if (decrypt_errors == 0 and
            phone1['client'].voice_frame_counter > 10 and
            phone2['client'].voice_frame_counter > 10):
            print("\n✅ SUCCESS! No ChaCha20 = No decryption errors!")
        else:
            print("\n❌ Still having issues...")

        app.quit()

    QTimer.singleShot(20000, final_results)

    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
@@ -1,77 +0,0 @@
#!/usr/bin/env python3
"""Test UI fixes - waveforms and layout"""

import sys
import time
sys.path.append('UI')
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QTimer
from main import PhoneUI

def main():
    app = QApplication(sys.argv)
    window = PhoneUI()

    # Track waveform updates
    waveform_updates = {'sent': 0, 'received': 0}

    original_update_waveform = window.manager.update_waveform
    original_update_sent = window.manager.update_sent_waveform

    def track_received(client_id, data):
        waveform_updates['received'] += 1
        original_update_waveform(client_id, data)
        if waveform_updates['received'] == 1:
            print("✓ First received waveform update")

    def track_sent(client_id, data):
        waveform_updates['sent'] += 1
        original_update_sent(client_id, data)
        if waveform_updates['sent'] == 1:
            print("✓ First sent waveform update")

    window.manager.update_waveform = track_received
    window.manager.update_sent_waveform = track_sent

    window.show()

    print("\n=== UI FIXES TEST ===")
    print("1. Window title updated (no ChaCha20)")
    print("2. Waveform widgets properly sized")
    print("3. Layout more compact")
    print("")

    # Test sequence
    def start_test():
        print("Starting call test...")
        window.manager.phone_action(0, window)
        QTimer.singleShot(1000, answer_call)

    def answer_call():
        print("Answering call...")
        window.manager.phone_action(1, window)
        QTimer.singleShot(5000, check_results)

    def check_results():
        print(f"\nWaveform updates:")
        print(f"  Sent: {waveform_updates['sent']}")
        print(f"  Received: {waveform_updates['received']}")

        if waveform_updates['sent'] > 0 and waveform_updates['received'] > 0:
            print("\n✅ Waveforms updating correctly!")
        else:
            print("\n⚠️  Waveforms may not be updating")

        print("\nCheck the UI visually:")
        print("- Waveforms should show audio activity")
        print("- Layout should be properly sized")
        print("- No overlapping elements")

        # Keep window open for visual inspection
        QTimer.singleShot(5000, app.quit)

    QTimer.singleShot(1000, start_test)
    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
@ -1,63 +0,0 @@
```python
#!/usr/bin/env python3
"""Test with voice decode fix"""

import sys
import time

sys.path.append('UI')
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QTimer
from main import PhoneUI


def main():
    app = QApplication(sys.argv)
    window = PhoneUI()

    # Track results
    results = {
        'decode_errors': 0,
        'frames_decoded': 0
    }

    original_debug = window.debug

    def track_debug(msg):
        if "Voice decode error:" in msg:
            results['decode_errors'] += 1
            original_debug(msg)  # Show the error
        elif "Received voice data frame" in msg:
            results['frames_decoded'] += 1
            if results['frames_decoded'] == 1:
                original_debug("First voice frame successfully received!")

    window.debug = track_debug
    window.show()

    print("\n=== VOICE DECODE FIX TEST ===\n")

    # Simple test sequence
    def start_test():
        print("1. Making call...")
        window.manager.phone_action(0, window)
        QTimer.singleShot(1000, answer_call)

    def answer_call():
        print("2. Answering call...")
        window.manager.phone_action(1, window)
        QTimer.singleShot(5000, show_results)

    def show_results():
        print("\n3. Results after 5 seconds:")
        print(f"   Decode errors: {results['decode_errors']}")
        print(f"   Frames decoded: {results['frames_decoded']}")

        if results['decode_errors'] == 0 and results['frames_decoded'] > 0:
            print("\n✅ Voice decode fixed! No more 'bytes' object errors.")
        else:
            print("\n❌ Still having issues with voice decode")

        app.quit()

    QTimer.singleShot(1000, start_test)
    sys.exit(app.exec_())


if __name__ == "__main__":
    main()
```
@ -1,188 +0,0 @@
```python
#!/usr/bin/env python3
"""
Debug script for Protocol integration issues.
"""

import sys
import os
import time

# Add Protocol directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'Protocol'))
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'UI'))

from protocol import IcingProtocol
from voice_codec import VoiceProtocol, FSKModem, Codec2Wrapper, Codec2Mode
from encryption import encrypt_message, decrypt_message, generate_iv
from protocol_phone_client import ProtocolPhoneClient


def test_fsk_modem():
    """Test FSK modem functionality."""
    print("\n=== Testing FSK Modem ===")
    modem = FSKModem()

    # Test with shorter data (FSK demodulation expects specific format)
    test_data = b"Hi"
    print(f"Original data: {test_data}")

    # Modulate with preamble for sync
    audio = modem.modulate(test_data, add_preamble=True)
    print(f"Modulated audio length: {len(audio)} samples")

    # Demodulate
    demod_data, confidence = modem.demodulate(audio)
    print(f"Demodulated data: {demod_data}")
    print(f"Confidence: {confidence:.2f}")

    # For now, just check that we got some output
    success = demod_data is not None and len(demod_data) > 0
    print(f"Success: {success}")
    return success


def test_codec2():
    """Test Codec2 wrapper."""
    print("\n=== Testing Codec2 ===")
    codec = Codec2Wrapper(Codec2Mode.MODE_1200)

    # Generate test audio (320 samples = 40ms @ 8kHz)
    import random
    audio = [random.randint(-1000, 1000) for _ in range(320)]

    # Encode
    frame = codec.encode(audio)
    if frame:
        print(f"Encoded frame: {len(frame.bits)} bytes")

        # Decode
        decoded = codec.decode(frame)
        print(f"Decoded audio: {len(decoded)} samples")
        return True
    else:
        print("Failed to encode audio")
        return False


def test_encryption():
    """Test encryption/decryption."""
    print("\n=== Testing Encryption ===")

    key = os.urandom(32)
    plaintext = b"Secret message for encryption test"
    iv = generate_iv()

    # Encrypt with ChaCha20
    encrypted = encrypt_message(
        plaintext=plaintext,
        key=key,
        iv=iv,
        cipher_type=1
    )
    # encrypted is the full EncryptedMessage bytes
    print(f"Encrypted length: {len(encrypted)} bytes")

    # Decrypt - decrypt_message expects the full message bytes
    decrypted = decrypt_message(encrypted, key, cipher_type=1)
    print(f"Decrypted: {decrypted}")

    success = decrypted == plaintext
    print(f"Success: {success}")
    return success


def test_protocol_integration():
    """Test full protocol integration."""
    print("\n=== Testing Protocol Integration ===")

    # Create two protocol instances
    protocol1 = IcingProtocol()
    protocol2 = IcingProtocol()

    print(f"Protocol 1 identity: {protocol1.identity_pubkey.hex()[:16]}...")
    print(f"Protocol 2 identity: {protocol2.identity_pubkey.hex()[:16]}...")

    # Exchange identities
    protocol1.set_peer_identity(protocol2.identity_pubkey.hex())
    protocol2.set_peer_identity(protocol1.identity_pubkey.hex())

    # Generate ephemeral keys
    protocol1.generate_ephemeral_keys()
    protocol2.generate_ephemeral_keys()

    print("Ephemeral keys generated")

    # Simulate key derivation
    shared_key = os.urandom(32)
    protocol1.hkdf_key = shared_key.hex()
    protocol2.hkdf_key = shared_key.hex()
    protocol1.cipher_type = 1
    protocol2.cipher_type = 1

    print("Keys derived (simulated)")

    # Test voice protocol
    voice1 = VoiceProtocol(protocol1)
    voice2 = VoiceProtocol(protocol2)

    print("Voice protocols initialized")

    return True


def test_ui_client():
    """Test UI client initialization."""
    print("\n=== Testing UI Client ===")

    # Mock the Qt components
    from unittest.mock import patch, MagicMock

    with patch('protocol_phone_client.QThread'):
        with patch('protocol_phone_client.pyqtSignal', return_value=MagicMock()):
            client = ProtocolPhoneClient(0)

            print(f"Client ID: {client.client_id}")
            print(f"Identity key: {client.get_identity_key()}")
            print(f"Local port: {client.get_local_port()}")

            # Set peer info with valid hex key
            test_hex_key = "1234567890abcdef" * 8  # 64 hex chars = 32 bytes
            client.set_peer_identity(test_hex_key)
            client.set_peer_port(12346)

            print("Peer info set successfully")

    return True


def main():
    """Run all debug tests."""
    print("Protocol Integration Debug Script")
    print("=" * 50)

    tests = [
        ("FSK Modem", test_fsk_modem),
        ("Codec2", test_codec2),
        ("Encryption", test_encryption),
        ("Protocol Integration", test_protocol_integration),
        ("UI Client", test_ui_client)
    ]

    results = []
    for name, test_func in tests:
        try:
            success = test_func()
            results.append((name, success))
        except Exception as e:
            print(f"\nERROR in {name}: {e}")
            import traceback
            traceback.print_exc()
            results.append((name, False))

    print("\n" + "=" * 50)
    print("Summary:")
    for name, success in results:
        status = "✓ PASS" if success else "✗ FAIL"
        print(f"{name}: {status}")

    all_passed = all(success for _, success in results)
    print(f"\nOverall: {'ALL TESTS PASSED' if all_passed else 'SOME TESTS FAILED'}")

    return 0 if all_passed else 1


if __name__ == '__main__':
    sys.exit(main())
```
@ -1,24 +0,0 @@
```python
# external_caller.py
import socket
import time


def connect():
    caller_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    caller_socket.connect(('localhost', 5555))
    caller_socket.send("CALLER".encode())
    print("Connected to GSM simulator as CALLER")
    time.sleep(2)  # Wait 2 seconds for receiver to connect

    for i in range(5):
        message = f"Audio packet {i + 1}"
        caller_socket.send(message.encode())
        print(f"Sent: {message}")
        time.sleep(1)

    caller_socket.send("CALL_END".encode())
    print("Call ended.")
    caller_socket.close()


if __name__ == "__main__":
    connect()
```
@ -1,37 +0,0 @@
```python
# external_receiver.py
import socket


def connect():
    receiver_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    receiver_socket.settimeout(15)  # Increase timeout to 15 seconds
    receiver_socket.connect(('localhost', 5555))
    receiver_socket.send("RECEIVER".encode())
    print("Connected to GSM simulator as RECEIVER")

    while True:
        try:
            data = receiver_socket.recv(1024).decode().strip()
            if not data:
                print("No data received. Connection closed.")
                break
            if data == "RINGING":
                print("Incoming call... ringing")
            elif data == "CALL_END":
                print("Call ended by caller.")
                break
            elif data == "CALL_DROPPED":
                print("Call dropped by network.")
                break
            else:
                print(f"Received: {data}")
        except socket.timeout:
            print("Timed out waiting for data.")
            break
        except Exception as e:
            print(f"Receiver error: {e}")
            break

    receiver_socket.close()


if __name__ == "__main__":
    connect()
```
@ -1,24 +0,0 @@
```python
# external_caller.py
import socket
import time


def connect():
    caller_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    caller_socket.connect(('localhost', 12345))
    caller_socket.send("CALLER".encode())
    print("Connected to GSM simulator as CALLER")
    time.sleep(2)  # Wait 2 seconds for receiver to connect

    for i in range(5):
        message = f"Audio packet {i + 1}"
        caller_socket.send(message.encode())
        print(f"Sent: {message}")
        time.sleep(1)

    caller_socket.send("CALL_END".encode())
    print("Call ended.")
    caller_socket.close()


if __name__ == "__main__":
    connect()
```
@ -1,37 +0,0 @@
```python
# external_receiver.py
import socket


def connect():
    receiver_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    receiver_socket.settimeout(15)  # Increase timeout to 15 seconds
    receiver_socket.connect(('localhost', 12345))
    receiver_socket.send("RECEIVER".encode())
    print("Connected to GSM simulator as RECEIVER")

    while True:
        try:
            data = receiver_socket.recv(1024).decode().strip()
            if not data:
                print("No data received. Connection closed.")
                break
            if data == "RINGING":
                print("Incoming call... ringing")
            elif data == "CALL_END":
                print("Call ended by caller.")
                break
            elif data == "CALL_DROPPED":
                print("Call dropped by network.")
                break
            else:
                print(f"Received: {data}")
        except socket.timeout:
            print("Timed out waiting for data.")
            break
        except Exception as e:
            print(f"Receiver error: {e}")
            break

    receiver_socket.close()


if __name__ == "__main__":
    connect()
```
@ -1,86 +0,0 @@
```python
import socket
import os
import time
import subprocess

# Configuration
HOST = "localhost"
PORT = 12345
INPUT_FILE = "wav/input.wav"
OUTPUT_FILE = "wav/received.wav"


def encrypt_data(data):
    return data  # Replace with your encryption protocol


def decrypt_data(data):
    return data  # Replace with your decryption protocol


def run_protocol(send_mode=True):
    """Connect to the simulator and send/receive data."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((HOST, PORT))
    print(f"Connected to simulator at {HOST}:{PORT}")

    if send_mode:
        # Sender mode: Encode audio with toast
        os.system(f"toast -p -l {INPUT_FILE}")  # Creates input.wav.gsm
        input_gsm_file = f"{INPUT_FILE}.gsm"
        if not os.path.exists(input_gsm_file):
            print(f"Error: {input_gsm_file} not created")
            sock.close()
            return
        with open(input_gsm_file, "rb") as f:
            voice_data = f.read()

        encrypted_data = encrypt_data(voice_data)
        sock.send(encrypted_data)
        print(f"Sent {len(encrypted_data)} bytes")
        os.remove(input_gsm_file)  # Clean up
    else:
        # Receiver mode: Wait for and receive data
        print("Waiting for data from sender...")
        received_data = b""
        sock.settimeout(5.0)
        try:
            while True:
                print("Calling recv()...")
                data = sock.recv(1024)
                print(f"Received {len(data)} bytes")
                if not data:
                    print("Connection closed by sender or simulator")
                    break
                received_data += data
        except socket.timeout:
            print("Timed out waiting for data")

        if received_data:
            with open("received.gsm", "wb") as f:
                f.write(decrypt_data(received_data))
            print(f"Wrote {len(received_data)} bytes to received.gsm")
            # Decode with untoast, then convert to WAV with sox
            result = subprocess.run(["untoast", "received.gsm"], capture_output=True, text=True)
            print(f"untoast return code: {result.returncode}")
            print(f"untoast stderr: {result.stderr}")
            if result.returncode == 0:
                if os.path.exists("received"):
                    # Convert raw PCM to WAV (8 kHz, mono, 16-bit)
                    subprocess.run(["sox", "-t", "raw", "-r", "8000", "-e", "signed",
                                    "-b", "16", "-c", "1", "received", OUTPUT_FILE])
                    os.remove("received")
                    print(f"Received and saved {len(received_data)} bytes to {OUTPUT_FILE}")
                else:
                    print("Error: 'received' file not created by untoast")
            else:
                print(f"untoast failed: {result.stderr}")
        else:
            print("No data received from simulator")

    sock.close()


if __name__ == "__main__":
    mode = input("Enter 'send' to send data or 'receive' to receive data: ").strip().lower()
    run_protocol(send_mode=(mode == "send"))
```
New binary files (not shown):

- BIN protocol_prototype/DryBox/wav/input_original.wav
- BIN protocol_prototype/DryBox/wav/test_codec_only.wav
- BIN protocol_prototype/DryBox/wav/test_full_pipeline.wav