🎥 Camera Movement Detection
Project Summary
This project detects camera motion in video or image sequences captured by a static camera. The goal is to identify only the physical movements of the camera itself (e.g., pan, tilt, shift, shake), independently of object motion in the scene.
- Motivation: In video surveillance systems, detecting camera movement (e.g., tampering) is often more critical than detecting object motion.
- Solution: Global motion between consecutive frames is estimated with ORB feature matching, Optical Flow, and affine transformation methods.
- Detection: Translation values derived from these transformations are used to decide whether there is "significant movement"; a minimal decision sketch follows below.
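A minimal sketch of that decision step; the threshold name and value are illustrative, not taken from the repository:

```python
import numpy as np

# Illustrative threshold; the repository's actual name and value may differ.
SHIFT_THRESHOLD_PX = 2.0

def is_significant_movement(transform: np.ndarray, threshold: float = SHIFT_THRESHOLD_PX) -> bool:
    """Flag camera movement from the translation part of a 2x3 affine matrix.

    For a 3x3 homography normalized so that H[2, 2] == 1, the same two entries
    approximate the frame-to-frame shift.
    """
    dx, dy = transform[0, 2], transform[1, 2]
    return float(np.hypot(dx, dy)) > threshold
```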
Technologies Used
Python
Pillow, NumPy
OpenCV
Streamlit
Docker
Pytest
SMTP
Dataset: Hugging Face syCen/CameraBench
Algorithm Descriptions
🔹 ORB + Homography
- Purpose: Detect global camera movement by identifying and matching keypoints between consecutive frames.
- Method:
- Use ORB (Oriented FAST and Rotated BRIEF) to extract prominent keypoints.
- Match keypoints between frames.
- Analyze the homography matrix to determine whether the motion comes from the camera or from objects in the scene (see the sketch below).
- Advantage: Focuses on global camera movement rather than local object motion.
- Limitation: Can be sensitive to dominant local (object) motion.
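A minimal sketch of this pipeline with OpenCV, assuming grayscale input frames; the feature count, RANSAC threshold, and final translation metric are illustrative and may differ from the repository's detectors/movement_detector.py:

```python
import cv2
import numpy as np

def estimate_camera_shift_orb(prev_gray, curr_gray, min_matches=10):
    """Estimate the global frame-to-frame shift (in pixels) via ORB + homography."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None

    # Hamming distance is the right metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards matches that belong to independently moving objects,
    # so the homography reflects the dominant (camera) motion.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    return float(np.hypot(H[0, 2], H[1, 2]))  # translation part of the homography
```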
🔹 Optical Flow
- Purpose: Track per-pixel motion to estimate the direction and intensity of movement.
- Method:
- Compute dense optical flow between the two frames using the Farneback method (sketched below).
- Calculate mean flow magnitude and variance.
- High mean + low variance = likely camera movement.
- Advantage: The variance check filters out local object motion such as a moving hand or arm.
- Limitation: May miss very small camera shakes.
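A minimal sketch of the flow-statistics idea, assuming grayscale frames; the Farneback parameters and the two thresholds are illustrative, not the repository's values:

```python
import cv2

def flow_statistics(prev_gray, curr_gray):
    """Return (mean, variance) of dense optical-flow magnitudes between two frames."""
    # Positional Farneback parameters: pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(mag.mean()), float(mag.var())

def looks_like_camera_motion(mean_mag, var_mag, mean_thr=1.0, var_thr=4.0):
    # High mean with low variance: the whole frame moved together (camera motion);
    # high variance instead points to localized object motion.
    return mean_mag > mean_thr and var_mag < var_thr
```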
🔹 Affine + Good Features
- Purpose: Model the global scene transformation (translation, rotation, scaling) to estimate camera movement.
- Method:
- Detect strong corners using Harris or Shi-Tomasi.
- Track these corners across frames.
- Analyze the affine transformation matrix.
- Advantage: Captures the global scene geometry (translation, rotation, scale).
- Limitation: Unstable when too few corners are detected; a minimal sketch of the pipeline follows below.
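A minimal sketch of the corner-tracking pipeline, assuming grayscale frames; the corner counts, quality settings, and RANSAC choice are illustrative and may differ from detectors/movement_detector_affine.py:

```python
import cv2
import numpy as np

def estimate_affine_shift(prev_gray, curr_gray, min_corners=10):
    """Estimate the global translation (in pixels) via Shi-Tomasi corners + affine fit."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None or len(corners) < min_corners:
        return None  # too few corners -> the estimate would be unstable

    # Track the corners into the next frame with pyramidal Lucas-Kanade.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    good_prev = corners[status.flatten() == 1]
    good_next = next_pts[status.flatten() == 1]
    if len(good_prev) < min_corners:
        return None

    # Partial affine = rotation + uniform scale + translation.
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_next, method=cv2.RANSAC)
    if M is None:
        return None
    return float(np.hypot(M[0, 2], M[1, 2]))  # translation magnitude in pixels
```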
Setup
1) Installation
git clone https://github.com/Arifkarakilic/camera-movement-detector.git
cd camera-movement-detector
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -r requirements.txt
2) Real-Time Email Alerts (SMTP)
Note: "Real-Time Camera" works only locally; Streamlit Cloud does not allow live camera access.
# .env
SMTP_USER=youremail@gmail.com
SMTP_PASSWORD=your_app_password
RECEIVER_EMAIL=targetmail@gmail.com
Live detection triggers an alert email using a background threading worker so the UI stream remains smooth.
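A minimal sketch of how such a non-blocking alert can be wired with the standard library, assuming Gmail SMTP over SSL and the environment variables from the .env example above; the function names here are illustrative, and the project's own mailing code (utils/send_mail.py, utils/notify.py) may be organized differently:

```python
import os
import smtplib
import threading
from email.message import EmailMessage

def _send_alert(frame_index: int) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Camera movement detected (frame {frame_index})"
    msg["From"] = os.getenv("SMTP_USER")
    msg["To"] = os.getenv("RECEIVER_EMAIL")
    msg.set_content("Significant camera movement was detected by the live monitor.")

    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(os.getenv("SMTP_USER"), os.getenv("SMTP_PASSWORD"))
        server.send_message(msg)

def send_alert_async(frame_index: int) -> None:
    # Fire-and-forget: the SMTP round trip runs off the main thread,
    # so the Streamlit video loop is not blocked.
    threading.Thread(target=_send_alert, args=(frame_index,), daemon=True).start()
```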
3) Launch
streamlit run camera-movement-detection/app.py
4) Usage
- Select from built-in frame folders or upload .mp4/.gif
- Tune sensitivity with sliders
- Inspect detected frames visually
- Enable real-time alerts to receive emails on movement
5) Testing
pytest tests/test_file_utils.py
Project Structure
camera-movement-detection/
├── app.py
├── config.py
├── detectors/
│   ├── movement_detector.py
│   ├── movement_detector_affine.py
│   └── movement_detector_optical.py
├── logic/
│   ├── realtime_detector.py
│   └── video_processor.py
├── ui/
│   ├── sidebar.py
│   ├── realtime_view.py
│   └── video_analysis_view.py
├── utils/
│   ├── file_utils.py
│   ├── time_utils.py
│   ├── visual_utils.py
│   ├── notify.py
│   └── send_mail.py
├── tests/
│   ├── test_file_utils.py
│   └── test_video_processor.py
├── .streamlit/
│   └── config.toml
├── .env
├── requirements.txt
├── Dockerfile
└── README.md
Sample Output
Detected camera motion frames: [13, 14, 15, 30]
Run with Docker
With Dockerfile
docker build -t camera-app .
docker run -p 8501:8501 --env-file .env camera-app
Remember to add .env to .dockerignore so your credentials are not copied into the image.
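For reference, a minimal Dockerfile sketch for a Streamlit app laid out like this project; the repository's actual Dockerfile may differ (base image, system packages required by opencv-python, exact paths):

```dockerfile
FROM python:3.11-slim

# opencv-python on slim images typically needs a couple of system libraries.
RUN apt-get update && apt-get install -y --no-install-recommends libgl1 libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "camera-movement-detection/app.py", "--server.address=0.0.0.0"]
```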
Links
AI Assistance & References
Some parts of this project were assisted and optimized using AI (OpenAI / ChatGPT).