First working, tray version
.gitignore (vendored, +9)
@@ -1 +1,10 @@
__pycache__/
.vscode/
.venv/
.env
.ruff_cache/
.pytest_cache/
.claude/
AGENTS.md
DESIGN_DOCUMENT.md
.mypy_cache/
CHANGELOG.md (new file, +52)
@@ -0,0 +1,52 @@
# Changelog

## 1.0.0 (2026-01-30)

First complete version of the Vault application.

### Core

- **file_entry.py** - Immutable dataclass representing a file (path, SHA-256 hash, size, timestamps)
- **manifest.py** - Vault metadata management (vault_id, replica locations, file list). Saved to/loaded from `.vault/manifest.json`. Location deduplication with resolved paths
- **lock.py** - Exclusive vault access via fcntl (LOCK_EX). Owner PID detection
- **image_manager.py** - Creating sparse .vault files, exFAT formatting (mkfs.exfat), resizing (truncate + fsck), info queries
- **container.py** - Mount/unmount via udisksctl (loop device, no root privileges)
- **file_watcher.py** - File change detection via watchdog/inotify (create, modify, delete, move). Ignore-pattern support
- **file_sync.py** - File copying with a progress callback (chunked, 1 MB). Per-file sync (SHA-256 + timestamp comparison)
- **sync_manager.py** - Orchestrates synchronization between replicas. Real-time change propagation via the file watcher. Manifest-based comparison for reconnect sync. Pause/resume
- **vault.py** - Main Vault class orchestrating everything:
  - Opening/closing the vault with lock management
  - Automatic mounting of secondary replicas from the manifest
  - Adding/removing replicas with a full sync
  - Manual synchronization
  - Resizing all replicas (unmount → resize → remount)
  - Full-capacity detection (>90% warning)
  - Replica availability polling (30 s) with auto-reconnect
  - Consistent path comparison via Path.resolve()
  - Graceful shutdown (SIGINT/SIGTERM)
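The chunked hashing described for file_sync.py might be sketched as follows; `hash_file` and the progress-callback signature are illustrative assumptions, not the actual API:

```python
# Sketch of chunked SHA-256 hashing with a progress callback, assuming
# the 1 MB chunk size stated above. `progress` receives (bytes_done, total).
import hashlib
from pathlib import Path
from typing import Callable, Optional

CHUNK_SIZE = 1024 * 1024  # 1 MB, as stated in the changelog

def hash_file(path: Path, progress: Optional[Callable[[int, int], None]] = None) -> str:
    """Return the SHA-256 digest of a file, reading it in 1 MB chunks."""
    total = path.stat().st_size
    done = 0
    sha = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            sha.update(chunk)
            done += len(chunk)
            if progress:
                progress(done, total)
    return "sha256:" + sha.hexdigest()
```

The `sha256:` prefix matches the hash format used in the manifest.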
### GUI (System Tray)

- **tray_app.py** - System tray daemon with a context menu:
  - Create/open/close a vault
  - Add a replica, manage replicas
  - Synchronize, grow the vault
  - Icon states: gray (closed), green (open), yellow (some replicas missing), blue (syncing), red (error)
  - 5 s status update timer, 30 s replica check timer
  - Warning when more than 90% full
- **notifications.py** - System notifications via notify-send
- **dialogs/new_vault.py** - Vault creation dialog (name, path, size with 1/5/10/50 GB quick buttons)
- **dialogs/open_vault.py** - Dialog for opening an existing .vault file
- **dialogs/manage_replicas.py** - Replica table with status (primary/secondary, connected/disconnected) and a remove button
- **dialogs/sync_progress.py** - Progress dialog with a progress bar, the current file, a log, and a cancel button
- **dialogs/resize_vault.py** - Dialog for growing the vault, showing current usage and a spinner for the new size

### Tests

- 130 unit tests covering all core modules
- test_file_entry, test_manifest, test_lock, test_image_manager, test_container
- test_file_watcher, test_file_sync, test_sync_manager, test_vault

### Bugfix

- Fixed duplicate replica display when opening a secondary container: all paths are now compared via `Path.resolve()` for a canonical form
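The canonical-path comparison behind that fix can be illustrated with a small sketch (`dedupe_locations` is a hypothetical helper, not the manifest.py API):

```python
# Two different spellings of the same location collapse to one entry
# once both sides go through Path.resolve().
from pathlib import Path

def dedupe_locations(paths: list[str]) -> list[Path]:
    """Deduplicate paths by their canonical (resolved) form, keeping order."""
    seen: set[Path] = set()
    unique: list[Path] = []
    for p in paths:
        resolved = Path(p).resolve()
        if resolved not in seen:
            seen.add(resolved)
            unique.append(resolved)
    return unique
```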
Logic/Filehandler.py (deleted, -37)
@@ -1,37 +0,0 @@
import zipfile
import os


def vytvor_zip(zip_path, soubory):
    with zipfile.ZipFile(zip_path, 'w') as zipf:
        for soubor in soubory:
            zipf.write(soubor)


def extrahuj_zip(zip_path, cilovy_adresar):
    with zipfile.ZipFile(zip_path, 'r') as zipf:
        zipf.extractall(cilovy_adresar)


def vypis_obsah_zip(zip_path):
    with zipfile.ZipFile(zip_path, 'r') as zipf:
        for nazev in zipf.namelist():
            print(nazev)


def pridej_do_zip(zip_path, soubor):
    with zipfile.ZipFile(zip_path, 'a') as zipf:
        zipf.write(soubor)


def prepis_soubor_v_zipu(zip_path, novy_soubor, jmeno_v_zipu=None):
    jmeno_v_zipu = jmeno_v_zipu or os.path.basename(novy_soubor)
    temp_zip = zip_path + '.tmp'

    with zipfile.ZipFile(zip_path, 'r') as zip_read, \
         zipfile.ZipFile(temp_zip, 'w') as zip_write:
        for item in zip_read.infolist():
            if item.filename != jmeno_v_zipu:
                data = zip_read.read(item.filename)
                zip_write.writestr(item, data)
        zip_write.write(novy_soubor, arcname=jmeno_v_zipu)

    os.replace(temp_zip, zip_path)

#vytvor_zip("test.vf", ["material_hardness.xlsx", "material_denominations.xlsx"])
#vypis_obsah_zip("test.vf")
PROJECT.md (new file, +228)
@@ -0,0 +1,228 @@
# Vault - Project Documentation

## About

Resilient storage with a redundant structure. The program manages 2+ containers (disk image files) that hold:
- A file structure accessible through a FUSE mount
- Metadata with the locations of the other containers
- Version information
- Synchronized copies of the same content across all containers

**Key principle:** The application runs as a tray daemon and mounts the vault as an ordinary folder. The user works in their favorite file manager (Nautilus, Dolphin, Thunar). In the background, the application keeps the replicas synchronized.

**Platform:** Linux only (FUSE mount). Windows/Mac maybe later.

---

## Specification

### Storage format
- **Container:** Raw disk image (.vault) with sparse allocation
- **Filesystem:** exFAT (cross-platform, no 4 GB limit)
- **Extension:** `.vault`
- **Size:** Chosen by the user at creation time
- **Internal structure:**
```
myvault.vault (sparse file)
└── [exFAT filesystem]
    ├── .vault/
    │   ├── manifest.json
    │   └── lock          # Lock file for exclusive access
    ├── documents/
    │   └── file.txt
    └── photos/
        └── image.jpg
```

### Mount
- **Method:** udisksctl (loop device + mount, no root)
- **Mount point:** Automatic (udisksctl) or user-chosen
- **Exclusive access:** fcntl lock file - only one instance can have the vault open
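The fcntl-based exclusive access might look like this minimal sketch; `VaultLock` and its method names are illustrative assumptions, not the actual lock.py API:

```python
# Hold an exclusive advisory lock on the lock file inside the vault,
# recording the owner PID so other instances can report who holds it.
import fcntl
import os
from pathlib import Path

class VaultLock:
    def __init__(self, lock_path: Path) -> None:
        self.lock_path = lock_path
        self._fh = None

    def acquire(self) -> bool:
        self._fh = open(self.lock_path, "w")
        try:
            # LOCK_NB makes the call fail immediately if another process holds the lock
            fcntl.flock(self._fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            self._fh.close()
            self._fh = None
            return False
        self._fh.write(str(os.getpid()))  # owner PID, readable by other instances
        self._fh.flush()
        return True

    def release(self) -> None:
        if self._fh:
            fcntl.flock(self._fh, fcntl.LOCK_UN)
            self._fh.close()
            self._fh = None
```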

### Readability without the Vault app
- `sudo mount -o loop myvault.vault /mnt/vault`
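For contrast, the root-less flow the app itself uses goes through udisksctl. The subcommands below are real udisksctl subcommands; the helper functions are a simplified sketch (the real code also parses the loop device and mount point from udisksctl's output):

```python
# Build the udisksctl command lines for attaching, mounting, and
# unmounting a .vault image without root privileges.
def loop_setup_cmd(image_path: str) -> list[str]:
    # Attach the image file to a loop device
    return ["udisksctl", "loop-setup", "-f", image_path]

def mount_cmd(loop_device: str) -> list[str]:
    # Mount the loop device; udisks chooses a mount point under /run/media/
    return ["udisksctl", "mount", "-b", loop_device]

def unmount_cmd(loop_device: str) -> list[str]:
    return ["udisksctl", "unmount", "-b", loop_device]

def loop_delete_cmd(loop_device: str) -> list[str]:
    # Detach the loop device again after unmounting
    return ["udisksctl", "loop-delete", "-b", loop_device]
```

Each list is suitable for `subprocess.run(cmd, capture_output=True, check=True)`.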

### Resize
- When full: warn the user (notification at >90% usage)
- The user decides on growing the vault via a GUI dialog
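Growing a sparse image is essentially extending its apparent size; a minimal sketch (the real flow also unmounts, runs fsck on the exFAT filesystem inside, and remounts, which is omitted here, and `grow_image` is an illustrative name):

```python
# Extend a sparse image file; truncation to a larger size allocates
# no additional blocks on disk.
import os
from pathlib import Path

def grow_image(path: Path, new_size_bytes: int) -> None:
    """Extend a sparse image file; shrinking is refused to avoid data loss."""
    current = path.stat().st_size
    if new_size_bytes < current:
        raise ValueError("shrinking a vault image is not supported")
    os.truncate(path, new_size_bytes)
```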

### Container locations
- Anywhere with RW access (different disks, network locations, USB, cloud)
- Each container carries metadata with the paths to the other containers

### Synchronization
- **Level:** Individual files (NOT whole .vault images)
- **Internal Python implementation** (no rsync)
- **Hash** (SHA-256) to detect content changes
- **Timestamp** to pick the master version (newer wins)
- Assumption: usually all containers are available → synchronous writes
- Fallback: when a container is unavailable, it is synchronized by timestamp on reconnect
- Chunked copy with a progress callback for large files
- **Background sync:** Changes are detected via inotify/watchdog and propagated automatically

**How sync works:**
1. All replicas are mounted at the same time (each at its own temp mount point)
2. The user only sees the main mount point
3. On a file change → the file is copied to all mounted replicas
4. When an unavailable replica reconnects → manifests are compared → only changed files are copied
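The reconnect comparison in step 4 can be sketched as a pure decision function: copy when hashes differ, and let the newer timestamp win. Field names mirror the manifest structure; `files_to_copy` and the handling of files present on only one side are assumptions of this sketch (real deletion handling is more subtle):

```python
# Decide, per path, which direction a file should be copied after a
# replica reconnects. ISO-8601 timestamps compare correctly as strings.
from dataclasses import dataclass

@dataclass(frozen=True)
class FileEntry:
    path: str
    hash: str
    modified: str  # ISO-8601 UTC timestamp

def files_to_copy(local: dict[str, FileEntry],
                  remote: dict[str, FileEntry]) -> tuple[list[str], list[str]]:
    """Return (push, pull): paths to copy local→remote and remote→local."""
    push, pull = [], []
    for p in local.keys() | remote.keys():
        a, b = local.get(p), remote.get(p)
        if a and not b:
            push.append(p)          # only local has it
        elif b and not a:
            pull.append(p)          # only remote has it
        elif a.hash != b.hash:
            # content differs: newer modification time wins
            (push if a.modified > b.modified else pull).append(p)
    return sorted(push), sorted(pull)
```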
### GUI
- **Framework:** PySide6
- **Type:** System tray application (daemon)
- **UI language:** Czech
- **The user works in:** The OS's standard file manager

---

## GUI - System Tray App

### Tray icon - states
| Icon | State |
|------|-------|
| Green | Vault open, everything synchronized |
| Blue | Synchronization in progress |
| Yellow | Some replicas unavailable |
| Red | Error (vault full, sync failed, etc.) |
| Gray | No vault open |

### Tray menu
```
My Vault (3/3 replicas online)   [status]
────────────────────────────────────
Open folder                      [opens the mount point in the file manager]
────────────────────────────────────
Create new vault...
Open vault...
Close vault
────────────────────────────────────
Add replica...
Manage replicas...
────────────────────────────────────
Synchronize
Grow vault...
────────────────────────────────────
Quit
```
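The icon-state table above reduces to a small pure function; the name and the priority ordering (closed > error > syncing > degraded) are assumptions of this sketch, not the actual tray_app.py logic:

```python
# Map current vault status to one of the five tray icon colors.
def icon_state(vault_open: bool, syncing: bool,
               replicas_online: int, replicas_total: int,
               error: bool = False) -> str:
    if not vault_open:
        return "gray"
    if error:
        return "red"
    if syncing:
        return "blue"
    if replicas_online < replicas_total:
        return "yellow"
    return "green"
```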

### Dialogs
- **New vault:** Name, path, size (with 1/5/10/50 GB quick buttons)
- **Open vault:** Pick a .vault file
- **Manage replicas:** Replica table with status and a remove button
- **Grow vault:** Current usage, new size
- **Sync progress:** Progress bar, current file, log, cancel

---

## Architecture

```
Vault/
├── Vault.py                 # Entry point
├── pyproject.toml           # Poetry configuration
├── src/
│   ├── core/                # Business logic (NO UI imports!)
│   │   ├── vault.py         # Main Vault class - orchestration
│   │   ├── container.py     # Mount/unmount via udisksctl
│   │   ├── image_manager.py # Creating/resizing sparse .vault files
│   │   ├── lock.py          # Exclusive access (fcntl)
│   │   ├── sync_manager.py  # Synchronization between replicas
│   │   ├── file_watcher.py  # watchdog/inotify change detection
│   │   ├── file_sync.py     # File copying with a progress callback
│   │   ├── manifest.py      # Metadata - locations, versions, files
│   │   └── file_entry.py    # File representation (path, hash, timestamp)
│   └── ui/
│       ├── tray_app.py      # System tray application + menu
│       ├── notifications.py # System notifications (notify-send)
│       └── dialogs/
│           ├── new_vault.py
│           ├── open_vault.py
│           ├── manage_replicas.py
│           ├── resize_vault.py
│           └── sync_progress.py
└── tests/
    ├── test_file_entry.py
    ├── test_manifest.py
    ├── test_lock.py
    ├── test_image_manager.py
    ├── test_container.py
    ├── test_file_watcher.py
    ├── test_file_sync.py
    ├── test_sync_manager.py
    └── test_vault.py
```

---

## Metadata structure (.vault/manifest.json)

```json
{
  "vault_id": "550e8400-e29b-41d4-a716-446655440000",
  "vault_name": "My Vault",
  "version": 1,
  "created": "2026-01-28T10:30:00Z",
  "last_modified": "2026-01-28T15:45:00Z",
  "image_size_mb": 10240,
  "locations": [
    {
      "path": "/mnt/disk1/myvault.vault",
      "last_seen": "2026-01-28T15:45:00Z",
      "status": "active"
    },
    {
      "path": "/mnt/usb/myvault.vault",
      "last_seen": "2026-01-28T15:45:00Z",
      "status": "active"
    }
  ],
  "files": [
    {
      "path": "documents/file.txt",
      "hash": "sha256:e3b0c44...",
      "size": 1234,
      "created": "2026-01-28T10:30:00Z",
      "modified": "2026-01-28T14:20:00Z"
    }
  ]
}
```
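Loading that manifest into typed objects might look like this sketch; field names follow the JSON above, while the class and function names are illustrative (the `files` list is omitted for brevity):

```python
# Parse the manifest JSON into small frozen/typed records.
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Location:
    path: str
    last_seen: str
    status: str

@dataclass
class Manifest:
    vault_id: str
    vault_name: str
    version: int
    image_size_mb: int
    locations: list[Location]

def load_manifest(text: str) -> Manifest:
    raw = json.loads(text)
    return Manifest(
        vault_id=raw["vault_id"],
        vault_name=raw["vault_name"],
        version=raw["version"],
        image_size_mb=raw["image_size_mb"],
        locations=[Location(**loc) for loc in raw["locations"]],
    )
```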

---

## Dependencies

### Python (Poetry)
```
PySide6>=6.10.1        # GUI framework
pyfuse3>=3.4.2         # FUSE binding (unused - udisksctl instead of FUSE)
watchdog>=6.0.0        # File system events (inotify)
loguru>=0.7.3          # Logging
python-dotenv>=1.2.1   # Environment variables

# Dev
pytest>=9.0.2
pytest-cov>=7.0.0
ruff>=0.14.14
mypy>=1.19.1
```

### System dependencies (Linux)
```bash
sudo apt install udisks2 exfatprogs
```

---

## Current status

**Phases 1-5: Done**

- Complete core logic (vault, container, image_manager, manifest, file_entry, lock, sync_manager, file_watcher, file_sync)
- System tray GUI with menu, notifications, dialogs
- Replica management (add, remove, status table)
- Automatic replica availability detection (30 s polling) with auto-reconnect
- Vault resize via a GUI dialog
- Full-capacity detection with a warning at >90%
- Graceful shutdown (SIGINT/SIGTERM)
- 130 tests, all passing
- ruff + mypy clean
Test/Startup.py (deleted, -3)
@@ -1,3 +0,0 @@
def startup_test():
    print("Zahajuji testování")
    print("V pořádku")
UI/GUI.py (deleted, -43)
@@ -1,43 +0,0 @@
import tkinter as tk
from tkinter import ttk
from Logic.Filehandler import vytvor_zip


def run_app():
    root = tk.Tk()
    root.title("Three Tab GUI")
    root.geometry("1280x720")

    notebook = ttk.Notebook(root)
    notebook.pack(expand=True, fill='both')

    tab1 = ttk.Frame(notebook)
    tab2 = ttk.Frame(notebook)
    tab3 = ttk.Frame(notebook)

    notebook.add(tab1, text="Tab 1")
    notebook.add(tab2, text="Tab 2")
    notebook.add(tab3, text="Tab 3")

    label1 = ttk.Label(tab1, text="Welcome to Tab 1")
    columns = ("Name", "Age", "Occupation")

    tree = ttk.Treeview(tab1, columns=columns, show='headings')

    # Define column headers
    for col in columns:
        tree.heading(col, text=col)
        tree.column(col, anchor='center', width=150)

    label1.pack(pady=20)
    tree.pack(pady=20)

    label2 = ttk.Label(tab2, text="This is Tab 2")
    label2.pack(pady=20)

    label3 = ttk.Label(tab3, text="You are in Tab 3")
    label3.pack(pady=20)

    root.mainloop()


if __name__ == "__main__":
    run_app()
Vault.py (36 lines changed)
@@ -1,6 +1,32 @@
-from UI.GUI import run_app
-from Logic.Filehandler import *
-from Test.Startup import startup_test
+"""Vault - Resilient storage application.

-startup_test()
-run_app()
+Entry point for the Vault application.
+"""
+
+import sys
+
+from loguru import logger
+
+
+def setup_logging() -> None:
+    """Configure loguru logging."""
+    logger.remove()
+    logger.add(
+        sys.stderr,
+        format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | {message}",
+        level="INFO",
+    )
+
+
+def main() -> int:
+    """Main entry point."""
+    setup_logging()
+    logger.info("Vault starting...")
+
+    from src.ui.tray_app import main as tray_main
+
+    return tray_main()
+
+
+if __name__ == "__main__":
+    sys.exit(main())
poetry.lock (generated, new file, +816)
@@ -0,0 +1,816 @@
|
||||
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
|
||||
|
||||
[[package]]
|
||||
name = "attrs"
|
||||
version = "25.4.0"
|
||||
description = "Classes Without Boilerplate"
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "attrs-25.4.0-py3-none-any.whl", hash = "sha256:adcf7e2a1fb3b36ac48d97835bb6d8ade15b8dcce26aba8bf1d14847b57a3373"},
|
||||
{file = "attrs-25.4.0.tar.gz", hash = "sha256:16d5969b87f0859ef33a48b35d55ac1be6e42ae49d5e853b597db70c35c57e11"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "cffi"
|
||||
version = "2.0.0"
|
||||
description = "Foreign Function Interface for Python calling C code."
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
markers = "os_name == \"nt\" and implementation_name != \"pypy\""
|
||||
files = [
|
||||
{file = "cffi-2.0.0-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-win32.whl", hash = "sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a"},
|
||||
{file = "cffi-2.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-win32.whl", hash = "sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5"},
|
||||
{file = "cffi-2.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-win32.whl", hash = "sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-win32.whl", hash = "sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9"},
|
||||
{file = "cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
pycparser = {version = "*", markers = "implementation_name != \"PyPy\""}
|
||||
|
||||
[[package]]
|
||||
name = "colorama"
|
||||
version = "0.4.6"
|
||||
description = "Cross-platform colored terminal text."
|
||||
optional = false
|
||||
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
|
||||
groups = ["main", "dev"]
|
||||
markers = "sys_platform == \"win32\""
|
||||
files = [
|
||||
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
|
||||
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "coverage"
|
||||
version = "7.13.2"
|
||||
description = "Code coverage measurement for Python"
|
||||
optional = false
|
||||
python-versions = ">=3.10"
|
||||
groups = ["dev"]
|
||||
files = [
|
||||
{file = "coverage-7.13.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f4af3b01763909f477ea17c962e2cca8f39b350a4e46e3a30838b2c12e31b81b"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:36393bd2841fa0b59498f75466ee9bdec4f770d3254f031f23e8fd8e140ffdd2"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:9cc7573518b7e2186bd229b1a0fe24a807273798832c27032c4510f47ffdb896"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:ca9566769b69a5e216a4e176d54b9df88f29d750c5b78dbb899e379b4e14b30c"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9c9bdea644e94fd66d75a6f7e9a97bb822371e1fe7eadae2cacd50fcbc28e4dc"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:5bd447332ec4f45838c1ad42268ce21ca87c40deb86eabd59888859b66be22a5"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:7c79ad5c28a16a1277e1187cf83ea8dafdcc689a784228a7d390f19776db7c31"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:76e06ccacd1fb6ada5d076ed98a8c6f66e2e6acd3df02819e2ee29fd637b76ad"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:49d49e9a5e9f4dc3d3dac95278a020afa6d6bdd41f63608a76fa05a719d5b66f"},
|
||||
{file = "coverage-7.13.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:ed2bce0e7bfa53f7b0b01c722da289ef6ad4c18ebd52b1f93704c21f116360c8"},
{file = "coverage-7.13.2-cp310-cp310-win32.whl", hash = "sha256:1574983178b35b9af4db4a9f7328a18a14a0a0ce76ffaa1c1bacb4cc82089a7c"},
{file = "coverage-7.13.2-cp310-cp310-win_amd64.whl", hash = "sha256:a360a8baeb038928ceb996f5623a4cd508728f8f13e08d4e96ce161702f3dd99"},
{file = "coverage-7.13.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:060ebf6f2c51aff5ba38e1f43a2095e087389b1c69d559fde6049a4b0001320e"},
{file = "coverage-7.13.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c1ea8ca9db5e7469cd364552985e15911548ea5b69c48a17291f0cac70484b2e"},
{file = "coverage-7.13.2-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:b780090d15fd58f07cf2011943e25a5f0c1c894384b13a216b6c86c8a8a7c508"},
{file = "coverage-7.13.2-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:88a800258d83acb803c38175b4495d293656d5fac48659c953c18e5f539a274b"},
{file = "coverage-7.13.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6326e18e9a553e674d948536a04a80d850a5eeefe2aae2e6d7cf05d54046c01b"},
{file = "coverage-7.13.2-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:59562de3f797979e1ff07c587e2ac36ba60ca59d16c211eceaa579c266c5022f"},
{file = "coverage-7.13.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:27ba1ed6f66b0e2d61bfa78874dffd4f8c3a12f8e2b5410e515ab345ba7bc9c3"},
{file = "coverage-7.13.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:8be48da4d47cc68754ce643ea50b3234557cbefe47c2f120495e7bd0a2756f2b"},
{file = "coverage-7.13.2-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:2a47a4223d3361b91176aedd9d4e05844ca67d7188456227b6bf5e436630c9a1"},
{file = "coverage-7.13.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c6f141b468740197d6bd38f2b26ade124363228cc3f9858bd9924ab059e00059"},
{file = "coverage-7.13.2-cp311-cp311-win32.whl", hash = "sha256:89567798404af067604246e01a49ef907d112edf2b75ef814b1364d5ce267031"},
{file = "coverage-7.13.2-cp311-cp311-win_amd64.whl", hash = "sha256:21dd57941804ae2ac7e921771a5e21bbf9aabec317a041d164853ad0a96ce31e"},
{file = "coverage-7.13.2-cp311-cp311-win_arm64.whl", hash = "sha256:10758e0586c134a0bafa28f2d37dd2cdb5e4a90de25c0fc0c77dabbad46eca28"},
{file = "coverage-7.13.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f106b2af193f965d0d3234f3f83fc35278c7fb935dfbde56ae2da3dd2c03b84d"},
{file = "coverage-7.13.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:78f45d21dc4d5d6bd29323f0320089ef7eae16e4bef712dff79d184fa7330af3"},
{file = "coverage-7.13.2-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:fae91dfecd816444c74531a9c3d6ded17a504767e97aa674d44f638107265b99"},
{file = "coverage-7.13.2-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:264657171406c114787b441484de620e03d8f7202f113d62fcd3d9688baa3e6f"},
{file = "coverage-7.13.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ae47d8dcd3ded0155afbb59c62bd8ab07ea0fd4902e1c40567439e6db9dcaf2f"},
{file = "coverage-7.13.2-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:8a0b33e9fd838220b007ce8f299114d406c1e8edb21336af4c97a26ecfd185aa"},
{file = "coverage-7.13.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b3becbea7f3ce9a2d4d430f223ec15888e4deb31395840a79e916368d6004cce"},
{file = "coverage-7.13.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:f819c727a6e6eeb8711e4ce63d78c620f69630a2e9d53bc95ca5379f57b6ba94"},
{file = "coverage-7.13.2-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:4f7b71757a3ab19f7ba286e04c181004c1d61be921795ee8ba6970fd0ec91da5"},
{file = "coverage-7.13.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b7fc50d2afd2e6b4f6f2f403b70103d280a8e0cb35320cbbe6debcda02a1030b"},
{file = "coverage-7.13.2-cp312-cp312-win32.whl", hash = "sha256:292250282cf9bcf206b543d7608bda17ca6fc151f4cbae949fc7e115112fbd41"},
{file = "coverage-7.13.2-cp312-cp312-win_amd64.whl", hash = "sha256:eeea10169fac01549a7921d27a3e517194ae254b542102267bef7a93ed38c40e"},
{file = "coverage-7.13.2-cp312-cp312-win_arm64.whl", hash = "sha256:2a5b567f0b635b592c917f96b9a9cb3dbd4c320d03f4bf94e9084e494f2e8894"},
{file = "coverage-7.13.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ed75de7d1217cf3b99365d110975f83af0528c849ef5180a12fd91b5064df9d6"},
{file = "coverage-7.13.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:97e596de8fa9bada4d88fde64a3f4d37f1b6131e4faa32bad7808abc79887ddc"},
{file = "coverage-7.13.2-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:68c86173562ed4413345410c9480a8d64864ac5e54a5cda236748031e094229f"},
{file = "coverage-7.13.2-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7be4d613638d678b2b3773b8f687537b284d7074695a43fe2fbbfc0e31ceaed1"},
{file = "coverage-7.13.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d7f63ce526a96acd0e16c4af8b50b64334239550402fb1607ce6a584a6d62ce9"},
{file = "coverage-7.13.2-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:406821f37f864f968e29ac14c3fccae0fec9fdeba48327f0341decf4daf92d7c"},
{file = "coverage-7.13.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ee68e5a4e3e5443623406b905db447dceddffee0dceb39f4e0cd9ec2a35004b5"},
{file = "coverage-7.13.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:2ee0e58cca0c17dd9c6c1cdde02bb705c7b3fbfa5f3b0b5afeda20d4ebff8ef4"},
{file = "coverage-7.13.2-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:6e5bbb5018bf76a56aabdb64246b5288d5ae1b7d0dd4d0534fe86df2c2992d1c"},
{file = "coverage-7.13.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a55516c68ef3e08e134e818d5e308ffa6b1337cc8b092b69b24287bf07d38e31"},
{file = "coverage-7.13.2-cp313-cp313-win32.whl", hash = "sha256:5b20211c47a8abf4abc3319d8ce2464864fa9f30c5fcaf958a3eed92f4f1fef8"},
{file = "coverage-7.13.2-cp313-cp313-win_amd64.whl", hash = "sha256:14f500232e521201cf031549fb1ebdfc0a40f401cf519157f76c397e586c3beb"},
{file = "coverage-7.13.2-cp313-cp313-win_arm64.whl", hash = "sha256:9779310cb5a9778a60c899f075a8514c89fa6d10131445c2207fc893e0b14557"},
{file = "coverage-7.13.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:e64fa5a1e41ce5df6b547cbc3d3699381c9e2c2c369c67837e716ed0f549d48e"},
{file = "coverage-7.13.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:b01899e82a04085b6561eb233fd688474f57455e8ad35cd82286463ba06332b7"},
{file = "coverage-7.13.2-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:838943bea48be0e2768b0cf7819544cdedc1bbb2f28427eabb6eb8c9eb2285d3"},
{file = "coverage-7.13.2-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:93d1d25ec2b27e90bcfef7012992d1f5121b51161b8bffcda756a816cf13c2c3"},
{file = "coverage-7.13.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:93b57142f9621b0d12349c43fc7741fe578e4bc914c1e5a54142856cfc0bf421"},
{file = "coverage-7.13.2-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f06799ae1bdfff7ccb8665d75f8291c69110ba9585253de254688aa8a1ccc6c5"},
{file = "coverage-7.13.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:7f9405ab4f81d490811b1d91c7a20361135a2df4c170e7f0b747a794da5b7f23"},
{file = "coverage-7.13.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:f9ab1d5b86f8fbc97a5b3cd6280a3fd85fef3b028689d8a2c00918f0d82c728c"},
{file = "coverage-7.13.2-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:f674f59712d67e841525b99e5e2b595250e39b529c3bda14764e4f625a3fa01f"},
{file = "coverage-7.13.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c6cadac7b8ace1ba9144feb1ae3cb787a6065ba6d23ffc59a934b16406c26573"},
{file = "coverage-7.13.2-cp313-cp313t-win32.whl", hash = "sha256:14ae4146465f8e6e6253eba0cccd57423e598a4cb925958b240c805300918343"},
{file = "coverage-7.13.2-cp313-cp313t-win_amd64.whl", hash = "sha256:9074896edd705a05769e3de0eac0a8388484b503b68863dd06d5e473f874fd47"},
{file = "coverage-7.13.2-cp313-cp313t-win_arm64.whl", hash = "sha256:69e526e14f3f854eda573d3cf40cffd29a1a91c684743d904c33dbdcd0e0f3e7"},
{file = "coverage-7.13.2-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:387a825f43d680e7310e6f325b2167dd093bc8ffd933b83e9aa0983cf6e0a2ef"},
{file = "coverage-7.13.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:f0d7fea9d8e5d778cd5a9e8fc38308ad688f02040e883cdc13311ef2748cb40f"},
{file = "coverage-7.13.2-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:e080afb413be106c95c4ee96b4fffdc9e2fa56a8bbf90b5c0918e5c4449412f5"},
{file = "coverage-7.13.2-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:a7fc042ba3c7ce25b8a9f097eb0f32a5ce1ccdb639d9eec114e26def98e1f8a4"},
{file = "coverage-7.13.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d0ba505e021557f7f8173ee8cd6b926373d8653e5ff7581ae2efce1b11ef4c27"},
{file = "coverage-7.13.2-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:7de326f80e3451bd5cc7239ab46c73ddb658fe0b7649476bc7413572d36cd548"},
{file = "coverage-7.13.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:abaea04f1e7e34841d4a7b343904a3f59481f62f9df39e2cd399d69a187a9660"},
{file = "coverage-7.13.2-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:9f93959ee0c604bccd8e0697be21de0887b1f73efcc3aa73a3ec0fd13feace92"},
{file = "coverage-7.13.2-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:13fe81ead04e34e105bf1b3c9f9cdf32ce31736ee5d90a8d2de02b9d3e1bcb82"},
{file = "coverage-7.13.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d6d16b0f71120e365741bca2cb473ca6fe38930bc5431c5e850ba949f708f892"},
{file = "coverage-7.13.2-cp314-cp314-win32.whl", hash = "sha256:9b2f4714bb7d99ba3790ee095b3b4ac94767e1347fe424278a0b10acb3ff04fe"},
{file = "coverage-7.13.2-cp314-cp314-win_amd64.whl", hash = "sha256:e4121a90823a063d717a96e0a0529c727fb31ea889369a0ee3ec00ed99bf6859"},
{file = "coverage-7.13.2-cp314-cp314-win_arm64.whl", hash = "sha256:6873f0271b4a15a33e7590f338d823f6f66f91ed147a03938d7ce26efd04eee6"},
{file = "coverage-7.13.2-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:f61d349f5b7cd95c34017f1927ee379bfbe9884300d74e07cf630ccf7a610c1b"},
{file = "coverage-7.13.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a43d34ce714f4ca674c0d90beb760eb05aad906f2c47580ccee9da8fe8bfb417"},
{file = "coverage-7.13.2-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:bff1b04cb9d4900ce5c56c4942f047dc7efe57e2608cb7c3c8936e9970ccdbee"},
{file = "coverage-7.13.2-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6ae99e4560963ad8e163e819e5d77d413d331fd00566c1e0856aa252303552c1"},
{file = "coverage-7.13.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e79a8c7d461820257d9aa43716c4efc55366d7b292e46b5b37165be1d377405d"},
{file = "coverage-7.13.2-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:060ee84f6a769d40c492711911a76811b4befb6fba50abb450371abb720f5bd6"},
{file = "coverage-7.13.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:3bca209d001fd03ea2d978f8a4985093240a355c93078aee3f799852c23f561a"},
{file = "coverage-7.13.2-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:6b8092aa38d72f091db61ef83cb66076f18f02da3e1a75039a4f218629600e04"},
{file = "coverage-7.13.2-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:4a3158dc2dcce5200d91ec28cd315c999eebff355437d2765840555d765a6e5f"},
{file = "coverage-7.13.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:3973f353b2d70bd9796cc12f532a05945232ccae966456c8ed7034cb96bbfd6f"},
{file = "coverage-7.13.2-cp314-cp314t-win32.whl", hash = "sha256:79f6506a678a59d4ded048dc72f1859ebede8ec2b9a2d509ebe161f01c2879d3"},
{file = "coverage-7.13.2-cp314-cp314t-win_amd64.whl", hash = "sha256:196bfeabdccc5a020a57d5a368c681e3a6ceb0447d153aeccc1ab4d70a5032ba"},
{file = "coverage-7.13.2-cp314-cp314t-win_arm64.whl", hash = "sha256:69269ab58783e090bfbf5b916ab3d188126e22d6070bbfc93098fdd474ef937c"},
{file = "coverage-7.13.2-py3-none-any.whl", hash = "sha256:40ce1ea1e25125556d8e76bd0b61500839a07944cc287ac21d5626f3e620cad5"},
{file = "coverage-7.13.2.tar.gz", hash = "sha256:044c6951ec37146b72a50cc81ef02217d27d4c3640efd2640311393cbbf143d3"},
]
[package.extras]
toml = ["tomli ; python_full_version <= \"3.11.0a6\""]
[[package]]
name = "idna"
version = "3.11"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea"},
{file = "idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902"},
]
[package.extras]
all = ["flake8 (>=7.1.1)", "mypy (>=1.11.2)", "pytest (>=8.3.2)", "ruff (>=0.6.2)"]
[[package]]
name = "iniconfig"
version = "2.3.0"
description = "brain-dead simple config-ini parsing"
optional = false
python-versions = ">=3.10"
groups = ["dev"]
files = [
{file = "iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12"},
{file = "iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730"},
]
[[package]]
name = "librt"
version = "0.7.8"
description = "Mypyc runtime library"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
markers = "platform_python_implementation != \"PyPy\""
files = [
{file = "librt-0.7.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b45306a1fc5f53c9330fbee134d8b3227fe5da2ab09813b892790400aa49352d"},
{file = "librt-0.7.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:864c4b7083eeee250ed55135d2127b260d7eb4b5e953a9e5df09c852e327961b"},
{file = "librt-0.7.8-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:6938cc2de153bc927ed8d71c7d2f2ae01b4e96359126c602721340eb7ce1a92d"},
{file = "librt-0.7.8-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:66daa6ac5de4288a5bbfbe55b4caa7bf0cd26b3269c7a476ffe8ce45f837f87d"},
{file = "librt-0.7.8-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4864045f49dc9c974dadb942ac56a74cd0479a2aafa51ce272c490a82322ea3c"},
{file = "librt-0.7.8-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a36515b1328dc5b3ffce79fe204985ca8572525452eacabee2166f44bb387b2c"},
{file = "librt-0.7.8-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:b7e7f140c5169798f90b80d6e607ed2ba5059784968a004107c88ad61fb3641d"},
{file = "librt-0.7.8-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:ff71447cb778a4f772ddc4ce360e6ba9c95527ed84a52096bd1bbf9fee2ec7c0"},
{file = "librt-0.7.8-cp310-cp310-win32.whl", hash = "sha256:047164e5f68b7a8ebdf9fae91a3c2161d3192418aadd61ddd3a86a56cbe3dc85"},
{file = "librt-0.7.8-cp310-cp310-win_amd64.whl", hash = "sha256:d6f254d096d84156a46a84861183c183d30734e52383602443292644d895047c"},
{file = "librt-0.7.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ff3e9c11aa260c31493d4b3197d1e28dd07768594a4f92bec4506849d736248f"},
{file = "librt-0.7.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ddb52499d0b3ed4aa88746aaf6f36a08314677d5c346234c3987ddc506404eac"},
{file = "librt-0.7.8-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:e9c0afebbe6ce177ae8edba0c7c4d626f2a0fc12c33bb993d163817c41a7a05c"},
{file = "librt-0.7.8-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:631599598e2c76ded400c0a8722dec09217c89ff64dc54b060f598ed68e7d2a8"},
{file = "librt-0.7.8-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9c1ba843ae20db09b9d5c80475376168feb2640ce91cd9906414f23cc267a1ff"},
{file = "librt-0.7.8-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:b5b007bb22ea4b255d3ee39dfd06d12534de2fcc3438567d9f48cdaf67ae1ae3"},
{file = "librt-0.7.8-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:dbd79caaf77a3f590cbe32dc2447f718772d6eea59656a7dcb9311161b10fa75"},
{file = "librt-0.7.8-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:87808a8d1e0bd62a01cafc41f0fd6818b5a5d0ca0d8a55326a81643cdda8f873"},
{file = "librt-0.7.8-cp311-cp311-win32.whl", hash = "sha256:31724b93baa91512bd0a376e7cf0b59d8b631ee17923b1218a65456fa9bda2e7"},
{file = "librt-0.7.8-cp311-cp311-win_amd64.whl", hash = "sha256:978e8b5f13e52cf23a9e80f3286d7546baa70bc4ef35b51d97a709d0b28e537c"},
{file = "librt-0.7.8-cp311-cp311-win_arm64.whl", hash = "sha256:20e3946863d872f7cabf7f77c6c9d370b8b3d74333d3a32471c50d3a86c0a232"},
{file = "librt-0.7.8-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:9b6943885b2d49c48d0cff23b16be830ba46b0152d98f62de49e735c6e655a63"},
{file = "librt-0.7.8-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:46ef1f4b9b6cc364b11eea0ecc0897314447a66029ee1e55859acb3dd8757c93"},
{file = "librt-0.7.8-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:907ad09cfab21e3c86e8f1f87858f7049d1097f77196959c033612f532b4e592"},
{file = "librt-0.7.8-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2991b6c3775383752b3ca0204842743256f3ad3deeb1d0adc227d56b78a9a850"},
{file = "librt-0.7.8-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:03679b9856932b8c8f674e87aa3c55ea11c9274301f76ae8dc4d281bda55cf62"},
{file = "librt-0.7.8-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3968762fec1b2ad34ce57458b6de25dbb4142713e9ca6279a0d352fa4e9f452b"},
{file = "librt-0.7.8-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:bb7a7807523a31f03061288cc4ffc065d684c39db7644c676b47d89553c0d714"},
{file = "librt-0.7.8-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ad64a14b1e56e702e19b24aae108f18ad1bf7777f3af5fcd39f87d0c5a814449"},
{file = "librt-0.7.8-cp312-cp312-win32.whl", hash = "sha256:0241a6ed65e6666236ea78203a73d800dbed896cf12ae25d026d75dc1fcd1dac"},
{file = "librt-0.7.8-cp312-cp312-win_amd64.whl", hash = "sha256:6db5faf064b5bab9675c32a873436b31e01d66ca6984c6f7f92621656033a708"},
{file = "librt-0.7.8-cp312-cp312-win_arm64.whl", hash = "sha256:57175aa93f804d2c08d2edb7213e09276bd49097611aefc37e3fa38d1fb99ad0"},
{file = "librt-0.7.8-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:4c3995abbbb60b3c129490fa985dfe6cac11d88fc3c36eeb4fb1449efbbb04fc"},
{file = "librt-0.7.8-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:44e0c2cbc9bebd074cf2cdbe472ca185e824be4e74b1c63a8e934cea674bebf2"},
{file = "librt-0.7.8-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:4d2f1e492cae964b3463a03dc77a7fe8742f7855d7258c7643f0ee32b6651dd3"},
{file = "librt-0.7.8-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:451e7ffcef8f785831fdb791bd69211f47e95dc4c6ddff68e589058806f044c6"},
{file = "librt-0.7.8-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3469e1af9f1380e093ae06bedcbdd11e407ac0b303a56bbe9afb1d6824d4982d"},
{file = "librt-0.7.8-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f11b300027ce19a34f6d24ebb0a25fd0e24a9d53353225a5c1e6cadbf2916b2e"},
{file = "librt-0.7.8-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:4adc73614f0d3c97874f02f2c7fd2a27854e7e24ad532ea6b965459c5b757eca"},
{file = "librt-0.7.8-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:60c299e555f87e4c01b2eca085dfccda1dde87f5a604bb45c2906b8305819a93"},
{file = "librt-0.7.8-cp313-cp313-win32.whl", hash = "sha256:b09c52ed43a461994716082ee7d87618096851319bf695d57ec123f2ab708951"},
{file = "librt-0.7.8-cp313-cp313-win_amd64.whl", hash = "sha256:f8f4a901a3fa28969d6e4519deceab56c55a09d691ea7b12ca830e2fa3461e34"},
{file = "librt-0.7.8-cp313-cp313-win_arm64.whl", hash = "sha256:43d4e71b50763fcdcf64725ac680d8cfa1706c928b844794a7aa0fa9ac8e5f09"},
{file = "librt-0.7.8-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:be927c3c94c74b05128089a955fba86501c3b544d1d300282cc1b4bd370cb418"},
{file = "librt-0.7.8-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:7b0803e9008c62a7ef79058233db7ff6f37a9933b8f2573c05b07ddafa226611"},
{file = "librt-0.7.8-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:79feb4d00b2a4e0e05c9c56df707934f41fcb5fe53fd9efb7549068d0495b758"},
{file = "librt-0.7.8-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b9122094e3f24aa759c38f46bd8863433820654927370250f460ae75488b66ea"},
{file = "librt-0.7.8-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7e03bea66af33c95ce3addf87a9bf1fcad8d33e757bc479957ddbc0e4f7207ac"},
{file = "librt-0.7.8-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:f1ade7f31675db00b514b98f9ab9a7698c7282dad4be7492589109471852d398"},
{file = "librt-0.7.8-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:a14229ac62adcf1b90a15992f1ab9c69ae8b99ffb23cb64a90878a6e8a2f5b81"},
{file = "librt-0.7.8-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5bcaaf624fd24e6a0cb14beac37677f90793a96864c67c064a91458611446e83"},
{file = "librt-0.7.8-cp314-cp314-win32.whl", hash = "sha256:7aa7d5457b6c542ecaed79cec4ad98534373c9757383973e638ccced0f11f46d"},
{file = "librt-0.7.8-cp314-cp314-win_amd64.whl", hash = "sha256:3d1322800771bee4a91f3b4bd4e49abc7d35e65166821086e5afd1e6c0d9be44"},
{file = "librt-0.7.8-cp314-cp314-win_arm64.whl", hash = "sha256:5363427bc6a8c3b1719f8f3845ea53553d301382928a86e8fab7984426949bce"},
{file = "librt-0.7.8-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:ca916919793a77e4a98d4a1701e345d337ce53be4a16620f063191f7322ac80f"},
{file = "librt-0.7.8-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:54feb7b4f2f6706bb82325e836a01be805770443e2400f706e824e91f6441dde"},
{file = "librt-0.7.8-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:39a4c76fee41007070f872b648cc2f711f9abf9a13d0c7162478043377b52c8e"},
{file = "librt-0.7.8-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ac9c8a458245c7de80bc1b9765b177055efff5803f08e548dd4bb9ab9a8d789b"},
{file = "librt-0.7.8-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:95b67aa7eff150f075fda09d11f6bfb26edffd300f6ab1666759547581e8f666"},
{file = "librt-0.7.8-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:535929b6eff670c593c34ff435d5440c3096f20fa72d63444608a5aef64dd581"},
{file = "librt-0.7.8-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:63937bd0f4d1cb56653dc7ae900d6c52c41f0015e25aaf9902481ee79943b33a"},
{file = "librt-0.7.8-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:cf243da9e42d914036fd362ac3fa77d80a41cadcd11ad789b1b5eec4daaf67ca"},
{file = "librt-0.7.8-cp314-cp314t-win32.whl", hash = "sha256:171ca3a0a06c643bd0a2f62a8944e1902c94aa8e5da4db1ea9a8daf872685365"},
{file = "librt-0.7.8-cp314-cp314t-win_amd64.whl", hash = "sha256:445b7304145e24c60288a2f172b5ce2ca35c0f81605f5299f3fa567e189d2e32"},
{file = "librt-0.7.8-cp314-cp314t-win_arm64.whl", hash = "sha256:8766ece9de08527deabcd7cb1b4f1a967a385d26e33e536d6d8913db6ef74f06"},
{file = "librt-0.7.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c7e8f88f79308d86d8f39c491773cbb533d6cb7fa6476f35d711076ee04fceb6"},
{file = "librt-0.7.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:389bd25a0db916e1d6bcb014f11aa9676cedaa485e9ec3752dfe19f196fd377b"},
{file = "librt-0.7.8-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:73fd300f501a052f2ba52ede721232212f3b06503fa12665408ecfc9d8fd149c"},
{file = "librt-0.7.8-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6d772edc6a5f7835635c7562f6688e031f0b97e31d538412a852c49c9a6c92d5"},
{file = "librt-0.7.8-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bfde8a130bd0f239e45503ab39fab239ace094d63ee1d6b67c25a63d741c0f71"},
{file = "librt-0.7.8-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:fdec6e2368ae4f796fc72fad7fd4bd1753715187e6d870932b0904609e7c878e"},
{file = "librt-0.7.8-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:00105e7d541a8f2ee5be52caacea98a005e0478cfe78c8080fbb7b5d2b340c63"},
{file = "librt-0.7.8-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c6f8947d3dfd7f91066c5b4385812c18be26c9d5a99ca56667547f2c39149d94"},
{file = "librt-0.7.8-cp39-cp39-win32.whl", hash = "sha256:41d7bb1e07916aeb12ae4a44e3025db3691c4149ab788d0315781b4d29b86afb"},
{file = "librt-0.7.8-cp39-cp39-win_amd64.whl", hash = "sha256:e90a8e237753c83b8e484d478d9a996dc5e39fd5bd4c6ce32563bc8123f132be"},
{file = "librt-0.7.8.tar.gz", hash = "sha256:1a4ede613941d9c3470b0368be851df6bb78ab218635512d0370b27a277a0862"},
]
[[package]]
name = "loguru"
version = "0.7.3"
description = "Python logging made (stupidly) simple"
optional = false
python-versions = "<4.0,>=3.5"
groups = ["main"]
files = [
{file = "loguru-0.7.3-py3-none-any.whl", hash = "sha256:31a33c10c8e1e10422bfd431aeb5d351c7cf7fa671e3c4df004162264b28220c"},
{file = "loguru-0.7.3.tar.gz", hash = "sha256:19480589e77d47b8d85b2c827ad95d49bf31b0dcde16593892eb51dd18706eb6"},
]
[package.dependencies]
colorama = {version = ">=0.3.4", markers = "sys_platform == \"win32\""}
win32-setctime = {version = ">=1.0.0", markers = "sys_platform == \"win32\""}
[package.extras]
dev = ["Sphinx (==8.1.3) ; python_version >= \"3.11\"", "build (==1.2.2) ; python_version >= \"3.11\"", "colorama (==0.4.5) ; python_version < \"3.8\"", "colorama (==0.4.6) ; python_version >= \"3.8\"", "exceptiongroup (==1.1.3) ; python_version >= \"3.7\" and python_version < \"3.11\"", "freezegun (==1.1.0) ; python_version < \"3.8\"", "freezegun (==1.5.0) ; python_version >= \"3.8\"", "mypy (==v0.910) ; python_version < \"3.6\"", "mypy (==v0.971) ; python_version == \"3.6\"", "mypy (==v1.13.0) ; python_version >= \"3.8\"", "mypy (==v1.4.1) ; python_version == \"3.7\"", "myst-parser (==4.0.0) ; python_version >= \"3.11\"", "pre-commit (==4.0.1) ; python_version >= \"3.9\"", "pytest (==6.1.2) ; python_version < \"3.8\"", "pytest (==8.3.2) ; python_version >= \"3.8\"", "pytest-cov (==2.12.1) ; python_version < \"3.8\"", "pytest-cov (==5.0.0) ; python_version == \"3.8\"", "pytest-cov (==6.0.0) ; python_version >= \"3.9\"", "pytest-mypy-plugins (==1.9.3) ; python_version >= \"3.6\" and python_version < \"3.8\"", "pytest-mypy-plugins (==3.1.0) ; python_version >= \"3.8\"", "sphinx-rtd-theme (==3.0.2) ; python_version >= \"3.11\"", "tox (==3.27.1) ; python_version < \"3.8\"", "tox (==4.23.2) ; python_version >= \"3.8\"", "twine (==6.0.1) ; python_version >= \"3.11\""]
[[package]]
name = "mypy"
version = "1.19.1"
description = "Optional static typing for Python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "mypy-1.19.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5f05aa3d375b385734388e844bc01733bd33c644ab48e9684faa54e5389775ec"},
{file = "mypy-1.19.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:022ea7279374af1a5d78dfcab853fe6a536eebfda4b59deab53cd21f6cd9f00b"},
{file = "mypy-1.19.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee4c11e460685c3e0c64a4c5de82ae143622410950d6be863303a1c4ba0e36d6"},
{file = "mypy-1.19.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:de759aafbae8763283b2ee5869c7255391fbc4de3ff171f8f030b5ec48381b74"},
{file = "mypy-1.19.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:ab43590f9cd5108f41aacf9fca31841142c786827a74ab7cc8a2eacb634e09a1"},
{file = "mypy-1.19.1-cp310-cp310-win_amd64.whl", hash = "sha256:2899753e2f61e571b3971747e302d5f420c3fd09650e1951e99f823bc3089dac"},
{file = "mypy-1.19.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d8dfc6ab58ca7dda47d9237349157500468e404b17213d44fc1cb77bce532288"},
{file = "mypy-1.19.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e3f276d8493c3c97930e354b2595a44a21348b320d859fb4a2b9f66da9ed27ab"},
{file = "mypy-1.19.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2abb24cf3f17864770d18d673c85235ba52456b36a06b6afc1e07c1fdcd3d0e6"},
{file = "mypy-1.19.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a009ffa5a621762d0c926a078c2d639104becab69e79538a494bcccb62cc0331"},
{file = "mypy-1.19.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f7cee03c9a2e2ee26ec07479f38ea9c884e301d42c6d43a19d20fb014e3ba925"},
{file = "mypy-1.19.1-cp311-cp311-win_amd64.whl", hash = "sha256:4b84a7a18f41e167f7995200a1d07a4a6810e89d29859df936f1c3923d263042"},
{file = "mypy-1.19.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:a8174a03289288c1f6c46d55cef02379b478bfbc8e358e02047487cad44c6ca1"},
{file = "mypy-1.19.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ffcebe56eb09ff0c0885e750036a095e23793ba6c2e894e7e63f6d89ad51f22e"},
{file = "mypy-1.19.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b64d987153888790bcdb03a6473d321820597ab8dd9243b27a92153c4fa50fd2"},
{file = "mypy-1.19.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c35d298c2c4bba75feb2195655dfea8124d855dfd7343bf8b8c055421eaf0cf8"},
{file = "mypy-1.19.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:34c81968774648ab5ac09c29a375fdede03ba253f8f8287847bd480782f73a6a"},
{file = "mypy-1.19.1-cp312-cp312-win_amd64.whl", hash = "sha256:b10e7c2cd7870ba4ad9b2d8a6102eb5ffc1f16ca35e3de6bfa390c1113029d13"},
{file = "mypy-1.19.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:e3157c7594ff2ef1634ee058aafc56a82db665c9438fd41b390f3bde1ab12250"},
|
||||
{file = "mypy-1.19.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:bdb12f69bcc02700c2b47e070238f42cb87f18c0bc1fc4cdb4fb2bc5fd7a3b8b"},
|
||||
{file = "mypy-1.19.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f859fb09d9583a985be9a493d5cfc5515b56b08f7447759a0c5deaf68d80506e"},
|
||||
{file = "mypy-1.19.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c9a6538e0415310aad77cb94004ca6482330fece18036b5f360b62c45814c4ef"},
|
||||
{file = "mypy-1.19.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:da4869fc5e7f62a88f3fe0b5c919d1d9f7ea3cef92d3689de2823fd27e40aa75"},
|
||||
{file = "mypy-1.19.1-cp313-cp313-win_amd64.whl", hash = "sha256:016f2246209095e8eda7538944daa1d60e1e8134d98983b9fc1e92c1fc0cb8dd"},
|
||||
{file = "mypy-1.19.1-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:06e6170bd5836770e8104c8fdd58e5e725cfeb309f0a6c681a811f557e97eac1"},
|
||||
{file = "mypy-1.19.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:804bd67b8054a85447c8954215a906d6eff9cabeabe493fb6334b24f4bfff718"},
|
||||
{file = "mypy-1.19.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:21761006a7f497cb0d4de3d8ef4ca70532256688b0523eee02baf9eec895e27b"},
|
||||
{file = "mypy-1.19.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:28902ee51f12e0f19e1e16fbe2f8f06b6637f482c459dd393efddd0ec7f82045"},
|
||||
{file = "mypy-1.19.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:481daf36a4c443332e2ae9c137dfee878fcea781a2e3f895d54bd3002a900957"},
|
||||
{file = "mypy-1.19.1-cp314-cp314-win_amd64.whl", hash = "sha256:8bb5c6f6d043655e055be9b542aa5f3bdd30e4f3589163e85f93f3640060509f"},
|
||||
{file = "mypy-1.19.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:7bcfc336a03a1aaa26dfce9fff3e287a3ba99872a157561cbfcebe67c13308e3"},
|
||||
{file = "mypy-1.19.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b7951a701c07ea584c4fe327834b92a30825514c868b1f69c30445093fdd9d5a"},
|
||||
{file = "mypy-1.19.1-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b13cfdd6c87fc3efb69ea4ec18ef79c74c3f98b4e5498ca9b85ab3b2c2329a67"},
|
||||
{file = "mypy-1.19.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4f28f99c824ecebcdaa2e55d82953e38ff60ee5ec938476796636b86afa3956e"},
|
||||
{file = "mypy-1.19.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c608937067d2fc5a4dd1a5ce92fd9e1398691b8c5d012d66e1ddd430e9244376"},
|
||||
{file = "mypy-1.19.1-cp39-cp39-win_amd64.whl", hash = "sha256:409088884802d511ee52ca067707b90c883426bd95514e8cfda8281dc2effe24"},
|
||||
{file = "mypy-1.19.1-py3-none-any.whl", hash = "sha256:f1235f5ea01b7db5468d53ece6aaddf1ad0b88d9e7462b86ef96fe04995d7247"},
|
||||
{file = "mypy-1.19.1.tar.gz", hash = "sha256:19d88bb05303fe63f71dd2c6270daca27cb9401c4ca8255fe50d1d920e0eb9ba"},
|
||||
]
|
||||
|
||||
[package.dependencies]
librt = {version = ">=0.6.2", markers = "platform_python_implementation != \"PyPy\""}
mypy_extensions = ">=1.0.0"
pathspec = ">=0.9.0"
typing_extensions = ">=4.6.0"

[package.extras]
dmypy = ["psutil (>=4.0)"]
faster-cache = ["orjson"]
install-types = ["pip"]
mypyc = ["setuptools (>=50)"]
reports = ["lxml"]

[[package]]
name = "mypy-extensions"
version = "1.1.0"
description = "Type system extensions for programs checked with the mypy type checker."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
    {file = "mypy_extensions-1.1.0-py3-none-any.whl", hash = "sha256:1be4cccdb0f2482337c4743e60421de3a356cd97508abadd57d47403e94f5505"},
    {file = "mypy_extensions-1.1.0.tar.gz", hash = "sha256:52e68efc3284861e772bbcd66823fde5ae21fd2fdb51c62a211403730b916558"},
]

[[package]]
name = "outcome"
version = "1.3.0.post0"
description = "Capture the outcome of Python function calls."
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
    {file = "outcome-1.3.0.post0-py2.py3-none-any.whl", hash = "sha256:e771c5ce06d1415e356078d3bdd68523f284b4ce5419828922b6871e65eda82b"},
    {file = "outcome-1.3.0.post0.tar.gz", hash = "sha256:9dcf02e65f2971b80047b377468e72a268e15c0af3cf1238e6ff14f7f91143b8"},
]

[package.dependencies]
attrs = ">=19.2.0"

[[package]]
name = "packaging"
version = "26.0"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
    {file = "packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529"},
    {file = "packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4"},
]

[[package]]
name = "pathspec"
version = "1.0.4"
description = "Utility library for gitignore style pattern matching of file paths."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
    {file = "pathspec-1.0.4-py3-none-any.whl", hash = "sha256:fb6ae2fd4e7c921a165808a552060e722767cfa526f99ca5156ed2ce45a5c723"},
    {file = "pathspec-1.0.4.tar.gz", hash = "sha256:0210e2ae8a21a9137c0d470578cb0e595af87edaa6ebf12ff176f14a02e0e645"},
]

[package.extras]
hyperscan = ["hyperscan (>=0.7)"]
optional = ["typing-extensions (>=4)"]
re2 = ["google-re2 (>=1.1)"]
tests = ["pytest (>=9)", "typing-extensions (>=4.15)"]

[[package]]
name = "pluggy"
version = "1.6.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
    {file = "pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746"},
    {file = "pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3"},
]

[package.extras]
dev = ["pre-commit", "tox"]
testing = ["coverage", "pytest", "pytest-benchmark"]

[[package]]
name = "pycparser"
version = "3.0"
description = "C parser in Python"
optional = false
python-versions = ">=3.10"
groups = ["main"]
markers = "os_name == \"nt\" and implementation_name != \"pypy\" and implementation_name != \"PyPy\""
files = [
    {file = "pycparser-3.0-py3-none-any.whl", hash = "sha256:b727414169a36b7d524c1c3e31839a521725078d7b2ff038656844266160a992"},
    {file = "pycparser-3.0.tar.gz", hash = "sha256:600f49d217304a5902ac3c37e1281c9fe94e4d0489de643a9504c5cdfdfc6b29"},
]

[[package]]
name = "pyfuse3"
version = "3.4.2"
description = "Python 3 bindings for libfuse 3 with async I/O support"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
    {file = "pyfuse3-3.4.2.tar.gz", hash = "sha256:0a59031969c4ba51a5ec1b67f3c5c24f641a6a3f8119a86edad56debcb9084d9"},
]

[package.dependencies]
trio = ">=0.15"

[[package]]
name = "pygments"
version = "2.19.2"
description = "Pygments is a syntax highlighting package written in Python."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
    {file = "pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b"},
    {file = "pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887"},
]

[package.extras]
windows-terminal = ["colorama (>=0.4.6)"]

[[package]]
name = "pyside6"
version = "6.10.1"
description = "Python bindings for the Qt cross-platform application and UI framework"
optional = false
python-versions = "<3.15,>=3.9"
groups = ["main"]
files = [
    {file = "pyside6-6.10.1-cp39-abi3-macosx_13_0_universal2.whl", hash = "sha256:d0e70dd0e126d01986f357c2a555722f9462cf8a942bf2ce180baf69f468e516"},
    {file = "pyside6-6.10.1-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:4053bf51ba2c2cb20e1005edd469997976a02cec009f7c46356a0b65c137f1fa"},
    {file = "pyside6-6.10.1-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:7d3ca20a40139ca5324a7864f1d91cdf2ff237e11bd16354a42670f2a4eeb13c"},
    {file = "pyside6-6.10.1-cp39-abi3-win_amd64.whl", hash = "sha256:9f89ff994f774420eaa38cec6422fddd5356611d8481774820befd6f3bb84c9e"},
    {file = "pyside6-6.10.1-cp39-abi3-win_arm64.whl", hash = "sha256:9c5c1d94387d1a32a6fae25348097918ef413b87dfa3767c46f737c6d48ae437"},
]

[package.dependencies]
PySide6_Addons = "6.10.1"
PySide6_Essentials = "6.10.1"
shiboken6 = "6.10.1"

[[package]]
name = "pyside6-addons"
version = "6.10.1"
description = "Python bindings for the Qt cross-platform application and UI framework (Addons)"
optional = false
python-versions = "<3.15,>=3.9"
groups = ["main"]
files = [
    {file = "pyside6_addons-6.10.1-cp39-abi3-macosx_13_0_universal2.whl", hash = "sha256:4d2b82bbf9b861134845803837011e5f9ac7d33661b216805273cf0c6d0f8e82"},
    {file = "pyside6_addons-6.10.1-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:330c229b58d30083a7b99ed22e118eb4f4126408429816a4044ccd0438ae81b4"},
    {file = "pyside6_addons-6.10.1-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:56864b5fecd6924187a2d0f7e98d968ed72b6cc267caa5b294cd7e88fff4e54c"},
    {file = "pyside6_addons-6.10.1-cp39-abi3-win_amd64.whl", hash = "sha256:b6e249d15407dd33d6a2ffabd9dc6d7a8ab8c95d05f16a71dad4d07781c76341"},
    {file = "pyside6_addons-6.10.1-cp39-abi3-win_arm64.whl", hash = "sha256:0de303c0447326cdc6c8be5ab066ef581e2d0baf22560c9362d41b8304fdf2db"},
]

[package.dependencies]
PySide6_Essentials = "6.10.1"
shiboken6 = "6.10.1"

[[package]]
name = "pyside6-essentials"
version = "6.10.1"
description = "Python bindings for the Qt cross-platform application and UI framework (Essentials)"
optional = false
python-versions = "<3.15,>=3.9"
groups = ["main"]
files = [
    {file = "pyside6_essentials-6.10.1-cp39-abi3-macosx_13_0_universal2.whl", hash = "sha256:cd224aff3bb26ff1fca32c050e1c4d0bd9f951a96219d40d5f3d0128485b0bbe"},
    {file = "pyside6_essentials-6.10.1-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:e9ccbfb58c03911a0bce1f2198605b02d4b5ca6276bfc0cbcf7c6f6393ffb856"},
    {file = "pyside6_essentials-6.10.1-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:ec8617c9b143b0c19ba1cc5a7e98c538e4143795480cb152aee47802c18dc5d2"},
    {file = "pyside6_essentials-6.10.1-cp39-abi3-win_amd64.whl", hash = "sha256:9555a48e8f0acf63fc6a23c250808db841b28a66ed6ad89ee0e4df7628752674"},
    {file = "pyside6_essentials-6.10.1-cp39-abi3-win_arm64.whl", hash = "sha256:4d1d248644f1778f8ddae5da714ca0f5a150a5e6f602af2765a7d21b876da05c"},
]

[package.dependencies]
shiboken6 = "6.10.1"

[[package]]
name = "pytest"
version = "9.0.2"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.10"
groups = ["dev"]
files = [
    {file = "pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b"},
    {file = "pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11"},
]

[package.dependencies]
colorama = {version = ">=0.4", markers = "sys_platform == \"win32\""}
iniconfig = ">=1.0.1"
packaging = ">=22"
pluggy = ">=1.5,<2"
pygments = ">=2.7.2"

[package.extras]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "requests", "setuptools", "xmlschema"]

[[package]]
name = "pytest-cov"
version = "7.0.0"
description = "Pytest plugin for measuring coverage."
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
    {file = "pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861"},
    {file = "pytest_cov-7.0.0.tar.gz", hash = "sha256:33c97eda2e049a0c5298e91f519302a1334c26ac65c1a483d6206fd458361af1"},
]

[package.dependencies]
coverage = {version = ">=7.10.6", extras = ["toml"]}
pluggy = ">=1.2"
pytest = ">=7"

[package.extras]
testing = ["process-tests", "pytest-xdist", "virtualenv"]

[[package]]
name = "python-dotenv"
version = "1.2.1"
description = "Read key-value pairs from a .env file and set them as environment variables"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "python_dotenv-1.2.1-py3-none-any.whl", hash = "sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61"},
    {file = "python_dotenv-1.2.1.tar.gz", hash = "sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6"},
]

[package.extras]
cli = ["click (>=5.0)"]

[[package]]
name = "ruff"
version = "0.14.14"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
    {file = "ruff-0.14.14-py3-none-linux_armv6l.whl", hash = "sha256:7cfe36b56e8489dee8fbc777c61959f60ec0f1f11817e8f2415f429552846aed"},
    {file = "ruff-0.14.14-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:6006a0082336e7920b9573ef8a7f52eec837add1265cc74e04ea8a4368cd704c"},
    {file = "ruff-0.14.14-py3-none-macosx_11_0_arm64.whl", hash = "sha256:026c1d25996818f0bf498636686199d9bd0d9d6341c9c2c3b62e2a0198b758de"},
    {file = "ruff-0.14.14-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f666445819d31210b71e0a6d1c01e24447a20b85458eea25a25fe8142210ae0e"},
    {file = "ruff-0.14.14-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3c0f18b922c6d2ff9a5e6c3ee16259adc513ca775bcf82c67ebab7cbd9da5bc8"},
    {file = "ruff-0.14.14-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1629e67489c2dea43e8658c3dba659edbfd87361624b4040d1df04c9740ae906"},
    {file = "ruff-0.14.14-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:27493a2131ea0f899057d49d303e4292b2cae2bb57253c1ed1f256fbcd1da480"},
    {file = "ruff-0.14.14-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:01ff589aab3f5b539e35db38425da31a57521efd1e4ad1ae08fc34dbe30bd7df"},
    {file = "ruff-0.14.14-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1cc12d74eef0f29f51775f5b755913eb523546b88e2d733e1d701fe65144e89b"},
    {file = "ruff-0.14.14-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb8481604b7a9e75eff53772496201690ce2687067e038b3cc31aaf16aa0b974"},
    {file = "ruff-0.14.14-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:14649acb1cf7b5d2d283ebd2f58d56b75836ed8c6f329664fa91cdea19e76e66"},
    {file = "ruff-0.14.14-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e8058d2145566510790eab4e2fad186002e288dec5e0d343a92fe7b0bc1b3e13"},
    {file = "ruff-0.14.14-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:e651e977a79e4c758eb807f0481d673a67ffe53cfa92209781dfa3a996cf8412"},
    {file = "ruff-0.14.14-py3-none-musllinux_1_2_i686.whl", hash = "sha256:cc8b22da8d9d6fdd844a68ae937e2a0adf9b16514e9a97cc60355e2d4b219fc3"},
    {file = "ruff-0.14.14-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:16bc890fb4cc9781bb05beb5ab4cd51be9e7cb376bf1dd3580512b24eb3fda2b"},
    {file = "ruff-0.14.14-py3-none-win32.whl", hash = "sha256:b530c191970b143375b6a68e6f743800b2b786bbcf03a7965b06c4bf04568167"},
    {file = "ruff-0.14.14-py3-none-win_amd64.whl", hash = "sha256:3dde1435e6b6fe5b66506c1dff67a421d0b7f6488d466f651c07f4cab3bf20fd"},
    {file = "ruff-0.14.14-py3-none-win_arm64.whl", hash = "sha256:56e6981a98b13a32236a72a8da421d7839221fa308b223b9283312312e5ac76c"},
    {file = "ruff-0.14.14.tar.gz", hash = "sha256:2d0f819c9a90205f3a867dbbd0be083bee9912e170fd7d9704cc8ae45824896b"},
]

[[package]]
name = "shiboken6"
version = "6.10.1"
description = "Python/C++ bindings helper module"
optional = false
python-versions = "<3.15,>=3.9"
groups = ["main"]
files = [
    {file = "shiboken6-6.10.1-cp39-abi3-macosx_13_0_universal2.whl", hash = "sha256:9f2990f5b61b0b68ecadcd896ab4441f2cb097eef7797ecc40584107d9850d71"},
    {file = "shiboken6-6.10.1-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:f4221a52dfb81f24a0d20cc4f8981cb6edd810d5a9fb28287ce10d342573a0e4"},
    {file = "shiboken6-6.10.1-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:c095b00f4d6bf578c0b2464bb4e264b351a99345374478570f69e2e679a2a1d0"},
    {file = "shiboken6-6.10.1-cp39-abi3-win_amd64.whl", hash = "sha256:c1601d3cda1fa32779b141663873741b54e797cb0328458d7466281f117b0a4e"},
    {file = "shiboken6-6.10.1-cp39-abi3-win_arm64.whl", hash = "sha256:5cf800917008587b551005a45add2d485cca66f5f7ecd5b320e9954e40448cc9"},
]

[[package]]
name = "sniffio"
version = "1.3.1"
description = "Sniff out which async library your code is running under"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
    {file = "sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2"},
    {file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"},
]

[[package]]
name = "sortedcontainers"
version = "2.4.0"
description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set"
optional = false
python-versions = "*"
groups = ["main"]
files = [
    {file = "sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0"},
    {file = "sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88"},
]

[[package]]
name = "trio"
version = "0.32.0"
description = "A friendly Python library for async concurrency and I/O"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
    {file = "trio-0.32.0-py3-none-any.whl", hash = "sha256:4ab65984ef8370b79a76659ec87aa3a30c5c7c83ff250b4de88c29a8ab6123c5"},
    {file = "trio-0.32.0.tar.gz", hash = "sha256:150f29ec923bcd51231e1d4c71c7006e65247d68759dd1c19af4ea815a25806b"},
]

[package.dependencies]
attrs = ">=23.2.0"
cffi = {version = ">=1.14", markers = "os_name == \"nt\" and implementation_name != \"pypy\""}
idna = "*"
outcome = "*"
sniffio = ">=1.3.0"
sortedcontainers = "*"

[[package]]
name = "typing-extensions"
version = "4.15.0"
description = "Backported and Experimental Type Hints for Python 3.9+"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
    {file = "typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548"},
    {file = "typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466"},
]

[[package]]
name = "watchdog"
version = "6.0.0"
description = "Filesystem events monitoring"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "watchdog-6.0.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d1cdb490583ebd691c012b3d6dae011000fe42edb7a82ece80965b42abd61f26"},
    {file = "watchdog-6.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bc64ab3bdb6a04d69d4023b29422170b74681784ffb9463ed4870cf2f3e66112"},
    {file = "watchdog-6.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c897ac1b55c5a1461e16dae288d22bb2e412ba9807df8397a635d88f671d36c3"},
    {file = "watchdog-6.0.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6eb11feb5a0d452ee41f824e271ca311a09e250441c262ca2fd7ebcf2461a06c"},
    {file = "watchdog-6.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ef810fbf7b781a5a593894e4f439773830bdecb885e6880d957d5b9382a960d2"},
    {file = "watchdog-6.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:afd0fe1b2270917c5e23c2a65ce50c2a4abb63daafb0d419fde368e272a76b7c"},
    {file = "watchdog-6.0.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:bdd4e6f14b8b18c334febb9c4425a878a2ac20efd1e0b231978e7b150f92a948"},
    {file = "watchdog-6.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c7c15dda13c4eb00d6fb6fc508b3c0ed88b9d5d374056b239c4ad1611125c860"},
    {file = "watchdog-6.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6f10cb2d5902447c7d0da897e2c6768bca89174d0c6e1e30abec5421af97a5b0"},
    {file = "watchdog-6.0.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:490ab2ef84f11129844c23fb14ecf30ef3d8a6abafd3754a6f75ca1e6654136c"},
    {file = "watchdog-6.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:76aae96b00ae814b181bb25b1b98076d5fc84e8a53cd8885a318b42b6d3a5134"},
    {file = "watchdog-6.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a175f755fc2279e0b7312c0035d52e27211a5bc39719dd529625b1930917345b"},
    {file = "watchdog-6.0.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e6f0e77c9417e7cd62af82529b10563db3423625c5fce018430b249bf977f9e8"},
    {file = "watchdog-6.0.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:90c8e78f3b94014f7aaae121e6b909674df5b46ec24d6bebc45c44c56729af2a"},
    {file = "watchdog-6.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e7631a77ffb1f7d2eefa4445ebbee491c720a5661ddf6df3498ebecae5ed375c"},
    {file = "watchdog-6.0.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:c7ac31a19f4545dd92fc25d200694098f42c9a8e391bc00bdd362c5736dbf881"},
    {file = "watchdog-6.0.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:9513f27a1a582d9808cf21a07dae516f0fab1cf2d7683a742c498b93eedabb11"},
    {file = "watchdog-6.0.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7a0e56874cfbc4b9b05c60c8a1926fedf56324bb08cfbc188969777940aef3aa"},
    {file = "watchdog-6.0.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:e6439e374fc012255b4ec786ae3c4bc838cd7309a540e5fe0952d03687d8804e"},
    {file = "watchdog-6.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7607498efa04a3542ae3e05e64da8202e58159aa1fa4acddf7678d34a35d4f13"},
    {file = "watchdog-6.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:9041567ee8953024c83343288ccc458fd0a2d811d6a0fd68c4c22609e3490379"},
    {file = "watchdog-6.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:82dc3e3143c7e38ec49d61af98d6558288c415eac98486a5c581726e0737c00e"},
    {file = "watchdog-6.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:212ac9b8bf1161dc91bd09c048048a95ca3a4c4f5e5d4a7d1b1a7d5752a7f96f"},
    {file = "watchdog-6.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:e3df4cbb9a450c6d49318f6d14f4bbc80d763fa587ba46ec86f99f9e6876bb26"},
    {file = "watchdog-6.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:2cce7cfc2008eb51feb6aab51251fd79b85d9894e98ba847408f662b3395ca3c"},
    {file = "watchdog-6.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:20ffe5b202af80ab4266dcd3e91aae72bf2da48c0d33bdb15c66658e685e94e2"},
    {file = "watchdog-6.0.0-py3-none-win32.whl", hash = "sha256:07df1fdd701c5d4c8e55ef6cf55b8f0120fe1aef7ef39a1c6fc6bc2e606d517a"},
    {file = "watchdog-6.0.0-py3-none-win_amd64.whl", hash = "sha256:cbafb470cf848d93b5d013e2ecb245d4aa1c8fd0504e863ccefa32445359d680"},
    {file = "watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f"},
    {file = "watchdog-6.0.0.tar.gz", hash = "sha256:9ddf7c82fda3ae8e24decda1338ede66e1c99883db93711d8fb941eaa2d8c282"},
]

[package.extras]
watchmedo = ["PyYAML (>=3.10)"]

[[package]]
name = "win32-setctime"
version = "1.2.0"
description = "A small Python utility to set file creation time on Windows"
optional = false
python-versions = ">=3.5"
groups = ["main"]
markers = "sys_platform == \"win32\""
files = [
    {file = "win32_setctime-1.2.0-py3-none-any.whl", hash = "sha256:95d644c4e708aba81dc3704a116d8cbc974d70b3bdb8be1d150e36be6e9d1390"},
    {file = "win32_setctime-1.2.0.tar.gz", hash = "sha256:ae1fdf948f5640aae05c511ade119313fb6a30d7eabe25fef9764dca5873c4c0"},
]

[package.extras]
dev = ["black (>=19.3b0) ; python_version >= \"3.6\"", "pytest (>=4.6.2)"]

[metadata]
lock-version = "2.1"
python-versions = ">=3.13,<3.15"
content-hash = "56774f975c6ce4dcb62d6d3fa2ab6ef6fddf89ef256e09541dce1c34bce30c69"
35
pyproject.toml
Normal file
@@ -0,0 +1,35 @@
[project]
name = "vault"
version = "1.0.0"
description = ""
authors = [
    {name = "Jan Doubravský",email = "jan.doubravsky@gmail.com"}
]
readme = "README.md"
requires-python = ">=3.13,<3.15"
dependencies = [
    "pyside6 (>=6.10.1,<7.0.0)",
    "pyfuse3 (>=3.4.2,<4.0.0)",
    "watchdog (>=6.0.0,<7.0.0)",
    "loguru (>=0.7.3,<0.8.0)",
    "python-dotenv (>=1.2.1,<2.0.0)"
]

[tool.poetry]
package-mode = false


[tool.poetry.group.dev.dependencies]
pytest = "^9.0.2"
pytest-cov = "^7.0.0"
ruff = "^0.14.14"
mypy = "^1.19.1"

[tool.pytest.ini_options]
markers = [
    "integration: marks tests as integration tests (require udisks2, actual mounting)",
]

[build-system]
requires = ["poetry-core>=2.0.0,<3.0.0"]
build-backend = "poetry.core.masonry.api"
0
src/__init__.py
Normal file
0
src/core/__init__.py
Normal file
236
src/core/container.py
Normal file
@@ -0,0 +1,236 @@
"""Container management for mounting/unmounting vault images."""

import subprocess
from pathlib import Path

from loguru import logger


class ContainerError(Exception):
    """Raised when container operation fails."""


class Container:
    """Manages mounting and unmounting of a vault image.

    Uses udisksctl for user-space mounting (no root required).

    Attributes:
        image_path: Path to the .vault image file
        mount_point: Path where the image is mounted (None if not mounted)
        loop_device: Loop device path (e.g., /dev/loop0)
    """

    def __init__(self, image_path: Path) -> None:
        """Initialize container.

        Args:
            image_path: Path to the .vault image file
        """
        self.image_path = image_path
        self.mount_point: Path | None = None
        self.loop_device: str | None = None
        self._object_path: str | None = None  # udisks object path

    def mount(self, mount_point: Path | None = None) -> Path:
        """Mount the vault image.

        Args:
            mount_point: Optional custom mount point. If None, uses
                udisks default (/run/media/$USER/VAULT)

        Returns:
            Path where the image is mounted

        Raises:
            ContainerError: If mounting fails
        """
        if self.is_mounted():
            raise ContainerError("Container is already mounted")

        if not self.image_path.exists():
            raise ContainerError(f"Image not found: {self.image_path}")

        logger.info(f"Mounting: {self.image_path}")

        try:
            # Set up loop device using udisksctl
            self._setup_loop_device()

            # Mount the loop device
            self._mount_loop_device(mount_point)

            logger.info(f"Mounted at: {self.mount_point}")
            return self.mount_point  # type: ignore

        except Exception as e:
            # Cleanup on failure
            self._cleanup()
            raise ContainerError(f"Failed to mount: {e}") from e

    def _setup_loop_device(self) -> None:
        """Set up loop device using udisksctl."""
        logger.debug(f"Setting up loop device for: {self.image_path}")

        try:
            result = subprocess.run(
                [
                    "udisksctl",
                    "loop-setup",
                    "--file",
                    str(self.image_path),
                    "--no-user-interaction",
                ],
                capture_output=True,
                text=True,
                check=True,
            )

            # Parse output: "Mapped file /path/to/file as /dev/loop0."
            output = result.stdout.strip()
            logger.debug(f"udisksctl loop-setup output: {output}")

            # Extract loop device path
            if "as" in output:
                self.loop_device = output.split("as")[-1].strip().rstrip(".")
                # Get the object path for later unmounting
                self._object_path = f"/org/freedesktop/UDisks2/block_devices/{Path(self.loop_device).name}"
            else:
                raise ContainerError(f"Unexpected udisksctl output: {output}")

        except subprocess.CalledProcessError as e:
            raise ContainerError(f"udisksctl loop-setup failed: {e.stderr}") from e
        except FileNotFoundError:
            raise ContainerError(
                "udisksctl not found. Install with: sudo apt install udisks2"
            )

    def _mount_loop_device(self, custom_mount_point: Path | None) -> None:
        """Mount the loop device."""
        if not self.loop_device:
            raise ContainerError("No loop device set up")

        logger.debug(f"Mounting loop device: {self.loop_device}")

        try:
            cmd = [
                "udisksctl",
                "mount",
                "--block-device",
                self.loop_device,
                "--no-user-interaction",
            ]

            result = subprocess.run(
                cmd,
                capture_output=True,
                text=True,
                check=True,
            )

            # Parse output: "Mounted /dev/loop0 at /run/media/user/VAULT"
            output = result.stdout.strip()
            logger.debug(f"udisksctl mount output: {output}")

            if "at" in output:
                mount_path = output.split("at")[-1].strip().rstrip(".")
                self.mount_point = Path(mount_path)
            else:
                raise ContainerError(f"Unexpected udisksctl mount output: {output}")

        except subprocess.CalledProcessError as e:
            raise ContainerError(f"udisksctl mount failed: {e.stderr}") from e

    def unmount(self) -> None:
        """Unmount the vault image.

        Raises:
            ContainerError: If unmounting fails
        """
        if not self.is_mounted():
            logger.warning("Container is not mounted")
            return

        logger.info(f"Unmounting: {self.mount_point}")

        try:
            self._unmount_loop_device()
            self._remove_loop_device()
            logger.info("Unmounted successfully")

        except Exception as e:
            raise ContainerError(f"Failed to unmount: {e}") from e

        finally:
            self._cleanup()

    def _unmount_loop_device(self) -> None:
        """Unmount the loop device."""
        if not self.loop_device:
            return

        logger.debug(f"Unmounting: {self.loop_device}")

        try:
            subprocess.run(
                [
                    "udisksctl",
                    "unmount",
                    "--block-device",
                    self.loop_device,
                    "--no-user-interaction",
                ],
                capture_output=True,
                text=True,
                check=True,
            )
        except subprocess.CalledProcessError as e:
            raise ContainerError(f"udisksctl unmount failed: {e.stderr}") from e

    def _remove_loop_device(self) -> None:
        """Remove the loop device."""
        if not self.loop_device:
            return

        logger.debug(f"Removing loop device: {self.loop_device}")

        try:
            subprocess.run(
                [
                    "udisksctl",
                    "loop-delete",
                    "--block-device",
                    self.loop_device,
                    "--no-user-interaction",
                ],
                capture_output=True,
                text=True,
                check=True,
            )
        except subprocess.CalledProcessError as e:
            # Loop device might already be removed
            logger.warning(f"Failed to remove loop device: {e.stderr}")

    def _cleanup(self) -> None:
        """Reset internal state."""
        self.mount_point = None
        self.loop_device = None
        self._object_path = None

    def is_mounted(self) -> bool:
        """Check if the container is currently mounted.

        Returns:
            True if mounted, False otherwise
        """
        return self.mount_point is not None and self.mount_point.exists()

    def __enter__(self) -> "Container":
|
||||
"""Context manager entry - mount the container."""
|
||||
self.mount()
|
||||
return self
|
||||
|
||||
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
|
||||
"""Context manager exit - unmount the container."""
|
||||
if self.is_mounted():
|
||||
self.unmount()
|
||||
86
src/core/file_entry.py
Normal file
@@ -0,0 +1,86 @@
"""File entry dataclass for representing files in the vault."""

from dataclasses import dataclass
from datetime import datetime
from pathlib import Path


@dataclass(frozen=True)
class FileEntry:
    """Immutable representation of a file in the vault.

    Attributes:
        path: Relative path within the vault (e.g., 'documents/file.txt')
        hash: SHA-256 hash of the file content (e.g., 'sha256:abc123...')
        size: File size in bytes
        created: Creation timestamp (ISO format)
        modified: Last modification timestamp (ISO format)
    """

    path: str
    hash: str
    size: int
    created: datetime
    modified: datetime

    def to_dict(self) -> dict:
        """Convert to dictionary for JSON serialization."""
        return {
            "path": self.path,
            "hash": self.hash,
            "size": self.size,
            "created": self.created.isoformat(),
            "modified": self.modified.isoformat(),
        }

    @classmethod
    def from_dict(cls, data: dict) -> "FileEntry":
        """Create FileEntry from dictionary (JSON deserialization)."""
        return cls(
            path=data["path"],
            hash=data["hash"],
            size=data["size"],
            created=datetime.fromisoformat(data["created"]),
            modified=datetime.fromisoformat(data["modified"]),
        )

    @classmethod
    def from_path(cls, base_path: Path, file_path: Path) -> "FileEntry":
        """Create FileEntry from an actual file on disk.

        Args:
            base_path: Base path of the vault (mount point)
            file_path: Absolute path to the file

        Returns:
            FileEntry with computed hash and timestamps
        """
        import hashlib

        relative_path = file_path.relative_to(base_path)
        stat = file_path.stat()

        # Compute SHA-256 hash in 1 MB chunks
        sha256 = hashlib.sha256()
        with open(file_path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                sha256.update(chunk)

        return cls(
            path=str(relative_path),
            hash=f"sha256:{sha256.hexdigest()}",
            size=stat.st_size,
            created=datetime.fromtimestamp(stat.st_ctime),
            modified=datetime.fromtimestamp(stat.st_mtime),
        )

    def has_changed(self, other: "FileEntry") -> bool:
        """Check if the file has changed compared to another entry.

        Compares hash for content change detection.
        """
        return self.hash != other.hash

    def is_newer_than(self, other: "FileEntry") -> bool:
        """Check if this file is newer than another entry."""
        return self.modified > other.modified
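`FileEntry.from_path` hashes content in 1 MB chunks and stores the digest with a `sha256:` prefix. A minimal standalone sketch of that convention (stdlib only; `vault_hash` is an illustrative name, not part of the module):

```python
import hashlib
import tempfile
from pathlib import Path


def vault_hash(file_path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Chunked SHA-256 with the 'sha256:' prefix used by FileEntry.from_path."""
    sha256 = hashlib.sha256()
    with open(file_path, "rb") as f:
        # Read fixed-size chunks so large files never load fully into memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha256.update(chunk)
    return f"sha256:{sha256.hexdigest()}"


# Demo: hash a small temporary file
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello vault")
digest = vault_hash(Path(tmp.name))
```

Because the prefix travels with the digest, a future manifest version could switch algorithms without ambiguity.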
238
src/core/file_sync.py
Normal file
@@ -0,0 +1,238 @@
"""File synchronization module for copying files between replicas.

Provides chunked copy with progress callbacks for large files.
"""

import shutil
from collections.abc import Callable
from dataclasses import dataclass
from pathlib import Path

from loguru import logger

# Default chunk size: 1 MB
DEFAULT_CHUNK_SIZE = 1024 * 1024


@dataclass
class CopyProgress:
    """Progress information for a file copy operation."""

    src_path: Path
    dst_path: Path
    bytes_copied: int
    total_bytes: int

    @property
    def percent(self) -> float:
        """Return progress as a percentage (0-100)."""
        if self.total_bytes == 0:
            return 100.0
        return (self.bytes_copied / self.total_bytes) * 100

    @property
    def is_complete(self) -> bool:
        """Check if the copy is complete."""
        return self.bytes_copied >= self.total_bytes


# Type alias for progress callback
ProgressCallback = Callable[[CopyProgress], None]


def copy_file_with_progress(
    src: Path,
    dst: Path,
    callback: ProgressCallback | None = None,
    chunk_size: int = DEFAULT_CHUNK_SIZE,
) -> None:
    """Copy a file with a progress callback.

    Args:
        src: Source file path
        dst: Destination file path
        callback: Optional callback called after each chunk
        chunk_size: Size of chunks to read/write (default 1 MB)

    Raises:
        FileNotFoundError: If the source file doesn't exist
        IsADirectoryError: If the source or destination is a directory
        PermissionError: If permission is denied
    """
    if not src.exists():
        raise FileNotFoundError(f"Source file not found: {src}")

    if src.is_dir():
        raise IsADirectoryError(f"Source is a directory: {src}")

    if dst.exists() and dst.is_dir():
        raise IsADirectoryError(f"Destination is a directory: {dst}")

    # Create parent directories if needed
    dst.parent.mkdir(parents=True, exist_ok=True)

    total_bytes = src.stat().st_size
    bytes_copied = 0

    logger.debug(f"Copying {src} -> {dst} ({total_bytes} bytes)")

    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        while chunk := fsrc.read(chunk_size):
            fdst.write(chunk)
            bytes_copied += len(chunk)

            if callback:
                progress = CopyProgress(
                    src_path=src,
                    dst_path=dst,
                    bytes_copied=bytes_copied,
                    total_bytes=total_bytes,
                )
                callback(progress)

    # Preserve file metadata (timestamps, permissions)
    shutil.copystat(src, dst)

    logger.debug(f"Copy complete: {src} -> {dst}")


def copy_directory_with_progress(
    src: Path,
    dst: Path,
    callback: ProgressCallback | None = None,
    chunk_size: int = DEFAULT_CHUNK_SIZE,
) -> None:
    """Recursively copy a directory with a progress callback.

    Args:
        src: Source directory path
        dst: Destination directory path
        callback: Optional callback called after each chunk of each file
        chunk_size: Size of chunks to read/write (default 1 MB)

    Raises:
        FileNotFoundError: If the source directory doesn't exist
        NotADirectoryError: If the source is not a directory
    """
    if not src.exists():
        raise FileNotFoundError(f"Source directory not found: {src}")

    if not src.is_dir():
        raise NotADirectoryError(f"Source is not a directory: {src}")

    # Create destination directory
    dst.mkdir(parents=True, exist_ok=True)

    # Copy directory metadata
    shutil.copystat(src, dst)

    for item in src.iterdir():
        src_item = item
        dst_item = dst / item.name

        if item.is_dir():
            copy_directory_with_progress(src_item, dst_item, callback, chunk_size)
        else:
            copy_file_with_progress(src_item, dst_item, callback, chunk_size)


def delete_file(path: Path) -> None:
    """Delete a file.

    Args:
        path: File path to delete

    Raises:
        FileNotFoundError: If the file doesn't exist
        IsADirectoryError: If the path is a directory
    """
    if not path.exists():
        raise FileNotFoundError(f"File not found: {path}")

    if path.is_dir():
        raise IsADirectoryError(f"Path is a directory, use delete_directory: {path}")

    path.unlink()
    logger.debug(f"Deleted file: {path}")


def delete_directory(path: Path) -> None:
    """Recursively delete a directory.

    Args:
        path: Directory path to delete

    Raises:
        FileNotFoundError: If the directory doesn't exist
        NotADirectoryError: If the path is not a directory
    """
    if not path.exists():
        raise FileNotFoundError(f"Directory not found: {path}")

    if not path.is_dir():
        raise NotADirectoryError(f"Path is not a directory: {path}")

    shutil.rmtree(path)
    logger.debug(f"Deleted directory: {path}")


def move_file(src: Path, dst: Path) -> None:
    """Move/rename a file.

    Args:
        src: Source file path
        dst: Destination file path

    Raises:
        FileNotFoundError: If the source doesn't exist
    """
    if not src.exists():
        raise FileNotFoundError(f"Source not found: {src}")

    # Create parent directories if needed
    dst.parent.mkdir(parents=True, exist_ok=True)

    src.rename(dst)
    logger.debug(f"Moved: {src} -> {dst}")


def sync_file(
    src: Path,
    dst: Path,
    callback: ProgressCallback | None = None,
    chunk_size: int = DEFAULT_CHUNK_SIZE,
) -> bool:
    """Synchronize a single file from source to destination.

    Only copies if the source is newer or the destination doesn't exist.

    Args:
        src: Source file path
        dst: Destination file path
        callback: Optional progress callback
        chunk_size: Chunk size for copying

    Returns:
        True if the file was copied, False if already up to date

    Raises:
        FileNotFoundError: If the source doesn't exist
    """
    if not src.exists():
        raise FileNotFoundError(f"Source file not found: {src}")

    # Always copy if the destination doesn't exist
    if not dst.exists():
        copy_file_with_progress(src, dst, callback, chunk_size)
        return True

    # Compare modification times
    src_mtime = src.stat().st_mtime
    dst_mtime = dst.stat().st_mtime

    if src_mtime > dst_mtime:
        copy_file_with_progress(src, dst, callback, chunk_size)
        return True

    logger.debug(f"File already up to date: {dst}")
    return False
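The chunked copy loop in `copy_file_with_progress` can be sketched standalone (stdlib only; `chunked_copy` and its callback signature are illustrative simplifications of the module's `ProgressCallback`):

```python
import shutil
from pathlib import Path
from tempfile import TemporaryDirectory


def chunked_copy(src: Path, dst: Path, chunk_size: int = 1024 * 1024,
                 on_chunk=None) -> int:
    """Copy src to dst in chunks, calling on_chunk(copied, total) per chunk."""
    total = src.stat().st_size
    copied = 0
    dst.parent.mkdir(parents=True, exist_ok=True)
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        while chunk := fsrc.read(chunk_size):
            fdst.write(chunk)
            copied += len(chunk)
            if on_chunk:
                on_chunk(copied, total)
    # Preserve mtime so later mtime-based sync comparisons hold
    shutil.copystat(src, dst)
    return copied


# Demo with a tiny chunk size to force several callbacks
tmp = TemporaryDirectory()
root = Path(tmp.name)
src = root / "a.bin"
src.write_bytes(b"x" * 10)
percents: list[float] = []
chunked_copy(src, root / "b.bin", chunk_size=4,
             on_chunk=lambda c, t: percents.append(100 * c / t))
# percents -> [40.0, 80.0, 100.0]
```

Preserving the source mtime via `copystat` is what lets `sync_file` later decide "already up to date" by comparing `st_mtime` alone.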
222
src/core/file_watcher.py
Normal file
222
src/core/file_watcher.py
Normal file
@@ -0,0 +1,222 @@
"""File watcher module for detecting changes in vault mount point.

Uses watchdog library with inotify backend on Linux.
"""

from collections.abc import Callable
from dataclasses import dataclass
from enum import Enum
from pathlib import Path
from typing import Any

from loguru import logger
from watchdog.events import (
    DirCreatedEvent,
    DirDeletedEvent,
    DirMovedEvent,
    FileClosedEvent,
    FileCreatedEvent,
    FileDeletedEvent,
    FileModifiedEvent,
    FileMovedEvent,
    FileSystemEvent,
    FileSystemEventHandler,
)
from watchdog.observers import Observer


class EventType(Enum):
    """Types of file system events."""

    CREATED = "created"
    MODIFIED = "modified"
    DELETED = "deleted"
    MOVED = "moved"


@dataclass(frozen=True)
class FileEvent:
    """Represents a file system event."""

    event_type: EventType
    path: str
    is_directory: bool
    dest_path: str | None = None  # Only for MOVED events

    def __str__(self) -> str:
        if self.event_type == EventType.MOVED:
            return f"{self.event_type.value}: {self.path} -> {self.dest_path}"
        return f"{self.event_type.value}: {self.path}"


# Type alias for file event callbacks
FileEventCallback = Callable[[FileEvent], None]


def _ensure_str(path: str | bytes) -> str:
    """Convert bytes path to str if necessary."""
    if isinstance(path, bytes):
        return path.decode("utf-8", errors="replace")
    return path


class VaultEventHandler(FileSystemEventHandler):
    """Handles file system events and converts them to FileEvent objects."""

    def __init__(
        self,
        base_path: Path,
        callback: FileEventCallback,
        ignore_patterns: list[str] | None = None,
    ) -> None:
        super().__init__()
        self.base_path = base_path
        self.callback = callback
        self.ignore_patterns = ignore_patterns or [".vault"]

    def _should_ignore(self, path: str) -> bool:
        """Check if path should be ignored."""
        rel_path = Path(path).relative_to(self.base_path)
        for pattern in self.ignore_patterns:
            if pattern in rel_path.parts:
                return True
        return False

    def _get_relative_path(self, path: str) -> str:
        """Convert absolute path to relative path from base."""
        return str(Path(path).relative_to(self.base_path))

    def _emit_event(
        self,
        event_type: EventType,
        src_path: str,
        is_directory: bool,
        dest_path: str | None = None,
    ) -> None:
        """Create and emit a FileEvent."""
        if self._should_ignore(src_path):
            return

        rel_path = self._get_relative_path(src_path)
        rel_dest = self._get_relative_path(dest_path) if dest_path else None

        file_event = FileEvent(
            event_type=event_type,
            path=rel_path,
            is_directory=is_directory,
            dest_path=rel_dest,
        )

        logger.debug(f"File event: {file_event}")
        self.callback(file_event)

    def on_created(self, event: FileSystemEvent) -> None:
        """Handle file/directory creation."""
        if isinstance(event, (FileCreatedEvent, DirCreatedEvent)):
            self._emit_event(
                EventType.CREATED,
                _ensure_str(event.src_path),
                event.is_directory,
            )

    def on_modified(self, event: FileSystemEvent) -> None:
        """Handle file modification."""
        # Only track file modifications, not directory modifications
        if isinstance(event, FileModifiedEvent) and not event.is_directory:
            self._emit_event(
                EventType.MODIFIED,
                _ensure_str(event.src_path),
                is_directory=False,
            )

    def on_deleted(self, event: FileSystemEvent) -> None:
        """Handle file/directory deletion."""
        if isinstance(event, (FileDeletedEvent, DirDeletedEvent)):
            self._emit_event(
                EventType.DELETED,
                _ensure_str(event.src_path),
                event.is_directory,
            )

    def on_moved(self, event: FileSystemEvent) -> None:
        """Handle file/directory move/rename."""
        if isinstance(event, (FileMovedEvent, DirMovedEvent)):
            self._emit_event(
                EventType.MOVED,
                _ensure_str(event.src_path),
                event.is_directory,
                dest_path=_ensure_str(event.dest_path),
            )

    def on_closed(self, event: FileSystemEvent) -> None:
        """Handle file close - emit as modified for write operations."""
        # FileClosedEvent indicates a file was closed after writing.
        # This is more reliable than on_modified for detecting actual saves.
        if isinstance(event, FileClosedEvent) and not event.is_directory:
            # We emit MODIFIED because the file content was finalized
            self._emit_event(
                EventType.MODIFIED,
                _ensure_str(event.src_path),
                is_directory=False,
            )


class FileWatcher:
    """Watches a directory for file system changes."""

    def __init__(
        self,
        watch_path: Path,
        callback: FileEventCallback,
        ignore_patterns: list[str] | None = None,
    ) -> None:
        self.watch_path = watch_path
        self.callback = callback
        self.ignore_patterns = ignore_patterns or [".vault"]
        self._observer: Any = None
        self._running = False

    def start(self) -> None:
        """Start watching for file system events."""
        if self._running:
            logger.warning("FileWatcher already running")
            return

        if not self.watch_path.exists():
            raise FileNotFoundError(f"Watch path does not exist: {self.watch_path}")

        handler = VaultEventHandler(
            base_path=self.watch_path,
            callback=self.callback,
            ignore_patterns=self.ignore_patterns,
        )

        self._observer = Observer()
        self._observer.schedule(handler, str(self.watch_path), recursive=True)
        self._observer.start()
        self._running = True

        logger.info(f"FileWatcher started for: {self.watch_path}")

    def stop(self) -> None:
        """Stop watching for file system events."""
        if not self._running or self._observer is None:
            return

        self._observer.stop()
        self._observer.join(timeout=5.0)
        self._observer = None
        self._running = False

        logger.info("FileWatcher stopped")

    def is_running(self) -> bool:
        """Check if watcher is currently running."""
        return self._running

    def __enter__(self) -> "FileWatcher":
        self.start()
        return self

    def __exit__(self, exc_type: type | None, exc_val: Exception | None, exc_tb: object) -> None:
        self.stop()
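The ignore logic in `VaultEventHandler._should_ignore` matches a pattern only when it equals a whole component of the path relative to the watch root. A standalone sketch of that rule (stdlib only; `should_ignore` is an illustrative free function, not the class method):

```python
from pathlib import Path


def should_ignore(base: Path, abs_path: str, patterns: list[str]) -> bool:
    """Mirror _should_ignore: a pattern matches when it equals any
    component of the path relative to the base directory."""
    rel = Path(abs_path).relative_to(base)
    return any(p in rel.parts for p in patterns)


base = Path("/mnt/vault")
# Anything under the metadata directory is suppressed...
ignored = should_ignore(base, "/mnt/vault/.vault/manifest.json", [".vault"])
# ...while regular vault content passes through
kept = should_ignore(base, "/mnt/vault/docs/notes.txt", [".vault"])
```

Because the test is component equality, a file named `.vault.bak` would not be ignored; only an exact `.vault` directory segment is.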
202
src/core/image_manager.py
Normal file
@@ -0,0 +1,202 @@
"""Image manager for creating and resizing vault disk images."""

import subprocess
from pathlib import Path

from loguru import logger


class ImageError(Exception):
    """Raised when an image operation fails."""


def create_sparse_image(path: Path, size_mb: int) -> None:
    """Create a sparse disk image with an exFAT filesystem.

    Args:
        path: Path where to create the .vault file
        size_mb: Size of the image in megabytes

    Raises:
        ImageError: If creation fails
    """
    logger.info(f"Creating sparse image: {path} ({size_mb} MB)")

    if path.exists():
        raise ImageError(f"File already exists: {path}")

    try:
        # Create sparse file
        with open(path, "wb") as f:
            f.seek(size_mb * 1024 * 1024 - 1)
            f.write(b"\0")

        logger.debug(f"Sparse file created: {path}")

        # Format as exFAT
        _format_exfat(path)

        logger.info(f"Image created successfully: {path}")

    except Exception as e:
        # Cleanup on failure
        if path.exists():
            path.unlink()
        raise ImageError(f"Failed to create image: {e}") from e


def _format_exfat(path: Path) -> None:
    """Format an image as an exFAT filesystem.

    Args:
        path: Path to the image file

    Raises:
        ImageError: If formatting fails
    """
    logger.debug(f"Formatting as exFAT: {path}")

    try:
        result = subprocess.run(
            ["mkfs.exfat", "-n", "VAULT", str(path)],
            capture_output=True,
            text=True,
            check=True,
        )
        logger.debug(f"mkfs.exfat output: {result.stdout}")
    except subprocess.CalledProcessError as e:
        raise ImageError(f"mkfs.exfat failed: {e.stderr}") from e
    except FileNotFoundError:
        raise ImageError(
            "mkfs.exfat not found. Install with: sudo apt install exfatprogs"
        )


def resize_image(path: Path, new_size_mb: int) -> None:
    """Resize an existing vault image.

    The image must not be mounted during resize.

    Args:
        path: Path to the .vault file
        new_size_mb: New size in megabytes (must be larger than current)

    Raises:
        ImageError: If resize fails
    """
    logger.info(f"Resizing image: {path} to {new_size_mb} MB")

    if not path.exists():
        raise ImageError(f"Image not found: {path}")

    current_size = path.stat().st_size
    new_size = new_size_mb * 1024 * 1024

    if new_size <= current_size:
        raise ImageError(
            f"New size ({new_size_mb} MB) must be larger than current "
            f"({current_size // (1024 * 1024)} MB)"
        )

    try:
        # Extend the sparse file
        with open(path, "r+b") as f:
            f.seek(new_size - 1)
            f.write(b"\0")

        logger.debug(f"File extended to {new_size_mb} MB")

        # Resize exFAT filesystem
        _resize_exfat(path)

        logger.info(f"Image resized successfully: {path}")

    except Exception as e:
        raise ImageError(f"Failed to resize image: {e}") from e


def _resize_exfat(path: Path) -> None:
    """Resize the exFAT filesystem to fill the image.

    Note: exfatprogs doesn't ship a resize tool. We run fsck.exfat to
    check integrity, and the filesystem will use the new space on the
    next mount.

    Args:
        path: Path to the image file

    Raises:
        ImageError: If resize fails
    """
    logger.debug(f"Checking exFAT filesystem: {path}")

    try:
        # Run fsck to ensure filesystem integrity
        result = subprocess.run(
            ["fsck.exfat", "-a", str(path)],
            capture_output=True,
            text=True,
        )
        # fsck.exfat returns 0 for clean, 1 for fixed errors
        if result.returncode not in (0, 1):
            raise ImageError(f"fsck.exfat failed: {result.stderr}")

        logger.debug(f"fsck.exfat output: {result.stdout}")

    except FileNotFoundError:
        raise ImageError(
            "fsck.exfat not found. Install with: sudo apt install exfatprogs"
        )


def get_image_info(path: Path) -> dict:
    """Get information about a vault image.

    Args:
        path: Path to the .vault file

    Returns:
        Dictionary with image information

    Raises:
        ImageError: If reading info fails
    """
    if not path.exists():
        raise ImageError(f"Image not found: {path}")

    stat = path.stat()

    # Get actual size on disk (sparse file may use less)
    try:
        # st_blocks is in 512-byte units
        actual_size = stat.st_blocks * 512
    except AttributeError:
        actual_size = stat.st_size

    return {
        "path": str(path),
        "size_mb": stat.st_size // (1024 * 1024),
        "actual_size_mb": actual_size // (1024 * 1024),
        "sparse_ratio": actual_size / stat.st_size if stat.st_size > 0 else 0,
    }


def delete_image(path: Path) -> None:
    """Delete a vault image.

    Args:
        path: Path to the .vault file

    Raises:
        ImageError: If deletion fails
    """
    logger.info(f"Deleting image: {path}")

    if not path.exists():
        raise ImageError(f"Image not found: {path}")

    try:
        path.unlink()
        logger.info(f"Image deleted: {path}")
    except OSError as e:
        raise ImageError(f"Failed to delete image: {e}") from e
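The seek-past-end trick in `create_sparse_image` allocates almost no disk space until data is written, which is also what `get_image_info` measures via `st_blocks`. A standalone sketch (stdlib only, Linux-style sparse-file semantics assumed; `make_sparse` is an illustrative name):

```python
import tempfile
from pathlib import Path


def make_sparse(path: Path, size_mb: int) -> None:
    """Allocate a sparse file: seek past the end and write one byte,
    as create_sparse_image does before formatting."""
    with open(path, "wb") as f:
        f.seek(size_mb * 1024 * 1024 - 1)
        f.write(b"\0")


tmp = tempfile.TemporaryDirectory()
img = Path(tmp.name) / "demo.vault"
make_sparse(img, 8)

stat = img.stat()
logical = stat.st_size                        # 8 MiB apparent size
actual = getattr(stat, "st_blocks", 0) * 512  # blocks actually allocated (Unix)
```

On a sparse-capable filesystem, `actual` stays a few kilobytes while `logical` reports the full 8 MiB; this is the gap `sparse_ratio` captures.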
142
src/core/lock.py
Normal file
@@ -0,0 +1,142 @@
"""Lock mechanism for exclusive vault access."""

import fcntl
import os
from pathlib import Path


class VaultLockError(Exception):
    """Raised when vault lock cannot be acquired."""


class VaultLock:
    """Exclusive lock for vault access.

    Uses fcntl.flock for file-based locking.
    Only one instance can have the lock at a time.

    Usage:
        lock = VaultLock(mount_point / ".vault" / "lock")
        if lock.acquire():
            try:
                # work with vault
            finally:
                lock.release()
    """

    def __init__(self, lock_path: Path) -> None:
        """Initialize lock.

        Args:
            lock_path: Path to the lock file
        """
        self.lock_path = lock_path
        self._lock_file: int | None = None

    def acquire(self) -> bool:
        """Try to acquire exclusive lock.

        Returns:
            True if lock acquired, False if vault is already locked

        Raises:
            VaultLockError: If lock acquisition fails for an unexpected reason
        """
        try:
            # Ensure parent directory exists
            self.lock_path.parent.mkdir(parents=True, exist_ok=True)

            # Open lock file (create if it doesn't exist)
            self._lock_file = os.open(
                str(self.lock_path),
                os.O_RDWR | os.O_CREAT,
                0o644,
            )

            # Try to acquire exclusive lock (non-blocking)
            fcntl.flock(self._lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)

            # Write PID to lock file
            os.ftruncate(self._lock_file, 0)
            os.write(self._lock_file, str(os.getpid()).encode())

            return True

        except BlockingIOError:
            # Lock is held by another process
            if self._lock_file is not None:
                os.close(self._lock_file)
                self._lock_file = None
            return False

        except OSError as e:
            if self._lock_file is not None:
                os.close(self._lock_file)
                self._lock_file = None
            raise VaultLockError(f"Failed to acquire lock: {e}") from e

    def release(self) -> None:
        """Release the lock.

        Safe to call even if lock is not held.
        """
        if self._lock_file is not None:
            try:
                fcntl.flock(self._lock_file, fcntl.LOCK_UN)
                os.close(self._lock_file)
            except OSError:
                pass  # Ignore errors during cleanup
            finally:
                self._lock_file = None

            # Remove lock file
            try:
                self.lock_path.unlink()
            except FileNotFoundError:
                pass

    def is_locked(self) -> bool:
        """Check if vault is currently locked (by any process).

        Returns:
            True if locked, False otherwise
        """
        if not self.lock_path.exists():
            return False

        try:
            fd = os.open(str(self.lock_path), os.O_RDONLY)
            try:
                fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                # We got the lock, so it wasn't locked
                fcntl.flock(fd, fcntl.LOCK_UN)
                return False
            except BlockingIOError:
                # Lock is held
                return True
            finally:
                os.close(fd)
        except OSError:
            return False

    def get_owner_pid(self) -> int | None:
        """Get PID of the process holding the lock.

        Returns:
            PID if lock file exists and contains a valid PID, None otherwise
        """
        try:
            content = self.lock_path.read_text().strip()
            return int(content)
        except (FileNotFoundError, ValueError):
            return None

    def __enter__(self) -> "VaultLock":
        """Context manager entry."""
        if not self.acquire():
            raise VaultLockError("Vault is already locked by another process")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb) -> None:
        """Context manager exit."""
        self.release()
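The exclusivity that `VaultLock` relies on comes from `flock` semantics: locks belong to the open file description, so a second `open()` of the same file cannot take `LOCK_EX` while the first holds it. A minimal stdlib sketch of the acquire/contend/release cycle (Unix-only, as in the module):

```python
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.mkdtemp(), "lock")

# First opener takes the exclusive lock and records its PID,
# mirroring VaultLock.acquire
fd1 = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o644)
fcntl.flock(fd1, fcntl.LOCK_EX | fcntl.LOCK_NB)
os.write(fd1, str(os.getpid()).encode())

# A second open file description cannot take the lock while fd1 holds it;
# the non-blocking attempt fails immediately with BlockingIOError
fd2 = os.open(lock_path, os.O_RDWR)
try:
    fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_acquired = True
except BlockingIOError:
    second_acquired = False

# Releasing fd1 frees the lock for the waiting opener
fcntl.flock(fd1, fcntl.LOCK_UN)
fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
reacquired = True

os.close(fd1)
os.close(fd2)
```

The `LOCK_NB` flag is what makes `acquire()` return `False` instead of blocking, so the tray UI can report "vault in use" without hanging.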
238
src/core/manifest.py
Normal file
@@ -0,0 +1,238 @@
"""Manifest dataclass for vault metadata."""

import json
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Literal
from uuid import uuid4

from src.core.file_entry import FileEntry


LocationStatus = Literal["active", "unreachable"]


@dataclass
class Location:
    """Represents a vault replica location.

    Attributes:
        path: Absolute path to the .vault file
        last_seen: Last time this location was accessible
        status: Current status (active/unreachable)
    """

    path: str
    last_seen: datetime
    status: LocationStatus = "active"

    def to_dict(self) -> dict:
        """Convert to dictionary for JSON serialization."""
        return {
            "path": self.path,
            "last_seen": self.last_seen.isoformat(),
            "status": self.status,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "Location":
        """Create Location from dictionary."""
        return cls(
            path=data["path"],
            last_seen=datetime.fromisoformat(data["last_seen"]),
            status=data["status"],
        )


@dataclass
class Manifest:
    """Vault manifest containing all metadata.

    This is stored as .vault/manifest.json inside each vault image.

    Attributes:
        vault_id: Unique identifier for the vault (UUID)
        vault_name: Human-readable name
        version: Manifest format version
        created: Creation timestamp
        last_modified: Last modification timestamp
        image_size_mb: Size of the vault image in MB
        locations: List of all replica locations
        files: List of all files in the vault
    """

    vault_id: str
    vault_name: str
    image_size_mb: int
    created: datetime
    last_modified: datetime
    version: int = 1
    locations: list[Location] = field(default_factory=list)
    files: list[FileEntry] = field(default_factory=list)

    @classmethod
    def create_new(cls, vault_name: str, image_size_mb: int, location_path: str) -> "Manifest":
        """Create a new manifest for a fresh vault.

        Args:
            vault_name: Human-readable name for the vault
            image_size_mb: Size of the vault image in MB
            location_path: Path to the first .vault file

        Returns:
            New Manifest instance
        """
        now = datetime.now()
        resolved = str(Path(location_path).resolve())
        return cls(
            vault_id=str(uuid4()),
            vault_name=vault_name,
            image_size_mb=image_size_mb,
            created=now,
            last_modified=now,
            locations=[Location(path=resolved, last_seen=now, status="active")],
            files=[],
        )

    def to_dict(self) -> dict:
        """Convert to dictionary for JSON serialization."""
        return {
            "vault_id": self.vault_id,
            "vault_name": self.vault_name,
            "version": self.version,
            "created": self.created.isoformat(),
            "last_modified": self.last_modified.isoformat(),
            "image_size_mb": self.image_size_mb,
            "locations": [loc.to_dict() for loc in self.locations],
            "files": [f.to_dict() for f in self.files],
        }

    @classmethod
    def from_dict(cls, data: dict) -> "Manifest":
        """Create Manifest from dictionary."""
        return cls(
            vault_id=data["vault_id"],
            vault_name=data["vault_name"],
            version=data["version"],
            created=datetime.fromisoformat(data["created"]),
            last_modified=datetime.fromisoformat(data["last_modified"]),
            image_size_mb=data["image_size_mb"],
            locations=[Location.from_dict(loc) for loc in data["locations"]],
            files=[FileEntry.from_dict(f) for f in data["files"]],
        )

    def save(self, mount_point: Path) -> None:
        """Save manifest to .vault/manifest.json.

        Args:
            mount_point: Path where the vault is mounted
        """
        vault_dir = mount_point / ".vault"
        vault_dir.mkdir(exist_ok=True)

        manifest_path = vault_dir / "manifest.json"
        with open(manifest_path, "w", encoding="utf-8") as f:
            json.dump(self.to_dict(), f, indent=2)

    @classmethod
    def load(cls, mount_point: Path) -> "Manifest":
        """Load manifest from .vault/manifest.json.

        Args:
            mount_point: Path where the vault is mounted
|
||||
|
||||
Returns:
|
||||
Loaded Manifest instance
|
||||
|
||||
Raises:
|
||||
FileNotFoundError: If manifest doesn't exist
|
||||
"""
|
||||
manifest_path = mount_point / ".vault" / "manifest.json"
|
||||
with open(manifest_path, "r", encoding="utf-8") as f:
|
||||
data = json.load(f)
|
||||
return cls.from_dict(data)
|
||||
|
||||
def add_location(self, path: str) -> None:
|
||||
"""Add a new replica location.
|
||||
|
||||
Args:
|
||||
path: Absolute path to the .vault file
|
||||
"""
|
||||
resolved = str(Path(path).resolve())
|
||||
# Don't add duplicate locations
|
||||
for loc in self.locations:
|
||||
if str(Path(loc.path).resolve()) == resolved:
|
||||
loc.status = "active"
|
||||
loc.last_seen = datetime.now()
|
||||
self.last_modified = datetime.now()
|
||||
return
|
||||
|
||||
self.locations.append(
|
||||
Location(path=resolved, last_seen=datetime.now(), status="active")
|
||||
)
|
||||
self.last_modified = datetime.now()
|
||||
|
||||
def update_location_status(self, path: str, status: LocationStatus) -> None:
|
||||
"""Update status of a location.
|
||||
|
||||
Args:
|
||||
path: Path to the location
|
||||
status: New status
|
||||
"""
|
||||
resolved = str(Path(path).resolve())
|
||||
for loc in self.locations:
|
||||
if str(Path(loc.path).resolve()) == resolved:
|
||||
loc.status = status
|
||||
if status == "active":
|
||||
loc.last_seen = datetime.now()
|
||||
break
|
||||
self.last_modified = datetime.now()
|
||||
|
||||
def add_file(self, file_entry: FileEntry) -> None:
|
||||
"""Add or update a file entry.
|
||||
|
||||
Args:
|
||||
file_entry: File entry to add/update
|
||||
"""
|
||||
# Remove existing entry with same path
|
||||
self.files = [f for f in self.files if f.path != file_entry.path]
|
||||
self.files.append(file_entry)
|
||||
self.last_modified = datetime.now()
|
||||
|
||||
def add_file_from_path(self, base_path: Path, file_path: Path) -> FileEntry:
|
||||
"""Add a file entry from a file path.
|
||||
|
||||
Args:
|
||||
base_path: Base path (mount point) for relative path calculation
|
||||
file_path: Absolute path to the file
|
||||
|
||||
Returns:
|
||||
The created FileEntry
|
||||
"""
|
||||
entry = FileEntry.from_path(base_path, file_path)
|
||||
self.add_file(entry)
|
||||
return entry
|
||||
|
||||
def remove_file(self, path: str) -> None:
|
||||
"""Remove a file entry by path.
|
||||
|
||||
Args:
|
||||
path: Relative path of the file to remove
|
||||
"""
|
||||
self.files = [f for f in self.files if f.path != path]
|
||||
self.last_modified = datetime.now()
|
||||
|
||||
def get_file(self, path: str) -> FileEntry | None:
|
||||
"""Get file entry by path.
|
||||
|
||||
Args:
|
||||
path: Relative path of the file
|
||||
|
||||
Returns:
|
||||
FileEntry if found, None otherwise
|
||||
"""
|
||||
for f in self.files:
|
||||
if f.path == path:
|
||||
return f
|
||||
return None
|
||||
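Since `Manifest.save()`/`load()` above just round-trip `to_dict()` through JSON, the on-disk format can be sketched without the project's classes. A minimal standalone sketch (field names taken from `to_dict()`; `FileEntry` entries omitted for brevity, and the example values are illustrative only):

```python
import json
from datetime import datetime
from uuid import uuid4

# Build a dict with the same shape Manifest.to_dict() produces
# (datetimes are stored as ISO-8601 strings, not native types).
now = datetime.now()
manifest = {
    "vault_id": str(uuid4()),
    "vault_name": "Documents",
    "version": 1,
    "created": now.isoformat(),
    "last_modified": now.isoformat(),
    "image_size_mb": 512,
    "locations": [
        {
            "path": "/media/usb/documents.vault",
            "last_seen": now.isoformat(),
            "status": "active",
        }
    ],
    "files": [],
}

# Round-trip through JSON text, as save()/load() do via .vault/manifest.json
restored = json.loads(json.dumps(manifest, indent=2))
assert restored["vault_name"] == "Documents"
# isoformat()/fromisoformat() preserve microseconds, so the timestamp survives
assert datetime.fromisoformat(restored["created"]) == now
```

This is why the manifest stays diff-able and hand-inspectable inside each replica image: everything is plain JSON with ISO-8601 timestamps.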
427 src/core/sync_manager.py (new file)
@@ -0,0 +1,427 @@
"""Synchronization manager for coordinating changes between replicas.

Handles:
- Change detection via file watcher
- Propagation of changes to all replicas
- Manifest comparison and sync on reconnect
"""

from collections.abc import Callable
from dataclasses import dataclass
from enum import Enum
from pathlib import Path

from loguru import logger

from src.core.file_sync import (
    ProgressCallback,
    copy_file_with_progress,
    delete_directory,
    delete_file,
    move_file,
)
from src.core.file_watcher import EventType, FileEvent, FileWatcher
from src.core.manifest import Manifest


class SyncStatus(Enum):
    """Status of synchronization."""

    IDLE = "idle"
    SYNCING = "syncing"
    ERROR = "error"


@dataclass
class SyncEvent:
    """Event describing a sync operation."""

    event_type: EventType
    relative_path: str
    source_mount: Path
    target_mounts: list[Path]
    success: bool = True
    error: str | None = None


# Type alias for sync event callbacks
SyncEventCallback = Callable[[SyncEvent], None]


@dataclass
class ReplicaMount:
    """Represents a mounted replica."""

    mount_point: Path
    image_path: Path
    is_primary: bool = False

    def get_file_path(self, relative_path: str) -> Path:
        """Get full path for a relative file path."""
        return self.mount_point / relative_path


class SyncManager:
    """Manages synchronization between multiple vault replicas."""

    def __init__(
        self,
        on_sync_event: SyncEventCallback | None = None,
        on_progress: ProgressCallback | None = None,
    ) -> None:
        self._replicas: list[ReplicaMount] = []
        self._primary: ReplicaMount | None = None
        self._watcher: FileWatcher | None = None
        self._status = SyncStatus.IDLE
        self._on_sync_event = on_sync_event
        self._on_progress = on_progress
        self._paused = False

    @property
    def status(self) -> SyncStatus:
        """Get current sync status."""
        return self._status

    @property
    def replica_count(self) -> int:
        """Get number of registered replicas."""
        return len(self._replicas)

    @property
    def primary_mount(self) -> Path | None:
        """Get primary mount point."""
        return self._primary.mount_point if self._primary else None

    def add_replica(
        self,
        mount_point: Path,
        image_path: Path,
        is_primary: bool = False,
    ) -> None:
        """Add a replica to the sync manager.

        Args:
            mount_point: Path where replica is mounted
            image_path: Path to the .vault image file
            is_primary: If True, this is the primary replica (user-facing)
        """
        replica = ReplicaMount(
            mount_point=mount_point,
            image_path=image_path,
            is_primary=is_primary,
        )
        self._replicas.append(replica)

        if is_primary:
            if self._primary is not None:
                logger.warning("Replacing existing primary replica")
            self._primary = replica
            logger.info(f"Primary replica set: {mount_point}")
        else:
            logger.info(f"Secondary replica added: {mount_point}")

    def remove_replica(self, mount_point: Path) -> bool:
        """Remove a replica from the sync manager.

        Args:
            mount_point: Mount point of replica to remove

        Returns:
            True if replica was removed, False if not found
        """
        for i, replica in enumerate(self._replicas):
            if replica.mount_point == mount_point:
                if replica.is_primary:
                    self._primary = None
                    self.stop_watching()
                del self._replicas[i]
                logger.info(f"Replica removed: {mount_point}")
                return True
        return False

    def start_watching(self) -> None:
        """Start watching the primary replica for changes."""
        if self._primary is None:
            raise ValueError("No primary replica set")

        if self._watcher is not None:
            logger.warning("Watcher already running")
            return

        self._watcher = FileWatcher(
            watch_path=self._primary.mount_point,
            callback=self._handle_file_event,
            ignore_patterns=[".vault"],
        )
        self._watcher.start()
        logger.info(f"Started watching: {self._primary.mount_point}")

    def stop_watching(self) -> None:
        """Stop watching for changes."""
        if self._watcher is not None:
            self._watcher.stop()
            self._watcher = None
            logger.info("Stopped watching")

    def pause_sync(self) -> None:
        """Temporarily pause synchronization."""
        self._paused = True
        logger.debug("Sync paused")

    def resume_sync(self) -> None:
        """Resume synchronization."""
        self._paused = False
        logger.debug("Sync resumed")

    def _handle_file_event(self, event: FileEvent) -> None:
        """Handle a file system event from the watcher."""
        if self._paused:
            logger.debug(f"Sync paused, ignoring event: {event}")
            return

        logger.debug(f"Handling event: {event}")

        try:
            self._status = SyncStatus.SYNCING

            if event.event_type == EventType.CREATED:
                self._propagate_create(event)
            elif event.event_type == EventType.MODIFIED:
                self._propagate_modify(event)
            elif event.event_type == EventType.DELETED:
                self._propagate_delete(event)
            elif event.event_type == EventType.MOVED:
                self._propagate_move(event)

            self._status = SyncStatus.IDLE

        except Exception as e:
            logger.error(f"Error handling event {event}: {e}")
            self._status = SyncStatus.ERROR
            self._emit_sync_event(
                event.event_type,
                event.path,
                [],
                success=False,
                error=str(e),
            )

    def _get_secondary_replicas(self) -> list[ReplicaMount]:
        """Get all non-primary replicas."""
        return [r for r in self._replicas if not r.is_primary]

    def _propagate_create(self, event: FileEvent) -> None:
        """Propagate file/directory creation to all replicas."""
        if self._primary is None:
            return

        src_path = self._primary.get_file_path(event.path)
        target_mounts: list[Path] = []

        for replica in self._get_secondary_replicas():
            dst_path = replica.get_file_path(event.path)
            try:
                if event.is_directory:
                    dst_path.mkdir(parents=True, exist_ok=True)
                else:
                    copy_file_with_progress(src_path, dst_path, self._on_progress)
                target_mounts.append(replica.mount_point)
            except Exception as e:
                logger.error(f"Failed to propagate create to {replica.mount_point}: {e}")

        self._emit_sync_event(EventType.CREATED, event.path, target_mounts)

    def _propagate_modify(self, event: FileEvent) -> None:
        """Propagate file modification to all replicas."""
        if self._primary is None:
            return

        src_path = self._primary.get_file_path(event.path)
        target_mounts: list[Path] = []

        # Skip if source doesn't exist (might have been deleted quickly after modify)
        if not src_path.exists():
            logger.debug(f"Source no longer exists, skipping modify: {event.path}")
            return

        for replica in self._get_secondary_replicas():
            dst_path = replica.get_file_path(event.path)
            try:
                copy_file_with_progress(src_path, dst_path, self._on_progress)
                target_mounts.append(replica.mount_point)
            except Exception as e:
                logger.error(f"Failed to propagate modify to {replica.mount_point}: {e}")

        self._emit_sync_event(EventType.MODIFIED, event.path, target_mounts)

    def _propagate_delete(self, event: FileEvent) -> None:
        """Propagate file/directory deletion to all replicas."""
        target_mounts: list[Path] = []

        for replica in self._get_secondary_replicas():
            dst_path = replica.get_file_path(event.path)
            try:
                if dst_path.exists():
                    if dst_path.is_dir():
                        delete_directory(dst_path)
                    else:
                        delete_file(dst_path)
                    target_mounts.append(replica.mount_point)
            except Exception as e:
                logger.error(f"Failed to propagate delete to {replica.mount_point}: {e}")

        self._emit_sync_event(EventType.DELETED, event.path, target_mounts)

    def _propagate_move(self, event: FileEvent) -> None:
        """Propagate file/directory move to all replicas."""
        if event.dest_path is None:
            logger.warning(f"Move event without dest_path: {event}")
            return

        target_mounts: list[Path] = []

        for replica in self._get_secondary_replicas():
            src_path = replica.get_file_path(event.path)
            dst_path = replica.get_file_path(event.dest_path)
            try:
                if src_path.exists():
                    move_file(src_path, dst_path)
                    target_mounts.append(replica.mount_point)
            except Exception as e:
                logger.error(f"Failed to propagate move to {replica.mount_point}: {e}")

        self._emit_sync_event(EventType.MOVED, event.path, target_mounts)

    def _emit_sync_event(
        self,
        event_type: EventType,
        path: str,
        target_mounts: list[Path],
        success: bool = True,
        error: str | None = None,
    ) -> None:
        """Emit a sync event to the callback."""
        if self._on_sync_event is None or self._primary is None:
            return

        sync_event = SyncEvent(
            event_type=event_type,
            relative_path=path,
            source_mount=self._primary.mount_point,
            target_mounts=target_mounts,
            success=success,
            error=error,
        )
        self._on_sync_event(sync_event)

    def sync_from_manifest(
        self,
        source_manifest: Manifest,
        target_mount: Path,
        target_manifest: Manifest,
    ) -> int:
        """Synchronize files based on manifest comparison.

        Compares two manifests and syncs files from source to target
        based on timestamps (newer wins).

        Args:
            source_manifest: Manifest of source replica
            target_mount: Mount point of target replica
            target_manifest: Manifest of target replica

        Returns:
            Number of files synchronized
        """
        if self._primary is None:
            raise ValueError("No primary replica set")

        synced_count = 0

        # Build lookup of target files
        target_files = {f.path: f for f in target_manifest.files}

        for source_file in source_manifest.files:
            target_file = target_files.get(source_file.path)

            should_copy = False
            if target_file is None:
                # File doesn't exist in target
                should_copy = True
                logger.debug(f"File missing in target: {source_file.path}")
            elif source_file.is_newer_than(target_file):
                # Source file is newer
                should_copy = True
                logger.debug(f"Source file newer: {source_file.path}")

            if should_copy:
                src_path = self._primary.get_file_path(source_file.path)
                dst_path = target_mount / source_file.path

                try:
                    copy_file_with_progress(src_path, dst_path, self._on_progress)
                    synced_count += 1
                except Exception as e:
                    logger.error(f"Failed to sync {source_file.path}: {e}")

        # Check for files in target that don't exist in source (deletions)
        source_files = {f.path for f in source_manifest.files}
        for target_file in target_manifest.files:
            if target_file.path not in source_files:
                # File was deleted in source
                dst_path = target_mount / target_file.path
                try:
                    if dst_path.exists():
                        delete_file(dst_path)
                        synced_count += 1
                        logger.debug(f"Deleted from target: {target_file.path}")
                except Exception as e:
                    logger.error(f"Failed to delete {target_file.path}: {e}")

        return synced_count

    def full_sync(self) -> dict[Path, int]:
        """Perform full synchronization across all replicas.

        Syncs from primary to all secondary replicas based on manifests.

        Returns:
            Dictionary mapping mount points to number of files synced
        """
        if self._primary is None:
            raise ValueError("No primary replica set")

        results: dict[Path, int] = {}

        # Load primary manifest
        try:
            primary_manifest = Manifest.load(self._primary.mount_point)
        except FileNotFoundError:
            logger.warning("Primary manifest not found, creating empty manifest")
            primary_manifest = Manifest.create_new(
                vault_name="Vault",
                image_size_mb=0,
                location_path=str(self._primary.image_path),
            )

        # Sync to each secondary
        for replica in self._get_secondary_replicas():
            try:
                target_manifest = Manifest.load(replica.mount_point)
            except FileNotFoundError:
                logger.warning(f"Target manifest not found: {replica.mount_point}")
                target_manifest = Manifest.create_new(
                    vault_name="Vault",
                    image_size_mb=0,
                    location_path=str(replica.image_path),
                )

            synced = self.sync_from_manifest(
                primary_manifest,
                replica.mount_point,
                target_manifest,
            )
            results[replica.mount_point] = synced
            logger.info(f"Synced {synced} files to {replica.mount_point}")

        return results
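The manifest comparison in `sync_from_manifest` reduces to a "newer wins" set reconciliation: copy what is missing or newer at the source, delete what exists only at the target. A self-contained sketch of just that decision logic, with plain path-to-mtime dicts standing in for the `FileEntry` lists (names and values here are illustrative, not from the project):

```python
def plan_sync(source: dict[str, float], target: dict[str, float]) -> tuple[set[str], set[str]]:
    """Return (paths to copy, paths to delete) to make target match source.

    A path is copied when it is missing from target or newer in source;
    it is deleted when it exists only in target (i.e. removed at the source).
    """
    to_copy = {p for p, mtime in source.items() if p not in target or mtime > target[p]}
    to_delete = set(target) - set(source)
    return to_copy, to_delete


source = {"notes.txt": 200.0, "photo.jpg": 100.0}
target = {"photo.jpg": 100.0, "notes.txt": 150.0, "old.log": 50.0}

copy, delete = plan_sync(source, target)
assert copy == {"notes.txt"}   # newer at the source
assert delete == {"old.log"}   # deleted at the source
```

Note this policy is one-directional (primary wins); it cannot detect a file that was modified only on the target while it was offline, which is consistent with the primary-replica model used throughout `SyncManager`.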
668 src/core/vault.py (new file)
@@ -0,0 +1,668 @@
"""Main Vault class for managing multiple container replicas.

This is the primary interface for working with a vault - it handles:
- Opening/closing vaults with multiple replicas
- Coordinating mounting of all containers
- Setting up synchronization between replicas
- Lock management for exclusive access
"""

from collections.abc import Callable
from dataclasses import dataclass
from enum import Enum
from pathlib import Path

from loguru import logger

from src.core.container import Container, ContainerError
from src.core.file_sync import ProgressCallback
from src.core.lock import VaultLock
from src.core.manifest import Manifest
from src.core.sync_manager import SyncEventCallback, SyncManager, SyncStatus


class VaultState(Enum):
    """State of a vault."""

    CLOSED = "closed"
    OPENING = "opening"
    OPEN = "open"
    SYNCING = "syncing"
    ERROR = "error"


class VaultError(Exception):
    """Raised when a vault operation fails."""


@dataclass
class ReplicaInfo:
    """Information about a replica."""

    image_path: Path
    mount_point: Path | None = None
    is_primary: bool = False
    is_mounted: bool = False
    error: str | None = None


class Vault:
    """Manages a vault with multiple replicas.

    A vault consists of:
    - One primary container (user-facing mount point)
    - Zero or more secondary containers (internal sync targets)
    - A sync manager for propagating changes

    Example usage:
        vault = Vault()
        vault.open("/path/to/primary.vault", "/home/user/Vault")
        vault.add_replica("/path/to/backup.vault")
        # ... user works with files in /home/user/Vault ...
        vault.close()
    """

    def __init__(
        self,
        on_state_change: "Callable[[VaultState], None] | None" = None,
        on_sync_event: SyncEventCallback | None = None,
        on_progress: ProgressCallback | None = None,
    ) -> None:
        self._state = VaultState.CLOSED
        self._primary_container: Container | None = None
        self._secondary_containers: list[Container] = []
        self._sync_manager: SyncManager | None = None
        self._lock: VaultLock | None = None
        self._manifest: Manifest | None = None

        self._on_state_change = on_state_change
        self._on_sync_event = on_sync_event
        self._on_progress = on_progress

    @property
    def state(self) -> VaultState:
        """Get current vault state."""
        return self._state

    @property
    def is_open(self) -> bool:
        """Check if vault is open."""
        return self._state in (VaultState.OPEN, VaultState.SYNCING)

    @property
    def mount_point(self) -> Path | None:
        """Get primary mount point."""
        if self._primary_container and self._primary_container.is_mounted():
            return self._primary_container.mount_point
        return None

    @property
    def manifest(self) -> Manifest | None:
        """Get current manifest."""
        return self._manifest

    @property
    def replica_count(self) -> int:
        """Get total number of replicas (primary + secondary)."""
        count = 1 if self._primary_container else 0
        return count + len(self._secondary_containers)

    @property
    def sync_status(self) -> SyncStatus:
        """Get current sync status."""
        if self._sync_manager:
            return self._sync_manager.status
        return SyncStatus.IDLE

    def _set_state(self, state: VaultState) -> None:
        """Update state and notify callback."""
        self._state = state
        if self._on_state_change:
            self._on_state_change(state)

    def open(self, image_path: Path, mount_point: Path | None = None) -> Path:
        """Open a vault from a .vault image.

        Args:
            image_path: Path to the primary .vault file
            mount_point: Optional custom mount point

        Returns:
            Path where vault is mounted

        Raises:
            VaultError: If opening fails
        """
        if self.is_open:
            raise VaultError("Vault is already open")

        self._set_state(VaultState.OPENING)
        image_path = Path(image_path).resolve()

        try:
            # Acquire lock
            lock_path = image_path.parent / f".{image_path.stem}.lock"
            self._lock = VaultLock(lock_path)
            if not self._lock.acquire():
                raise VaultError(
                    f"Vault is locked by another process (PID: {self._lock.get_owner_pid()})"
                )

            # Create and mount primary container
            self._primary_container = Container(image_path)
            actual_mount = self._primary_container.mount(mount_point)

            # Initialize sync manager
            self._sync_manager = SyncManager(
                on_sync_event=self._on_sync_event,
                on_progress=self._on_progress,
            )
            self._sync_manager.add_replica(
                actual_mount,
                image_path,
                is_primary=True,
            )

            # Load or create manifest
            try:
                self._manifest = Manifest.load(actual_mount)
                logger.info(f"Loaded manifest: {self._manifest.vault_name}")
            except FileNotFoundError:
                logger.warning("No manifest found, creating new one")
                self._manifest = Manifest.create_new(
                    vault_name=image_path.stem,
                    image_size_mb=0,  # TODO: get actual size
                    location_path=str(image_path),
                )
                self._manifest.save(actual_mount)

            # Try to mount secondary replicas from manifest
            self._mount_secondary_replicas()

            # Start file watching
            self._sync_manager.start_watching()

            self._set_state(VaultState.OPEN)
            logger.info(f"Vault opened: {self._manifest.vault_name}")
            return actual_mount

        except Exception as e:
            self._cleanup()
            self._set_state(VaultState.ERROR)
            raise VaultError(f"Failed to open vault: {e}") from e

    def _mount_secondary_replicas(self) -> None:
        """Try to mount secondary replicas from manifest locations."""
        if not self._manifest or not self._sync_manager:
            return

        for location in self._manifest.locations:
            location_path = Path(location.path)

            # Skip primary (already mounted) - resolve both for consistent comparison
            if (
                self._primary_container
                and location_path.resolve() == self._primary_container.image_path.resolve()
            ):
                continue

            # Try to mount if available
            if location_path.exists():
                try:
                    container = Container(location_path)
                    mount = container.mount()
                    self._secondary_containers.append(container)
                    self._sync_manager.add_replica(mount, location_path)
                    self._manifest.update_location_status(str(location_path), "active")
                    logger.info(f"Secondary replica mounted: {location_path}")
                except ContainerError as e:
                    logger.warning(f"Failed to mount secondary replica {location_path}: {e}")
                    self._manifest.update_location_status(str(location_path), "unreachable")
            else:
                logger.warning(f"Secondary replica not available: {location_path}")
                self._manifest.update_location_status(str(location_path), "unreachable")

    def check_replica_availability(self) -> dict[str, bool]:
        """Check availability of all replica locations.

        Returns:
            Dictionary mapping location paths to availability status
        """
        if not self._manifest:
            return {}

        result: dict[str, bool] = {}

        for location in self._manifest.locations:
            location_path = Path(location.path)
            is_available = location_path.exists()
            result[location.path] = is_available

            # Update manifest status
            if is_available and location.status == "unreachable":
                self._manifest.update_location_status(location.path, "active")
            elif not is_available and location.status == "active":
                self._manifest.update_location_status(location.path, "unreachable")

        return result

    def reconnect_unavailable_replicas(self) -> int:
        """Try to reconnect replicas that were previously unavailable.

        Returns:
            Number of replicas successfully reconnected
        """
        if not self._manifest or not self._sync_manager or not self._primary_container:
            return 0

        reconnected = 0

        # Get currently mounted paths (resolved for consistent comparison)
        mounted_paths = {self._primary_container.image_path.resolve()}
        for container in self._secondary_containers:
            mounted_paths.add(container.image_path.resolve())

        for location in self._manifest.locations:
            location_path = Path(location.path)

            # Skip if already mounted
            if location_path.resolve() in mounted_paths:
                continue

            # Try to mount if now available
            if location_path.exists():
                try:
                    container = Container(location_path)
                    mount = container.mount()
                    self._secondary_containers.append(container)
                    self._sync_manager.add_replica(mount, location_path)
                    self._manifest.update_location_status(location.path, "active")

                    # Sync to newly connected replica
                    self._scan_and_update_manifest()
                    self._sync_manager.full_sync()

                    logger.info(f"Reconnected replica: {location_path}")
                    reconnected += 1

                except ContainerError as e:
                    logger.warning(f"Failed to reconnect replica {location_path}: {e}")

        return reconnected

    def get_unavailable_replicas(self) -> list[str]:
        """Get list of unavailable replica paths.

        Returns:
            List of paths to unavailable replicas
        """
        if not self._manifest or not self._primary_container:
            return []

        mounted_paths = {str(self._primary_container.image_path.resolve())}
        for container in self._secondary_containers:
            mounted_paths.add(str(container.image_path.resolve()))

        unavailable = []
        for location in self._manifest.locations:
            if str(Path(location.path).resolve()) not in mounted_paths:
                unavailable.append(location.path)

        return unavailable

    def add_replica(self, image_path: Path) -> Path:
        """Add a new replica to the vault.

        Creates a new .vault image and syncs all content from primary.

        Args:
            image_path: Path where to create the new .vault file

        Returns:
            Mount point of the new replica

        Raises:
            VaultError: If adding the replica fails
        """
        if not self.is_open:
            raise VaultError("Vault must be open to add a replica")

        if not self._sync_manager or not self._primary_container or not self._manifest:
            raise VaultError("Vault not properly initialized")

        image_path = Path(image_path).resolve()

        try:
            # Create new container with same settings as primary
            from src.core.image_manager import create_sparse_image

            # Get primary size
            primary_size = self._manifest.image_size_mb
            if primary_size == 0:
                # Estimate from primary image file size
                primary_size = int(
                    self._primary_container.image_path.stat().st_size / (1024 * 1024)
                )

            create_sparse_image(image_path, primary_size)

            # Mount new container
            container = Container(image_path)
            mount = container.mount()

            # Create vault directory and save manifest copy
            (mount / ".vault").mkdir(exist_ok=True)
            self._manifest.add_location(str(image_path))
            self._manifest.save(mount)

            # Register with sync manager
            self._secondary_containers.append(container)
            self._sync_manager.add_replica(mount, image_path)

            # Update manifest with actual files before sync
            self._scan_and_update_manifest()

            # Perform full sync to new replica
            self._set_state(VaultState.SYNCING)
            self._sync_manager.full_sync()
            self._set_state(VaultState.OPEN)

            # Save updated manifest to all replicas
            self._save_manifest_to_all()

            logger.info(f"Replica added: {image_path}")
            return mount

        except Exception as e:
            raise VaultError(f"Failed to add replica: {e}") from e

    def remove_replica(self, image_path: Path) -> None:
        """Remove a replica from the vault.

        Unmounts the replica and removes it from the manifest, but doesn't
        delete the file.

        Args:
            image_path: Path to the replica to remove

        Raises:
            VaultError: If removing fails
        """
        if not self.is_open:
            raise VaultError("Vault must be open to remove a replica")

        image_path = Path(image_path).resolve()

        # Can't remove primary
        if self._primary_container and image_path == self._primary_container.image_path.resolve():
            raise VaultError("Cannot remove primary replica")

        # Find and unmount secondary
        for i, container in enumerate(self._secondary_containers):
            if container.image_path.resolve() == image_path:
                if self._sync_manager:
                    self._sync_manager.remove_replica(container.mount_point)  # type: ignore
                container.unmount()
                del self._secondary_containers[i]

                # Update manifest
                if self._manifest:
                    self._manifest.locations = [
                        loc for loc in self._manifest.locations if loc.path != str(image_path)
                    ]
                    self._save_manifest_to_all()

                logger.info(f"Replica removed: {image_path}")
                return

        raise VaultError(f"Replica not found: {image_path}")

    def get_replicas(self) -> list[ReplicaInfo]:
        """Get information about all replicas.

        Returns:
            List of ReplicaInfo for all replicas
        """
        replicas = []

        if self._primary_container:
            replicas.append(
                ReplicaInfo(
                    image_path=self._primary_container.image_path,
                    mount_point=self._primary_container.mount_point,
                    is_primary=True,
                    is_mounted=self._primary_container.is_mounted(),
                )
            )

        for container in self._secondary_containers:
            replicas.append(
                ReplicaInfo(
                    image_path=container.image_path,
                    mount_point=container.mount_point,
                    is_primary=False,
                    is_mounted=container.is_mounted(),
                )
            )

        return replicas

    def _scan_and_update_manifest(self) -> None:
        """Scan primary mount point and update manifest with actual files."""
|
||||
if not self._manifest or not self._primary_container or not self._primary_container.mount_point:
|
||||
return
|
||||
|
||||
mount_point = self._primary_container.mount_point
|
||||
|
||||
# Walk through all files in mount point
|
||||
for file_path in mount_point.rglob("*"):
|
||||
# Skip directories and .vault directory
|
||||
if file_path.is_dir():
|
||||
continue
|
||||
if ".vault" in file_path.parts:
|
||||
continue
|
||||
|
||||
# Add or update file in manifest
|
||||
self._manifest.add_file_from_path(mount_point, file_path)
|
||||
|
||||
logger.debug(f"Manifest updated with {len(self._manifest.files)} files")
|
||||
|
||||
# Save manifest so full_sync can load it
|
||||
self._manifest.save(mount_point)
|
||||
|
||||
def sync(self) -> None:
|
||||
"""Manually trigger full synchronization.
|
||||
|
||||
Raises:
|
||||
VaultError: If sync fails
|
||||
"""
|
||||
if not self.is_open or not self._sync_manager:
|
||||
raise VaultError("Vault must be open to sync")
|
||||
|
||||
self._set_state(VaultState.SYNCING)
|
||||
try:
|
||||
self._sync_manager.full_sync()
|
||||
finally:
|
||||
self._set_state(VaultState.OPEN)
|
||||
|
||||
def get_space_info(self) -> dict[str, int | float] | None:
|
||||
"""Get disk space information for the vault.
|
||||
|
||||
Returns:
|
||||
Dictionary with total, used, free space in bytes, or None if not open
|
||||
"""
|
||||
if not self.is_open or not self._primary_container or not self._primary_container.mount_point:
|
||||
return None
|
||||
|
||||
import shutil
|
||||
|
||||
usage = shutil.disk_usage(self._primary_container.mount_point)
|
||||
return {
|
||||
"total": usage.total,
|
||||
"used": usage.used,
|
||||
"free": usage.free,
|
||||
"percent_used": (usage.used / usage.total * 100) if usage.total > 0 else 0.0,
|
||||
}
|
||||
|
||||
def check_space_warning(self, threshold_percent: float = 90.0) -> bool:
|
||||
"""Check if vault space usage exceeds warning threshold.
|
||||
|
||||
Args:
|
||||
threshold_percent: Warning threshold (default 90%)
|
||||
|
||||
Returns:
|
||||
True if space usage exceeds threshold
|
||||
"""
|
||||
info = self.get_space_info()
|
||||
if info is None:
|
||||
return False
|
||||
return info["percent_used"] >= threshold_percent
|
||||
|
||||
def resize(self, new_size_mb: int) -> None:
|
||||
"""Resize the vault to a new size.
|
||||
|
||||
This requires temporarily unmounting and remounting all containers.
|
||||
|
||||
Args:
|
||||
new_size_mb: New size in megabytes
|
||||
|
||||
Raises:
|
||||
VaultError: If resize fails
|
||||
"""
|
||||
if not self.is_open:
|
||||
raise VaultError("Vault must be open to resize")
|
||||
|
||||
if not self._manifest:
|
||||
raise VaultError("No manifest loaded")
|
||||
|
||||
from src.core.image_manager import resize_image
|
||||
|
||||
logger.info(f"Resizing vault to {new_size_mb} MB")
|
||||
|
||||
# Stop watching during resize
|
||||
if self._sync_manager:
|
||||
self._sync_manager.stop_watching()
|
||||
|
||||
try:
|
||||
# Save manifest before unmounting
|
||||
self._save_manifest_to_all()
|
||||
|
||||
# Unmount all containers
|
||||
containers_to_remount: list[tuple[Path, bool]] = []
|
||||
|
||||
if self._primary_container and self._primary_container.is_mounted():
|
||||
containers_to_remount.append((self._primary_container.image_path, True))
|
||||
self._primary_container.unmount()
|
||||
|
||||
for container in self._secondary_containers:
|
||||
if container.is_mounted():
|
||||
containers_to_remount.append((container.image_path, False))
|
||||
container.unmount()
|
||||
|
||||
# Resize all images
|
||||
for image_path, _ in containers_to_remount:
|
||||
try:
|
||||
resize_image(image_path, new_size_mb)
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to resize {image_path}: {e}")
|
||||
|
||||
# Remount containers
|
||||
self._secondary_containers.clear()
|
||||
if self._sync_manager:
|
||||
# Reset sync manager
|
||||
self._sync_manager = SyncManager(
|
||||
on_sync_event=self._on_sync_event,
|
||||
on_progress=self._on_progress,
|
||||
)
|
||||
|
||||
for image_path, is_primary in containers_to_remount:
|
||||
try:
|
||||
container = Container(image_path)
|
||||
mount = container.mount()
|
||||
|
||||
if is_primary:
|
||||
self._primary_container = container
|
||||
if self._sync_manager:
|
||||
self._sync_manager.add_replica(mount, image_path, is_primary=True)
|
||||
else:
|
||||
self._secondary_containers.append(container)
|
||||
if self._sync_manager:
|
||||
self._sync_manager.add_replica(mount, image_path)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to remount {image_path}: {e}")
|
||||
|
||||
# Update manifest with new size
|
||||
self._manifest.image_size_mb = new_size_mb
|
||||
self._save_manifest_to_all()
|
||||
|
||||
# Restart watching
|
||||
if self._sync_manager:
|
||||
self._sync_manager.start_watching()
|
||||
|
||||
logger.info(f"Vault resized to {new_size_mb} MB")
|
||||
|
||||
except Exception as e:
|
||||
raise VaultError(f"Failed to resize vault: {e}") from e
|
||||
|
||||
def _save_manifest_to_all(self) -> None:
|
||||
"""Save manifest to all mounted replicas."""
|
||||
if not self._manifest:
|
||||
return
|
||||
|
||||
if self._primary_container and self._primary_container.mount_point:
|
||||
self._manifest.save(self._primary_container.mount_point)
|
||||
|
||||
for container in self._secondary_containers:
|
||||
if container.mount_point:
|
||||
self._manifest.save(container.mount_point)
|
||||
|
||||
def close(self) -> None:
|
||||
"""Close the vault.
|
||||
|
||||
Stops sync, saves manifest, unmounts all containers.
|
||||
"""
|
||||
if not self.is_open:
|
||||
return
|
||||
|
||||
logger.info("Closing vault...")
|
||||
|
||||
try:
|
||||
# Stop watching
|
||||
if self._sync_manager:
|
||||
self._sync_manager.stop_watching()
|
||||
|
||||
# Save manifest to all replicas
|
||||
self._save_manifest_to_all()
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error during vault close: {e}")
|
||||
|
||||
finally:
|
||||
self._cleanup()
|
||||
self._set_state(VaultState.CLOSED)
|
||||
logger.info("Vault closed")
|
||||
|
||||
def _cleanup(self) -> None:
|
||||
"""Clean up all resources."""
|
||||
# Unmount secondary containers
|
||||
for container in self._secondary_containers:
|
||||
try:
|
||||
if container.is_mounted():
|
||||
container.unmount()
|
||||
except Exception as e:
|
||||
logger.error(f"Error unmounting secondary: {e}")
|
||||
self._secondary_containers.clear()
|
||||
|
||||
# Unmount primary
|
||||
if self._primary_container:
|
||||
try:
|
||||
if self._primary_container.is_mounted():
|
||||
self._primary_container.unmount()
|
||||
except Exception as e:
|
||||
logger.error(f"Error unmounting primary: {e}")
|
||||
self._primary_container = None
|
||||
|
||||
# Release lock
|
||||
if self._lock:
|
||||
self._lock.release()
|
||||
self._lock = None
|
||||
|
||||
self._sync_manager = None
|
||||
self._manifest = None
|
||||
|
||||
def __enter__(self) -> "Vault":
|
||||
"""Context manager - returns self (must call open() separately)."""
|
||||
return self
|
||||
|
||||
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
|
||||
"""Context manager exit - close vault."""
|
||||
self.close()
|
||||
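`create_sparse_image` and `resize_image` above come from `src.core.image_manager`, which per the changelog is built on sparse files and `truncate`. A minimal self-contained sketch of that technique — the function bodies here are assumptions for illustration; the real module also formats the image as exFAT (`mkfs.exfat`) and runs `fsck` after a resize:

```python
import tempfile
from pathlib import Path


def create_sparse_image(path: Path, size_mb: int) -> None:
    """Create a sparse file of the given logical size; no blocks are allocated yet."""
    with open(path, "wb") as f:
        f.truncate(size_mb * 1024 * 1024)


def resize_image(path: Path, new_size_mb: int) -> None:
    """Grow an existing image in place; truncate only extends the logical size."""
    with open(path, "r+b") as f:
        f.truncate(new_size_mb * 1024 * 1024)


with tempfile.TemporaryDirectory() as tmp:
    img = Path(tmp) / "demo.vault"
    create_sparse_image(img, 10)
    assert img.stat().st_size == 10 * 1024 * 1024  # logical size, not disk usage
    resize_image(img, 20)
    assert img.stat().st_size == 20 * 1024 * 1024
```

On filesystems with sparse-file support, `du` on such an image reports far less than `stat().st_size` until data is actually written.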
src/ui/__init__.py (new file, 0 lines)

src/ui/dialogs/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
# Dialog windows
src/ui/dialogs/manage_replicas.py (new file, 125 lines)
@@ -0,0 +1,125 @@
"""Replica management dialog."""

from pathlib import Path

from PySide6.QtCore import Qt
from PySide6.QtWidgets import (
    QDialog,
    QHBoxLayout,
    QHeaderView,
    QLabel,
    QMessageBox,
    QPushButton,
    QTableWidget,
    QTableWidgetItem,
    QVBoxLayout,
    QWidget,
)

from src.core.vault import Vault


class ManageReplicasDialog(QDialog):
    """Dialog for managing vault replicas."""

    def __init__(self, vault: Vault, parent: QWidget | None = None) -> None:
        super().__init__(parent)
        self._vault = vault
        self.setWindowTitle("Spravovat repliky")
        self.setMinimumSize(600, 400)
        self._setup_ui()
        self._refresh_list()

    def _setup_ui(self) -> None:
        """Set up dialog UI."""
        layout = QVBoxLayout(self)

        # Header
        header = QLabel(f"Repliky vault: {self._vault.manifest.vault_name if self._vault.manifest else 'Vault'}")
        header.setStyleSheet("font-weight: bold; font-size: 14px;")
        layout.addWidget(header)

        # Replica table
        self._table = QTableWidget()
        self._table.setColumnCount(4)
        self._table.setHorizontalHeaderLabels(["Cesta", "Typ", "Status", ""])
        self._table.horizontalHeader().setSectionResizeMode(0, QHeaderView.ResizeMode.Stretch)
        self._table.horizontalHeader().setSectionResizeMode(1, QHeaderView.ResizeMode.ResizeToContents)
        self._table.horizontalHeader().setSectionResizeMode(2, QHeaderView.ResizeMode.ResizeToContents)
        self._table.horizontalHeader().setSectionResizeMode(3, QHeaderView.ResizeMode.ResizeToContents)
        self._table.setSelectionBehavior(QTableWidget.SelectionBehavior.SelectRows)
        self._table.setEditTriggers(QTableWidget.EditTrigger.NoEditTriggers)
        layout.addWidget(self._table)

        # Buttons
        btn_layout = QHBoxLayout()

        refresh_btn = QPushButton("Obnovit")
        refresh_btn.clicked.connect(self._refresh_list)
        btn_layout.addWidget(refresh_btn)

        btn_layout.addStretch()

        close_btn = QPushButton("Zavřít")
        close_btn.clicked.connect(self.accept)
        btn_layout.addWidget(close_btn)

        layout.addLayout(btn_layout)

    def _refresh_list(self) -> None:
        """Refresh the replica list."""
        replicas = self._vault.get_replicas()
        self._table.setRowCount(len(replicas))

        for row, replica in enumerate(replicas):
            # Path
            path_item = QTableWidgetItem(str(replica.image_path))
            path_item.setToolTip(str(replica.image_path))
            self._table.setItem(row, 0, path_item)

            # Type
            type_item = QTableWidgetItem("Primární" if replica.is_primary else "Sekundární")
            if replica.is_primary:
                type_item.setForeground(Qt.GlobalColor.blue)
            self._table.setItem(row, 1, type_item)

            # Status
            if replica.is_mounted:
                status_item = QTableWidgetItem("Připojeno")
                status_item.setForeground(Qt.GlobalColor.darkGreen)
            else:
                status_item = QTableWidgetItem("Odpojeno")
                status_item.setForeground(Qt.GlobalColor.red)
            self._table.setItem(row, 2, status_item)

            # Remove button (only for secondary replicas)
            if not replica.is_primary:
                remove_btn = QPushButton("Odebrat")
                remove_btn.clicked.connect(lambda checked, p=replica.image_path: self._remove_replica(p))
                self._table.setCellWidget(row, 3, remove_btn)
            else:
                self._table.setItem(row, 3, QTableWidgetItem(""))

    def _remove_replica(self, image_path: Path) -> None:
        """Remove a replica after confirmation."""
        reply = QMessageBox.question(
            self,
            "Odebrat repliku",
            f"Opravdu chcete odebrat repliku?\n\n{image_path}\n\nSoubor nebude smazán, pouze odpojen.",
            QMessageBox.StandardButton.Yes | QMessageBox.StandardButton.No,
            QMessageBox.StandardButton.No,
        )

        if reply == QMessageBox.StandardButton.Yes:
            try:
                self._vault.remove_replica(image_path)
                self._refresh_list()
            except Exception as e:
                QMessageBox.critical(
                    self,
                    "Chyba",
                    f"Nepodařilo se odebrat repliku:\n{e}",
                )
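The "Odebrat" buttons above bind `replica.image_path` through a lambda default argument (`p=replica.image_path`). That is deliberate: Python closures capture variables, not values, so without the default every button would act on the last row's path. A standalone illustration of the pitfall and the fix:

```python
paths = ["a.vault", "b.vault", "c.vault"]

# Late binding: each lambda looks up `p` when called, after the loop finished.
late = [lambda: p for p in paths]

# Default argument: each lambda binds its own `p` at definition time.
bound = [lambda p=p: p for p in paths]

assert [f() for f in late] == ["c.vault", "c.vault", "c.vault"]
assert [f() for f in bound] == ["a.vault", "b.vault", "c.vault"]
```

The extra `checked` parameter in the dialog's lambda exists because `QPushButton.clicked` emits a boolean first argument.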
src/ui/dialogs/new_vault.py (new file, 135 lines)
@@ -0,0 +1,135 @@
"""New vault creation dialog."""

from pathlib import Path

from PySide6.QtWidgets import (
    QDialog,
    QDialogButtonBox,
    QFileDialog,
    QFormLayout,
    QHBoxLayout,
    QLabel,
    QLineEdit,
    QPushButton,
    QSpinBox,
    QVBoxLayout,
    QWidget,
)


class NewVaultDialog(QDialog):
    """Dialog for creating a new vault."""

    def __init__(self, parent: QWidget | None = None) -> None:
        super().__init__(parent)
        self.setWindowTitle("Vytvořit nový vault")
        self.setMinimumWidth(450)
        self._setup_ui()

    def _setup_ui(self) -> None:
        """Set up dialog UI."""
        layout = QVBoxLayout(self)

        # Form layout for inputs
        form = QFormLayout()

        # Vault name
        self._name_edit = QLineEdit()
        self._name_edit.setPlaceholderText("Můj Vault")
        form.addRow("Název:", self._name_edit)

        # Path selection
        path_layout = QHBoxLayout()
        self._path_edit = QLineEdit()
        self._path_edit.setPlaceholderText("/home/user/myvault.vault")
        path_layout.addWidget(self._path_edit)

        browse_btn = QPushButton("Procházet...")
        browse_btn.clicked.connect(self._browse_path)
        path_layout.addWidget(browse_btn)

        form.addRow("Cesta:", path_layout)

        # Size
        size_layout = QHBoxLayout()
        self._size_spin = QSpinBox()
        self._size_spin.setMinimum(10)
        self._size_spin.setMaximum(100000)  # 100 GB
        self._size_spin.setValue(1024)  # Default 1 GB
        self._size_spin.setSuffix(" MB")
        size_layout.addWidget(self._size_spin)

        # Quick size buttons
        for size, label in [(100, "100 MB"), (1024, "1 GB"), (10240, "10 GB")]:
            btn = QPushButton(label)
            btn.clicked.connect(lambda checked, s=size: self._size_spin.setValue(s))
            size_layout.addWidget(btn)

        size_layout.addStretch()
        form.addRow("Velikost:", size_layout)

        layout.addLayout(form)

        # Info label
        info_label = QLabel(
            "Vault bude vytvořen jako sparse soubor - zabere místo pouze pro skutečná data."
        )
        info_label.setWordWrap(True)
        info_label.setStyleSheet("color: gray; font-size: 11px;")
        layout.addWidget(info_label)

        layout.addStretch()

        # Buttons
        buttons = QDialogButtonBox(
            QDialogButtonBox.StandardButton.Ok | QDialogButtonBox.StandardButton.Cancel
        )
        buttons.accepted.connect(self._validate_and_accept)
        buttons.rejected.connect(self.reject)
        layout.addWidget(buttons)

    def _browse_path(self) -> None:
        """Open file dialog to select vault path."""
        path, _ = QFileDialog.getSaveFileName(
            self,
            "Vyberte umístění pro vault",
            str(Path.home()),
            "Vault soubory (*.vault)",
        )
        if path:
            if not path.endswith(".vault"):
                path += ".vault"
            self._path_edit.setText(path)

            # Auto-fill name from filename if empty
            if not self._name_edit.text():
                self._name_edit.setText(Path(path).stem)

    def _validate_and_accept(self) -> None:
        """Validate inputs and accept dialog."""
        if not self._name_edit.text().strip():
            self._name_edit.setFocus()
            return

        if not self._path_edit.text().strip():
            self._path_edit.setFocus()
            return

        path = Path(self._path_edit.text())
        if path.exists():
            # Could show an overwrite warning dialog here
            pass

        self.accept()

    def get_result(self) -> dict:
        """Get dialog result.

        Returns:
            Dictionary with vault configuration
        """
        return {
            "name": self._name_edit.text().strip(),
            "path": self._path_edit.text().strip(),
            "size_mb": self._size_spin.value(),
        }
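`_browse_path` above normalizes the chosen path (appending `.vault` when missing) and derives a default vault name from the file stem. The same derivation as a small testable helper — the function name is hypothetical, not part of the dialog:

```python
from pathlib import Path


def normalize_vault_path(raw: str) -> tuple[str, str]:
    """Return (normalized path, default vault name), as _browse_path derives them."""
    # Append the .vault suffix if missing.
    if not raw.endswith(".vault"):
        raw += ".vault"
    # The default name is the filename without its suffix.
    return raw, Path(raw).stem


assert normalize_vault_path("/home/user/photos") == ("/home/user/photos.vault", "photos")
assert normalize_vault_path("/home/user/docs.vault") == ("/home/user/docs.vault", "docs")
```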
src/ui/dialogs/open_vault.py (new file, 90 lines)
@@ -0,0 +1,90 @@
"""Open vault dialog."""

from pathlib import Path

from PySide6.QtWidgets import (
    QDialog,
    QDialogButtonBox,
    QFileDialog,
    QHBoxLayout,
    QLabel,
    QLineEdit,
    QPushButton,
    QVBoxLayout,
    QWidget,
)


class OpenVaultDialog(QDialog):
    """Dialog for opening an existing vault."""

    def __init__(self, parent: QWidget | None = None) -> None:
        super().__init__(parent)
        self.setWindowTitle("Otevřít vault")
        self.setMinimumWidth(400)
        self._selected_path: str | None = None
        self._setup_ui()

    def _setup_ui(self) -> None:
        """Set up dialog UI."""
        layout = QVBoxLayout(self)

        # Instructions
        label = QLabel("Vyberte .vault soubor k otevření:")
        layout.addWidget(label)

        # Path selection
        path_layout = QHBoxLayout()
        self._path_edit = QLineEdit()
        self._path_edit.setPlaceholderText("/cesta/k/vault.vault")
        path_layout.addWidget(self._path_edit)

        browse_btn = QPushButton("Procházet...")
        browse_btn.clicked.connect(self._browse_path)
        path_layout.addWidget(browse_btn)

        layout.addLayout(path_layout)

        layout.addStretch()

        # Buttons
        buttons = QDialogButtonBox(
            QDialogButtonBox.StandardButton.Ok | QDialogButtonBox.StandardButton.Cancel
        )
        buttons.accepted.connect(self._validate_and_accept)
        buttons.rejected.connect(self.reject)
        layout.addWidget(buttons)

    def _browse_path(self) -> None:
        """Open file dialog to select vault file."""
        path, _ = QFileDialog.getOpenFileName(
            self,
            "Vyberte vault soubor",
            str(Path.home()),
            "Vault soubory (*.vault);;Všechny soubory (*)",
        )
        if path:
            self._path_edit.setText(path)

    def _validate_and_accept(self) -> None:
        """Validate inputs and accept dialog."""
        path = self._path_edit.text().strip()
        if not path:
            self._path_edit.setFocus()
            return

        if not Path(path).exists():
            # Could show an error dialog here
            self._path_edit.setFocus()
            return

        self._selected_path = path
        self.accept()

    def get_selected_path(self) -> str | None:
        """Get selected vault path.

        Returns:
            Path to the selected vault file, or None
        """
        return self._selected_path
src/ui/dialogs/resize_vault.py (new file, 106 lines)
@@ -0,0 +1,106 @@
"""Resize vault dialog."""

from PySide6.QtWidgets import (
    QDialog,
    QDialogButtonBox,
    QFormLayout,
    QLabel,
    QMessageBox,
    QSpinBox,
    QVBoxLayout,
    QWidget,
)

from src.core.vault import Vault


class ResizeVaultDialog(QDialog):
    """Dialog for resizing a vault."""

    def __init__(self, vault: Vault, parent: QWidget | None = None) -> None:
        super().__init__(parent)
        self._vault = vault
        self.setWindowTitle("Zvětšit vault")
        self.setMinimumWidth(400)
        self._new_size: int | None = None
        self._setup_ui()

    def _setup_ui(self) -> None:
        """Set up dialog UI."""
        layout = QVBoxLayout(self)

        # Current size info
        manifest = self._vault.manifest
        current_size = manifest.image_size_mb if manifest else 0

        info_label = QLabel(f"Aktuální velikost: {current_size} MB ({current_size / 1024:.1f} GB)")
        layout.addWidget(info_label)

        # Show actual disk usage if mounted
        if self._vault.mount_point:
            import shutil
            usage = shutil.disk_usage(self._vault.mount_point)
            used_mb = usage.used // (1024 * 1024)
            total_mb = usage.total // (1024 * 1024)
            free_mb = usage.free // (1024 * 1024)
            usage_label = QLabel(
                f"Využito: {used_mb} MB / {total_mb} MB (volno: {free_mb} MB)"
            )
            layout.addWidget(usage_label)

        # Form for new size
        form = QFormLayout()

        self._size_spin = QSpinBox()
        self._size_spin.setMinimum(current_size + 100)  # At least 100 MB more
        self._size_spin.setMaximum(500000)  # 500 GB max
        self._size_spin.setValue(current_size * 2 if current_size > 0 else 2048)
        self._size_spin.setSuffix(" MB")
        self._size_spin.setSingleStep(1024)
        form.addRow("Nová velikost:", self._size_spin)

        layout.addLayout(form)

        # Warning
        warning_label = QLabel(
            "Pozor: Zvětšení vault může chvíli trvat a vyžaduje dočasné odpojení.\n"
            "Velikost lze pouze zvětšit, ne zmenšit."
        )
        warning_label.setStyleSheet("color: orange; font-size: 11px;")
        warning_label.setWordWrap(True)
        layout.addWidget(warning_label)

        layout.addStretch()

        # Buttons
        buttons = QDialogButtonBox(
            QDialogButtonBox.StandardButton.Ok | QDialogButtonBox.StandardButton.Cancel
        )
        buttons.accepted.connect(self._validate_and_accept)
        buttons.rejected.connect(self.reject)
        layout.addWidget(buttons)

    def _validate_and_accept(self) -> None:
        """Validate and accept dialog."""
        manifest = self._vault.manifest
        current_size = manifest.image_size_mb if manifest else 0
        new_size = self._size_spin.value()

        if new_size <= current_size:
            QMessageBox.warning(
                self,
                "Neplatná velikost",
                "Nová velikost musí být větší než aktuální.",
            )
            return

        self._new_size = new_size
        self.accept()

    def get_new_size(self) -> int | None:
        """Get the new size selected by the user.

        Returns:
            New size in MB, or None if cancelled
        """
        return self._new_size
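The spin-box setup above encodes two rules: the new size must be at least 100 MB larger than the current one, and the suggested default is double the current size (2048 MB for a fresh vault). The same arithmetic as a tiny helper with a hypothetical name:

```python
def resize_bounds(current_mb: int) -> tuple[int, int]:
    """Return (minimum allowed new size, pre-filled suggestion) in MB."""
    minimum = current_mb + 100           # at least 100 MB more
    default = current_mb * 2 if current_mb > 0 else 2048  # double, or 2 GB fallback
    return minimum, default


assert resize_bounds(1024) == (1124, 2048)
assert resize_bounds(0) == (100, 2048)
```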
src/ui/dialogs/sync_progress.py (new file, 104 lines)
@@ -0,0 +1,104 @@
"""Sync progress dialog."""

from PySide6.QtCore import Qt, Signal
from PySide6.QtWidgets import (
    QDialog,
    QLabel,
    QProgressBar,
    QPushButton,
    QTextEdit,
    QVBoxLayout,
    QWidget,
)

from src.core.file_sync import CopyProgress


class SyncProgressDialog(QDialog):
    """Dialog showing synchronization progress."""

    # Signal emitted when cancel is requested
    cancel_requested = Signal()

    def __init__(self, parent: QWidget | None = None) -> None:
        super().__init__(parent)
        self.setWindowTitle("Synchronizace")
        self.setMinimumSize(450, 300)
        self.setWindowFlags(self.windowFlags() & ~Qt.WindowType.WindowCloseButtonHint)
        self._cancelled = False
        self._setup_ui()

    def _setup_ui(self) -> None:
        """Set up dialog UI."""
        layout = QVBoxLayout(self)

        # Status label
        self._status_label = QLabel("Připravuji synchronizaci...")
        self._status_label.setStyleSheet("font-weight: bold;")
        layout.addWidget(self._status_label)

        # Current file label
        self._file_label = QLabel("")
        self._file_label.setWordWrap(True)
        layout.addWidget(self._file_label)

        # Progress bar
        self._progress_bar = QProgressBar()
        self._progress_bar.setRange(0, 100)
        self._progress_bar.setValue(0)
        layout.addWidget(self._progress_bar)

        # Log area
        self._log_area = QTextEdit()
        self._log_area.setReadOnly(True)
        self._log_area.setMaximumHeight(150)
        layout.addWidget(self._log_area)

        # Cancel button
        self._cancel_btn = QPushButton("Zrušit")
        self._cancel_btn.clicked.connect(self._on_cancel)
        layout.addWidget(self._cancel_btn)

    def set_status(self, status: str) -> None:
        """Set the status message."""
        self._status_label.setText(status)

    def set_current_file(self, file_path: str) -> None:
        """Set the current file being processed."""
        self._file_label.setText(f"Soubor: {file_path}")

    def set_progress(self, percent: float) -> None:
        """Set the progress bar value."""
        self._progress_bar.setValue(int(percent))

    def add_log(self, message: str) -> None:
        """Add a message to the log."""
        self._log_area.append(message)

    def update_from_copy_progress(self, progress: CopyProgress) -> None:
        """Update dialog from a CopyProgress object."""
        self.set_current_file(str(progress.src_path.name))
        self.set_progress(progress.percent)

    def set_complete(self, success: bool = True) -> None:
        """Mark synchronization as complete."""
        if success:
            self._status_label.setText("Synchronizace dokončena")
            self._progress_bar.setValue(100)
        else:
            self._status_label.setText("Synchronizace selhala")

        self._cancel_btn.setText("Zavřít")
        self._cancel_btn.clicked.disconnect()
        self._cancel_btn.clicked.connect(self.accept)

    def is_cancelled(self) -> bool:
        """Check if sync was cancelled."""
        return self._cancelled

    def _on_cancel(self) -> None:
        """Handle cancel button click."""
        self._cancelled = True
        self._status_label.setText("Rušení...")
        self._cancel_btn.setEnabled(False)
        self.cancel_requested.emit()
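`update_from_copy_progress` consumes `CopyProgress` objects produced by `src.core.file_sync`, which per the changelog copies in 1 MB chunks with a progress callback. A self-contained sketch of that pattern — the `CopyProgress` fields used here (`src_path`, `percent`) match the dialog above, everything else is an assumption about the real module:

```python
import tempfile
from dataclasses import dataclass
from pathlib import Path
from typing import Callable


@dataclass
class CopyProgress:
    # Assumed shape; the real class lives in src.core.file_sync.
    src_path: Path
    percent: float


def copy_with_progress(
    src: Path,
    dst: Path,
    callback: Callable[[CopyProgress], None],
    chunk_size: int = 1024 * 1024,
) -> None:
    """Copy src to dst in chunks, reporting percent done after each chunk."""
    total = src.stat().st_size or 1  # avoid division by zero for empty files
    done = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(chunk_size):
            fout.write(chunk)
            done += len(chunk)
            callback(CopyProgress(src, done / total * 100))
```

In the real app the callback would be marshalled onto the Qt main thread before touching widgets; calling `set_progress` directly from a worker thread is not safe.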
src/ui/notifications.py (new file, 50 lines)
@@ -0,0 +1,50 @@
"""System notifications for Vault."""

import subprocess

from loguru import logger


class NotificationManager:
    """Manages system notifications."""

    def __init__(self) -> None:
        self._app_name = "Vault"

    def notify(
        self,
        title: str,
        message: str,
        critical: bool = False,
        timeout_ms: int = 5000,
    ) -> None:
        """Show a system notification.

        Args:
            title: Notification title
            message: Notification message
            critical: If True, the notification is marked as critical/urgent
            timeout_ms: Timeout in milliseconds (0 = no timeout)
        """
        try:
            # Use notify-send on Linux
            cmd = [
                "notify-send",
                "--app-name", self._app_name,
                "--expire-time", str(timeout_ms),
            ]

            if critical:
                cmd.extend(["--urgency", "critical"])

            cmd.append(title)
            if message:
                cmd.append(message)

            subprocess.run(cmd, check=False)
            logger.debug(f"Notification sent: {title}")

        except FileNotFoundError:
            logger.warning("notify-send not found, notification not shown")
        except Exception as e:
            logger.error(f"Failed to show notification: {e}")
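The `notify-send` invocation above can be factored into a pure command builder, which makes the flag handling testable without a notification daemon — the helper name is hypothetical, the flags mirror the method above:

```python
def build_notify_cmd(
    app_name: str,
    title: str,
    message: str,
    critical: bool = False,
    timeout_ms: int = 5000,
) -> list[str]:
    """Build the notify-send argument list without invoking subprocess.run."""
    cmd = ["notify-send", "--app-name", app_name, "--expire-time", str(timeout_ms)]
    if critical:
        cmd.extend(["--urgency", "critical"])
    cmd.append(title)
    if message:  # notify-send treats the second positional argument as the body
        cmd.append(message)
    return cmd


assert build_notify_cmd("Vault", "Sync hotová", "Vše v pořádku") == [
    "notify-send", "--app-name", "Vault", "--expire-time", "5000",
    "Sync hotová", "Vše v pořádku",
]
```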
src/ui/tray_app.py (new file, 385 lines)
@@ -0,0 +1,385 @@
"""System tray application for Vault.

Main entry point for the GUI application.
"""

import signal
import subprocess
import sys
from pathlib import Path

from loguru import logger
from PySide6.QtCore import QTimer
from PySide6.QtGui import QIcon
from PySide6.QtWidgets import QApplication, QMenu, QSystemTrayIcon

from src.core.sync_manager import SyncStatus
from src.core.vault import Vault, VaultError, VaultState
from src.ui.dialogs.new_vault import NewVaultDialog
from src.ui.dialogs.open_vault import OpenVaultDialog
from src.ui.notifications import NotificationManager


class VaultTrayApp:
    """System tray application for managing Vault."""

    def __init__(self) -> None:
        self._app = QApplication(sys.argv)
        self._app.setQuitOnLastWindowClosed(False)
        self._app.setApplicationName("Vault")

        self._vault = Vault(
            on_state_change=self._on_vault_state_change,
            on_sync_event=lambda e: self._on_sync_event(e),
        )
        self._notifications = NotificationManager()
        self._space_warned = False

        self._tray = QSystemTrayIcon()
        self._setup_tray()

        # Status check timer
        self._status_timer = QTimer()
        self._status_timer.timeout.connect(self._update_status)
        self._status_timer.start(5000)  # Check every 5 seconds

        # Replica availability check timer
        self._replica_timer = QTimer()
        self._replica_timer.timeout.connect(self._check_replica_availability)
        self._replica_timer.start(30000)  # Check every 30 seconds

        # Set up signal handlers for graceful shutdown
        self._setup_signal_handlers()

    def _setup_signal_handlers(self) -> None:
        """Set up signal handlers for graceful shutdown."""
        def signal_handler(signum: int, frame: object) -> None:
            logger.info(f"Received signal {signum}, shutting down gracefully...")
            self._quit()

        signal.signal(signal.SIGINT, signal_handler)
        signal.signal(signal.SIGTERM, signal_handler)

        # Use a timer to allow signal processing in the Qt event loop
        self._signal_timer = QTimer()
        self._signal_timer.timeout.connect(lambda: None)  # Keep event loop responsive
        self._signal_timer.start(500)

    def _setup_tray(self) -> None:
        """Set up tray icon and menu."""
        self._update_icon()
        self._tray.setToolTip("Vault - Resilientní úložiště")

        menu = QMenu()

        # Status header
        self._status_action = menu.addAction("Žádný vault otevřen")
        self._status_action.setEnabled(False)

        menu.addSeparator()

        # Open folder action
        self._open_folder_action = menu.addAction("Otevřít složku")
        self._open_folder_action.triggered.connect(self._open_folder)
        self._open_folder_action.setEnabled(False)

        menu.addSeparator()

        # Vault management
        menu.addAction("Vytvořit nový vault...").triggered.connect(self._new_vault)
        menu.addAction("Otevřít vault...").triggered.connect(self._open_vault)
        self._close_action = menu.addAction("Zavřít vault")
        self._close_action.triggered.connect(self._close_vault)
        self._close_action.setEnabled(False)

        menu.addSeparator()

        # Replica management
        self._add_replica_action = menu.addAction("Přidat repliku...")
        self._add_replica_action.triggered.connect(self._add_replica)
        self._add_replica_action.setEnabled(False)

        self._manage_replicas_action = menu.addAction("Spravovat repliky...")
        self._manage_replicas_action.triggered.connect(self._manage_replicas)
        self._manage_replicas_action.setEnabled(False)

        menu.addSeparator()

        # Sync action
        self._sync_action = menu.addAction("Synchronizovat")
        self._sync_action.triggered.connect(self._manual_sync)
        self._sync_action.setEnabled(False)

        # Resize action
        self._resize_action = menu.addAction("Zvětšit vault...")
        self._resize_action.triggered.connect(self._resize_vault)
        self._resize_action.setEnabled(False)

        menu.addSeparator()

        # Quit
        menu.addAction("Ukončit").triggered.connect(self._quit)

        self._tray.setContextMenu(menu)
        self._tray.activated.connect(self._on_tray_activated)
        self._tray.show()

    def _update_icon(self) -> None:
|
||||
"""Update tray icon based on vault state."""
|
||||
# Use built-in icons for now
|
||||
if not self._vault.is_open:
|
||||
# Gray - no vault open
|
||||
icon = QIcon.fromTheme("folder-grey", QIcon.fromTheme("folder"))
|
||||
elif self._vault.sync_status == SyncStatus.SYNCING:
|
||||
# Blue - syncing
|
||||
icon = QIcon.fromTheme("folder-sync", QIcon.fromTheme("folder-download"))
|
||||
elif self._vault.sync_status == SyncStatus.ERROR:
|
||||
# Red - error
|
||||
icon = QIcon.fromTheme("folder-important", QIcon.fromTheme("dialog-error"))
|
||||
elif self._vault.get_unavailable_replicas():
|
||||
# Yellow - some replicas unavailable
|
||||
icon = QIcon.fromTheme("folder-yellow", QIcon.fromTheme("folder-visiting"))
|
||||
else:
|
||||
# Green - all good
|
||||
icon = QIcon.fromTheme("folder-green", QIcon.fromTheme("folder-open"))
|
||||
|
||||
self._tray.setIcon(icon)
|
||||
|
||||
def _update_status(self) -> None:
|
||||
"""Update status display."""
|
||||
self._update_icon()
|
||||
|
||||
if not self._vault.is_open:
|
||||
self._status_action.setText("Žádný vault otevřen")
|
||||
self._open_folder_action.setEnabled(False)
|
||||
self._close_action.setEnabled(False)
|
||||
self._add_replica_action.setEnabled(False)
|
||||
self._manage_replicas_action.setEnabled(False)
|
||||
self._sync_action.setEnabled(False)
|
||||
self._resize_action.setEnabled(False)
|
||||
else:
|
||||
manifest = self._vault.manifest
|
||||
name = manifest.vault_name if manifest else "Vault"
|
||||
replicas = self._vault.replica_count
|
||||
unavailable = self._vault.get_unavailable_replicas()
|
||||
|
||||
if unavailable:
|
||||
online = replicas
|
||||
total = replicas + len(unavailable)
|
||||
status_text = f"{name} ({online}/{total} replik online)"
|
||||
else:
|
||||
status_text = f"{name} ({replicas} replik{'a' if replicas == 1 else 'y' if replicas < 5 else ''})"
|
||||
|
||||
if self._vault.sync_status == SyncStatus.SYNCING:
|
||||
status_text += " - synchronizace..."
|
||||
|
||||
self._status_action.setText(status_text)
|
||||
self._open_folder_action.setEnabled(True)
|
||||
self._close_action.setEnabled(True)
|
||||
self._add_replica_action.setEnabled(True)
|
||||
self._manage_replicas_action.setEnabled(True)
|
||||
self._sync_action.setEnabled(True)
|
||||
self._resize_action.setEnabled(True)
|
||||
|
||||
# Check space warning
|
||||
if self._vault.check_space_warning(90.0):
|
||||
space_info = self._vault.get_space_info()
|
||||
if space_info:
|
||||
free_mb = space_info["free"] // (1024 * 1024)
|
||||
if not hasattr(self, "_space_warned") or not self._space_warned:
|
||||
self._notifications.notify(
|
||||
"Vault téměř plný",
|
||||
f"Zbývá pouze {free_mb} MB volného místa",
|
||||
critical=True,
|
||||
)
|
||||
self._space_warned = True
|
||||
else:
|
||||
self._space_warned = False
|
||||
|
||||
def _check_replica_availability(self) -> None:
|
||||
"""Check for replica availability and reconnect if possible."""
|
||||
if not self._vault.is_open:
|
||||
return
|
||||
|
||||
unavailable = self._vault.get_unavailable_replicas()
|
||||
if unavailable:
|
||||
# Try to reconnect
|
||||
reconnected = self._vault.reconnect_unavailable_replicas()
|
||||
if reconnected > 0:
|
||||
self._notifications.notify(
|
||||
"Repliky připojeny",
|
||||
f"Připojeno {reconnected} replik{'a' if reconnected == 1 else 'y' if reconnected < 5 else ''}",
|
||||
)
|
||||
self._update_status()
|
||||
|
||||
def _on_vault_state_change(self, state: VaultState) -> None:
|
||||
"""Handle vault state change."""
|
||||
logger.debug(f"Vault state changed: {state}")
|
||||
self._update_status()
|
||||
|
||||
if state == VaultState.OPEN:
|
||||
self._notifications.notify(
|
||||
"Vault otevřen",
|
||||
f"Mount point: {self._vault.mount_point}",
|
||||
)
|
||||
elif state == VaultState.CLOSED:
|
||||
self._notifications.notify("Vault zavřen", "")
|
||||
elif state == VaultState.ERROR:
|
||||
self._notifications.notify("Chyba", "Nastala chyba při práci s vault", critical=True)
|
||||
|
||||
def _on_sync_event(self, event: object) -> None:
|
||||
"""Handle sync event."""
|
||||
self._update_icon()
|
||||
|
||||
def _on_tray_activated(self, reason: QSystemTrayIcon.ActivationReason) -> None:
|
||||
"""Handle tray icon activation."""
|
||||
if reason == QSystemTrayIcon.ActivationReason.DoubleClick:
|
||||
self._open_folder()
|
||||
|
||||
def _open_folder(self) -> None:
|
||||
"""Open vault mount point in file manager."""
|
||||
if not self._vault.is_open or not self._vault.mount_point:
|
||||
return
|
||||
|
||||
try:
|
||||
# Try xdg-open (Linux)
|
||||
subprocess.Popen(["xdg-open", str(self._vault.mount_point)])
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to open folder: {e}")
|
||||
self._notifications.notify("Chyba", f"Nepodařilo se otevřít složku: {e}", critical=True)
|
||||
|
||||
def _new_vault(self) -> None:
|
||||
"""Show new vault dialog."""
|
||||
dialog = NewVaultDialog()
|
||||
if dialog.exec():
|
||||
result = dialog.get_result()
|
||||
try:
|
||||
from src.core.image_manager import create_sparse_image
|
||||
|
||||
image_path = Path(result["path"])
|
||||
create_sparse_image(image_path, result["size_mb"])
|
||||
|
||||
mount = self._vault.open(image_path)
|
||||
|
||||
# Update manifest with user-provided name
|
||||
if self._vault.manifest:
|
||||
self._vault.manifest.vault_name = result["name"]
|
||||
self._vault.manifest.image_size_mb = result["size_mb"]
|
||||
self._vault.manifest.save(mount)
|
||||
|
||||
self._update_status()
|
||||
self._open_folder()
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to create vault: {e}")
|
||||
self._notifications.notify("Chyba", f"Nepodařilo se vytvořit vault: {e}", critical=True)
|
||||
|
||||
def _open_vault(self) -> None:
|
||||
"""Show open vault dialog."""
|
||||
dialog = OpenVaultDialog()
|
||||
if dialog.exec():
|
||||
vault_path = dialog.get_selected_path()
|
||||
if vault_path:
|
||||
try:
|
||||
self._vault.open(Path(vault_path))
|
||||
self._update_status()
|
||||
self._open_folder()
|
||||
except VaultError as e:
|
||||
logger.error(f"Failed to open vault: {e}")
|
||||
self._notifications.notify("Chyba", f"Nepodařilo se otevřít vault: {e}", critical=True)
|
||||
|
||||
def _close_vault(self) -> None:
|
||||
"""Close current vault."""
|
||||
if self._vault.is_open:
|
||||
self._vault.close()
|
||||
self._update_status()
|
||||
|
||||
def _add_replica(self) -> None:
|
||||
"""Show add replica dialog."""
|
||||
if not self._vault.is_open:
|
||||
return
|
||||
|
||||
from PySide6.QtWidgets import QFileDialog
|
||||
|
||||
path, _ = QFileDialog.getSaveFileName(
|
||||
None,
|
||||
"Vyberte umístění pro novou repliku",
|
||||
"",
|
||||
"Vault soubory (*.vault)",
|
||||
)
|
||||
|
||||
if path:
|
||||
if not path.endswith(".vault"):
|
||||
path += ".vault"
|
||||
try:
|
||||
self._vault.add_replica(Path(path))
|
||||
self._notifications.notify("Replika přidána", f"Nová replika: {path}")
|
||||
self._update_status()
|
||||
except VaultError as e:
|
||||
logger.error(f"Failed to add replica: {e}")
|
||||
self._notifications.notify("Chyba", f"Nepodařilo se přidat repliku: {e}", critical=True)
|
||||
|
||||
def _manage_replicas(self) -> None:
|
||||
"""Show replica management dialog."""
|
||||
if not self._vault.is_open:
|
||||
return
|
||||
|
||||
from src.ui.dialogs.manage_replicas import ManageReplicasDialog
|
||||
|
||||
dialog = ManageReplicasDialog(self._vault)
|
||||
dialog.exec()
|
||||
self._update_status()
|
||||
|
||||
def _manual_sync(self) -> None:
|
||||
"""Trigger manual synchronization."""
|
||||
if self._vault.is_open:
|
||||
try:
|
||||
self._vault.sync()
|
||||
self._notifications.notify("Synchronizace dokončena", "")
|
||||
except VaultError as e:
|
||||
logger.error(f"Sync failed: {e}")
|
||||
self._notifications.notify("Chyba synchronizace", str(e), critical=True)
|
||||
|
||||
def _resize_vault(self) -> None:
|
||||
"""Show resize vault dialog."""
|
||||
if not self._vault.is_open:
|
||||
return
|
||||
|
||||
from src.ui.dialogs.resize_vault import ResizeVaultDialog
|
||||
|
||||
dialog = ResizeVaultDialog(self._vault)
|
||||
if dialog.exec():
|
||||
new_size = dialog.get_new_size()
|
||||
if new_size:
|
||||
try:
|
||||
self._vault.resize(new_size)
|
||||
self._notifications.notify(
|
||||
"Vault zvětšen",
|
||||
f"Nová velikost: {new_size} MB",
|
||||
)
|
||||
self._update_status()
|
||||
except VaultError as e:
|
||||
logger.error(f"Resize failed: {e}")
|
||||
self._notifications.notify("Chyba", f"Nepodařilo se zvětšit vault: {e}", critical=True)
|
||||
|
||||
def _quit(self) -> None:
|
||||
"""Quit the application."""
|
||||
if self._vault.is_open:
|
||||
self._vault.close()
|
||||
self._tray.hide()
|
||||
self._app.quit()
|
||||
|
||||
def run(self) -> int:
|
||||
"""Run the application."""
|
||||
logger.info("Vault tray application starting...")
|
||||
return self._app.exec()
|
||||
|
||||
|
||||
def main() -> int:
|
||||
"""Main entry point for tray application."""
|
||||
app = VaultTrayApp()
|
||||
return app.run()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
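The status and reconnect notifications above build the Czech plural suffix inline. The rule they encode can be factored into a tiny helper — a sketch, where `replicas_label` is a hypothetical name, not a function in this codebase:

```python
def replicas_label(n: int) -> str:
    """Czech plural of 'replika': 1 replika, 2-4 repliky, 5+ replik."""
    if n == 1:
        return "replika"
    if 2 <= n <= 4:
        return "repliky"
    return "replik"


print(f"3 {replicas_label(3)}")  # 3 repliky
```

A helper like this keeps the two call sites (`_update_status` and `_check_replica_availability`) from drifting apart.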
1
tests/__init__.py
Normal file
@@ -0,0 +1 @@
# Tests package
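The test modules added below mark udisks-dependent tests with `@pytest.mark.integration`. To keep pytest from warning about an unknown mark and to allow deselecting them with `pytest -m "not integration"`, the marker can be registered — a sketch of a `pyproject.toml` fragment (the repository's actual pytest configuration is not shown in this commit):

```toml
[tool.pytest.ini_options]
markers = [
    "integration: requires udisks2; mounts and unmounts real loop devices",
]
```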
117
tests/test_container.py
Normal file
@@ -0,0 +1,117 @@
"""Tests for Container class.

Note: These tests require udisks2 to be installed and running.
They actually mount/unmount images, so they're integration tests.
"""

from pathlib import Path

import pytest

from src.core.container import Container, ContainerError
from src.core.image_manager import create_sparse_image


@pytest.fixture
def vault_image(tmp_path: Path) -> Path:
    """Create a temporary vault image for testing."""
    image_path = tmp_path / "test.vault"
    create_sparse_image(image_path, size_mb=10)
    return image_path


class TestContainer:
    """Tests for Container class."""

    @pytest.mark.integration
    def test_mount_and_unmount(self, vault_image: Path) -> None:
        """Test mounting and unmounting a vault image."""
        container = Container(vault_image)

        # Mount
        mount_point = container.mount()
        assert container.is_mounted()
        assert mount_point.exists()
        assert mount_point.is_dir()

        # Should be able to write files
        test_file = mount_point / "test.txt"
        test_file.write_text("Hello, Vault!")
        assert test_file.exists()

        # Unmount
        container.unmount()
        assert not container.is_mounted()

    @pytest.mark.integration
    def test_context_manager(self, vault_image: Path) -> None:
        """Test using container as a context manager."""
        with Container(vault_image) as container:
            assert container.is_mounted()
            mount_point = container.mount_point
            assert mount_point is not None
            assert mount_point.exists()

        # Should be unmounted after context exits
        assert not container.is_mounted()

    @pytest.mark.integration
    def test_mount_creates_vault_directory(self, vault_image: Path) -> None:
        """Test that a .vault directory can be created in the mounted image."""
        with Container(vault_image) as container:
            vault_dir = container.mount_point / ".vault"  # type: ignore
            vault_dir.mkdir()
            assert vault_dir.exists()

            # Create manifest file
            manifest = vault_dir / "manifest.json"
            manifest.write_text('{"test": true}')
            assert manifest.exists()

    @pytest.mark.integration
    def test_mount_already_mounted(self, vault_image: Path) -> None:
        """Test that mounting an already mounted container fails."""
        container = Container(vault_image)
        container.mount()

        try:
            with pytest.raises(ContainerError, match="already mounted"):
                container.mount()
        finally:
            container.unmount()

    def test_mount_nonexistent_image(self, tmp_path: Path) -> None:
        """Test that mounting a nonexistent image fails."""
        container = Container(tmp_path / "nonexistent.vault")

        with pytest.raises(ContainerError, match="not found"):
            container.mount()

    def test_is_mounted_initially_false(self, vault_image: Path) -> None:
        """Test that a container is not mounted initially."""
        container = Container(vault_image)
        assert not container.is_mounted()

    @pytest.mark.integration
    def test_unmount_not_mounted(self, vault_image: Path) -> None:
        """Test that unmounting a container that is not mounted is safe."""
        container = Container(vault_image)

        # Should not raise
        container.unmount()

    @pytest.mark.integration
    def test_data_persists_after_remount(self, vault_image: Path) -> None:
        """Test that data persists after unmount and remount."""
        test_content = "Persistent data test"

        # Write data
        with Container(vault_image) as container:
            test_file = container.mount_point / "persistent.txt"  # type: ignore
            test_file.write_text(test_content)

        # Read data after remount
        with Container(vault_image) as container:
            test_file = container.mount_point / "persistent.txt"  # type: ignore
            assert test_file.exists()
            assert test_file.read_text() == test_content
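The Container tests above mount images through udisksctl without root privileges. `udisksctl loop-setup` reports the created device in a human-readable line such as `Mapped file /tmp/test.vault as /dev/loop3.`, so an implementation has to parse that output. A minimal sketch of such parsing — `parse_loop_device` is a hypothetical helper, and the output format is an assumption about udisksctl's message, not taken from this commit:

```python
import re


def parse_loop_device(output: str) -> str:
    """Extract '/dev/loopN' from `udisksctl loop-setup` output.

    Assumes a line like: 'Mapped file /tmp/test.vault as /dev/loop3.'
    """
    match = re.search(r"as (/dev/loop\d+)", output)
    if match is None:
        raise ValueError(f"unexpected udisksctl output: {output!r}")
    return match.group(1)


print(parse_loop_device("Mapped file /tmp/test.vault as /dev/loop3."))  # /dev/loop3
```

Raising on unexpected output keeps a udisks message-format change from silently producing a bogus device path.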
151
tests/test_file_entry.py
Normal file
@@ -0,0 +1,151 @@
"""Tests for FileEntry dataclass."""

import tempfile
from datetime import datetime
from pathlib import Path

import pytest

from src.core.file_entry import FileEntry


class TestFileEntry:
    """Tests for FileEntry dataclass."""

    def test_create_file_entry(self) -> None:
        """Test creating a FileEntry instance."""
        now = datetime.now()
        entry = FileEntry(
            path="documents/test.txt",
            hash="sha256:abc123",
            size=1024,
            created=now,
            modified=now,
        )

        assert entry.path == "documents/test.txt"
        assert entry.hash == "sha256:abc123"
        assert entry.size == 1024
        assert entry.created == now
        assert entry.modified == now

    def test_file_entry_is_immutable(self) -> None:
        """Test that FileEntry is frozen (immutable)."""
        now = datetime.now()
        entry = FileEntry(
            path="test.txt",
            hash="sha256:abc",
            size=100,
            created=now,
            modified=now,
        )

        with pytest.raises(AttributeError):
            entry.path = "other.txt"  # type: ignore

    def test_to_dict(self) -> None:
        """Test serialization to dictionary."""
        created = datetime(2026, 1, 28, 10, 30, 0)
        modified = datetime(2026, 1, 28, 14, 20, 0)
        entry = FileEntry(
            path="documents/file.txt",
            hash="sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
            size=1234,
            created=created,
            modified=modified,
        )

        result = entry.to_dict()

        assert result == {
            "path": "documents/file.txt",
            "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
            "size": 1234,
            "created": "2026-01-28T10:30:00",
            "modified": "2026-01-28T14:20:00",
        }

    def test_from_dict(self) -> None:
        """Test deserialization from dictionary."""
        data = {
            "path": "documents/file.txt",
            "hash": "sha256:abc123",
            "size": 1234,
            "created": "2026-01-28T10:30:00",
            "modified": "2026-01-28T14:20:00",
        }

        entry = FileEntry.from_dict(data)

        assert entry.path == "documents/file.txt"
        assert entry.hash == "sha256:abc123"
        assert entry.size == 1234
        assert entry.created == datetime(2026, 1, 28, 10, 30, 0)
        assert entry.modified == datetime(2026, 1, 28, 14, 20, 0)

    def test_from_path(self) -> None:
        """Test creating FileEntry from an actual file."""
        with tempfile.TemporaryDirectory() as tmpdir:
            base_path = Path(tmpdir)
            file_path = base_path / "test.txt"
            file_path.write_text("Hello, World!")

            entry = FileEntry.from_path(base_path, file_path)

            assert entry.path == "test.txt"
            assert entry.hash.startswith("sha256:")
            assert entry.size == 13  # len("Hello, World!")
            assert entry.created is not None
            assert entry.modified is not None

    def test_from_path_nested_directory(self) -> None:
        """Test creating FileEntry from a file in a nested directory."""
        with tempfile.TemporaryDirectory() as tmpdir:
            base_path = Path(tmpdir)
            nested_dir = base_path / "documents" / "work"
            nested_dir.mkdir(parents=True)
            file_path = nested_dir / "report.txt"
            file_path.write_text("Test content")

            entry = FileEntry.from_path(base_path, file_path)

            assert entry.path == "documents/work/report.txt"

    def test_has_changed_same_hash(self) -> None:
        """Test that has_changed returns False for the same hash."""
        now = datetime.now()
        entry1 = FileEntry(
            path="test.txt", hash="sha256:abc", size=100, created=now, modified=now
        )
        entry2 = FileEntry(
            path="test.txt", hash="sha256:abc", size=100, created=now, modified=now
        )

        assert not entry1.has_changed(entry2)

    def test_has_changed_different_hash(self) -> None:
        """Test that has_changed returns True for a different hash."""
        now = datetime.now()
        entry1 = FileEntry(
            path="test.txt", hash="sha256:abc", size=100, created=now, modified=now
        )
        entry2 = FileEntry(
            path="test.txt", hash="sha256:xyz", size=100, created=now, modified=now
        )

        assert entry1.has_changed(entry2)

    def test_is_newer_than(self) -> None:
        """Test is_newer_than comparison."""
        old_time = datetime(2026, 1, 1, 10, 0, 0)
        new_time = datetime(2026, 1, 2, 10, 0, 0)

        old_entry = FileEntry(
            path="test.txt", hash="sha256:abc", size=100, created=old_time, modified=old_time
        )
        new_entry = FileEntry(
            path="test.txt", hash="sha256:xyz", size=100, created=new_time, modified=new_time
        )

        assert new_entry.is_newer_than(old_entry)
        assert not old_entry.is_newer_than(new_entry)
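The tests above pin down the FileEntry contract: a frozen dataclass with a `sha256:`-prefixed hash, ISO-8601 timestamps in `to_dict`/`from_dict`, and a vault-relative POSIX path in `from_path`. One way the class under test could look — a sketch satisfying those tests, not the actual `src/core/file_entry.py` (which is not shown in this commit):

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path


@dataclass(frozen=True)
class FileEntry:
    """Immutable record of one file in the vault (sketch)."""

    path: str  # POSIX-style path relative to the vault root
    hash: str  # "sha256:<hexdigest>"
    size: int  # bytes
    created: datetime
    modified: datetime

    def to_dict(self) -> dict:
        return {
            "path": self.path,
            "hash": self.hash,
            "size": self.size,
            "created": self.created.isoformat(),
            "modified": self.modified.isoformat(),
        }

    @classmethod
    def from_dict(cls, data: dict) -> "FileEntry":
        return cls(
            path=data["path"],
            hash=data["hash"],
            size=data["size"],
            created=datetime.fromisoformat(data["created"]),
            modified=datetime.fromisoformat(data["modified"]),
        )

    @classmethod
    def from_path(cls, base_path: Path, file_path: Path) -> "FileEntry":
        digest = hashlib.sha256(file_path.read_bytes()).hexdigest()
        stat = file_path.stat()
        return cls(
            path=file_path.relative_to(base_path).as_posix(),
            hash=f"sha256:{digest}",
            size=stat.st_size,
            created=datetime.fromtimestamp(stat.st_ctime),
            modified=datetime.fromtimestamp(stat.st_mtime),
        )

    def has_changed(self, other: "FileEntry") -> bool:
        return self.hash != other.hash

    def is_newer_than(self, other: "FileEntry") -> bool:
        return self.modified > other.modified
```

Hashing with `read_bytes()` loads the whole file; a real implementation would more likely hash in chunks, matching the chunked copying used elsewhere in the sync layer.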
324
tests/test_file_sync.py
Normal file
@@ -0,0 +1,324 @@
"""Tests for file_sync module."""

import time
from pathlib import Path

import pytest

from src.core.file_sync import (
    CopyProgress,
    copy_directory_with_progress,
    copy_file_with_progress,
    delete_directory,
    delete_file,
    move_file,
    sync_file,
)


class TestCopyProgress:
    """Tests for CopyProgress dataclass."""

    def test_percent_calculation(self) -> None:
        progress = CopyProgress(
            src_path=Path("src"),
            dst_path=Path("dst"),
            bytes_copied=50,
            total_bytes=100,
        )
        assert progress.percent == 50.0

    def test_percent_with_zero_total(self) -> None:
        progress = CopyProgress(
            src_path=Path("src"),
            dst_path=Path("dst"),
            bytes_copied=0,
            total_bytes=0,
        )
        assert progress.percent == 100.0

    def test_is_complete_true(self) -> None:
        progress = CopyProgress(
            src_path=Path("src"),
            dst_path=Path("dst"),
            bytes_copied=100,
            total_bytes=100,
        )
        assert progress.is_complete is True

    def test_is_complete_false(self) -> None:
        progress = CopyProgress(
            src_path=Path("src"),
            dst_path=Path("dst"),
            bytes_copied=50,
            total_bytes=100,
        )
        assert progress.is_complete is False


class TestCopyFileWithProgress:
    """Tests for copy_file_with_progress function."""

    def test_copy_file(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dest.txt"
        src.write_text("Hello, World!")

        copy_file_with_progress(src, dst)

        assert dst.exists()
        assert dst.read_text() == "Hello, World!"

    def test_copy_file_creates_parent_dirs(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "nested" / "deep" / "dest.txt"
        src.write_text("content")

        copy_file_with_progress(src, dst)

        assert dst.exists()
        assert dst.read_text() == "content"

    def test_copy_file_with_callback(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dest.txt"
        # Create a file larger than chunk size
        content = "x" * 5000
        src.write_text(content)

        progress_calls: list[CopyProgress] = []

        def callback(progress: CopyProgress) -> None:
            progress_calls.append(progress)

        copy_file_with_progress(src, dst, callback=callback, chunk_size=1000)

        assert len(progress_calls) >= 1
        assert progress_calls[-1].is_complete

    def test_copy_file_preserves_timestamps(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dest.txt"
        src.write_text("content")

        # Get original timestamps
        src_stat = src.stat()

        copy_file_with_progress(src, dst)

        dst_stat = dst.stat()
        assert abs(src_stat.st_mtime - dst_stat.st_mtime) < 1

    def test_copy_file_source_not_found(self, tmp_path: Path) -> None:
        src = tmp_path / "nonexistent.txt"
        dst = tmp_path / "dest.txt"

        with pytest.raises(FileNotFoundError):
            copy_file_with_progress(src, dst)

    def test_copy_file_source_is_directory(self, tmp_path: Path) -> None:
        src = tmp_path / "srcdir"
        dst = tmp_path / "dest.txt"
        src.mkdir()

        with pytest.raises(IsADirectoryError):
            copy_file_with_progress(src, dst)

    def test_copy_file_destination_is_directory(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dstdir"
        src.write_text("content")
        dst.mkdir()

        with pytest.raises(IsADirectoryError):
            copy_file_with_progress(src, dst)


class TestCopyDirectoryWithProgress:
    """Tests for copy_directory_with_progress function."""

    def test_copy_directory(self, tmp_path: Path) -> None:
        src = tmp_path / "srcdir"
        dst = tmp_path / "dstdir"
        src.mkdir()
        (src / "file1.txt").write_text("content1")
        (src / "file2.txt").write_text("content2")

        copy_directory_with_progress(src, dst)

        assert dst.exists()
        assert (dst / "file1.txt").read_text() == "content1"
        assert (dst / "file2.txt").read_text() == "content2"

    def test_copy_nested_directory(self, tmp_path: Path) -> None:
        src = tmp_path / "srcdir"
        dst = tmp_path / "dstdir"
        src.mkdir()
        (src / "nested").mkdir()
        (src / "nested" / "deep.txt").write_text("deep content")

        copy_directory_with_progress(src, dst)

        assert (dst / "nested" / "deep.txt").read_text() == "deep content"

    def test_copy_directory_not_found(self, tmp_path: Path) -> None:
        src = tmp_path / "nonexistent"
        dst = tmp_path / "dstdir"

        with pytest.raises(FileNotFoundError):
            copy_directory_with_progress(src, dst)

    def test_copy_directory_source_is_file(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dstdir"
        src.write_text("content")

        with pytest.raises(NotADirectoryError):
            copy_directory_with_progress(src, dst)


class TestDeleteFile:
    """Tests for delete_file function."""

    def test_delete_file(self, tmp_path: Path) -> None:
        file = tmp_path / "test.txt"
        file.write_text("content")

        delete_file(file)

        assert not file.exists()

    def test_delete_file_not_found(self, tmp_path: Path) -> None:
        file = tmp_path / "nonexistent.txt"

        with pytest.raises(FileNotFoundError):
            delete_file(file)

    def test_delete_file_is_directory(self, tmp_path: Path) -> None:
        dir_path = tmp_path / "testdir"
        dir_path.mkdir()

        with pytest.raises(IsADirectoryError):
            delete_file(dir_path)


class TestDeleteDirectory:
    """Tests for delete_directory function."""

    def test_delete_directory(self, tmp_path: Path) -> None:
        dir_path = tmp_path / "testdir"
        dir_path.mkdir()
        (dir_path / "file.txt").write_text("content")

        delete_directory(dir_path)

        assert not dir_path.exists()

    def test_delete_nested_directory(self, tmp_path: Path) -> None:
        dir_path = tmp_path / "testdir"
        dir_path.mkdir()
        (dir_path / "nested").mkdir()
        (dir_path / "nested" / "deep.txt").write_text("content")

        delete_directory(dir_path)

        assert not dir_path.exists()

    def test_delete_directory_not_found(self, tmp_path: Path) -> None:
        dir_path = tmp_path / "nonexistent"

        with pytest.raises(FileNotFoundError):
            delete_directory(dir_path)

    def test_delete_directory_is_file(self, tmp_path: Path) -> None:
        file = tmp_path / "test.txt"
        file.write_text("content")

        with pytest.raises(NotADirectoryError):
            delete_directory(file)


class TestMoveFile:
    """Tests for move_file function."""

    def test_move_file(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dest.txt"
        src.write_text("content")

        move_file(src, dst)

        assert not src.exists()
        assert dst.exists()
        assert dst.read_text() == "content"

    def test_move_file_creates_parent_dirs(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "nested" / "deep" / "dest.txt"
        src.write_text("content")

        move_file(src, dst)

        assert not src.exists()
        assert dst.exists()

    def test_move_file_not_found(self, tmp_path: Path) -> None:
        src = tmp_path / "nonexistent.txt"
        dst = tmp_path / "dest.txt"

        with pytest.raises(FileNotFoundError):
            move_file(src, dst)


class TestSyncFile:
    """Tests for sync_file function."""

    def test_sync_new_file(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dest.txt"
        src.write_text("content")

        result = sync_file(src, dst)

        assert result is True
        assert dst.exists()
        assert dst.read_text() == "content"

    def test_sync_newer_source(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dest.txt"

        # Create destination first
        dst.write_text("old content")
        time.sleep(0.1)

        # Create newer source
        src.write_text("new content")

        result = sync_file(src, dst)

        assert result is True
        assert dst.read_text() == "new content"

    def test_sync_older_source(self, tmp_path: Path) -> None:
        src = tmp_path / "source.txt"
        dst = tmp_path / "dest.txt"

        # Create source first
        src.write_text("old content")
        time.sleep(0.1)

        # Create newer destination
        dst.write_text("new content")

        result = sync_file(src, dst)

        assert result is False
        assert dst.read_text() == "new content"

    def test_sync_file_not_found(self, tmp_path: Path) -> None:
        src = tmp_path / "nonexistent.txt"
        dst = tmp_path / "dest.txt"

        with pytest.raises(FileNotFoundError):
            sync_file(src, dst)
242
tests/test_file_watcher.py
Normal file
242
tests/test_file_watcher.py
Normal file
@@ -0,0 +1,242 @@
"""Tests for file_watcher module."""

import time
from pathlib import Path
from threading import Event

import pytest

from src.core.file_watcher import EventType, FileEvent, FileWatcher


class TestFileEvent:
    """Tests for FileEvent dataclass."""

    def test_create_file_event(self) -> None:
        event = FileEvent(
            event_type=EventType.CREATED,
            path="test.txt",
            is_directory=False,
        )
        assert event.event_type == EventType.CREATED
        assert event.path == "test.txt"
        assert event.is_directory is False
        assert event.dest_path is None

    def test_move_event_with_dest(self) -> None:
        event = FileEvent(
            event_type=EventType.MOVED,
            path="old.txt",
            is_directory=False,
            dest_path="new.txt",
        )
        assert event.event_type == EventType.MOVED
        assert event.path == "old.txt"
        assert event.dest_path == "new.txt"

    def test_str_representation(self) -> None:
        event = FileEvent(EventType.CREATED, "test.txt", False)
        assert str(event) == "created: test.txt"

    def test_str_representation_moved(self) -> None:
        event = FileEvent(EventType.MOVED, "old.txt", False, "new.txt")
        assert str(event) == "moved: old.txt -> new.txt"


class TestFileWatcher:
    """Tests for FileWatcher class."""

    def test_start_and_stop(self, tmp_path: Path) -> None:
        events: list[FileEvent] = []
        watcher = FileWatcher(tmp_path, callback=events.append)

        assert not watcher.is_running()
        watcher.start()
        assert watcher.is_running()
        watcher.stop()
        assert not watcher.is_running()

    def test_context_manager(self, tmp_path: Path) -> None:
        events: list[FileEvent] = []

        with FileWatcher(tmp_path, callback=events.append) as watcher:
            assert watcher.is_running()

        assert not watcher.is_running()

    def test_start_nonexistent_path_raises(self, tmp_path: Path) -> None:
        nonexistent = tmp_path / "nonexistent"
        events: list[FileEvent] = []
        watcher = FileWatcher(nonexistent, callback=events.append)

        with pytest.raises(FileNotFoundError):
            watcher.start()

    def test_double_start_is_safe(self, tmp_path: Path) -> None:
        events: list[FileEvent] = []
        watcher = FileWatcher(tmp_path, callback=events.append)

        watcher.start()
        watcher.start()  # Should not raise
        assert watcher.is_running()
        watcher.stop()

    def test_double_stop_is_safe(self, tmp_path: Path) -> None:
        events: list[FileEvent] = []
        watcher = FileWatcher(tmp_path, callback=events.append)

        watcher.start()
        watcher.stop()
        watcher.stop()  # Should not raise
        assert not watcher.is_running()

    def test_detects_file_creation(self, tmp_path: Path) -> None:
        events: list[FileEvent] = []
        event_received = Event()

        def callback(event: FileEvent) -> None:
            events.append(event)
            event_received.set()

        with FileWatcher(tmp_path, callback=callback):
            # Create a file
            test_file = tmp_path / "test.txt"
            test_file.write_text("hello")

            # Wait for event
            event_received.wait(timeout=2.0)

        # Check that we got a CREATED event
        created_events = [e for e in events if e.event_type == EventType.CREATED]
        assert len(created_events) >= 1
        assert any(e.path == "test.txt" for e in created_events)

    def test_detects_file_deletion(self, tmp_path: Path) -> None:
        # Create file first
        test_file = tmp_path / "test.txt"
        test_file.write_text("hello")

        events: list[FileEvent] = []
        event_received = Event()

        def callback(event: FileEvent) -> None:
            events.append(event)
            if event.event_type == EventType.DELETED:
                event_received.set()

        with FileWatcher(tmp_path, callback=callback):
            # Delete the file
            test_file.unlink()

            # Wait for event
            event_received.wait(timeout=2.0)

        # Check that we got a DELETED event
        deleted_events = [e for e in events if e.event_type == EventType.DELETED]
        assert len(deleted_events) >= 1
        assert any(e.path == "test.txt" for e in deleted_events)

    def test_detects_file_move(self, tmp_path: Path) -> None:
        # Create file first
        test_file = tmp_path / "old.txt"
        test_file.write_text("hello")

        events: list[FileEvent] = []
        event_received = Event()

        def callback(event: FileEvent) -> None:
            events.append(event)
            if event.event_type == EventType.MOVED:
                event_received.set()

        with FileWatcher(tmp_path, callback=callback):
            # Move the file
            new_file = tmp_path / "new.txt"
            test_file.rename(new_file)

            # Wait for event
            event_received.wait(timeout=2.0)

        # Check that we got a MOVED event
        moved_events = [e for e in events if e.event_type == EventType.MOVED]
        assert len(moved_events) >= 1
        assert any(e.path == "old.txt" and e.dest_path == "new.txt" for e in moved_events)

    def test_ignores_vault_directory(self, tmp_path: Path) -> None:
        # Create .vault directory
        vault_dir = tmp_path / ".vault"
        vault_dir.mkdir()

        events: list[FileEvent] = []

        with FileWatcher(tmp_path, callback=events.append):
            # Create file inside .vault
            (vault_dir / "manifest.json").write_text("{}")
            time.sleep(0.5)

        # No events should be recorded for .vault directory
        assert all(".vault" not in e.path for e in events)

    def test_custom_ignore_patterns(self, tmp_path: Path) -> None:
        events: list[FileEvent] = []
        event_received = Event()

        def callback(event: FileEvent) -> None:
            events.append(event)
            event_received.set()

        with FileWatcher(tmp_path, callback=callback, ignore_patterns=[".vault", "__pycache__"]):
            # Create ignored directory
            cache_dir = tmp_path / "__pycache__"
            cache_dir.mkdir()
            (cache_dir / "test.pyc").write_text("cached")
            time.sleep(0.2)

            # Create non-ignored file
            (tmp_path / "regular.txt").write_text("hello")
            event_received.wait(timeout=2.0)

        # Only regular.txt events should be recorded
        assert all("__pycache__" not in e.path for e in events)
        assert any("regular.txt" in e.path for e in events)

    def test_detects_nested_file_creation(self, tmp_path: Path) -> None:
        # Create nested directory
        nested = tmp_path / "subdir" / "nested"
        nested.mkdir(parents=True)

        events: list[FileEvent] = []
        event_received = Event()

        def callback(event: FileEvent) -> None:
            events.append(event)
            if event.event_type == EventType.CREATED and "deep.txt" in event.path:
                event_received.set()

        with FileWatcher(tmp_path, callback=callback):
            # Create file in nested directory
            (nested / "deep.txt").write_text("nested content")
            event_received.wait(timeout=2.0)

        # Check event has correct relative path
        created_events = [e for e in events if e.event_type == EventType.CREATED]
        assert any("subdir/nested/deep.txt" in e.path or "subdir\\nested\\deep.txt" in e.path for e in created_events)

    def test_detects_directory_creation(self, tmp_path: Path) -> None:
        events: list[FileEvent] = []
        event_received = Event()

        def callback(event: FileEvent) -> None:
            events.append(event)
            if event.is_directory and event.event_type == EventType.CREATED:
                event_received.set()

        with FileWatcher(tmp_path, callback=callback):
            # Create directory
            (tmp_path / "newdir").mkdir()
            event_received.wait(timeout=2.0)

        # Check directory creation event
        dir_events = [e for e in events if e.is_directory and e.event_type == EventType.CREATED]
        assert len(dir_events) >= 1
        assert any(e.path == "newdir" for e in dir_events)
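The ignore-pattern tests above imply a simple rule: an event is suppressed when any component of its relative path matches a pattern. A minimal sketch of that check, assuming component-wise matching (the helper name `is_ignored` is hypothetical, not the watcher's actual API):

```python
from pathlib import PurePosixPath


def is_ignored(rel_path: str, patterns: list[str]) -> bool:
    """True when any path component of rel_path matches an ignore pattern."""
    return any(part in patterns for part in PurePosixPath(rel_path).parts)
```

This reproduces the tested behaviour: `.vault/manifest.json` is suppressed, while `subdir/nested/deep.txt` passes through because no component matches.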
147 tests/test_image_manager.py Normal file
@@ -0,0 +1,147 @@
"""Tests for image_manager module."""

import tempfile
from pathlib import Path

import pytest

from src.core.image_manager import (
    ImageError,
    create_sparse_image,
    delete_image,
    get_image_info,
    resize_image,
)


class TestCreateSparseImage:
    """Tests for create_sparse_image function."""

    def test_create_sparse_image(self) -> None:
        """Test creating a sparse image."""
        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "test.vault"

            create_sparse_image(path, size_mb=10)

            assert path.exists()
            # File should be 10MB in logical size
            assert path.stat().st_size == 10 * 1024 * 1024

    def test_create_sparse_image_is_sparse(self) -> None:
        """Test that created image is actually sparse (uses less disk space)."""
        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "test.vault"

            create_sparse_image(path, size_mb=100)

            stat = path.stat()
            # Actual disk usage should be much less than logical size
            # st_blocks is in 512-byte units
            actual_size = stat.st_blocks * 512
            logical_size = stat.st_size

            # Actual size should be less than 10% of logical size for a sparse file
            # (exFAT metadata takes some space, so not 0)
            assert actual_size < logical_size * 0.1

    def test_create_sparse_image_already_exists(self) -> None:
        """Test that creating image fails if file exists."""
        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "test.vault"
            path.touch()

            with pytest.raises(ImageError, match="already exists"):
                create_sparse_image(path, size_mb=10)

    def test_create_sparse_image_invalid_path(self) -> None:
        """Test that creating image fails for invalid path."""
        path = Path("/nonexistent/directory/test.vault")

        with pytest.raises(ImageError):
            create_sparse_image(path, size_mb=10)


class TestResizeImage:
    """Tests for resize_image function."""

    def test_resize_image(self) -> None:
        """Test resizing an image."""
        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "test.vault"
            create_sparse_image(path, size_mb=10)

            resize_image(path, new_size_mb=20)

            assert path.stat().st_size == 20 * 1024 * 1024

    def test_resize_image_smaller_fails(self) -> None:
        """Test that shrinking an image fails."""
        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "test.vault"
            create_sparse_image(path, size_mb=20)

            with pytest.raises(ImageError, match="must be larger"):
                resize_image(path, new_size_mb=10)

    def test_resize_image_same_size_fails(self) -> None:
        """Test that resizing to same size fails."""
        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "test.vault"
            create_sparse_image(path, size_mb=10)

            with pytest.raises(ImageError, match="must be larger"):
                resize_image(path, new_size_mb=10)

    def test_resize_nonexistent_image(self) -> None:
        """Test that resizing nonexistent image fails."""
        path = Path("/nonexistent/test.vault")

        with pytest.raises(ImageError, match="not found"):
            resize_image(path, new_size_mb=20)


class TestGetImageInfo:
    """Tests for get_image_info function."""

    def test_get_image_info(self) -> None:
        """Test getting image info."""
        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "test.vault"
            create_sparse_image(path, size_mb=50)

            info = get_image_info(path)

            assert info["path"] == str(path)
            assert info["size_mb"] == 50
            assert info["actual_size_mb"] < 50  # Sparse file
            assert 0 < info["sparse_ratio"] < 1

    def test_get_image_info_nonexistent(self) -> None:
        """Test getting info for nonexistent image."""
        path = Path("/nonexistent/test.vault")

        with pytest.raises(ImageError, match="not found"):
            get_image_info(path)


class TestDeleteImage:
    """Tests for delete_image function."""

    def test_delete_image(self) -> None:
        """Test deleting an image."""
        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "test.vault"
            create_sparse_image(path, size_mb=10)
            assert path.exists()

            delete_image(path)

            assert not path.exists()

    def test_delete_nonexistent_image(self) -> None:
        """Test deleting nonexistent image fails."""
        path = Path("/nonexistent/test.vault")

        with pytest.raises(ImageError, match="not found"):
            delete_image(path)
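The sparse-image property these tests verify (full logical size, near-zero disk usage) comes down to `truncate`: extending a file without writing data leaves the blocks unallocated. A minimal sketch of just that step, under the assumption from the changelog that creation is truncate-based; the real `create_sparse_image` additionally runs `mkfs.exfat` on the file, which this hypothetical `create_sparse_file` omits:

```python
from pathlib import Path


def create_sparse_file(path: Path, size_mb: int) -> None:
    """Allocate a file of the given logical size without writing any data blocks."""
    if path.exists():
        raise FileExistsError(path)
    with path.open("wb") as f:
        # truncate() extends the file; unwritten ranges read as zeros
        # and consume (almost) no disk blocks on sparse-capable filesystems.
        f.truncate(size_mb * 1024 * 1024)
```

After this call, `st_size` reports the full logical size while `st_blocks * 512` stays tiny, which is exactly what `test_create_sparse_image_is_sparse` asserts.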
182 tests/test_lock.py Normal file
@@ -0,0 +1,182 @@
"""Tests for VaultLock."""

import multiprocessing
import os
import tempfile
import time
from pathlib import Path

import pytest

from src.core.lock import VaultLock, VaultLockError


class TestVaultLock:
    """Tests for VaultLock class."""

    def test_acquire_and_release(self) -> None:
        """Test basic lock acquire and release."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / ".vault" / "lock"
            lock = VaultLock(lock_path)

            assert lock.acquire()
            assert lock_path.exists()
            lock.release()

    def test_lock_creates_directory(self) -> None:
        """Test that lock creates parent directory if needed."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "nested" / "dir" / "lock"
            lock = VaultLock(lock_path)

            assert lock.acquire()
            assert lock_path.parent.exists()
            lock.release()

    def test_lock_writes_pid(self) -> None:
        """Test that lock file contains PID."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock = VaultLock(lock_path)

            lock.acquire()
            pid = lock.get_owner_pid()
            lock.release()

            assert pid == os.getpid()

    def test_release_removes_lock_file(self) -> None:
        """Test that release removes lock file."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock = VaultLock(lock_path)

            lock.acquire()
            lock.release()

            assert not lock_path.exists()

    def test_release_safe_when_not_locked(self) -> None:
        """Test that release is safe to call when not locked."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock = VaultLock(lock_path)

            # Should not raise
            lock.release()

    def test_is_locked_when_not_locked(self) -> None:
        """Test is_locked returns False when not locked."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock = VaultLock(lock_path)

            assert not lock.is_locked()

    def test_is_locked_when_locked(self) -> None:
        """Test is_locked returns True when locked."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock = VaultLock(lock_path)

            lock.acquire()
            try:
                # Check from different VaultLock instance
                other_lock = VaultLock(lock_path)
                assert other_lock.is_locked()
            finally:
                lock.release()

    def test_context_manager(self) -> None:
        """Test context manager usage."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"

            with VaultLock(lock_path):
                assert lock_path.exists()

            assert not lock_path.exists()

    def test_context_manager_raises_when_locked(self) -> None:
        """Test context manager raises when already locked."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock1 = VaultLock(lock_path)
            lock1.acquire()

            try:
                with pytest.raises(VaultLockError):
                    with VaultLock(lock_path):
                        pass
            finally:
                lock1.release()

    def test_get_owner_pid_no_lock_file(self) -> None:
        """Test get_owner_pid returns None when no lock file."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock = VaultLock(lock_path)

            assert lock.get_owner_pid() is None


def _acquire_lock_in_subprocess(lock_path: str, result_queue: multiprocessing.Queue) -> None:
    """Helper function to acquire lock in subprocess."""
    lock = VaultLock(Path(lock_path))
    acquired = lock.acquire()
    result_queue.put(acquired)
    if acquired:
        time.sleep(0.5)  # Hold lock briefly
        lock.release()


class TestVaultLockMultiprocess:
    """Tests for VaultLock with multiple processes."""

    def test_second_process_cannot_acquire(self) -> None:
        """Test that second process cannot acquire lock."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock = VaultLock(lock_path)

            # Acquire lock in main process
            assert lock.acquire()

            try:
                # Try to acquire in subprocess
                result_queue: multiprocessing.Queue = multiprocessing.Queue()
                process = multiprocessing.Process(
                    target=_acquire_lock_in_subprocess,
                    args=(str(lock_path), result_queue),
                )
                process.start()
                process.join(timeout=2)

                # Subprocess should not have acquired the lock
                acquired_in_subprocess = result_queue.get(timeout=1)
                assert not acquired_in_subprocess
            finally:
                lock.release()

    def test_process_can_acquire_after_release(self) -> None:
        """Test that process can acquire lock after it's released."""
        with tempfile.TemporaryDirectory() as tmpdir:
            lock_path = Path(tmpdir) / "lock"
            lock = VaultLock(lock_path)

            # Acquire and release
            lock.acquire()
            lock.release()

            # Now subprocess should be able to acquire
            result_queue: multiprocessing.Queue = multiprocessing.Queue()
            process = multiprocessing.Process(
                target=_acquire_lock_in_subprocess,
                args=(str(lock_path), result_queue),
            )
            process.start()
            process.join(timeout=2)

            acquired_in_subprocess = result_queue.get(timeout=1)
            assert acquired_in_subprocess
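The locking these tests exercise is, per the changelog, built on `fcntl` with an exclusive lock plus a PID written into the lock file. A minimal sketch of the acquire path using `fcntl.flock` with `LOCK_EX | LOCK_NB` (the helper name `try_acquire` and the exact flock-vs-lockf choice are assumptions, not the real `VaultLock` internals):

```python
import fcntl
import os
from pathlib import Path


def try_acquire(lock_path: Path):
    """Return an open, exclusively locked file handle, or None if already locked."""
    lock_path.parent.mkdir(parents=True, exist_ok=True)  # lock creates parent dirs
    f = lock_path.open("w")
    try:
        # Non-blocking exclusive lock: raises BlockingIOError when held elsewhere.
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        f.close()
        return None
    f.write(str(os.getpid()))  # record the owner PID, as test_lock_writes_pid expects
    f.flush()
    return f
```

Because `flock` locks are tied to the open file description, even a second `open()` of the same path within one process fails to acquire, which mirrors the multiprocess tests above.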
266 tests/test_manifest.py Normal file
@@ -0,0 +1,266 @@
"""Tests for Manifest dataclass."""

import tempfile
from datetime import datetime
from pathlib import Path

from src.core.file_entry import FileEntry
from src.core.manifest import Location, Manifest


class TestLocation:
    """Tests for Location dataclass."""

    def test_create_location(self) -> None:
        """Test creating a Location instance."""
        now = datetime.now()
        loc = Location(
            path="/mnt/disk1/vault.vault",
            last_seen=now,
            status="active",
        )

        assert loc.path == "/mnt/disk1/vault.vault"
        assert loc.last_seen == now
        assert loc.status == "active"

    def test_to_dict(self) -> None:
        """Test serialization to dictionary."""
        time = datetime(2026, 1, 28, 15, 45, 0)
        loc = Location(
            path="/mnt/disk1/vault.vault",
            last_seen=time,
            status="active",
        )

        result = loc.to_dict()

        assert result == {
            "path": "/mnt/disk1/vault.vault",
            "last_seen": "2026-01-28T15:45:00",
            "status": "active",
        }

    def test_from_dict(self) -> None:
        """Test deserialization from dictionary."""
        data = {
            "path": "/mnt/nas/vault.vault",
            "last_seen": "2026-01-25T08:00:00",
            "status": "unreachable",
        }

        loc = Location.from_dict(data)

        assert loc.path == "/mnt/nas/vault.vault"
        assert loc.last_seen == datetime(2026, 1, 25, 8, 0, 0)
        assert loc.status == "unreachable"


class TestManifest:
    """Tests for Manifest dataclass."""

    def test_create_new_manifest(self) -> None:
        """Test creating a new manifest."""
        manifest = Manifest.create_new(
            vault_name="My Vault",
            image_size_mb=1024,
            location_path="/mnt/disk1/myvault.vault",
        )

        assert manifest.vault_name == "My Vault"
        assert manifest.image_size_mb == 1024
        assert manifest.version == 1
        assert len(manifest.locations) == 1
        assert manifest.locations[0].path == "/mnt/disk1/myvault.vault"
        assert manifest.locations[0].status == "active"
        assert len(manifest.files) == 0
        assert manifest.vault_id  # UUID should be set

    def test_to_dict(self) -> None:
        """Test serialization to dictionary."""
        manifest = Manifest.create_new(
            vault_name="Test Vault",
            image_size_mb=512,
            location_path="/test/vault.vault",
        )

        result = manifest.to_dict()

        assert result["vault_name"] == "Test Vault"
        assert result["image_size_mb"] == 512
        assert result["version"] == 1
        assert len(result["locations"]) == 1
        assert len(result["files"]) == 0
        assert "vault_id" in result
        assert "created" in result
        assert "last_modified" in result

    def test_from_dict(self) -> None:
        """Test deserialization from dictionary."""
        data = {
            "vault_id": "550e8400-e29b-41d4-a716-446655440000",
            "vault_name": "My Vault",
            "version": 1,
            "created": "2026-01-28T10:30:00",
            "last_modified": "2026-01-28T15:45:00",
            "image_size_mb": 10240,
            "locations": [
                {
                    "path": "/mnt/disk1/myvault.vault",
                    "last_seen": "2026-01-28T15:45:00",
                    "status": "active",
                }
            ],
            "files": [
                {
                    "path": "documents/file.txt",
                    "hash": "sha256:abc123",
                    "size": 1234,
                    "created": "2026-01-28T10:30:00",
                    "modified": "2026-01-28T14:20:00",
                }
            ],
        }

        manifest = Manifest.from_dict(data)

        assert manifest.vault_id == "550e8400-e29b-41d4-a716-446655440000"
        assert manifest.vault_name == "My Vault"
        assert manifest.image_size_mb == 10240
        assert len(manifest.locations) == 1
        assert len(manifest.files) == 1
        assert manifest.files[0].path == "documents/file.txt"

    def test_save_and_load(self) -> None:
        """Test saving and loading manifest to/from file."""
        with tempfile.TemporaryDirectory() as tmpdir:
            mount_point = Path(tmpdir)

            # Create and save
            manifest = Manifest.create_new(
                vault_name="Test Vault",
                image_size_mb=512,
                location_path="/test/vault.vault",
            )
            manifest.save(mount_point)

            # Verify file exists
            manifest_path = mount_point / ".vault" / "manifest.json"
            assert manifest_path.exists()

            # Load and verify
            loaded = Manifest.load(mount_point)
            assert loaded.vault_id == manifest.vault_id
            assert loaded.vault_name == manifest.vault_name
            assert loaded.image_size_mb == manifest.image_size_mb

    def test_add_location(self) -> None:
        """Test adding a new location."""
        manifest = Manifest.create_new(
            vault_name="Test",
            image_size_mb=512,
            location_path="/disk1/vault.vault",
        )
        original_modified = manifest.last_modified

        manifest.add_location("/disk2/vault.vault")

        assert len(manifest.locations) == 2
        assert manifest.locations[1].path == "/disk2/vault.vault"
        assert manifest.locations[1].status == "active"
        assert manifest.last_modified >= original_modified

    def test_update_location_status(self) -> None:
        """Test updating location status."""
        manifest = Manifest.create_new(
            vault_name="Test",
            image_size_mb=512,
            location_path="/disk1/vault.vault",
        )

        manifest.update_location_status("/disk1/vault.vault", "unreachable")

        assert manifest.locations[0].status == "unreachable"

    def test_add_file(self) -> None:
        """Test adding a file entry."""
        manifest = Manifest.create_new(
            vault_name="Test",
            image_size_mb=512,
            location_path="/disk1/vault.vault",
        )
        now = datetime.now()
        file_entry = FileEntry(
            path="test.txt",
            hash="sha256:abc",
            size=100,
            created=now,
            modified=now,
        )

        manifest.add_file(file_entry)

        assert len(manifest.files) == 1
        assert manifest.files[0].path == "test.txt"

    def test_add_file_updates_existing(self) -> None:
        """Test that adding a file with same path updates it."""
        manifest = Manifest.create_new(
            vault_name="Test",
            image_size_mb=512,
            location_path="/disk1/vault.vault",
        )
        now = datetime.now()

        # Add first version
        entry1 = FileEntry(
            path="test.txt", hash="sha256:old", size=100, created=now, modified=now
        )
        manifest.add_file(entry1)

        # Add updated version
        entry2 = FileEntry(
            path="test.txt", hash="sha256:new", size=200, created=now, modified=now
        )
        manifest.add_file(entry2)

        assert len(manifest.files) == 1
        assert manifest.files[0].hash == "sha256:new"
        assert manifest.files[0].size == 200

    def test_remove_file(self) -> None:
        """Test removing a file entry."""
        manifest = Manifest.create_new(
            vault_name="Test",
            image_size_mb=512,
            location_path="/disk1/vault.vault",
        )
        now = datetime.now()
        entry = FileEntry(
            path="test.txt", hash="sha256:abc", size=100, created=now, modified=now
        )
        manifest.add_file(entry)

        manifest.remove_file("test.txt")

        assert len(manifest.files) == 0

    def test_get_file(self) -> None:
        """Test getting a file entry by path."""
        manifest = Manifest.create_new(
            vault_name="Test",
            image_size_mb=512,
            location_path="/disk1/vault.vault",
        )
        now = datetime.now()
        entry = FileEntry(
            path="test.txt", hash="sha256:abc", size=100, created=now, modified=now
        )
        manifest.add_file(entry)

        found = manifest.get_file("test.txt")
        not_found = manifest.get_file("nonexistent.txt")

        assert found is not None
        assert found.path == "test.txt"
        assert not_found is None
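The serialization contract these tests fix (ISO-8601 timestamps, plain string fields, exact round-trip) can be sketched as a standalone dataclass. `LocationSketch` below is a hypothetical mirror of the tested schema, not the project's `Location` class:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class LocationSketch:
    """Minimal mirror of the Location schema exercised by the tests."""

    path: str
    last_seen: datetime
    status: str

    def to_dict(self) -> dict:
        # datetimes are serialized as ISO-8601 strings for JSON storage
        return {
            "path": self.path,
            "last_seen": self.last_seen.isoformat(),
            "status": self.status,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "LocationSketch":
        return cls(
            path=data["path"],
            last_seen=datetime.fromisoformat(data["last_seen"]),
            status=data["status"],
        )
```

Since `datetime.fromisoformat` inverts `isoformat()` exactly, `from_dict(to_dict(loc)) == loc` holds, which is the round-trip property the Location tests rely on.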
391 tests/test_sync_manager.py Normal file
@@ -0,0 +1,391 @@
|
||||
"""Tests for sync_manager module."""
|
||||
|
||||
import time
|
||||
from pathlib import Path
|
||||
from threading import Event
|
||||
|
||||
import pytest
|
||||
|
||||
from src.core.file_watcher import EventType
|
||||
from src.core.manifest import Manifest
|
||||
from src.core.sync_manager import (
|
||||
ReplicaMount,
|
||||
SyncEvent,
|
||||
SyncManager,
|
||||
SyncStatus,
|
||||
)
|
||||
|
||||
|
||||
class TestReplicaMount:
|
||||
"""Tests for ReplicaMount dataclass."""
|
||||
|
||||
def test_get_file_path(self, tmp_path: Path) -> None:
|
||||
replica = ReplicaMount(
|
||||
mount_point=tmp_path / "mount",
|
||||
image_path=tmp_path / "vault.vault",
|
||||
is_primary=False,
|
||||
)
|
||||
assert replica.get_file_path("docs/file.txt") == tmp_path / "mount" / "docs" / "file.txt"
|
||||
|
||||
|
||||
class TestSyncManager:
|
||||
"""Tests for SyncManager class."""
|
||||
|
||||
def test_initial_state(self) -> None:
|
||||
manager = SyncManager()
|
||||
assert manager.status == SyncStatus.IDLE
|
||||
assert manager.replica_count == 0
|
||||
assert manager.primary_mount is None
|
||||
|
||||
def test_add_replica(self, tmp_path: Path) -> None:
|
||||
manager = SyncManager()
|
||||
mount = tmp_path / "mount"
|
||||
mount.mkdir()
|
||||
image = tmp_path / "vault.vault"
|
||||
|
||||
manager.add_replica(mount, image, is_primary=False)
|
||||
|
||||
assert manager.replica_count == 1
|
||||
assert manager.primary_mount is None
|
||||
|
||||
def test_add_primary_replica(self, tmp_path: Path) -> None:
|
||||
manager = SyncManager()
|
||||
mount = tmp_path / "mount"
|
||||
mount.mkdir()
|
||||
image = tmp_path / "vault.vault"
|
||||
|
||||
manager.add_replica(mount, image, is_primary=True)
|
||||
|
||||
assert manager.replica_count == 1
|
||||
assert manager.primary_mount == mount
|
||||
|
||||
def test_remove_replica(self, tmp_path: Path) -> None:
|
||||
manager = SyncManager()
|
||||
mount = tmp_path / "mount"
|
||||
mount.mkdir()
|
||||
image = tmp_path / "vault.vault"
|
||||
|
||||
manager.add_replica(mount, image)
|
||||
assert manager.replica_count == 1
|
||||
|
||||
result = manager.remove_replica(mount)
|
||||
assert result is True
|
||||
assert manager.replica_count == 0
|
||||
|
||||
def test_remove_nonexistent_replica(self, tmp_path: Path) -> None:
|
||||
manager = SyncManager()
|
||||
result = manager.remove_replica(tmp_path / "nonexistent")
|
||||
assert result is False
|
||||
|
||||
def test_start_watching_without_primary_raises(self) -> None:
|
||||
manager = SyncManager()
|
||||
with pytest.raises(ValueError, match="No primary replica"):
|
||||
manager.start_watching()
|
||||
|
||||
def test_start_and_stop_watching(self, tmp_path: Path) -> None:
|
||||
manager = SyncManager()
|
||||
mount = tmp_path / "mount"
|
||||
mount.mkdir()
|
||||
image = tmp_path / "vault.vault"
|
||||
|
||||
manager.add_replica(mount, image, is_primary=True)
|
||||
manager.start_watching()
|
||||
manager.stop_watching()
|
||||
|
||||
def test_pause_and_resume_sync(self, tmp_path: Path) -> None:
|
||||
events: list[SyncEvent] = []
|
||||
manager = SyncManager(on_sync_event=events.append)
|
||||
|
||||
primary = tmp_path / "primary"
|
||||
secondary = tmp_path / "secondary"
|
||||
primary.mkdir()
|
||||
secondary.mkdir()
|
||||
|
||||
manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
|
||||
manager.add_replica(secondary, tmp_path / "secondary.vault")
|
||||
manager.start_watching()
|
||||
|
||||
# Pause sync
|
||||
manager.pause_sync()
|
||||
|
||||
# Create file while paused
|
||||
(primary / "paused.txt").write_text("created while paused")
|
||||
time.sleep(0.3)
|
||||
|
||||
# No events should be recorded
|
||||
assert len(events) == 0
|
||||
assert not (secondary / "paused.txt").exists()
|
||||
|
||||
manager.stop_watching()
|
||||
|
||||
|
||||
class TestSyncManagerPropagation:
    """Tests for file propagation in SyncManager."""

    def test_propagate_file_creation(self, tmp_path: Path) -> None:
        events: list[SyncEvent] = []
        event_received = Event()

        def on_event(event: SyncEvent) -> None:
            events.append(event)
            event_received.set()

        manager = SyncManager(on_sync_event=on_event)

        primary = tmp_path / "primary"
        secondary = tmp_path / "secondary"
        primary.mkdir()
        secondary.mkdir()

        manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
        manager.add_replica(secondary, tmp_path / "secondary.vault")
        manager.start_watching()

        # Create file in primary
        (primary / "test.txt").write_text("hello")

        # Wait for sync
        event_received.wait(timeout=2.0)
        manager.stop_watching()

        # Check file was synced
        assert (secondary / "test.txt").exists()
        assert (secondary / "test.txt").read_text() == "hello"

        # Check event
        created_events = [e for e in events if e.event_type == EventType.CREATED]
        assert len(created_events) >= 1

    def test_propagate_file_deletion(self, tmp_path: Path) -> None:
        events: list[SyncEvent] = []
        event_received = Event()

        def on_event(event: SyncEvent) -> None:
            events.append(event)
            if event.event_type == EventType.DELETED:
                event_received.set()

        manager = SyncManager(on_sync_event=on_event)

        primary = tmp_path / "primary"
        secondary = tmp_path / "secondary"
        primary.mkdir()
        secondary.mkdir()

        # Create file in both
        (primary / "delete.txt").write_text("to delete")
        (secondary / "delete.txt").write_text("to delete")

        manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
        manager.add_replica(secondary, tmp_path / "secondary.vault")
        manager.start_watching()

        # Delete file in primary
        (primary / "delete.txt").unlink()

        # Wait for sync
        event_received.wait(timeout=2.0)
        manager.stop_watching()

        # Check file was deleted in secondary
        assert not (secondary / "delete.txt").exists()

    def test_propagate_file_move(self, tmp_path: Path) -> None:
        events: list[SyncEvent] = []
        event_received = Event()

        def on_event(event: SyncEvent) -> None:
            events.append(event)
            if event.event_type == EventType.MOVED:
                event_received.set()

        manager = SyncManager(on_sync_event=on_event)

        primary = tmp_path / "primary"
        secondary = tmp_path / "secondary"
        primary.mkdir()
        secondary.mkdir()

        # Create file in both
        (primary / "old.txt").write_text("content")
        (secondary / "old.txt").write_text("content")

        manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
        manager.add_replica(secondary, tmp_path / "secondary.vault")
        manager.start_watching()

        # Move file in primary
        (primary / "old.txt").rename(primary / "new.txt")

        # Wait for sync
        event_received.wait(timeout=2.0)
        manager.stop_watching()

        # Check file was moved in secondary
        assert not (secondary / "old.txt").exists()
        assert (secondary / "new.txt").exists()

    def test_propagate_to_multiple_replicas(self, tmp_path: Path) -> None:
        events: list[SyncEvent] = []
        event_received = Event()

        def on_event(event: SyncEvent) -> None:
            events.append(event)
            event_received.set()

        manager = SyncManager(on_sync_event=on_event)

        primary = tmp_path / "primary"
        secondary1 = tmp_path / "secondary1"
        secondary2 = tmp_path / "secondary2"
        primary.mkdir()
        secondary1.mkdir()
        secondary2.mkdir()

        manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
        manager.add_replica(secondary1, tmp_path / "secondary1.vault")
        manager.add_replica(secondary2, tmp_path / "secondary2.vault")
        manager.start_watching()

        # Create file in primary
        (primary / "multi.txt").write_text("multi content")

        # Wait for sync
        event_received.wait(timeout=2.0)
        time.sleep(0.2)  # Extra time for all replicas
        manager.stop_watching()

        # Check file was synced to all secondaries
        assert (secondary1 / "multi.txt").exists()
        assert (secondary2 / "multi.txt").exists()
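The propagation step these tests exercise can be sketched as a small helper that replays one watcher event onto the other replica roots. This is a minimal illustration, not the real `SyncManager` internals: `apply_event`, its string event types, and its signature are hypothetical, and the real code also handles pausing, debouncing, and error reporting via `SyncEvent` callbacks.

```python
from __future__ import annotations

import shutil
from pathlib import Path


def apply_event(event_type: str, rel: str, source_root: Path,
                replicas: list[Path], dest_rel: str | None = None) -> None:
    """Replay one file-watcher event (by relative path) on every replica root."""
    for root in replicas:
        if event_type in ("created", "modified"):
            target = root / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(source_root / rel, target)  # copy2 preserves timestamps
        elif event_type == "deleted":
            (root / rel).unlink(missing_ok=True)
        elif event_type == "moved" and dest_rel is not None:
            (root / rel).rename(root / dest_rel)
```

Under this sketch, the create/delete/move tests above map directly onto the three branches.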


class TestSyncManagerManifestSync:
    """Tests for manifest-based synchronization."""

    def test_sync_from_manifest_new_file(self, tmp_path: Path) -> None:
        manager = SyncManager()

        primary = tmp_path / "primary"
        secondary = tmp_path / "secondary"
        primary.mkdir()
        secondary.mkdir()
        (primary / ".vault").mkdir()
        (secondary / ".vault").mkdir()

        # Create file in primary
        (primary / "newfile.txt").write_text("new content")

        manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
        manager.add_replica(secondary, tmp_path / "secondary.vault")

        # Create manifests
        source_manifest = Manifest.create_new("Test", 100, str(tmp_path / "primary.vault"))
        source_manifest.add_file_from_path(primary, primary / "newfile.txt")

        target_manifest = Manifest.create_new("Test", 100, str(tmp_path / "secondary.vault"))

        # Sync
        synced = manager.sync_from_manifest(source_manifest, secondary, target_manifest)

        assert synced == 1
        assert (secondary / "newfile.txt").exists()
        assert (secondary / "newfile.txt").read_text() == "new content"

    def test_sync_from_manifest_newer_source(self, tmp_path: Path) -> None:
        manager = SyncManager()

        primary = tmp_path / "primary"
        secondary = tmp_path / "secondary"
        primary.mkdir()
        secondary.mkdir()
        (primary / ".vault").mkdir()
        (secondary / ".vault").mkdir()

        # Create file in both with different content
        (secondary / "update.txt").write_text("old content")
        time.sleep(0.1)
        (primary / "update.txt").write_text("new content")

        manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
        manager.add_replica(secondary, tmp_path / "secondary.vault")

        # Create manifests
        source_manifest = Manifest.create_new("Test", 100, str(tmp_path / "primary.vault"))
        source_manifest.add_file_from_path(primary, primary / "update.txt")

        target_manifest = Manifest.create_new("Test", 100, str(tmp_path / "secondary.vault"))
        target_manifest.add_file_from_path(secondary, secondary / "update.txt")

        # Sync
        synced = manager.sync_from_manifest(source_manifest, secondary, target_manifest)

        assert synced == 1
        assert (secondary / "update.txt").read_text() == "new content"

    def test_sync_from_manifest_deleted_file(self, tmp_path: Path) -> None:
        manager = SyncManager()

        primary = tmp_path / "primary"
        secondary = tmp_path / "secondary"
        primary.mkdir()
        secondary.mkdir()
        (primary / ".vault").mkdir()
        (secondary / ".vault").mkdir()

        # Create file only in secondary (simulating deletion in primary)
        (secondary / "deleted.txt").write_text("will be deleted")

        manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
        manager.add_replica(secondary, tmp_path / "secondary.vault")

        # Create manifests - source has no files, target has one
        source_manifest = Manifest.create_new("Test", 100, str(tmp_path / "primary.vault"))

        target_manifest = Manifest.create_new("Test", 100, str(tmp_path / "secondary.vault"))
        target_manifest.add_file_from_path(secondary, secondary / "deleted.txt")

        # Sync
        synced = manager.sync_from_manifest(source_manifest, secondary, target_manifest)

        assert synced == 1
        assert not (secondary / "deleted.txt").exists()

    def test_full_sync(self, tmp_path: Path) -> None:
        manager = SyncManager()

        primary = tmp_path / "primary"
        secondary1 = tmp_path / "secondary1"
        secondary2 = tmp_path / "secondary2"
        primary.mkdir()
        secondary1.mkdir()
        secondary2.mkdir()
        (primary / ".vault").mkdir()
        (secondary1 / ".vault").mkdir()
        (secondary2 / ".vault").mkdir()

        # Create files in primary
        (primary / "file1.txt").write_text("content1")
        (primary / "file2.txt").write_text("content2")

        manager.add_replica(primary, tmp_path / "primary.vault", is_primary=True)
        manager.add_replica(secondary1, tmp_path / "secondary1.vault")
        manager.add_replica(secondary2, tmp_path / "secondary2.vault")

        # Create and save primary manifest
        primary_manifest = Manifest.create_new("Test", 100, str(tmp_path / "primary.vault"))
        primary_manifest.add_file_from_path(primary, primary / "file1.txt")
        primary_manifest.add_file_from_path(primary, primary / "file2.txt")
        primary_manifest.save(primary)

        # Create empty manifests for secondaries
        Manifest.create_new("Test", 100, str(tmp_path / "secondary1.vault")).save(secondary1)
        Manifest.create_new("Test", 100, str(tmp_path / "secondary2.vault")).save(secondary2)

        # Full sync
        results = manager.full_sync()

        assert results[secondary1] == 2
        assert results[secondary2] == 2
        assert (secondary1 / "file1.txt").read_text() == "content1"
        assert (secondary2 / "file2.txt").read_text() == "content2"
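The decision rule behind `sync_from_manifest` that these tests rely on — copy when the target lacks the file, or when the source entry is newer with different content — can be sketched as below. `Entry` and `needs_sync` are hypothetical simplifications of the real `FileEntry`/manifest comparison (SHA-256 hash plus timestamp), not the project's actual API.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class Entry:
    """Hypothetical stand-in for a manifest FileEntry: content hash + mtime."""
    sha256: str
    mtime: float


def needs_sync(source: Entry, target: Entry | None) -> bool:
    """Decide whether a source file should be copied to the target replica.

    Copy when the target has no entry at all, or when the content differs
    (SHA-256 mismatch) and the source modification time is newer.
    """
    if target is None:
        return True
    return source.sha256 != target.sha256 and source.mtime > target.mtime
```

This matches the three manifest tests above: a missing target entry syncs, a newer source with different content syncs, and identical content is left alone.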
233
tests/test_vault.py
Normal file
@@ -0,0 +1,233 @@
"""Tests for vault module."""

import time
from pathlib import Path

import pytest

from src.core.image_manager import create_sparse_image
from src.core.vault import Vault, VaultError, VaultState


@pytest.fixture
def vault_image(tmp_path: Path) -> Path:
    """Create a test vault image."""
    image_path = tmp_path / "test.vault"
    create_sparse_image(image_path, 10)  # 10 MB
    return image_path


class TestVault:
    """Tests for Vault class."""

    def test_initial_state(self) -> None:
        vault = Vault()
        assert vault.state == VaultState.CLOSED
        assert vault.is_open is False
        assert vault.mount_point is None
        assert vault.replica_count == 0

    @pytest.mark.integration
    def test_open_and_close(self, vault_image: Path) -> None:
        vault = Vault()

        # Open
        mount = vault.open(vault_image)
        assert vault.is_open
        assert vault.state == VaultState.OPEN
        assert vault.mount_point == mount
        assert mount.exists()

        # Close
        vault.close()
        assert vault.state == VaultState.CLOSED
        assert vault.is_open is False
        assert vault.mount_point is None

    @pytest.mark.integration
    def test_context_manager(self, vault_image: Path) -> None:
        with Vault() as vault:
            vault.open(vault_image)
            assert vault.is_open

        assert vault.state == VaultState.CLOSED

    @pytest.mark.integration
    def test_state_change_callback(self, vault_image: Path) -> None:
        states: list[VaultState] = []

        def on_state_change(state: VaultState) -> None:
            states.append(state)

        vault = Vault(on_state_change=on_state_change)
        vault.open(vault_image)
        vault.close()

        assert VaultState.OPENING in states
        assert VaultState.OPEN in states
        assert VaultState.CLOSED in states

    @pytest.mark.integration
    def test_open_creates_manifest(self, vault_image: Path) -> None:
        vault = Vault()
        mount = vault.open(vault_image)

        assert vault.manifest is not None
        assert vault.manifest.vault_name == "test"  # from filename
        assert (mount / ".vault" / "manifest.json").exists()

        vault.close()

    @pytest.mark.integration
    def test_open_already_open_raises(self, vault_image: Path) -> None:
        vault = Vault()
        vault.open(vault_image)

        with pytest.raises(VaultError, match="already open"):
            vault.open(vault_image)

        vault.close()

    @pytest.mark.integration
    def test_get_replicas(self, vault_image: Path) -> None:
        vault = Vault()
        vault.open(vault_image)

        replicas = vault.get_replicas()
        assert len(replicas) == 1
        assert replicas[0].is_primary is True
        assert replicas[0].is_mounted is True
        assert replicas[0].image_path == vault_image

        vault.close()


class TestVaultLocking:
    """Tests for vault locking."""

    @pytest.mark.integration
    def test_vault_is_locked_when_open(self, vault_image: Path) -> None:
        vault = Vault()
        vault.open(vault_image)

        # Lock file should exist
        lock_path = vault_image.parent / f".{vault_image.stem}.lock"
        assert lock_path.exists()

        vault.close()

        # Lock should be released
        assert not lock_path.exists()

    @pytest.mark.integration
    def test_second_vault_cannot_open_locked(self, vault_image: Path) -> None:
        vault1 = Vault()
        vault1.open(vault_image)

        vault2 = Vault()
        with pytest.raises(VaultError, match="locked"):
            vault2.open(vault_image)

        vault1.close()
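The locking behaviour these tests assume — a sidecar `.{stem}.lock` file next to the image, held exclusively via `fcntl` with the owner PID recorded — might look roughly like the following sketch. `acquire_lock` and `release_lock` are assumed names for illustration, not the real `lock.py` API; this only works on Unix-like systems, where `fcntl` is available.

```python
import fcntl
import os
from pathlib import Path


def acquire_lock(lock_path: Path) -> int:
    """Take a non-blocking exclusive flock on a sidecar lock file.

    Raises BlockingIOError if another process already holds the lock,
    and writes the owner PID into the file for diagnostics.
    """
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        raise
    os.truncate(fd, 0)
    os.write(fd, str(os.getpid()).encode())  # record the owner PID
    return fd


def release_lock(lock_path: Path, fd: int) -> None:
    """Release the flock and remove the lock file, as the tests expect."""
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
    lock_path.unlink(missing_ok=True)
```

A second `acquire_lock` on the same path from another process would raise, which is the condition `test_second_vault_cannot_open_locked` surfaces as a `VaultError`.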


class TestVaultReplicas:
    """Tests for vault replica management."""

    @pytest.mark.integration
    def test_add_replica(self, vault_image: Path, tmp_path: Path) -> None:
        vault = Vault()
        vault.open(vault_image)

        # Create a file in primary
        (vault.mount_point / "test.txt").write_text("hello")  # type: ignore

        # Add replica
        replica_path = tmp_path / "replica.vault"
        replica_mount = vault.add_replica(replica_path)

        assert vault.replica_count == 2
        assert replica_path.exists()
        assert (replica_mount / "test.txt").exists()
        assert (replica_mount / "test.txt").read_text() == "hello"

        vault.close()

    @pytest.mark.integration
    def test_remove_replica(self, vault_image: Path, tmp_path: Path) -> None:
        vault = Vault()
        vault.open(vault_image)

        # Add replica
        replica_path = tmp_path / "replica.vault"
        vault.add_replica(replica_path)
        assert vault.replica_count == 2

        # Remove replica
        vault.remove_replica(replica_path)
        assert vault.replica_count == 1

        vault.close()

    @pytest.mark.integration
    def test_cannot_remove_primary(self, vault_image: Path) -> None:
        vault = Vault()
        vault.open(vault_image)

        with pytest.raises(VaultError, match="primary"):
            vault.remove_replica(vault_image)

        vault.close()


class TestVaultSync:
    """Tests for vault synchronization."""

    @pytest.mark.integration
    def test_file_propagation(self, vault_image: Path, tmp_path: Path) -> None:
        vault = Vault()
        vault.open(vault_image)

        # Add replica
        replica_path = tmp_path / "replica.vault"
        replica_mount = vault.add_replica(replica_path)

        # Create file in primary - should propagate to replica
        (vault.mount_point / "sync_test.txt").write_text("synced content")  # type: ignore
        time.sleep(0.5)  # Wait for sync

        assert (replica_mount / "sync_test.txt").exists()
        assert (replica_mount / "sync_test.txt").read_text() == "synced content"

        vault.close()

    @pytest.mark.integration
    def test_manual_sync(self, vault_image: Path, tmp_path: Path) -> None:
        vault = Vault()
        vault.open(vault_image)

        # Create file before adding replica
        (vault.mount_point / "existing.txt").write_text("existing")  # type: ignore

        # Add replica (should sync during add)
        replica_path = tmp_path / "replica.vault"
        replica_mount = vault.add_replica(replica_path)

        assert (replica_mount / "existing.txt").exists()

        # Create another file
        (vault.mount_point / "new.txt").write_text("new")  # type: ignore

        # Pause sync and modify
        vault._sync_manager.pause_sync()  # type: ignore
        (vault.mount_point / "paused.txt").write_text("paused")  # type: ignore
        time.sleep(0.3)

        # File shouldn't be in replica yet
        # (might be there due to timing, so just trigger manual sync)

        # Resume and sync manually
        vault._sync_manager.resume_sync()  # type: ignore
        vault.sync()

        vault.close()