📅 Day 20 - Auditing a Media Archive and Taking Control of Backups
🎯 Goal
Stop chaos from spreading.
Two practical goals:
1) Audit + restore order in a messy external SSD media archive (Drone/GoPro) without manual hunting.
2) Regain control of Time Machine behavior by approaching scheduling in a clean, reversible way.
Not “hacking.” This was operational discipline.
✅ What I Did
1) Audited missing / empty DJI subtitle files (SRT) safely
Built a `find ... -print0` pipeline to scan DJI clips and check whether matching `.SRT` files existed and were non-empty.
Results:
- found at least one clip with a missing SRT
- found `.SRT` files that existed but were 0 bytes
- moved empty `.SRT` files into a holding folder (`_HOLD_EMPTY_SRT/...`) instead of deleting (reversible)
Created a CSV summary (`date,clip,status,mp4_path,srt_path,notes`) to track archive health without terminal spam.
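A minimal sketch of that audit pass, assuming DJI clips are `.MP4` files with same-name `.SRT` sidecars (the function name, root path argument, and CSV filename are placeholders, not the exact commands I ran):

```shell
# Audit sketch: report-only, nothing is moved or deleted here.
# Assumes sidecar naming CLIP.MP4 -> CLIP.SRT; adjust to the real layout.
audit_srt() {                       # usage: audit_srt <root_dir> <out_csv>
  root="$1"; csv="$2"
  echo "date,clip,status,mp4_path,srt_path,notes" > "$csv"
  find "$root" -type f -name '*.MP4' -print0 |
  while IFS= read -r -d '' mp4; do
    srt="${mp4%.*}.SRT"
    if [ ! -e "$srt" ]; then
      srt_status="MISSING"
    elif [ ! -s "$srt" ]; then
      srt_status="EMPTY"            # exists but is 0 bytes
    else
      srt_status="OK"
    fi
    echo "$(date +%F),$(basename "$mp4"),$srt_status,$mp4,$srt," >> "$csv"
  done
}
```

`-print0` / `read -d ''` survive spaces in filenames, and `srt_status` dodges the reserved `status` name; moving the `EMPTY` files into `_HOLD_EMPTY_SRT/` stays a separate, reviewed step.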
2) Hit a shell trap and corrected the workflow
While pasting “script-like chunks” into zsh, I triggered errors like:
`zsh: command not found: #`
`zsh: read-only variable: status`
Meaning:
- pasted comments got interpreted as commands in that context
- used a variable name (`status`) that can clash depending on shell/environment
Fix mindset:
- run multi-step logic as a controlled block (script/heredoc), not pasted fragments
- use safer variable names (`tm_status`, `srt_status`, etc.)
- validate with dry-run output first, then move files
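The contained-block idea can be sketched like this: the multi-step logic lives in one script file (or a quoted heredoc), so `#` lines stay comments and variable names are chosen deliberately. The filename and echo output are illustrative:

```shell
# Write the multi-step logic to a file, review it, then run it once.
cat > audit_step.sh <<'EOS'
#!/bin/sh
# This '#' line is a comment here -- pasted alone into an interactive
# zsh session, fragments like it can be misread as commands.
tm_status="unknown"    # safe name: zsh reserves $status as an alias for $?
srt_status="pending"
echo "dry-run: tm_status=$tm_status srt_status=$srt_status"
EOS
sh audit_step.sh       # dry-run output first; file moves come later
```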
3) Mapped the SSD structure to stop guessing
Since `tree` wasn’t installed, I generated:
- a directory map (“tree replacement”)
- a media index CSV (counts per extension + sample paths)
Outcome: I could see the real structure and avoid moving files based on assumptions.
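A possible shape for the “tree replacement” and the extension counts, using only `find`, `sed`, and `awk` (the function names are mine, not a tool):

```shell
# Directory map: folder names indented by depth (a rough `tree` stand-in).
dir_map() {
  find "$1" -type d | sort | sed 's|[^/]*/|  |g'
}
# Media index: file counts per (lowercased) extension, most common first.
ext_counts() {
  find "$1" -type f -name '*.*' |
    awk -F. '{print tolower($NF)}' | sort | uniq -c | sort -rn
}
```

From there, sample paths per extension for the CSV are one more `find -name` pass away.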
4) Built a restore plan instead of blindly moving files
Created a GoPro restore plan CSV using anchors:
- `.LRV` and `.THM` in the “UnProcessed” folder used as destination truth markers
- actions labeled as: `MOVE`, `OK`, `CONFLICT`, `AMBIGUOUS`, `UNPLACED`
The point was: review the decision file before touching anything.
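One way those labels could be derived per file (the anchor/layout logic here is an assumption; `cmp -s` is what decides `OK` vs `CONFLICT`):

```shell
# Classify one source file against a destination folder -- label only, no move.
plan_row() {                        # usage: plan_row <src_file> <dest_dir>
  src="$1"; dest="$2"; base="$(basename "$src")"
  if [ ! -d "$dest" ]; then
    action="UNPLACED"               # no destination anchor to trust
  elif [ ! -e "$dest/$base" ]; then
    action="MOVE"                   # safe: nothing would be overwritten
  elif cmp -s "$src" "$dest/$base"; then
    action="OK"                     # identical copy already in place
  else
    action="CONFLICT"               # same name, different bytes: human decides
  fi
  echo "$action,$src,$dest/$base"
}
```

`AMBIGUOUS` needs extra logic (e.g. multiple candidate destinations), which is exactly why the CSV gets a human review before anything moves.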
5) Took control of Time Machine frequency (macOS)
macOS doesn’t provide a clean “backup every X hours” toggle.
Clean approach:
- disable automatic backups in Settings
- use `launchd` scheduling to run `tmutil startbackup --auto` at a defined interval
Key principle: reversible automation (create → enable → verify logs → disable/remove).
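The `launchd` piece might look like the agent below (the label, filename, and 6-hour interval are illustrative; `tmutil startbackup --auto` is the documented manual trigger). Saved as something like `~/Library/LaunchAgents/local.tm.interval.plist`, it is loaded and unloaded with `launchctl`, which is what keeps the whole setup reversible:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>local.tm.interval</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/tmutil</string>
    <string>startbackup</string>
    <string>--auto</string>
  </array>
  <key>StartInterval</key>
  <integer>21600</integer> <!-- every 6 hours; pick your own interval -->
</dict>
</plist>
```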
🔗 Key Cybersecurity Connections
- Backups are security
- ransomware, deletion, and corruption become survivable if backups exist and are testable.
- Treat file operations like incident response
- observe → inventory → plan → execute
- log outputs and preserve reversibility with holding folders
- Verify, don’t assume
- wrong scan scope creates wrong conclusions — same failure mode as security investigations.
⚠️ Challenges
- no `tree` → needed a fast replacement to map structure
- shell paste mistakes in `zsh` created misleading errors
- archive structure drift: multiple “truths” (dated folders vs organized folders)
- risk of losing hours without automation and a plan
🧠 What I Learned
- A CSV plan is a weapon against chaos:
- prevents destructive decisions
- makes actions reviewable and measurable
- “Safe moves” beat “cleanup”:
- `_HOLD_*` folders are part of a professional workflow
- deletion is last, not first
- macOS control still exists even when the GUI hides it:
- `launchd` + `tmutil` gives auditable, removable scheduling.
⏭️ Next Steps
- execute only safe `MOVE` actions from the restore plan (no overwrites)
- create a second report focused on `UNPLACED` items:
- group by filename patterns and search wider scope
- build a reusable “archive health check” command:
- counts, orphan files, missing anchors
- for Time Machine:
- confirm schedule triggers
- confirm backups complete successfully
- confirm logs show clean runs
💭 Reflection
Not glamorous — but it’s the difference between “using computers” and “running systems”:
- controlled changes
- traceable actions
- reversible operations
- minimal drama
I’m done trusting “looks right.” If it isn’t provable, it isn’t real.
✅ Lessons Learned
What worked
- inventory-first approach (map → index)
- CSV plan before moving anything
- holding folders instead of deletion
What broke
- assumptions about file locations
- pasting mixed fragments into `zsh`
Why it broke
- wrong scan scope = wrong conclusions
- the shell executes what you paste literally unless you contain it properly
Fix / takeaway
- map first, then automate
- plan → review → execute, never execute → regret
- use clean script blocks and safe variable names
