
Issue #7 — “The Test That Only Failed When You Watched It”

🌼 Lady Daisy Bug: A QA & Test Automation Digest with Personality
Welcome to Lady Daisy Bug — your new favorite corner of the internet where quality assurance meets charm, code, and character. Whether you’re debugging tests in the moonlight or strategizing a flawless regression suite before your first coffee, this newsletter’s for you.
I’m your host, Lady Daisy Bug — part test whisperer, part bug wrangler, and full-time believer in thoughtful testing. Each issue will blend bite-sized insights, automation tips, and a little QA storytelling to brighten your day (and your pipelines).
Let’s squash some bugs — and do it in style.
🌟 Bug of the Week: The Test That Only Failed When You Watched It
Story:
I ran the test in CI: ✅ Pass.
I ran the test headlessly: ✅ Pass.
I ran the test locally, visible, with my eyes fixed on the emulator window: ❌ Fail.
What. Was. Happening?
This wasn’t your typical flake. This was psychological warfare. The moment I opened the emulator window and watched the test run in real-time, it fell apart. UI taps missed. Elements disappeared. Animations stuttered like haunted VHS tapes.
But every time I closed the emulator or ran it minimized? Perfectly fine.
Symptoms:
- Tests failed only when the emulator window was open or focused.
- Visual flickering, occasional missed taps, timing issues.
- Headless and background executions were stable.
- Logs showed no clear errors — just timeouts and null returns.
Root Cause:
The emulator was running in hardware rendering mode, but my laptop’s GPU was throttled by window compositing and desktop effects when the emulator was visible.
Translation: the emulator slowed down only when visible, causing subtle race conditions in animations, loading, and gesture responsiveness.
This is a classic case of observer effect in testing — not philosophical, but resource-bound.
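To make “subtle race condition” concrete, here’s a minimal Kotlin/UiAutomator sketch (the package, resource id, and timings are made up for illustration, not lifted from my suite). The first version trusts the clock; the second trusts the screen.

```kotlin
import androidx.test.uiautomator.By
import androidx.test.uiautomator.UiDevice
import androidx.test.uiautomator.Until

// The package and resource id below are illustrative, not real.
private const val SUBMIT = "com.example.app:id/submit"

// Brittle: assumes the enter animation always finishes within 300ms.
// An extra ~100ms of render lag pushes the tap past the sleep,
// so it fires before the button is actually on screen.
fun tapSubmitBrittle(device: UiDevice) {
    Thread.sleep(300)
    device.findObject(By.res(SUBMIT)).click()
}

// Sturdier: wait on the condition itself, not on the clock.
fun tapSubmitRobust(device: UiDevice) {
    device.wait(Until.hasObject(By.res(SUBMIT)), 5_000)
    device.findObject(By.res(SUBMIT)).click()
}
```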
How I figured it out:
- Added high-precision timestamps to every test step (see the sketch after this list).
- Noticed increased lag (100–200ms delays) only in watched runs.
- Switched emulator graphics mode to software — and suddenly, consistency.
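Roughly what that timing instrumentation looked like, as a sketch (`timedStep` and the log tag are my own names, not a library API):

```kotlin
import android.util.Log

// Wraps a test step and logs how long it actually took.
inline fun <T> timedStep(label: String, block: () -> T): T {
    val start = System.nanoTime()
    try {
        return block()
    } finally {
        val elapsedMs = (System.nanoTime() - start) / 1_000_000
        // In watched runs, these deltas jumped by 100–200ms per step.
        Log.d("StepTiming", "$label took ${elapsedMs}ms")
    }
}

// Usage: timedStep("tap submit") { tapSubmitRobust(device) }
```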
Fix:
- For CI: always run the emulator headless or in no-window mode (e.g. `emulator -avd <name> -no-window`).
- For local tests: switch the emulator’s graphics to Software in the AVD’s advanced settings (the `hw.gpu.mode` line in its config.ini).
- Added an explicit wait for frame rendering before asserting on UI state (sketched below).
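One way to express that frame wait in an instrumentation test (a sketch, not my exact code; `waitForNextFrame` is an invented helper name, but `Choreographer` is the real Android frame API):

```kotlin
import android.view.Choreographer
import androidx.test.platform.app.InstrumentationRegistry
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

// Blocks the test thread until the main thread draws its next frame.
fun waitForNextFrame(timeoutMs: Long = 2_000) {
    val latch = CountDownLatch(1)
    InstrumentationRegistry.getInstrumentation().runOnMainSync {
        // The callback fires on the main thread once the next frame renders.
        Choreographer.getInstance().postFrameCallback { latch.countDown() }
    }
    check(latch.await(timeoutMs, TimeUnit.MILLISECONDS)) {
        "No frame rendered within ${timeoutMs}ms; the renderer may be stalled."
    }
}
```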
📌 Lesson: Sometimes your test isn’t broken — your GPU pipeline is.
🧪 Test Garden Tip: When to Go Headless — and When Not To
✅ Use headless mode for consistency in CI:
emulator -avd <name> -no-window -gpu off  # or -gpu swiftshader_indirect
✅ Use software rendering if your hardware GPU causes flakiness.
✅ Always check:
- CPU & GPU load
- Emulator logs (`adb logcat` or `emulator -verbose`)
- Background apps stealing focus (yes, Slack can ruin your run)
✅ For critical animations or gesture tests, record both modes (visible and headless) to confirm consistent behavior; one lightweight way is sketched below.
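If full video capture feels heavy, per-checkpoint screenshots can stand in; a minimal sketch assuming a UiAutomator instrumentation test (`checkpoint` and the save path are my own illustrative choices):

```kotlin
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.UiDevice
import java.io.File

// Dumps a PNG of the current screen at a named checkpoint;
// run once visible, once headless, then diff the two sets offline.
fun checkpoint(name: String) {
    val instrumentation = InstrumentationRegistry.getInstrumentation()
    val dir = instrumentation.targetContext.getExternalFilesDir(null)
    UiDevice.getInstance(instrumentation).takeScreenshot(File(dir, "$name.png"))
}
```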
💬 Lady’s Log
“In QA, even your gaze can be disruptive.”
I used to think tests were like math: deterministic, consistent, pure. But then I learned… opening a window, literally, can make your test flake.
CI’s blind. Emulators lie. But your intuition? Trust that. Every time.
— 𝓛𝓪𝓭𝔂 𝓓𝓪𝓲𝓼𝔂 𝓑𝓾𝓰 🐞🌼
📚 Petal Picks
- 👁️ Observer effect (information technology) — Wikipedia
- 🔬 Configure hardware acceleration for the Android Emulator — Android Developers
- 🎥 Tool: scrcpy — view & record Android screens with no emulator flakiness
💌 Coming Next: Issue #8 Teaser
“The Retry That Hid a Real Bug.” It passed after 3 retries, so no one noticed. But under the hood? A real bug was dodging automation like Neo dodges bullets.
Should you retry at all — and when does resilience become denial?