
Software Testing Podcast - Migrating Test Automation - The Evil Tester Show Episode 027
Should you use AI to help you migrate test automation code? And what should you actually migrate, given that the test coverage hasn’t changed? In this episode we discuss how abstractions and AI can be used to migrate… and when you shouldn’t.
Welcome to The Evil Tester Show! In this episode, host Alan Richardson dives into the complex world of test automation migrations. Have you ever wondered what it really takes to move your automated test execution code from one tool or language to another—like switching from WebDriver to Playwright, or migrating from Java to TypeScript? Alan breaks down the pitfalls, challenges, and best practices you need to consider before taking the leap. He explains why migrating isn’t just about copying test cases, how abstraction layers can save you time and headaches, and why using AI and solid design principles can streamline your transition. Whether you’re facing unsupported tools, evolving frameworks, or strategic changes in your testing approach, this episode offers practical advice to plan and execute a seamless migration—without dragging old problems into your new stack.
Here’s a taste of what’s inside:
1. Why Migrate—And When You Really Shouldn’t. Before any big move, Alan urges teams to get their “why” straight. Is your current tool unsupported? Is your framework truly incompatible, or does it have capabilities you haven’t yet tapped? Migrate for the right reasons and make sure your decision isn’t just papering over problems that could follow you to the next tool.
2. Don’t Confuse Migration with a Rewrite. Too many teams treat migration like a rewrite—often with disastrous results. Alan emphasizes the importance of planning ahead, solving existing flakiness and coverage issues before you move, and carefully evaluating all options (not just the shiny new tool you think you want).
3. The Secret Weapon: Abstraction Layers. The podcast’s biggest takeaway: Don’t migrate “test cases”—migrate abstractions. If your tests are full of direct calls like webdriver.openPage(), you’ve got work to do. Build out robust abstraction layers (think page objects or logical user flows) and keep your tests clean (see the sketch after this list). When it comes time to migrate, you’ll only need to move those underlying layers, not thousands of individual test case scripts.
4. Taming Flakiness and the Risks of Retries. Migration is not the time to rely on self-healing tests or retries. Any test flakiness must be rooted out and fixed before porting code. Bringing instability into a new stack only multiplies headaches later.
5. Harnessing AI—But Stay in Control. AI-assisted migration really shines at mapping old code to new, but Alan warns against “agentic” (hands-off) approaches. Use AI as a powerful tool, not as the driver—you need understanding and control to ensure things work reliably in CI/CD pipelines.
6. Learn Fast: Tackle the Hardest Stuff Early. Pro tip: Once you’re ready, start your migration with the simplest test, just to get going—then dive into the hardest, flakiest, most complex workflows. You’ll uncover potential blockers early and kick-start team learning.
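To make point 3 concrete, here’s a minimal TypeScript/Playwright sketch (the application, selectors, and the ShoppingFlow class are hypothetical, not from the episode): the first test is welded to the tool’s API and would need rewriting line by line, while the second only expresses intent through an abstraction, so a future migration touches the class, not the test.

```typescript
import { test, Page } from "@playwright/test";

// Hypothetical logical abstraction: the only place that calls the tool's API.
class ShoppingFlow {
  constructor(private page: Page) {}

  async loginAs(user: string, password: string): Promise<void> {
    await this.page.goto("/login"); // assumes baseURL is set in the config
    await this.page.getByLabel("Username").fill(user);
    await this.page.getByLabel("Password").fill(password);
    await this.page.getByRole("button", { name: "Log in" }).click();
  }

  async addToBasket(item: string): Promise<void> {
    await this.page.getByRole("button", { name: `Add ${item} to basket` }).click();
  }
}

// Tool-coupled test: every line is Playwright-specific and has to be
// rewritten if the team changes tool again.
test("add item to basket (tool-coupled)", async ({ page }) => {
  await page.goto("/login");
  await page.getByLabel("Username").fill("bob");
  await page.getByLabel("Password").fill("secret");
  await page.getByRole("button", { name: "Log in" }).click();
  await page.getByRole("button", { name: "Add anvil to basket" }).click();
});

// Abstracted test: reads as business intent, so a migration leaves it alone.
test("add item to basket (abstracted)", async ({ page }) => {
  const shop = new ShoppingFlow(page);
  await shop.loginAs("bob", "secret");
  await shop.addToBasket("anvil");
});
```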
“We’re not migrating test cases when we change a tool. We’re migrating the physical interaction layer with our application… ”
Episode Summary
Introduction to Code Migrations in Test Automation
- Definition and Context: This episode discusses the process of migrating automated test execution code from one tool or library to another, such as from TypeScript Playwright to Java WebDriver or vice versa.
- Trigger for Discussion: The presenter was prompted by recent trends of using AI and commercial services to assist with such migrations and emphasizes that migration should not be undertaken lightly due to its complexity and expense.
Reasons and Considerations for Migrating
- Importance of Justification: It is crucial to have a sound, evaluated reason for migration—such as lack of tool support or the need for features exclusive to another tool—rather than migrating to mask existing problems like flaky tests.
- Proper Evaluation Process: The evaluation should include multiple tools and languages and should not be performed ad hoc during the migration itself. Establish the root cause of any limitation first, so that problems caused by inadequate skill or misuse of the current tool, rather than genuine tool limitations, are not carried over to the next one.
Preparing for Migration
- Problem Resolution Before Migration: It is recommended to resolve existing issues, such as flaky tests or unnecessary test coverage, before initiating migration. The migration process itself is not the ideal time to remediate such problems.
- Scope Reduction and Focus: Prioritize migrating only important and maintained tests. Remove redundant, obsolete, or failing tests to streamline and improve the migration outcome.
Migration Process and Best Practices
- MVP and Technical Risk Management: Start with the simplest test or project component to establish a working base, then tackle the most complex or risky cases to learn the tool’s capabilities and to inform team training.
- Integration and Early Detection: Integrate migrated code into continuous integration/continuous deployment (CI/CD) pipelines early to promptly surface synchronization or flakiness issues, and address them before proceeding.
Harnessing AI and Abstraction Layers
- Role of AI in Migration: AI tools are highly effective in code translation and migration tasks, providing efficiency gains and reducing manual effort. However, careful oversight and validation are necessary to ensure code quality.
- Importance of Abstraction Layers: Effective use of abstraction layers (such as page objects or logical user classes) simplifies migration. The codebase should avoid direct use of tool-specific commands in tests, instead leveraging abstractions so that only the underlying layers require significant migration.
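As a sketch of what that can look like in practice (the LoginPage contract, selectors, and class names are hypothetical, assuming the selenium-webdriver and @playwright/test packages), the tool-specific calls live behind a small physical-layer contract, so migrating means writing a second implementation rather than editing thousands of tests:

```typescript
import { By, WebDriver } from "selenium-webdriver";
import { Page } from "@playwright/test";

// Hypothetical physical-layer contract that logical flows and tests depend on.
interface LoginPage {
  open(baseUrl: string): Promise<void>;
  loginAs(user: string, password: string): Promise<void>;
}

// Existing WebDriver implementation: the only code that has to be replaced.
class WebDriverLoginPage implements LoginPage {
  constructor(private driver: WebDriver) {}

  async open(baseUrl: string): Promise<void> {
    await this.driver.get(`${baseUrl}/login`);
  }

  async loginAs(user: string, password: string): Promise<void> {
    await this.driver.findElement(By.id("username")).sendKeys(user);
    await this.driver.findElement(By.id("password")).sendKeys(password);
    await this.driver.findElement(By.css("button[type=submit]")).click();
  }
}

// Playwright implementation written during the migration: same contract,
// so the tests and logical flows above it do not change.
class PlaywrightLoginPage implements LoginPage {
  constructor(private page: Page) {}

  async open(baseUrl: string): Promise<void> {
    await this.page.goto(`${baseUrl}/login`);
  }

  async loginAs(user: string, password: string): Promise<void> {
    await this.page.locator("#username").fill(user);
    await this.page.locator("#password").fill(password);
    await this.page.locator("button[type=submit]").click();
  }
}
```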
Focus on Migrating the Right Code
- Not Migrating Test Cases Directly: Emphasis is placed on migrating the physical interaction layers (the code that interacts with the application under test) rather than the high-level test cases, which represent business logic and rarely change across tools.
- Good Design Practices: Creating and using abstractions (e.g., page objects, logical flows) minimizes effort during migration and ensures the test logic remains stable, regardless of changes in the underlying automation framework.
Flakiness, Retries, and Self-Healing Tests
- Avoiding Retries and Auto-Healing: Flaky tests or retry strategies should be eliminated pre-migration, not ported over. Retrying hides issues, leading to persistent problems in the new tool.
- Synchronization and Maintenance: Synchronization should be handled via built-in mechanisms (e.g., Playwright’s auto-waiting), but external retry strategies should be avoided. Maintenance burden is reduced when abstractions encapsulate complex interactions and synchronization logic.
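In a Playwright migration, that guidance might look like the sketch below (a hypothetical playwright.config.ts fragment, not the episode’s own setup): runner-level retries stay off so flakiness fails loudly, and synchronization comes from the tool’s built-in auto-waiting rather than sleeps or hand-rolled retry loops.

```typescript
// playwright.config.ts — minimal sketch
import { defineConfig } from "@playwright/test";

export default defineConfig({
  retries: 0, // don't paper over flakiness with automatic retries
  use: {
    baseURL: "https://example.com", // hypothetical application under test
  },
});
```

And a test that relies on auto-waiting instead of an external retry wrapper:

```typescript
import { test, expect } from "@playwright/test";

test("order confirmation appears", async ({ page }) => {
  await page.goto("/checkout");
  // Web-first assertion: Playwright auto-waits for the element to appear,
  // so no sleep() calls or retry wrappers are needed.
  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```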
Abstraction and Domain Modelling in Test Automation
- Physical vs. Logical Abstractions: Physical models represent specific UI components and web elements, while logical models encapsulate user flows and business logic. Organized abstraction facilitates easier migration and maintenance.
- Test Code Structure: Ideally, test code should use logical abstractions, reducing the impact of migration. Any necessary error handling or direct element access should be restricted and abstracted at the lower levels.
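A compact sketch of that layering (class names, locators, and flows are hypothetical): the logical model speaks in domain terms, while the physical model is the only layer that knows about locators, synchronization, and low-level error handling, so it is the part that gets rewritten in a migration.

```typescript
import { expect, Page } from "@playwright/test";

// Physical model: owns locators, waiting, and low-level error handling.
class CheckoutPage {
  constructor(private page: Page) {}

  async payWithCard(cardNumber: string): Promise<void> {
    await this.page.locator("#card-number").fill(cardNumber);
    await this.page.locator("button#pay").click();
    // Synchronization is handled here, not in the tests or the logical layer.
    await expect(this.page.getByText("Payment accepted")).toBeVisible();
  }
}

// Logical model: expresses user intent and delegates physical interaction.
// Tests depend on this layer, so it survives a tool change largely intact.
class Customer {
  constructor(private checkout: CheckoutPage) {}

  async completesPurchase(cardNumber: string): Promise<void> {
    await this.checkout.payWithCard(cardNumber);
  }
}
```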
Conclusion and Key Takeaways
- Summary of Approach: Successful migration is driven by sound reasons, thorough pre-migration preparation, leveraging of abstraction layers, and the effective use of AI.
- Maintaining Test Quality: Continuous integration and the removal of flaky or retry-dependent tests are vital for long-term quality and maintainability after migration. The main focus should always be on migrating the physical interaction layer, not the test cases themselves.
Join our Patreon at https://www.patreon.com/c/eviltester from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.