
AI in Test Automation

Jul 3, 2025

Summary

  • The session, led by Sadashukla from Amazon, focused on leveraging generative AI tools to automate and optimize the creation, review, and maintenance of test automation scripts.
  • Key demonstrations covered AI-assisted script generation, code review, bug fixing, and documentation using tools like Amazon Q Developer.
  • Emphasis was placed on increasing efficiency, empowering manual testers, and overcoming framework/language barriers in test automation.
  • Attendees were encouraged to reach out with questions or share experiences with similar tools.

Action Items

  • Sadashukla: Share the document detailing coding standards and optimization techniques with attendees.
  • Attendees: Reach out to Sadashukla via LinkedIn or YouTube with any queries or feedback.
  • Attendees: Share experiences with other code generation tools (besides GitHub Copilot or Amazon Q Developer) with the community.

Current State and Challenges in Test Automation

  • Test automation scripts typically require specific framework and language expertise (e.g., Selenium, Playwright, Cypress).
  • Manual test script creation is time-consuming and often lags behind development sprints due to limited QA bandwidth.
  • Teams are increasingly required to handle both manual and automated testing with fewer resources.
  • Hiring for niche skill sets can be challenging and delay automation progress.

Using Generative AI for Test Automation

  • Generative AI and LLMs can drastically reduce the effort and time spent writing test scripts by generating code from prompts or existing test cases (a sketch of what such generation might produce follows this list).
  • While AI can generate scripts, experienced automation engineers are still needed for validation, review, and coverage assessment.
  • AI enables testers to focus more on areas like unit testing and code improvement, beyond just script writing.
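
For illustration, here is a minimal sketch of the kind of Selenium (Python) script such prompting might yield. The URL, element IDs, and credentials below are hypothetical placeholders, not taken from the session:

    # Hypothetical output for a prompt like: "Write a Selenium (Python)
    # test that logs in at https://example.com/login via the #username
    # and #password fields and asserts the dashboard heading appears."
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    def test_login_shows_dashboard():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.com/login")  # placeholder URL
            driver.find_element(By.ID, "username").send_keys("demo_user")
            driver.find_element(By.ID, "password").send_keys("demo_pass")
            driver.find_element(By.ID, "login-button").click()
            # Explicit wait instead of a fixed sleep
            heading = WebDriverWait(driver, 10).until(
                EC.visibility_of_element_located((By.TAG_NAME, "h1"))
            )
            assert heading.text == "Dashboard"
        finally:
            driver.quit()

Even with a generated script like this, a reviewer still needs to confirm that the locators, waits, and assertions actually match the application under test, which is why skilled engineers remain essential.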

Demo Highlights: AI-Driven Code Generation and Review

  • Demonstrated using Amazon Q Developer to generate Selenium and API automation scripts with simple, specific prompts.
  • Emphasized the importance of clear, context-rich prompts that specify element IDs, expected outputs, and framework details for quality code generation.
  • Showcased how AI tools can review code against best practices, flag poor logging and exception handling, and suggest code fixes automatically (see the sketch after this list).
  • AI can also generate unit tests and documentation, reducing manual overhead.
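
As a hedged illustration of the review-and-fix step (the function and client names here are invented, not from the demo), an AI reviewer would typically flag the "before" version for its bare except and print-based logging, propose something closer to the "after", and can then generate a unit test for the fix:

    import logging
    import pytest

    logger = logging.getLogger(__name__)

    # Before: the kind of code an AI review flags -- a bare except that
    # swallows every error, and print() instead of structured logging.
    def fetch_order_before(client, order_id):
        try:
            return client.get_order(order_id)
        except:
            print("error")
            return None

    # After: the style of fix such tools suggest -- a narrow exception,
    # a log entry with context, and the error re-raised so failures
    # surface instead of silently returning None.
    def fetch_order_after(client, order_id):
        try:
            return client.get_order(order_id)
        except ConnectionError:
            logger.exception("Failed to fetch order %s", order_id)
            raise

    # A unit test of the kind such tools can also generate, using a
    # stub client (hypothetical; the demo used Amazon Q Developer
    # against real project code).
    class StubClient:
        def get_order(self, order_id):
            raise ConnectionError("network down")

    def test_fetch_order_after_reraises():
        with pytest.raises(ConnectionError):
            fetch_order_after(StubClient(), "ORD-1")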

Practical Tips for Effective Prompting and AI Usage

  • Always specify the programming language, framework, and business purpose in prompts (an example prompt follows this list).
  • Define input values and expected results explicitly, and avoid packing multiple requirements into one overly complex single-line request.
  • Provide contextual framework or architecture information to improve code outcomes.
  • Use available AI features (e.g., explain code, refactor, optimize, generate documentation) to speed up onboarding and code understanding.
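
Putting these tips together, a prompt along the following lines (the application details are invented for illustration) tends to produce far better code than a vague one-liner:

    Weak prompt:
      "Write a test for the login page."

    Context-rich prompt:
      "Using Python, Selenium, and pytest, write a test for the login
      page of our retail web app. The username field has id='username',
      the password field id='password', and the submit button
      id='login-button'. Use the inputs demo_user / demo_pass, wait
      explicitly for navigation to finish, and assert that the
      post-login page shows the heading 'Dashboard'. Follow our Page
      Object Model structure."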

Impact, Next Steps, and Community Engagement

  • Using AI can cut script creation time by up to 70% for boilerplate/repetitive tasks and by roughly 40-50% overall in practical scenarios.
  • Faster onboarding of new team members and improved code quality/coverage.
  • Attendees are encouraged to experiment with AI-driven tools, ask questions, and share their findings with peers.

Decisions

  • Leverage generative AI for test automation — Accelerate script creation, code review, and documentation to improve efficiency and coverage without reducing the need for skilled QA professionals.

Open Questions / Follow-Ups

  • No technical questions were left unresolved during the session; the presenter invited ongoing questions and feedback via LinkedIn or YouTube.
  • Attendees to report back with experiences using other code generation tools to broaden community knowledge.