Effective AI Usage in Testing: From ChatGPT to Test Code Generation
Learning objectives
AI is becoming a core part of modern software engineering workflows. This workshop equips testers with practical knowledge and hands-on skills for using tools such as ChatGPT and AI-powered IDEs (for example, Cursor) in daily quality engineering work.
After completing the workshop, participants will be able to:
- understand how large language models (LLMs) like ChatGPT work,
- create effective prompts that improve testing productivity,
- use AI responsibly when generating and reviewing automated test code.
Scope
- LLM fundamentals and practical understanding of ChatGPT
- Key concepts: AI, NLP, LLMs, prompts
- Model mechanics: tokenization, embeddings, attention mechanism
- Transformer architecture and model training basics
- Prompt engineering methods and best practices
- role prompting ("act as")
- few-shot prompting
- chain-of-thought reasoning
- challenge/critique prompts
- language strategy for prompting
- Testing-focused AI applications
- test data generation
- test case generation
- pair-testing with AI
- using AI to accelerate learning and analysis
- Practical exercise: generating CI setup (GitHub Actions) with AI
- AI IDE workflows (Cursor and similar)
- AI-assisted coding and refactoring for tests
- semantic search and working with technical documentation
- End-to-end workshop labs
- building automated tests with Playwright and JavaScript/TypeScript
- evaluating what to delegate to AI versus what to keep manual
- Risk and quality controls
- common AI failure modes and hallucinations
- safe usage patterns in test engineering
- Next-step learning paths, including API-level AI usage
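For the CI exercise, a minimal GitHub Actions workflow that runs Playwright tests might look like the sketch below. The file path, workflow name, Node version, and the assumption that tests run via `npx playwright test` are placeholders to adapt to your own project.

```yaml
# .github/workflows/tests.yml — illustrative sketch, adjust to your project
name: e2e-tests
on: [push, pull_request]

jobs:
  playwright:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Installs browser binaries and their OS dependencies on the runner.
      - run: npx playwright install --with-deps
      - run: npx playwright test
```

A workflow like this is also a good target for the AI-generation exercise: ask the model for a draft, then review each step against the official actions documentation before committing it.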
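The role-prompting and few-shot techniques listed under prompt engineering can be sketched as a small prompt builder. This is an illustrative sketch only: the function name, the example feature, and the sample test case are assumptions, not workshop materials.

```typescript
// Minimal sketch of "act as" role prompting combined with few-shot examples
// for test case generation. All names and content here are illustrative.

interface FewShotExample {
  input: string;   // a feature description
  output: string;  // a test case the model should imitate
}

function buildPrompt(role: string, examples: FewShotExample[], task: string): string {
  // Render each example as an input/output pair the model can pattern-match.
  const shots = examples
    .map(e => `Feature: ${e.input}\nTest case: ${e.output}`)
    .join("\n\n");
  // Role instruction first, then the examples, then the real task.
  return `Act as ${role}.\n\n${shots}\n\nFeature: ${task}\nTest case:`;
}

const prompt = buildPrompt(
  "a senior QA engineer",
  [{ input: "login form", output: "reject empty password with an inline error" }],
  "password reset flow"
);

console.log(prompt);
```

The trailing `Test case:` cue nudges the model to complete the pattern established by the examples rather than answer in free form.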
Preparation
Who should attend?
The workshop is intended for testers and QA engineers who are comfortable with programming fundamentals (preferably in JavaScript).
What to prepare
Participants should bring a laptop prepared according to trainer instructions provided before the workshop.
Teaching methods
The training is predominantly hands-on workshop work supported by short theory modules; participants learn through practical exercises and guided implementation.
Training materials
- workshop presentation,
- ready-to-run code examples (organized in branches/modules),
- working notes and reference links,
- additional guidance materials used during labs.
Benefits
- Strong practical understanding of LLMs in a testing context.
- Ability to design prompts that improve output quality and usefulness.
- Better use of AI IDEs for test development and maintenance.
- Hands-on experience generating and validating automated tests with AI support.
- Clear understanding of AI risks and how to apply safe team practices.
- A practical workflow that can be reused in real QA projects.