Throughout my career as an SDET, I’ve encountered a wide range of industry best practices—many widely accepted and promoted as the standard for effective testing and automation. In the early stages of my journey, I made a conscious effort to follow these practices closely, assuming they would naturally lead to better results and smoother workflows.
However, as I gained more hands-on experience across different teams and projects, I began to notice that not all best practices deliver equal value in every context. Some introduced unnecessary complexity, others created more work than benefit, and a few even hindered progress when applied without careful consideration.
In this post, I’d like to highlight a few common SDET best practices that, in my experience, can become counterproductive if not adapted to the specific needs and dynamics of your project.
100% Test Automation Coverage
Chasing 100% automated test coverage might seem like the gold standard, but in reality, it often leads to diminishing returns, bloated maintenance, and a false sense of confidence. In my experience, not all code is worth testing, especially boilerplate or low-risk logic where the cost of automation outweighs the value. Instead of obsessing over the coverage number, I’ve found it far more effective to focus on testing the parts of the system that truly matter—those that are complex, critical, or prone to change. Coverage should guide us, not control us. It’s about smart testing, not just more testing.
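To make this concrete, here’s a minimal sketch of what “smart testing, not just more testing” can look like in practice. The function names and numbers are illustrative; the `# pragma: no cover` comment is the exclusion marker used by coverage.py, so the trivial formatting helper doesn’t drag the metric around while the branch-heavy logic gets focused tests.

```python
def tier_discount(price: float, tier: str) -> float:
    """Branch-heavy pricing logic: high-value territory for tests."""
    if tier == "gold":
        return price * 0.8
    if tier == "silver":
        return price * 0.9
    return price


def describe(price: float) -> str:  # pragma: no cover - trivial formatting, low risk
    return f"price={price:.2f}"


# Focused tests target the risky branches, not the boilerplate.
assert tier_discount(100.0, "gold") == 80.0
assert tier_discount(100.0, "silver") == 90.0
assert tier_discount(100.0, "bronze") == 100.0
```

The point isn’t that formatting code can never break—it’s that the cost of covering it usually outweighs the value, and excluding it keeps the coverage number honest about what actually matters.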
Over-Engineering Test Frameworks
I’ve seen teams fall into the trap of over-engineering their test frameworks right from the start—building complex abstractions, layering utilities, and future-proofing for scenarios that may never happen. While the intention is good, it often slows down test writing, adds unnecessary complexity, and creates a steep learning curve for new contributors. Early on, what really matters is getting fast, reliable tests in place and proving value. A lightweight, pragmatic approach gives you room to adapt as the project evolves. Test frameworks should grow with the needs of the product, not outpace them.
Overloading the Top of the Testing Pyramid
One pattern I’ve seen time and again in testing is how easily we drift away from the testing pyramid. It’s tempting to focus heavily on UI and API tests because they feel more “real” or visible, but they’re also more brittle, slower to run, and expensive to maintain. In reality, most of our confidence should come from fast, reliable tests lower down the pyramid—especially unit tests. A practical approach I’ve adopted is to encourage developers to take on more of the integration-level testing themselves. That gives us, as testers, the chance to review, identify gaps, and add focused tests where needed. It creates a healthier test suite and a more collaborative process without overloading any one layer.
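As a small illustration of pushing confidence down the pyramid (names here are purely illustrative): once the core logic is extracted into a pure function, it can be unit-tested exhaustively in milliseconds, and the UI or API layer above it only needs a thin smoke check rather than a battery of slow, brittle end-to-end tests.

```python
def normalise_email(raw: str) -> str:
    """Pure logic extracted from the signup flow: trivial to unit-test."""
    return raw.strip().lower()


# Fast unit tests: no browser, no network, no test environment to babysit.
assert normalise_email("  Alice@Example.COM ") == "alice@example.com"
assert normalise_email("bob@test.io") == "bob@test.io"
assert normalise_email("") == ""
```

A single UI test can then confirm the field is wired up at all, instead of re-proving every edge case through the slowest layer of the stack.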
The Testing Blind Spot: Ignoring Test Data
One area I’ve seen consistently overlooked in testing is proper test data management. As testers, we often focus on writing and executing functional tests, but give far less thought to where the data comes from, how reliable it is, or whether it accurately reflects real-world scenarios. The result? Flaky tests, false positives, and hours wasted debugging issues that aren’t actually bugs. I’ve learned that investing time upfront in creating stable, reusable, and realistic test data pays off massively in the long run. Whether it’s through well-structured test fixtures, data seeding strategies, or isolated environments, good data practices are just as important as the tests themselves—but they rarely get the attention they deserve.
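One pattern I’ve found useful here is a test-data builder: a single factory owns realistic defaults, and each test overrides only the fields it actually cares about. This is a minimal sketch with assumed names, not a prescription for any particular framework.

```python
from dataclasses import dataclass
import uuid


@dataclass(frozen=True)
class User:
    id: str
    email: str
    active: bool = True


def make_user(**overrides) -> User:
    """Factory with realistic, stable defaults; tests override only what matters."""
    defaults = dict(id=str(uuid.uuid4()), email="user@example.com", active=True)
    defaults.update(overrides)
    return User(**defaults)


# The test states its intent (an inactive user) and nothing else.
inactive = make_user(active=False)
assert inactive.active is False
assert "@" in inactive.email
```

Because the defaults live in one place, fixing a data problem—or making the data more realistic—happens once, instead of across dozens of hand-written dictionaries scattered through the suite.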
Overlooking Non-Functional Testing
As testers, it’s easy to focus heavily on functional testing—making sure the features work as expected—and overlook non-functional aspects like performance, security, and usability. Even if these areas aren’t always our direct responsibility, ignoring them entirely can lead to costly surprises down the road. I’ve found it valuable to stay aware of non-functional requirements and collaborate closely with the teams handling them. Being oblivious to these factors means missing critical risks that impact user experience and system stability. Non-functional testing might not always be in our job description, but it should never be off our radar.
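Staying aware doesn’t have to mean owning a full performance-testing stack. A lightweight guardrail like the sketch below—with an illustrative function and a deliberately generous time budget—can catch gross performance regressions long before a dedicated team runs a proper load test.

```python
import time


def search(items: list[str], needle: str) -> list[str]:
    """Illustrative stand-in for a real lookup that could regress."""
    return [item for item in items if needle in item]


records = [f"record-{n}" for n in range(10_000)]

start = time.perf_counter()
result = search(records, "record-99")
elapsed = time.perf_counter() - start

assert "record-99" in result
assert elapsed < 0.5  # generous budget so the check stays stable across machines
```

This is a crude proxy, not a benchmark—but it keeps a performance expectation visible in the suite instead of leaving the whole topic off our radar.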
The Late Arrival of Testers in the Development Process
We all know that getting testers involved early in the development process is a best practice, but in reality, it doesn’t always happen. I’ve seen firsthand how testing gets brought in late—sometimes only after features are built or even when bugs start piling up. Waiting to be invited often means missed opportunities to influence requirements, design, and testability. From my experience, the best way to change this is by being proactive: reaching out to developers and product teams early, asking questions, offering to review stories or designs, and making ourselves visible as partners rather than gatekeepers. Early involvement doesn’t just improve quality—it builds trust and helps catch issues before they become costly problems.
Blindly Following Industry Trends
As testers, it’s easy to get caught up in the latest trends and buzzwords—whether it’s chasing 100% test coverage, building overly complex frameworks from day one, or piling on UI and API tests. I’ve realised that a lot of the challenges we face, like ignoring test data management, skipping non-functional testing, or getting involved too late, often stem from this mindset of following what’s popular rather than what’s practical. Instead of questioning whether these trends fit our context, we sometimes adopt them wholesale, which can lead to wasted effort, brittle tests, and missed opportunities. The key is to stay thoughtful and adapt best practices to what truly works for our team and product, not just what’s fashionable.
In conclusion, while SDET best practices offer valuable frameworks and guidance, blindly following them without considering the unique context of your project can lead to inefficient or even counterproductive results. It’s essential for testing professionals to critically assess these practices and tailor them to fit their team’s goals, product needs, and realities. By doing so, we not only avoid common pitfalls but also cultivate a culture of continuous learning, thoughtful adaptation, and innovation—key ingredients for delivering high-quality software in an ever-evolving landscape.