So, what does a standardized approach to performance engineering look like? Where does an enterprise start? What should it consider? Here are seven considerations for a successful implementation.
1. Settle on a Platform That Covers All the Bases
To adopt a single, standardized performance engineering approach, an enterprise should first standardize on a single performance testing platform that’s designed from the get-go to support the full gamut of enterprise testing requirements.
Different teams and methodologies
Enterprises today employ a melange of methodologies that are carried out by centralized teams of experts (internal and external), autonomous development teams or a combination of both. A standardized platform must work equally well for everybody.
Different types of applications and technologies
The platform must also be able to test the full range of applications, from monolithic core systems and enterprise-grade packaged applications like SAP, Oracle, Citrix, Pega, Salesforce, Guidewire and Finacle to dynamic microservices-based applications. Technology coverage should be similarly broad, from the latest frameworks to “older” technologies. Enterprises must be able to use the same solution to test the performance of all their apps end-to-end as well as to test individual APIs at the component level. A standardized platform must work equally well for everything.
Different deployment options
The platform should not tether the enterprise to a single deployment option. Virtually every organization’s environment is some combination of on-premises, private cloud and public cloud. As enterprises increasingly move their applications to the cloud (or from private to public cloud, or back again), they need a solution that can test performance through complex migrations — e.g., moving SAP to an S/4HANA implementation.
2. Make Things Easy for Non-Experts
For different teams to use the same performance testing approach for their own specific needs, testing must be easy. Ease of use is what enables widespread adoption among teams who are not performance experts. Testing tools should have a short learning curve and not require specialized expertise. Look for tools that don’t demand deep coding skills; low-code and no-code approaches that leverage intuitive drag-and-drop, point-and-click functionality are best.
Testing should be flexible enough to adapt to the way testers (whether autonomous or centralized) work, not the other way around. Specifically, in addition to performing testing through a codeless GUI, the platform should enable DevOps teams to design and run tests as code within the command line interface (CLI) or their day-to-day IDE.
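To make the tests-as-code idea concrete, here is a minimal sketch of what a scripted load test can look like. It deliberately uses only the Python standard library and a simulated request function, since every platform has its own API; all names and numbers are illustrative, not any vendor’s actual interface.

```python
import concurrent.futures
import random
import statistics
import time

def simulated_request() -> float:
    """Stand-in for one HTTP request; returns latency in ms.
    A real test-as-code script would call the system under test here."""
    latency = random.uniform(20, 120)  # pretend network round trip
    time.sleep(latency / 1000)
    return latency

def run_load_test(virtual_users: int, requests_per_user: int) -> dict:
    """Run concurrent virtual users and summarize the latency samples."""
    def user_session() -> list:
        return [simulated_request() for _ in range(requests_per_user)]

    with concurrent.futures.ThreadPoolExecutor(max_workers=virtual_users) as pool:
        sessions = pool.map(lambda _: user_session(), range(virtual_users))
        samples = [latency for session in sessions for latency in session]

    return {
        "requests": len(samples),
        "mean_ms": statistics.mean(samples),
        "p95_ms": statistics.quantiles(samples, n=20)[-1],  # 95th percentile
    }

if __name__ == "__main__":
    print(run_load_test(virtual_users=10, requests_per_user=5))
```

Because it is just a script, a test like this can be versioned alongside the application code and launched from the CLI or the IDE like any other program.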
3. Test Fast to Release Fast
How quickly tests can be run is directly related to how easy the testing tool is to use; tests take longer with tools that are hard to learn. What distinguishes a fast testing tool from a slow one is test script design and maintenance, test resource reservation, and test results analysis and reporting, since actually executing the tests takes roughly the same time in any tool. The capabilities that speed up testing therefore include how much manual effort and specialized know-how go into designing test scripts, whether scripts must be rewritten from scratch every time the code changes, how easily functional tests can be reused as performance tests, and whether performance tests integrate natively into automated CI/CD pipelines.
Bear in mind that testing faster not only benefits DevOps but also up-levels the productivity of the entire organization. Centralized teams can get more done in less time, freeing them up for more “expert-level” work such as new strategic initiatives, deeper analysis, DevOps enablement, governance and more.
It follows that the easier the tools are to use, the faster an enterprise can scale a consistent performance engineering approach across the entire organization. Everybody should be able to get up to speed on new tools in just a couple of days, with an enterprise-wide deployment in weeks.
4. Promote Deep Collaboration
Enterprise-wide performance engineering is most effective and efficient when it’s a team sport. An approach that makes it easy for various teams to collaborate enables performance expertise to scale without adding more experts. This collaboration manifests itself in two ways:
Efficiency: Having developers, performance engineers, business analysts and others all working “on the same page” makes it easy to design tests with agreed-upon service level objectives (SLOs) that define measurable performance metrics, and it ensures that everyone is measuring performance consistently and getting apples-to-apples results. That is far less likely when many different teams all use many different tools. With consistent reporting, root cause analysis and trend reporting are easier across the board.
Effectiveness: Performance engineering experts take on more of an enabler role. Instead of assuming responsibility for all testing operations themselves, they create the building blocks that allow non-expert autonomous teams to test at the pace of development. They can structure the test environment, implement quality-control guardrails, set up automated CI pipelines and embed best practices into performance engineering processes that empower decentralized teams.
5. Automate Performance Testing in CI Pipelines
Integrating automated performance tests into CI pipelines — continuous performance testing — is the holy grail of scaling performance engineering for autonomous DevOps teams. Given today’s super-fast dev cycles, it’s not just impractical but impossible for performance engineers to manually build, run and analyze performance tests for hundreds of code pushes every day.
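As a sketch of what such a pipeline gate can look like: the script below checks a run’s results against agreed SLOs and reports any violations, so the CI stage can fail automatically. The SLO names and thresholds are hypothetical, and a real pipeline would pull the metrics from the test platform’s results output rather than hard-code them.

```python
import statistics

# Hypothetical SLOs agreed with the team; names and numbers are illustrative.
SLOS = {"p95_ms": 250.0, "error_rate": 0.01}

def evaluate_run(latencies_ms: list, errors: int, total: int) -> list:
    """Compare one test run against the SLOs; return a list of violations."""
    p95 = statistics.quantiles(latencies_ms, n=20)[-1]
    error_rate = errors / total
    violations = []
    if p95 > SLOS["p95_ms"]:
        violations.append(f"p95 {p95:.0f} ms exceeds {SLOS['p95_ms']:.0f} ms")
    if error_rate > SLOS["error_rate"]:
        violations.append(f"error rate {error_rate:.2%} exceeds {SLOS['error_rate']:.2%}")
    return violations

if __name__ == "__main__":
    # In a pipeline, these numbers would come from the run's results file.
    violations = evaluate_run([120, 180, 210, 230, 260, 300], errors=1, total=600)
    for v in violations:
        print("SLO violation:", v)
    # In a real CI step: sys.exit(1 if violations else 0),
    # so a non-zero exit code fails the stage and blocks the release.
```

With a gate like this wired into the pipeline, every code push gets a pass/fail performance verdict without a performance engineer in the loop.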
6. Leverage All the Tools in the Tech Stack
Look for opportunities to integrate best-of-breed solutions across the toolchain so that they act as force multipliers and “up-level” one another.
7. Think Cloud-Native
No matter where they are on their cloud journey, enterprises have to ensure that their approach to performance engineering is cloud-ready. Not only are applications moving to the cloud, but so are the software development lifecycle, process and tools. An enterprise’s approach to performance engineering should anticipate several complexities:
Different migration scenarios: Whether the approach to SaaS is lift and shift, replatforming or refactoring the architecture of packaged or on-premises apps, organizations need to be able to baseline performance before and after to ensure KPIs are met.
Multi-cloud strategy: Performance engineering tools should be vendor-agnostic so that performance and scalability can be measured across different cloud providers (AWS, Google, Azure). If one cloud provider has a security breach, cost spike or service issue, organizations need their applications built, and their performance measured, so that they can immediately shift from one provider to another without a change in user experience.
Cloud testing complexity: Scalability isn’t free. Enterprises should adopt an approach that ensures scale doesn’t camouflage non-performant code, and make use of dynamic infrastructure to spin up (and down) testing resources as needed.
Cloud technology complexity: The approach needs to work with every layer of the cloud (IaaS, PaaS, SaaS) and across all layers of the cloud software development lifecycle: cloud CI tools like AWS CodeBuild, Google Cloud Build and Microsoft Azure DevOps, and cloud orchestrators like OpenShift, Kubernetes, EKS, GKE and AKS. Performance testing needs to scale with the cloud-based software development model.
Enterprises expect and demand a high level of confidence in the quality of their software releases. The expertise to realize this confidence has traditionally rested with only a few specialists. But there are not enough experts to keep up with the pace of development as enterprises transition to faster, more frequent releases.
What’s needed is an approach that scales performance engineering across the entire organization.
A successful approach standardizes performance engineering — especially performance testing — among different teams with different backgrounds and skill sets, for different kinds of applications. A standardized approach should be easy to use for performance experts and non-experts alike, from CoEs and other centralized teams to autonomous DevOps teams. The same approach accommodates complex end-to-end testing of enterprise-grade applications like SAP, Oracle, Salesforce and Citrix-virtualized apps as well as API testing in microservices-based architectures.
Having both performance engineers and DevOps teams standardize on a performance engineering approach that works equally well for both gives enterprises the predictability, validation and assurance they’re used to, but at the volume and velocity of an automated Agile environment: quality at scale.