Nitin Unni, Associate Vice President – Engineering, GlobalLogic

Nitin Unni is Associate Vice President – Engineering at GlobalLogic. With over two decades of experience in engineering and the IT sector, Nitin spearheads the QA practice at GlobalLogic. He works at the intersection of development and production, applying the latest QA technologies and tools to large DX programs. He specializes in digital user experience, ensuring that company and customer quality standards are met on delivered projects. Prior to joining GlobalLogic, he was associated with Wipro Technologies for over 10 years as Practice Head for Connected Devices, Compute, and Storage. He also co-founded LightBulb Innovations Pvt. Limited, where he was responsible for bringing technology innovations to renewable resources in the agriculture domain.


In the world of software development and testing, automated tools that help scale the software development life cycle (SDLC) are essential. Agile development is far more than a simple set of processes; it is a way of working that aims to facilitate better software delivery. With the growing adoption of continuous integration and delivery, the pressure to maintain software quality has certainly increased, and more and more businesses are realizing that a sustainable strategy for quality is a key success factor.

During the testing process, while best practices and frameworks help us answer the question “Are we doing the testing right?”, it is analytics and data that help us answer the question “Are we doing the right testing?”. Throughout the entire SDLC, a plethora of data (logs, wire captures, test outputs, reports, etc.) can be extracted, mined, classified, and analyzed to yield valuable insights that help us answer these questions better. Applying various methodologies and tools to this data analysis is what we term a “Shift-Deep” approach.

What is Shift-Deep testing and why should we care?

While the shift-left and shift-right paradigms in testing have existed for a considerable time, using data and analytics within the testing process to improve testing itself is something new. The approach has gained currency in the last 12 months and is increasingly known as “shift-deep testing”. Traditional techniques such as root cause analysis, the 5 whys, and FMEA have now been subsumed under this paradigm. At its core, shift-deep testing rests on three key tenets: accurate and continuous data capture during the software test process; organizing and structuring the captured data for effective analysis; and extracting insights from that data (using manual and/or AI/ML-driven algorithms) to further optimize, refine, and improve the test life cycle.

So why should we care about this? The application of shift-deep techniques not only significantly improves overall software delivery but is also emerging as a strong differentiator in industry use cases such as predictive maintenance, equipment reliability, and TCO optimization.

Stages of Shift-Deep testing

Stage 1: Accurate and continuous data capture

Accurate and continuous data capture is imperative for the entire shift-deep approach. Fortunately, in today’s agile and tech-enabled era, this is largely achievable: efficient, mature tools exist to instrument, log, store, and process very large volumes of data. Gone are the days when every new software project started by creating a logging engine from scratch; today, stacks like ELK (Elasticsearch, Logstash, Kibana) or EFK (Elasticsearch, Fluentd, Kibana) do the job. With the advent of the public cloud, access to cheap, fast storage is another huge enabler for this stage of shift-deep testing.
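As a minimal sketch of what continuous capture can look like in practice, the snippet below emits test events as one JSON object per line using only Python's standard library, a shape that shippers like Logstash or Fluentd can forward to Elasticsearch without custom parsing. The field names (`test_id`, `suite`, `duration_ms`, `verdict`) are illustrative assumptions, not a standard schema.

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line, ready for log-shipper ingestion."""
    def format(self, record):
        payload = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Attach structured test context if the caller supplied it via `extra=`.
        for key in ("test_id", "suite", "duration_ms", "verdict"):
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("shift_deep")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Each test run emits one machine-readable line instead of free-form text.
logger.info("login test finished",
            extra={"test_id": "TC-1042", "suite": "auth",
                   "duration_ms": 830, "verdict": "pass"})
```

Because every record carries the same fields, downstream queries ("all failures in suite X slower than Y ms") become trivial filters rather than regex archaeology.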

Stage 2: Organizing and structuring the captured data

Once data is captured, the next stage is to organize and structure it. In some cases, the tooling stack itself supports a good level of analysis and customization. This stage is crucial because the better organized and structured your test ecosystem data is, the easier and more effective the analysis and interpretation for gauging product quality and finding defects.
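To make the idea concrete, here is a small sketch of turning raw captured log lines into a queryable structure: JSON test records are grouped by suite and rolled up into verdict counts and total runtime. The record fields are hypothetical and carried over from the capture example above only by convention; any consistent schema would do.

```python
import json
from collections import defaultdict

def summarize_runs(log_lines):
    """Group JSON test-log lines by suite and aggregate verdicts and runtime,
    turning raw captured data into a structure analysis tools can query directly."""
    summary = defaultdict(lambda: {"pass": 0, "fail": 0, "total_ms": 0})
    for line in log_lines:
        record = json.loads(line)
        suite = summary[record["suite"]]
        suite[record["verdict"]] += 1          # count pass/fail per suite
        suite["total_ms"] += record.get("duration_ms", 0)
    return dict(summary)

# Three raw log lines as a stand-in for a real capture stream.
logs = [
    '{"suite": "auth", "verdict": "pass", "duration_ms": 830}',
    '{"suite": "auth", "verdict": "fail", "duration_ms": 1210}',
    '{"suite": "checkout", "verdict": "pass", "duration_ms": 450}',
]
print(summarize_runs(logs))
```

In a real ecosystem the same roll-up would typically live as an Elasticsearch aggregation or dashboard query, but the principle is identical: structure first, so that questions become cheap.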

Stage 3: Extracting insights from the data

The acme of shift-deep is reached when the organized data starts yielding consistent and coherent insights, which can transform the complete testing process and significantly increase the value of testing. In Stage 3, insights are extracted from the organized data first in the form of manual SOPs/steps and then using AI/ML-driven algorithms. The most exciting area, however, is the use of AI and ML algorithms and technology for gathering insights.
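One simple insight-extraction step, sitting between a manual SOP and a full ML pipeline, is grouping similar failure messages so testers triage clusters rather than individual logs. The sketch below uses a plain string-similarity heuristic from the standard library; the threshold and messages are illustrative assumptions, and a production system would likely use proper text embeddings or clustering instead.

```python
import difflib

def classify_failures(failure_messages, threshold=0.7):
    """Greedily cluster failure messages whose text similarity exceeds a
    threshold -- a simple precursor to ML-driven failure triage."""
    clusters = []
    for msg in failure_messages:
        for cluster in clusters:
            # Compare against the cluster's first (representative) message.
            if difflib.SequenceMatcher(None, msg, cluster[0]).ratio() >= threshold:
                cluster.append(msg)
                break
        else:
            clusters.append([msg])  # no близкое match found: start a new cluster
    return clusters

failures = [
    "TimeoutError: login page did not load within 30s",
    "TimeoutError: login page did not load within 45s",
    "AssertionError: expected cart total 19.99, got 0.00",
]
clusters = classify_failures(failures)
# The two timeout failures land in one cluster; the assertion failure in another.
```

Even this naive grouping surfaces the kind of insight Stage 3 is after: two "different" failures are really one underlying timeout problem.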

Several further areas of exploration are underway in Stage 3, including:

  • Differential testing — comparing application versions across builds, classifying the differences, and learning from feedback on the classification.
  • Visual testing — leveraging image-based learning and screen comparisons to test the look and feel of an application.
  • Declarative testing — specifying the intent of a test in a natural or domain-specific language, and having the system figure out how to carry out the test.
  • Self-healing automation — auto-correcting element selection in tests when the UI changes.
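The differential-testing idea in the first bullet can be sketched in miniature: capture the same observable outputs from two builds, then classify each difference rather than just flagging it. The endpoints and classification labels below are hypothetical; a real system would feed these classifications back into a learning loop.

```python
def diff_builds(baseline, candidate):
    """Compare the same observed outputs from two builds and classify each
    difference -- a toy version of differential testing across builds."""
    differences = []
    for endpoint, old in baseline.items():
        new = candidate.get(endpoint)
        if new is None:
            differences.append((endpoint, "missing_in_candidate"))
        elif new != old:
            kind = "type_change" if type(new) is not type(old) else "value_change"
            differences.append((endpoint, kind))
    return differences

# Hypothetical responses captured from two builds of the same service.
baseline  = {"/health": "ok", "/version": "1.4.2", "/items": 120}
candidate = {"/health": "ok", "/version": "1.5.0", "/items": "120"}

print(diff_builds(baseline, candidate))
# Flags a value change on /version and a type change on /items.
```

Classifying differences (expected version bump vs. a suspicious type change) is exactly where the learning-from-feedback loop mentioned above would plug in.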

The road ahead for Shift-Deep testing

Testing at every stage of development – from early prototyping to ensuring your software is of the highest quality before launch – is standard procedure in the software industry. To quote an oft-used cliché, “data is the new oil”; as data and its value are brought into the testing space, shift-deep testing will only grow and become more sophisticated. The road ahead for shift-deep testing is bright.
