Nuances of Quality Engineering for emerging technologies


Increasingly digital consumption of products and services, along with the shift to remote work and education, has significantly affected how software is developed and tested. According to Deloitte’s “2023 Quality Engineering Trends” report, “new technologies” is still the top factor driving testing spend.

[Figure: Factors driving testing spend]

Artificial Intelligence (AI) is arguably the “hottest” of these emerging technologies, and we have already discussed several aspects of it in our AI in software testing articles. So, in this article we wanted to spend more time on the Internet of Things (IoT) as well as augmented and virtual reality (AR and VR). Their popularity is highlighted in Perforce’s “2024 State of Continuous Testing” report:

[Figure: New testing approaches]

While the core goals of quality engineering (QE), such as defect prevention, balancing speed and quality, and satisfying functional and non-functional requirements, still apply to these technologies, we wanted to explore the nuances they introduce and their impact on QE efforts. We hope QE professionals find our insights informative for 2024 and beyond as they continue to innovate.


Impact of IoT and AR/VR on QE

A wide range of industries, from automotive and medical technology to consumer electronics and entertainment, have already embraced intelligent products with regular software updates as part of their business strategies. That means compounding complexity: more integration issues, higher availability requirements, stricter safety compliance, and post-release lifecycle management challenges.

Let’s put ourselves in the shoes of a QE team that has been involved in “typical” software development projects before (so they are very familiar with the “basics”) and is now asked to evaluate whether an IoT or AR/VR product is ready for release. What are the “traps” they should be aware of?

At a high level, we can think about three affected QE areas:

1. Risk factors;
2. Testing methods;
3. Tooling considerations.


Let’s analyze each of these, focusing on less obvious tips.

And one note before we dive in: these technologies can be not just a target for testing but also a supporting tool (e.g. testing AR vs using AR for testing manufactured products). In this article, our primary focus is on the first scenario.

Risk factors

Risk Group 1 - “Too much” variety

For these emerging technologies, the level of possible personalization is one of the main selling points. That sounds exciting, but from the QE perspective it also leads to insufficient cross-product standardization and “overwhelming” creativity on the part of users. We wanted to tackle this risk first because it affects the other areas.

Some of the key components of this risk:

1. Integration and compatibility

One critical point to keep in mind: the QE approach to these emerging technologies should put even more emphasis on testing the synergy between hardware and software rather than evaluating each “half” individually.

For the Internet of Things sphere, compatibility with different operating systems and their versions as well as different device generations is crucial. That also means paying special attention to data translation and mapping mechanisms - ensuring your product is interoperable to enable easier data exchange between non-standardized devices from various manufacturers that participate in the same IoT network.

For AR/VR, there is dependency on specialized hardware of different types, commonly headsets, smartphones, and glasses. We will need to understand whether our product aims to provide a more powerful, immersive experience with a computer dependency (similar to devices like Oculus Rift/Quest/Go, HTC Vive, Valve Index) or a more mobile one, relying on a smartphone (e.g. Samsung Gear, Google Daydream). Then, our QE plan will need to account for multiple possible output streams, testing the actual AR/VR experience on both the device and the desktop environments (i.e. the initial projection along with the shareable “recording”).

Do not forget about the back-end systems that aggregate, store, and analyze data transmitted by our product. Usually, this integration aspect depends on cloud platforms such as AWS, Azure, or Google Cloud Platform, but if our product relies on local server(s), we would need to account for that in our QE plan.
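To make this compatibility surface explicit early on, some teams encode it as a parametrized test matrix. Below is a minimal sketch, assuming pytest as the test runner; the dimensions and the FakeDeviceSession harness are illustrative placeholders rather than a reference implementation:

```python
import pytest

# Hypothetical compatibility dimensions - replace with your actual product matrix.
OPERATING_SYSTEMS = ["Android 13", "Android 14", "iOS 17"]
DEVICE_GENERATIONS = ["gen2", "gen3"]
BACKENDS = ["aws-iot-core", "local-server"]


class FakeDeviceSession:
    """Stand-in for a real lab/emulator harness; replace with your own integration."""

    def __init__(self, os_version, generation, backend):
        self.os_version, self.generation, self.backend = os_version, generation, backend

    def pair(self):
        return True  # a real harness would drive the actual pairing flow

    def sync_telemetry(self, timeout_s):
        return True  # a real harness would wait for telemetry to reach the backend


@pytest.mark.parametrize("backend", BACKENDS)
@pytest.mark.parametrize("device_gen", DEVICE_GENERATIONS)
@pytest.mark.parametrize("os_version", OPERATING_SYSTEMS)
def test_device_pairs_and_syncs(os_version, device_gen, backend):
    session = FakeDeviceSession(os_version, device_gen, backend)
    assert session.pair(), f"Pairing failed on {os_version}/{device_gen}/{backend}"
    assert session.sync_telemetry(timeout_s=10), "Telemetry sync did not complete in time"
```

Even this toy version already produces 12 combinations, which is why combinatorial prioritization becomes important (more on that in the Connectivity section below).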

2. Connectivity

Connectivity can be thought of as a subset of integration, but we wanted to highlight it separately because it is one of the most challenging layers to test due to the nearly infinite variety of settings and protocols. As you prepare to explore this aspect, you have to analyze the following possibilities:

  • What are the overall connectivity categories our product supports (wired, wireless, or both)?
  • For wired, what are the types of ports (USB/CAN/etc.) and cables we should consider?
  • For wireless (where most of the fun is):
    • Will it be WiFi, satellite, cellular (4G/5G), and/or Bluetooth?
    • What will the data transmission protocols be (e.g. ZigBee, LoRa)? What about the application layer protocols (e.g. MQTT, CoAP)?
    • Will our IoT products be managed through endpoints available on mobile devices or through dedicated IoT routers?


Then look into the specific risk factors and boundaries of the selected combinations (a small enumeration sketch follows this list). For instance:

  • Direct communication between devices or through intermediary nodes;
  • Signal latency and data rate;
  • Scale of the network/device density;
  • Power consumption;
  • Maximum supported distance;
  • Signal bands (unlicensed or licensed, sub-GHz or GHz range, etc.).
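To keep that variety manageable, it helps to enumerate the candidate combinations programmatically, prune the invalid ones, and then prioritize what remains (e.g. via risk-based or pairwise selection in a tool like Test Case Designer). Here is a minimal Python sketch; the parameter values and the pruning constraint are illustrative assumptions, not recommendations:

```python
from itertools import product

# Illustrative connectivity parameters - not an exhaustive or authoritative list.
transports = ["WiFi", "Bluetooth", "Cellular 4G", "Cellular 5G"]
app_protocols = ["MQTT", "CoAP"]
topologies = ["direct", "intermediary-node"]
management = ["mobile-endpoint", "dedicated-router"]


def is_valid(combo):
    """Prune combinations that do not apply to the (hypothetical) product under test."""
    transport, protocol, topology, mgmt = combo
    # Example constraint: assume this product only exercises CoAP over WiFi.
    if protocol == "CoAP" and transport != "WiFi":
        return False
    return True


all_combos = product(transports, app_protocols, topologies, management)
test_matrix = [combo for combo in all_combos if is_valid(combo)]

for transport, protocol, topology, mgmt in test_matrix:
    print(f"Test config: {transport} / {protocol} / {topology} / {mgmt}")
print(f"{len(test_matrix)} configurations left to prioritize (e.g. by risk or pairwise coverage)")
```

The point is not the specific values but the habit: make the connectivity matrix an explicit, reviewable artifact instead of an implicit assumption inside individual test cases.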

 

3. Environment

Whereas Integration and Connectivity cover the technologies surrounding our product, this component includes other external factors we should consider for our QE efforts. These are things you usually wouldn't worry about as much when testing a more traditional piece of software, e.g.:

  • Indoor or outdoor experience
  • Interaction/viewing angle between the user and the device, light reflections
  • Presence or absence of obstacles
  • Varied shapes and textures (e.g. the ability to still detect and interpret text in a blurred image), as well as contrast between surfaces/objects and the user’s skin tone
  • User body types
  • Multiple users in the same real/virtual space

 

Risk Group 2 - More Complex Non-Functional Requirements 

  1. Security and privacy

The risk of identity theft and security violations is even greater given the variety of authentication methods as well as the extent of digital and biometric data collection. Such complexity necessitates dedicated security and privacy guidelines, e.g. the Internet of Things Cybersecurity Improvement Act of 2020. In the AR/VR space, some guidance about the data gathered through immersive experiences can be found in broader laws and regulations such as COPPA, FERPA, and HIPAA. In addition to validating the robust security measures we will discuss below, the product design and QE effort should also address unambiguous consent mechanisms and data anonymization to protect user privacy.

One security aspect that may not be top of mind for teams with a software-only background is the physical hardening of devices to limit access. The criticality of implementing both physical and digital security controls for IoT and AR/VR devices is hard to overstate.

Another aspect is the direct consequence of Integration and Connectivity: the threat may come as a result of a “betrayal” by the IoT counterparts. Security loopholes in such complex environments are often not transparent; one input in a certain module can make a completely different module vulnerable. If even one component slacks on cybersecurity measures (or a user decides not to change the default password on one of the devices because it does not seem “significant”), the whole network will be at risk.

In addition to testing both external and internal types of attacks, consider stronger default credentials and multi-factor authentication support. Users can also choose to apply network segmentation to our IoT devices, which isolates them from more sensitive data. Our QE plan would need to account for such a possibility and test that our devices can securely maintain that segmentation while getting enough resources for satisfactory performance.

Also, we will need to understand all the layers where security risks exist and the nuances of each: the encryption within the software app, the protocol encryption the service uses to communicate between the product and, say, the user’s smartphone, as well as the encryption on the home Wi-Fi. How is the encryption of data at rest and in transit performed? Depending on the communication protocols we discussed, which network ports (e.g. the MQTT broker port, the CoAP port, the AMQP port) are open and under which conditions? Answers to these questions will help with more comprehensive test design for our QE strategy.
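To illustrate how some of those questions can become automated checks, here is a small sketch using only the Python standard library. The device address, the port expectations, and the “plaintext ports should be closed” policy are assumptions for demonstration, not guidance for any specific product (it also only probes TCP ports; UDP-based protocols like CoAP need a different approach):

```python
import socket
import ssl

DEVICE_HOST = "192.168.1.50"  # illustrative address of the device/gateway under test

# Illustrative policy: legacy/plaintext TCP ports should be closed, TLS ports reachable.
EXPECTED_CLOSED = {23: "Telnet", 1883: "MQTT (plaintext)"}
EXPECTED_TLS = {8883: "MQTT over TLS"}


def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def negotiates_tls(host, port, timeout=2.0):
    """Return True if the port completes a TLS handshake (certificate trust not evaluated here)."""
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE  # probing the handshake only, not trust
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() is not None
    except (OSError, ssl.SSLError):
        return False


for port, name in EXPECTED_CLOSED.items():
    assert not tcp_port_open(DEVICE_HOST, port), f"{name} port {port} should not be exposed"

for port, name in EXPECTED_TLS.items():
    assert tcp_port_open(DEVICE_HOST, port), f"{name} port {port} should be reachable"
    assert negotiates_tls(DEVICE_HOST, port), f"{name} port {port} did not negotiate TLS"
```

Checks like these do not replace proper penetration testing, but they make the “which ports, under which conditions” question cheap to re-ask on every build.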

Lastly, consider the negative impact of other emerging technologies. AI has plenty of helpful use cases, but unfortunately that is also true for hackers, who can use it to build tools that scale their attacks more easily and execute them faster. While the tactics and “footprints” of traditional IoT/AR/VR threats will look the same, the magnitude and customization of AI-powered attacks will make them increasingly hard to thoroughly test and prepare for.

  2. Usability, reliability and performance, accessibility

Interacting with VR and AR products can lead to serious health issues (e.g. motion sickness, headaches, eye strain, seizures, and collisions with real-world objects while in a virtual space), so our QE team should cover scenarios with a wide range of motions, environments, durations, distances, etc. In VR, haptic feedback is a significant aspect of the user experience, as its accuracy and responsiveness can greatly enhance the sense of immersion. Therefore, it is a vital part of our evaluation.

Overall performance plays a big role in IoT/AR/VR usability. Many of the key metrics will be familiar - loading speed, load tolerance, throughput efficiency, uptime, etc. - but the required targets are more demanding (a small threshold-check sketch follows this list), e.g.:

  • Latency is expected to be ultra-low to maintain persistent, real-time connection;
  • Rendering quality and the frame rate should be very high with minimal buffering;
  • Transition between reality and virtual/augmented space should be seamless;
  • Reliability, especially in medical and industrial applications, should adhere to the standards of “mission-critical systems” (e.g. 99.5%);
  • Machine vision and object detection/recognition should maintain performance across an extensive area.
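As a small illustration of turning such targets into executable checks rather than aspirations, here is a sketch with made-up thresholds and sample data; in a real QE setup, the samples would come from device telemetry or a profiling tool:

```python
import statistics

# Illustrative targets - actual thresholds come from your product requirements.
MAX_P95_MOTION_TO_PHOTON_MS = 20.0   # "ultra-low" latency budget for comfort
MIN_AVG_FPS = 90.0                   # target frame rate for smooth rendering
MIN_UPTIME_PERCENT = 99.5            # availability target mentioned above


def p95(samples):
    """95th percentile of a list of measurements."""
    return statistics.quantiles(samples, n=100)[94]


# Made-up sample data standing in for real measurements.
frame_times_ms = [10.4, 11.0, 10.8, 12.1, 10.9, 11.3, 10.7, 11.8, 10.6, 11.1]
latency_samples_ms = [14.2, 15.8, 13.9, 16.5, 14.7, 15.1, 14.0, 15.9, 14.4, 15.3]
uptime_percent = 99.72

avg_fps = 1000.0 / statistics.mean(frame_times_ms)

assert p95(latency_samples_ms) <= MAX_P95_MOTION_TO_PHOTON_MS, "Latency budget exceeded"
assert avg_fps >= MIN_AVG_FPS, f"Average FPS too low: {avg_fps:.1f}"
assert uptime_percent >= MIN_UPTIME_PERCENT, "Uptime below the availability target"
print(f"p95 latency: {p95(latency_samples_ms):.1f} ms, avg FPS: {avg_fps:.1f}")
```

Encoding the targets this way keeps the non-functional requirements visible in the pipeline instead of buried in a spreadsheet.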

For both usability and accessibility purposes, we should pay special attention to the diversity of user profiles and interfaces like touch-based mobile apps (considering text-to-speech, screen readers, imperfect grammar, vision-related limitations), voice-controlled systems (considering accents or speech defects, audio limitations), and/or gesture recognition with spatial tracking (considering disability-related motions). Remember to validate not only the object interaction but also menu navigation and other auxiliary functions.

All these functional and non-functional considerations further reiterate the importance of testing IoT/VR/AR in a more hands-on manner - with real devices and a wide range of users in realistic environments. Now that we have covered some of the key risk factors for these emerging technologies, how can we tackle them and improve our defect prevention?


Testing methods

In addition to “traditional” methods (risk-based test design, a mix of automated and exploratory testing, etc.), we wanted to highlight a couple of newer methods that are particularly relevant for risk mitigation when it comes to IoT and AR/VR: crowdsourced testing and chaos engineering.

Crowdsourced Testing

One of the most effective approaches to testing has always been a customer-first mindset. Crowdsourced testing facilitates that by leveraging higher flexibility and wider reach of a diverse group of testers from various locations. They can embark on “unfiltered” customer journeys and help assess how an IoT/AR/VR product performs in real-world usage scenarios.

Depending on the timeline and resources, we can consider three stages:
1) “dogfooding”, with a limited set of internal users;
2) beta testing, with a limited set of external testers;
3) broader crowdsourced initiative.

Here are a few considerations for crowdsourced testing:

  • Ensure you have transparent user consent and NDA agreements, and adhere to ethical data collection practices.

  • Leverage ideas from the “Risk factors” section to ensure you select the testers that support the diversity and coverage goals specific to your product.

  • Establish a clear system of frequent communication and carefully coordinate testing activities to avoid redundancy.

  • Share clear requirements and expectations, especially for more subjective metrics (e.g. latency can be clearly measured while immersiveness does not have true benchmarks).
    • If it does not already exist in requirements, create a map of your IoT/AR/VR network as an easily shareable artifact.
    • For AR testing, environmental setup often requires objects to be tagged with special markers prior to the session.
  • Combine different feedback methods (e.g. bug report templates, surveys, moderated user feedback sessions).
    • When possible, review testers’ conversations and body language during their sessions, gathering physiological data through wearable devices and/or observation. While testers may not recognize signs of discomfort themselves, additional objective data can help identify health risks earlier.
    • Enable testers to record their VR sessions to help with debugging (e.g. Chromecast allows projecting the headset content onto a smart TV).
    • This method also lets you exercise post-release monitoring, which will be critical for the success and evolution of your product.

 

Chaos Engineering

Trying to prevent a disaster is a must, but given the complex web of dependencies and risks, there is a strong possibility it will still happen. Even unlikely edge cases do eventually crop up and can have serious impacts. So, from the start of the project, it is important to plan not just for maintenance and resilience but also for recovery.

Chaos engineering simulates diverse types of failures and helps identify system vulnerabilities under erratic conditions. For instance, your engineering team could add CI/CD pipeline steps that inject failures and attacks into the application in non-production environments to verify that the code does not malfunction or develop latency issues.
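Before adopting dedicated chaos tooling, a lightweight starting point is to wrap the transport layer used in non-production tests with a fault injector. The sketch below is a self-contained illustration (FlakyTransport and BufferingClient are hypothetical stand-ins, not a specific framework) of verifying that telemetry is buffered and retried rather than lost when the network randomly fails:

```python
import random


class FlakyTransport:
    """Fault injector: randomly drops sends to simulate intermittent connectivity."""

    def __init__(self, failure_rate, seed=42):
        self.failure_rate = failure_rate
        self.delivered = []
        self._rng = random.Random(seed)  # seeded for reproducible chaos experiments

    def send(self, message):
        if self._rng.random() < self.failure_rate:
            raise ConnectionError("injected network failure")
        self.delivered.append(message)


class BufferingClient:
    """Device-side client that buffers unsent telemetry and retries it later."""

    def __init__(self, transport):
        self.transport = transport
        self.buffer = []

    def publish(self, message):
        self.buffer.append(message)
        self.flush()

    def flush(self):
        remaining = []
        for message in self.buffer:
            try:
                self.transport.send(message)
            except ConnectionError:
                remaining.append(message)  # keep it for the next retry cycle
        self.buffer = remaining


transport = FlakyTransport(failure_rate=0.4)
client = BufferingClient(transport)

for reading in range(100):
    client.publish({"reading": reading})

# Simulate connectivity recovering: retry until the buffer drains (bounded attempts).
for _ in range(50):
    if not client.buffer:
        break
    client.flush()

assert len(transport.delivered) == 100, "Telemetry was lost during injected failures"
print("All 100 readings delivered despite a 40% injected failure rate")
```

The same pattern scales up: swap the in-memory fake for your real client against a staging broker, and let the pipeline inject latency, drops, and reordering instead of a random number generator.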

Here are a few considerations for chaos engineering based on the “Risk factors” section and the possible circumstances of the maintenance and recovery:

  • What are the update settings - automatic (as soon as available) vs scheduled vs manual? What if an update happens in the middle of user interaction?

  • What if simultaneous updates are about to happen across several devices on the network? Are there any precedence/hierarchy issues?

  • Can the device handle interruptions and interferences (e.g. overlapping/colliding signals, crosstalk, ripple, noise, and transients)?

  • What about harsh operating conditions (e.g. extreme temperature, humidity) and the changes from one condition to another?

  • What if there is an intermittent or a long loss of connectivity? These emerging technologies often incorporate a feature that stores data even while the network is unavailable.

  • How does the device operate at low power/in a power-saving mode? What if there is a sudden spike/overflow of data? Or a sudden drop in the signal strength?

  • What are the “Plan C” redundancies when it comes to connectivity, power, user interface? How will the variety of connected devices deal with not only the primary but also the secondary failures?

  • What if a multi-version rollback is required for our device? What if it happens for a connected device instead?
     

With the knowledge of risks and mitigation methods, let’s talk about the tools that can help us achieve the QE goals.

 

Tooling considerations 

The two key characteristics you would be looking for are “versatile” and “intelligent”. And while “intelligent” does mean that engagement with AI is a plus, according to the same Perforce report AI adoption is not yet widespread, so we still need solid human skills for these emerging technologies.

  1. As ecosystem complexity increases, a versatile and scalable platform is needed that can effectively support hybrid environments to test embedded software and firmware across devices. Support for edge computing testing is crucial as IoT devices often rely on edge nodes for data processing and analysis.

  2. Virtualization (device and protocol simulators) can be useful to have in the toolkit, as it allows you to start testing earlier in the process and run multiple tests simultaneously. Just remember that it will never be a perfect substitute: real-device testing is still required to capture the more subtle nuances and variations.

  3. Continuous, real-time monitoring is vital for promptly detecting issues and optimizing performance, even more so for IoT products. Consider logging and auditing mechanisms at different levels (e.g. network, application). Additionally, the observability strategy and toolkit should include a data collection plan for building effective predictive testing models to enrich the customer experience in the long run. More advanced “mission control” platforms can supervise systems, rely on predictive analytics to detect the onset of failure or anomalous behavior, and send commands to automatically correct issues (a minimal anomaly-detection sketch follows this list).

  4. Related to monitoring, the number of convoluted dependencies in these emerging technologies requires more robust root cause analysis and reporting capabilities to drive continuous improvement.
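As a minimal illustration of the anomaly-detection idea mentioned in point 3, here is a sketch of a rolling-baseline detector over a device telemetry stream; the window size, threshold, and sample data are illustrative assumptions rather than a production design:

```python
from collections import deque
from statistics import mean, stdev


class RollingAnomalyDetector:
    """Flags telemetry points that deviate strongly from a rolling baseline."""

    def __init__(self, window=30, threshold_sigma=3.0):
        self.window = deque(maxlen=window)
        self.threshold_sigma = threshold_sigma

    def observe(self, value):
        is_anomaly = False
        if len(self.window) >= 10:  # require a minimal baseline before judging
            baseline_mean = mean(self.window)
            baseline_std = stdev(self.window) or 1e-9  # fall back to a tiny value if the baseline is flat
            if abs(value - baseline_mean) > self.threshold_sigma * baseline_std:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly


# Illustrative device temperature stream with an injected spike at the end.
detector = RollingAnomalyDetector()
stream = [21.0 + 0.1 * (i % 5) for i in range(40)] + [35.0]
alerts = [(i, value) for i, value in enumerate(stream) if detector.observe(value)]
print(f"Anomalies detected: {alerts}")  # expected to flag only the 35.0 spike
```

In a real deployment, this kind of logic would sit behind the monitoring platform’s alerting and, as noted above, ideally trigger corrective commands automatically.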

The rapid growth of use cases around emerging technologies like IoT and AR/VR is enabling organizations to disrupt the physical barriers of customer interactions. While industry adoption remains a work in progress, tech giants have invested billions in the metaverse, guaranteeing its relevance for at least the near future.

Organizations need QE to keep up with this digital transformation, accelerate the lifecycle, and reduce time to market. Comprehensive QE strategies that cover these technologies should proactively re-evaluate personalized, evolving user experiences and associated risk factors, leverage crowd testing and chaos engineering, and adopt intelligent, versatile tooling solutions.

In today’s virtually connected world, rolling out new products often requires frequent and efficient collaboration across numerous stakeholders. Test management is often at the heart of that development process, and Xray Enterprise is a versatile test management tool that easily handles different modern testing types and techniques. Intelligent capabilities like Test Case Designer are well-suited to the complexities of these emerging technologies.

Let's embrace the challenges and opportunities that lie ahead together.
