The increasingly digital consumption of products and services, along with the shift to remote work and education, has significantly affected how software is developed and tested. According to Deloitte’s “2023 Quality Engineering Trends” report, “new technologies” is still the top factor driving testing spend.
Artificial Intelligence (AI) is arguably the “hottest” of these emerging technologies, and we have already discussed several aspects of it in our AI in software testing articles. So, in this article, we wanted to spend more time on the Internet of Things (IoT) as well as augmented and virtual reality (AR and VR). Their high popularity is highlighted by Perforce’s “2024 State of Continuous Testing” report:
While the core goals (defect prevention, balancing speed and quality, satisfying functional & non-functional requirements, etc.) of quality engineering (QE) still apply to these technologies, we wanted to explore the nuances and their impact on QE efforts. We hope QE professionals find our insights informative for 2024 and beyond as they continue to innovate.
Industries ranging from automotive and medical technology to consumer electronics and entertainment have already embraced intelligent products with regular software updates as part of their business strategies. That means compounding complexity: increased integration issues, higher availability requirements, stricter safety compliance, and challenges in post-release lifecycle management.
Let’s put ourselves in the shoes of a QE team that has been involved in “typical” software development projects before (so they are very familiar with the “basics”) and is now asked to evaluate whether an IoT or AR/VR product is ready to release. What are the “traps” they should be aware of?
At a high level, we can think about 3 affected QE areas:
1. Risk factors;
2. Testing methods;
3. Tooling considerations.
Let’s analyze each of these, focusing on less obvious tips.
And one note before we dive in: these technologies can be not just a target for testing but also a supporting tool (e.g. testing AR vs using AR for testing manufactured products). In this article, our primary focus is on the first scenario.
For these emerging technologies, the level of possible personalization is one of the main selling points. It does sound exciting, but from the QE perspective it also leads to insufficient cross-product standardization and “overwhelming” creativity of users. We wanted to tackle this risk first because it affects other areas.
Some of the key components of this risk:
1. Integration and compatibility
One critical point to keep in mind: the QE approach to these emerging technologies should put even more emphasis on testing the synergy between hardware and software, rather than evaluating each “half” individually.
For the Internet of Things sphere, compatibility with different operating systems and their versions, as well as different device generations, is crucial. That also means paying special attention to data translation and mapping mechanisms: ensuring your product is interoperable enables easier data exchange between non-standardized devices from various manufacturers that participate in the same IoT network.
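To illustrate, here is a minimal Python sketch of such a translation layer, normalizing temperature readings from two hypothetical vendor formats into one internal schema before further checks; the field names, units, and conversion are illustrative assumptions rather than any real manufacturer’s API.

```python
# Hypothetical sketch: normalize readings from two vendors' payload formats
# into one internal schema so downstream checks can treat them uniformly.
from dataclasses import dataclass


@dataclass
class Reading:
    device_id: str
    temperature_c: float
    timestamp_ms: int


def normalize(payload: dict) -> Reading:
    """Map vendor-specific field names and units onto the internal schema."""
    if "temp_f" in payload:  # assumed vendor A reports Fahrenheit
        celsius = (payload["temp_f"] - 32) * 5 / 9
        return Reading(payload["id"], round(celsius, 2), payload["ts"])
    if "temperature" in payload:  # assumed vendor B reports Celsius
        return Reading(payload["deviceId"], payload["temperature"], payload["timestamp"])
    raise ValueError("Unknown payload format")


# A QE check could assert that both formats converge on the same result:
assert normalize({"id": "a1", "temp_f": 68.0, "ts": 1}).temperature_c == 20.0
assert normalize({"deviceId": "b2", "temperature": 20.0, "timestamp": 1}).temperature_c == 20.0
```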
For AR/VR, there is a dependency on specialized hardware of different types, most commonly headsets, smartphones, and glasses. We will need to understand whether our product aims to provide a more powerful, immersive experience with a computer dependency (similar to devices like Oculus Rift/Quest/Go, HTC Vive, or Valve Index) or a more mobile one that relies on a smartphone (e.g. Samsung Gear, Google Daydream). Then, our QE plan will need to account for multiple possible output streams, testing the actual AR/VR experience in both the device and the desktop environments (i.e. the initial projection along with the shareable “recording”).
Do not forget about the back-end systems that aggregate, store, and analyze the data transmitted by our product. Usually, this integration aspect depends on cloud platforms such as AWS, Azure, or Google Cloud Platform, but if our product relies on local server(s), we would need to account for that in our QE plan.
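As an example of covering this integration aspect, below is a hedged sketch of an end-to-end check that a device reading becomes visible in the back end; the endpoint URLs, payload fields, and 30-second polling window are hypothetical assumptions, not a specific cloud provider’s API.

```python
# Hypothetical sketch: verify that a reading sent by a device reaches the back end.
import time
import uuid

import requests

INGEST_URL = "https://api.example.com/v1/ingest"       # assumed ingestion endpoint
QUERY_URL = "https://api.example.com/v1/readings/{}"   # assumed query endpoint


def test_reading_reaches_backend():
    reading_id = str(uuid.uuid4())
    payload = {"readingId": reading_id, "deviceId": "sensor-42", "temperatureC": 21.5}

    resp = requests.post(INGEST_URL, json=payload, timeout=5)
    assert resp.status_code in (200, 201, 202)

    # Poll for up to ~30 s to allow for asynchronous aggregation in the back end.
    deadline = time.time() + 30
    while time.time() < deadline:
        found = requests.get(QUERY_URL.format(reading_id), timeout=5)
        if found.status_code == 200:
            assert found.json()["temperatureC"] == 21.5
            return
        time.sleep(2)
    raise AssertionError("Reading was not visible in the back end within 30 s")
```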
2. Connectivity
It can be thought of as a subset of integration, but we wanted to highlight it separately because it is one of the most challenging layers to test due to the nearly infinite variety of settings and protocols. As you prepare to explore this aspect, you have to analyze the following possibilities:
Then look into the specific risk factors and boundaries of the selected combinations.
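One simple way to make that analysis concrete is to enumerate the combination space and flag the riskier subsets for deeper coverage. The sketch below is hypothetical; the parameter values are illustrative assumptions, and a combinatorial design tool can then reduce the full set to a manageable suite.

```python
# Hypothetical sketch: enumerate connectivity combinations to scope test coverage.
from itertools import product

networks = ["Wi-Fi 2.4 GHz", "Wi-Fi 5 GHz", "Bluetooth LE", "Cellular"]
protocols = ["MQTT", "CoAP", "AMQP"]
firmware = ["current", "previous", "oldest supported"]

combinations = list(product(networks, protocols, firmware))
print(f"Full combination space: {len(combinations)} scenarios")

# Example of flagging a known risk boundary for extra attention:
risky = [c for c in combinations if c[0] == "Cellular" and c[1] == "CoAP"]
print(f"High-risk subset to prioritize: {len(risky)} scenarios")
```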
3. Environment
Whereas Integration and Connectivity cover the technologies surrounding our product, this component includes the other external factors we should consider in our QE efforts, things you usually wouldn’t worry about as much when testing a more traditional piece of software.
The risk of identity theft and security violations is even greater given the variety of authentication methods as well as the extent of digital and biometric data collection. Such complexity necessitates dedicated security and privacy guidelines, e.g. the Internet of Things Cybersecurity Improvement Act of 2020. In the AR/VR space, some guidance about the data gathered through immersive experiences can be found in broader laws and regulations such as COPPA, FERPA, and HIPAA. In addition to validating the robust security measures we will discuss below, the product design and QE effort should also address unambiguous consent mechanisms and data anonymization to protect user privacy.
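For instance, a QE suite could assert that outbound telemetry is anonymized before it ever leaves the device. The sketch below is a simplified, assumption-based example; the field names and the sample payload are hypothetical.

```python
# Hypothetical sketch: assert that an outbound telemetry payload contains no raw PII.
import re

PII_FIELDS = {"email", "fullName", "preciseLocation", "faceScan"}
EMAIL_PATTERN = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")


def assert_anonymized(payload: dict) -> None:
    leaked_fields = PII_FIELDS.intersection(payload)
    assert not leaked_fields, f"Raw PII fields present: {leaked_fields}"
    # Also scan string values for obvious identifiers such as email addresses.
    for key, value in payload.items():
        if isinstance(value, str):
            assert not EMAIL_PATTERN.search(value), f"Email-like value in '{key}'"


# Example: a pseudonymized user reference and a coarse region should pass.
assert_anonymized({"userRef": "a3f9c0d1", "region": "EU-West", "sessionLengthS": 840})
```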
One security aspect that may not be top of mind for teams with a software-only background is the physical hardening of devices to limit access. The criticality of implementing both physical and digital security controls for IoT and AR/VR devices is hard to overstate.
Another aspect is a direct consequence of Integration and Connectivity: the threat may come as a result of a “betrayal” by the IoT counterparts. Security loopholes in such complex environments are often not transparent; one input in a certain module can make a completely different module vulnerable. If even one component slacks on cybersecurity measures (or a user decides not to change the default password on one of the devices because it does not seem “significant”), the whole network is at risk.
In addition to testing both external and internal types of attacks, consider stronger default credentials and multi-factor authentication support. Users can also choose to apply network segmentation to our IoT devices, which isolates them from more sensitive data. Our QE plan would need to account for such a possibility and test that our devices can securely maintain that segmentation while getting enough resources for satisfactory performance.
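A hedged sketch of such credential checks might look like the following; the device address, the credential list, and the response contract are assumptions for illustration only.

```python
# Hypothetical sketch: the device's local admin API should reject factory-default
# credentials and require an MFA step even when the password is correct.
import requests

DEVICE_LOGIN_URL = "http://192.168.0.50/api/login"  # assumed device address and endpoint
FACTORY_DEFAULTS = [("admin", "admin"), ("admin", "password"), ("root", "12345")]


def test_default_credentials_rejected():
    for user, pwd in FACTORY_DEFAULTS:
        resp = requests.post(DEVICE_LOGIN_URL, json={"user": user, "password": pwd}, timeout=5)
        assert resp.status_code in (401, 403), f"Default credentials accepted: {user}/{pwd}"


def test_mfa_challenge_issued():
    resp = requests.post(
        DEVICE_LOGIN_URL, json={"user": "owner", "password": "valid-password"}, timeout=5
    )
    # Assumed contract: a correct password alone should only yield an MFA challenge.
    assert resp.status_code == 200 and resp.json().get("mfaRequired") is True
```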
Also, we will need to understand all the layers where security risks exist and the nuances of each: the encryption within the software app, the protocol-level encryption the service uses to communicate between the product and, say, the user’s smartphone, as well as the encryption on the home Wi-Fi network. How is the encryption of data at rest and in transit performed? Depending on the communication protocols we discussed, which network ports (e.g. the MQTT broker, CoAP, and AMQP ports) are open and under which conditions? Answers to these questions will help with more comprehensive test design for our QE strategy.
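For example, a lightweight check could probe the standard plaintext TCP ports (MQTT 1883, AMQP 5672, plus legacy Telnet 23) and confirm that only the TLS-protected MQTT port 8883 responds; CoAP runs over UDP 5683 and would need a separate check. The device hostname below is an assumption.

```python
# Hypothetical sketch: only TLS-protected ports should be reachable on the device.
import socket
import ssl

DEVICE_HOST = "bridge.local"  # assumed hostname of the device under test
EXPECTED_CLOSED = {1883: "MQTT (plaintext)", 5672: "AMQP (plaintext)", 23: "Telnet"}
TLS_PORT = 8883               # standard port for MQTT over TLS


def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0


# Plaintext and legacy ports should not be exposed.
for port, name in EXPECTED_CLOSED.items():
    assert not is_open(DEVICE_HOST, port), f"{name} port {port} is unexpectedly open"

# The TLS-protected broker port should be open and present a verifiable certificate.
context = ssl.create_default_context()
with socket.create_connection((DEVICE_HOST, TLS_PORT), timeout=2.0) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=DEVICE_HOST) as tls_sock:
        assert tls_sock.getpeercert() is not None
```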
Lastly, consider the negative impact of other emerging technologies. AI has plenty of helpful use cases, but unfortunately that is also true for hackers, who can build tools that scale more easily and carry out attacks faster. While the tactics and “footprints” of traditional IoT/AR/VR threats will look the same, the magnitude and customization of AI-powered attacks will make them increasingly hard to thoroughly test and prepare for.
Interacting with VR and AR products can lead to serious health issues (e.g. motion sickness, headaches, eye strain, seizures, and collisions with real-world objects while in a virtual space), so our QE team should cover scenarios with a wide range of motions, environments, durations, distances, etc. In VR, haptic feedback is a significant aspect of the user experience, as its accuracy and responsiveness can greatly enhance the sense of immersion; therefore, it is a vital part of our evaluation.
Overall performance plays a big role in IoT/AR/VR usability. Many of the key metrics will be familiar (loading speed, load tolerance, throughput efficiency, uptime, etc.), but the thresholds for acceptable results are stricter.
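As an illustration, here is a hypothetical latency check against an assumed device command endpoint, enforcing a stricter 95th-percentile budget than a typical web application would require; the URL, sample size, and 100 ms budget are assumptions.

```python
# Hypothetical sketch: sample round-trip latency and enforce a p95 budget.
import statistics
import time

import requests

COMMAND_URL = "https://api.example.com/v1/devices/sensor-42/ping"  # assumed endpoint
P95_BUDGET_MS = 100
SAMPLES = 50

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.post(COMMAND_URL, timeout=2)
    latencies_ms.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(latencies_ms, n=20)[18]  # 95th percentile cut point
print(f"p95 latency: {p95:.1f} ms")
assert p95 <= P95_BUDGET_MS, "Latency budget exceeded"
```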
For both usability and accessibility purposes, we should pay special attention to the diversity of user profiles and interfaces, such as touch-based mobile apps (considering text-to-speech, screen readers, imperfect grammar, and vision-related limitations), voice-controlled systems (considering accents, speech impairments, and audio limitations), and/or gesture recognition with spatial tracking (considering disability-related ranges of motion). Remember to validate not only the object interaction but also menu navigation and other auxiliary functions.
All these functional and non-functional considerations further reiterate the importance of testing IoT/AR/VR in a more hands-on manner: with real devices and a wide range of users in realistic environments. Now that we have covered some of the riskier factors to consider for these emerging technologies, how can we tackle them and improve our defect prevention?
In addition to “traditional” methods (risk-based test design, a mix of automated and exploratory testing, etc.), we wanted to highlight a couple of newer methods that are particularly relevant for risk mitigation when it comes to IoT and AR/VR: crowdsourced testing and chaos engineering.
One of the most effective approaches to testing has always been a customer-first mindset. Crowdsourced testing facilitates that by leveraging the higher flexibility and wider reach of a diverse group of testers from various locations. They can embark on “unfiltered” customer journeys and help assess how an IoT/AR/VR product performs in real-world usage scenarios.
Depending on the timeline and resources, we can consider three stages:
1) “dogfooding”, with a limited set of internal users;
2) beta testing, with a limited set of external testers;
3) broader crowdsourced initiative.
Here are a few considerations for crowdsourced testing:
Trying to prevent a disaster is a must, but given the complex web of dependencies and risks, there is a strong possibility it will still happen. Even unlikely edge cases do eventually crop up and can have serious impacts. So, from the start of the project, it is important to plan not just for maintenance and resilience but also for recovery.
Chaos engineering simulates diverse types of failures and helps identify system vulnerabilities under erratic conditions. For instance, your engineering team could create CI/CD pipelines that inject faults and simulated attacks into the application in a non-production environment to verify that the system does not malfunction or suffer unacceptable latency.
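A minimal sketch of such an experiment follows, assuming a Linux test host (root privileges are required for traffic control), a staging health endpoint, and illustrative thresholds; the interface name and URL are hypothetical.

```python
# Hypothetical sketch: degrade the network in non-production and verify graceful
# degradation plus recovery, using Linux traffic control (netem).
import subprocess
import time

import requests

INTERFACE = "eth0"                                   # assumed test host interface
HEALTH_URL = "https://staging.example.com/health"    # assumed staging health endpoint


def degrade_network():
    # Add 300 ms of delay and 5% packet loss on the test host.
    subprocess.run(["tc", "qdisc", "add", "dev", INTERFACE, "root",
                    "netem", "delay", "300ms", "loss", "5%"], check=True)


def restore_network():
    subprocess.run(["tc", "qdisc", "del", "dev", INTERFACE, "root", "netem"], check=True)


try:
    degrade_network()
    # Under degraded conditions, health checks should still succeed, just more slowly.
    resp = requests.get(HEALTH_URL, timeout=10)
    assert resp.status_code == 200
finally:
    restore_network()

# After restoring the network, the product should recover within its objective.
time.sleep(5)
assert requests.get(HEALTH_URL, timeout=2).status_code == 200
```

Running an experiment like this on a schedule in the pipeline helps confirm that resilience and recovery objectives continue to hold as the product evolves.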
Here are a few considerations for chaos engineering based on the “Risk factors” section and the possible circumstances of the maintenance and recovery:
With the knowledge of risks and mitigation methods, let’s talk about the tools that can help us achieve the QE goals.
The two key characteristics you should be looking for are “versatile” and “intelligent”. And while “intelligent” does mean that engagement with AI is a plus, according to the same Perforce report, AI adoption is not high enough yet, so we still need solid human skills for these emerging technologies.
The rapid growth of use cases around emerging technologies like IoT and AR/VR is enabling organizations to break down the physical barriers of customer interactions. While industry adoption remains a work in progress, tech giants have invested billions in the metaverse, which should keep it relevant for at least the near future.
Organizations need QE to keep up with this digital transformation, accelerate the lifecycle, and reduce time to market. Comprehensive QE strategies that cover these technologies should proactively re-evaluate personalized, evolving user experiences and the associated risk factors, leverage crowdsourced testing and chaos engineering, and adopt intelligent, versatile tooling solutions.
In today’s virtually connected world, rolling out new products often requires frequent and efficient collaboration across numerous stakeholders. Test management is often at the heart of that development process - and Xray Enterprise is a versatile test management tool, easily handling different modern testing types and techniques. Several intelligent capabilities like Test Case Designer are well-suited for the complexities of the emerging technologies.
Let's embrace the challenges and opportunities that lie ahead together.