Clinton Williams is a Senior Manager of Test Engineering at Groupon. Clinton has been working at Groupon for just over 6 years, starting out as a mobile quality engineer, then moving on to managing the mobile teams’ day-to-day work, and now serving as the Senior Manager of Test Engineering at the company. We spoke to Clinton about the Groupon apps, the teams, and everything in between.
Great to have you Clinton! Let’s start with your team and your apps?
“My team works on the iOS and Android versions of the Groupon consumer app, LivingSocial, and the Groupon merchant app. Across the mobile and web bookable marketplace and our financial backend systems, it’s roughly 48 people in my team in total.”
How is development being managed?
“Within the mobile application – it’s actually something we’re attacking at the moment – we’re trying to remove the silo of having specialized people who only know a particular area of the app, because the go-to person becomes the bottleneck. So, as far as mobile application development goes, we do have people who know areas of the app really well, but generally speaking we try to share the workload across all of the teams.”
What can you tell us about your quality process? As I recall, you do a lot of automation, manual testing, and things in-between that connect. How does that work?
“We connect a lot of things together, so we do straight-up testing of features. For example, we focus a lot on particular stories, pairing a QA with a developer on a story, working within a sprint, and signing off features. From a regression perspective, we focus more on user flows and customer experience. As long as the customer can do what they need to do (searching for a deal, purchasing a deal, checking out, etc.), smaller things, like whether a button should be on the right side of the screen versus the left, are less impactful. With a smaller QA team, we’ve switched away from doing full-scale regression of 1,000 test cases manually to really focusing hard on end-to-end flows. We’ve gone down to about 150 to 300 tests on the regression front, and we are looking at potentially shifting our automation to Appium to focus on those 150 to 300 test cases as well. Because we’re not going into full detail on a lot of things, we also rely heavily on outside metrics. That’s where cat-food comes in. If our business users see something that doesn’t look quite right, that’s generally a good indication that a customer may see something that’s not quite right, or there was a type of behavior or flow that we weren’t expecting. Our reviews are also something that we take into account. So we look at 1-star reviews on a weekly and monthly basis, see where the trends are, and dig into those areas. Those give us a kind of smoking gun.”
How do you know if you’re testing the right things? How do you know if your users use the app the way that you planned?
“That’s a really good question. For the most part, we rely on the product managers to do their due diligence there, looking at the NST tracking and metrics that we put into the application, which actually do show where users clicked and how many deals they scrolled through, and help us with things like the conversion rate.”
“In addition, we have a pretty well-established dog-food process. We run two-week sprints now, two-week regressions, two-week releases. So at the start of every sprint, we’ll release a new dog-food version, and we’ll continually release updates. And then we have a support team who monitors user reviews, crashes, all that sort of stuff, and is also responsible for first line triage of all incoming feedback and determining where it needs to be routed.”
“And of course, we use TestFairy to run our dog-food program. It helps us easily push our user feedback into JIRA, gives us the screenshots, the video, and a good view of the flow of what the customer was doing, and it allows us to easily distribute our apps to users and automatically update their application more frequently. We use JIRA for all communication back and forth with our employees; it’s much easier that way, and everything has a documentation trail.
From a QA perspective, TestFairy makes our process a lot more efficient. The fact that we don’t have to reach out to users and ask for a screenshot, or figure out a way to get our hands on their device and look at what the logs are saying, definitely makes it much easier to jump in and reproduce. It also lets us look through what the user was doing, because sometimes the way a user explains what they were trying to do is vastly different from what they actually were doing. Or what they were expecting or describing is vastly different from what you actually see on the screen.”
Can you recall a million-dollar bug?
“The one that sticks out to me is when we were doing the LivingSocial migration. Basically, we were skinning the Groupon app in such a way that we could re-skin it as LivingSocial as well, and release both a Groupon app and a LivingSocial app from the same code base. As part of that project, there was a whole lot of back-end and front-end work that needed to be done both on the website and on the mobile applications, and we decided that after all the testing we’d done, the one thing we should do was really incentivize users. We increased our cat-food reward for when people were finding bugs and launched one specifically for LivingSocial. When we launched that, a very particular scenario came up where users who had been migrated in a particular time period weren’t migrated across properly, so as a result that particular subset of 10,000 or 20,000 users wouldn’t be able to purchase or check out. There was no way that we would have found that in testing unless we had stumbled across one of those real-world user accounts that had been migrated.
There are usually a few times per release when TestFairy finds bugs before QA finds them. Because dog-food builds are constantly going out, something will come in as a piece of feedback before QA has even started regression; we triage the feedback and fix the problem.”
So the Groupon app with the TestFairy SDK is run by thousands of company employees all around the world, using different languages and devices. You don’t tell them what to do, right? You just encourage them to use the app and let you know if they find something?
“We don’t tell them what to do, at least not at this point. Most of the time, it gets released and people pick it up and use it day-to-day. They raise a ticket or put feedback into the system, and then they get $20 in Groupon Bucks per bug if it’s the first unique bug of that type, or $50 in Groupon Bucks if their improvement idea actually ends up in the application. We don’t just use it for bugs; it’s also used to suggest improvements to the app.”
By the way, you call it cat-food? Is that the same as dog-food? 🙂
“Yes. Groupon’s internal logo is effectively a cat, and I believe that came from the first CEO, who was a cat-lover, so it naturally evolved from there. Hence we call dog-food cat-food, but behind the scenes it’s all dog-food. We don’t want to confuse developers too much.”
What would you recommend that a person in your chair at a different company do better, either with TestFairy or with another tool? Any concluding thoughts?
“I think the most important thing I’ve learned is that you can test against the requirements as much as you want, but if that’s all you do, you end up blinding yourself to a lot of other things. Customer experience is key. If the customer is not enjoying using the app, they’re not going to come back. It doesn’t matter how good that particular feature is or how well it works to plan. If it’s a problem, it’s a problem. If it’s something they don’t enjoy using, they’re not going to keep coming back. End of story.”