Ryan Steinberg, Manager, Global Support Operations at Intercom, joins us in this episode of Support Ops Simplified to discuss analyzing CSAT.
Sid: Recording is on, and here we go. Hello everyone, and welcome to another episode of Support Operations Simplified. Today we have with us Ryan Steinberg from Intercom, one of America’s fastest-growing messaging platforms.
Ryan, nice to have you on the podcast. Can you introduce yourself, please?
Ryan Steinberg: Sure, thanks for having me, Sid. Like Sid said, my name is Ryan Steinberg. I work at Intercom, a customer communication platform. I’ve been working at Intercom for over four years now, so it’s been a bit of a wild ride. We were about 100 employees when I started, and now we’re north of 600, so crazy growth, and similar growth on our support team. I actually started off on our support team back in the day as a frontline support rep, when the team was around 14 people.
After about a year of being on the team, I realized that our analytics, our reporting, our KPIs, basically all of our numbers, were just crappy or non-existent. I finally got the bravery to teach myself SQL and our visualization tool at the time. After much pestering, I basically carved out our support operations role here at Intercom, and I’ve been doing that ever since. Now I lead our global support operations team, a team of four people including myself, that does everything involving numbers, KPIs, reporting, and all that fun stuff that I initially got into.
Sid: Awesome, that’s great. I know about Intercom’s growth and the fact that you guys have set the standard for messaging platforms and how people interact with their customers through them. Tell us a little bit more about how you got into support ops. You said you carved out a niche for yourself, but what drove that?
Ryan: I studied econometrics at the University of Michigan, so numbers weren’t really foreign to me. There’s a whole nice meaty story about how I actually got to San Francisco and got into tech, but I won’t go into that, because that’s like 10 minutes. It’s a good story though, if you ever [unintelligible 00:02:18].
Anyway, how did I get into it? Well, I realized that there was a real opportunity for our team. We were using Intercom at that time, and reporting within the actual product back then was really, really lightweight. We didn’t have information at the level of detail that we wanted around first response times, or times to close, or how efficient people on the team actually were: how many conversations they were taking, how many customers they were talking to, how long it took them to resolve conversations, all that good stuff. Being a little career-minded, I guess you would call it, I realized that there was a niche there, and something that I was interested in.
I was very comfortable with the numbers, and it really lit a fire under my ass to finally get over the hump of teaching myself SQL and the visualization tool. Once I did all those things, I talked to our head of support at the time, Jeff Gardner, and to our new VP of sales, Albie, who’s [unintelligible 00:03:20] here still. With enough convincing, they realized that there was an opportunity here and a need to have somebody doing this full time. I’ve basically been doing that ever since.
Sid: Interesting. What did that do for your team and for the business? How did it change your day-to-day operations?
Ryan: Sure. Three years ago, before I started doing this, we had Jeff, our head of support at that time, who has [unintelligible 00:03:51] as a mobile developer, and was actually hired to be a mobile dev for Intercom back in the day, but ended up starting this support function randomly. That’s a whole other story too. He was writing SQL queries in Looker to generate all the stats that we were using to make decisions about headcount, hiring, where we were hiring people, and what kind of support we were actually providing to our customers. I’m sure he’s a lovely mobile dev, but his SQL left something to be desired.
There were some errors here and there, there was some double counting of things, there were a lot of issues in that SQL, so it was a pretty radical shift overnight to have somebody switch into that role full time and gain expertise. It was my first time actually using SQL on the job, so I wasn’t an expert in the beginning, but I was somebody who could focus on it full time, test things, and make sure everything was running the right way, without having to worry about running a support team of probably 25 people at that time.
Ryan: It was pretty radical overnight, and since then we’ve switched over to Tableau, which has been fantastic. We have dozens and dozens of lovely little operational charts that we use to make decisions every single day. A pretty radical overnight shift that took three years.
Sid: It’s one of those things where, for a lot of companies that grow very quickly, it’s a similar challenge: you’re flying by the seat of your pants, trying to collect all of this data and understand it the best way you can, but maybe being a little too scrappy about it. That definitely rings true. As part of the toolset that you guys built around this, can you give me an example of something that you found, or something that the data told you, that ended up improving your customer experience in a big way?
Ryan: That’s a good question, what do we want to talk about?
Sid: I’m sure there’s many examples.
Ryan: There are a lot of examples. I’m trying to think of one that really pinpoints something fantastic. Okay, let’s see. This is a good one in that it gets into both performance management of individuals and the customer experience. Within Intercom, at the end of a conversation, somebody on our support team will close out that conversation, which triggers a CSAT survey being sent through the messenger, so pretty standard stuff. We’re very lucky in that every single time we send a CSAT survey, about 40% of the time customers actually use that survey and give us feedback, which is two to three times above the standard 15% that most people get with those surveys. We’re very lucky that we get a lot of feedback from our customers.
What’s interesting, though, is that there was a product decision back when we launched the CSAT surveys in the messenger to not send a [inaudible 00:07:03] when certain conditions are or aren’t met.
One of those is that if a conversation has fewer than 250 characters, we’re not going to send the CSAT survey, and this makes sense if something is super, super easy, or somebody is just talking to themselves, like, “Oh, I figured it out, close it,” stuff like that. The other is that if an admin (one of our agents, I guess you would call them) or the customer doesn’t write back in the conversation in over seven days, we will not send a CSAT survey. This is pretty interesting. Thinking about it from a product perspective, it does make sense, in that this is a stale conversation at that point, so you don’t want to be sending CSAT surveys about something that isn’t really relevant to somebody anyway.
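The two exclusion conditions Ryan describes boil down to a simple check. Here is a rough sketch in Python; the function name, field names, and exact threshold semantics are our assumptions for illustration, not Intercom’s actual implementation:

```python
from datetime import datetime, timedelta

MIN_CHARS = 250                   # skip trivial, self-resolved conversations
STALE_AFTER = timedelta(days=7)   # skip conversations nobody has touched in a week

def should_send_csat(total_chars: int, last_reply_at: datetime,
                     closed_at: datetime) -> bool:
    """Decide whether closing a conversation should trigger a CSAT survey."""
    if total_chars < MIN_CHARS:
        return False  # under 250 characters: too small to be worth rating
    if closed_at - last_reply_at > STALE_AFTER:
        return False  # stale: no reply from agent or customer in over 7 days
    return True
```

The second branch is the seven-day exclusion that becomes important in the story that follows.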
What was interesting operationally is that Intercom’s support reps, the people on our support team, know the product really, really well. They’re very much aware of this seven-day exclusion.
Basically, what we uncovered, let’s say two years ago, was that people were snoozing conversations, basically removing them from their inbox but not closing them, for over seven days, for conversations that they thought would have a higher likelihood of getting a negative CSAT survey. A common example we heard was feature requests for things that we don’t have and might never have, or situations where somebody outside of our team maybe had a big screw-up that affected the customer. They would be snoozing these for seven days to avoid those CSAT surveys. We uncovered this after some of the managers brought it to our attention, and through a bunch of hard work digging through all of our data tables and writing some custom stuff to track this, we started tracking the rate at which people were avoiding CSAT via this snoozing, or by just not responding to the conversation in over seven days. That was a really big one, in that it helped root out some negative or malicious behavior, depending on how you want to look at it, on our support team.
More importantly, it gave more opportunities for our customers to actually give us feedback-
Ryan: -despite the fact that maybe these conversations weren’t the best shining example of Intercom or the Intercom support team. We think it’s super important that every single person who wants to give us feedback has the chance to do so. That was a really good one, and it took a lot of digging in our back-end tables. That’s one example.
Sid: No, and there are a couple of points in that example that are really interesting. The first one is, we’re talking about what transformation led to an improved customer experience, and it’s interesting that it also happened to be related to performance management, to a certain extent, right?
Sid: I’ll come back and explore that a little more. The other point that was really interesting to me is the fact that, as you’re getting this feedback, you might be finding nuggets in there that customers are sending back to you. You’re getting a 40% response rate on CSAT. How are you differentiating between CSAT that is directly related to the quality of service versus product features, or the lack thereof?
Ryan: Man, that is a tough question. It’s something that rings very true to my CS backbone, as somebody who talked to customers for a long time.
Sid: That’s the biggest thing that you hear from your team, right? Like, “Hey, I just got dinged for something that wasn’t even my fault. I provided a great level of service, but the negativity here is not pertaining to that.” Right?
Ryan: Yes. I’d say there are three things that we do there. The first one is that we’re pretty extensively tagging all of our different conversations. If a conversation has a feature request on it, and we’re getting a higher rate of negative CSATs there, we can dig into those, look at the remarks that people are actually leaving for us, and figure it out that way.
The second one is that we have this tool that we built internally called carousel, which allows us to take conversations from within Intercom, bring them into a different UI, and review them based on quality, difficulty, and tone, leave comments for improvement, and check whether they followed internal workflows. Everybody on the team is responsible for leaving at least five carousel reviews each week, and then the managers have a higher number for each of the individuals on their team. With that, we can root out what is actually going on in these conversations once we get a negative CSAT, since we are reviewing all of those in carousel.
That’s a good way for us to bubble up: this person got a negative CSAT rating on this conversation, but it was rated in carousel as fantastic on quality and tone, so what’s actually going on there?
Sid: Sorry, this tool is something that’s home-built at Intercom?
Ryan: Yes, correct. It’s something that we built using our API, so it’s something that any of our customers could build. It basically ports a random selection of conversations into this tool, and then we can review them from there. Using those first two things, we came up with a new workflow for managers, since managers are reviewing all these negative and neutral CSAT ratings that we get. Managers can now add a tag onto a conversation, and we can see that that conversation has that tag and that it was added by the manager. With that, we can exclude certain CSAT surveys from negatively impacting somebody.
We’re really, really strict about when people, when managers, can actually use this. A common example is the one that you just stated: somebody writes into us with a feature request, we don’t have it, we don’t think we’re going to build it anytime soon, and we let the customer know that. The person on our support team does a fantastic job, but the customer leaves a rating that reads pretty much exactly like, “Ryan was great at support, but screw you Intercom, you don’t have feature X, Y, or Z.”
With that, a manager can come in, read that, look through the conversation, and see if anything happened that could have been improved. If there was, they don’t use the tag, but if it was perfect, and the customer just gave us a negative rating because they didn’t like that we didn’t have a feature, then we can exclude that CSAT from the individual’s KPIs. It’s a little bit of justice for that kind of situation. It doesn’t cover everything. There are a lot of gray areas where we consciously decided not to exclude those CSAT surveys, but it’s a little win for individuals.
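The exclusion workflow could look something like this sketch; the tag name and the rating shape are invented for illustration, not Intercom’s actual schema:

```python
# Hypothetical manager-tag CSAT exclusion. The tag name "exclude-from-kpi"
# and the rating dictionaries are made up for this example.
EXCLUSION_TAG = "exclude-from-kpi"

def individual_csat(ratings):
    """ratings: list of {"positive": bool, "tags": set} dicts for one agent.

    CSAT is the share of positive ratings, skipping conversations a
    manager has tagged as product feedback rather than service quality.
    """
    kept = [r for r in ratings if EXCLUSION_TAG not in r["tags"]]
    if not kept:
        return None
    return sum(r["positive"] for r in kept) / len(kept)
```

Note that team-level CSAT would still count the excluded conversations; only the individual’s KPI gets the adjustment.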
Sid: No, absolutely. As we’re talking to more and more people in the industry, people who are trying to get to the bottom of what drives customer interactions to be positive or negative, a lot of that is in the details. It’s not something that comes out very clearly in the CSAT KPI itself; you have to ask why, at least a few times, to get to the bottom of what exactly caused that negative interaction. I think you guys at least have the process in place. In terms of scale, though, there are a lot of man-hours in tagging and identifying these things to get it running.
Going back to it, one of the other things was around performance management, and the fact that it’s closely tied to the customer experience. What are the KPIs that you guys are tracking on the agent side? How closely do you see that being related to your CSAT?
Ryan: Yes, definitely. There are three main metrics that we look at for individuals. The first one is this thing called CPH, conversations pulled per working hour. It looks at the number of hours that somebody’s working throughout the week and how many conversations they pulled, meaning how many conversations they took from an inbox in Intercom and responded to immediately after. How many of those are they doing per hour? That’s the volume metric that we use.
The second one is internal subsequent response time. Some people call it subsequent response time, some people call it next response time, all sorts of stuff. It’s basically after the first response, how quickly are you getting back to the customer after that? How quickly is the conversation basically moving along after we put a first touch on the conversation? That’s our second main one.
Then the third one is CSAT. Directly related, we have CSAT KPIs for each of the different roles that we have on our team. That obviously maps one to one to CSAT for the team overall.
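The volume metric reduces to simple arithmetic. A minimal sketch (the function name and inputs are ours, not Intercom’s):

```python
def conversations_per_hour(conversations_pulled: int, hours_worked: float) -> float:
    """CPH: conversations pulled from the inbox per working hour in a week."""
    if hours_worked <= 0:
        raise ValueError("hours_worked must be positive")
    return conversations_pulled / hours_worked
```

For example, an agent who pulled 80 conversations over a 40-hour week has a CPH of 2.0.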
The interesting thing about all of our different metrics is that, very consciously, we’ve mapped these on a one-to-one basis to the team KPIs that we actually care about. Here at our company, it depends obviously on what segment somebody’s in and what they’re asking about, but overall, the general experience that we’d like to provide to everybody is a first response time of two to four hours, in which we’re clearly letting somebody know, hey, you can expect to hear back from us within that time period, and we hit that time period. After that, we have a subsequent response time of less than 25 minutes, so the conversation is moving along pretty quickly. After that, ideally, we’re closing the conversation in under 48 hours.
The customer lets us know it’s confirmed resolved, they give us a thumbs up, they say thanks, that helped, or whatever it might be, so that conversation [unintelligible 00:16:40] under 48 hours, and then we send the CSAT survey and the customer gives us a happy CSAT result. Each of those team metrics maps very much one to one with individual metrics, in that we have first response time for the team and conversations pulled per working hour for individuals, mapping one to one: you pull the conversation and then you leave a first response, so those things equal each other. It’s the same action.
Same thing with subsequent response time and CSAT: we’re measuring subsequent response time for the team and for individuals, and the same with CSAT for the team and individuals. Very consciously, we’ve made sure that these things map one to one, because we have a team of around 90 people around the world talking to our customers. We want to make sure that at scale, these people have a clear connection to the customer base as a whole and to the experience that we’re providing. Obviously, Intercom’s goal is to make internet business personal. If anybody’s ever talked to our support team, I’m sure you’ve gotten GIFs, emojis, and the like, and you can build a direct one-to-one relationship with somebody on our team. At the end of the day, we have this massive customer base of over 30,000 paying customers that we need to serve. Making sure that people have that bigger vision in mind, even when they’re just looking at their individual KPIs, is super important to us.
Sid: Interesting. That’s really interesting, because when you were talking initially about the KPIs, one of the things was around productivity, which is how many conversations you’re pulling per hour. Do you have guardrails and metrics in place to say, these are some of the guidelines for how many conversations people should be interacting with? And in terms of productivity, other than volume, are there any other things that you guys are using?
Ryan: Yes, definitely. For productivity, we mapped that out for each individual role that we have, and for how long somebody’s been in role. If you’re three to six months in as a CSS one, a Customer Support Specialist one, you have a different KPI than somebody who’s a 12-to-15-month customer support engineer, a CSE. We’re doing it that way. I think I forgot your question.
Sid: I was asking, other than volume, are there any other metrics that count towards the productivity aspect?
Ryan: Yes. There’s what I call the holy trinity of customer time-based metrics. It’s not very pithy, but it works; I just call it the holy trinity. Anyway, it’s first response time, subsequent response time, and time to close. With those, we see how long somebody had to wait before they got in contact with us via the Intercom messenger, how quickly we kept responding to the customer after that first response, and how long it took us to actually close the conversation. With those three, we have a pretty good idea of the conversation journey. For individuals, though, we look at a couple of different things.
One is that subsequent response time, basically how quickly people are getting back to the customer within the conversation. That’s a pretty good indicator of how productive they are in the conversation, because you basically have two levers that you can pull here: you either need to pull more conversations and deliver more first responses to customers, or you need to talk to the customers you’re already interacting with in your inbox, which is subsequent response time. We’re constantly trying to find the perfect balance between those two levers.
In addition to that, we’re also looking, at a broader team level, at how many back-and-forths there are in each of our conversations. We think that’s a pretty good proxy for how efficient our admin tools are and how efficient our customer onboarding is. Are we starting from square one with the customer, or is the customer coming to us with a very specific question about a very specific feature? Are they asking us something super broad, like, hey, I want to set up a user auto message, I have no idea where to start, which requires a full walkthrough of the UI? We think that’s a pretty good metric that measures both our team’s efficiency, how fluid the conversation with the customer is, and also how good our onboarding of customers is, how good our documentation is, all that stuff.
Sid: Right. That’s really interesting. The onboarding and documentation aspect is one that I was very curious about, because I hear this a lot from enterprise accounts and enterprise support implementations: they’re looking heavily into self-service tools and KCS to have that as a first point of contact for a lot of their customers. Does Intercom integrate with those, and are you looking at those as being one of the filters in there?
Ryan: Definitely. Automation has been a big thing for us this past year, and it’s going to be even bigger next year. We actually have somebody full-time on my team who is in charge of all the automation technology that we have in our suite. We are lucky enough that Intercom has a couple of key features that allow us to automate some of our conversations away. The first one is we have a help desk, so we can create articles about our product, put them in our help center, and customers can go there and query things themselves if they’re so inclined.
In addition to that, we also have this thing called article suggestions: if somebody hasn’t taken the time to look into our help docs and they ask a question of our support team, we have some ML that looks through all of our support documentation and suggests articles that might be relevant to the individual. If that doesn’t work, we have another option, this thing called answers, which allows us to look through all the different types of conversations that we get and create pre-populated answers to customers’ questions.
A super common one is somebody asking us, “Hey, how do I change my email and my profile picture?” What we can do is see that cluster of conversations, pick a bunch of different examples of how somebody would ask that question (hey, where do I change my email? How do I change my email address? Is there a way to change my email address? Trying to change my email address), cluster them together, and then create an answer that answers that question: if you want to change your email address, you can go here in this setting, and we link them to it.
With that, we can answer a bunch of the low-hanging fruit that would typically bog down our support team. With those two technologies, it may not seem like much, but we’re resolving 4% of the conversations that would have gone to CS with article suggestions and answers. We’re super proud of this, because 4% doesn’t seem like much; it’s a pretty small number. But when I think about it in actual dollar terms, that 4% allowed us to not backfill four people on our team, which, if you’re being super conservative with fully loaded salary, infrastructure, computers, Wi-Fi, and all that sort of stuff, basically equals $400,000 that we save, not just this year, but next year as well, and the year after that, and the year after that. We’re very proud of this 4% automated resolution rate, as we call it.
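The back-of-envelope math works out like this; the total volume, per-agent throughput, and $100k fully loaded cost below are our illustrative assumptions, chosen only so the output lines up with the figures Ryan quotes:

```python
def automation_savings(total_conversations: int, deflection_rate: float,
                       conversations_per_agent_per_year: int,
                       fully_loaded_cost_per_agent: float) -> float:
    """Annual savings from conversations resolved without an agent."""
    deflected = total_conversations * deflection_rate
    heads_avoided = deflected / conversations_per_agent_per_year
    return heads_avoided * fully_loaded_cost_per_agent

# Illustrative numbers: a 4% deflection rate on 1M conversations/year,
# with an agent handling 10,000 conversations/year at $100k fully loaded,
# avoids four headcount, or $400,000 per year, recurring.
savings = automation_savings(1_000_000, 0.04, 10_000, 100_000)
```

The recurring nature is the key point: the deflected volume, and therefore the unfilled headcount, repeats every year the automation stays in place.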
Sid: The three things that you talked about there, being able to suggest answers, getting more insight from what questions people are asking, and putting articles together based on that, I think in a lot of cases that’s what people are striving for. I think you guys are ahead of the game there in a certain sense. What does the future hold?
Ryan: Not to interrupt you, but we see that with some of our customers. We are in B2B, so most of our questions are going to be a little more complicated than how do I change my profile picture or email, but we have B2C customers where all they’re dealing with pretty much the entire day are these transactional questions. We see people resolving 30% of their conversations automatically using these two pieces of technology, which gets my mouth watering thinking about that for our team, but we’re not shooting for anything close to that anytime soon; we don’t think that’s realistic. We’re pretty hopeful that we can get this up to 6% or 7% next year.
Sid: While it’s ideal that the conversations resolve themselves with these articles, I think the other big part, and this is really hard to measure in terms of ROI, is the agents being able to share the knowledge amongst themselves, which probably improves onboarding and so on, because that knowledge is now freely available when they do get that interaction and have to handle it in person.
Ryan: I’m not really getting what you mean there. Can you explain it a little bit?
Sid: What I meant is, let’s say the customer wasn’t able to find the answer, or didn’t really use the self-help section to get the answer, and reached an agent. For a new agent to come on board and be able to get to the point where they can answer those questions, the fact that the knowledge is in this knowledge base, in this shared pool where they can access it, means they get to that answer quicker with the customers. There’s definitely value and a return on that investment, even if you do end up with an in-person interaction.
Ryan: Definitely, I’d say so. It’s nice: if you’re a new agent on the team, and you see that we’ve suggested three articles about something and the customer didn’t click on any of them, maybe your answer is there. If they clicked all three, then you know it probably isn’t in there. It does provide a little bit of a guidepost to help resolve the conversation, which is quite useful. You’re spot on there.
Sid: Cool. Going back to the question I was getting into earlier, you guys have done a lot, and you’ve grown really quickly. What does the future look like? What are some of the things on your mid-to-long-term roadmap, in terms of improvements and augmenting what you guys already have?
Ryan: The first one, [unintelligible 00:27:01] since we were just talking about it, is getting that rate of automated resolution up from the 4% where it is now to 6% or 7% by the end of next year. We have somebody thinking about this sort of stuff full-time now on my team, and I’m really looking for some big improvements there by the middle of next year. Very excited about that; I think that’s really cool.
Similarly, in the automation space, we released this thing called inbound custom bots. It’s not crazy new tech, it’s not a completely new concept; it’s basically the phone tree of old, where somebody will go into the messenger, start a new conversation, and within that conversation have to select what track they want to go to: I have a technical question, I have a billing question, I have a non-technical question. We haven’t been able to roll this out yet, because a lot of our reporting is home-baked, using a bunch of SQL queries in Redshift and our back-end tables.
We need to do some tweaking there to make sure that when we roll out inbound custom bots, it’s not breaking all of our metrics and reporting, which it would right now, unfortunately. We’re working on that, which is fun. It’s always fun, the tech debt that accumulates overnight when somebody releases a new product. You’re like, crap, we need to fix this. That’s a fun experience.
Thinking about inbound custom bots: right now, the way that we’re segmenting our customers is very much with a butcher’s knife. It’s entirely based on what segment they’re in, how much they’re paying us, what products they have, whether they have any of our premium products, their time zone; all those different things roll up into this segmentation strategy that routes different conversations to different sorts of experiences and inboxes in Intercom. Which is fine, but what it doesn’t allow us to do is get any customer input. The example that I like to use is that if your number one customer, the biggest customer that you have, writes in and says, “Hey, I just had a feature request about this thing, can you move this icon five pixels to the left? It’s sort of messing with me, and it’s right there,” that probably doesn’t need a two-hour response time. They can probably wait a day if they’re very clearly letting us know, hey, I just have a feature request. We don’t need to get back to them super quick there.
On the other hand, take somebody who’s on our early-stage program, which is fantastic, and gets them the whole suite of Intercom, but they’re a super small business, not paying us that much every single month. If somebody in that sort of bucket writes in and says, “Hey, I think your API is down,” they probably need a [inaudible 00:29:46] response time than the four hours or one business day that we’re currently providing.
Being able to take customer input and allow them to self-select how big an issue this is, that’s the next stage of our evolution. Taking that and mixing it with our current segmentation strategy across all of our different conversations is the next thing I’m really excited for, because it allows us to transition from that butcher’s knife to a little more of a scalpel, just on the customer experience side of things.
Sid: Absolutely. That definitely does sound like it would improve the experience overall in a big way, because you would know what that touch point would need to look like before you even engage. Right?
Ryan: Yes. There’s a whole bunch of agent efficiencies that will be gained there. Somebody writes in and it’s like, “Hey, I set up this message that was supposed to go to these sets of users. It didn’t go to this one individual. What gives there?” If we have this bot that basically goes through a flow, it’s like, “Hey, can you send us the link to the message and the user that you think it should have gone to?” That saves our team a couple of back and forth asking for that information, or allows them to go straight into the investigation about what actually went wrong, as opposed to having to collect information.
Sid: Interesting. No, definitely. Ryan, I think you have a great thought process here in terms of how you guys have got here, where you started with the analytics and what you have on the roadmap. Who are the people who influenced you, and where did you get all of this from? Are there any mentors that have influenced you in your career?
Ryan: I don’t want to just name-drop a bunch of Intercom people, because that’s what I’ve lived and breathed for the past four years. Our head of strategy, Des, is a pretty big inspiration. You can hear about him elsewhere; go to our blog and read about him. A book that Jeff Gardner, our former head of support, recommended to me when I first joined, though, was The Effortless Experience by Matt Dixon. That has been very much top of mind for me throughout my support experience journey, as well as the support operations journey: making sure that we’re doing the things that make it as lightweight as possible for the customer. When I first read that book, and when I was first joining Intercom, all we talked about was making internet business personal.
Ryan: This book was The Effortless Experience by Matt Dixon. When I first started at Intercom, our mission was to make internet business personal. Not having had a ton of time to think about that, and being in my front-line support role, I thought making internet business personal meant knowing your bank teller really well, or that if you screw up, you give them a gift card, or you send them a we’re-sorry card, something like that. As I continued to actually do the job and talk to customers, what I realized is that people don’t really give a shit about your gift cards.
The book really does a great job talking about this. If our API is down, people don’t really want a gift card right then and there to say you’re sorry. They just want to know: what impact is this having on me, what impact is this having on my customers, and when is it going to be resolved? That’s it. I guess the biggest thing that I was struggling with is that that seems like such a cold, hard way to view the interaction with the customer.
As I stayed at Intercom and heard people talk about this making-internet-business-personal slogan, this mantra, this mission, a little bit more, what I realized is that it’s about treating people like people, and with that, respecting their time. Respecting them as individuals, not treating them like cattle to be moved here and there, or thinking that a gift card is going to make everything okay. It’s realizing that we all make mistakes, but that when a mistake is made, you let them know. Let them know the API’s down, that we’re working on a fix, that it should be resolved in 30 minutes, and this is the impact it should have. Don’t send them cookies or cakes or give them a gift card or anything like that.
Sid: Don’t trivialize the impact or the experience that they’re having. Right? Own up to it and come up with a fix, which probably speaks louder than giving them something that doesn’t really mean a whole lot.
Ryan: Exactly, exactly. It’s like, “Is my gift card going to be able to buy API access to Intercom?” No. It’s that. Okay, so let’s just fix this thing and we can move on with our lives.
Sid: Interesting. That’s, again, a very good thought, a very good perspective. In fact, this is the second or third time that someone on our podcast has mentioned the book The Effortless Experience. I haven’t read it myself, but apparently I should go pick up a copy, because it has a lot of nuggets that people keep referring to.
Ryan: Yes. It’s quite good.
Sid: Ryan, thanks again for all your time. This was a great conversation. I’m sure our audience is going to get a lot out of it. Happy to have had you on the show. Thanks for joining.
Ryan: Thank you for having me, Sid. Good talking to you.
Sid: Right. I’m going to–
[00:36:35] [END OF AUDIO]