Manual data work is the bane of my existence, but I’m afraid to leave and find the same thing somewhere else.
Who said this?
Based on the quote, a reasonable guess would be:
- A data professional, perhaps a data scientist or business analyst.
- Someone who works at a small company, maybe fewer than 50 people, who is stuck with manual data work until there's more budget to invest.
You’d have good cause to make the first guess:
Data teams are overwhelmed with data quality issues.
The signs are everywhere, if you know what to look for.
Like this thread from Reddit with 127 upvotes: “Clean data doesn’t exist. Stakeholders nit-picking shades of color on charts.” The mismatch between how much the work matters and how much of it there is can be overwhelming. Thought leaders in data talk constantly about data quality issues.
But that’s not the worst of it. And the quote above was not from a data pro.
Now, you’d also have good cause to guess a small company. But…
Big companies have serious data issues too.
Julia Gilinets (CRO of Pocus) describes her experience across multiple organizations this way: “I needed 5-10 different tools to gather any of the data I needed to do my job well in any given day. … Why are we chasing down customer data everywhere? It’s frustrating and inefficient.”
Similarly, Cassidy Shield (Chief Growth Officer of Refine Labs) points out that data quality is the #1 problem preventing sales and marketing success.
The problem goes well beyond CRM data quality - it’s about the entire GTM stack. The CRM is now networked with so many other tools: marketing automation, CS, ERP/billing, sales automation, even product data warehouses. The inputs keep multiplying.
We’re closer to the source of the “manual data work” quote. But it wasn’t from someone in sales or marketing.
It was a Head of Customer Success at a global unicorn.
I have worked in two hypergrowth unicorns that had more than 40 people on their BI and data teams, and less than two on Go-To-Market operations. With an imbalance like this, you get sluggish GTM activity, and sloppy handoffs from marketing to sales, from sales to CS, from sales to finance, from product to sales, from product to CS, etc.
Naturally, it’s difficult for under-supported GTM teams to perform well. And in the current economic environment, there is no more hiding inefficient revenue creation.
Which brings us to our main point:
The Unicorns are Dying.
A list of unicorns with layoffs in the last year reads like a bad Dr. Seuss poem.
Gong, ThoughtSpot, Clari, Workato. Qualtrics, Carta, Verily. Hopin, Klarna, Beamery, Paddle. Moglix, ShareChat, PluralSight, Capsule, Intercom, Convoy, Stripe, Brex, Scale AI, Twilio.
And there are more.
Some of these even recently raised substantial rounds. Brex raised $100M in January of last year, then let go of 11% of its workforce in October. The hypergrowth SaaS layoffs are so extensive that some are saying “it’s time for cockroaches, not unicorns.”
The number of new unicorns “took a nosedive” in Q3 of last year. Some theorize half of all US unicorns may fail within the next two years. In fact, according to Forbes, 44 unicorn founders have lost $96B of personal fortune in the last year alone.
Why are the unicorns dying?
It could be that they took too much cash.
It could be that they persisted in a “growth at all costs” mentality (and VCs may be partly to blame).
It could be, as with FTX, due to fraud.
It could be that they never cracked product-market fit (PMF).
But in many cases, the canary in this coal mine seems to be the fundamental lack of data support for sales, marketing and customer success teams.
When you find yourself wondering what they did with all that cash, one thing is certain: it wasn’t “shore up their GTM operations.”
Can you really create efficient revenue with siloed, messy systems?
Over the last 6-8 months, Go-To-Market efficiency has become the “pulse” of the entire SaaS industry. All tech investors, and therefore all tech startups, are now more concerned with burn rate and CAC reduction than anything else.
Most tech companies beyond a certain stage are so overwhelmed with GTM data issues that their sales, marketing and CS teams just feel swamped.
But there’s a fundamental imbalance in how tech companies are structuring their teams.
GTM inefficiency is partly a structural design problem.
Think about the architecture of every SaaS team. On one side you have Product. On the other side, you have GTM. Within GTM, you’ve got sales, marketing and CS.
Do you know any SaaS teams who serve accurate customer data to each of these teams, within the tools they use every day?
It’s rare at best. At worst, the problem is so ubiquitous as to be laughable.
Instead, at a famous global fintech unicorn, marketers are asked to learn SQL to query a data warehouse for audience segments. Sales teams wait on Slack replies from support teams (who have to check two different systems for compliance and application processing) just to tell angry customers the status of their application.
At one of the most renowned B2B data vendors worldwide (300 employees), US sales teams are so fed up with inaccurate data served to them by offshore ops teams that they're nearly in open revolt. The ops team doesn't seem to prioritize a solution. In reality, those ops teams are themselves underwater fighting data fires.
At a 100-person edtech startup, billing was based on product usage, which led to a heavily over-engineered dynamic pricing engine. The system was so prone to breakage that data leadership called it an “insane problem”: any downtime cost $20,000 in revenue per day, all from a single facet of customer data.
Everywhere I turn, customer-facing teams are exhausted, frustrated, doing tons of manual work, and, at the end of the day, still cannot serve their customers well.
Go-to-market efficiency lives or dies on the availability and quality of customer data.
The three data programs of growth stage SaaS companies:
In order to course-correct, we need to understand how data becomes valuable to a SaaS team. There are three primary ways data delivers value:
- Product Automation & Analytics: The setup, study and triggered workflows related to actions taken by users in the app.
- GTM Data Quality & Automation (Customer Data Automation, for short): The integration of customer-facing systems, and the workflows and cleansing mechanisms on top.
- Business Intelligence: The pipelines and visualization layers used to enable million-dollar bets by executive teams.
The first is absolutely the realm of the Data team when it sits under Product.
The second currently has no real owner. Ops teams can produce reports just fine from CRMs like Salesforce, but those reports aren’t holistic, so an entire ETL industry sprang up to move data from GTM tech to the warehouse. That didn’t solve data quality, so we got Reverse ETL to bring the data back into GTM tech.
And the third, Business Intelligence, is a nightmare scenario: accurate, fresh data from GTM tech is nigh impossible to obtain, data teams are frustrated (cf. the Reddit thread above), and “business requirements” for dashboards go well beyond reason in their hyper-detailed demands.
Let’s get this straight:
- We have heavy investment in data teams and stacks. So heavy that if you go to LinkedIn Sales Navigator right now, filter to US Software companies with 50-1,000 employees, you will see only 6,000 people with any variation of GTM operations titles. You will see over 40,000 with data/analytics/BI titles.
- But data teams are suffering from data quality issues, a lack of perceived ROI or value, and whiplash in changing BI requirements (lack of “domain expertise” doesn’t help — meaning, it’s hard to know how to work with GTM data when you don’t come from that background).
- Ops teams are utterly overwhelmed with tools to implement and integrate, processes to enforce, analyses and metrics to produce, and never a headcount in sight.
GTM teams (sales, marketing, CS, even finance) suffer because, despite massive investments in data, everyone on the data value chain above is largely miserable, or at least needs some serious help.
“The fat data layer in B2B SaaS”
In September of last year, Sacra wrote this incredible article. They identify a massive uptick in investment in the data stack, noting that many of these tools don’t actually engage with customers but are essentially middleware and data management.
We have “human middleware” too, an ugly term but an important one to wrestle with. Another article, perhaps.
In that article, Sacra does not, unfortunately, offer any guidance on what B2B SaaS companies can do to address this ever-increasing data stack bloat.
I will attempt to do that here, with the big caveat that I’m totally unqualified to make recommendations to executive teams. But this is the situation as I see it, and so far it has resonated with data leaders like Pedram Navid, Richard Makara, and Joe Reis.
How do we streamline data to increase GTM efficiency?
First, create a customer data model.
What are the key attributes of a customer and the related entities? Account, contact, opportunity, contract, billing account, user, user account, ticket. Mapping these out, first as concepts on paper and then as they relate to the various GTM and product systems, is a vital discipline and the best first step you can take.
This isn’t as esoteric as it sounds. The customer doesn’t view themselves as five pieces across five different systems. But that’s how they are treated, if your GTM teams don’t have the data in their daily-use systems to get context and engage with the customer appropriately. This problem is especially obvious in CS teams (harkening back to our original quote inspiration), but it’s true across the board.
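To make the exercise concrete, here is a minimal sketch of what such a customer data model might look like on paper. All entity names, fields, and system mappings here are illustrative assumptions, not a prescribed schema; your own model should be derived from the systems you actually run.

```python
# Illustrative only: a skeletal customer data model showing how one customer
# maps onto identifiers held in different GTM and product systems.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    crm_id: str                       # e.g. the CRM's account record id
    billing_id: Optional[str] = None  # the same customer in billing/ERP
    product_id: Optional[str] = None  # the same customer's workspace in-app
    name: str = ""

@dataclass
class Contact:
    email: str
    account_crm_id: str               # ties the person to their account

@dataclass
class Opportunity:
    crm_id: str
    account_crm_id: str
    stage: str
    amount: float = 0.0
```

Even a toy model like this forces the key question: which system is the source of truth for each identifier, and how do the others reference it?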
To do this, you’ll need to identify an “owner” or a couple owners of customer data. Likely, if you have RevOps, they’ll be a crucial voice in this initiative. But your head of data, analytics, or BI should also be at the table. In fact, it’s an area of expertise that data pros have (data modelling and data architecture), and it’s a great way for data teams to add immediate value.
If you skip this step, you’ll be fixing data flow issues for a long time.
Then, identify the right solution for unifying customer data according to the model.
If you solve for the quality of the data sync across the stack you can actually solve most of the data quality issues (you still need solid process adherence, of course, but that'll never go away).
By and large, there hasn't been any serious innovation in integration technology (ETL, iPaaS, etc.) in a decade or so. Connectors are commoditized and cheap to build - which is why many iPaaS leaders publish thousands of “connector pairing” pages each week.
A lot of people will argue that middleware, or “another tool, can’t be the answer” (a direct quote from a conversation with a data leader).
And the scope of the problem merits innovative solutions. The problem of data quality in GTM stacks costs companies their entire livelihood. iPaaS and ETL and reverse ETL clearly haven’t solved the issue. It's crippling unicorns we thought were so stable they'd go straight to IPO.
The underinvestment in GTM ops data infrastructure needs to change.
Dig into critical features like stateful sync, global deduplication, sync vs. integration, data lineage, and data quality monitoring.
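Of those features, global deduplication is the easiest to illustrate. Below is a toy sketch of the idea: collapsing contact records arriving from multiple GTM systems onto one normalized key. The email-based key and survivorship rule are simplifying assumptions; production systems layer fuzzy matching and richer merge policies on top.

```python
# A toy sketch of global deduplication across GTM systems.
# Assumption: a normalized email is a good-enough match key for the demo.
from collections import defaultdict

def normalize_email(email: str) -> str:
    return email.strip().lower()

def dedupe(records: list[dict]) -> dict[str, dict]:
    """Merge records sharing a normalized email; last non-empty value wins."""
    merged: dict[str, dict] = defaultdict(dict)
    for rec in records:
        key = normalize_email(rec["email"])
        for field, value in rec.items():
            if value:  # simple survivorship rule: keep non-empty values
                merged[key][field] = value
        merged[key]["email"] = key
    return dict(merged)

# Two systems hold the same person with different completeness:
crm = [{"email": "Pat@Example.com", "phone": "", "source": "crm"}]
marketing = [{"email": "pat@example.com", "phone": "555-0100", "source": "marketing"}]
unified = dedupe(crm + marketing)
# One record survives, enriched with the phone number from marketing.
```

The point of the sketch: dedup quality depends entirely on the match key and the survivorship rules, which is exactly why these features deserve scrutiny when evaluating a sync tool.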
Restructure teams to support GTM - intentionally.
Tools aside, get your data teams actively involved in GTM support. Train data engineers on Salesforce, HubSpot, and NetSuite. They will bring excellent insight to the table on how to navigate the nuances of the inherited data structures within the GTM tech stack.
As Pedram Navid put it on The Distributed Truth podcast: “Data leaders need to attach to business value.”
These days, that means enabling GTM efficiency.
Little else matters.
Don’t wait until it’s too late.
The situation as it stands is crushing the people who work in data, in GTM ops, and in Go-To-Market teams themselves.
I had a brief but poignant interaction recently with Jay Cuthrell (disclaimer: Jay’s views here do not reflect the views of his employer, IBM).
I asked Jay what his observations were about GTM data quality’s impact on revenue teams within unicorns. Jay pointed out that “it’s not uncommon for a unicorn to experience turnover in key team members that created heroic [data] pipelines.” The result is undocumented, entangled systems - in other words, vast amounts of technical debt and poor data quality.
In Jay’s words: “Result: You chase waterfalls in revenue, mirages of margin, and get distracted… squandering time through losses from opportunity cost that compounds when you are literally sticking a wet finger in the air to gauge GTM signal.”
So we’re left with our original case in point:
"Manual data work is the bane of my existence. But I'm afraid to leave and find the same thing somewhere else."
Let’s change this before our companies fall prey to the same issues as the last batch of SaaS darlings.
Evan Dunn runs growth marketing at Syncari. Prior to this, Evan spent a decade in a mix of performance marketing, AI product leadership, and analytics consulting. Connect with him on LinkedIn or Twitter.