Data

The Unicorns Are Dying: How Misguided Data Team Structure Inhibits Growth

Manual data work is the bane of my existence, but I’m afraid to leave and find the same thing somewhere else.

Who said this?

Based on the quote, a reasonable guess would be:

  1. A data professional, perhaps a data scientist or business analyst.
  2. Someone who works at a small company, maybe fewer than 50 people, who is stuck with manual data work until there is more budget to invest.

You’d have good cause to guess #1:

Data teams are overwhelmed with data quality issues.

The signs are everywhere, if you know what to look for.

Like this thread from Reddit with 127 upvotes: “Clean data doesn’t exist. Stakeholders nit-picking shades of color on charts.” The mismatch between significance and workload is overwhelming. Thought leaders in data talk constantly about data quality issues.

But that’s not the worst of it. And the quote above was not from a data pro. Now, you’d also have good cause to guess a small company. But…

Big companies have serious data issues too.

Julia Gilinets (CRO of Pocus) describes her experience, spanning across multiple organizations, like this: “I needed 5-10 different tools to gather any of the data I needed to do my job well in any given day… Why are we chasing down customer data everywhere? It’s frustrating and inefficient.”

Similarly, Cassidy Shield (Chief Growth Officer of Refine Labs) points out that data quality is the #1 problem preventing sales and marketing success.

The problem goes well beyond CRM data quality: it's about the entire GTM stack. CRM is now connected with other tools, especially marketing automation, CS, ERP/billing, sales automation, and even product data warehouses.

We’re getting closer to the source of the “manual data work” quote. But it wasn’t from someone in sales or marketing.

It was from the Head of Customer Success at a global unicorn.

I have worked in two hypergrowth unicorns that had more than 40 people on their BI and data teams, and fewer than two on Go-To-Market operations. With an imbalance like this, you get sluggish GTM activity and sloppy handoffs: from marketing to sales, from sales to CS, from sales to finance, from product to sales, from product to CS, etc.

Naturally, it’s difficult for under-supported GTM teams to perform well. And in this current economic environment, there is no more room for hiding inefficient revenue creation.

Which brings us to our main point:

The Unicorns are Dying.

A list of unicorns with layoffs in the last year reads like a bad Dr. Seuss poem.

Gong, ThoughtSpot, Clari, Workato. Qualtrics, Carta, Verily. Hopin, Klarna, Beamery, Paddle. Moglix, ShareChat, PluralSight, Capsule, Intercom, Convoy, Stripe, Brex, Scale AI, Twilio.

And there are more.

Some of these raised substantial rounds in the last few months. Brex raised $100M in January of last year, then let go of 11% of its workforce in October. The hypergrowth SaaS layoffs are so extensive that some are saying "it's time for cockroaches, not unicorns."

The number of new unicorns “took a nosedive” in Q3 of last year. Some theorize half of all US unicorns may fail within the next two years. In fact, according to Forbes, 44 unicorn founders have lost $96B of personal fortune in the last year alone.

But why? Why are the unicorns dying?

It could be that they took too much cash.

It could be that they persisted in a “growth at all costs” mentality (and VCs may be partly to blame).

It could be, as with FTX, due to fraud.

It could be that they never cracked product-market fit (PMF).

But in many cases, the canary in this coal mine seems to be the fundamental lack of data support for sales, marketing and customer success teams.

When you find yourself wondering what they did with all that cash, one thing is certain: it didn't go toward shoring up their GTM operations.

Can you really create efficient revenue with siloed, messy systems?

Over the last 6-8 months, Go-To-Market efficiency has become the "pulse" of the entire SaaS industry. All tech investors, and therefore all tech startups, are now concerned with burn rate and CAC reduction more than anything else.

Most tech companies beyond a certain stage are so overwhelmed with GTM data issues that their sales, marketing and CS teams just feel swamped.

But there’s a fundamental imbalance in how tech companies are structuring their teams.

GTM inefficiency is partly a structural design problem.

Think about the architecture of every SaaS team. You’ve got Product on one side and GTM on the other. Within GTM, you’ve got sales, marketing and CS.

Do you know any SaaS teams who serve accurate customer data to each of these teams, with the tools they use every day?

It’s rare at best. At worst, the problem is so ubiquitous as to be laughable.

Instead, at a famous global fintech unicorn, marketers are asked to learn SQL to query a data warehouse for audience segments, and sales teams have to wait for Slack replies from support teams (who must check two different systems, one for compliance and one for application processing) just to tell angry customers the status of their applications.

At one of the most renowned B2B data vendors worldwide (300 employees), US sales teams are so fed up with inaccurate data served to them by offshore ops teams that they’re nearly in open revolt. The ops team doesn’t seem to prioritize a solution. In reality, those ops teams are fighting data fires themselves.

At a 100-person edtech startup, billing was based on product usage, which led to a heavily over-engineered dynamic pricing engine. The system was so prone to breakage that the data leadership called it an "insane problem," and every day of downtime cost $20,000 in revenue, all from a single facet of customer data.

Everywhere I turn, customer-facing teams are exhausted, frustrated, doing tons of manual work, and, at the end of the day, still cannot serve their customers well.

Go-to-market efficiency lives or dies on the availability and quality of customer data.

The three data programs of growth-stage SaaS companies:

In order to course-correct, we need to understand how data becomes valuable to a SaaS team. There are three primary ways data is leveraged for benefit:

  1. Product Automation & Analytics: The setup, study and triggered workflows related to actions taken by users in the app.
  2. GTM Data Quality & Automation (Customer Data Automation, for short): The integration of customer-facing systems, and the workflows and cleansing mechanisms on top.
  3. Business Intelligence: The pipelines and visualization layers used to enable million-dollar bets by executive teams.

The first is absolutely the realm of the Data team when it sits under Product.

The second currently has no real owner: ops teams make reports out of their CRMs like Salesforce, but these aren't holistic. We've got an entire ETL industry that exists to move data from GTM tech to the warehouse, but that doesn't solve data quality issues, so Reverse ETL emerged to bring the data back to GTM tech.
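To make the round-trip concrete, here is a minimal sketch of that ETL/Reverse ETL loop. Every function name below is a hypothetical stand-in, not a real vendor API; the point it illustrates is that a record comes back exactly as it left, so quality problems survive the trip.

```python
# All names below are hypothetical stand-ins, not a real vendor API.

def extract_from_crm():
    """The 'E' of ETL: pull raw records out of the CRM."""
    return [{"id": "001", "email": "ada@example.com", "plan": "pro"}]

def load_to_warehouse(rows):
    """The 'L' of ETL: land rows in the warehouse, quality issues included."""
    return {row["id"]: row for row in rows}

def reverse_etl(warehouse, write_back):
    """Reverse ETL: push warehouse rows back into a GTM tool."""
    for row in warehouse.values():
        write_back(row)

synced_to_crm = []
reverse_etl(load_to_warehouse(extract_from_crm()), synced_to_crm.append)
# The record makes the full trip untouched: movement alone cleans nothing.
```

Nothing in the loop validates, deduplicates, or enriches the record, which is exactly the gap the second program (Customer Data Automation) is meant to fill.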

And the third, dashboards (Business Intelligence), is a nightmare scenario: accurate, fresh data from GTM tech is nearly impossible to gather, which frustrates data teams (cf. the Reddit thread above). At the same time, "business requirements" for dashboards go well beyond reason in the hyper-detailed nature of requests.

Let’s get this straight:

  1. We see heavy investment in data teams and data stacks. If you go to LinkedIn Sales Navigator right now and filter to US software companies with 50-1,000 employees, you will see only 6,000 people with any variation of GTM operations titles. You will see over 40,000 with data/analytics/BI titles.
  2. Data teams are suffering from data quality issues, a lack of perceived ROI or value, and whiplash from changing BI requirements (the lack of domain expertise doesn't help: it's hard to work with GTM data when you don't come from that background).
  3. Ops teams are overwhelmed with tools to implement and integrate, processes to enforce, and analyses and metrics to produce, with no added headcount in sight.

GTM teams (sales, marketing, CS, even finance) suffer because, despite massive investments in data, everyone in the data value chain above is largely miserable, or at least needs help.

“The fat data layer in B2B SaaS”

In September of last year, Sacra wrote an incredible article. They identify a massive uptick in investment in the data stack, noting that many of these tools don't actually engage with customers but are essentially middleware and data management.

We have “human middleware” too, an ugly term but an important one to wrestle with. Another article, perhaps.

In that article, Sacra does not, unfortunately, offer any guidance on what B2B SaaS companies can do to address this ever-increasing data stack bloat.

I will attempt to do that here, with the big caveat that I'm totally unqualified to make recommendations to executive teams. But this is the situation as I see it, and so far it has resonated with data leaders like Pedram Navid, Richard Makara, and Joe Reis.

How do we streamline data to increase GTM efficiency?

First, create a customer data model.

What are the key attributes of a customer and the related entities? Account, contact, opportunity, contract, billing account, user, user account, ticket. Mapping these out, first as concepts on paper and then as they relate to the various GTM and product systems, is vital, and it's also the best first step you can take.

This isn’t as esoteric as it sounds. The customer doesn’t view themselves as five pieces across five different systems. But that’s how they are treated if your GTM teams don’t have the data in their daily-use systems to get context and engage with the customer appropriately. This problem is especially obvious in CS teams (harkening back to our original quote inspiration), but it’s true across the board.
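To make the entity map concrete, here is a minimal sketch of those concepts as Python dataclasses. The entity names and fields are illustrative assumptions, not a prescribed schema; the point is that one customer concept carries several system-specific IDs.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative entities only; names and fields are assumptions, not a schema.

@dataclass
class Account:
    name: str
    # One customer, several system-specific IDs: the root of the silo problem.
    crm_id: Optional[str] = None      # e.g. the CRM's record ID
    billing_id: Optional[str] = None  # e.g. the ERP/billing record ID
    product_id: Optional[str] = None  # e.g. the product warehouse key

@dataclass
class Contact:
    email: str
    account: Account  # every contact resolves to one account

@dataclass
class Opportunity:
    account: Account
    stage: str
    amount: float
```

Even this toy version surfaces the mapping questions (which system is the source of truth for each ID?) that the paper-and-pencil exercise is meant to answer.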

To do this, you’ll need to identify an “owner” or a couple of owners of customer data. If you have RevOps, they should likely play a critical role in this initiative. If you’ve got a Head of Data, Head of Analytics, or Head of BI on your team, they should definitely be involved. In fact, data modeling and data architecture are the data team’s area of expertise, so it’s a great way for them to add immediate value.

If you skip this step, you’ll be fixing data flow issues for a long time.

Then, identify the right solution for unifying customer data according to the model.

If you solve data quality issues across your data stack, you’ll be halfway done (you still need solid process adherence, of course, but that need will never go away).

By and large, there hasn’t been any serious innovation in integration technology (ETL, iPaaS, etc.) in a decade or so. Connectors are commoditized and cheap to build—which is why many iPaaS leaders publish thousands of “connector pairing” pages each week.

A lot of people will argue that middleware, or “another tool, can’t be the answer” (a direct quote from a conversation with a data leader).

The problem of data quality in GTM stacks costs companies their entire livelihood. iPaaS, ETL, and reverse ETL clearly haven’t solved the issue. It’s crippling unicorns we thought were so stable they’d go straight to IPO.

The underinvestment in GTM ops data infrastructure needs to change.

Dig into critical features like stateful sync, global deduplication, sync vs. integration, data lineage, and data quality monitoring.
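As one example, here is a naive sketch of what "global deduplication" involves: merging records that describe the same customer across systems. The matching rule (normalized email) and the merge policy (last non-empty value wins) are simplifying assumptions; real implementations handle fuzzy matching, survivorship rules, and conflict resolution.

```python
def normalize_email(email):
    """Naive match key: lowercased, trimmed email (a simplifying assumption)."""
    return email.strip().lower()

def dedupe(records):
    """Merge records sharing a match key; the last non-empty value wins."""
    merged = {}
    for rec in records:
        base = merged.setdefault(normalize_email(rec["email"]), {})
        for key, value in rec.items():
            if value:  # skip empty fields so they don't clobber real data
                base[key] = value
    return list(merged.values())

# The same customer as seen by two hypothetical systems:
crm = [{"email": "Ada@Example.com", "name": "Ada Lovelace", "phone": ""}]
marketing = [{"email": "ada@example.com", "name": "", "phone": "555-0100"}]
golden = dedupe(crm + marketing)  # one merged "golden record"
```

Even this toy shows why the feature list matters: without a shared match key and a merge policy, every system keeps its own partial copy of the customer.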

Restructure teams to support GTM—intentionally.

Get your data teams actively involved in GTM support. Train data engineers on Salesforce, HubSpot and NetSuite. They will bring excellent insight to the table in terms of how to navigate the nuances of the inherited data structures within the GTM tech stack.

As Pedram Navid put it on The Distributed Truth podcast: “Data leaders need to attach to business value.”

These days, that means enabling GTM efficiency.

Little else matters.

Don’t wait until it’s too late.

The situation as it stands is crushing the people who work in data, in GTM ops, and in Go-To-Market teams.

I had a brief but poignant interaction recently with Jay Cuthrell (disclaimer: Jay’s views here do not reflect the views of his employer, IBM).

I asked Jay what his observations were about GTM data quality’s impact on revenue teams within unicorns. Jay pointed out that “it’s not uncommon for a unicorn to experience turnover in key team members that created heroic [data] pipelines.” The result: undocumented, entangled systems; in other words, vast amounts of technical debt and poor data quality.

The result, according to Jay: “You chase waterfalls in revenue, mirages of margin, and get distracted… squandering time through losses from opportunity costs that compound when you are literally sticking a wet finger in the air to gauge GTM signal.”

So we’re left with our original case in point:

“Manual data work is the bane of my existence. But I’m afraid to leave and find the same thing somewhere else.”

Let’s change this before our companies fall prey to the same issues as the last batch of SaaS darlings.


Evan Dunn runs growth marketing at Syncari. Prior to this, Evan spent a decade in a mix of performance marketing, AI product leadership, and analytics consulting. Connect with him on LinkedIn or Twitter.