
Designing for users you cannot talk to

Three years of enterprise B2B design at VALK with almost no user research access, and the proxy methods that replaced it.

Two years ago, I wrote a user flow for a feature that I was genuinely proud of. Clean logic, thoughtful edge cases, good empty states. We shipped it. A few months later, I asked if anyone had used it. The answer from our account manager was something like: “I think one client mentioned it in passing. They said it was fine.”

Fine.

That was the most feedback I got on three weeks of work. And honestly, that’s pretty typical for what I do.

I’ve been the sole designer at VALK for over three years now. The product is used by portfolio managers, compliance officers, and traders at institutional financial firms: hedge funds, investment banks, and asset managers across fifteen-plus countries. These are real people sitting at real desks making real decisions with the interfaces I design. And with very rare exceptions, I have never spoken to a single one of them.

This is enterprise B2B design. The feedback loop doesn’t just close slowly; sometimes it doesn’t close at all.

The access problem

In consumer product design, the research process is something you can actually plan around. User interviews, usability testing, surveys, A/B tests, session recordings. You have direct access to the people using your product and many ways to observe their behavior.

Enterprise B2B breaks this completely.

When your users are bankers and fund managers at regulated financial institutions, reaching them is not a matter of posting a recruitment survey in a Slack community. There are layers of intermediaries: account managers, legal teams, compliance officers who are deeply skeptical of anything that involves external parties observing their workflows. The relationship between VALK and our clients is formal, often governed by contracts that make any kind of research request a complicated conversation.

And even when access is technically possible, the users themselves are not interested in talking to a designer. They are busy professionals. Their time costs money in a very literal sense. Sitting on a call to walk through their pain points with a 25-year-old designer in Istanbul is not how they want to spend an afternoon.

So you adapt. You find proxy methods. You build a different kind of research practice from whatever scraps are available.

What I actually use instead of user research

Support tickets are underrated. When I can get access to them, usually filtered through our CEO or the account managers, they are the closest thing I have to unfiltered user feedback. Not because they tell me what users want, but because they tell me where users are confused or stuck. A user filing a ticket is a user who already failed to figure something out on their own. That’s a signal. I started keeping a running document of recurring ticket themes, because patterns in support requests map very directly to design failures.
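That running document started as plain notes, but the idea behind it is mechanical enough to sketch. Here is a minimal, hypothetical version in Python: tag each ticket against keyword buckets and tally which themes recur. The theme names and keywords are illustrative inventions, not VALK’s actual taxonomy.

```python
from collections import Counter

# Hypothetical keyword buckets for recurring support-ticket themes.
# These themes and keywords are illustrative, not a real taxonomy.
THEMES = {
    "export_confusion": ["export", "download", "csv"],
    "permissions": ["access", "permission", "locked"],
    "navigation": ["find", "where is", "can't locate"],
}

def tag_ticket(text: str) -> list[str]:
    """Return every theme whose keywords appear in the ticket text."""
    lowered = text.lower()
    return [theme for theme, words in THEMES.items()
            if any(word in lowered for word in words)]

def theme_counts(tickets: list[str]) -> Counter:
    """Tally how often each theme shows up across a batch of tickets."""
    counts = Counter()
    for ticket in tickets:
        counts.update(tag_ticket(ticket))
    return counts
```

Calling `theme_counts(tickets).most_common()` surfaces the recurring themes, which is exactly the signal that maps to design failures: a theme that keeps climbing is a part of the interface people keep failing to figure out on their own.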

Feature requests are a different kind of signal. Their value is in what users are trying to do, not in the specific feature they’re asking for. When several institutional clients asked for a specific data export format, the surface request was about file types. The underlying need was about fitting VALK’s output into their existing internal reporting workflows. I only got there by reading the requests carefully and asking follow-up questions through the account managers, and that distinction changed how we approached the problem entirely. We didn’t just add export formats. We redesigned the data structure to be more flexible upstream.

Then there is over-the-shoulder observation during demos. This is probably the most valuable thing I have access to, and it happens accidentally. When the CEO or an account manager demos the product to a prospective client over a screen share call, sometimes I’m invited to watch. Not to present, just to observe. And in those moments, I watch how the person on the other end reacts. Where they lean forward. Where they look confused. What questions they ask. I’ve redesigned small but meaningful things based purely on watching someone hesitate over a UI element during a twenty-minute sales demo.

The CEO as a proxy for the user

In the absence of direct user access, the CEO became my most important research partner. I didn’t fully understand this when I first started at VALK; it became clearer over time.

He comes from a finance background. He understands the domain in a way that most designers, myself included, have to actively work to develop. When he looks at a feature and says “a portfolio manager would never think about it this way,” that’s not just an opinion. That’s accumulated domain knowledge I can’t get anywhere else.

I had to learn how to use this properly. Early on, I would show him designs and ask “does this look good?”, which was the wrong question entirely. Later I learned to ask things like “when someone is reviewing deal documentation, what’s the first thing they’re checking?” or “what would make a compliance officer trust this interface more?” Framing questions around user behavior and mental models, not visual preferences, produced much more useful answers.

It’s not perfect. A CEO’s perspective has its own biases: he naturally gravitates toward power users and edge cases, because those are the clients he talks to most. But it’s a real signal, and it became the closest thing I had to user research for a long time.

Heuristic evaluation became my foundation

When you don’t have users to test with, you fall back on principles. Nielsen’s heuristics, Fitts’s law, cognitive load theory, and established patterns for financial interfaces stopped being academic references and became my primary evaluation tools.

Before shipping anything significant, I do a structured self-audit. I go through the design and force myself to answer specific questions: Where is the user’s attention going first? Is there a clear visual hierarchy? If something goes wrong, does the interface communicate it clearly and suggest a recovery path? Is there anything that looks interactive but isn’t, or vice versa? Are we exposing complexity that the user doesn’t need at this stage?

This sounds obvious. Every designer learns these things. But there’s a difference between knowing heuristics and actually building a disciplined habit of applying them, especially when you’re working fast and there’s no one sitting next to you to catch your blind spots.

The financial domain adds extra dimensions to this. Data density is a real concern. Portfolio managers and traders are used to dense, Bloomberg terminal-style interfaces, and there’s a balance between keeping the information they want visible and overwhelming a less experienced user. Error handling matters more when a mistake could affect a transaction. Trust signals, visual consistency, precision in numbers and dates, clear audit trails: these are not nice-to-haves; they’re baseline requirements.

I’ve spent a lot of time thinking about what “trust” looks like in a financial interface. It’s partly visual (clean, professional, no unnecessary decoration), but it’s mostly about behavioral consistency. If the interface behaves predictably every time, users build trust in it even without consciously thinking about it. Surprise is the enemy of trust in enterprise software.

Competitive analysis as a form of research

VALK operates in a space with a few direct competitors, companies like Securitize and Tokeny working on similar infrastructure for private capital markets. I’ve spent real time studying their interfaces, not to copy them, but to understand the design decisions they’ve made and why.

Competitive analysis is not a substitute for user research, but it’s informative in a specific way: these products are trying to solve similar problems for similar users, and how they’ve chosen to solve them is a signal. When two competitors make the same structural decision (how they handle investor onboarding, how they present cap table data), it suggests that decision was probably informed by user feedback somewhere in their process. When they diverge dramatically, it’s worth understanding what’s driving the difference.

I’m also looking for patterns in what these products don’t do well. Gaps that exist across multiple competitors usually exist because the problem is genuinely hard, and that’s worth understanding before you assume you can solve it easily.

When I actually got to talk to a user

It happened maybe four or five times over three years. Every single time, it was illuminating in ways I did not expect.

The most memorable was a call with someone who managed fund operations at a mid-sized asset manager. We had added a reporting view that I was proud of: clean layout, good data hierarchy, exportable. She mentioned almost immediately that she never used it. I asked why. She said her team had a specific internal format they reported to their LPs in, and VALK’s view didn’t match it, so they were exporting raw data and reformatting it themselves in Excel every time.

I had designed something that looked great and solved what I thought the problem was. She had a completely different problem. And she had adapted her workflow around my design without ever telling anyone it was costing her team time every month.

That conversation changed a feature direction. But more than that, it reminded me that I am almost certainly wrong about something right now, something I shipped three months ago, and I won’t know about it for a long time.

This is the fundamental discomfort of enterprise B2B design. You carry assumptions you cannot fully validate, and you have to make peace with that while still trying to get closer to correct.

The assumption trap

Every design decision in a context like this is built on a stack of assumptions. Assumptions about what users are trying to do, about their mental models, about what they find confusing. Without research, you cannot fully validate those assumptions. You can only minimize the most dangerous ones.

What I’ve learned to do is make my assumptions explicit. When I’m designing something, I write down the assumptions underneath the decisions: “This design assumes that the primary user reviewing deal documents is doing so before a committee meeting, not during it. This design assumes that the user is familiar with basic cap table concepts. This design assumes that users will use this feature repeatedly, not once.” Writing them down forces clarity. It also means that when something doesn’t work, you can sometimes trace it back to a specific assumption that was wrong.
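The practice above is just a written log, but the shape of a useful entry is concrete enough to sketch. Here is a minimal, hypothetical Python version: each assumption is recorded as a falsifiable statement, tied to the decision that rests on it, with a risk level and any evidence collected for or against it. The fields and example entries are illustrative, not my actual records.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an explicit-assumptions log.
# Field names and risk levels are an invented convention.
@dataclass
class Assumption:
    statement: str   # the assumption, written as a falsifiable claim
    decision: str    # the design decision that rests on it
    risk: str        # cost if wrong: "high" | "medium" | "low"
    evidence: list[str] = field(default_factory=list)  # signals for/against

def riskiest_first(log: list[Assumption]) -> list[Assumption]:
    """Sort so high-risk assumptions with the least evidence come first."""
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(log, key=lambda a: (order[a.risk], len(a.evidence)))
```

Sorting the log this way answers the practical question: which unvalidated assumption should the next scrap of access, a demo observation or a forwarded ticket, be spent on first?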

It doesn’t eliminate the problem. But it makes the problem more legible.

Three years, four countries, same product

I’m writing this in Istanbul, in the last few weeks before my partner Anna and I move to Los Angeles. A strange moment to reflect: three years on the same product, from Saint Petersburg to Phuket to Bali to Turkey, always working on the same interface for users I mostly couldn’t talk to.

Remote design in a B2B context is a particular kind of isolation. You’re away from the team, away from the users, away from the industry events where you might run into people who use products like yours. You’re designing in a room with your own assumptions for company.

I’ve gotten better at enterprise UX precisely because I had to develop more rigorous internal practices: heuristic evaluation, structured competitive analysis, extracting signal from indirect sources, and the discipline to make assumptions explicit. These things compensated, partially, for the access I didn’t have.

But I’m also aware that the best version of this work would involve more direct user contact. Even one research session per quarter would change what I’m able to do. It’s something I want to push for more actively as we keep growing.

The assumption that I can’t talk to users is itself an assumption worth testing more often.

Originally published on kargaev.me. Imported to blog.deeflect.com archive.