A 5-step approach to improve data platform experience
May 16, 2025
•
Frederic Vanderveken
Boost data platform UX with a 5-step process: gather feedback, map user journeys, reduce friction, and continuously improve through iteration
In data-centric organizations, data is integrated into everyday workflows to accelerate decision-making, improve operational efficiency, and unlock new opportunities for innovation and value creation. At the heart of this integration lies the data platform. It’s the foundation that enables teams across the enterprise to access, process, and transform data into insights.
As data maturity grows, so does platform adoption, leading to an increasing focus on the experience of its users. This experience, often referred to as user experience (UX) or developer experience (DevEx, DX), determines how effectively data practitioners can navigate tools and processes to deliver value.
Research consistently shows that investing in a high-quality user and developer experience not only boosts productivity but also drives measurable business outcomes [1–5]. As more users engage with the platform and its influence on business outcomes becomes clearer, a critical question arises: how do you establish a systematic process to continuously improve the data platform experience?
Three principles for a great platform experience
Designing a great platform experience means looking beyond technical capabilities and focusing on how users get work done. Whether they’re discovering data, building pipelines, training models, or monitoring production systems, the goal is to make those tasks more productive, impactful, and satisfying.
Research has shown that the developer experience can be distilled into three core dimensions [3]:
Feedback loops: fast iteration and immediate validation help users learn, build, and debug quickly. Best practices:
Optimize CI/CD processes to reduce build and test times.
Streamline human handoffs, e.g., by establishing SLAs for code reviews and approvals.
Reduce support dependency by granting developers sufficient debugging access.
Cognitive load: clear interfaces and streamlined processes reduce mental overhead. Best practices:
Standardize tooling and frameworks.
Provide intelligent defaults and reduce unnecessary choices left for the user.
Keep documentation concise, up-to-date, and discoverable.
Flow state: reliable tools and autonomy allow users to stay focused and engaged. Best practices:
Protect focus time and reduce context-switching.
Offer self-service capabilities to empower users to build without the need for approvals.
Align work with meaningful challenges to explore new ideas and energise developers.
These dimensions provide a model for understanding the platform experience. By embedding these best practices into the platform design principles, high user satisfaction can be achieved from the start.
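For instance, the "intelligent defaults" practice above could surface as a scaffolding helper that only asks users for what genuinely varies. The helper and its parameters below are hypothetical, a minimal sketch rather than any specific platform's API:

```python
from dataclasses import dataclass

@dataclass
class PipelineConfig:
    """Hypothetical pipeline configuration with platform-chosen defaults."""
    name: str
    source_table: str
    schedule: str = "0 6 * * *"            # default: run daily at 06:00
    compute_size: str = "small"            # default to the cheapest compute tier
    retries: int = 2                       # platform-recommended retry policy
    alert_channel: str = "#data-platform-alerts"

def create_pipeline(name: str, source_table: str, **overrides) -> PipelineConfig:
    """Users supply only the essentials; everything else falls back to defaults."""
    return PipelineConfig(name=name, source_table=source_table, **overrides)

# A user makes only two decisions; the platform handles the rest,
# while any default can still be overridden when needed.
pipeline = create_pipeline("daily_orders", source_table="raw.orders")
print(pipeline)
```

The design choice is that defaults reduce cognitive load without removing autonomy: every option remains overridable, but nobody is forced to decide on it.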
However, even the most thoughtfully designed platforms won’t anticipate every need or eliminate all friction. Users will always encounter challenges, identify gaps, or come up with new ideas to improve their workflows. That’s why it’s important to build a systematic approach to capture, analyze, and act on user feedback.
The responsibility of the user experience team
The user experience team (or sometimes just a dedicated person or the product owner) should be the owner of the process to continuously track and address user needs. This team acts as the engine behind a better platform experience and has two essential goals:
Ensuring the platform team works on the right topics to improve the user experience.
Building and maintaining trust between the platform team and its users.
This engine doesn’t implement the solutions. The ownership of the solutions should remain with the platform development team. Instead, the engine identifies the most important user challenges, communicates them to the development team, and makes the progress on them transparent.
A five-step process to improve the platform experience
Improving the data platform experience is not a one-off project. It should become a continuous part of how the platform team operates. The following sections present a step-by-step guide to building a successful user experience program, consisting of these steps:
Bootstrap the initiative: Set clear goals, secure leadership support, and dedicate resources.
Understand the platform users: Gather user feedback, identify root causes and assess their impact.
Discover and prioritise solutions: Identify what’s possible and prioritize the solutions based on user value and effort.
Drive ownership and follow-up: Assign owners, support delivery, communicate progress, and track adoption.
Measure Outcomes: Define success metrics and compare before-and-after data to validate impact.
Note that steps 2–5 are not a one-off exercise. Instead, they should be repeated continuously, because the platform, its users, and their needs keep evolving.
Improving platform experience is not a project, it’s a continuous process.
Embedding these steps into the platform team’s operating model ensures a consistent enhancement of the platform experience. In the next sections, the different steps are explained in more detail, together with a checklist of deliverables required before proceeding to the next step.
1. Bootstrap the initiative
The new initiative should start with the simple but powerful question: Why do we want to improve the platform experience? The answer forms the vision that should guide every decision that follows. For some organizations, it might be about retaining top talent. Others might focus on boosting developer productivity or accelerating revenue-generating projects. Whatever the reason, the vision must be clear, relevant, and aligned with the organization’s broader goals.
It’s essential that the leaders align on the vision and communicate the “why” to the people. They should actively advocate for it to unlock buy-in, momentum, and a sense of shared purpose.
The next step is to appoint someone to lead the initiative. This person’s job is to connect people, insights, and actions. This requires the ability to build trust and a solid understanding of both the technical landscape and user workflows. This person doesn’t need to have all the answers on day one, but does need the credibility, influence, and drive to connect the dots and keep the platform experience program alive.
Deliverables:
A clear vision that outlines why the platform experience needs to improve.
Platform users are aware of and understand the vision.
Someone is assigned to lead the initiative.
2. Understand the platform users
a) Identify the platform personas
Before improving the platform experience, it first needs to be clear who the platform is designed for. A data platform typically supports a wide range of users, each with distinct responsibilities, workflows, and expertise. These different user types are called personas.
Understanding personas is crucial because user complaints about the platform might stem from groups outside its intended audience. Clear persona mapping ensures we’re addressing the right users, avoiding miscommunication, and accurately prioritizing improvements based on their actual needs.
Common data platform personas include:
Data Engineers: Build and maintain data pipelines.
Data Scientists: Develop machine learning models and experiments.
Data Analysts: Explore and interpret data to support business decisions.
Data Stewards: Manage data quality, lineage, and access.

Example personas acting on a data platform.
In large organizations, these personas often fragment even further. The needs of a data engineer working on real-time systems can look very different from one focused on batch processing. Likewise, analysts from marketing, finance, or operations may all interact with the platform in unique ways.
Understanding these personas is the essential first step. Without knowing who the users are and how they work, it is impossible to design an experience that truly meets their needs.
Deliverables:
A list of personas with their respective goals.
The number of users per persona type.
b) Create a journey map for every persona and goal
With the personas identified, the next step is to understand what they do and how they do it. This is where journey mapping becomes useful.
For each combination of persona and goal, create a detailed journey map. This map should capture the full workflow, step by step, from the initial intent to the final result. Include every task the user performs, the tools they use, and the touchpoints involved.
Example developer journey map [6]
On a data platform, no user operates in a vacuum. An important role of the platform is to facilitate governance and communication across personas. A data scientist may need approval from a data steward. A machine learning engineer might depend on an infrastructure team to deploy a model. These cross-persona interactions are often where friction and bottlenecks occur. Therefore, we also recommend focusing on these interfaces:
Capture both the individual steps and the handoffs between personas.
Identify where governance actions, reviews, or approvals take place.
Once several journeys are mapped out, look across them for patterns:
Are certain tasks repeated across different personas?
Which tasks have a lot of dependencies?
Which personas have a lot of dependencies?
These insights might help identify shared bottlenecks, critical dependencies and high-impact areas for improvement.
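One lightweight way to surface these patterns is to capture each journey as structured data and count recurring tasks and cross-persona handoffs. A minimal sketch, with illustrative journeys and field names that are assumptions rather than a prescribed format:

```python
from collections import Counter

# Each journey: persona, goal, and ordered steps; a step optionally hands off to another persona.
journeys = [
    {"persona": "data_engineer", "goal": "deploy batch pipeline",
     "steps": [("request access", "data_steward"), ("build pipeline", None),
               ("code review", "data_engineer"), ("deploy", None)]},
    {"persona": "data_scientist", "goal": "ship ML model",
     "steps": [("request access", "data_steward"), ("train model", None),
               ("deploy", "infra_team")]},
]

# Count how often each task appears and which personas others most often wait on.
task_counts = Counter(task for j in journeys for task, _ in j["steps"])
handoff_counts = Counter(owner for j in journeys for _, owner in j["steps"] if owner)

print("Tasks shared across journeys:", [t for t, c in task_counts.items() if c > 1])
print("Personas most often waited on:", handoff_counts.most_common())
```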
Deliverables:
A set of journey maps for every <persona, goal> combination.
A list of common tasks and critical dependencies.
c) Understand user needs and frustrations
With journey maps in hand, the next step is to uncover where and why users experience friction. To discover user needs and frustrations, we recommend gathering feedback through both push- and pull-based approaches.
The push-based approach allows users to directly share feedback with the platform team. For this to work, there must be a simple and accessible way for users to submit their input. This can be through a feedback form, a Jira ticket, a Teams channel, or any other preferred method. Regardless of the medium, it is helpful to provide users with a feedback template. A template helps users structure their input and reduces the chance of receiving incomplete requests. The template can include questions such as:
What aspect or feature of the product are you giving feedback on?
What is the feedback?
What is the impact on your workflow or productivity?
How frequently do you encounter this situation?
Do you have suggestions on how we could improve this?
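To make such submissions easy to aggregate later, the template can map onto a structured record. A minimal sketch mirroring the questions above; the field names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeedbackEntry:
    """One submission from the platform feedback form (hypothetical schema)."""
    feature: str                 # which aspect or feature the feedback is about
    description: str             # the feedback itself
    impact: str                  # effect on workflow or productivity
    frequency: str               # e.g. "daily", "weekly", "rarely"
    suggestion: str = ""         # optional improvement idea
    persona: str = "unknown"     # which persona submitted it
    submitted_on: date = field(default_factory=date.today)

entry = FeedbackEntry(
    feature="CI pipeline",
    description="Test stage regularly takes over 30 minutes",
    impact="Blocks merging several times per day",
    frequency="daily",
    suggestion="Cache dependencies between runs",
)
print(entry)
```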
The pull-based approach is about actively pulling information from the platform users. For this, it is recommended to combine qualitative and quantitative methods. The qualitative methods help to discover new problems, and the quantitative methods validate and measure the extent of these problems. Both perspectives are essential for building a full picture.
Qualitative methods try to probe the perception of the users to understand their experiences, pain points, and ideas. These insights are rich, nuanced, and often reveal issues you wouldn’t find through metrics alone. Common methods are conducting interviews and running surveys with users across different personas.
Best practices for survey and interview questions:
Use open-ended questions to allow for nuanced responses.
Avoid compound questions (two questions in one) to reduce confusion.
Ask users to describe actual scenarios and concrete examples.
Probe issue severity and filter small annoyances from high-impact blockers by asking follow-up questions such as:
How often does this problem occur?
How much time does your current workaround take?
Have you already explored or requested a more fundamental solution?
Quantitative methods focus on metrics and telemetry that show how users interact with the platform. Example metrics include:
Build and test durations: long cycles can indicate broken feedback loops.
Time to onboard: long onboarding times typically point to high cognitive load or poor documentation.
Incident response or debugging time: delays may break flow and cause user frustration.
Note that these metrics need to be tailored to your platform’s capabilities and the specific tasks your personas perform. For example, a metric that tracks time spent on data discovery might be relevant for analysts but not for ML engineers.
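As an illustration, build-duration telemetry exported from the CI system can be summarized per pipeline to flag slow feedback loops. A minimal sketch with made-up data; how the runs are exported depends on your CI tooling:

```python
from collections import defaultdict
from statistics import median

# (pipeline, duration in seconds) records, assumed to be exported from your CI system.
ci_runs = [
    ("ingest-orders", 540), ("ingest-orders", 610), ("ingest-orders", 1290),
    ("ml-training", 2400), ("ml-training", 2750),
]

# Group durations per pipeline, then report the median and the slowest run.
durations = defaultdict(list)
for pipeline, seconds in ci_runs:
    durations[pipeline].append(seconds)

for pipeline, runs in sorted(durations.items()):
    print(f"{pipeline}: median {median(runs) / 60:.1f} min, "
          f"slowest {max(runs) / 60:.1f} min over {len(runs)} runs")
```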
In a later section, we dive deeper into collecting insights from metrics.
d) Identify and prioritise root causes
The previous information-gathering methods are essential for building an overview of users’ needs and frustrations. However, this input typically reflects the many symptoms of a small number of big problems. The next step is to trace the input back to the fundamentals and identify potential systemic root causes. Often, a handful of systemic issues, such as lacking documentation, unclear ownership, or inconsistent tooling, create a ripple effect across many different workflows.
Search for the root causes that cause the symptoms [7]
💡 Tip: to make sense of user feedback, leverage the three dimensions: feedback loops, flow state, and cognitive load. Look for delays in responses or reviews that break momentum, interruptions or context switching that disrupt focus, and complex processes or unclear tools that increase mental effort. Most user issues fall within one or more of these dimensions. Hence, using this model helps in quickly categorizing problems and might help in root cause discovery.
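In practice, this can start as simply as tagging each feedback item with the dimension(s) it touches and a suspected theme, then counting what dominates. A minimal sketch with hand-tagged, illustrative entries:

```python
from collections import Counter

# Feedback items manually tagged with the dimension(s) they affect and a suspected theme.
tagged_feedback = [
    {"summary": "Code reviews take 3+ days", "dimensions": ["feedback loops"], "theme": "unclear ownership"},
    {"summary": "No docs on how to request access", "dimensions": ["cognitive load"], "theme": "documentation"},
    {"summary": "Constant pings about failing jobs", "dimensions": ["flow state"], "theme": "alert noise"},
    {"summary": "Onboarding wiki is outdated", "dimensions": ["cognitive load"], "theme": "documentation"},
]

dimension_counts = Counter(d for item in tagged_feedback for d in item["dimensions"])
theme_counts = Counter(item["theme"] for item in tagged_feedback)

print("Issues per dimension:", dimension_counts.most_common())
print("Candidate root causes:", theme_counts.most_common())
```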
Deliverables:
Users have access to a simple tool or process to submit feedback.
A list of interview and survey questions.
A prioritised list of user needs, each with its root cause(s) and estimated impact.
3. Discover and prioritise solutions
Solution discovery
Once user problems have been identified, analyzed, and prioritized, it’s time to move into the solution discovery phase. This is where user needs are translated into potential platform improvements. This phase typically involves collaborating with engineering teams to share findings from the user research and brainstorm solutions. As solutions are explored, user needs can be grouped into three categories:
a) User needs/frustrations conflicting with platform principles or compliance.
b) Known user issues already in progress.
c) New or unresolved user needs.
a) User needs/frustrations conflicting with platform principles or compliance
Some user requests will conflict with the foundational design of the platform or compliance regulations. For example, users might suggest implementing role-based access control (RBAC) because it feels familiar. However, the platform may intentionally use purpose-based access control (PBAC) to meet specific regulatory or security requirements. In these cases, it’s crucial to understand the underlying frustration:
Is the real issue the complexity of managing permissions?
Is role-switching in PBAC slow or unintuitive?
…
Rather than dismissing these needs outright, dig into the root causes to uncover alternative solutions that may still improve the experience. In addition, transparency towards users is vital. Clearly communicate why certain architectural decisions were made and which aspects of the platform are non-negotiable. This helps users distinguish between fixed constraints and areas open to change.
💡 Tip: don’t use “industry constraints” as a blanket excuse to avoid change. In regulated environments, it’s easy to point to compliance as a blocker. However, meaningful improvements are often still possible within those constraints.
b) Known issues already in progress
Sometimes, users will raise problems that are already being addressed by the platform team. In this case, the root cause is typically a lack of communication between the platform team and the users. This is an opportunity to improve the communication process:
Share updates on the solution’s status and expected delivery timelines.
Validate that the current solution will actually solve the user’s need.
Manage expectations and ensure users feel heard and informed.
c) New or unresolved user needs
For issues that aren’t yet on the platform team’s roadmap, a deeper investigation is required. This can involve brainstorming sessions, building POCs to eliminate uncertainties, and consulting other teams, domains, or departments that have already solved similar challenges.
In this category, it’s important to assess both the feasibility and impact of the solution. Estimating development effort alongside the potential gain allows the platform team to prioritize effectively and make informed trade-offs.
Deliverables:
A list of user needs that conflict with platform principles or compliance.
A list of user needs currently being addressed, including status updates and estimated delivery timelines.
A list of “new” user needs, with a short solution description, an estimated implementation effort and an expected impact.
Prioritization of solutions
Prioritization within the user experience scope
With a clear list of problem-solution pairs (each with estimated impact and effort), we’re ready to prioritize. We recommend mapping initiatives on an Impact vs Effort matrix, which helps visualize and categorize them:
Quick wins: High impact, low effort. Prioritize these to build momentum.
Strategic bets: High impact, high effort. Worth the investment but requires planning and alignment.
Incremental improvements: Low impact, low effort. Useful for steady progress.
Money pits: Low impact, high effort. Avoid or deprioritize.
Impact vs Effort matrix
Keep in mind that the Impact vs. Effort matrix is a simplification. In reality, prioritization is more nuanced and involves additional dimensions such as urgency, user learning curves, implementation timelines, and previous initiatives. Nevertheless, this simplified matrix provides clear guidance, aiding prioritization and preventing frustration or misaligned expectations.
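As a small illustration, initiatives with rough impact and effort scores can be bucketed into the four quadrants automatically. The scores and the 1-to-5 scale below are assumptions made for the sketch:

```python
def quadrant(impact: int, effort: int, threshold: int = 3) -> str:
    """Map a 1-5 impact/effort score onto the four matrix quadrants."""
    high_impact, high_effort = impact >= threshold, effort >= threshold
    if high_impact and not high_effort:
        return "quick win"
    if high_impact and high_effort:
        return "strategic bet"
    if not high_impact and not high_effort:
        return "incremental improvement"
    return "money pit"

# Illustrative initiatives with (impact, effort) scores gathered during solution discovery.
initiatives = {
    "SLA on code reviews": (4, 2),
    "Self-service sandbox environments": (5, 5),
    "Rename confusing CLI flags": (2, 1),
    "Rewrite orchestrator from scratch": (2, 5),
}

for name, (impact, effort) in initiatives.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

Treat the output as a conversation starter rather than a verdict; the additional dimensions mentioned above still apply.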
Once initiatives are mapped, we recommend the following principles to move from ideas to results:
Start small, deliver fast: Don’t try to solve everything at once. Focus on a few high-leverage improvements and deliver them quickly. Reducing work-in-progress (WIP) keeps teams agile and gives users visible progress to build trust.
Prioritise deliberately: Be strategic in what you tackle first. Quick wins are great for immediate value, but also consider initiatives that align closely with company priorities or unlock broader platform improvements. Keep an open mind and don’t let the platform experience become an isolated goal.
Prioritization within the data platform scope
It’s important to keep in mind that the work identified from the platform experience track is only a part of the full platform story. There are three main work input flows:
Top-down initiatives: typically determined by managers and executives.
Bottom-up initiatives: typically determined by the engineers building the data platform.
User-requested initiatives: typically determined by user complaints or needs.
It’s important to have a clear view of the platform’s strategic direction as a whole and of the contribution of each of these three flows. Because user-originated initiatives compete with the other two flows, they cannot all be implemented at once, which makes quantification and prioritisation all the more important.
Deliverables:
An impact vs effort map of all the user needs.
A prioritised list (or roadmap) of initiatives to be tackled.
4. Drive ownership and follow-up
Once the priority is determined, initiatives can be launched. For this, it is essential to assign clear ownership of every problem-solution pair. Every initiative should have a named owner responsible for execution, stakeholder updates, and tracking success. Ownership ensures accountability and keeps progress moving.
In addition, it’s crucial to keep users in the loop. Be transparent and honest with them: if something is deprioritised, explain why; if something is added to the development team’s agenda, provide progress updates and an estimated delivery date. This builds trust between the developers and the users of the data platform. Also, keep tight feedback loops with users and iterate on the solution to ensure it really addresses their needs and frustrations.
💡 Tip: Communicate wins widely: Don’t just ship, but celebrate. Share successes broadly to show momentum, boost morale, and reinforce the platform team’s role as an enabler. When users see their feedback turning into tangible improvements, engagement increases.
Deliverables:
Clear initiative owners.
Communication plans to keep stakeholders aligned.
5. Measure outcomes
To ensure that efforts are delivering real impact, it’s recommended to measure progress against the originally defined goals in the vision. Measurements help validate what’s working, reveal where friction still exists, and provide a way to communicate impact to stakeholders.
A key distinction to make is between perception and workflow metrics. Perception metrics capture how users feel about the platform: their satisfaction, pain points, and confidence levels. These are vital because experience is as much about how things feel as how they function. Workflow metrics track how efficiently and effectively users can complete their tasks: reflecting the actual performance of tools, processes, and systems.
There’s no universal set of metrics that works for every organization. The measurement approach should be co-created with the team, tailored to the platform, and aligned with the vision. That said, here are some commonly used metrics that can serve as inspiration:
Perception metrics
Satisfaction Scores: Surveys to gauge how satisfied developers are with tools, processes, and support.
Ease of Onboarding: Measures how quickly new users can ramp up and make meaningful contributions.
Perceived Tooling Effectiveness: How useful users find CI/CD pipelines, debugging tools, documentation, etc.
Perceived Code Complexity: Captures how intuitive and maintainable users find the codebase or platform APIs.
Adoption & Engagement: Tracks recurring platform tool users, signalling utility and ease of use.
Workflow metrics
Delivery Performance: Change lead time and deployment frequency, indicating how quickly and how often changes reach production.
Build Success Rate: Indicates platform stability and reliability during development cycles.
Time to First E2E Result: Measures how long it takes to spin up a working environment and run a full pipeline or application.
Change Failure Rate & Recovery Time: Shows how often deployments break and how quickly teams recover.
Developer Productivity (Time on Non-Coding Tasks): Highlights time spent waiting on builds, managing dependencies, or handling unclear errors.
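To make this concrete, the sketch below computes two workflow metrics (deployment frequency and change failure rate) and one perception metric (mean satisfaction) from illustrative records; the data shapes are assumptions, not a prescribed schema:

```python
from datetime import date
from statistics import mean

# Deployment log: (date, succeeded) tuples, assumed to come from your deployment tooling.
deployments = [
    (date(2025, 4, 1), True), (date(2025, 4, 3), False), (date(2025, 4, 7), True),
    (date(2025, 4, 10), True), (date(2025, 4, 24), True),
]
# Survey responses: satisfaction on a 1-5 scale.
satisfaction_scores = [4, 3, 5, 2, 4, 4]

# Workflow metrics: frequency over the observed window and share of failed deployments.
window_days = (max(d for d, _ in deployments) - min(d for d, _ in deployments)).days or 1
deploys_per_week = len(deployments) / window_days * 7
change_failure_rate = sum(1 for _, ok in deployments if not ok) / len(deployments)

print(f"Deployment frequency: {deploys_per_week:.1f} per week")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean satisfaction: {mean(satisfaction_scores):.1f} / 5")
```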
⚠️ Cautions:
A common mistake is turning metrics into goals. This often leads to gaming the numbers and missing the point, as captured by Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” For example, pushing every team to deploy daily can result in meaningless changes just to hit the metric.
Another pitfall is relying on a single metric. Developer experience is multi-dimensional, and no one number can capture it all. A balanced set of metrics gives a more honest view. Frameworks like SPACE can help design a well-rounded measurement approach.
Ultimately, the aim isn’t to measure everything, it’s to measure what matters. Good metrics should reflect the real experiences of users, be clearly tied to platform goals, and help teams make better decisions. Over time, metrics should be revisited and refined. They’re not the end goal, but a tool to guide continuous improvement and validate that you’re truly making the platform better for its users.
Deliverables:
A list of items that need to be measured.
Implementation of perception metrics.
Implementation of workflow metrics.
Conclusion
This guide answers the question: how do you establish a systematic process to continuously improve the data platform experience?
The answer consists of a five-step process:
Bootstrap the initiative: Set clear goals, secure leadership support, and dedicate resources.
Understand the platform users: Gather user feedback, identify root causes and assess their impact.
Discover and prioritise solutions: Turn insights into actionable improvements and prioritize them based on user value and feasibility.
Drive ownership and follow-up: Assign owners, support delivery, communicate progress, and track adoption.
Measure Outcomes: Define success metrics and compare before-and-after data to validate impact.
By building this process iteratively and truly listening to the platform’s users, the platform experience can be systematically improved.
Sources