Data Product Portal Integrations 2: Helm
Aug 1, 2024
•
Stijn Janssens
Welcome to the next instalment of our Data Product Portal integrations series!
If you missed it, be sure to check out our first blog post on How to integrate with OIDC.
In this post, we’ll focus on installing the portal in production environments. While it’s possible to create a production-ready deployment using our Docker images, the easiest way is to use Helm on an existing Kubernetes cluster. Using our Helm chart will allow you to be up and running in no time.
In this blog post, we’ll go over the specific configuration values in our Helm chart, what they are, and how they should be used.
What is Helm?
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes applications.
We provide a Helm chart that sets up all the necessary components on your Kubernetes cluster to seamlessly interact with everything that is already there.

The Values File
Helm configurations can be adapted to your custom needs through config values. You can pass these values directly to your helm install command, but best practice is to put them in a values.yaml file instead. You can find this file in our codebase.
The values.yaml file of the portal has two large sections:
1. Configuration values that you must change.
2. Optionally configurable values.
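As a sketch, an install with a custom values file could look like this (the release name and chart path are illustrative, not the portal's actual ones):

```shell
# Start from the chart's default values (chart path is illustrative)
helm show values ./helm > values.yaml

# Edit values.yaml to fit your environment, then install with the overrides
helm install data-product-portal ./helm -f values.yaml
```

Values passed with -f override the chart defaults, so you only need to list the keys you actually change.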
Mandatory Configuration Values
These values must be updated to fit your specific environment. If you run a helm install without adjusting them, you risk deploying an insecure version of the portal, with default database passwords and badly configured CNAMEs.
OIDC
This configuration sets up the OIDC integration. For more detailed information on setting it up, refer to our previous blog post.
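As an illustration only — the key names below are hypothetical, the exact ones are defined in the chart's values.yaml, and our previous OIDC post covers where each value comes from — this section of a values file might look like:

```yaml
oidc:
  enabled: true
  client_id: "<your-oidc-client-id>"
  client_secret: "<your-oidc-client-secret>"
  authority: "https://your-idp.example.com"  # issuer URL of your identity provider
```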
PostgreSQL
The Helm chart has built-in support for a PostgreSQL dependency chart. If postgres_enabled: true, the Helm install will also set up a PostgreSQL database inside your Kubernetes cluster. The values under global are the configuration values for this database.
Note that we pass port as an explicit string, to prevent the Helm chart from casting it incorrectly.
If postgres_enabled: false, all other PostgreSQL values are ignored.
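A hedged sketch of this section — the layout under global follows the upstream PostgreSQL dependency chart, so check the chart's values.yaml for the exact key names:

```yaml
postgres_enabled: true  # deploy a PostgreSQL database inside the cluster
global:
  postgresql:
    username: "portal"
    password: "change-me"  # do NOT keep a default password in production
    database: "portal"
    port: "5432"           # passed as an explicit string to avoid bad casting
```

Quoting the port matters: unquoted, YAML parses 5432 as an integer, which can trip up templates that expect a string.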
Database
This configuration specifies how the portal connects to the database. If postgres_enabled: true, use the same values as above. If postgres_enabled: false, these can be any set of values necessary to connect to your database (e.g. RDS endpoints).
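For example, pointing the portal at an external database such as Amazon RDS might look like this (key names are illustrative; use the ones defined in the chart's values.yaml):

```yaml
postgres_enabled: false  # no in-cluster database; connect to an external one instead
database:
  host: "mydb.abc123xyz.eu-west-1.rds.amazonaws.com"  # e.g. an RDS endpoint
  port: "5432"
  user: "portal"
  password: "change-me"
  name: "portal"
```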
API Key
If enabled: true, you can use the value of key as an API key in the x-key header field to authenticate your API requests.
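As a sketch, the values could look like this (section and key names as referenced above):

```yaml
api_key:
  enabled: true
  key: "<your-generated-api-key>"
```

Once enabled, a request authenticates by sending that value in the x-key header — for example (the endpoint path here is hypothetical):

```shell
curl -H "x-key: <your-generated-api-key>" https://www.portal.com/api/data_products
```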
Conveyor
If you have a Conveyor installation available and want to link it to the portal, provide the correct API key and secret here. If not correctly configured, it will be ignored.
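A minimal sketch of this section, assuming hypothetical key names (check the chart's values.yaml for the real ones):

```yaml
conveyor:
  api_key: "<conveyor-api-key>"
  secret: "<conveyor-api-secret>"
```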
Host
This value should be the CNAME of your portal application, without a trailing slash. For example: https://www.portal.com.
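In the values file this is a single entry — assuming the key is simply named host:

```yaml
host: "https://www.portal.com"  # no trailing slash
```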
CloudWatch
The portal can automatically log to Amazon CloudWatch Logs if your pods have the correct access rights in your linked AWS account.
Optional Configuration Values
These values can be adjusted if you have a mature Kubernetes cluster setup and want to reuse certain volumes, change deployment details, or link your cluster to specific cloud provider features with annotations. For the initial installation, these do not need to be adjusted. We might dive deeper into these configuration values in an upcoming blog post and learn, for example, how to add annotations to your service role for automatic AWS IAM role assumption, or to your ingress controller for automatic AWS ELB creation.
Conclusion
The basic set of configuration values that you should change is fairly small and easy to understand. This allows you to quickly link your production services together in a seamless integration. As always, I hope everything is clear. Feel free to contact us through our Slack community if you have any questions. Stay tuned for the next blog post, where we might dive deeper into these optional configuration values.
See you in the next blog post!