In my experience managing a similar setup, switching the backend service to a ClusterIP type rather than NodePort proved beneficial for internal communication. Deploying both the frontend and backend within the same namespace ensured that Kubernetes DNS resolved service names correctly. I also verified that the pods were in a ready state and that the labels matched across deployments and services to avoid endpoint registration issues. This configuration allowed the Angular application to access the Express backend seamlessly via Kubernetes internal networking, eliminating the need for manual port configuration adjustments.
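For reference, a minimal sketch of what that ClusterIP service might look like; the service name, namespace, labels, and port are placeholders, not values from the original setup:

```yaml
# Hypothetical ClusterIP service for the Express backend; names, labels,
# and ports are assumptions -- adjust them to match your actual deployment.
apiVersion: v1
kind: Service
metadata:
  name: express-backend
  namespace: my-app           # same namespace as the Angular frontend (assumed)
spec:
  type: ClusterIP             # default type; internal-only virtual IP
  selector:
    app: express-backend      # must match the pod template labels exactly
  ports:
    - port: 3000              # port exposed inside the cluster
      targetPort: 3000        # containerPort of the Express pod
```

With something like this in place, pods in the same namespace can reach the API at `http://express-backend:3000`, and `kubectl get endpoints express-backend` should list the backend pod IPs; an empty endpoints list usually means the selector does not match the pod labels.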
Hey, try switching the service to ClusterIP, then use an Ingress to route the traffic. Check the namespace details too; sometimes a label mismatch messes up DNS resolution. That worked fine for me in a similar setup.
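If you go the Ingress route, a rough sketch could look like the one below. The host, paths, and service names are assumptions, and it presumes an NGINX-style ingress controller is installed in the cluster:

```yaml
# Hypothetical Ingress routing /api to the backend service and everything
# else to the frontend; hostnames and service names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: express-backend
                port:
                  number: 3000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: angular-frontend
                port:
                  number: 80
```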
Hey, I've seen similar DNS issues. Have you thought about testing a headless service? Sometimes a faulty label in the namespace can break resolution. Any idea whether an Ingress might help, or whether there are missing endpoints? Curious to hear more about your setup!
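If you want to experiment with the headless-service idea, a minimal sketch (name, labels, and port are assumptions) would be:

```yaml
# Hypothetical headless service: clusterIP: None makes DNS return the pod
# IPs directly instead of a single virtual service IP. Values are assumed.
apiVersion: v1
kind: Service
metadata:
  name: express-backend-headless
spec:
  clusterIP: None
  selector:
    app: express-backend
  ports:
    - port: 3000
      targetPort: 3000
```

You can then inspect what DNS actually resolves to with something like `kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup express-backend-headless`, which helps distinguish a DNS problem from a missing-endpoints problem.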
In addressing similar connectivity issues, I found that verifying consistency in service definitions and deployment labels was essential. I switched the backend service from a NodePort to a ClusterIP configuration, which allowed easier routing within the cluster. It was important to ensure that both the exposed ports and the label selectors in the service YAML exactly matched those in the pod deployment. Additionally, reviewing any network policies helped to identify if internal traffic was inadvertently blocked, ensuring the Angular frontend could properly access the Express backend.
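To illustrate the network-policy check, the sketch below shows a policy that explicitly allows traffic from the frontend pods to the backend; all labels, the namespace, and the port are assumptions rather than values from the original setup:

```yaml
# Hypothetical NetworkPolicy allowing only the Angular frontend pods to
# reach the Express backend on port 3000; labels and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: express-backend      # policy applies to the backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: angular-frontend
      ports:
        - protocol: TCP
          port: 3000
```

Note that if no NetworkPolicy selects the backend pods at all, traffic is allowed by default; it is usually a deny-all or overly narrow existing policy that ends up blocking internal calls like this.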