I’m working on a project with two GraphQL backend services (A and B) and a frontend portal. When the frontend sends a request to service A, it needs to fetch data from service B. Right now, I’m using a GraphQL client in service A to query service B, like this:
import { gql, GraphQLClient } from 'graphql-request';

const graphqlClient = new GraphQLClient('http://service-b/graphql');

const query = gql`
  query getPersonInfo {
    personId
    fullName
  }
`;

// graphql-request's request() takes the query document directly,
// not an object wrapper
const personData = await graphqlClient.request(query);
I’m concerned that this approach might not be optimal. Because service B still runs its full schema validation and middleware pipeline on these internal requests, the round trip feels wasteful and could hurt performance. I’ve heard about Apollo Federation, but it seems geared more toward composing a graph for frontend consumers. Since this is an internal service-to-service call, I’d appreciate any suggestions for a more efficient way to handle it. Thanks for your help!
Hmmm, interesting problem! Have you thought about using gRPC between your services? It’s very fast for internal communication. Or maybe a shared database? That could simplify things, though it does couple the services more tightly. What’s your main concern with the current setup, latency or complexity? Curious to hear more about your specific use case!
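If you went the gRPC route, the internal contract between A and B might look something like the sketch below. The service and field names are hypothetical, chosen to mirror the getPersonInfo query from the question:

```proto
syntax = "proto3";

package internal.person;

// Hypothetical internal contract mirroring the getPersonInfo GraphQL query.
service PersonService {
  rpc GetPersonInfo (GetPersonInfoRequest) returns (PersonInfo);
}

message GetPersonInfoRequest {
  string person_id = 1;
}

message PersonInfo {
  string person_id = 1;
  string full_name = 2;
}
```

Service A would then call GetPersonInfo over a persistent HTTP/2 channel instead of issuing a GraphQL request, skipping B’s GraphQL parsing and validation layer entirely.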
Have you considered using GraphQL schema stitching? It lets you combine multiple schemas into one, which might be more efficient than making separate calls. You could set it up in a gateway layer between your services. Worth looking into; it could solve your problem without needing Apollo Federation.
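A rough sketch of that gateway wiring with the @graphql-tools stitching packages might look like this. The endpoint URLs are placeholders for your actual services, and this assumes both expose introspectable schemas:

```typescript
import { stitchSchemas } from '@graphql-tools/stitch';
import { schemaFromExecutor } from '@graphql-tools/wrap';
import { buildHTTPExecutor } from '@graphql-tools/executor-http';

// Executors that forward delegated operations to each service
// (endpoint URLs below are hypothetical).
const executorA = buildHTTPExecutor({ endpoint: 'http://service-a/graphql' });
const executorB = buildHTTPExecutor({ endpoint: 'http://service-b/graphql' });

// Build one combined gateway schema; the frontend queries this schema,
// and the gateway delegates each field to the service that owns it.
const gatewaySchema = stitchSchemas({
  subschemas: [
    { schema: await schemaFromExecutor(executorA), executor: executorA },
    { schema: await schemaFromExecutor(executorB), executor: executorB },
  ],
});
```

With this in place, the frontend makes a single request to the gateway, and service A no longer needs its own GraphQL client for service B’s data.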
Have you explored using a message queue system like RabbitMQ or Apache Kafka for inter-service communication? This approach can decouple your services and improve scalability. It allows Service A to publish messages that Service B can consume asynchronously, potentially reducing latency and improving overall system performance. You could implement an event-driven architecture where Service B updates a local cache based on these messages, allowing Service A to query this cache instead of making direct GraphQL calls. This method can be particularly effective if the data from Service B doesn’t need to be real-time for every request.
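To make the cache idea concrete, here is a minimal self-contained sketch of the pattern inside service A. The event shape and function names are hypothetical; the onPersonUpdated handler stands in for whatever your RabbitMQ/Kafka consumer would invoke per message:

```typescript
// Shape of the data service B publishes (hypothetical).
type PersonInfo = { personId: string; fullName: string };

// Local cache kept inside service A, kept fresh by events from service B.
const personCache = new Map<string, PersonInfo>();

// Invoked by the message-queue consumer for each "person updated" event.
function onPersonUpdated(event: PersonInfo): void {
  personCache.set(event.personId, event);
}

// Service A's resolver reads the local cache instead of calling
// service B's GraphQL endpoint on every request.
function getPersonInfo(personId: string): PersonInfo | undefined {
  return personCache.get(personId);
}
```

The trade-off is staleness: between events, service A may serve slightly out-of-date data, which is why this fits best when B’s data doesn’t need to be real-time for every request.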