Desktop application database connectivity - managing unreliable network connections?

I’m working on a desktop app that needs to connect to a remote database. Multiple users will share the same data through this database connection. I’m planning to use C# with MSSQL, MySQL, or SQLite.

The problem I’m facing is network reliability. When I tested the app on a busy corporate network, database operations became very slow: simple INSERT or SELECT queries sometimes take several minutes to complete, and sometimes they just time out entirely.

I already know I should separate the UI thread from database operations so the interface doesn’t freeze. But I’m not sure how to handle all these network failures properly. Showing error messages to users constantly would be annoying.

I was thinking about creating a local database copy on each machine. The app could work with local data first, then sync changes in the background. This way users could keep working even when the network is having issues.

What approaches work best for this kind of situation? Are there proven patterns or techniques I should research?

Your local database synchronization approach is actually quite sound for handling unreliable networks. I implemented something similar for a field service application where technicians frequently worked in areas with poor connectivity. We used SQLite locally and PostgreSQL remotely, with a custom sync mechanism that tracked changes using timestamps and conflict resolution rules.

The key challenges you will encounter are conflict resolution when multiple users modify the same records, and determining what constitutes the authoritative version of data. We solved this by implementing a last-writer-wins strategy with user override capabilities for critical conflicts.

Consider implementing a connection health monitor that automatically switches between online and offline modes. This eliminates the need for constant error messages while keeping users informed of their current state. Background synchronization should be batched and prioritized based on data criticality.
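A minimal sketch of what I mean by a connection health monitor, in C#. The probe delegate (e.g. a cheap `SELECT 1` against the server), the polling interval, and all type names here are illustrative assumptions, not a fixed API:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public enum ConnectionMode { Online, Offline }

// Hypothetical sketch: periodically probes the server and flips the app
// between Online and Offline modes, raising one event per transition
// instead of surfacing an error dialog for every failed query.
public sealed class ConnectionHealthMonitor
{
    private readonly Func<Task<bool>> _probe;   // assumed: a cheap reachability check
    private readonly TimeSpan _interval;

    public ConnectionMode Mode { get; private set; } = ConnectionMode.Offline;
    public event Action<ConnectionMode>? ModeChanged;

    public ConnectionHealthMonitor(Func<Task<bool>> probe, TimeSpan interval)
    {
        _probe = probe;
        _interval = interval;
    }

    public async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            bool reachable;
            try { reachable = await _probe(); }
            catch { reachable = false; }        // any probe failure counts as offline

            var next = reachable ? ConnectionMode.Online : ConnectionMode.Offline;
            if (next != Mode)
            {
                Mode = next;                    // switch mode once, notify the UI once
                ModeChanged?.Invoke(Mode);
            }
            await Task.Delay(_interval, token);
        }
    }
}
```

The UI subscribes to `ModeChanged` to show a single status indicator, and the sync layer checks `Mode` before attempting remote writes.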

Honestly, you might be overcomplicating things. Start simple with connection pooling and tuned timeout settings. For flaky networks, retry logic with exponential backoff usually does the trick. Local caching is nice, but in your case it may be unnecessary unless offline operation is truly critical.
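For reference, retry with exponential backoff is only a few lines of C#. This is a hand-rolled sketch (libraries like Polly offer the same pattern off the shelf); the method name, attempt count, and base delay are all illustrative:

```csharp
using System;
using System.Threading.Tasks;

public static class Retry
{
    // Hypothetical helper: retries an async operation, doubling the delay
    // between attempts (200 ms, 400 ms, 800 ms, ...). The final failure
    // is rethrown to the caller.
    public static async Task<T> WithBackoffAsync<T>(
        Func<Task<T>> operation,
        int maxAttempts = 5,
        int baseDelayMs = 200)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Exponential backoff: baseDelay * 2^(attempt - 1)
                int delayMs = baseDelayMs * (1 << (attempt - 1));
                await Task.Delay(delayMs);
            }
        }
    }
}
```

Usage would look like `var rows = await Retry.WithBackoffAsync(() => RunQueryAsync(sql));`, where `RunQueryAsync` stands in for whatever data-access call you already have.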

Interesting challenge you're facing! Have you considered using something like message queues for this? I'm curious about your user base size too: are we talking about 10 users or hundreds? Also wondering whether there's specific data that needs real-time sync versus data that can wait.