I’m working with some C# properties that are defined as the double data type. Now I need to save these values into a SQL Server database table. The problem is that SQL Server doesn’t have a double data type like C# does.
I’m trying to figure out which SQL Server data type would be the best choice - should I go with decimal or float? My application needs to store geographic coordinates like latitude and longitude, so precision is really important for my use case.
Can someone help me understand which option would give me the best accuracy?
Honestly, the performance hit with decimal is pretty minimal unless you’re doing millions of calculations per second. I’ve used decimal for GPS coordinates for years and never noticed any slowdown. The conversion from C# double to SQL decimal happens automatically when you bind parameters - just don’t forget to set the precision and scale in your SQL schema (and on the parameters themselves), or you’ll silently lose decimal places.
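Here’s roughly what that looks like in code - a minimal sketch, assuming Microsoft.Data.SqlClient and a hypothetical Locations table with Latitude/Longitude columns defined as decimal(9,6). I cast to decimal explicitly rather than relying on the implicit conversion, just to make the intent obvious:

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

class CoordinateWriter
{
    // Inserts a C# double into decimal(9,6) columns. Setting Precision/Scale
    // on the parameters keeps the driver from silently dropping digits.
    public static void SaveCoordinate(string connectionString, double latitude, double longitude)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand(
            "INSERT INTO Locations (Latitude, Longitude) VALUES (@lat, @lon)",
            connection);

        command.Parameters.Add(new SqlParameter("@lat", SqlDbType.Decimal)
        {
            Precision = 9,
            Scale = 6,
            Value = (decimal)latitude   // explicit double -> decimal conversion
        });
        command.Parameters.Add(new SqlParameter("@lon", SqlDbType.Decimal)
        {
            Precision = 9,
            Scale = 6,
            Value = (decimal)longitude
        });

        command.ExecuteNonQuery();
    }
}
```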
Interesting point on decimal precision! What about performance though? I’ve heard float calculations are faster than decimal. Does the speed difference matter for coordinate lookups in your experience? And how do you handle converting between C# double and SQL decimal? Any gotchas?
For geographic coordinates, use decimal(9,6) instead of float. Sure, float matches C# double, but it uses binary floating-point math that introduces rounding errors, and those errors compound fast when you do arithmetic on coordinates. I learned this the hard way building a mapping app - floating-point imprecision made location markers drift several meters off target. Geographic data needs exactness because tiny numeric errors become real distance errors on the ground. Decimal uses exact base-10 math with predictable precision. The decimal(9,6) format gives you three digits before the decimal point and six after - six decimal places of latitude is about 0.11 meters at the equator, which is plenty for most geographic apps.
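If you want to see the rounding behavior for yourself, here’s a small self-contained sketch (no database needed, and the starting coordinate is just a sample value) that adds a one-microdegree step a million times - the double drifts in its trailing digits while the decimal round-trips exactly:

```csharp
using System;

class FloatVsDecimalDemo
{
    static void Main()
    {
        double stepD = 0.000001;    // ~0.11 m of latitude per step
        decimal stepM = 0.000001m;

        double latD = 40.712776;    // sample coordinate
        decimal latM = 40.712776m;

        // Accumulate the step a million times, as repeated coordinate math might.
        for (int i = 0; i < 1_000_000; i++)
        {
            latD += stepD;          // binary floating-point: each add can round
            latM += stepM;          // base-10 decimal: exact at this scale
        }

        // The mathematically exact result is 41.712776.
        Console.WriteLine(latD.ToString("R")); // trailing digits have drifted
        Console.WriteLine(latM);               // prints 41.712776 exactly
    }
}
```

The drift here is tiny, but the point is that float error is unpredictable and grows with arithmetic, while decimal(9,6) stores exactly the six decimal places you asked for.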