Is there a performance difference between COUNT(1) and COUNT(*) in SQL Server?

I've been working on optimizing some SQL queries in our database, and I came across something interesting. Some colleagues prefer using COUNT(1) instead of COUNT(*) in their queries, and I'm wondering if this choice really affects performance.

I'm currently testing on SQL Server 2005 and would love to know if anyone has observed a measurable difference in query speed or resource usage between the two methods. Is it just a legacy habit from older database versions, or does COUNT(1) offer any advantages under certain circumstances?

Your insights and personal experiences would be greatly appreciated.

in my experience, there’s no real difference between COUNT(1) and COUNT(*) in SQL Server. i’ve tested both on large tables and the execution plans look identical. it’s mostly preference, or an old habit carried over from other DBMSs where people believed it mattered. use whichever you find more readable.
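
for what it’s worth, here’s roughly the check i ran — a minimal sketch, assuming a table called dbo.Orders (swap in one of your own) and that you’re running it in Management Studio:

    -- show the textual plan instead of executing the queries
    -- (SET SHOWPLAN_TEXT has to be alone in its batch, hence the GOs)
    SET SHOWPLAN_TEXT ON;
    GO
    SELECT COUNT(1) FROM dbo.Orders;
    SELECT COUNT(*) FROM dbo.Orders;
    GO
    SET SHOWPLAN_TEXT OFF;
    GO

the two plans come out identical, down to the same scan and stream aggregate.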

From my experience working with SQL Server, including version 2005, there is no measurable performance difference between COUNT(1) and COUNT(*). The query optimizer treats them identically, generating the same execution plan. I’ve run extensive tests on large datasets, and the results consistently show equal performance.
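
If you want to reproduce the comparison, here is a minimal sketch; dbo.Orders is a placeholder for any reasonably large table in your own database:

    -- report logical reads and CPU/elapsed time for each statement
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(1) FROM dbo.Orders;
    SELECT COUNT(*) FROM dbo.Orders;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

Since the plans are identical, the logical reads and CPU times reported in the Messages tab should match as well.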

The preference for COUNT(1) often stems from legacy practices or from misconceptions about NULL handling. In SQL Server, both constructs count every row: the constant 1 is never NULL, so COUNT(1) never skips anything, and only COUNT(column_name) excludes rows where that column is NULL. If you’re concerned about optimization, focus on proper indexing and query structure instead; those factors have a far greater impact on performance than the choice between COUNT(1) and COUNT(*).
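
To see where NULL handling actually changes the result, consider this small sketch; MiddleName stands in for any nullable column in your schema:

    -- COUNT(*) and COUNT(1) count every row, because the literal 1 is never NULL;
    -- COUNT(column) skips rows where that column is NULL
    SELECT COUNT(*)          AS all_rows,
           COUNT(1)          AS all_rows_too,
           COUNT(MiddleName) AS rows_with_middle_name
    FROM dbo.Person;

The first two counts are always equal; the third drops below them as soon as MiddleName contains NULLs.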

hmm, interesting question! have you tried benchmarking them yourself? i’m curious about the results you might get. what kind of data volumes are we talking about? maybe there are edge cases where one outperforms the other? anyone else notice any differences in specific scenarios?