I’m facing inconsistent transaction log growth during database schema modifications. Our application includes a database migration tool that applies version-specific changes to customer databases.
The issue appears when we modify large tables (10-20 million rows, roughly 10 strictly typed columns). Physical servers handle these operations without significant log growth, but virtual machines behave unpredictably: some VMs show minimal log growth, while on others the log expands by 30 GB or more.
The database uses the SIMPLE recovery model, so I expected transaction log usage to stay minimal. My understanding is that the log holds change records until a checkpoint flushes the modified pages to the data files, after which that log space can be reused — though I realize the log must still retain everything for any single transaction that is still in flight.
What factors should I examine in a VM environment to predict this behavior? Should I focus on disk configuration, host/VM system configuration, or CPU resources? I’ve already verified that compression is not enabled on the VMs, and I’ve used profiling tools to monitor index growth on the target table.
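On the disk side, one diagnostic I’ve been sketching is comparing per-file I/O stall times between a VM and a physical server — high write latency on the log file would be a red flag. This is only a rough sketch (it may require VIEW SERVER STATE permission, which we might not have on customer systems):

```sql
-- Rough sketch: average write stall per file; compare the LOG row
-- on a problem VM against a physical server.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS file_name,
       mf.type_desc             AS file_type,   -- ROWS vs LOG
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_write_stall_ms DESC;
```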
Additionally, are there any log-management commands we can run at the db_owner level? We’re currently testing CHECKPOINT, since DBCC SHRINKDATABASE / DBCC SHRINKFILE aren’t available to us due to permission restrictions.
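For reference, this is what we’re running inside the migration to watch the log before and after the checkpoint (a sketch; `sys.dm_db_log_space_usage` reports on the current database and may need VIEW DATABASE STATE):

```sql
-- Measure log usage for the current database, checkpoint, then re-check.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

CHECKPOINT;  -- under SIMPLE recovery, allows inactive log space to be reused

SELECT used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

Note that CHECKPOINT only marks inactive log space as reusable — it does not shrink the physical log file once it has grown.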
-- Table contains approximately 10-20 million records with 10 typed columns
IF COL_LENGTH('[dbo].[ProductData]', 'CreatedDate') IS NULL
BEGIN
    ALTER TABLE [dbo].[ProductData]
        ADD [CreatedDate] datetime NOT NULL
            CONSTRAINT [df_CreatedDate] DEFAULT (GETDATE());
END
-- This approach causes significant log growth on VMs but not on physical machines
-- Alternative approach to avoid GETDATE() in the constraint
IF COL_LENGTH('[dbo].[ProductData]', 'CreatedDate') IS NULL
BEGIN
    ALTER TABLE [dbo].[ProductData] ADD [CreatedDate] datetime NULL;
END

DECLARE @CurrentDate datetime = GETDATE();
UPDATE [dbo].[ProductData]
SET [CreatedDate] = @CurrentDate
WHERE [CreatedDate] IS NULL;  -- avoid rewriting rows if the script re-runs
-- This method actually caused worse log expansion and performance issues
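One variant I’m considering (a sketch only — the batch size is a guess to be tuned per environment): breaking the UPDATE into batches, so no single transaction has to hold log records for all 10-20 million rows. Under SIMPLE recovery, each committed batch’s log space should become reusable at the next checkpoint:

```sql
DECLARE @CurrentDate datetime = GETDATE();
DECLARE @BatchSize   int = 50000;  -- guess; tune per environment
DECLARE @Rows        int = 1;

WHILE @Rows > 0
BEGIN
    -- Each batch is its own transaction, keeping the active log small.
    UPDATE TOP (@BatchSize) [dbo].[ProductData]
    SET [CreatedDate] = @CurrentDate
    WHERE [CreatedDate] IS NULL;

    SET @Rows = @@ROWCOUNT;

    CHECKPOINT;  -- under SIMPLE recovery, frees the committed batch's log space
END
```

Would this be expected to bound log growth on the problem VMs, or is the growth more likely driven by something at the storage/hypervisor layer?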