How to reduce database calls in X++
Every unnecessary round-trip to SQL Server costs you latency, locks, and user patience. Here are the patterns that actually move the needle.
If you've ever watched a batch job crawl through thousands of records while CPU usage sits at 2%, you've met the N+1 problem. The logic is fine. The data is small. But every iteration fires another SELECT to the database, and those round-trips add up to minutes — sometimes hours.
D365 F&O runs on Azure SQL, which means every database call carries network latency on top of query execution time. A single call might take 1–3 milliseconds. Multiply that by 50,000 records in a loop, and you're looking at an extra 50–150 seconds of pure waiting — not computation, just waiting for packets to travel between the AOS and the database.
This guide covers eight concrete patterns for reducing those calls. Each one includes the before-and-after X++ so you can see exactly what changes, and where the performance gain comes from.
In this article
- Replace row-by-row updates with update_recordset
- Replace row-by-row inserts with insert_recordset
- Eliminate N+1 selects with joins
- Use exists join instead of inner select
- Apply firstOnly for single-record lookups
- Cache reference data properly
- Use temp tables for complex staging
- Batch your deletes with delete_from
Replace row-by-row updates with update_recordset

This is the single highest-impact change you can make. A while select loop with an update() inside generates one SELECT and N UPDATE statements. An update_recordset generates exactly one UPDATE — regardless of how many rows match.
SalesLine salesLine;

ttsbegin;
while select forupdate salesLine
    where salesLine.SalesId == _salesId
       && salesLine.SalesStatus == SalesStatus::Backorder
{
    salesLine.SalesStatus = SalesStatus::None;
    salesLine.update();   // N round-trips
}
ttscommit;
SalesLine salesLine;

ttsbegin;
update_recordset salesLine
    setting SalesStatus = SalesStatus::None
    where salesLine.SalesId == _salesId
       && salesLine.SalesStatus == SalesStatus::Backorder;
ttscommit;   // 1 round-trip
Approach                 Time (10k rows)
Row-by-row update()      ~12 sec
update_recordset         ~80 ms
Watch out: if the table overrides update(), the kernel silently downgrades update_recordset to row-by-row operations so the override still runs, and the set-based gain disappears. To keep the single UPDATE, call skipDataMethods(true) on the buffer before the statement, then handle the override's logic (validation, event raising, computed fields) separately — or accept the trade-off.
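For example, here's a minimal sketch of forcing the set-based path. The scenario (an update() override whose logic you've verified doesn't apply to these rows) is the assumption; skipDataMethods itself is a standard buffer method:

SalesLine salesLine;

ttsbegin;
// Without this call, an update() override would make the kernel
// fall back to row-by-row; with it, the statement stays one UPDATE.
salesLine.skipDataMethods(true);
update_recordset salesLine
    setting SalesStatus = SalesStatus::None
    where salesLine.SalesId == _salesId
       && salesLine.SalesStatus == SalesStatus::Backorder;
ttscommit;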
Replace row-by-row inserts with insert_recordset

The same principle applies to inserts. When you're populating a table from another table's data — think staging, journaling, or denormalization — insert_recordset compiles to a single INSERT...SELECT on SQL Server.
SalesLine salesLine;
ContosoAuditLog auditLog;

while select salesLine
    where salesLine.SalesId == _salesId
{
    auditLog.clear();
    auditLog.SalesId = salesLine.SalesId;
    auditLog.ItemId  = salesLine.ItemId;
    auditLog.Qty     = salesLine.QtyOrdered;
    auditLog.LogDate = systemDateGet();
    auditLog.insert();   // N round-trips
}
SalesLine salesLine;
ContosoAuditLog auditLog;
TransDate logDate = systemDateGet();   // variables, not function calls, in the select list

insert_recordset auditLog (SalesId, ItemId, Qty, LogDate)
    select SalesId, ItemId, QtyOrdered, logDate
    from salesLine
    where salesLine.SalesId == _salesId;
// 1 round-trip, no matter how many lines
For cases where you need to insert computed or transformed data that can't be expressed in a simple select-from mapping, use RecordInsertList. It buffers the records you add and flushes them to SQL Server in blocks, rather than firing one INSERT per row.
SalesLine salesLine;
ContosoAuditLog auditLog;
RecordInsertList insertList = new RecordInsertList(tableNum(ContosoAuditLog));

while select salesLine
    where salesLine.SalesId == _salesId
{
    auditLog.clear();
    auditLog.SalesId = salesLine.SalesId;
    auditLog.ItemId  = salesLine.ItemId;
    auditLog.Qty     = ContosoQtyHelper::convert(salesLine);
    insertList.add(auditLog);
}

insertList.insertDatabase();   // Batched insert
Eliminate N+1 selects with joins

The N+1 pattern is the most common performance killer in D365 X++ code. You select from one table, then inside the loop, you select from a second table for each row. The fix is almost always a join.
SalesLine salesLine;
InventTable inventTable;

while select salesLine
    where salesLine.SalesId == _salesId
{
    select firstOnly ItemName from inventTable
        where inventTable.ItemId == salesLine.ItemId;
    // ↑ This fires once per line — N extra SELECTs

    info(strFmt("%1: %2", salesLine.ItemId, inventTable.ItemName));
}
SalesLine salesLine;
InventTable inventTable;

while select ItemId from salesLine
    where salesLine.SalesId == _salesId
    join ItemName from inventTable
        where inventTable.ItemId == salesLine.ItemId
{
    info(strFmt("%1: %2", salesLine.ItemId, inventTable.ItemName));
}
The difference is dramatic. With 500 sales lines, the N+1 version fires 501 queries. The joined version fires one. On Azure SQL with even 2ms of round-trip latency, that's a full second saved — and it gets worse with wider tables and more complex WHERE conditions.
Use exists join instead of inner select

When you only need to check whether a related record exists — not read any of its fields — use exists join. It generates an EXISTS subquery on SQL Server, which stops scanning the moment it finds the first match. An inner select inside an if block actually fetches and deserializes the row, even though you discard all of it.
VendTable vendTable;
VendTrans vendTrans;

while select vendTable
{
    select firstOnly RecId from vendTrans
        where vendTrans.AccountNum == vendTable.AccountNum;

    if (vendTrans.RecId)
    {
        // vendor has transactions
    }
}
VendTable vendTable;
VendTrans vendTrans;

while select vendTable
    exists join vendTrans
        where vendTrans.AccountNum == vendTable.AccountNum
{
    // vendor has transactions — SQL stops at first match
}
The inverse — notexists join — is equally valuable when you need to find records that lack a matching row in another table. It compiles to NOT EXISTS, which is almost always faster than a left join with a null check.
// Find vendors with no transactions
VendTable vendTable;
VendTrans vendTrans;

while select vendTable
    notexists join vendTrans
        where vendTrans.AccountNum == vendTable.AccountNum
{
    info(strFmt("Inactive vendor: %1", vendTable.AccountNum));
}
Apply firstOnly for single-record lookups

When you know you need exactly one record — a lookup by primary key, a configuration record, a default value — always add firstOnly to your select. Without it, the X++ runtime prepares for a full result set and doesn't send a TOP 1 hint to SQL Server.
CustTable custTable;

// Missing firstOnly — SQL may scan beyond the first match
select custTable
    where custTable.AccountNum == _accountNum;
CustTable custTable;

select firstOnly AccountNum, Name, CustGroup from custTable
    where custTable.AccountNum == _accountNum;
// ↑ TOP 1 + field list = minimal payload
Combine with field lists. Specifying only the columns you need (AccountNum, Name, CustGroup) reduces the data transferred from SQL to the AOS. On wide tables with 100+ columns — like SalesTable or PurchTable — this alone can cut row transfer size by 80%.
Cache reference data properly

Reference data — unit conversions, currency codes, parameter tables, number sequences — rarely changes during a batch run. If you're selecting the same CompanyInfo or InventParameters record 10,000 times, you're wasting 9,999 round-trips.
D365 gives you three main caching options. Use the right one for the data's volatility:
// Set in AOT → Table Properties → CacheLookup
//
// Found          → Cached after first read; cleared on any update
// FoundAndEmpty  → Also caches "not found" results
// EntireTable    → Entire table loaded on first access;
//                  best for small config tables (<500 rows)
// NotInTTS       → Cache not used inside transactions
// For computed results or transformed lookups:
SysGlobalObjectCache cache = classfactory.globalObjectCache();
container cacheKey = ['ContosoExRate', _fromCurrency, _toCurrency, _date];
container cached = cache.find(classStr(ContosoExRateHelper), cacheKey);

if (cached != conNull())
{
    return conPeek(cached, 1);
}

// Cache miss — compute and store
ExchRate rate = ExchangeRateHelper::getRate(_fromCurrency, _toCurrency, _date);
cache.insert(classStr(ContosoExRateHelper), cacheKey, [rate]);
return rate;
// Memoized lookup — avoids repeated DB calls for the object's lifetime.
// (X++ has no static local variables, so cache in a member field instead;
// ContosoInventHelper is an example class name.)
class ContosoInventHelper
{
    InventParameters inventParameters;

    public InventParameters cachedParams()
    {
        if (!inventParameters.RecId)
        {
            inventParameters = InventParameters::find();
        }
        return inventParameters;
    }
}
Use temp tables for complex staging

When business logic requires multiple passes over the same data — filter, enrich, validate, then insert — hitting the real tables repeatedly is expensive. Instead, load the working set into a TempDB temp table once, run your set-based operations against that isolated copy, and write the final result back in one operation.
// 1. Define temp table in AOT with TableType = TempDB
// 2. Populate it from source in one shot
SalesLine salesLine;
ContosoStagingTmp staging;
ContosoOrderResult contosoOrderResult;

insert_recordset staging (ItemId, Qty, UnitPrice)
    select ItemId, QtyOrdered, SalesPrice
    from salesLine
    where salesLine.SalesId == _salesId;

// 3. Enrich / transform — all in tempdb, no production table hits
update_recordset staging
    setting DiscountPct = 0.10
    where staging.Qty > 100;

// 4. Write final result to production in one operation
insert_recordset contosoOrderResult (ItemId, Qty, UnitPrice, DiscountPct)
    select ItemId, Qty, UnitPrice, DiscountPct
    from staging;
TempDB vs InMemory. Use TempDB temp tables (not InMemory) when the staging set could exceed a few thousand rows. TempDB tables live in SQL Server's tempdb, support set-based operations, and don't consume AOS heap memory. InMemory tables are only appropriate for small, short-lived sets used within a single method scope.
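One practical detail when staging spans methods: each TempDB buffer normally points at its own physical instance in tempdb. A minimal sketch, reusing the article's hypothetical ContosoStagingTmp table, of sharing one instance across methods with linkPhysicalTableInstance:

// Populate the TempDB staging table once and hand the buffer around.
ContosoStagingTmp populateStaging(SalesId _salesId)
{
    SalesLine         salesLine;
    ContosoStagingTmp staging;

    insert_recordset staging (ItemId, Qty, UnitPrice)
        select ItemId, QtyOrdered, SalesPrice
        from salesLine
        where salesLine.SalesId == _salesId;

    return staging;
}

void applyDiscounts(ContosoStagingTmp _staging)
{
    ContosoStagingTmp staging;

    // Attach this buffer to the caller's physical tempdb instance;
    // without this, 'staging' would point at a fresh, empty table.
    staging.linkPhysicalTableInstance(_staging);

    update_recordset staging
        setting DiscountPct = 0.10
        where staging.Qty > 100;
}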
Batch your deletes with delete_from

Just like updates and inserts, deletes should be set-based. A while select forupdate followed by delete() generates N DELETE statements. delete_from generates one.
ContosoAuditLog auditLog;

ttsbegin;
while select forupdate auditLog
    where auditLog.LogDate < _cutoffDate
{
    auditLog.delete();   // N round-trips
}
ttscommit;
ContosoAuditLog auditLog;

ttsbegin;
delete_from auditLog
    where auditLog.LogDate < _cutoffDate;
ttscommit;   // 1 round-trip
The same caveat applies: if the table overrides delete() or has DeleteActions defined in the AOT, the kernel downgrades delete_from to row-by-row deletes so overrides and cascades still run. To keep the single DELETE, call skipDataMethods(true) and skipDeleteActions(true) explicitly — which also tells future maintainers the bypass was intentional, not accidental.
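A minimal sketch of a cleanup routine that deliberately forces the set-based path, assuming ContosoAuditLog from the earlier examples and that these rows need no cascade cleanup:

ContosoAuditLog auditLog;

ttsbegin;
// Deliberately bypass delete() overrides and DeleteActions so the
// statement stays one DELETE instead of degrading to row-by-row.
auditLog.skipDataMethods(true);
auditLog.skipDeleteActions(true);
delete_from auditLog
    where auditLog.LogDate < _cutoffDate;
ttscommit;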
Putting it all together
The decision framework
When you're about to write a database operation in X++, run through this quick mental checklist (a combined sketch follows the list):

- Am I modifying data inside a while-select loop? → Replace with update_recordset / insert_recordset / delete_from
- Am I selecting from Table B inside a loop over Table A? → Rewrite as a join
- Am I checking existence without reading fields? → Use exists join / notexists join
- Am I reading the same config record repeatedly? → Cache it: table-level, SysGlobalObjectCache, or a memoized field
- Do I need multiple passes over the same data? → Stage in a TempDB temp table
- Am I selecting more columns than I use? → Add a field list and firstOnly
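For instance, here's a sketch that combines two of the answers above, exists join to scope the set and update_recordset for the write. The ContosoOrderStatus tracking table and its fields are invented for illustration:

ContosoOrderStatus  orderStatus;
CustPackingSlipJour packingSlip;

ttsbegin;
// One set-based UPDATE: flag every order that has at least one packing
// slip, without reading a single packing-slip field into the AOS.
update_recordset orderStatus
    setting IsShipped = NoYes::Yes
    where orderStatus.IsShipped == NoYes::No
    exists join packingSlip
        where packingSlip.SalesId == orderStatus.SalesId;
ttscommit;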
Measuring the impact
Before and after — real-world batch scenario
Here's what these patterns look like in aggregate: a typical order processing batch job touching 10,000 sales lines goes from tens of thousands of individual statements to a handful of set-based operations once these techniques are applied.
The optimized version does the same work — same validation, same calculations, same outputs. It just doesn't ask the database the same question 40,000 times.
Further reading
Where to go from here
These eight patterns cover the most common sources of excessive database calls. For deeper performance work, explore the Trace Parser tool (available through LCS) to identify exactly which X++ methods generate the most SQL statements. Enable SQL statement logging in your dev environment, watch for queries that fire more than once with identical parameters, and refactor them using the patterns above.
The D365 performance SDK (PerfSDK) and the built-in Performance timer are also worth integrating into your development cycle. Catching an N+1 pattern in code review is infinitely cheaper than debugging it in a production batch job at 2 AM.
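If you want a quick local measurement before reaching for Trace Parser, a throwaway harness via .NET interop works. ContosoOrderProcessor::process here is a stand-in for whatever method you're tuning:

System.Diagnostics.Stopwatch stopwatch = System.Diagnostics.Stopwatch::StartNew();

ContosoOrderProcessor::process(_salesId);   // the code under test

stopwatch.Stop();
info(strFmt("process() took %1 ms", stopwatch.ElapsedMilliseconds));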