In my ten years of working with SQL, I've honed several critical skills that significantly enhance database management and data manipulation. Here’...
Thanks for taking the time to craft a post on SQL. I have to call out a couple of things.
1) I'm not sure which dialect of SQL you are using, but it is generally not recommended to use GUIDs or UUIDs as identity keys. There are many reasons, but the primary one is index fragmentation.
2) On #2 (IF EXISTS) you say it is a good way to prevent duplicates when doing batch updates; however, your code leads to an individual transaction for each update, which will be orders of magnitude slower than a set-based update. In a batch situation this should be a LEFT JOIN to the target table, inserting only records where the joined table's identifier is NULL. Ideally, duplicate records would be prevented by a unique key.
3) I know code examples are simplified, but your temp table would be better as a CTE. By using a temp table you lose any indexing optimisations and table locking, as the underlying source data could change after the temp table is built.
4) Your transaction example is valid; however, it is more relevant where you need to update multiple tables or records and system integrity demands that all succeed or all fail. Examples would be a) updating an order's status to shipped AND reducing quantities in product availability, or b) debiting my account and crediting your account in a bank transfer.
In your example, no other process can update the product table (and possibly not even read from it) until you choose to commit or roll back.
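To make point 4 concrete, example b) might be sketched like this in T-SQL (the table, columns, and account IDs are hypothetical, not from the article):

```sql
-- Hypothetical bank-transfer sketch: both updates commit together
-- or neither does. All names and values here are illustrative.
DECLARE @Amount MONEY = 100.00;

BEGIN TRAN;

UPDATE Accounts SET Balance = Balance - @Amount WHERE AccountId = 1;  -- debit
UPDATE Accounts SET Balance = Balance + @Amount WHERE AccountId = 2;  -- credit

-- Until this point the changes are invisible to other sessions,
-- and ROLLBACK TRAN would undo both updates together.
COMMIT TRAN;
```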
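And for point 3, a rough sketch of replacing a temp table with a CTE (assuming a hypothetical Orders table):

```sql
-- Instead of: SELECT ... INTO #RecentOrders FROM Orders WHERE ...
-- a CTE keeps the logic inline, so the optimiser can still use the
-- base table's indexes and reads a consistent view of the data.
WITH RecentOrders AS (
    SELECT CustomerId, OrderTotal
    FROM Orders
    WHERE OrderDate >= DATEADD(DAY, -30, GETDATE())
)
SELECT CustomerId, SUM(OrderTotal) AS Last30DayTotal
FROM RecentOrders
GROUP BY CustomerId;
```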
Hi Aaron, thanks for joining the discussion.
Regarding your first point, while using `BIGINT` can be sufficient for most situations when considering performance, there are cases where GUIDs can be beneficial. In complex scenarios that involve multiple tables, databases, or servers, GUIDs can provide a significant advantage. Additionally, many replication scenarios require GUIDs to ensure uniqueness across different systems.

I'm not saying don't include UUIDs, just don't use them as a primary key. In most scenarios the vast majority of records you are interested in are the most recent, so you want them grouped together in the index to make page writes faster and generate fewer page splits (and have a higher page-full percentage to save space). UUIDs by their nature are spread evenly over the pages, so you need more pages, each less full, and the entire index has to be traversed to get records with a common profile (normally temporal).

If you need a database-unique identifier, I would normally prefer TIMESTAMP to UUID.
Yes Aaron, you are right!
Agreed. Using UUIDs or GUIDs as clustered index keys can seriously degrade performance.
Yes, David, I appreciate your feedback. I will incorporate better practices in my next example. Thank you!
I think what Aaron is saying is that using GUID types as a primary key will create a fragmented clustered index. Primary keys should be a sequential type (INT, BIGINT); otherwise your joins will result in inefficient table scans. With that said, GUIDs, as you mentioned, are a great way to uniquely identify a record across different systems or environments. But it's more efficient to put those GUIDs in a separate column and keep a sequential type as the primary key.
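That pattern can be sketched as a table definition (hypothetical table, SQL Server syntax): the sequential surrogate stays the clustered primary key, and the GUID lives in its own uniquely-constrained column.

```sql
CREATE TABLE dbo.Customer (
    -- Sequential surrogate key: new rows append to the end of the
    -- clustered index, avoiding page splits and fragmentation.
    CustomerId BIGINT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED,
    -- Cross-system identifier kept out of the clustering key,
    -- but still enforced as unique.
    PublicId UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Customer_PublicId DEFAULT NEWID()
        CONSTRAINT UQ_Customer_PublicId UNIQUE,
    CustomerName NVARCHAR(200) NOT NULL
);
```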
Thanks for your clarification, I got the point now.
Point 2 is particularly useful in real-world scenarios. Imagine receiving a ton of unstructured information that needs to be organized for various ad hoc promotional events. In such cases, you might find yourself willing to trade off some performance in exchange for improved data accuracy :)
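For readers who want to see the set-based alternative Aaron describes in point 2, here is a rough sketch (the staging and target table names are made up for illustration):

```sql
-- Insert only the staged rows that do not already exist in the
-- target, in a single set-based statement instead of row-by-row
-- IF EXISTS checks. Names are illustrative.
INSERT INTO TargetCustomers (CustomerId, CustomerName)
SELECT s.CustomerId, s.CustomerName
FROM StagingCustomers AS s
LEFT JOIN TargetCustomers AS t
    ON t.CustomerId = s.CustomerId
WHERE t.CustomerId IS NULL;  -- no match in target => safe to insert

-- Ideally, back this up with a unique key so duplicates are
-- impossible even under concurrent loads:
-- ALTER TABLE TargetCustomers
--     ADD CONSTRAINT UQ_TargetCustomers_CustomerId UNIQUE (CustomerId);
```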
Really useful article and well written.
How about putting your transaction in a TRY/CATCH? Also, use SQL Query Analyzer, and nowadays ask your AI friend how to maximise efficiency. I've learnt more from doing this than from any book I've read (I've been working with MS SQL Server since v7).
Everyone has a different opinion on how to do things (just read the comments here). The key for me is readability (#3), it's like coding software, use comments, format your code, just remember that someone else will inevitably have to maintain your code for you at some point.
Well said, Neil. Using `TRY...CATCH` with transactions is indeed common in stored procedures, especially for handling issues like division by zero. However, my main point in this article is to emphasize that using `BEGIN TRAN` is a best practice for everyone. It helps prevent human errors during data manipulation, particularly when working long hours 😉

And what kind of SQL analyzer do you recommend? I usually only use the execution plan for analysis.
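One way to combine the two ideas, sketched in T-SQL (the table and the price update are invented for illustration, not taken from the article):

```sql
BEGIN TRY
    BEGIN TRAN;

    UPDATE Products
    SET Price = Price * 1.10
    WHERE Category = 'Books';

    COMMIT TRAN;
END TRY
BEGIN CATCH
    -- Roll back whatever the TRY block managed to do, then
    -- re-raise the original error for the caller.
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;
    THROW;
END CATCH;
```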
I use 'Estimated Plan' in Azure Data Studio, so same. There are other tools on the market though, I tend to stick with what I know works for me.
If you want to learn more about SQL, please comment here :)
Interested
I'd love to
Damn, that's so interesting, thanks a lot!
I will share more about SQL later, thanks for your support ;)
This is amazing. Thanks for sharing.
Wow, amazing.
Thanks!
Thank you for sharing