I love Visual Studio Code, and I also love finding ways to speed up my workflow.
As a SQL analyst, things like IntelliSense and autocompletion are an immense help: they speed up my coding, reduce the number of typos I write, and keep my code consistent between scripts.
In previous roles I was able to use native applications for my coding, like Oracle SQL Developer. These applications would automatically build an IntelliSense library and let me customize snippets of code.
In my current role, that isn't really an option. We use Redshift, which is accessed via a remote desktop connection and an application called Aginity. This setup isn't as mature as I'm used to: the remote box has noticeable millisecond lag, and Aginity is a very barebones application.
This led to me coding all of my SQL scripts on my local machine in Visual Studio Code and then copying commands into Aginity as I needed to run them.
But I missed IntelliSense...
So I set about replicating an IntelliSense library with two key constraints in mind: first, it couldn't connect directly to Redshift, for security reasons; second, I had to be able to share the library, and any updates to it, with any member of my team who wanted to use it.
The second constraint immediately ruled out VS Code's own snippet library, which is a JSON file you can populate with whatever you want. However, it is stored locally, meaning it can't be shared between machines or kept in sync across a team.
So I went looking in the Extension Marketplace and came across an Oracle SQL extension that lets you keep an IntelliSense and snippet library in a location of your choosing. As a Google company, this worked for us: I could store the JSON files on our Google Drive and everyone would be able to access them.
Yes, I know what you're thinking: an Oracle extension for Redshift? Well, the pickings are slim to zero for a Redshift extension that can do this, and Oracle is close enough in terms of syntax highlighting and functions. In my experience the only thing that doesn't work is the 'jump to table create' function, because we write 'create table if not exists' and the Oracle extension thinks all of our tables are called 'if'!
So the next problem was: how do I populate the snippet and IntelliSense libraries?
Well, the snippet library was easy: that's a library I would just fill with pre-fabricated code of my choosing. I had some chunks of SQL we always used, and I also found the ten most commonly used column names and made a snippet for each of those.
The syntax for a snippet is:
"snippet name": {"prefix": "what you will type", "body": ["what shows up"], "description": "insert whatever you want"},
For example:
"Car ID": {"prefix": "ca", "body": ["cardid ="], "description": "generic column (car id)"},
So when you start typing 'ca', it autofills 'cardid ='.
Brilliant, that's half the battle. Now to populate my IntelliSense library. The extension has a separate JSON file for you to fill in your table structure; it looked something like this:
{
  "Table Name": [
    "Column Name",
    "Column Name",
    "Column Name"
  ]
}
While I knew how to pull the table structure from our data warehouse, I had no clue how to take thousands of tables and millions of columns and turn them into a JSON package per table.
I know it can be done; I just lacked the skill. As it turned out, though, it wasn't needed: I found through testing that it wouldn't have been fit for purpose anyway.
The IntelliSense lookup only goes one level deep, so if you have databases, then schemas, then tables, it just doesn't work.
So I abandoned that idea... and went back to my snippet library. It worked for my snippets, so why not make a snippet for every single table?
And that's what I did. I was pragmatic and only made snippets for the core schemas we use, which still brings it in at a whopping 1000 lines of snippets!
However, my first pass encountered a few issues.
I had tables with the same name, and the snippet file needs unique names, so I cleaned those up by hand, appending the schema name on the end. I did it manually because there were so few; had there been more, I would have just renamed them all to 'table_name (schema_name)'.
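If you would rather script this step than do it by hand, here is a rough Python sketch of the same idea, assuming you have already exported a list of 'schema.table' names from the warehouse. The function name and the '(schema)' disambiguation rule are my own illustration, not part of the extension:

```python
import json

def table_snippets(full_names):
    """Build a VS Code-style snippet dictionary from 'schema.table' names.

    Hypothetical helper: duplicate table names are disambiguated by
    appending the schema in parentheses.
    """
    snippets = {}
    for full_name in full_names:
        schema, table = full_name.split(".", 1)
        # First occurrence keeps the bare table name; later duplicates
        # get the schema appended so every snippet name is unique.
        key = table if table not in snippets else f"{table} ({schema})"
        snippets[key] = {
            "prefix": table,          # what you type
            "body": [full_name],      # what gets inserted
            "description": f"table in schema {schema}",
        }
    return snippets

print(json.dumps(table_snippets(["sat.all_cars", "us.all_cars"]), indent=2))
```

You would then paste (or write) the resulting JSON into the shared snippet file.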
Some of our table names have long prefixes, like schema.name_of_source_system_database_then_table_name, so the snippet library struggled to look them up.
For example, if your table is called 'sat.cars_made_in_europe_failed_tests' and you refer to it as the 'failed_tests' table, typing 'failed_tests' just wouldn't work; you had to type out 'cars_made_in_europe_f' before the snippet library understood.
So I pulled all my data again with some custom case statements and substrings, which changed the prefix (the bit people type in) to 'failed_tests', and put the schema name into the description field in case there were two 'failed_tests' tables (one for the US, maybe). The actual body of the snippet didn't change: once the snippet knew what you were after, it still inserted 'sat.cars_made_in_europe_failed_tests'.
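The prefix-shortening logic can be sketched in a few lines. This is just an illustration with made-up prefix names; the real work was done with SQL case statements and substrings as described above:

```python
# Known source-system prefixes to strip (made-up examples)
KNOWN_PREFIXES = ("cars_made_in_europe_", "cars_made_in_us_")

def short_prefix(table_name):
    """Return the part people actually call the table by,
    stripping any known source-system prefix."""
    for prefix in KNOWN_PREFIXES:
        if table_name.startswith(prefix):
            return table_name[len(prefix):]
    return table_name

print(short_prefix("cars_made_in_europe_failed_tests"))  # failed_tests
```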
I also had to start thinking of a naming convention for my snippet prefixes. For columns, I mostly just set the prefix to the first five characters of the column name.
So for a column called car_start_date, the prefix would just be car_s, which worked because we had things like car_end_date that became car_e, and car_version which became car_v.
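That convention is simple enough to express as a one-liner (the function name here is my own):

```python
def column_prefix(column_name):
    # Naming convention: the snippet prefix is the first five characters
    return column_name[:5]

print(column_prefix("car_start_date"))  # car_s
print(column_prefix("car_version"))     # car_v
```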
However, for big chunks of code, I chose to prepend 'q' to all prefixes.
The reason being: if I have a table called 'all_cars' and a chunk of code I always use on that table, the snippet and the table will end up with the same or similar names. So if I want to insert the table name I type 'all_ca', but if I want to insert the chunk of code I type 'qall_cars'. This avoids any conflict. Also, q is an underused letter, and in my head I refer to it as 'quality of life' :)
And so, yeah, that's where I am now. I've been refining it for six months, and it's a huge benefit to my workflow. I created documentation for my team so they can set it up themselves, and I've listed all the shortcut codes they can enter to get certain columns or code chunks.
It's not high-tech, and at some point I'll need to refresh it for new tables, but it's a simple solution that benefits me a lot.
Note: you have to save your file as .sql before the SQL snippets will work. Sadly, you can't use them in a fresh, unsaved VS Code file.
The Technical Stuff
The Extension
Name: Language PL/SQL
Id: xyz.plsql-language
Description: PL/SQL language (Oracle) support
Version: 1.8.1
Publisher: xyz
VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=xyz.plsql-language
How do you get the table names?
Nearly all databases have a system table that lists all the tables available to you. In Oracle it's called DBA_OBJECTS, in T-SQL it's sys.objects, and in Redshift it's pg_table_def.
Make sure you have access to these tables; you may need to talk to your DBA to get access.
Fun fact: in Redshift you need to set your search path. For example, if you just executed select distinct(tablename) from pg_table_def you would only get the table names in the public and/or catalog schemas. To make it look at a specific schema, you need to run the following before your select statement:
set search_path to '$user','schemaname';commit;
This does mean that if you have multiple Redshift schemas you will need to run the query once per schema.
Then you need to gear your select statement to output the right JSON syntax. For Redshift I did:
set search_path to '$user','<schemaname>'
;
select '"'||tablename||'":{"prefix":"'||tablename||'","body":["<schemaname>.'||tablename||'"],"description":"<insert useful description here>"},'
from (
    select distinct p.tablename
    from pg_table_def p
    where p.schemaname not in ('pg_catalog','public')
) t
group by tablename
order by tablename;
See Helen's article on System Tables for more help.
Any questions let me know!