Posted in SQL Server, T-SQL

T-SQL Tuesday #143 – Short Code Examples

It’s another edition of T-SQL Tuesday! This month, John McCormack (b | t) asks us:

“What are your go-to handy short scripts?”

I don’t know if I necessarily have a full statement to share, but I definitely have a snippet that I use a lot and love:

IF @debug = 1
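To give a sense of how that snippet gets used, here’s a minimal sketch of the pattern (the variable and statement below are hypothetical, not from the original post):

DECLARE @debug bit = 1;
DECLARE @sql nvarchar(max) = N'DELETE FROM dbo.StagingOrders WHERE LoadDate < DATEADD(DAY, -30, GETDATE());';

IF @debug = 1
BEGIN
    -- Debug mode: print the statement instead of running it
    PRINT @sql;
END
ELSE
BEGIN
    EXEC sys.sp_executesql @sql;
END

Wrapping the destructive part of a script this way makes it easy to dry-run the logic before flipping @debug off.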
Continue reading “T-SQL Tuesday #143 – Short Code Examples”
Posted in SQL Server, VS Code

Setting up .NET Interactive Notebooks

As part of the June 2021 GroupBy Conference, I watched Rob Sewell (b|t) present a session called “Notebooks, PowerShell and Excel Automation”. What I ended up taking away was not the main point of the presentation (although that was some really cool stuff, as you would expect from Rob), but rather the tool that he used.

These days, we database developers tend to think of Azure Data Studio when we talk about notebooks, specifically SQL Notebooks. But what Rob used for his demos was new functionality within VS Code called .NET Interactive Notebooks. It was developed in collaboration with the Azure Data Studio team and it has support for SQL. But the cool thing that intrigued me was that a single notebook can support multiple kernels, unlike Azure Data Studio. Knowing how much we love our SQL and PowerShell, and this being a feature that many of us want to see in SQL Notebooks, I decided to try to set this up and poke around.

Continue reading “Setting up .NET Interactive Notebooks”
Posted in Docker, SQL Server, T-SQL Tuesday

T-SQL Tuesday #140 – What have I been doing with Containers

It’s another T-SQL Tuesday! Thanks to Anthony Nocentino (t | b) for hosting this month’s T-SQL Tuesday challenge. This month, he invited us to write about how we use containers.

I’ve just used SQL Server containers for testing. It’s essentially an isolated way of testing scripts against a specific database. But in the end, it’s really just been another instance of SQL Server for me to use.

And that’s it.

Continue reading “T-SQL Tuesday #140 – What have I been doing with Containers”
Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #138 – Managing Tech Changes

It’s the second Tuesday in May so it must be T-SQL Tuesday! Thanks to Andy Leonard (b | t) for hosting this month’s blog party. (As always, you can read up more about T-SQL Tuesdays here.) Here’s his invitation:

For this month’s T-SQL Tuesday, I want you to write about how you have responded – or plan to respond to – changes in technology.

There are a couple of ways tech changes on you:

First, there are the changes that occur because something is “fixed” or improved when the newest version is released. For example, upgrading SQL Server versions gives you changes like the new Cardinality Estimator and Intelligent Query Processing. These sorts of changes will either suddenly speed up your performance or wreak havoc on it, whether you expect them to or not. But to me, these are also the sort of changes that could happen if you suddenly have a new index, the amount of data in the table drastically changes, your dev dataset was significantly smaller than what was in production, your statistics are out of date, etc.
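For example, if the new Cardinality Estimator hurts a particular workload after an upgrade, there are knobs you can test with while you adjust; a quick sketch (the query and table are hypothetical, and the hints are for testing rather than a recommendation):

-- Keep the legacy cardinality estimator for the whole database after an upgrade
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Or test a single query against the legacy estimator
SELECT o.OrderId, o.OrderDate
FROM dbo.Orders AS o
WHERE o.OrderDate >= '2021-01-01'
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));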

All we can do in these cases is test what we write to the best of our ability and adjust based on the new conditions. Sometimes, we can predict what will happen and can plan for it before it goes into production. Other times, we will learn the hard way. But I feel like these are expected and we just need to roll with them as we find them.

Then you have the changes that come because what you had been working with has been completely restructured and reimagined. I’m looking at you, SSIS changes from SQL Server 2008 to 2012. The introduction of SSISDB and the idea of environments, along with some of the other improvements made in general, were pretty drastic changes, and if you had SSIS packages, you had to figure out what the changes were and how you were going to support them. I worked at one place where things were pretty much kept as they were. And I worked at another where the clients on SQL 2008 had SQL 2008 packages, but we updated the model and structure for SQL 2012 and higher so we could take advantage of the new structure. Being able to take advantage of environments meant that we had a really easy way of deploying to clients and setting things up as part of the install\upgrade process.
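For reference, here’s a rough sketch of what taking advantage of SSISDB environments looks like with the catalog stored procedures (the folder, environment, and variable names are made up, and the parameter list is from memory, so check it against the SSISDB documentation):

-- Create an environment in an existing SSISDB folder and add a variable to it
EXEC SSISDB.catalog.create_environment
     @folder_name = N'ClientPackages',
     @environment_name = N'ClientA_Prod';

EXEC SSISDB.catalog.create_environment_variable
     @folder_name = N'ClientPackages',
     @environment_name = N'ClientA_Prod',
     @variable_name = N'SourceConnectionString',
     @data_type = N'String',
     @sensitive = 0,
     @value = N'Data Source=ClientAServer;Initial Catalog=ClientSource;Integrated Security=SSPI;',
     @description = N'Connection string for the client source database';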

For changes like these, we need to understand what the changes are and how we need to adapt our current code to those changes. We usually get some warnings about these features so the key is to try to get ahead of the releases or make sure we can test them in a dev environment before the client decides to upgrade on us and we find ourselves learning in production.

And then there are the shiny brand new features or technology. These are the things that the latest buzz words are built around.

Sometimes I feel like an old curmudgeon when it comes to new tech. I feel like I come across as anti-new-tech when it’s proposed as a solution. If it’s not something that I know, I want to understand it first. But the reason I feel like I come across as opposed to the changes is that I will hear about it in terms of “we’re having problems with ABC, so let’s just replace it with FGH.” What I don’t hear is how we have tried to fix ABC, or whether the problem with ABC is really something that FGH solves. Or I’ll hear a misunderstanding of how FGH works, and it becomes clear that not only do we not understand the problem we’re trying to solve, we also don’t understand how the new tech will actually be able to help. It sounds better to say we’re implementing the new tech and we’re tech forward. But this approach means we find ourselves not solving the initial problem while creating new ones.

Don’t get me wrong – the capabilities of the new tech and the problems that they can solve are really cool and exciting. But we end up treating the new tech as the solution rather than the tool used to implement the solution. So we never get around to the underlying issues and we blame the new tech for not solving the issues rather than our lack of understanding of what we need to set up for the new tech to help.

The solution here is to go back to understanding the problem we’re trying to solve. This is where we need to learn what the new technology changes are for, understand how they work, think about the problem they solve, and then figure out if they will work for the problems we’re trying to solve. In many ways, I don’t always get a chance to play with the new tech because I’m too busy trying to see if I can solve the problem with what I already have available. But the key I find in working with these sorts of new tech changes is education. We have to educate ourselves as to what the new tech is and then educate others.

In many ways, education is the key to all of the different ways that tech changes on us. That’s why I’m here and that’s why I try to go to the different Data Platform events when I can. We have to be open to learning about and embracing these changes. In the end, we have no choice if we’re going to stay on top of our game. But it always comes down to understanding the problem we’re trying to solve and what role the changes in tech have to play in it if we’re going to be successful.

Posted in Azure Data Studio, Source Control

Azure Data Studio – Multiple Repos and Explorer

If you read my T-SQL Tuesday post from this month, I mentioned that I’ve been using Azure Data Studio on a daily basis. One of the things that I find I use it for the most is source control with Git. I’m incredibly surprised by this. Maybe it comes from years of using Management Studio and not being able to check in code from the tool that I’m using to write it. (Or maybe I’ve been able to do that all this time and no one told me…?)

As I’m using it, I found two things that have helped me out. So naturally, I thought I’d share.

Continue reading “Azure Data Studio – Multiple Repos and Explorer”
Posted in SQL Server

T-SQL Tuesday #137 – Using Notebooks Every Day

Happy T-SQL Tuesday! Thanks to Steve Jones (t|b) for hosting this month. (And thanks for all you do for coordinating the T-SQL Tuesdays!) If you’re interested, you can read the whole invite here.

Our challenge this month is:

“For this month’s T-SQL Tuesday, I want you to write about how you have used, or would like to use, a Jupyter notebook.”

I love the potential I see with notebooks. I love the blend of documentation and code. I love the fact that you can save the results with the query so you can easily capture the before/after results and see them in the same document. When I can use notebooks for a blog post or a presentation, I do it because I like what I see so far.

Continue reading “T-SQL Tuesday #137 – Using Notebooks Every Day”
Posted in Professional Development, WIT

A Woman in SQL, 2021

March 5th was Data Platform WIT Day, a day of sessions by women in the data platform community. If you missed it, you can catch all of the sessions here. I had the honor of being part of the mentoring panel. I really enjoyed being on this panel with these women: Leslie Andrews (t|b), Gilda Alvarez (t), Deepthi Goguri (t|b), and Shabnam Watson (t|b). All of us came here from different backgrounds and experiences, and our careers have been affected in so many different ways by formal and informal mentorships.

Photo by CoWomen on Unsplash

There is a phrase that Rie (Irish) Merritt (t) says often: “Lift as you climb”. (It’s in her Keynote from March 5th.) This is about finding ways to bring women along as you build your career, to help them build theirs. This is something that I believe in, but I feel like I constantly fall short of it. I want to be someone who doesn’t just “talk the talk” but also “walks the walk.”

I seem to keep getting drawn into the topic of mentorship over the past couple of years, so perhaps it’s a sign that this is an area where I can be useful and help other women in our industry. Maybe this is a way I can help “pay it forward”.

Continue reading “A Woman in SQL, 2021”
Posted in T-SQL, T-SQL Tuesday

T-SQL Tuesday #136 – Favorite Data Types

Happy T-SQL Tuesday!

This month is hosted by Brent Ozar (t | b). Brent asked us to talk about our favorite or least favorite data type. You can check out the invitation here.

I never thought about whether I had a favorite data type. I mean, I’m a DBA so naturally, my favorite data type depends on the data.

But I think I’ll “wax poetic” about datetime and all of its varieties. When I first got started with SQL Server back in the SQL 6.5 days, we really just had datetime, which was the date and time. If you needed just the time, you ignored the date portion. If you needed just the date, you put midnight (00:00:00.000) for the time. And when you dealt with the time portion, you dealt with values being rounded to increments of .000, .003, or .007 seconds.
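A quick sketch of that rounding behavior (the comments show the results I’d expect based on the documented .000/.003/.007 increments):

SELECT CAST('2021-03-09 12:00:00.001' AS datetime) AS R1, -- expect .000
       CAST('2021-03-09 12:00:00.005' AS datetime) AS R2, -- expect .007
       CAST('2021-03-09 12:00:00.009' AS datetime) AS R3; -- expect .010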

Now, we have all sorts of options – datetime, smalldatetime, datetime2, datetimeoffset, date, and time. The problem is when to use each of these.

The easiest ones are date and time. Now if you just need the date portion, you just use the date data type, and you don’t have to deal with midnight or ignore the time portion of a datetime. If you just need the time, again, you don’t have to use some random substitute date or ignore the date portion because you can use the time data type. The date type can hold any calendar date from January 1, 0001 through December 31, 9999, and the time type can hold any time from 00:00:00.0000000 through 23:59:59.9999999.
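A tiny sketch of what that looks like (the values are arbitrary):

DECLARE @justTheDate date    = '2021-03-09',
        @justTheTime time(7) = '23:59:59.9999999';

SELECT @justTheDate AS JustTheDate,
       @justTheTime AS JustTheTime;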

It starts to get tricky from here. If you have to deal with multiple time zones, datetimeoffset is going to be your best bet. But then you’re having to deal with time zones, and we know how well we deal with those… or rather, how we don’t deal well with them. *Checks calendar and sees Daylight Saving Time for the US coming in a few days…*
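As a quick sketch of why datetimeoffset helps, here’s the same instant viewed with two different offsets (assuming the server knows the Windows time zone names used by AT TIME ZONE):

DECLARE @utc datetimeoffset = '2021-03-09 18:00:00 +00:00';

SELECT @utc AS UtcTime,
       @utc AT TIME ZONE 'Eastern Standard Time' AS EasternTime; -- same instant, different offset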

So now we’re left with smalldatetime, datetime, and datetime2. If you’re dealing with temporal tables, SQL Server says you have to use datetime2 for the system start and end times. Otherwise, datetime2 is really a combination of date (all dates) and time (all times, out to 100 nanoseconds). Smalldatetime can hold any date from January 1, 1900 through June 6, 2079 (really – June 6??) and the time portion is only accurate to the minute. Datetime can hold any date from January 1, 1753 through December 31, 9999, with times rounded to those .003-second increments.
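For the temporal table case, here’s a minimal sketch of what that datetime2 requirement looks like (the table and column names are made up):

CREATE TABLE dbo.ProductPrice
(
    ProductId int           NOT NULL PRIMARY KEY,
    Price     decimal(9, 2) NOT NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductPriceHistory));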

The fun really starts when you realize that what SQL Server stores internally isn’t what we see. We just see a formatted date and time based on the system default, or whatever format we have chosen if we specified one somewhere. Datetime and smalldatetime can be converted to integers, and all of these values can be converted to varbinary. What’s even cooler is you can start to see how these different data types are related, regardless of the formatting that gets pulled back.

[Images from the original post: a query setting the different date and time types to the current date and time, and the results of that query]
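Since those screenshots don’t carry over here, here’s a hypothetical query along the same lines (my reconstruction, not the exact query from the post):

DECLARE @now datetime = GETDATE();

SELECT CAST(@now AS datetime)      AS [datetime],
       CAST(@now AS smalldatetime) AS [smalldatetime],
       CAST(@now AS datetime2)     AS [datetime2],
       CAST(@now AS date)          AS [date],
       CAST(@now AS time)          AS [time],
       CAST(@now AS int)           AS [datetime as int],       -- roughly the days since 1900-01-01
       CAST(@now AS varbinary(8))  AS [datetime as varbinary]; -- the internal storage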

Which date type to choose really depends on the dates and times you need to store and how exact you need to be. You need to understand your data and what you are really trying to capture. The problem, of course, is when you are stuck with legacy systems and can’t replace that datetime field with a date, even though a date is truly all that you need.

But if I had to choose, what’s my favorite date? Well, I’ll just leave you with this:

Thanks for the great topic, Brent! Can’t wait to read what data types everyone else likes or dislikes…

Posted in SQL Server, T-SQL, T-SQL Tuesday

T-SQL Tuesday #135 – Tools of the Trade

It’s another T-SQL Tuesday! Thanks to Mikey Bronowski (t|b) for hosting. Our challenge:

I would like you to write about the tools that help you at work, those that helped you the most or were the most effective. 

Whenever there is talk of switching tools, I always hear someone say, “once we are on <insert tool>, we won’t have X problem anymore.” This statement is one of my pet peeves. It’s not the tool that solves the problem. It’s making sure we implement and use the tools properly to help solve the problem. The tools themselves are just that – tools. If we don’t understand the problem, then we’re just shifting the problem to another tool.

There are so many tools that get implemented at work these days. While I use them to varying degrees, I am not always using these tools on a daily basis. And in many cases, I’m not the person responsible for setting them up. I’m a SQL Developer, so my daily toolset is much smaller.

Honestly, the vast majority of my time is split between Management Studio (SSMS) and Azure Data Studio. I’m pretty simple\straightforward this way. I started playing a lot more with Azure Data Studio over the past year, but I find I’m not able to make the switch to using it full time. It really depends on the task that I need to do.

So what tasks do I do often and which tool do I use?

Continue reading “T-SQL Tuesday #135 – Tools of the Trade”
Posted in SQL Server, T-SQL

T-SQL Tuesday #134 – Give me a break

Welcome to the first T-SQL Tuesday of 2021!

2020 ended up being a year for the record books and 2021 is already making its mark. So I appreciate the topic that our host, James McGillivray (b|t), has given us.

Breaks are critical for our mental health. Write a post about relaxation techniques, dream destinations, vacation plans or anything else relating to taking a break for your own mental health.

Continue reading “T-SQL Tuesday #134 – Give me a break”