It’s another T-SQL Tuesday! Thanks to Anthony Nocentino (t | b) for hosting this month’s T-SQL Tuesday challenge. This month, he invited us to write about how we use containers.
I’ve only used SQL Server containers for testing. They give me an isolated way to test scripts against a specific database. But in the end, a container has really just been another instance of SQL Server for me to use.
It’s the second Tuesday in May so it must be T-SQL Tuesday! Thanks to Andy Leonard (b | t) for hosting this month’s blog party. (As always, you can read up more about T-SQL Tuesdays here.) Here’s his invitation:
For this month’s T-SQL Tuesday, I want you to write about how you have responded – or plan to respond to – changes in technology.
There are a couple of ways tech changes on you:
First, there are the changes that occur because something is “fixed” or improved when the newest version is released. For example, upgrading SQL Server versions gives you changes to things like the cardinality estimator and Intelligent Query Processing. These sorts of changes will either suddenly speed up or wreak havoc on your performance, whether you expect them to or not. But to me, these are also the sorts of changes that could happen if you suddenly have a new index, or the amount of data in the table drastically changes, or your dev dataset was significantly smaller than what was in production, or your statistics are out of date, etc.
All we can do in these cases is test what we write to the best of our ability and adjust based on the new conditions. Sometimes, we can predict what will happen and can plan for it before it goes into production. Other times, we will learn the hard way. But I feel like these are expected and we just need to roll with these as we find them.
Then you have the changes because what you had been working with has been completely restructured and reimagined. I’m looking at you, SSIS changes from SQL Server 2008 to 2012. The introduction of SSISDB and the idea of environments, along with some of the other improvements made in general, were pretty drastic changes, and if you had SSIS packages, you had to figure out what the changes were and how you were going to support them. I worked at one place where things were pretty much kept as they were. And I worked at another where the clients on SQL 2008 had SQL 2008 packages, but we updated the model and structure for SQL 2012 and higher so we could take advantage of the new structure. Being able to take advantage of environments meant that we had a really easy way of deploying to clients and setting things up as part of the install/upgrade process.
For changes like these, we need to understand what the changes are and how we need to adapt our current code to those changes. We usually get some warnings about these features so the key is to try to get ahead of the releases or make sure we can test them in a dev environment before the client decides to upgrade on us and we find ourselves learning in production.
And then there are the shiny brand new features or technology. These are the things that the latest buzz words are built around.
Sometimes I feel like an old curmudgeon when it comes to new tech. I feel like I come across as anti-new-tech when it’s proposed as a solution. If it’s not something that I know, I want to understand it first. But the reason I feel like I come across as opposed to the changes is I will hear about it in terms of “we’re having problems with ABC so let’s just replace it with FGH”. What I don’t hear is how have we tried to fix ABC or is the problem with ABC really something that is solved by FGH. Or I’ll hear a misunderstanding of how FGH works so it becomes clear that not only do we not understand what the problem is that we’re trying to solve but also we don’t understand how the new tech will actually be able to help. It sounds better to say we’re implementing the new tech and we’re tech forward. But this approach means that we find ourselves not solving the initial problem along with creating new ones.
Don’t get me wrong – the capabilities of the new tech and the problems that they can solve are really cool and exciting. But we end up treating the new tech as the solution rather than the tool used to implement the solution. So we never get around to the underlying issues and we blame the new tech for not solving the issues rather than our lack of understanding of what we need to set up for the new tech to help.
The solution here is to go back to understanding the problem we’re trying to solve. This is where we need to learn what the new technology changes are for, understand how they work, think about the problem they solve, and then figure out if they will work for the problems we’re trying to solve. In many ways, I don’t always get a chance to play with the new tech because I’m too busy trying to see if I can solve the problem with what I already have available. But the key I find working with these sorts of new tech changes is education. We have to educate ourselves as to what the new tech is and then educate others as to what it is.
In many ways, education is the key to all of the different ways that tech changes on us. That’s why I’m here and that’s why I try to go to the different Data Platform events when I can. We have to be open to learning about and embracing these changes. In the end, we have no choice if we’re going to stay on the top of our game. But it always comes down to understanding the problem we’re trying to solve and what role the changes in tech have to play in it if we’re going to be successful.
If you read my T-SQL Tuesday post from this month, I mentioned that I’ve been using Azure Data Studio on a daily basis. One of the things that I find I use it for the most is source control with Git. I’m incredibly surprised by this. Maybe it comes from years of using Management Studio and not being able to check in code from the tool that I’m using to write it. (Or maybe I’ve been able to do that all this time and no one told me…?)
As I’m using it, I found two things that have helped me out. So naturally, I thought I’d share.
Happy T-SQL Tuesday! Thanks to Steve Jones (t|b) for hosting this month. (And thanks for all you do for coordinating the T-SQL Tuesdays!) If you’re interested, you can read the whole invite here.
Our challenge this month is:
“For this month’s T-SQL Tuesday, I want you to write about how you have used, or would like to use, a Jupyter notebook.”
I love the potential I see with notebooks. I love the blend of documentation and code. I love the fact that you can save the results with the query so you can easily capture the before/after results and see them in the same document. When I can use notebooks for a blog post or a presentation, I do it because I like what I see so far.
March 5th was Data Platform WIT Day, a day of sessions by women in the data platform community. If you missed it, you can catch all of the sessions here. I had the honor of being part of the mentoring panel. I really enjoyed being on this panel with these women: Leslie Andrews (t|b), Gilda Alvarez (t), Deepthi Goguri (t|b), and Shabnam Watson (t|b) . All of us came here from different backgrounds and experiences and our careers have been affected in so many different ways from formal and informal mentorships.
There is a phrase that Rie (Irish) Merritt (t) says often: “Lift as you climb”. (It’s in her keynote from March 5th.) This is about finding ways to bring women along as you build your career to help them build theirs. This is something that I believe in, but I feel like I constantly fall short in it. I want to be someone who doesn’t just “talk the talk” but also “walks the walk.”
I seem to have been drawn into the topic of mentorship over the past couple of years, so perhaps it’s a sign that this is an area where I can be useful and help other women in our industry. Maybe this is a way I can help “pay it forward”.
This month is hosted by Brent Ozar (t | b). Brent asked us to talk about our favorite or least favorite data type this month. You can check out the invitation here.
I never thought about whether I had a favorite data type. I mean, I’m a DBA so naturally, my favorite data type depends on the data.
But I think I’ll “wax poetic” about datetimes and all of the varieties. When I first got started with SQL Server back in the SQL 6.5 days, we really just had datetime, which was the date and time. If you needed just the time, you ignored the date portion. If you needed just the date, you put midnight (00:00:00.000) for the time. And when you dealt with the time portion, you dealt with values being rounded to increments of .000, .003, or .007 seconds.
Now, we have all sorts of options – datetime, smalldatetime, datetime2, datetimeoffset, date, and time. The problem is when to use each of these.
The easiest ones are date and time. If you just need the date portion, you use the date data type and you don’t have to deal with midnight or ignore the time portion of a datetime. If you just need the time, again, you don’t have to use some random substitute date or ignore the date portion because you can use the time data type. Date can hold any calendar date from January 1, 0001 through December 31, 9999, and time can hold any time from 00:00:00.0000000 through 23:59:59.9999999.
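As a quick sketch (the values here are just for illustration), the difference looks like this:

```sql
-- date and time as separate types: no placeholder midnight, no ignored portion.
DECLARE @JustTheDate date    = '2021-03-09';
DECLARE @JustTheTime time(7) = '23:59:59.9999999'; -- 7 fractional digits = 100 ns precision

SELECT @JustTheDate AS TheDate, @JustTheTime AS TheTime;

-- The old datetime pattern for comparison: midnight stands in for "date only".
DECLARE @OldStyle datetime = '2021-03-09 00:00:00.000';
SELECT @OldStyle AS DateOnlyTheOldWay;
```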
It starts to get tricky from here. If you have to deal with multiple time zones, datetimeoffset is going to be your best bet. But then you’re having to deal with time zones, and we know how well we deal with those… or rather, don’t deal well with them. *Checks calendar and sees Daylight Saving Time for the US coming in a few days…*
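If you do have to juggle zones, here’s a small sketch (the zone names are the Windows time zone names SQL Server uses, and AT TIME ZONE requires SQL Server 2016 or later):

```sql
-- datetimeoffset stores the UTC offset alongside the value.
DECLARE @Meeting datetimeoffset = '2021-03-09 09:00:00 -05:00'; -- 9 AM Eastern (standard time)

-- AT TIME ZONE applies the target zone's rules, including Daylight Saving transitions.
SELECT @Meeting                                      AS MeetingEastern,
       @Meeting AT TIME ZONE 'Pacific Standard Time' AS MeetingPacific;
```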
So now we’re left with smalldatetime, datetime, and datetime2. If you’re dealing with temporal tables, SQL Server says you have to use datetime2 for the system start and end times. Otherwise, datetime2 is really a combination of date (all dates) and time (all times, out to 100 nanoseconds). Smalldatetime can hold any date from January 1, 1900 through June 6, 2079 (really – June 6??) and the time portion is accurate to the minute. Datetime can hold any date from January 1, 1753 through December 31, 9999 and times accurate to about 3 milliseconds.
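One way to see the precision differences side by side (the value is just an example) is to assign the same high-precision value to each type and watch how each one rounds it:

```sql
-- Assign one high-precision value and let each type round it its own way.
DECLARE @Value datetime2(7) = '2021-03-09 12:34:56.7654321';

SELECT @Value                        AS DateTime2Value,     -- keeps all 7 fractional digits
       CAST(@Value AS datetime)      AS DateTimeValue,      -- rounds to .000/.003/.007 seconds
       CAST(@Value AS smalldatetime) AS SmallDateTimeValue; -- rounds to the nearest minute
```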
The fun really starts when you realize that what SQL Server stores internally isn’t what we see. We just see a formatted date and time based on the default of the system, or whatever format we have chosen if we specified one somewhere. Datetime and smalldatetime can be converted to integers, and these values can also be converted to varbinary so you can see the actual bytes being stored. What’s even cooler is you can start to see how these different data types are related, regardless of the formatting that gets pulled back.
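A quick sketch of peeking behind the formatting (using midnight so the conversion to int doesn’t round):

```sql
-- The same datetime value as an integer and as raw bytes.
DECLARE @dt datetime = '2021-03-09'; -- midnight, so CAST to int won't round up

SELECT CAST(@dt AS int)          AS DaysSince1900, -- datetime stores days since 1900-01-01
       CAST(@dt AS varbinary(8)) AS RawBytes;      -- 4 bytes of days + 4 bytes of 1/300-second ticks
```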
Which date to choose really depends on the dates and times you need to store and how exact you need to be. You need to understand your data and what you are really trying to capture. The problem of course is when you are stuck with legacy systems so you can’t replace that datetime field with a date because that truly is all that you need.
But if I had to choose, what’s my favorite date? Well, I’ll just leave you with this:
Thanks for the great topic, Brent! Can’t wait to read what data types everyone else likes or dislikes…
It’s another T-SQL Tuesday! Thanks to Mikey Bronowski (t|b) for hosting. Our challenge:
I would like you to write about the tools that help you at work, those that helped you the most or were the most effective.
Whenever there is talk of switching tools, I always hear someone say, “once we are on <insert tool>, we won’t have X problem anymore.” This statement is one of my pet peeves. It’s not the tool that solves the problem. It’s making sure we implement and use the tools properly to help solve the problem. The tools themselves are just that – tools. If we don’t understand the problem, then we’re just shifting the problem to another tool.
There are so many tools that get implemented at work these days. While I use them to varying degrees, I am not always using these tools on a daily basis. And in many cases, I’m not the person responsible for setting them up. I’m a SQL Developer, so my daily toolset is much smaller.
Honestly, the vast majority of my time is split between Management Studio (SSMS) and Azure Data Studio. I’m pretty simple/straightforward this way. I started playing a lot more with Azure Data Studio over the past year, but I find I’m not able to make the switch to using it full time. It really depends on the task that I need to do.
So what tasks do I do often and which tool do I use?
I was privileged to be able to attend the Massachusetts Conference for Women on December 10, 2020. I went last year and was inspired from the moment I stepped foot in the convention center. (Here’s my blog post about that event.) I was curious to see what would happen this year when it was moved to a virtual event.
I think I got my answer watching the number of keynote attendees steadily climb to over 10,000 within 10-15 minutes. Just seeing so many people say hello and share where they were from was incredible to watch.
I was hoping to catch most of the sessions in the week after the conference was over, but I wasn’t able to. Still, I caught a fair number of sessions as well as the two keynotes. I thought I’d share some of the quotes that inspired and challenged me the most, along with some context where I can. Some of these just stand on their own.
This time last year, I was so excited for 2020. I had a lot of plans of stepping out in the community. I was looking at all of the SQL Saturdays around the country and even a couple of international events to figure out which ones I wanted to submit to in the hopes of being able to attend.
And for the first two months, things seemed good. And then the pandemic hit. I’ve said this in other posts, but everything that I was involved in was suddenly canceled – not just #sqlfamily events.
But looking back, it ended up being a pretty productive year in some ways.
19 Speaking Engagements:
8 User Groups (virtual and local gone virtual)
4 SQL Saturdays
1 PASS Summit
Data Platform Summit
Data Platform Discovery Days
Mentor in the first New Stars of Data
19 Blog Posts (not including this one)
1 Job Change
1 MVP Award – Data Platform
In some ways, I was able to be a part of things that I probably would not have had an opportunity to be a part of otherwise because some of these were new virtual events.
Like so many of us, I am heartbroken about the dissolution of PASS. Getting involved with SQL Saturdays and attending the past several SQL Saturdays really helped me invest in my career in ways that I can’t express. It’s actually fairly bittersweet and poetic that the last SQL Saturday I spoke at was Albany, which was also the first SQL Saturday I spoke at. I am so appreciative of all those who are stepping forward to fill in the void. I’m looking forward to finding a way to jump in and help where I am able.
It will be interesting to see what 2021 will bring. I already know that I will be speaking at a couple of user groups over the next several months, including my local group, NESQL, in January. I will be a mentor for the New Stars of Data again so I’m looking forward to working with a new speaker. I have no clue what sort of additional speaking opportunities will be coming up. Even if I don’t end up speaking much while things are finding their new ground, I have a lot of things that I’ve been hoping to explore more and blog about. Maybe 2021 will be the year I spend more time writing.
While this year has been a challenge on many levels, I am still grateful for my friends in #sqlfamily. It gives me many reasons to be hopeful for the things to come in 2021.
Wishing you and your families a very Happy New Year!