Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #138 – Managing Tech Changes

It’s the second Tuesday in May, so it must be T-SQL Tuesday! Thanks to Andy Leonard (b | t) for hosting this month’s blog party. (As always, you can read more about T-SQL Tuesdays here.) Here’s his invitation:

For this month’s T-SQL Tuesday, I want you to write about how you have responded – or plan to respond to – changes in technology.

There are a couple of ways tech changes on you:

First, there are the changes that occur because something is “fixed” or improved when the newest version is released. For example, upgrading SQL Server versions brings changes to things like the Cardinality Estimator and new features like Intelligent Query Processing. These sorts of changes will either suddenly speed up your performance or wreak havoc on it, whether you expect them to or not. But to me, these are also the sorts of changes that could happen if you suddenly have a new index, the amount of data in the table drastically changes, your dev dataset was significantly smaller than what was in production, your statistics are out of date, and so on.

All we can do in these cases is test what we write to the best of our ability and adjust based on the new conditions. Sometimes, we can predict what will happen and can plan for it before it goes into production. Other times, we will learn the hard way. But I feel like these are expected, and we just need to roll with them as we find them.
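
As a quick sketch of what that testing can look like when it’s the Cardinality Estimator that changed: on SQL Server 2016 and later, you can run the same query with and without the legacy estimator and compare the plans (dbo.Orders here is just a hypothetical table for the sketch):

    -- Run under the current (new) cardinality estimator...
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '2021-01-01';

    -- ...then force the legacy estimator and compare the two plans.
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '2021-01-01'
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

If the legacy behavior is what you need for now, the database scoped configuration LEGACY_CARDINALITY_ESTIMATION can buy you time to fix the query properly.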

Then you have the changes because what you had been working with has been completely restructured and reimagined. I’m looking at you, SSIS changes from SQL Server 2008 to 2012. The introduction of SSISDB and the idea of environments, along with some of the other improvements made in general, were pretty drastic changes, and if you had SSIS packages, you had to figure out what the changes were and how you were going to support them. I worked at one place where things were pretty much kept as they were. And I worked at another where the clients on SQL 2008 had SQL 2008 packages, but we updated the model and structure for SQL 2012 and higher so we could take advantage of the new structure. Being able to take advantage of environments meant that we had a really easy way of deploying to clients and setting things up as part of the install\upgrade process.

For changes like these, we need to understand what the changes are and how we need to adapt our current code to those changes. We usually get some warnings about these features so the key is to try to get ahead of the releases or make sure we can test them in a dev environment before the client decides to upgrade on us and we find ourselves learning in production.

And then there are the shiny brand new features or technology. These are the things that the latest buzz words are built around.

Sometimes I feel like an old curmudgeon when it comes to new tech. I feel like I come across as anti-new-tech when it’s proposed as a solution. If it’s not something that I know, I want to understand it first. But the reason I feel like I come across as opposed to the changes is that I will hear about them in terms of “we’re having problems with ABC so let’s just replace it with FGH.” What I don’t hear is how we’ve tried to fix ABC, or whether the problem with ABC is really something that FGH solves. Or I’ll hear a misunderstanding of how FGH works, so it becomes clear that not only do we not understand the problem we’re trying to solve, but we also don’t understand how the new tech will actually be able to help. It sounds better to say we’re implementing the new tech and we’re tech forward. But this approach means that we find ourselves not solving the initial problem while creating new ones.

Don’t get me wrong – the capabilities of the new tech and the problems it can solve are really cool and exciting. But we end up treating the new tech as the solution rather than the tool used to implement the solution. So we never get around to the underlying issues, and we blame the new tech for not solving them rather than our own lack of understanding of what we need to put in place for the new tech to help.

The solution here is to go back to understanding the problem we’re trying to solve. This is where we need to learn what the new technology changes are for, understand how they work, think about the problems they were designed to solve, and then figure out if they will work for the problems we’re trying to solve. In many ways, I don’t always get a chance to play with the new tech because I’m too busy trying to see if I can solve the problem with what I already have available. But the key I find to working with these sorts of new tech changes is education. We have to educate ourselves about what the new tech is and then educate others about it as well.

In many ways, education is the key to all of the different ways that tech changes on us. That’s why I’m here and that’s why I try to go to the different Data Platform events when I can. We have to be open to learning about and embracing these changes. In the end, we have no choice if we’re going to stay on top of our game. But it always comes down to understanding the problem we’re trying to solve and what role the changes in tech play in it if we’re going to be successful.

Posted in T-SQL, T-SQL Tuesday

T-SQL Tuesday #136 – Favorite Data Types

Happy T-SQL Tuesday!

This month is hosted by Brent Ozar (t | b). Brent asked us to talk about our favorite or least favorite data type this month. You can check out the invitation here.

I never thought about whether I had a favorite data type. I mean, I’m a DBA so naturally, my favorite data type depends on the data.

But I think I’ll “wax poetic” about datetimes and all of their varieties. When I first got started with SQL Server back in the SQL 6.5 days, we really just had datetime, which was the date and time together. If you needed just the time, you ignored the date portion. If you needed just the date, you put midnight (00:00:00.000) for the time. And when you dealt with the time portion, you dealt with values being rounded to increments of .003 seconds (.000, .003, or .007).
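
A quick illustration of that rounding, since it still trips people up today:

    -- datetime rounds to increments of .000, .003, or .007 seconds
    SELECT CAST('2021-05-11 23:59:59.998' AS datetime) AS RoundsToPoint997,
           CAST('2021-05-11 23:59:59.999' AS datetime) AS RoundsToNextDay;  -- 2021-05-12 00:00:00.000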

Now, we have all sorts of options – datetime, smalldatetime, datetime2, datetimeoffset, date, and time. The problem is when to use each of these.

The easiest ones are date and time. Now if you just need the date portion, you just use the date data type and you don’t have to deal with midnight or ignore the time portion of the datetime. If you just need the time, again you don’t have to use some random substitute date or ignore the date portion because you can use the time data type. Date can hold any calendar date from January 1, 0001 through December 31, 9999, and time can hold any time from 00:00:00.0000000 through 23:59:59.9999999.
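
A minimal sketch of the two on their own:

    DECLARE @JustTheDate date    = '2021-05-11',
            @JustTheTime time(7) = '23:59:59.9999999';

    SELECT @JustTheDate AS TheDate,  -- no midnight to ignore
           @JustTheTime AS TheTime;  -- no fake date to carry around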

It starts to get tricky from here. If you have to deal with multiple time zones, datetimeoffset is going to be your best bet. But then you’re having to deal with time zones, and we know how well we deal with those… or rather don’t deal well with them. *Checks calendar and sees Daylight Saving Time for the US coming in a few days…*
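
Here’s a small sketch anyway: SWITCHOFFSET shifts a datetimeoffset value to a fixed offset, while AT TIME ZONE (SQL Server 2016 and later) knows the actual time zone rules, Daylight Saving Time included.

    DECLARE @Meeting datetimeoffset(0) = '2021-05-11 14:00:00 -04:00';

    SELECT @Meeting                                      AS EasternMeeting,
           SWITCHOFFSET(@Meeting, '+00:00')              AS SameMomentUTC,
           @Meeting AT TIME ZONE 'Pacific Standard Time' AS SameMomentPacific;  -- DST-aware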

So now we’re left with smalldatetime, datetime, and datetime2. If you’re dealing with temporal tables, SQL Server says you have to use datetime2 for the system start and end times. Otherwise, datetime2 is really a combination of date (all dates) and time (all times out to 100 nanoseconds). Smalldatetime can hold any date from January 1, 1900 through June 6, 2079 (really – June 6??) and the time portion is accurate to the minute. Datetime can hold any date from January 1, 1753 through December 31, 9999 and times rounded to increments of .003 seconds.
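
To see the precision differences side by side, a small sketch:

    DECLARE @Now datetime2(7) = SYSDATETIME();

    SELECT @Now                        AS AsDatetime2,     -- out to 100 nanoseconds
           CAST(@Now AS datetime)      AS AsDatetime,      -- rounded to .000/.003/.007 seconds
           CAST(@Now AS smalldatetime) AS AsSmalldatetime; -- rounded to the minute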

The fun really starts when you realize that what SQL Server stores internally isn’t what we see. We just see a formatted date and time based on the system default or whatever format we’ve chosen to specify somewhere. Datetime and smalldatetime can be converted to integers, and these values can also be converted to varbinary to peek at what’s actually stored. What’s even cooler is you can start to see how these different data types are related, regardless of the formatting that gets pulled back.

[Image: query setting each of the different date and time data types to the current date and time]
[Image: the results of the previous query]
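
The original screenshot isn’t reproduced here, but a sketch along those lines might look something like this (your values will obviously differ):

    DECLARE @dt  datetime      = GETDATE(),
            @sdt smalldatetime = GETDATE(),
            @dt2 datetime2(7)  = SYSDATETIME();

    SELECT CAST(@dt  AS int)           AS DatetimeAsInt,      -- days since 1900-01-01 (rounded)
           CAST(@dt  AS varbinary(10)) AS DatetimeBytes,
           CAST(@sdt AS varbinary(10)) AS SmalldatetimeBytes,
           CAST(@dt2 AS varbinary(10)) AS Datetime2Bytes;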

Which date type to choose really depends on the dates and times you need to store and how exact you need to be. You need to understand your data and what you are really trying to capture. The problem, of course, is when you are stuck with legacy systems and can’t replace that datetime column with a date, even though that truly is all you need.

But if I had to choose, what’s my favorite date? Well, I’ll just leave you with this:

Thanks for the great topic, Brent! Can’t wait to read what data types everyone else likes or dislikes…

Posted in SQL Server, T-SQL, T-SQL Tuesday

T-SQL Tuesday #135 – Tools of the Trade

It’s another T-SQL Tuesday! Thanks to Mikey Bronowski (t|b) for hosting. Our challenge:

I would like you to write about the tools that help you at work, those that helped you the most or were the most effective. 

Whenever there is talk of switching tools, I always hear someone say, “once we are on <insert tool>, we won’t have X problem anymore.” This statement is one of my pet peeves. It’s not the tool that solves the problem. It’s making sure we implement and use the tools properly to help solve the problem. The tools themselves are just that – tools. If we don’t understand the problem, then we’re just shifting the problem to another tool.

There are so many tools that get implemented at work these days. While I use them to varying degrees, I am not always using these tools on a daily basis. And in many cases, I’m not the person responsible for setting them up. I’m a SQL Developer, so my daily toolset is much smaller.

Honestly, the vast majority of my time is split between Management Studio (SSMS) and Azure Data Studio. I’m pretty simple\straightforward this way. I started playing a lot more with Azure Data Studio over the past year, but I find I’m not able to make the switch to using it full time. It really depends on the task that I need to do.

So what tasks do I do often and which tool do I use?

Continue reading “T-SQL Tuesday #135 – Tools of the Trade”
Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #131 – It’s like this…

It’s another T-SQL Tuesday, that monthly blog party. It’s kind of like a Halloween party but instead of costumes and candy, we write blog posts about a topic related to SQL Server.

Thanks to Rob Volk (b|t) for hosting this month’s T-SQL Tuesday. Rob has tasked us with using an analogy for explaining something in SQL Server.

I don’t know if I have a favorite analogy for what I do. But let’s see if this one works:

Pretend I have to run errands around town. I need to go to the grocery store and the hardware store, and return something at the department store.

I’m also making a list of what I need from each store. At the grocery store, I need to pick up a couple of the basics: eggs, a loaf of bread, half & half for my coffee and some cheddar cheese. At the hardware store, I need to pick up some nails to hang some pictures. In addition, I have to return a shirt that I bought online that is the wrong size.

The first thing I need to do is figure out which stores I need to go to and in which order. So I open up Google Maps and determine which stores are closest to each other and what order it makes sense to visit them in.

Now that I know what I need to do, I run my errands and get everything done fairly efficiently. It worked great. As these are some fairly common errands, I now have my plan of attack for the next time I need to do these.

Over the next couple of months, I notice this works well for the most part. Sometimes my list changes slightly – I need to get mozzarella instead of cheddar or I am exchanging the shirt that didn’t fit for the size that does instead of just returning it, but it doesn’t seem to make a difference so the route I’ve come up with works well.

But other times, I notice things don’t quite work as smoothly. When I run my errands on the weekends, it seems like everyone else has the same idea so I’m having to wait in line a lot more. And why is checkout lane 4 always closed? Or if I need to cook for a holiday or special occasion, it takes me a lot longer to get everything at the grocery store since I need a lot more ingredients.

And then sometimes random things happen. Remember that time I went into the department store to return a dress and walked out with an Instant Pot? And then another time, there was a car accident, so it took me longer to get from the hardware store to the grocery store; traffic was at a standstill.

Some of these things I can figure out before I leave the house, like when my lists for each store are drastically different than usual, so I can decide on a new route before I go. Other times, I just don’t have that information or something else unexpected comes along and I’m stuck with the plan I have and can’t do anything about it because the problem has nothing to do with the route I’ve mapped out.

So have you figured out the analogy yet? It’s SQL Server execution plans.

Just as I’m planning out where I need to go, SQL Server figures out a good enough plan for getting the data it needs. Sometimes that plan works well when you have differences – like needing mozzarella instead of cheddar – but then it doesn’t when you suddenly need something completely different, like returning a shirt but buying a household appliance instead. If you have workloads that come in different sizes, like shopping for a holiday meal, SQL Server may figure out that it wants to use different indexes if they’re available, so it may find a more efficient plan to use.

But you also have to remember that, at times, queries aren’t slow because of a bad execution plan but because of other things, like heavier server workloads at certain times (running errands on the weekend) or someone else’s query blocking you (getting stuck in traffic because of a car accident).
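
And if you want to check whether it’s “traffic” rather than the plan, a quick sketch against the DMVs is enough to see who is blocked and by whom:

    SELECT session_id,
           blocking_session_id,
           wait_type,
           wait_time,
           command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;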

These are all things you have to pay attention to when writing and troubleshooting a query using the execution plan.

Now if only my actual errand list could be made more efficient….

Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #129 – My Time Capsule Contribution

It’s another T-SQL Tuesday! It’s our monthly blog party where a community member gives us all something to write about. Thanks to Tammy Clark (t|b) for hosting this month. (And if you’re interested in learning more, check out the T-SQL Tuesday website.)

Our directions from Tammy are:

Tell me what few items or bits of information you’d like me to put into the #SQLCommunity time capsule.

To me, time capsules are a snapshot of what’s going on at the time. So here’s a photo of what I’ll be adding:

[Image: my time capsule – my laptop surrounded by music, computer accessories, a guitar and mandolin, and two stuffed animals]

Continue reading “T-SQL Tuesday #129 – My Time Capsule Contribution”

Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #128: Learn From Others

It’s another T-SQL Tuesday! I feel like it’s been a while since I’ve joined the party. Thanks to Kerry Tyler (b|t) for hosting this month. (If you want to know more about T-SQL Tuesdays, you can find out more info here.)

Kerry’s challenge for us this month is this:

“Tell me (us all, obviously) about something recently that broke or went wrong, and what it took to fix it.”

When I think about this topic, I start trying to wrack my brain for that amazing disaster story. My best story involves SQL Server 6.5 so I’m pretty sure that’s not going to help anyone. But I also realized that I’m not on the administrative side of databases. I work with programmers and help create those problems that people see in the field. (Hey, wait a minute! Wait, no, that probably is accurate… *sigh*)

When I break something, we usually call it a “bug”. Doesn’t it sound better that way? Sometimes the bugs are small, like when we didn’t know that the client had made a change to the database before we released the code, so the script failed when they ran the installer. Sometimes they seem small but later cause big problems, like enabling a process to send messages to a Service Broker queue by default but not enabling the job to process those messages by default. That meant that when the job was finally enabled, after 5 million or so messages were waiting in the queue, it unsurprisingly used a lot of resources and suddenly caused poor performance in the main database on that instance. Sometimes the problems are quite obvious – writing a query for a small database that didn’t come close to having the statistics that a very large client database would have, so the performance ended up being incredibly slow.
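
Looking back at that Service Broker one, a quick sanity check before enabling the job would have gone a long way; you can query a queue directly to see how much is waiting (dbo.TargetQueue is a hypothetical queue name for the sketch):

    SELECT COUNT(*) AS PendingMessages
    FROM dbo.TargetQueue WITH (NOLOCK);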

For me, a lot of the problem solving comes from knowing how to troubleshoot to find what is actually going on. For example, your monitoring tool is reporting blocking. Does it happen on a regular basis? Is the blocking always on the same table or action, or is it on something different? Is something else running on the instance at the same time? Do the end users notice the problem, and where in the system do they see it? Essentially, it’s asking a lot of questions to make sure I understand what the issue is, looking for the repeatable patterns, trying to replicate the issues, then going through the work to try to find the solution – whatever that may look like.

Once I’ve identified the problem, I can then work on the solution. In many cases, the solutions are easy – add the missing index, make sure explicit transactions are properly committed, rewrite a query. Sometimes the solutions take longer to implement – work with the programmer to move SQL generated from a 3rd party data layer into an explicitly defined SQL statement or a stored procedure, rework the processing of Service Broker messages into different sized batches and allow changes to the job so it only processes data during off hours, rework catch-all queries into stored procedures that use dynamic SQL based on the parameters that are passed in. Some of these solutions take minutes and others take weeks or months. Sometimes I wanted to rework the whole approach but couldn’t. The more I think about it, rewriting the query was the solution in the vast majority of the issues I’ve seen.
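
For the catch-all rework in particular, the general pattern (sketched here with hypothetical table and parameter names) is to build only the predicates you actually need and hand the parameters to sp_executesql so they stay parameterized:

    CREATE OR ALTER PROCEDURE dbo.SearchOrders
        @CustomerID int  = NULL,
        @OrderDate  date = NULL
    AS
    BEGIN
        DECLARE @sql nvarchar(max) = N'
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE 1 = 1';

        -- Only add the predicates for the parameters that were actually passed in
        IF @CustomerID IS NOT NULL
            SET @sql += N' AND CustomerID = @CustomerID';
        IF @OrderDate IS NOT NULL
            SET @sql += N' AND OrderDate = @OrderDate';

        EXEC sys.sp_executesql
            @sql,
            N'@CustomerID int, @OrderDate date',
            @CustomerID = @CustomerID,
            @OrderDate  = @OrderDate;
    END;

Each combination of parameters then gets a plan that fits it instead of one catch-all plan trying to cover everything.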

I’ve definitely learned how to be a better troubleshooter over the years because of all the different things I’ve seen and have had to fix. For example, I got burned really badly by a poorly performing CTE, so you’d better believe I’m going to test the CTE against different syntax to see which works better. I’ve also gotten to learn the different behaviors of different tools, or really figured out how to make them work to my advantage. If it’s a simple rewrite to test for better performance, I may stick with Management Studio, while a major rewrite, testing all of the small changes that can affect performance at every step of the way, may mean I’ll use Plan Explorer instead. Because I’ve had experience with lots of different small things, I can build on them as I come up with different solutions to the problems I face.
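
When it is that simple comparison in Management Studio, turning on the statistics output is usually all I need to weigh one version against the other (the queries here are just stand-ins):

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Version 1: the original CTE
    WITH RecentOrders AS (
        SELECT OrderID, CustomerID, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= '2021-01-01'
    )
    SELECT OrderID, CustomerID, OrderDate FROM RecentOrders;

    -- Version 2: the rewrite (compare reads and CPU time in the Messages tab)
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '2021-01-01';

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;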

For those who were hoping for a great disaster story or epic fail and solution from me, I’m sorry to disappoint you. Trust me when I say I do make plenty of mistakes and have had a lot of things not work the way they should have. But I guess what I’m trying to say is that it’s all the small failures and the little mistakes, rather than the big disasters, along the way that I’ve learned from to keep improving and getting better, so I can be better equipped to fix the new bugs tomorrow that I’m creating today.

Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #124 – Query Store 1st steps

It’s another T-SQL Tuesday! This month, our host is Tracy Boggiano (b|t). Thanks to Tracy for hosting.

(If you’re interested in learning more about T-SQL Tuesdays in general, you can check out the website for past invitations and the background.)

Tracy invited us to talk about our experiences adopting Query Store and if we haven’t yet, why not. This couldn’t be a more timely post. I just enabled Query Store on my test database last week. Seriously – just last week.
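
For the record, turning it on was the easy part, roughly a one-liner (the database name here is hypothetical):

    ALTER DATABASE [TestDB] SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);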

Continue reading “T-SQL Tuesday #124 – Query Store 1st steps”

Posted in T-SQL Tuesday

T-SQL Tuesday #122 – Dealing with Imposter Syndrome

Happy First T-SQL Tuesday of 2020!

Thanks to Jon Shaulis (b|t) for hosting this month’s topic. (If you’re interested in knowing more about T-SQL Tuesday, go to tsqltuesday.com and read up.) Our topic is to talk about our experiences with Imposter Syndrome. The goal in sharing is to help others who are also experiencing it.

So I had two moments last week:

Continue reading “T-SQL Tuesday #122 – Dealing with Imposter Syndrome”

Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #121 – SQL Gifts


It’s been a while since I’ve been able to pull together a T-SQL Tuesday post. But Mala (b|t) has given us a topic that is easy enough that even someone like me, who overthinks and over-edits her posts, can pull something together.

For this month’s topic, Mala asked us what our gifts over the past year have been. The no-brainer answer for me is a supportive #sqlfamily.

I had two pretty big “gifts” recently (you can read about them here and here) – neither of which would have happened if it hadn’t been for the support of this community.

But even more than that, I’ve been able to get to know more people in the SQL community over the past year – too many to mention by name. The fact I can call many of you “my friend” and not just “some random person I stalk follow on Twitter” is a true gift.

I’m looking forward to sharing this gift of #sqlfamily with more of you this coming year!


Posted in SQL Server, T-SQL Tuesday

Dear 20-Year-Old Me (T-SQL Tuesday #115)

Dear 20-Year-Old Deborah,

Hi there! Yep, you’re doing that thing – you’re writing that letter to your 20-year-old self. But you’re doing it as part of a good cause. There’s this thing called T-SQL Tuesday, which is a monthly blog party in the SQL Server community where someone brings up a topic and everyone blogs about it. This letter is in response to Mohammad Darab (b|t), who is hosting this month. (Thanks for hosting, Mohammad!) He’s the one who came up with the topic of writing to your 20-year-old self. You can read all about it on his blog if you want to know more.

So this means a couple of things. First, you have a blog! You don’t write posts as often as you want. But think about it – with your love of books and wanting to know more about editing and publishing, it’s kind of up your alley.

The second part is that you’ve taken an unexpected turn careerwise. You know how you want nothing to do with math and science right now? Surprise! You’re a techie working with databases – Microsoft SQL Server to be specific. You work with computers. Go math & science! The funnier thing is that the summer temp job you had last year, pulling data into ACT! in order to create mail marketing campaigns, actually was the start of what your career turns out to be. Who knew?

Before you panic that you are wasting your time studying English and History and try to remember where the COSI department is, just stop. What you love about bringing those two subjects together is the same thing that you love about working with databases and working with the programmers on their applications. You love understanding and seeing the history and how it affects the literature and your understanding of it. With databases, it’s understanding the context of data in the larger picture of what’s going on around it. And you still manage to find ways to bring your love of English and history into what you’re doing – from the idea of a project for your blog to how you pull together your presentations.

Mohammad asked that we give some advice or words of encouragement. So here’s what I have for you – not just in your career but in everything you decide to pursue:

  • First, trust yourself. You have really good instincts so listen to them.
  • Second, it’s OK to have ambitions. You’re very aware of wanting to give everyone a fair shake, which is a good thing. You will also invest yourself in the projects that you work on and take ownership and pride in your work. But you have things you want to do as well. Remember not to let being fair to everyone or your commitment to a project or two mean that you put yourself on hold or lock yourself in a position where you can’t move forward.
  • Third, you have permission to act on your instincts and ambitions. There are all sorts of articles that talk about how women wait for permission to move ahead or take risks or ask for projects, etc. So we end up holding ourselves back while someone else comes along and takes those opportunities without asking or waiting. Looking back, I know that you will do this along the way in your life. So I want to give you permission not only to trust your instincts and have ambition, but to use them and move yourself forward. Find your opportunities or make them yourself. Most importantly, don’t be scared to go after what you want.

Even as I give this advice, I can’t tell you that it’s easy; it’s still easier said than done. It’s still something I’m trying to do, which is why I think it’s worthwhile and why I want to pass it on.

At 20 years old, I don’t know if we ever really have an idea of how life will go or even where we want it to go. Some people knew they wanted to go to law school or med school or graduate school, but that was never you. And that’s OK. This career as a database professional just finds you, and it really suits you.

I don’t want to tell you too much of what happens between now and then. (You saw Back to the Future, you know what happens.) But you don’t have to worry because things fall into place. Enjoy the journey you’re on.

Love, Not 20-Year-Old Deborah
