Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #138 – Managing Tech Changes

It’s the second Tuesday in May so it must be T-SQL Tuesday! Thanks to Andy Leonard (b | t) for hosting this month’s blog party. (As always, you can read up more about T-SQL Tuesdays here.) Here’s his invitation:

For this month’s T-SQL Tuesday, I want you to write about how you have responded – or plan to respond to – changes in technology.

There are a couple of ways tech changes on you:

First, there are the changes that occur because something is “fixed” or improved when the newest version is released. For example, upgrading SQL Server versions gives you fixes to things like the cardinality estimator and Intelligent Query Processing. These sorts of changes will either suddenly speed up or wreak havoc on your performance, whether you expect them to or not. But to me, these are also the sorts of changes that could happen if you suddenly have a new index, or the amount of data in the table drastically changes, or your dev dataset was significantly smaller than what was in production, or your statistics are out of date, etc.
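As one concrete example of adjusting to this kind of change, SQL Server lets you keep a database on the older cardinality estimation model while you validate performance after an upgrade. This is just a minimal sketch; the database name is a placeholder:

```sql
-- Run in the upgraded database to fall back to the legacy cardinality
-- estimator while you test query performance (database name is hypothetical).
USE MyDatabase;
GO
ALTER DATABASE SCOPED CONFIGURATION
    SET LEGACY_CARDINALITY_ESTIMATION = ON;
GO
```

Once you've confirmed (or fixed) the regressed queries, you can set it back to OFF to use the current estimator.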

All we can do in these cases is test what we write to the best of our ability and adjust based on the new conditions. Sometimes, we can predict what will happen and can plan for it before it goes into production. Other times, we will learn the hard way. But I feel like these are expected and we just need to roll with these as we find them.

Then you have the changes where what you had been working with has been completely restructured and reimagined. I’m looking at you, SSIS changes from SQL Server 2008 to 2012. The introduction of SSISDB and the idea of environments, along with some of the other improvements made in general, were pretty drastic changes, and if you had SSIS packages, you had to figure out what the changes were and how you were going to support them. I worked at one place where things were pretty much kept as they were. And I worked at another where the clients on SQL 2008 had SQL 2008 packages, but we updated the model and structure for SQL 2012 and higher so we could take advantage of the new structure. Being able to take advantage of environments meant that we had a really easy way of deploying to clients and setting things up as part of the install/upgrade process.

For changes like these, we need to understand what the changes are and how we need to adapt our current code to those changes. We usually get some warnings about these features so the key is to try to get ahead of the releases or make sure we can test them in a dev environment before the client decides to upgrade on us and we find ourselves learning in production.

And then there are the shiny brand new features or technology. These are the things that the latest buzz words are built around.

Sometimes I feel like an old curmudgeon when it comes to new tech. I feel like I come across as anti-new-tech when it’s proposed as a solution. If it’s not something that I know, I want to understand it first. But the reason I feel like I come across as opposed to the changes is that I will hear about it in terms of “we’re having problems with ABC so let’s just replace it with FGH”. What I don’t hear is how we’ve tried to fix ABC, or whether the problem with ABC is really something that FGH solves. Or I’ll hear a misunderstanding of how FGH works, so it becomes clear that not only do we not understand the problem we’re trying to solve, we also don’t understand how the new tech will actually be able to help. It sounds better to say we’re implementing the new tech and we’re tech forward. But this approach means we find ourselves not solving the initial problem while creating new ones.

Don’t get me wrong – the capabilities of the new tech and the problems that they can solve are really cool and exciting. But we end up treating the new tech as the solution rather than the tool used to implement the solution. So we never get around to the underlying issues and we blame the new tech for not solving the issues rather than our lack of understanding of what we need to set up for the new tech to help.

The solution here is to go back to understanding the problem we’re trying to solve. This is where we need to learn what the new technology changes are for, understand how they work, think about the problem they solve, and then figure out if they will work for the problems we’re trying to solve. In many ways, I don’t always get a chance to play with the new tech because I’m too busy trying to see if I can solve the problem with what I already have available. But the key I find in working with these sorts of new tech changes is education. We have to educate ourselves as to what the new tech is and then educate others as to what it is.

In many ways, education is the key to all of the different ways that tech changes on us. That’s why I’m here and that’s why I try to go to the different Data Platform events when I can. We have to be open to learning about and embracing these changes. In the end, we have no choice if we’re going to stay on the top of our game. But it always comes down to understanding the problem we’re trying to solve and what role the changes in tech have to play in it if we’re going to be successful.

Posted in SQL Server

T-SQL Tuesday #137 – Using Notebooks Every Day

Happy T-SQL Tuesday! Thanks to Steve Jones (t|b) for hosting this month. (And thanks for all you do for coordinating the T-SQL Tuesdays!) If you’re interested, you can read the whole invite here.

Our challenge this month is:

“For this month’s T-SQL Tuesday, I want you to write about how you have used, or would like to use, a Jupyter notebook.”

I love the potential I see with notebooks. I love the blend of documentation and code. I love the fact that you can save the results with the query so you can easily capture the before/after results and see them in the same document. When I can use notebooks for a blog post or a presentation, I do it because I like what I see so far.

Continue reading “T-SQL Tuesday #137 – Using Notebooks Every Day”
Posted in SQL Server, T-SQL, T-SQL Tuesday

T-SQL Tuesday #135 – Tools of the Trade

It’s another T-SQL Tuesday! Thanks to Mikey Bronowski (t|b) for hosting. Our challenge:

I would like you to write about the tools that help you at work, those that helped you the most or were the most effective. 

Whenever there is talk of switching tools, I always hear someone say, “once we are on <insert tool>, we won’t have X problem anymore.” This statement is one of my pet peeves. It’s not the tool that solves the problem. It’s making sure we implement and use the tools properly to help solve the problem. The tools themselves are just that – tools. If we don’t understand the problem, then we’re just shifting the problem to another tool.

There are so many tools that get implemented at work these days. While I use them to varying degrees, I am not always using these tools on a daily basis. And in many cases, I’m not the person responsible for setting them up. I’m a SQL Developer, so my daily toolset is much smaller.

Honestly, the vast majority of my time is split between Management Studio (SSMS) and Azure Data Studio. I’m pretty simple/straightforward this way. I started playing a lot more with Azure Data Studio over the past year, but I find I’m not able to make the switch to using it full time. It really depends on the task that I need to do.

So what tasks do I do often and which tool do I use?

Continue reading “T-SQL Tuesday #135 – Tools of the Trade”
Posted in SQL Server, T-SQL

T-SQL Tuesday #134 – Give me a break

Welcome to the first T-SQL Tuesday of 2021!

2020 ended up being a year for the record books and 2021 is already making its mark. So I appreciate the topic that our host, James McGillivray (b|t), has given us.

Breaks are critical for our mental health. Write a post about relaxation techniques, dream destinations, vacation plans or anything else relating to taking a break for your own mental health.

Continue reading “T-SQL Tuesday #134 – Give me a break”
Posted in SQL Server

My Year In Review

This time last year, I was so excited for 2020. I had a lot of plans of stepping out in the community. I was looking at all of the SQL Saturdays around the country and even a couple of international events to figure out which ones I wanted to submit to in the hopes of being able to attend.

And for the first two months, things seemed good. And then the pandemic hit. I’ve said this in other posts, but everything that I was involved in was suddenly canceled – not just #sqlfamily events.

But looking back, it ended up being a pretty productive year in some ways.

Cheers to the past #sqlfamily events!
  • 19 Speaking Engagements:
    • 8 User Groups (virtual and local gone virtual)
    • 4 SQL Saturdays
    • 1 PASS Summit
    • Group By
    • Data Platform Summit
    • Dataweekender
    • dataminds.Connect
    • IDERA GeekSync
    • Data Platform Discovery Days
  • Mentor in the first New Stars of Data
  • 19 Blog Posts (not including this one)
  • 1 Job Change
  • 1 MVP Award – Data Platform

In some ways, I was able to be a part of things that I probably would not have had an opportunity to be a part of otherwise because some of these were new virtual events.

Like so many of us, I am heartbroken about the dissolution of PASS. Getting involved with SQL Saturdays and attending the past several SQL Saturdays really helped me invest in my career in ways that I can’t express. It’s actually fairly bittersweet and poetic that the last SQL Saturday I spoke at was Albany, which was also the first SQL Saturday I spoke at. I am so appreciative of all those who are stepping forward to fill in the void. I’m looking forward to finding a way to jump in and help where I am able.

It will be interesting to see what 2021 will bring. I already know that I will be speaking at a couple of user groups over the next several months, including my local group, NESQL, in January. I will be a mentor for the New Stars of Data again so I’m looking forward to working with a new speaker. I have no clue what sort of additional speaking opportunities will be coming up. Even if I don’t end up speaking much while things are finding their new ground, I have a lot of things that I’ve been hoping to explore more and blog about. Maybe 2021 will be the year I spend more time writing.

While this year has been a challenge on many levels, I am still grateful for my friends in #sqlfamily. It gives me many reasons to be hopeful for the things to come in 2021.

Wishing you and your families a very Happy New Year!

Posted in Professional Development, Speaking, SQL Server, T-SQL

T-SQL Tuesday #133 – What else I’ve learned presenting

It’s T-SQL Tuesday! The last one of 2020 in fact so I’m glad I’m able to pull things together to contribute.

Lisa Griffin Bohm (t|b) is hosting this month. Her challenge for us is this:

This month, I’d like those of you who have presented, or written a presentation, to share something technical THAT DID NOT RELATE to the topic of the presentation, that you’ve learned in writing or giving the presentation.

This is a great topic, so thanks for hosting this month, Lisa!

Continue reading “T-SQL Tuesday #133 – What else I’ve learned presenting”
Posted in SQL Server

PASS Summit 2020 – Virtual Edition

Another PASS Summit has come and gone. This one really was different from the rest. I think the build up to the event, some of the stumbling blocks, and poor communication along the way didn’t help. But in the end, despite all of that, I think it was a good Summit. Not a great Summit, but not a waste of money Summit either.

Here are my takeaways….

Continue reading “PASS Summit 2020 – Virtual Edition”
Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #131 – It’s like this…

It’s another T-SQL Tuesday, that monthly blog party. It’s kind of like a Halloween party but instead of costumes and candy, we write blog posts about a topic related to SQL Server.

Thanks to Rob Volk (b|t) for hosting this month’s T-SQL Tuesday. Rob has tasked us with using an analogy for explaining something in SQL Server.

I don’t know if I have a favorite analogy for what I do. But let’s see if this one works:

Pretend I have to run errands around town. I need to go to the grocery store, hardware store, and return something at the department store.

I’m also making a list as to what I need from each store. At the grocery store, I need to pick up a couple of the basics: eggs, a loaf of bread, half & half for my coffee and some cheddar cheese. At the hardware store, I need to pick up some nails to hang some pictures. In addition, I have to return a shirt that I bought online that is the wrong size.

The first thing I need to do is figure out which stores I need to go to and in what order. So I open up Google Maps and determine which stores are closest to each other and what order it makes sense to visit them in.

Now that I know what I need to do, I run my errands and get everything done fairly efficiently. It worked great. As these are some fairly common errands, I now have my plan of attack for the next time I need to do these.

Over the next couple of months, I notice this works well for the most part. Sometimes my list changes slightly – I need to get mozzarella instead of cheddar or I am exchanging the shirt that didn’t fit for the size that does instead of just returning it, but it doesn’t seem to make a difference so the route I’ve come up with works well.

But other times, I notice things don’t quite work as smoothly. When I run my errands on the weekends, it seems like everyone else has the same idea so I’m having to wait in line a lot more. And why is checkout lane 4 always closed? Or if I need to cook for a holiday or special occasion, it takes me a lot longer to get everything at the grocery store since I need a lot more ingredients.

And then sometimes random things happen. Remember that time I went into the department store to return a dress and walked out with an Instant Pot? And then another time, there was a car accident, so it took me longer to get from the hardware store to the grocery store; traffic was at a standstill.

Some of these things I can figure out before I leave the house, like when my lists for each store are drastically different than usual, so I can decide on a new route before I go. Other times, I just don’t have that information or something else unexpected comes along and I’m stuck with the plan I have and can’t do anything about it because the problem has nothing to do with the route I’ve mapped out.

So have you figured out the analogy yet? It’s SQL Server execution plans.

Just as I’m planning out where I need to go, SQL Server figures out a good enough plan for getting the data it needs. Sometimes that plan works well when you have small differences – like needing mozzarella instead of cheddar – but then it doesn’t when you suddenly need something completely different, like returning a shirt but buying a household appliance instead. If you have workloads that come in different sizes, like shopping for a holiday meal, SQL Server may figure out that it wants to use different indexes if they’re available, so it may find a more efficient plan to use.
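The mozzarella-instead-of-cheddar situation is essentially parameter sniffing: a cached plan built for one value gets reused for a very different one. A minimal sketch of one common workaround, with hypothetical table and column names:

```sql
-- Ask SQL Server to build a fresh plan for this execution rather than
-- reusing the cached one (table and column names are hypothetical).
SELECT OrderID, OrderDate, Total
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);
```

RECOMPILE trades a little extra compile time on each run for a plan tailored to the actual value, so it's worth considering when the "errand list" changes drastically between runs.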

But you also have to remember that at times, queries aren’t slow because of a bad execution plan but other things like heavier server workloads at given times, like running errands on the weekend, or someone else’s query blocking you, like getting stuck in traffic because of a car accident.
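When a query is slow for reasons outside the plan – the traffic-jam case – one quick check is whether it’s being blocked by someone else’s session. A minimal sketch using the standard dynamic management views:

```sql
-- Who is blocked, by whom, and on what right now?
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

If this returns rows, the slowdown is the traffic accident, not the route: the execution plan may be perfectly fine.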

These are all things you have to pay attention to when writing and troubleshooting a query using the execution plan.

Now if only my actual errand list could be made more efficient….

Posted in Docker, SQL Server

Installing Docker

When I hear about a new Broadway show, I usually like to wait until I see it before I listen to the soundtrack. Otherwise, I miss the context and I can’t always appreciate it.

When it comes to tech, containers seemed to fall under that category. I kept hearing and attending sessions about them, but it didn’t really start coming together until I finally set one up. Now that I’m starting to use it a little more, the container is starting to make more sense. I still have a lot of questions but now I’m in a better place to figure them out.

More importantly, you have to start somewhere. So let’s start by installing Docker so we can create containers and go from there!

Continue reading “Installing Docker”