Posted in Professional Development, SQL Server, T-SQL Tuesday

T-SQL Tuesday #150 – My First Technical Job

Welcome to May’s T-SQL Tuesday! Thanks to Kenneth Fisher (t|b) for hosting! This month, he’d like to hear about our first technical job. As he says: “I know most DBAs don’t start out working with databases so tell us how you did start.”

My first technical job actually came out of my first job out of college. I was an English and History major who wanted nothing to do with math or science. I was always pretty good with computers; I always credited that to having a computer in my room my senior year of high school – not because I was into computers, but because my dad was, and he got tired of me kicking him off his computer to write papers for class. But I never wanted to take a computer science class or learn to program. (Cue laughter!)

Continue reading “T-SQL Tuesday #150 – My First Technical Job”
Posted in T-SQL, T-SQL Tuesday

T-SQL Tuesday #148 – Advice for Running a User Group

Welcome to another T-SQL Tuesday!

Thanks to Rie Merritt (t) for this month’s topic. She’s asked us to write about various aspects of running a user group. You can find the official invitation from Rie here.

For those of you who don’t know me, I’m a board member of the NESQL User Group, the Microsoft Data Platform user group in the greater Boston, Massachusetts (USA) area. I mostly help out with communications – sending out the monthly tweets and posting reminders to our group on LinkedIn.

Essentially, I want to make sure you know about and remember to attend the great event that we’re hosting each month.

Continue reading “T-SQL Tuesday #148 – Advice for Running a User Group”
Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #146 – Upending Preconceived Notions

Happy 2022! Let’s start the year off right by joining this month’s T-SQL Tuesday blog party.

Andy Yun (t | b) is this month’s host. Our challenge this month is:

 …to think about something you’ve learned, that subsequently changed your opinion/viewpoint/etc. on something.

There’s a whole lot of things I could probably talk about, but I think we’ll stick to SQL Server for now.

I was always taught to clean up after myself, so I thought dropping my local temp tables at the end of a proc after creating them did just that. I knew that once the session ended, the table was no longer there, so SQL Server would eventually take care of it. But then you had the cases where you ran the same stored proc multiple times in the same session, so you wanted to make sure the table was gone at the end of your procedure to prevent issues.

Pam Lahoud (t | b) has a great session called: TempDB, The Good, The Bad, The Ugly. She presented it at the EightKB conference but I happened to catch it at a NESQL meetup. I highly recommend that you watch it; it’s linked in the title. TempDB is one of my weaker areas and I learned so much by watching this.

My “a-ha!” moment was when she mentioned that you don’t need to drop temp tables that you created at the end of the stored procedure.

Wait, wha???

Yes! In SQL Server 2005, Microsoft added temp table caching as a way to solve metadata contention in tempdb. When you “drop” the table as part of a stored procedure, it doesn’t really get dropped at all. There is a copy of the table’s metadata – and its statistics – that gets left in the cache. So when you call the stored procedure again, the statistics are available for the procedure so SQL Server can use that to help create the execution plan.

There were two things about this that kind of blew my mind. First, the statistics of that temp table are reusable. If you set up a temp table and it’s similar in scope each time you call the stored proc, you already have an execution plan cached that can be reused. Second, I’ve been doing this wrong since SQL Server 2005. That’s a really long time to not know that you don’t have to drop the temp table at the end of the proc. You still have to make sure your stored procedure can work properly if you run it multiple times in the same session.
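To make the pattern concrete, here’s a rough sketch of what a cache-friendly proc looks like. The table and procedure names are made up for illustration; the point is simply that there’s no DROP at the end, and the temp table avoids the things (like DDL after creation or named constraints) that prevent caching:

```sql
CREATE PROCEDURE dbo.usp_GetRecentOrders  -- hypothetical proc name
AS
BEGIN
    -- Define the temp table in one CREATE statement, with no named
    -- constraints and no DDL afterwards, so it stays cache-eligible.
    CREATE TABLE #RecentOrders
    (
        OrderID   int      NOT NULL,
        OrderDate datetime NOT NULL
    );

    INSERT INTO #RecentOrders (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders           -- hypothetical source table
    WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());

    SELECT OrderID, OrderDate
    FROM #RecentOrders;

    -- Note: no DROP TABLE #RecentOrders here. When the proc ends,
    -- SQL Server keeps a cached copy of the table's metadata and
    -- statistics, which the next execution can reuse.
END;
```

Because the table is created inside the procedure, it’s scoped to the procedure anyway; re-running the proc in the same session works fine without an explicit drop.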

I know my overly simplified explanation doesn’t go into all of the details. Luckily, there are other people who can explain it much better than I can. A couple of resources if you’re interested in learning more:

  • Watch Pam’s session. Here’s the link to her EightKB Conference session again.
  • Paul White has a couple of great blog posts about this. You can read them here and here.
  • Franklin Yamamoto at SQL Grease also has a nice explanation here.

There is a lot that goes into this and a lot of different caveats. It’s worthwhile to read up on these.

I can’t wait to read everyone else’s posts and unlearn something that I had thought was good practice.

Thanks for hosting, Andy!

Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #145 – In Person Events

Happy T-SQL Tuesday, everyone! It’s the last one for 2021 so I want to try to finish out the year by participating.

This month is hosted by Xavier Morera (b|t). Xavier has asked us a couple of questions to get us going on this month’s topic:

  1. Which is your favorite conference and why?
  2. Which is the best venue that you have visited for a tech conference?
  3. Who is the best presenter that you have ever listened to?
  4. Which location would you like for your next event to take place and why Costa Rica?

I’m going to start with the last question first. I have never been to Costa Rica, but I hear wonderful things about it. It sounds like a lovely place to go and talk about the Microsoft Data Platform. I feel like this is a no-brainer. 🙂

Continue reading “T-SQL Tuesday #145 – In Person Events”
Posted in Data Governance, T-SQL Tuesday

T-SQL Tuesday #144 – Data Governance

Happy T-SQL Tuesday! I’m really excited for this month’s topic, hosted by Victoria Holt (t | b). This month, Victoria invites us to “share our experiences on data governance.”

Almost exactly 2 years ago, I gave a lightning talk about an update statement. It wasn’t just about understanding how that update could affect more than one table; it expanded to show how that update was also part of the larger picture of the entire database ecosystem.

Continue reading “T-SQL Tuesday #144 – Data Governance”
Posted in SQL Server, VS Code

Setting up .NET Interactive Notebooks

As part of the June 2021 GroupBy Conference, I watched Rob Sewell (b|t) present a session called “Notebooks, PowerShell and Excel Automation”. What I ended up learning from this was not the main point of the presentation (although it was some really cool stuff as you would expect from Rob), but more about the tool that he used.

These days, we tend to think of Azure Data Studio when we database developers talk about notebooks, specifically SQL Notebooks. But what Rob used for his demos was a new piece of functionality within VS Code called .NET Interactive Notebooks. It was developed in collaboration with the Azure Data Studio team and it has support for SQL. But the cool thing that intrigued me was that a single notebook could support multiple kernels, unlike Azure Data Studio. Knowing how much we love our SQL and PowerShell, and this being a feature that many of us want to see in SQL Notebooks, I decided to try and set this up and poke around.

Continue reading “Setting up .NET Interactive Notebooks”
Posted in Docker, SQL Server, T-SQL Tuesday

T-SQL Tuesday #140 – What have I been doing with Containers

It’s another T-SQL Tuesday! Thanks to Anthony Nocentino (t | b) for hosting this month’s T-SQL Tuesday challenge. This month, he invited us to write about how we use containers.

I’ve just used SQL Server containers for testing. It’s essentially an isolated way of testing scripts against a specific database. But in the end, it’s really just been another instance of SQL Server for me to use.

And that’s it.

Continue reading “T-SQL Tuesday #140 – What have I been doing with Containers”
Posted in SQL Server, T-SQL Tuesday

T-SQL Tuesday #138 – Managing Tech Changes

It’s the second Tuesday in May so it must be T-SQL Tuesday! Thanks to Andy Leonard (b | t) for hosting this month’s blog party. (As always, you can read up more about T-SQL Tuesdays here.) Here’s his invitation:

For this month’s T-SQL Tuesday, I want you to write about how you have responded – or plan to respond – to changes in technology.

There are a couple of ways tech changes on you:

First, there are the changes that occur because something is “fixed” or improved when the newest version is released. For example, upgrading SQL Server versions gives you changes to things like the cardinality estimator and Intelligent Query Processing. These sorts of changes will either suddenly speed up or wreak havoc on your performance, whether you expect it to or not. But to me, these are also the sort of changes that could happen if you suddenly have a new index, or the amount of data in the table drastically changes, or your dev dataset size was significantly smaller than what was in production, or your statistics are out of date, etc.

All we can do in these cases is test what we write to the best of our ability and adjust based on the new conditions. Sometimes, we can predict what will happen and can plan for it before it goes into production. Other times, we will learn the hard way. But I feel like these are expected and we just need to roll with these as we find them.

Then you have the changes because what you had been working with has been completely restructured and reimagined. I’m looking at you, SSIS changes from SQL Server 2008 to 2012. The introduction of SSISDB and the idea of environments, along with some of the other improvements made in general, were pretty drastic changes, and if you had SSIS packages, you had to figure out what the changes were and how you were going to support them. I worked at one place where things were pretty much kept as they were. And I worked at another where the clients on SQL 2008 had SQL 2008 packages, but we updated the model and structure for SQL 2012 and higher so we could take advantage of the new structure. Being able to take advantage of environments meant that we had a really easy way of deploying to clients and setting things up as part of the install/upgrade process.
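As a rough sketch of why environments made those deployments easier: configuration can be scripted through the SSISDB catalog procedures instead of being baked into each package. The folder, environment, and variable names below are made up for illustration; the catalog procedures themselves are the real SSISDB ones:

```sql
-- Create an environment to hold per-client configuration
-- (folder/environment/variable names here are hypothetical).
EXEC SSISDB.catalog.create_environment
     @folder_name = N'ClientETL',
     @environment_name = N'Production';

-- Add a variable that deployed packages can reference,
-- so the same package works against different servers.
EXEC SSISDB.catalog.create_environment_variable
     @folder_name = N'ClientETL',
     @environment_name = N'Production',
     @variable_name = N'SourceServer',
     @data_type = N'String',
     @sensitive = 0,
     @value = N'SQLPROD01',
     @description = N'Source server for extracts';
```

Scripting steps like these as part of an install or upgrade is what made the per-client setup repeatable.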

For changes like these, we need to understand what the changes are and how we need to adapt our current code to those changes. We usually get some warnings about these features so the key is to try to get ahead of the releases or make sure we can test them in a dev environment before the client decides to upgrade on us and we find ourselves learning in production.

And then there are the shiny brand new features or technology. These are the things that the latest buzz words are built around.

Sometimes I feel like an old curmudgeon when it comes to new tech. I feel like I come across as anti-new-tech when it’s proposed as a solution. If it’s not something that I know, I want to understand it first. But the reason I feel like I come across as opposed to the changes is I will hear about it in terms of “we’re having problems with ABC so let’s just replace it with FGH”. What I don’t hear is how have we tried to fix ABC or is the problem with ABC really something that is solved by FGH. Or I’ll hear a misunderstanding of how FGH works so it becomes clear that not only do we not understand what the problem is that we’re trying to solve but also we don’t understand how the new tech will actually be able to help. It sounds better to say we’re implementing the new tech and we’re tech forward. But this approach means that we find ourselves not solving the initial problem along with creating new ones.

Don’t get me wrong – the capabilities of the new tech and the problems that they can solve are really cool and exciting. But we end up treating the new tech as the solution rather than the tool used to implement the solution. So we never get around to the underlying issues and we blame the new tech for not solving the issues rather than our lack of understanding of what we need to set up for the new tech to help.

The solution here is to go back to understanding the problem we’re trying to solve. This is where we need to learn what the new technology changes are for, understand how they work, think about the problem they solve, and then figure out if they will work for the problems we’re trying to solve. In many ways, I don’t always get a chance to play with the new tech because I’m too busy trying to see if I can solve the problem with what I already have available. But the key I find working with these sorts of new tech changes is education. We have to educate ourselves as to what the new tech is and then educate others as to what it is.

In many ways, education is the key to all of the different ways that tech changes on us. That’s why I’m here and that’s why I try to go to the different Data Platform events when I can. We have to be open to learning about and embracing these changes. In the end, we have no choice if we’re going to stay on top of our game. But it always comes down to understanding the problem we’re trying to solve and what role the changes in tech have to play in it if we’re going to be successful.