Posted in PowerApps

Remembering How to Learn New Things

Photo by Pixabay

It’s been a while since I’ve blogged. It’s just been really busy around here. One of the things that’s kept me busy – from the work perspective, at least – has been learning something new: PowerApps.

If you haven’t heard of it, PowerApps is a low-code offering from Microsoft to quickly build and share apps. I just need a very simple, internal facing application so PowerApps fits our needs. I have dabbled with programming over the years so I’m familiar with a lot of the concepts needed. Plus I’ve spent my career working with developers so I’ve absorbed their processes and approaches as the daily stand-ups covered the different bugs they were working on. Being exposed to the development process and lifecycle has been invaluable to figuring out how to make this work. But there’s still a learning curve as I do this myself.

I have to say – one of the things I’ve been enjoying about this project is working with something new. It’s a good reminder of how to learn something a little outside my normal “bag of tricks”.

So how am I learning PowerApps?

  • Doing my homework before my work. I knew this project was coming so I was able to spend some time learning about PowerApps before I did anything else. I had access to various courses so I watched some of them online. This meant I could hit the ground running – well, maybe not running – but I could get started faster because I knew what the screen looked like, where to find the basic things, and have enough information to understand what I needed to look for as next steps.
  • Making use of the data samples that are provided. PowerApps comes with sample data that I used before I even touched the data I wanted to work with. I was able to create some screens that interacted with that sample data so I could learn different items and behaviors that I was then able to apply to the various pieces of my application. Sample data like this is not something I would have thought to look for, but once I realized it was there, it made it much easier for me to figure out the basics without having to figure out how to read data from a database, create a table in Dataverse, import from Excel, etc. There are a bunch of caveats about data limits and working with data from different sources, so this was one less thing for me to focus on as I got started.
  • Breaking each piece down into small chunks. Being brand new to PowerApps meant that I didn’t know any of the commands, which meant I had to look up how to do pretty much everything. If I want my button to do X, Y, and Z, I really need to understand what X, Y, and Z entail. So instead of trying to do everything at once, I could work on making the button do X and understand what’s behind making that work. Once I got that piece in place, I could then work on Y. If Y required more involved logic, I would break that down into something smaller. Looking at each step and really understanding what’s involved probably helped me put everything together much faster.
  • Fine tuning my search keywords to really home in on the problem I am trying to solve. This really is the key to most things we do as we learn something new, right? And in some ways, this was the most frustrating piece I’ve had to deal with. It takes a lot of trial and error to figure out how to describe the behavior you are trying to accomplish in the way that people who know the product talk about it. It took me multiple days to find out how to make a particular action work the way I wanted it to – and then about 5 minutes to implement and test 6 lines of formatted code once I found the answer. Why did it take so long to find the answer? Because it took time to try to get alternate solutions working based on other searches and then refine my search to get the right answer for my scenario.
  • Reminding myself that just because something is designed for a “non-traditional programmer” like me, it’s still programming and it can still be hard. One of the realizations I’m coming to is that there should be a reframing of the tools we label “no code/low code” solutions. Their purpose is to speed up the development process by allowing business users, or “citizen developers”, to create applications with little to no knowledge of how to code. What gets lost is that it is still programming. While the barriers to get started are lower, e.g. using WYSIWYG interfaces, we do these solutions a disservice by forgetting about the “code” part. You will still want to understand basic programming principles to truly make these good applications – responsive screens, security and application permissions, accessibility, coding standards, source control, QA & testing, deployment, etc. My joke has always been that programming made me want to throw things, and I’ve hit that frustration level working with PowerApps a few times. But reminding myself that this is programming has actually helped reset my mindset by forcing me to stop, recognize that it’s hard, take a breath, and reattack the problem from a different direction.
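To make the “small chunks” idea concrete, here’s a rough sketch of what building up a button’s OnSelect formula in Power Fx might look like – the control name (txtName), the data source (Employees), and the messages are all made up for illustration, not from my actual app:

```
// OnSelect for a hypothetical "Save" button, built up one step at a time.
If(
    // Step X: validate the input first
    IsBlank(txtName.Text),
    Notify("Name is required", NotificationType.Error),

    // Step Y: save the record to the data source
    Patch(Employees, Defaults(Employees), { Name: txtName.Text });

    // Step Z: confirm and go back to the previous screen
    Notify("Saved!", NotificationType.Success);
    Back()
)
```

Each step (X, Y, Z) was something I could look up, test, and understand on its own before chaining them together.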

I’m sure there are a lot more details about what I’ve learned so far, so I should probably throw a couple more posts together; hopefully I’ll find some time for that. But there are still some cool things I’m looking forward to exploring – like using the VS Code extensions to check my work into source control and potentially even code the app directly (!), using the Test Suite to create unit tests, and learning how to deploy my app to different environments.

Will I become a full time PowerApps developer? Probably not. I still like my T-SQL but I’m happy to have another set of skills for my toolbelt. But if nothing else, this has been a good reminder that we need to be open to learning new things along the way.

Posted in Blogging, Speaking, WIT

A Woman in SQL 2023

When I started this blog back in 2016, I wanted to make sure I wrote a blog post every March in honor of Women’s Month. I missed it last year but I want to get back into the routine. I figure as long as it’s out by the end of the month, I’m good, right?

Photo by ThisIsEngineering

This particular post took me a long time to put together. I actually started putting it together well over a year ago. I’m not sure why it’s taken so long to hit “Post”. Perhaps because I don’t like what it says but I feel it needs to be done anyway.

I see a lot of conversations on social media related to diversity at events. It bubbles up every couple of months or so. Some organizers are great about making sure they have a diverse pool of speakers but aren’t always able to get there. Other times, the conversation is criticizing organizers who are falling short of the goal. But there seems to be a general sense that we aren’t as diverse or representative as we want to be. So I wanted to take a look to see if this is an accurate assessment.

Continue reading “A Woman in SQL 2023”
Posted in WIT, WITspiration

Announcing the launch of: WITspiration

I cannot begin to say how excited I am for this blog post because I finally get to tell you about something that’s been in the works for a while.

My good friend Tracy Boggiano (t | m | b) and I are happy to announce the launch of WITspiration, a women’s mentoring circle.

What is WITspiration?

The goal of this group is simple:

The logo for WITspiration: An image in the center shows a person helping another person climb onto a block. Under the image is the group's name, WITspiration, with the tagline "Lift as We Climb" underneath. The colors are a peach-y pink (or pink-y peach) with the image and tagline in a muted red.

To inspire and empower women in tech, starting with the data platform community, to thrive in their careers through community based mentorship.

Our tagline:

Lift as We Climb.

This is something we hear a lot from a lot of different people; Rie Merritt (t | m) is someone who I often associate with this phrase. It’s important for women to support other women. And this fits our goal perfectly. We’re not only a part of this to get the support we need but help others reach their goals at the same time. We’re truly lifting others as we lift ourselves.

What is a mentoring circle and how does it work?

I first heard about the concept of mentoring circles from Kellyn Pot’vin-Gorman (t | b). I loved the idea because of its egalitarian qualities. Everyone is a mentee and everyone is a mentor. It’s a collective way to work together, hear different thoughts and opinions, and really work through various issues. For me, it’s also less pressure. I’m always worried that I may give someone bad advice or steer them wrong. With a circle, I have a partner who can also give another perspective, and between us, we can provide more support for that third person.

We will create circles of 3 or 4 people, trying to match interests and goals as much as possible. Then we’ll leave it up to each group to find times to get together and mentor each other. We’ll ask one person in the group to help organize when the meetings happen, be someone we can check in with along the way, etc. Tracy and I will be assisting each group as needed along the way. Each group will meet for a year to give them time to develop their rhythms and achieve goals.

Are you interested in being a part of this?

If you would like to participate, here’s the form to fill out:

To find out more about our organization, including our goals and code of conduct: This link will take you to our GitHub page.

We’re also on social media so make sure to follow us there:

Now that the important part is out of the way, I can share some of the background behind how we got here, if you’re interested in that sort of thing…

Continue reading “Announcing the launch of: WITspiration”
Posted in SQL Server, T-SQL, T-SQL Tuesday

T-SQL Tuesday #159 – Favorite SQL 2022 Feature

It’s another T-SQL Tuesday! This month, it’s hosted by the one and only Deepthi Goguri (b | t). This month, Deepthi has two questions for us:

  1. Blog about your new favorite feature in SQL Server 2022 or in Azure. 
  2. What are your new year resolutions and how do you keep the discipline doing it day after day?
Continue reading “T-SQL Tuesday #159 – Favorite SQL 2022 Feature”
Posted in SQL Server, T-SQL

OPENROWSET, Dynamic SQL & Error Handling

Now that we understand a little more how dynamic SQL works, let’s see how it helped me solve the problem.


OPENROWSET is functionality that allows you to access data sources outside your current server. This could be reading from an Excel file or calling another SQL Server instance. You’re able to treat that other data source as a record set, or derived table, and work with the rows returned as you would a local table. One reason you may want to do this is that you need to use a stored procedure to query data from other servers and bring the data together, effectively creating an ELT (Extract – Load – Transform) process without having to use SSIS or Azure Data Factory (ADF).
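As a minimal sketch of the idea – the server, database, and table names here are hypothetical, and the local server needs the ‘Ad Hoc Distributed Queries’ option enabled for this to run:

```sql
-- Query a remote SQL Server instance and treat the result as a derived
-- table that can be filtered and joined like any local table.
SELECT src.OrderID, src.OrderTotal
FROM OPENROWSET(
        'MSOLEDBSQL',
        'Server=RemoteServer;Database=Sales;Trusted_Connection=yes;',
        'SELECT OrderID, OrderTotal FROM dbo.Orders'
     ) AS src
WHERE src.OrderTotal > 100;
```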

Continue reading “OPENROWSET, Dynamic SQL & Error Handling”
Posted in SQL Server, T-SQL

Dynamic SQL: Sessions and Execution

I admit it – I do waaayyyy too much with dynamic SQL. But I just keep running into situations that require it. That being said, I ran into an interesting problem that had me puzzled. I found a bunch of different blog posts that pointed me in the right direction but required a little extra work to find the solution.

There are several concepts at play here, so I’ll try to break this out so we can put the pieces together. The first one is centered around dynamic SQL. There are two parts of this I want to make sure we understand first – how it fits into sessions and how it gets executed.
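To show what I mean about sessions and execution, here’s a small sketch (the temp table and variable names are made up): dynamic SQL runs in a child scope of the session, so a local temp table created by the outer batch is visible inside the dynamic statement, but local variables are not – they have to be passed in through sp_executesql’s parameter list.

```sql
-- The temp table created here is visible to the dynamic SQL below
-- because the dynamic SQL executes in a child scope of this session.
CREATE TABLE #Work (ID int);
INSERT INTO #Work (ID) VALUES (5), (15);

-- The variable is NOT visible inside the dynamic SQL, so it gets
-- passed in explicitly as a parameter.
DECLARE @MinID int = 10;
DECLARE @sql nvarchar(max) = N'SELECT ID FROM #Work WHERE ID >= @MinID;';

EXEC sp_executesql @sql, N'@MinID int', @MinID = @MinID;
```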

Continue reading “Dynamic SQL: Sessions and Execution”
Posted in T-SQL, T-SQL Tuesday

T-SQL Tuesday #158 – Using the Worse Practices

Happy T-SQL Tuesday! It’s the first one of 2023 so in the spirit of starting the new year on the right foot, it feels wrong to not join in.

This month, our host is Raul Gonzalez (b | t). His challenge for us: are there cases where the commonly agreed upon “worse practices” are actually useful?

I feel like this is where I should say something like, “Hi, my name is Deborah and I’ve used nolock in production.” I would also have to confess to doing things like using correlated subqueries, not using a foreign key, implementing complicated triggers, etc. I often talk about how the first real SQL script I wrote had cursors running over temp tables in SQL Server 6.5, which I’m fairly certain was one of the first things I read you were NOT supposed to do. And oh, hello there, denormalized table and dynamic SQL! I’m sure I’ve done more things than this too. These are just the ones I can remember doing, or at least am willing to admit in public.

I’m not sure if I can remember all the specifics but what I do remember is, there was a reason for it. More importantly, these were also exceptions to the rule.

Why did we use nolocks? Because after more than 2 weeks of deep dive investigation into the deadlock situations – and being on an older version of SQL Server (or at least having the requirement to support a legacy version) where snapshot isolation wasn’t an option – using a combination of nolock, index hints, and the application developer adding logic to immediately retry the query if a deadlock happened mitigated the problem. We tested it thoroughly as a solution before it was sent to production.
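For anyone who hasn’t seen the hint, this is the general shape of it (table and column names are hypothetical, not from that system). NOLOCK tells the reader not to take shared locks, which is what kept it out of the deadlocks – at the cost of reading uncommitted (dirty) data:

```sql
-- Reads without taking shared locks; may return uncommitted rows.
SELECT o.OrderID, o.Status
FROM dbo.Orders AS o WITH (NOLOCK)
WHERE o.Status = 'Pending';
```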

Why did I OK a correlated subquery? Because I knew the SELECT statement was only going to return one or two rows, and while it wasn’t great, it shouldn’t have been much of a problem. I made a note in the pull request that this should be looked at with the next change so we could keep track of the problem. There may have been other times where we used the STUFF + FOR XML string aggregation logic to take a bunch of related rows and return them in a single column value as part of a larger dataset.
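The STUFF + FOR XML pattern looks roughly like this – the tables and columns here are made up for illustration. The correlated subquery concatenates the child rows with a leading comma, and STUFF strips that first comma off:

```sql
-- Collapse each customer's order numbers into one comma-separated value.
SELECT c.CustomerID,
       STUFF((SELECT ',' + o.OrderNumber
              FROM dbo.Orders AS o
              WHERE o.CustomerID = c.CustomerID
              FOR XML PATH('')), 1, 1, '') AS OrderList
FROM dbo.Customers AS c;
```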

The cursor over the temp table? I blame a really, really bad database design that even I, a very new and junior DBA at the time, could tell was just a really, really bad design. And redesigning the database wasn’t an option. But I always come back to this code because it’s really how I learned to write SQL.

Sometimes the worse practices are there because people haven’t run into the issues yet. Or you question whether it’s a worse practice because you personally don’t like it, or because it’s an actual problem, or because it’s only a problem under certain circumstances. I like CTEs but I’m cautious with them because I’ve been burned by bad performance. Does that fall under “worse” practice or does that fall under “do extra testing first”? What about the MERGE syntax? I know people who use it and haven’t run into an issue, but I’ve read about all the bugs with it, so it really feels like something we don’t want to use. So is that a “worse” practice, or is it a “do extra testing first” or “better safe than sorry so let’s just avoid it” situation? But more importantly, is it a worse practice to use CTEs or MERGE because of the problems associated with them in different use cases, or do we just make sure we use them when appropriate, with the proper testing of course?

Here’s the thing – there is a reason these things we consider bad practices are available in the first place. There are situations where they can be useful and help solve problems. I think what makes them bad practices is implementing them as a rule instead of as an exception. “This solved/prevented this one problem so therefore it must solve/prevent all the problems.” Another issue is when those practices are embedded in either legacy code or legacy work culture. It’s harder to educate people on why you have to change to the best practices, especially if you run into that case where you really do need that “worst” practice. But when the worse practice is embedded in the code and culture to the point you can’t change it later, you start to learn the true cost of tech debt the hard (and very expensive) way.

The other catch to all of this is that while there was a reason these “worse” practices were implemented at the time, we have to remember to keep track of them so that when newer features and functionality come along that could be better solutions, we can use those to get rid of the things we shouldn’t be doing. For example, introducing snapshot isolation where nolocks were used – remembering to remove the nolock hints, since nolock and its dirty reads will take precedence over the optimistic row versioning.
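For the snapshot example, the database-level switch is a one-liner (the database name here is hypothetical, and flipping it needs either exclusive access to the database or a ROLLBACK IMMEDIATE clause):

```sql
-- Readers now use row versions instead of shared locks under the default
-- READ COMMITTED isolation level. Any remaining NOLOCK hints still
-- override this, which is why they have to be removed from the code.
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```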

I guess it comes down to this: you have to test your code and be willing to defend your decision to use a “worse” practice to someone else. It is very much a calculated risk, so you have to make sure you understand what you’re doing and why, and keep track of when you make these choices. More importantly, you have to make sure that you can pivot to fix the problem should you accidentally make the wrong choice. Good luck!

Thanks again to Raul for a great topic. Looking forward to reading all the “bad” things you have been doing in your databases.

Posted in Professional Development, Speaking

New Stars November

I started this post on my way home from PASS Data Community Summit 2022 and am finishing it on Thanksgiving here in the US. There’s something that feels appropriate about that. Maybe because being inspired after attending PASS Summit 2016 and thinking about what I’m grateful for are intertwined in this one topic.

New Stars of Data is the brainchild of Ben Weissman (t) and William Durkin (t) as a platform for new speakers to get a start in the community. They pair each speaker with a mentor to help them prepare. I was lucky enough to be picked as a mentor and then a moderator for this. It’s been amazing to see so many of these speakers become stars in the community so quickly. As a continuation, they have asked other speakers to contribute by writing a blog post about their experience getting started as speakers. (They also have a library of resources for speakers, so definitely check out the New Stars website!)

Continue reading “New Stars November”